Prompt tuning vs In-context learning

Tiya Vaj
2 min read · May 11, 2024

Prompt tuning and in-context learning are both techniques for getting better results from large language models (LLMs). Both give the model additional information to steer it toward the task you want, but they do so in different ways:

  • Prompt tuning prepends a small set of trainable prompt vectors (a “soft prompt”) to the model’s input embeddings. These vectors are not words you write by hand; they are learned by gradient descent on a small dataset while the LLM’s own weights stay frozen. The learned prompt sets the context for the model and steers it toward the task, at a tiny fraction of the cost of fine-tuning the whole model.
  • In-context learning gives the LLM worked examples alongside your instructions, directly in the prompt. For instance, you might show a few labeled reviews before asking it to classify a new one. The model infers the task and the expected output format from the examples, with no weight updates at all.
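To make the first bullet concrete, here is a minimal PyTorch sketch of the soft-prompt idea: a trainable matrix of prompt embeddings is prepended to the (frozen) model’s input embeddings. The class name, prompt length, and embedding size below are illustrative choices, not tied to any particular model.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt vectors prepended to input embeddings.

    During prompt tuning, only `self.prompt` receives gradients;
    the LLM's own parameters stay frozen.
    """
    def __init__(self, prompt_len: int, embed_dim: int):
        super().__init__()
        # Small random init; this is the only trainable tensor.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Output: (batch, prompt_len + seq_len, embed_dim)
        return torch.cat([prompt, input_embeds], dim=1)

sp = SoftPrompt(prompt_len=20, embed_dim=64)
x = torch.randn(2, 10, 64)   # a batch of 2 sequences of 10 token embeddings
out = sp(x)
print(out.shape)             # torch.Size([2, 30, 64])
```

In a real setup, `out` would be fed into a frozen transformer, and an optimizer would update only `sp.prompt` on the task’s training data.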
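The second bullet needs no training at all; it is just prompt construction. A sketch of a few-shot sentiment prompt (the example reviews and labels are invented for illustration):

```python
# Build a few-shot prompt: worked examples precede the real query so the
# model can infer the task and output format without any weight updates.
examples = [
    ("The movie was fantastic!", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
query = "The plot kept me hooked until the end."

prompt = "Classify the sentiment of each review.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
# End with an unanswered instance; the model completes the label.
prompt += f"Review: {query}\nSentiment:"

print(prompt)
```

The whole string is sent to the model as-is; the trailing `Sentiment:` cues it to emit just the label.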

Here’s a table summarizing the key differences:

| Feature | Prompt Tuning | In-Context Learning |
| --- | --- | --- |
| How it works | Adds a small trainable prompt to the beginning of the input | Provides examples along with the instructions |
| Benefits | More efficient than fine-tuning the entire LLM | Can be good for complex or specific tasks |
| Drawbacks | Requires careful design of the prompt | Can be less consistent than prompt tuning |

In many cases, these techniques can be used together. For instance, you could use prompt tuning to give the LLM a general idea of the task, and then use in-context learning to provide more specific examples.
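Combining the two is straightforward in principle: the few-shot examples are tokenized and embedded as usual, and the trained soft prompt is prepended to those embeddings. A toy sketch, where a small `nn.Embedding` stands in for a real tokenizer and frozen LLM (all sizes are hypothetical):

```python
import torch
import torch.nn as nn

# Toy stand-ins for a real model: frozen embedding table + trainable soft prompt.
vocab_size, embed_dim, prompt_len = 1000, 64, 20
embedding = nn.Embedding(vocab_size, embed_dim)
embedding.weight.requires_grad_(False)        # "model" weights stay frozen
soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

# Pretend these token ids encode a few-shot prompt (examples + query).
few_shot_ids = torch.randint(0, vocab_size, (1, 50))

# Soft prompt first, then the embedded few-shot text.
inputs = torch.cat([soft_prompt.unsqueeze(0), embedding(few_shot_ids)], dim=1)
print(inputs.shape)   # torch.Size([1, 70, 64])
```

Here the general task knowledge lives in `soft_prompt`, while the in-context examples supply the specifics of the current request.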
