Few-shot learning is the technique of teaching an LLM a task by including a handful of worked examples (typically 2–5) in the prompt; the model infers the format and behavior from those examples and applies the same pattern to new input. The GPT-3 paper (Brown et al., 2020) showed that few-shot capability scales strongly with model size, which became a core selling point of modern large language models. It typically produces more reliable results than Zero-shot prompting, particularly for low-resource languages or niche formats, and it is the first lever to try before reaching for Fine-tuning.
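A minimal sketch of how such a prompt is assembled (the helper name, the instruction text, and the English–Turkish example pairs are illustrative, not from any particular API):

```python
def build_few_shot_prompt(examples, query,
                          instruction="Translate English to Turkish."):
    """Assemble an instruction, a handful of worked examples, and the
    new query into one prompt; the model infers the pattern and
    completes the final 'Output:' line."""
    lines = [instruction, ""]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
        lines.append("")
    # The new case is left unanswered for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    ("cat", "kedi"),
    ("book", "kitap"),
    ("water", "su"),
]
prompt = build_few_shot_prompt(examples, "bread")
print(prompt)
```

The resulting string would be sent as-is to a completion-style model; swapping the example pairs or the instruction line adapts the same scaffold to any task.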
MEVZU N°124 · ISTANBUL · YEAR I — VOL. III
Glossary · Beginner · 2020
Few-shot Learning
Teaching the model a task by showing a handful of examples directly in the prompt.
- EN — Few-shot Learning
- TR — Az-Atış Öğrenme