Definition
Prompt engineering is the discipline of designing, testing, and optimizing inputs (prompts) provided to an LLM to obtain outputs that meet specific requirements for quality, format, style, and accuracy.
It is systematic design rather than a bag of “tricks”: structuring requests so the model can apply its capabilities to produce the desired output.
Main Techniques
Zero-shot: direct request without examples. Works for simple tasks on capable models.
Few-shot: provide examples of desired input-output pairs in the prompt. Significantly improves consistency of format and quality.
Chain-of-Thought: request step-by-step reasoning. Improves performance on reasoning, math, and logic tasks.
System prompts: persistent instructions that define model behavior, personality, and constraints.
Role prompting: assign a specific role (“You are a legal expert…”) to influence style and simulated expertise.
Structured output: specify exact output format (JSON, XML, Markdown) with schema.
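The techniques above compose: a single prompt often combines a system instruction, few-shot examples, and a structured-output requirement. A minimal sketch in Python, assuming a hypothetical sentiment-classification task (the examples and JSON schema are illustrative, not a standard):

```python
# Assemble a few-shot prompt that requests structured (JSON) output.
import json

SYSTEM = "You are a sentiment classifier. Reply with JSON only."

# Few-shot examples: input-output pairs showing the exact desired format.
FEW_SHOT = [
    {"input": "I love this product!", "output": {"sentiment": "positive"}},
    {"input": "Terrible experience.", "output": {"sentiment": "negative"}},
]

def build_prompt(text: str) -> str:
    """Combine the system instruction, few-shot examples, and the new input."""
    parts = [SYSTEM, ""]
    for ex in FEW_SHOT:
        parts.append(f"Input: {ex['input']}")
        parts.append(f"Output: {json.dumps(ex['output'])}")
    # The trailing "Output:" cues the model to continue in the same format.
    parts.append(f"Input: {text}")
    parts.append("Output:")
    return "\n".join(parts)

print(build_prompt("The delivery was fast."))
```

The resulting string is what gets sent to the model; the demonstrated format makes a parseable JSON reply far more likely than a bare request would.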
Design Principles
Clarity: explicit and unambiguous instructions. The model doesn’t read minds.
Specificity: define format, length, style, constraints. “Write a response” vs “Write a 2-3 sentence response in formal tone”.
Sufficient context: include all information the task requires. The model has no access to knowledge you leave implicit.
Examples: when format is critical, show it with concrete examples.
Decomposition: break complex tasks into manageable steps.
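The decomposition principle can be sketched as a simple pipeline: each sub-step gets its own specific, constrained prompt, and each step's output feeds the next. The templates and task below are hypothetical:

```python
# Illustrative decomposition of a complex task ("analyze and summarize a
# document") into explicit sub-steps, each with a specific, constrained prompt.
STEPS = [
    "Extract the 3 main claims from the text below as a bulleted list.\n\n{text}",
    "Summarize the claims below in 2-3 sentences, formal tone.\n\n{text}",
    "Translate the summary below into French, preserving the formal tone.\n\n{text}",
]

def run_pipeline(document: str, call_model) -> str:
    """Run each step in order; call_model is any LLM callable (str -> str)."""
    result = document
    for template in STEPS:
        result = call_model(template.format(text=result))
    return result
```

Each intermediate prompt is small and checkable in isolation, which is exactly what makes decomposed pipelines easier to debug than one monolithic request.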
Practical Considerations
Iteration: prompting is empirical. Test on representative cases, analyze errors, refine iteratively.
Versioning: prompts are code. Version them, document them, test them systematically.
Model-specific: prompts optimized for GPT-4 may underperform on Claude or Llama. Test on each target model.
Costs: longer prompts consume tokens. Balance completeness and costs, especially for high-volume applications.
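Treating prompts as code can be as simple as storing them as versioned, diffable data rather than inline strings. A minimal sketch (the structure is an assumption, not a standard):

```python
# Versioned prompt templates: changes show up in diffs and can be
# regression-tested like any other code artifact.
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str
    template: str

    def render(self, **kwargs) -> str:
        return self.template.format(**kwargs)

SUMMARIZE = PromptTemplate(
    name="summarize",
    version="2.1.0",
    template="Summarize the following text in {n} sentences, formal tone:\n\n{text}",
)

prompt = SUMMARIZE.render(n=3, text="Example document body.")
```

Bumping `version` on every edit lets you correlate output regressions with specific prompt changes, the same way you would with application code.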
Common Misconceptions
“Prompt engineering is a passing fad”
No. As long as models receive textual input, input quality will influence output quality. Techniques evolve but the discipline remains.
“Just copy prompts from the internet”
Prompts are context-dependent. A viral Twitter prompt rarely works out-of-the-box for a specific use case. Adaptation and testing are required.
“Better models don’t need prompting”
More capable models still benefit from well-structured prompts. The difference is they tolerate ambiguous prompts better, not that they don’t require them.
Related Terms
- LLM: models to which prompting techniques apply
- Chain-of-Thought: specific prompting technique for reasoning
- Fine-tuning: alternative to prompting for customization
Sources
- Anthropic Prompt Engineering Guide
- OpenAI Prompt Engineering Guide
- Wei, J. et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS