Prompt Engineering
In short
The skill of writing instructions to an LLM in a way that gets you the best possible answer. The quality of your output depends heavily on the quality of your input.
Like briefing a contractor. If you say “make me a website,” you’ll get something generic. But if you say “build me a 5-page portfolio site for a photography business, dark theme, with a contact form and gallery — here’s an example I like,” you’ll get something much closer to what you actually want. Prompt engineering is writing that better brief.
When you interact with an LLM, there’s a whole range of things you can do to get better results. The simple stuff is just being specific and providing context. The more advanced techniques include giving the model examples to follow (few-shot prompting), asking it to reason step-by-step (chain of thought), or assigning it a role (“you are a financial analyst”).
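To make those techniques concrete, here is a minimal sketch of assembling a prompt that combines a role, few-shot examples, and a chain-of-thought nudge. The function name, the example questions, and the exact wording are all illustrative assumptions — in practice you would send the resulting string to whichever model API you use.

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt with a role, worked examples, and the actual question.

    Everything here is a hypothetical sketch of the pattern, not a fixed template.
    """
    lines = [
        "You are a financial analyst.",  # role assignment
        task,
        "",
    ]
    for question, answer in examples:  # few-shot: examples the model can imitate
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
        lines.append("")
    lines.append(f"Q: {query}")
    lines.append("A: Let's think step by step.")  # chain-of-thought nudge
    return "\n".join(lines)

examples = [
    ("Revenue was $10M with $6M in costs. What is the margin?", "40%"),
    ("Revenue was $8M with $2M in costs. What is the margin?", "75%"),
]
prompt = build_few_shot_prompt(
    "Answer each question with a percentage.",
    examples,
    "Revenue was $5M with $4M in costs. What is the margin?",
)
print(prompt)
```

The value of the pattern is that the model sees the answer format it should imitate before it sees your real question, and the trailing “Let's think step by step” invites it to show its reasoning rather than jump to a number.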
Different models respond differently to the same prompt too, which is something people often overlook. What works perfectly with Claude might need tweaking for GPT or Gemini. It’s not a one-size-fits-all thing.
Prompt engineering used to be talked about as its own job role, but by now it’s becoming a standard skill for anyone who works with AI — not a separate position. It’s part of the broader AI Engineering toolkit, and closely tied to how you set up system prompts for applications.
Related
- AI Engineering - prompt engineering is a core skill
- System Prompt - a specific type of prompt for defining behavior
- LLMs - what you’re prompting
- Fine-Tuning - an alternative when prompting isn’t enough