Prompt engineering is the practice of designing structured inputs that guide AI models toward accurate, relevant, and useful outputs.
In this post, you’ll explore how prompts are created, refined, and executed, beginning with the relationship between prompts and prompt templates. You’ll then examine the iterative prompt engineering process, the key components that make up an effective prompt, and a set of foundational principles that improve output quality across text, code, and multimodal use cases. Together, these concepts provide a practical framework for reliably shaping AI behavior and results.
What Is the Prompt Engineering Process? From Templates to Model Output
The figure below shows an example of a prompt and a prompt template. A prompt template is converted to a prompt when invoked. At that point, the variable (in this case, TOPIC) is replaced with an actual value.
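The template-to-prompt step can be sketched in a few lines of Python. This is a minimal illustration, not a specific library's API; the `render_template` helper and the example template text are assumptions for demonstration.

```python
def render_template(template: str, **values: str) -> str:
    """Replace each {PLACEHOLDER} in the template with a concrete value."""
    return template.format(**values)

# A template with one variable, TOPIC (an illustrative stand-in
# for the figure's example):
template = "Write a short introduction to {TOPIC} for beginners."

# Invoking the template yields a concrete prompt:
prompt = render_template(template, TOPIC="prompt engineering")
print(prompt)
# Write a short introduction to prompt engineering for beginners.
```

Libraries such as LangChain provide richer prompt-template abstractions, but the underlying idea is exactly this substitution step.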

Whenever you write a prompt, you follow a certain iterative process, as shown in the next figure. You start with an initial prompt and evaluate its result. Often, you’re satisfied with the result, but sometimes not. In the latter case, you can modify the prompt and run subsequent loops until you’re finally satisfied enough to use the prompt.

When the prompt is passed to an AI model (e.g., an LLM), a response is created. This figure shows an LLM at work, with its user input (the prompt) and the model output.

The prompt holds the context combined with additional information on style and the desired output format. The AI system processes the prompt and creates an output, typically text; code can be considered a subset of text output. Multimodal models can also produce images, videos, music, 3D models, and much more.
A prompt has many different components, which we’ll focus on next.
Core Components of an Effective Prompt: Instructions, Context, Format, and Roles
The figure below shows the different components a prompt can have.

Typically, you have a clear idea what you want the model to do. You can state this directive directly. But the directive can also be implicit: you provide helpful examples and leave a blank space where you want the model to supply an answer. For example, say you provide only pairs of words in different languages (e.g., English and Spanish). You are giving the model just a hint of what you want, yet it will deduce that you are looking for the Spanish equivalent when you enter the English word “woman.”
Often, you want the model to return its output in a certain format. If the output needs to be further processed, you might want a machine-readable format like comma-separated values (CSV) or JavaScript Object Notation (JSON). You can also control the style of the model response, for example, the tone of an answer, or you can ask for an answer in the style of Shakespeare.
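Why does a machine-readable format matter? Once the model complies, the output can be processed programmatically. The sketch below hard-codes a response in the requested shape purely for illustration; in practice it would come back from the model, and real code should guard against malformed JSON.

```python
import json

# Stating the format explicitly makes the output machine-readable:
prompt = (
    "List two dog breeds with their typical adult weight in kilograms. "
    "Respond ONLY with a JSON array of objects "
    "with the keys 'breed' and 'weight_kg'."
)

# A response in the requested shape (hard-coded here for illustration):
response = (
    '[{"breed": "Beagle", "weight_kg": 10},'
    ' {"breed": "Labrador Retriever", "weight_kg": 30}]'
)

breeds = json.loads(response)  # downstream processing is now trivial
heaviest = max(breeds, key=lambda b: b["weight_kg"])
print(heaviest["breed"])  # Labrador Retriever
```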
A helpful tactic is to instruct the model to act like a certain role or persona. This information helps the model to better understand how it should behave.
Last, but definitely not least, is the context at hand. You can also provide other relevant information to the model, and it can formulate its answer purely based on the information provided.
Basic Prompt Engineering Principles for Reliable AI Responses
Following some simple principles, you can easily create good prompts. We’ll walk through these principles, from providing clear instructions through generating multiple outputs, in the following sections.
Why Clear, Specific Instructions Are Critical in Prompt Engineering
Clear, specific instructions form the foundation of effective prompt engineering. Rather than vague requests like “write about dogs,” use precise directives such as “describe the key characteristics of golden retrievers as family pets, including their temperament, exercise needs, and grooming requirements.”
Specific instructions help eliminate ambiguity and ensure the AI model understands exactly what output is desired.
A poorly worded instruction would be “write about dogs.” What are the problems with this prompt? First, no focus on a specific topic is made. Also, no guidance on the tone, style, or length of the response is provided, and it is unclear who the target audience is. Furthermore, the user has described no clear purpose or desired outcome.
How can we do better? For example, you could define the instruction as shown in the prompt below. If you’d like, you can test the examples in this section and in subsequent sections in the playgrounds provided by most LLM providers, such as OpenAI and Groq.
Write a 300-word guide for first-time dog owners about choosing the right breed. Include:
- Top 3 factors to consider (lifestyle, living space, experience level)
- Common beginner-friendly breeds
- Red flags to watch out for
- Estimated costs of ownership
Target audience: Urban professionals aged 25-35
Tone: Informative but conversational
Format: Use headers and bullet points for easy scanning
As a short exercise, look at the following prompt:
Analyze this sales data and tell me what you find.
2021: $150K
2022: $180K
2023: $165K
Think about what the problems are. Then, think about how to improve the prompt. Read on for our analysis.
Our thoughts: What are the issues with this prompt? There is no clear metric provided. Further context on who is doing the analysis, and why, is missing. The output format is not defined. Also, it is unclear what the questioner is looking for.
The example shown here improves the prompt.
Analyze the following annual sales data from our retail store:
Sales Data:
2021: $150K
2022: $180K
2023: $165K
Please provide:
- Year-over-year growth rates (%)
- Identify the best and worst performing years
- Calculate the 3-year average
- Highlight any trends or patterns
Format: Present findings in a bullet-point list
Include: Percentage calculations and specific numbers
Context: We're a small retail business evaluating our growth trajectory
Task Decomposition: Breaking Complex Prompts into Manageable Steps
Breaking complex tasks into smaller, manageable subtasks improves the quality and accuracy of model responses. Instead of requesting a complete business plan in one prompt, divide it into sections like market analysis, financial projections, and marketing strategy.
This approach allows you to focus on each component individually and helps the AI model provide more detailed, focused responses for each subtask.
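Decomposition can be as simple as generating one focused prompt per subtask and sending them to the model one at a time. The sketch below illustrates the idea with the business-plan example; the `subtask_prompts` helper, the section names, and the word limit are illustrative assumptions.

```python
def subtask_prompts(product: str) -> list[str]:
    """Build one focused prompt per business-plan section
    instead of asking for the whole plan at once."""
    sections = [
        "market analysis",
        "financial projections",
        "marketing strategy",
    ]
    return [
        f"Write the {section} section of a business plan for {product}. "
        "Focus only on this section and keep it under 400 words."
        for section in sections
    ]

prompts = subtask_prompts("a mobile dog-grooming service")
print(len(prompts))  # 3
```

Each prompt can then be sent to the model separately, and the responses assembled into the full document.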
The key differences between effective and poor task decomposition are as follows:
- Level of detail: Effective decomposition breaks tasks down into specific, actionable items.
- Structure: Good decomposition shows clear relationships and dependencies.
- Completeness: Good decomposition covers all aspects, including support tasks.
- Measurable outcomes: Clear deliverables should be defined.
Now, consider the example shown here:
Task: Write a research paper on climate change
Steps: 1. Research the topic
2. Write the paper
3. Edit it
4. Submit
On the positive side, this prompt has some steps defined. But no specific research focus is provided, the steps are far too general, and both the methodology to be used and the desired structural elements are missing. The following prompt shows an improved version.
Task: Write a research paper on climate change impacts on coastal cities
Phase 1: Research Planning
1.1. Topic Refinement
- Define specific research question
- Identify key variables
- Establish scope and limitations
- Create research timeline
1.2. Literature Review Preparation
- Identify key databases
- List relevant keywords
- Create citation management system
- Develop screening criteria
Phase 2: Data Collection
2.1. Literature Review
- Search academic databases
- Review relevant papers
- Document key findings
- Create literature matrix
2.2. Data Gathering
- Collect climate data
- Gather city statistics
- Document methodology
- Organize data sets
...
Using Delimiters to Structure Prompt Context and Instructions
Delimiters (such as triple quotes, XML tags, or special characters) help structure both inputs and outputs clearly. They separate different parts of the prompt, like separating the context from the instructions, making it easier for the model to understand where one element ends and another begins.
For example, using XML tags like <context> and <question> helps organize information and ensures the model processes different components appropriately, as shown here.
Using the context and question below, provide an answer:
<context>
The Great Wall of China was built over many centuries by different dynasties.
The most famous sections were built during the Ming Dynasty (1368-1644).
The total length of all walls built over various dynasties is approximately
21,196 kilometers (13,171 miles).
</context>
<question>
When was the most famous part of the Great Wall built and what is its total
length?
</question>
According to the context, the most famous sections of the Great Wall of China were built during the Ming Dynasty, which was from 1368 to 1644. The total length of all walls built over various dynasties is approximately 21,196 kilometers (13,171 miles).
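Prompts like the one above can be assembled programmatically, which keeps the delimiters consistent across many requests. This is a minimal sketch; the `build_prompt` helper and the choice of tag names are assumptions, not a standard API.

```python
def build_prompt(context: str, question: str) -> str:
    """Wrap each part in XML-style tags so the model can tell
    where one element ends and the next begins."""
    return (
        "Using the context and question below, provide an answer:\n"
        f"<context>\n{context.strip()}\n</context>\n"
        f"<question>\n{question.strip()}\n</question>"
    )

prompt = build_prompt(
    context="The most famous sections of the Great Wall were built "
            "during the Ming Dynasty (1368-1644).",
    question="When was the most famous part of the Great Wall built?",
)
print(prompt)
```

Keeping user-supplied text strictly inside the delimiters also makes it harder for that text to be mistaken for instructions.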
How Requesting Explanations Encourages System 2 Reasoning in AI Models
Asking the model to explain its reasoning or thought process helps ensure more reliable and thoughtful responses. By including phrases like “explain your reasoning” or “walk through your approach step by step,” you encourage the model to be more thorough and methodical in its analysis, leading to better-quality outputs and helping you understand how the model arrived at its conclusions.
LLMs have been shown to struggle with mathematical equations. Results have improved a lot, but smaller models still have issues. You can test any model with the following prompt:
Solve: 3x² + 6x - 24 = 0
Try different models and find out if they come up with the correct answer:
x1 = 2 and x2 = −4.
In our tests, the 70-billion-parameter Llama 3 model failed at this task, while the much smaller Gemma 7B found the exact answers, so the issue is not simply one of model size. LLMs are considered to follow a system 1 thinking approach, as defined by Daniel Kahneman in his famous book Thinking, Fast and Slow. System 1 thinking is fast, automatic, and effortless; in this mode, a model jumps to conclusions without an elaborate thought process. In contrast, system 2 thinking is slow, deliberate, and requires intentional effort.
Your goal is to guide the model into system 2 thinking.
The following improved prompt guides the model through solving the equation.
Solve the quadratic equation 3x² + 6x - 24 = 0
- Explain each step of your reasoning and calculations
- Describe how you identify the values of the coefficients
- Simplify the equation if possible, and apply the quadratic formula or factoring to find the values of x
- Include each step in detail, and explain why it's necessary for solving the equation
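When you apply this pattern often, it helps to encapsulate the reasoning instructions in a small helper, as in the sketch below (`encourage_reasoning` is an illustrative name, not a library function). The expected roots of the example equation can also be checked independently with plain arithmetic, which is a good habit whenever a model does math for you.

```python
def encourage_reasoning(prompt: str) -> str:
    """Append step-by-step instructions to nudge the model toward
    slow, deliberate (system 2) answers."""
    return (
        prompt
        + "\n- Explain each step of your reasoning and calculations."
        + "\n- State the final answer on its own line."
    )

print(encourage_reasoning("Solve the quadratic equation 3x^2 + 6x - 24 = 0"))

# Verify the expected roots x = 2 and x = -4 with plain arithmetic:
for x in (2, -4):
    assert 3 * x**2 + 6 * x - 24 == 0
```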
Leveraging Personas to Control Expertise, Tone, and Perspective
Assigning specific roles or personas to the model can help shape its responses to match particular expertise or communication styles. For instance, asking the model to “respond as an experienced pediatrician” or “write as a financial analyst” helps to frame the response within the appropriate context and technical depth.
This approach is particularly effective when you need responses that reflect specific professional perspectives or expertise levels.
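In chat-style APIs, a persona is usually assigned through the system message. The sketch below assumes an OpenAI-style message list (a common convention across providers); the `with_persona` helper and its wording are illustrative assumptions.

```python
def with_persona(persona: str, question: str) -> list[dict]:
    """Build an OpenAI-style chat message list that assigns a
    persona through the system message."""
    return [
        {
            "role": "system",
            "content": f"You are {persona}. Answer in that role's voice "
                       "and at that role's level of technical depth.",
        },
        {"role": "user", "content": question},
    ]

messages = with_persona(
    "an experienced pediatrician",
    "Is a mild fever in a toddler a cause for concern?",
)
```

The resulting list can be passed directly as the `messages` argument of a chat completion call.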
You’re not limited to human-like personas. For example, you can tell the model to act like a Windows command prompt, as shown in the following prompt.
You shall act like a Windows CMD prompt. Only reply with CMD output. Don't write explanations.
C:\Users\username>dir
Volume in drive C is Windows
Volume serial number is 12345678
Directory of C:\Users\username
09/01/2022 02:00 PM <DIR> Documents
09/01/2022 02:00 PM <DIR> Pictures
09/01/2022 02:00 PM <DIR> Music
09/01/2022 02:00 PM <DIR> Videos
0 File(s) 0 bytes
4 Dir(s) 12,345,678,901 bytes free
Providing Examples (Few-Shot Learning) to Guide AI Output
Including examples in your prompts helps establish the desired format, style, and level of detail for the model’s response. Known as few-shot learning, this technique involves showing the model one or more examples of the type of output you want. For instance, when requesting product descriptions, providing a sample description helps the AI understand the expected tone, structure, and level of detail.
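A few-shot prompt can be assembled mechanically from example pairs, echoing the English/Spanish word-pair example from earlier. This is a minimal sketch; the `few_shot_prompt` helper and its labels are assumptions for illustration.

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Show the model input/output pairs, then leave a blank
    for it to fill in."""
    blocks = [f"English: {en}\nSpanish: {es}" for en, es in examples]
    blocks.append(f"English: {query}\nSpanish:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt([("house", "casa"), ("dog", "perro")], "woman")
print(prompt)
```

Given this prompt, the model is expected to complete the final blank with the Spanish equivalent, "mujer".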
Output Control Techniques: Managing Format, Length, and Style
Implementing specific parameters for output format, length, and style helps ensure the responses meet your needs. This might include specifying word counts, requesting particular formats (like bullet points or tables), or defining the technical level of the response. Clear output parameters help maintain consistency and ensure the responses are suitable for your intended use case.
A simple example is shown below: you instruct the model to respond in a specific way.
System prompt: respond in exactly three bullet points
User prompt: what is a dog?
Here are three bullet points about what a dog is:
- A dog is a domesticated animal that belongs to the family Canidae and is closely related to wolves and foxes.
- Dogs are typically carnivores, and they require a diet that includes protein-rich foods such as meat, fish, and eggs.
- Dogs are often kept as pets, and they are known for their ability to be trained for various tasks such as hunting, herding, and assisting people with disabilities, as well as providing companionship and emotional support.
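Because models don't always obey formatting instructions, it can pay to verify the response shape in code before using it downstream. A minimal sketch of such a check (the `follows_bullet_format` helper is an illustrative assumption):

```python
def follows_bullet_format(response: str, expected: int = 3) -> bool:
    """Check whether the response contains exactly the requested
    number of bullet points."""
    bullets = [
        line for line in response.splitlines()
        if line.lstrip().startswith(("-", "*"))
    ]
    return len(bullets) == expected

sample = "- point one\n- point two\n- point three"
print(follows_bullet_format(sample))  # True
```

If the check fails, you can re-prompt the model with a reminder of the required format.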
Generating Multiple Outputs to Compare, Refine, and Select the Best Result
Generating multiple variations of a response and selecting the best one can also lead to higher-quality results. This approach involves requesting several different versions or suggesting several approaches to the same task, then evaluating the responses based on specific criteria to choose the most appropriate one. This process is basically what you’re doing when you iteratively improve a prompt.
This method is particularly useful for creative tasks or when seeking optimal solutions to complex problems.
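The select-the-best step can be automated once you can score candidates. The sketch below uses a deliberately toy scoring criterion (closeness to a target word count); in practice the criterion might be a rubric, a validator, or even a second model acting as judge. The `best_of_n` helper is an illustrative assumption.

```python
def best_of_n(candidates: list[str], score) -> str:
    """Pick the highest-scoring candidate among several model outputs."""
    return max(candidates, key=score)

# Toy criterion: prefer the response closest to a 20-word target length.
candidates = [
    "Too short.",
    "This response has roughly the right amount of detail for our "
    "target length, striking a balance between brevity and depth here.",
]
best = best_of_n(candidates, score=lambda t: -abs(len(t.split()) - 20))
```

Generating the candidates themselves is typically done by sampling the same prompt several times at a nonzero temperature.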
Conclusion
Effective prompt engineering is less about clever wording and more about clarity, structure, and intent. By understanding how prompts are constructed, how tasks can be decomposed, and how techniques such as delimiters, personas, examples, and output controls influence model behavior, you can consistently produce higher-quality results. As AI systems continue to evolve, these principles remain essential tools for guiding models from quick, surface-level responses toward deliberate, well-reasoned outputs that align with real-world goals.
Editor’s note: This post has been adapted from a section of the book Generative AI with Python: The Developer’s Guide to Pretrained LLMs, Vector Databases, Retrieval-Augmented Generation, and Agentic Systems by Bert Gollnick. Bert is a senior data scientist who specializes in renewable energies. For many years, he has taught courses about data science and machine learning, and more recently, about generative AI and natural language processing. Bert studied aeronautics at the Technical University of Berlin and economics at the University of Hagen. His main areas of interest are machine learning and data science.
This post was originally published 12/2025.