Context Engineering Explained in 5 Steps

By 10xdev team, August 03, 2025

Context engineering is a powerful technique for getting better results from Large Language Models (LLMs). It emphasizes preparing a rich, detailed context before you ask the model a question, moving beyond simple prompt engineering.

Think of a generic prompt as toast with no butter—plain and uninspired. It yields a generic answer. Context engineering, on the other hand, provides all the necessary ingredients for the model to give you a specific, tailored response that perfectly meets your needs.

Step 1: Define Your Desired Outcome

The first step in context engineering is to clarify exactly what you want. Instead of a vague request, provide detailed instructions.

  • Vague: "Write a recipe."
  • Specific: "Write a vegan jackfruit recipe that sounds like a friend texting you at 11:00 a.m. on a Sunday."

The first prompt yields a generic answer. The second provides richer context, resulting in a much better, more personalized output. Be specific about the tone, purpose, and intended audience.

Step 2: Provide Key Information

To engineer your context effectively, you need to give the model the right information. Consider these questions:

  • Who is the model pretending to be? Is it a chef, a teacher, or a friend?
  • Who is the model talking to? Is the audience beginners, experts, or children? The language will change accordingly.
  • Are there any rules to consider? Should the response be under 80 words? Should it avoid jargon or use a specific language?
  • Are there examples to follow? Providing a sample tone, style, or format is crucial.

Remember, the model doesn't know anything about your task until you tell it. If you don't specify that the recipe should take under 30 minutes, don't expect one that does. Don't assume the model thinks like you do.
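
To make those questions concrete, here is a minimal sketch in plain Python (no libraries). The field names and the recipe details are purely illustrative, not part of any particular API:

    # Answers to the four questions, captured explicitly so nothing is left implicit.
    context_fields = {
        "persona": "You are a casual vegan food blogger with a playful tone.",
        "audience": "Busy home cooks who are new to vegan cooking.",
        "rules": "Main ingredient is jackfruit. Total time under 30 minutes. No jargon.",
        "examples": 'Sample tone: "Drain the jackfruit. No, seriously, that brine is nasty."',
    }

    # The model only knows what ends up in this string; an empty field is simply missing context.
    missing = [name for name, value in context_fields.items() if not value.strip()]
    if missing:
        raise ValueError(f"Context is incomplete, fill in: {missing}")

    context_block = "\n".join(f"{name.capitalize()}: {value}" for name, value in context_fields.items())
    print(context_block)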

Step 3: Structure Your Input Effectively

The order of your information matters. An effective structure places the most critical information towards the end of the prompt, as models like GPT pay more attention to what's closer to the final instruction.

A highly effective structure is:

  1. Role/Voice: Define the persona the model should adopt.
  2. Facts/Rules: State what must be included or followed.
  3. Example: Provide a clear example of the desired output.
  4. Prompt: The final, direct instruction.

Structuring your context is crucial for guiding the model toward the desired result.
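
As a sketch of that ordering in plain Python (the function and its contents are illustrative assumptions, not a prescribed API), the pieces can be assembled so the direct instruction always lands last:

    def build_prompt(role: str, facts: list[str], example: str, instruction: str) -> str:
        """Assemble context in the order: role, facts/rules, example, final prompt.

        The direct instruction goes last, closest to where the model starts generating.
        """
        facts_block = "\n".join(f"- {fact}" for fact in facts)
        return (
            f"Role: {role}\n\n"
            f"Rules:\n{facts_block}\n\n"
            f"Example of the desired output:\n{example}\n\n"
            f"{instruction}"
        )

    prompt = build_prompt(
        role="You are a casual vegan food blogger with a playful tone.",
        facts=["Main ingredient is jackfruit.", "Total time is under 30 minutes."],
        example='"Drain the jackfruit. No, seriously, that brine is nasty."',
        instruction="Write a full recipe in that tone and format.",
    )
    print(prompt)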

Step 4: Be Concise and Eliminate Fluff

Every model has a token limit, so the information you provide must be crisp and to the point.

  • Avoid explaining things more than once.
  • Remove general greetings or conversational filler.
  • Delete anything that doesn't help the model perform its job.

Read through your context and remove anything fluffy. This is the essence of context engineering.
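
One way to keep yourself honest about the limit is to count tokens before sending. The sketch below assumes the tiktoken package (an OpenAI tokenizer library); other model families ship their own tokenizers, and the example strings are just illustrations:

    import tiktoken  # pip install tiktoken

    encoding = tiktoken.get_encoding("cl100k_base")  # tokenizer used by many recent OpenAI models

    fluffy = (
        "Hello there! I hope you are having a wonderful day. I was wondering, if it is "
        "not too much trouble, whether you might possibly write me a vegan recipe..."
    )
    trimmed = "Write a vegan jackfruit recipe. Under 30 minutes. Playful tone."

    for label, text in [("fluffy", fluffy), ("trimmed", trimmed)]:
        print(f"{label}: {len(encoding.encode(text))} tokens")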

Step 5: Be Clear, Literal, and Objective

Think of the LLM as a smart but new intern. If you are vague, it plays it safe. If you are clear, it gets the job done right.

On Tone:

  • Vague: "Make it fun."
  • Clear: "Write in a playful tone with pop culture references and occasional puns, like you're explaining it to a friend over coffee."

On Length:

  • Vague: "Keep it short."
  • Clear: "Limit the response to three paragraphs, with no more than three sentences each."

Your definition of "short" can be very different from the LLM's. Be objective rather than subjective.

Practical Examples of Context Engineering

Let's look at a couple of examples that show the difference between a basic prompt and a context-engineered one.

Example 1: Getting a Jackfruit Recipe

  • Goal: A fun, quick vegan recipe using jackfruit.
  • Basic Prompt: "Give me a jackfruit recipe."
    • Result: A safe, boring, and generic answer with no personality.
  • Context-Engineered Prompt:
    > Role: You are a casual vegan food blogger with a playful tone.
    > Rules: Main ingredient is jackfruit. Time is under 30 minutes.
    > Example Tone: "Drain the jackfruit. No, seriously, that brine is nasty."
    > Prompt: Write a full recipe in that tone and format.
    • Result: A much more engaging and specific recipe that matches the user's intent.

Example 2: AI Automation Blog Post

  • Goal: Write a blog post about the risks of AI and automation.
  • Basic Prompt: "Write a blog on AI and automation risk."
    • Result: A generic, wordy article with no real opinion.
  • Context-Engineered Prompt:
    > Role: You are a mid-career software engineer writing for your peers—experienced developers who are curious but cautious about LLMs.
    > Example Tone: "You've seen the hype, but let's talk about what happens when the robots actually mess up the codebase."
    > Prompt: Write a 700-word blog post in this style.
    • Result: A targeted, insightful article that resonates with the intended audience.

How to Develop Your Context

So, how do you come up with the right context?

  1. Ask Yourself: "If I were writing this myself, what information would I need?" If your manager asked you to write a blog post, you would ask clarifying questions about length, tone, and audience. The LLM needs the same information.
  2. Imagine the End Result: Work backward from your ideal outcome. If you want to read an article that is friendly and not too polished, define that casual tone and beginner-friendly audience in your context.
  3. Provide Real Examples: Adding examples to your prompts can elevate the output to another level. For the AI blog, framing it as a chat between experienced developers and adding a joke requirement forces a less formal writing style. Feed the model the same signals you would need yourself.

A Simple Framework for Better Prompts

Here is a basic framework you can use to improve your results, followed by a short end-to-end sketch in code.

Basic Approach:

  • Add an example to your prompt.
  • Rewrite it to change the tone.
  • Give a description of the audience.
  • Remove extra words to save space.

Improved Approach:

  • Write as if you are briefing a specific person. Don't just say, "make it fun"; show what fun looks like with an example.
  • Always assume the model needs clear direction. Never assume the model already knows what you want. Whatever you tell it constitutes its entire knowledge for the task at hand.
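
Putting the framework together, here is a minimal end-to-end sketch. It assumes the OpenAI Python SDK and a chat-style model; the model name, the helper function, and the prompt text are illustrative assumptions, and any chat-completion API would work the same way:

    from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

    client = OpenAI()

    def ask(prompt: str) -> str:
        """Send a single prompt to a chat model and return the text of its reply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name -- swap in whatever you use
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    basic = "Give me a jackfruit recipe."

    engineered = (
        "Role: You are a casual vegan food blogger with a playful tone.\n"
        "Audience: Busy beginners who have never cooked with jackfruit.\n"
        "Rules: Main ingredient is jackfruit. Total time under 30 minutes. Under 300 words.\n"
        'Example tone: "Drain the jackfruit. No, seriously, that brine is nasty."\n\n'
        "Write a full recipe in that tone and format."
    )

    print(ask(basic))       # tends to come back safe and generic
    print(ask(engineered))  # comes back specific and on-tone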
