Anatomy of a Good Prompt

Define a Role

Specify who the AI should emulate. 


"You are a study abroad advisor helping students choose the best program based on their major."

Context Setting

Offer relevant background to guide the response.


"A student interested in sustainability is considering their study abroad options. They want hands-on experience in environmental projects. Recommend three programs that align with these interests."

Task Specification

Clearly state what needs to be done.


"List three education abroad programs focused on comparative healthcare systems. Include their location, key features, and why they're a good fit for pre-nursing majors."

Output Format

Describe how you want the answer structured.


"Provide the answer in a table with columns for Program Name, Location, Key Features, and Why It's a Good Fit."

Quality Parameters

Include tone, style, detail requirements. 


"Use a professional yet student-friendly tone. Keep responses concise but informative."

Useful Links

The Complete ChatGPT Cheat Sheet 2025

Six Tactics to Level Up Your Prompting

Tactic 1: Include details in your prompt to get a more accurate response.

To get the best response, include key details and context in your prompt. If you're too vague, the language model has to guess at what you mean. 


Example: Less Clear --> More Specific

  • "How do I add numbers in Excel?" --> "How do I sum a row of dollar amounts in Excel and apply this automatically across a sheet, with totals appearing in a Total column on the right?"
  • "Who's president?" --> "Who was Mexico's president in 2021, and how often were elections held?"
  • "Summarize the meeting notes." --> "Summarize the meeting notes in one paragraph, then create a markdown list of speakers and their key points. Finally, list any follow up or action items mentioned."

Tactic 2: In your prompt, tell the model to adopt a persona.

You can and should specify a tone or writing style for the model to follow.


Example: "You are a highly capable, thoughtful, and precise assistant. Your goal is to deeply understand the user's intent, ask clarifying questions when needed, think step-by-step through complex problems, provide clear and accurate answers, and proactively anticipate helpful follow-up information. Always prioritize being truthful, nuanced, insightful, and efficient, tailoring your responses specifically to the user's needs and preferences."

Tactic 3: Use delimiters to clearly indicate distinct parts of the prompt.

Delimiters like triple quotation marks, tags, section titles, etc. can help demarcate sections of text to be treated differently. The more complex a prompt, the more important it is for you to break down the details. Don't make the model have to do more work than it needs to in order to understand what you are asking.


Example:
Summarize the text delimited by triple quotes with a haiku.  


"""insert text here"""


You will be provided with two articles, delimited with tags. Summarize the articles then indicate which of them makes a better argument and explain why.


<article> insert first article here </article>


<article> insert second article here </article>
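
If the articles come from a variable (a pasted document, a file, a form field), the delimiters are easy to add when you assemble the prompt string. A small sketch in Python; the article variables are placeholders.

# Wrap pasted or untrusted text in clear delimiters so the model can
# tell your instructions apart from the content it should work on.
article_1 = "insert first article here"   # placeholder
article_2 = "insert second article here"  # placeholder

prompt = (
    "You will be provided with two articles, delimited with tags. "
    "Summarize the articles, then indicate which of them makes a better argument and explain why.\n\n"
    f"<article>{article_1}</article>\n\n"
    f"<article>{article_2}</article>"
)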

Tactic 4: Specify the steps required to complete the prompt.

Some prompts are best specified as a sequence of steps. Writing out the steps can make it easier for the model to follow what you're trying to accomplish. 


Example:

Use the following step-by-step instructions to respond to user inputs.  


Step 1 - The user will provide you with text in triple quotes. Summarize this text in one sentence with a prefix that says "Summary: ".  


Step 2 - Translate the summary from Step 1 into Spanish, with a prefix that says "Translation: ". 
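
In a script, the step-by-step instructions work well as the system message, with the triple-quoted text passed in as the user message. A minimal sketch assuming the OpenAI Python SDK; the model name is illustrative.

from openai import OpenAI

client = OpenAI()

steps = (
    "Use the following step-by-step instructions to respond to user inputs.\n"
    "Step 1 - The user will provide you with text in triple quotes. "
    'Summarize this text in one sentence with a prefix that says "Summary: ".\n'
    "Step 2 - Translate the summary from Step 1 into Spanish, "
    'with a prefix that says "Translation: ".'
)

text = "insert text here"  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        {"role": "system", "content": steps},
        {"role": "user", "content": f'"""{text}"""'},
    ],
)
print(response.choices[0].message.content)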

Tactic 5: Provide examples.

Giving general instructions is usually more efficient than showing every possible example, but sometimes examples work better—especially when teaching a specific response style that’s hard to explain. This approach is called “few-shot” prompting.


Example: 

“Describe study abroad programs in a single sentence using this format: ‘[Program Name] offers [academic focus] in [location], providing [key benefit].’ Follow the examples below:”

  • Barcelona SAE offers immersive Spanish language and cultural studies in Barcelona, providing hands-on learning and local engagement.
  • CET Japan offers intensive Japanese language programs in Osaka, providing full language immersion and homestay experiences.
  • IES Abroad Berlin offers political science and history courses in Berlin, providing site visits to historical landmarks and institutions.

Now describe this program: SIT Morocco – Multiculturalism and Human Rights
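
In an API call, few-shot examples can also be passed as alternating user/assistant messages, which keeps the pattern separate from the new request. A sketch assuming the OpenAI Python SDK; the program descriptions are copied from the examples above and the model name is illustrative.

from openai import OpenAI

client = OpenAI()

instruction = ("Describe study abroad programs in a single sentence using this format: "
               "'[Program Name] offers [academic focus] in [location], providing [key benefit].'")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[
        {"role": "system", "content": instruction},
        # Prior turns demonstrate the pattern (few-shot examples).
        {"role": "user", "content": "Barcelona SAE"},
        {"role": "assistant", "content": "Barcelona SAE offers immersive Spanish language and cultural studies in Barcelona, providing hands-on learning and local engagement."},
        {"role": "user", "content": "CET Japan"},
        {"role": "assistant", "content": "CET Japan offers intensive Japanese language programs in Osaka, providing full language immersion and homestay experiences."},
        # The new request follows the same pattern.
        {"role": "user", "content": "SIT Morocco – Multiculturalism and Human Rights"},
    ],
)
print(response.choices[0].message.content)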

Tactic 6: Specify the desired length of the output.

Ask the model to produce outputs of a given target length. The target length can be specified in terms of words, sentences, paragraphs, bullet points, etc.


Example:

  • Summarize the text in about 50 words. 
  • Summarize the text in 2 paragraphs. 
  • Summarize the text in ten bullet points.
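
If you build summaries in a script, the target length is easy to turn into a parameter, as in this small sketch:

# One prompt template, different target lengths.
def summary_prompt(text: str, length_spec: str) -> str:
    return f'Summarize the text delimited by triple quotes in {length_spec}.\n\n"""{text}"""'

print(summary_prompt("insert text here", "about 50 words"))
print(summary_prompt("insert text here", "ten bullet points"))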

Six Power Moves for More Accurate Responses

1. Write clear instructions.

Language models can’t read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you’d like to see. The less the model has to guess at what you want, the more likely you’ll get what you want.

2. Provide reference text.

Language models can confidently hallucinate answers, especially when asked about complex or nuanced topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text can help language models answer with more accuracy.
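
One common pattern is to paste the reference text into the prompt behind clear delimiters and tell the model to answer only from that text. A small sketch; the reference document and question are placeholders.

# Ground the answer in supplied reference text to cut down on invented details.
reference = "insert program brochure or policy text here"  # placeholder
question = "What is the application deadline for the spring program?"

prompt = (
    "Answer the question using only the reference text delimited by triple quotes. "
    'If the answer is not in the text, reply "I don\'t know."\n\n'
    f'"""{reference}"""\n\n'
    f"Question: {question}"
)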

3. Break down complex tasks into simpler subtasks.

Just as it is good practice in education abroad to break a complex program application into smaller steps, the same is true of tasks submitted to a language model. Complex tasks should be broken down into smaller steps, where the results of one step become the starting point for the next.
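
In a script, this usually means chaining calls, with the output of one step becoming the input to the next. A minimal sketch assuming the OpenAI Python SDK; the model name and the meeting-notes example are illustrative.

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

notes = "insert meeting notes here"  # placeholder

# Step 1: extract the key decisions.
key_points = ask(f"Extract the key decisions from these notes as a bullet list:\n\n{notes}")
# Step 2: the result of step 1 becomes the starting point for the next task.
email = ask(f"Draft a short email to students summarizing these decisions:\n\n{key_points}")
print(email)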

4. Give the model time to think.

 If you’re asked to multiply 17 by 28, you might not know it instantly but can figure it out. Models work the same way—they make more mistakes when rushing. Asking for a “chain of thought” first helps them reason through problems more accurately.
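
A simple way to apply this is to ask for the reasoning before the final answer, as in this short sketch:

# Ask for the working first so the model does not commit to a guess
# in its opening words.
prompt = (
    "Work through the problem step by step, showing your reasoning. "
    "Then give the final answer on a new line prefixed with 'Answer: '.\n\n"
    "What is 17 multiplied by 28?"
)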

5. Use external tools.

 Use other tools to cover the model’s weaknesses. For example, a text retrieval system (RAG) provides relevant documents, and a code execution engine handles math and coding. If a tool can do a task better than the model, let it—this maximizes efficiency and accuracy.
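
A common shape for this is retrieval-augmented generation: an external search step finds the relevant documents, and the model only has to read them. A sketch with a hypothetical retrieve() helper standing in for whatever search index or vector store you use.

def retrieve(query: str, k: int = 3) -> list[str]:
    # Hypothetical helper: replace with your own search index or vector store.
    return ["insert relevant program document here", "insert another document here"][:k]

question = "Which partner programs offer engineering courses in Germany?"
docs = retrieve(question)

# Hand the retrieved documents to the model instead of relying on its memory.
prompt = (
    "Answer the question using only the documents below.\n\n"
    + "\n\n".join(f"<doc>{d}</doc>" for d in docs)
    + f"\n\nQuestion: {question}"
)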

6. Test changes systematically.

 It’s easier to improve performance when you can measure it. A prompt tweak might work for a few cases but hurt overall results. To be sure a change actually helps, you may need a full test set (aka an “eval”).
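
Even a small test set helps. Run the same prompt variant over a handful of representative inputs and check the outputs against what you expect, as in this minimal sketch assuming the OpenAI Python SDK; the inputs and the format check are illustrative.

from openai import OpenAI

client = OpenAI()

PROMPT_V1 = "Summarize the following text in one sentence, prefixed with 'Summary: ':\n\n{text}"

test_inputs = [
    "Students must submit transcripts and a statement of purpose by March 1.",
    "The Berlin program includes weekly site visits to government institutions.",
]

def run_eval(prompt_template: str) -> float:
    # Score the fraction of test cases whose output follows the required format.
    passed = 0
    for text in test_inputs:
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative
            messages=[{"role": "user", "content": prompt_template.format(text=text)}],
        )
        if resp.choices[0].message.content.startswith("Summary:"):
            passed += 1
    return passed / len(test_inputs)

print(run_eval(PROMPT_V1))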
