Specify who the AI should emulate.
"You are a study abroad advisor helping students choose the best program based on their major."
Offer relevant background to guide the response.
"A student interested in sustainability is considering their study abroad options. They want hands-on experience in environmental projects. Recommend three programs that align with these interests."
Clearly state what needs to be done.
"List three education abroad programs focused on comparative healthcare systems. Include their location, key features, and why they're a good fit for pre-nursing majors."
Describe how you want the answer structured.
"Provide the answer in a table with columns for Program Name, Location, Key Features, and Why It's a Good Fit."
Include tone, style, detail requirements.
"Use a professional yet student-friendly tone. Keep responses concise but informative."
To get the best response, include key details and context in your prompt. If you're too vague, the language model has to guess at what you mean.
Example: Less Clear --> More Specific
You can and should specify a tone or writing style for the model to follow.
Example: "You are a highly capable, thoughtful, and precise assistant. Your goal is to deeply understand the user's intent, ask clarifying questions when needed, think step-by-step through complex problems, provide clear and accurate answers, and proactively anticipate helpful follow-up information. Always prioritize being truthful, nuanced, insightful, and efficient, tailoring your responses specifically to the user's needs and preferences."
Delimiters like triple quotation marks, tags, section titles, etc. can help demarcate sections of text to be treated differently. The more complex a prompt, the more important it is for you to break down the details. Don't make the model have to do more work than it needs to in order to understand what you are asking.
Example:
Summarize the text delimited by triple quotes with a haiku.
"""insert text here"""
You will be provided with two articles, delimited with tags. Summarize the articles then indicate which of them makes a better argument and explain why.
<article> insert first article here </article>
<article> insert second article here </article>
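The delimiter pattern above can be assembled programmatically. This is a minimal sketch that only builds the prompt string (it assumes the tag-delimited two-article example; no particular API is implied):

```python
# Build a prompt that uses XML-style tags to separate two articles.
# The <article> tag names mirror the example above; the instruction
# is plain text placed before the delimited content.
def build_two_article_prompt(article_1: str, article_2: str) -> str:
    instruction = (
        "You will be provided with two articles, delimited with tags. "
        "Summarize the articles then indicate which of them makes a "
        "better argument and explain why."
    )
    return (
        f"{instruction}\n\n"
        f"<article>{article_1}</article>\n"
        f"<article>{article_2}</article>"
    )

prompt = build_two_article_prompt("First article text.", "Second article text.")
```

Because the articles sit inside unambiguous tags, the model never has to guess where the instruction ends and the content begins.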
Some prompts are best specified as a sequence of steps. Writing out the steps can make it easier for the model to follow what you're trying to accomplish.
Example:
Use the following step-by-step instructions to respond to user inputs.
Step 1 - The user will provide you with text in triple quotes. Summarize this text in one sentence with a prefix that says "Summary: ".
Step 2 - Translate the summary from Step 1 into Spanish, with a prefix that says "Translation: ".
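The numbered steps above can be kept in a plain list and joined into a single instruction block. This is an illustrative sketch, not tied to any particular chat API:

```python
# Store each step as a plain string, then number them automatically
# when assembling the final instruction text.
steps = [
    'The user will provide you with text in triple quotes. Summarize this '
    'text in one sentence with a prefix that says "Summary: ".',
    'Translate the summary from Step 1 into Spanish, with a prefix that '
    'says "Translation: ".',
]

system_prompt = "Use the following step-by-step instructions to respond to user inputs.\n"
system_prompt += "\n".join(f"Step {i} - {step}" for i, step in enumerate(steps, start=1))
```

Keeping the steps in a list makes it easy to reorder or extend them without renumbering by hand.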
Giving general instructions is usually more efficient than showing every possible example, but sometimes examples work better—especially when teaching a specific response style that’s hard to explain. This approach is called “few-shot” prompting.
Example:
“Describe study abroad programs in a single sentence using this format: ‘[Program Name] offers [academic focus] in [location], providing [key benefit].’ Follow the examples below:”
Now describe this program: SIT Morocco – Multiculturalism and Human Rights
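A few-shot prompt like the one above can be assembled from (program, description) example pairs. The worked example below is invented purely for illustration:

```python
# Assemble a few-shot prompt: task instruction, worked examples,
# then the new input. The example program here is hypothetical.
examples = [
    ("CIEE Sydney - Sustainability and the Environment",
     "CIEE Sydney offers environmental studies in Sydney, Australia, "
     "providing hands-on fieldwork."),
]
task = ("Describe study abroad programs in a single sentence using this "
        "format: '[Program Name] offers [academic focus] in [location], "
        "providing [key benefit].' Follow the examples below:")

prompt_parts = [task]
for program, description in examples:
    prompt_parts.append(f"Program: {program}\nDescription: {description}")
prompt_parts.append("Now describe this program: "
                    "SIT Morocco - Multiculturalism and Human Rights")
few_shot_prompt = "\n\n".join(prompt_parts)
```

Because each example demonstrates the target sentence format directly, the model can imitate the style without a lengthy written explanation of it.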
Ask the model to produce outputs of a given target length. The target can be specified as a count of words, sentences, paragraphs, bullet points, etc.
Example: "Summarize the text delimited by triple quotes in about 50 words."
Language models can’t read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you’d like to see. The less the model has to guess at what you want, the more likely you’ll get what you want.
Language models can confidently hallucinate answers, especially when asked about complex or nuanced topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text can help language models answer with more accuracy.
Just as it is good practice in education abroad to break a complex program application into smaller steps, the same is true of tasks submitted to a language model. Complex tasks should be broken down into smaller steps, where the results of one step become the starting point for the next.
If you’re asked to multiply 17 by 28, you might not know it instantly but can figure it out. Models work the same way—they make more mistakes when rushing. Asking for a “chain of thought” first helps them reason through problems more accurately.
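One common way to elicit that reasoning is to append an explicit instruction before asking for the answer. A minimal sketch, with hypothetical wording:

```python
def with_chain_of_thought(question: str) -> str:
    # Ask the model to show its reasoning before committing to an answer.
    return (f"{question}\n\n"
            "First think step by step and show your reasoning, "
            "then state the final answer on its own line.")

cot_prompt = with_chain_of_thought("What is 17 multiplied by 28?")
```

The same wrapper can be reused for any question where you want the working shown before the conclusion.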
Use other tools to cover the model’s weaknesses. For example, a text retrieval system (RAG) provides relevant documents, and a code execution engine handles math and coding. If a tool can do a task better than the model, let it—this maximizes efficiency and accuracy.
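A toy sketch of the retrieval step: score documents by keyword overlap with the question and prepend the best match as reference text. Real retrieval systems use embeddings rather than word overlap, but the resulting prompt has the same shape; the documents below are invented for illustration:

```python
def retrieve(question: str, documents: list) -> str:
    # Score each document by how many question words it shares, then
    # return the highest-scoring one as reference text.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

docs = [
    "Program deadlines for spring are in October.",
    "Scholarship applications require two essays.",
]
question = "When are spring program deadlines?"
reference = retrieve(question, docs)
rag_prompt = f'Use the reference text below to answer.\n\n"""{reference}"""\n\n{question}'
```

The reference text is wrapped in triple quotes, reusing the delimiter practice described earlier.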
It’s easier to improve performance when you can measure it. A prompt tweak might work for a few cases but hurt overall results. To be sure a change actually helps, you may need a full test set (aka an “eval”).
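A minimal eval-harness sketch: run every test case through the model (stubbed here) and measure the pass rate, so a prompt change is judged against the whole set rather than a single example. The stub model and test case are hypothetical:

```python
def run_eval(model_fn, cases):
    # cases: list of (prompt, expected substring) pairs.
    # A case passes when the expected text appears in the model output.
    passed = sum(1 for prompt, expected in cases
                 if expected.lower() in model_fn(prompt).lower())
    return passed / len(cases)

# Stub model for illustration; replace with a real API call.
def fake_model(prompt: str) -> str:
    return "The capital of France is Paris."

score = run_eval(fake_model, [("What is the capital of France?", "Paris")])
```

With a score per prompt version, you can tell whether a tweak helped overall or only on the handful of cases you happened to try.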