Writing better prompts

What are some tactics that can increase the likelihood that an LLM's response will be accurate, aligned, and appropriate?

Provide reference text

Providing reference text to an LLM can improve its answers. The examples below show two types of content that can be included: a set of articles and a single document.

Examples:

Use the provided articles delimited by triple quotes to answer questions. If the answer cannot be found in the articles, write "I could not find an answer." 


You will be provided with a document delimited by triple quotes and a question. 

Your task is to answer the question using only the provided document and to cite the passage(s) of the document used to answer the question.

If the document does not contain the information needed to answer this question then simply write: "Insufficient information."

If an answer to the question is provided, it must be annotated with a citation.

Use the following format to cite relevant passages: ({"citation": …}).

"""<insert document here>"""

Question: <insert question here>
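When you call a model from code, the same template can be assembled programmatically. Below is a minimal sketch in Python; it assumes the OpenAI Chat Completions API and the gpt-4o-mini model name purely for illustration (the post itself doesn't prescribe a provider), and the ask_with_reference helper is hypothetical:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_with_reference(document: str, question: str) -> str:
    """Fill the reference-text template above and send it to the model."""
    prompt = (
        "You will be provided with a document delimited by triple quotes and a question. "
        "Your task is to answer the question using only the provided document and to cite "
        "the passage(s) of the document used to answer the question. "
        "If the document does not contain the information needed to answer this question "
        'then simply write: "Insufficient information." '
        "If an answer to the question is provided, it must be annotated with a citation. "
        'Use the following format to cite relevant passages: ({"citation": ...}).\n\n'
        f'"""{document}"""\n\n'
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```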

Break complex tasks into sub-tasks

Complex tasks tend to have higher error rates than simpler tasks. "Chain-of-thought" (CoT) prompting involves providing an example with intermediate reasoning steps or asking the LLM to think step by step.

Worse:

Generate 5-7 knowledge-level objectives. 

Better:

Generate 5-7 knowledge-level objectives. Ensure each objective starts with a verb. Nest enabling objectives under terminal objectives. Only use the words in the text. Refer to this text [...]


Worse:

Summarize the text, which is in the quotes " " 

Better:

Summarize this text in about 50-100 words. Use bullet points wherever possible. Write the summary so that a tenth grader can understand it. Refer to this text [...]
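In code, breaking a task into sub-tasks often means chaining smaller calls and feeding each result into the next prompt. A rough sketch, again assuming the OpenAI Chat Completions API; splitting the work into "summarize first, then write objectives" is just one possible decomposition:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def generate_objectives(text: str) -> str:
    """Two simpler sub-tasks instead of one complex request."""
    # Sub-task 1: condense the source text.
    summary = ask(
        "Summarize this text in about 50-100 words. Use bullet points wherever possible.\n\n"
        f"Text: {text}"
    )
    # Sub-task 2: write objectives from the summary, with explicit rules.
    return ask(
        "Generate 5-7 knowledge-level objectives. Ensure each objective starts with a verb. "
        "Only use the words in the text.\n\n"
        f"Text: {summary}"
    )
```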

LLM's role

Define the role you want the LLM to take on and state that role explicitly in the prompt.

Examples
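A hedged sketch of one common way to assign a role, using a chat-style system message; the exact wording of the role below is illustrative, not from the post:

```python
# Hypothetical example: the system message sets the role, the user message holds the task.
messages = [
    {
        "role": "system",
        "content": "You are an experienced instructional designer who writes "
                   "clear, measurable learning objectives.",
    },
    {
        "role": "user",
        "content": "Generate 5-7 knowledge-level objectives for the text below. [...]",
    },
]
# Pass `messages` to a chat completion call as in the earlier sketches.
```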

Target audience

Describe the target audience by providing relevant details, such as their age or how familiar they are with the topic.

Examples
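For instance, a small sketch that names the audience directly in the prompt string (the audience description itself is just an illustration):

```python
# Hypothetical example: spelling out who the response is for.
audience = "tenth graders encountering this topic for the first time"
prompt = (
    f"Summarize the text below for {audience}. "
    "Avoid jargon and keep sentences short.\n\n"
    "Text: <insert text here>"
)
```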


Desired format

Include details of how you want the response to be displayed or formatted, for example as bullet points, a numbered list, a table, or a short paragraph.
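When the output feeds another program, it can also help to request a machine-readable format and parse it. A minimal sketch, assuming the model is asked for JSON (the key names are illustrative):

```python
import json

prompt = (
    "Summarize the text below as JSON with two keys: "
    '"title" (a short headline) and "bullets" (a list of 3-5 bullet-point strings).\n\n'
    "Text: <insert text here>"
)

def parse_summary(response_text: str) -> dict:
    """Parse the reply, assuming the model followed the requested JSON format."""
    return json.loads(response_text)
```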

Examples (Few-shot prompting)

Including examples in a prompt is called few-shot prompting. Essentially, you're demonstrating to the model how you want it to respond to your query.

For simple tasks, one example (1-shot) will often suffice. For more difficult tasks, experiment with increasing the number of demonstrations (e.g., 3-shot, 5-shot, 10-shot).


Task: Sentiment analysis

Example prompt:

This is awesome! // Positive
This is bad! // Negative
Wow that movie was rad! // Positive
What a horrible show! //

Example response:

Negative
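With a chat-style API, the same demonstrations can also be passed as alternating user/assistant turns rather than one block of text. A hedged sketch:

```python
# Hypothetical example: few-shot demonstrations as prior conversation turns.
few_shot_messages = [
    {"role": "user", "content": "This is awesome!"},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": "This is bad!"},
    {"role": "assistant", "content": "Negative"},
    {"role": "user", "content": "Wow that movie was rad!"},
    {"role": "assistant", "content": "Positive"},
    # The final, unlabeled input the model should classify:
    {"role": "user", "content": "What a horrible show!"},
]
# Pass `few_shot_messages` to a chat completion call as in the earlier sketches.
```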

Constraints / Limits

Provide the model with rules to follow, such as:

Length 

Keywords 

Vocabulary

Audience's age 


Example

Summarize the text in the triple quotes in about 3-4 sentences. """<insert text here>"""
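Constraints stated in the prompt aren't guaranteed to be respected, so it can be worth checking the reply in code. A rough sketch of a length check (the sentence splitting here is deliberately crude and purely illustrative):

```python
def within_sentence_limit(response_text: str, max_sentences: int = 4) -> bool:
    """Rough check that a reply stays within the requested sentence count."""
    normalized = response_text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    return len(sentences) <= max_sentences
```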

Context / Background

Provide relevant context and background information related to your topic, project, and desired response.
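One simple pattern is to prepend a short background block to the task itself; the project details below are hypothetical placeholders:

```python
# Hypothetical example: background context stated before the task.
background = (
    "Background: I'm designing a one-hour introductory lesson for adult learners "
    "who are new to the subject."
)
task = "Generate 5-7 knowledge-level objectives for the lesson. Refer to this text [...]"
prompt = f"{background}\n\n{task}"
```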
