Prompting techniques
I recently listened to an excellent prompt engineering discussion by Anthropic engineers. However, since the video was over an hour long, I wanted to distill the key insights and techniques into a brief guide. To do this, I used a YouTube downloader, Whisper Turbo for transcription, and GPT-4o to create an outline and examples based on the techniques discussed. The entire process took me less than five minutes.
# download video
yt-dlp https://www.youtube.com/watch?v=T9aRN5JkmL8
# convert to 16 kHz mono .wav
ffmpeg -i video.webm -ar 16000 -ac 1 -c:a pcm_s16le video.wav
# transcribe (https://github.com/ggerganov/whisper.cpp)
./main -nt -m ggml-large-v3-turbo-q5_0.bin video.wav
# send to GPT-4o
'# Task
- Summarize key points
- Focus on techniques mentioned
- List them as succinct but detailed recipes
- Provide examples of each type of prompt technique
# Transcription:
Basically, this entire round table session here is just going to be focused mainly on...'
Below is the outcome of the process:
Key Techniques Summarized as "Prompt Engineering Recipes"
1. Clear Communication
- Recipe: Provide a straightforward, unambiguous description of the task. Avoid abstract or overly complex instructions.
- Example: "Extract all email addresses from the text provided. If none exist, state 'No email addresses found.'"
2. Iterate and Refine
- Recipe: Treat prompt engineering as an iterative process. Adjust the prompt and observe the model's output to refine the results progressively.
- Example: Start with: "Summarize this article." If the output is vague, refine it: "Summarize the article in three sentences, focusing on key arguments."
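The iterate-and-refine cycle can be sketched as a short loop. Here `call_model` is a hypothetical stand-in for a real LLM API call, and the vagueness check is a toy heuristic; both are assumptions for illustration, not part of the talk.

```python
def call_model(prompt):
    """Hypothetical stand-in for a real LLM API call; returns a canned
    vague answer until the prompt pins down length and focus."""
    if "three sentences" in prompt:
        return "The article argues X. It supports X with Y. It concludes Z."
    return "The article discusses some topics."

def needs_refinement(output):
    """Toy heuristic: fewer than three sentences counts as too vague."""
    sentences = [s for s in output.split(".") if s.strip()]
    return len(sentences) < 3

prompt = "Summarize this article."
for _ in range(3):  # cap the number of refinement rounds
    output = call_model(prompt)
    if not needs_refinement(output):
        break
    prompt += " Use three sentences, focusing on the key arguments."
```

In practice the "heuristic" is you reading the output, but capping the rounds and folding each observed shortcoming back into the prompt is the same loop.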
3. Edge Case Testing
- Recipe: Consider unusual or error-prone inputs to test the prompt's robustness. Explicitly instruct the model on handling exceptions.
- Example: "Identify names starting with 'G.' If no such name exists, respond with 'No matching names found.' If input is not a text document, state 'Invalid input.'"
4. Set the Model's Role
- Recipe: Define a role or persona for the model to adopt to shape its responses and improve context alignment.
- Example: "You are an assistant guiding a novice programmer through Python code. Break down each line of the code and explain key concepts."
5. Use High-Quality Examples (Few-shot Prompting)
- Recipe: Provide examples of input-output pairs to guide the model's behavior and set a clear pattern.
- Example:
- Input: "I'm looking for some feedback on my project."
- Output: "I would appreciate your insights on my project. Could you please provide your thoughts?"
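With chat-style APIs, few-shot examples are usually encoded as alternating user/assistant turns. A minimal sketch, assuming an OpenAI-style message format (the system text and example pair below are illustrative):

```python
def build_few_shot_messages(examples, query):
    """Build a chat message list: each (input, output) pair becomes a
    user/assistant turn, followed by the real query."""
    messages = [{"role": "system",
                 "content": "Rewrite requests into polite, explicit phrasing."}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [("I'm looking for some feedback on my project.",
             "I would appreciate your insights on my project. "
             "Could you please provide your thoughts?")]
msgs = build_few_shot_messages(examples, "Check my essay, will you?")
```

The fabricated assistant turns set the pattern; the model then tends to answer the final user turn in the same style.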
6. Meta Prompts
- Recipe: Ask the model to help design prompts for other tasks. This can enhance the quality of subsequent prompts.
- Example: "You are a prompt designer. Create a prompt for extracting key takeaways from a meeting transcript. Include instructions for handling discussions, action items, and conclusions."
7. Collaborate with the Model
- Recipe: Use back-and-forth interaction with the model to identify areas for improvement in your prompts.
- Example: "You didn't correctly identify the key point. What part of the instructions was unclear? Suggest an alternative prompt."
8. Inspect Model Outputs for Clarity
- Recipe: Carefully read and analyze model outputs to understand how it interprets instructions and where it goes wrong.
- Example: "List the steps the model took to arrive at its response. Identify any incorrect steps and provide a revised prompt."
9. Provide Context and Trust the Model's Capabilities
- Recipe: Assume the model has a high level of knowledge and can handle complex instructions. Trust it with more context when possible.
- Example: "Here is a research paper. Summarize the key findings for an audience of graduate students familiar with machine learning concepts."
10. Elicit Feedback from the Model
- Recipe: Ask the model directly for feedback on its own performance and how the prompt can be improved.
- Example: "Here are your instructions. Where do you find ambiguities? Suggest how these instructions can be clarified."
11. Step-by-step Breakdown (Chain of Thought)
- Recipe: Request the model to break down its process step-by-step to encourage thorough reasoning.
- Example: "List all the steps you would take to solve this math problem. Then provide the final answer."
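A chain-of-thought instruction can be appended to any task programmatically; the wording below is one possible phrasing, and the `Answer:` marker is an assumption that makes the final answer easy to extract later.

```python
def with_chain_of_thought(task):
    """Wrap a task so the model reasons step by step before answering."""
    return (
        f"{task}\n"
        "First, list the steps you would take to solve this, one per line.\n"
        "Then, on a final line starting with 'Answer:', give the final answer."
    )

prompt = with_chain_of_thought("What is 17 * 24?")
```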
12. Treat Prompts Like Code
- Recipe: Approach prompt crafting methodically, as if you are programming a complex system. Track changes and refine as you would in software development.
- Example: Initial prompt: "Draft a rejection email." Revised prompt: "Compose a rejection email including appreciation, a clear decision, and encouragement to apply again."
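Treating prompts like code can mean keeping them as parameterized, version-tracked templates next to your source rather than as ad-hoc strings. A minimal sketch using Python's standard `string.Template` (the template name and fields are hypothetical):

```python
from string import Template

# Versioned like any other module constant; diffs show exactly what changed
# between prompt revisions.
REJECTION_EMAIL_V2 = Template(
    "Compose a rejection email to $name for the $role position. "
    "Include: (1) appreciation for their time, (2) a clear decision, "
    "(3) encouragement to apply again."
)

prompt = REJECTION_EMAIL_V2.substitute(name="Alex", role="Data Analyst")
```

Checking templates into version control gives you the same review, diff, and rollback workflow you already use for code.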
13. Iterative Feedback Loops
- Recipe: Create a continuous feedback loop by using the model's output to improve both the prompt and the desired output.
- Example: "Your summary missed key points. Can you identify which aspects were unclear in the instructions? Modify the instructions for a better summary."
14. Handling Ambiguous Instructions
- Recipe: Allow the model to flag uncertainties in its responses to refine its behavior for edge cases.
- Example: "If an instruction is unclear or input doesn't match the expected format, output 'Uncertain' and state the reason."
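One way to make the "flag uncertainty" instruction machine-checkable is to request JSON and validate the reply. The schema below is an assumption for illustration, not something from the talk:

```python
import json

AMBIGUITY_PROMPT = (
    "Extract the invoice total from the text below. "
    'Reply with JSON only: {"status": "ok", "total": <number>} on success, '
    'or {"status": "uncertain", "reason": "<why>"} if the input is unclear '
    "or not an invoice."
)

def parse_reply(reply):
    """Validate a model reply against the expected schema."""
    data = json.loads(reply)
    if data.get("status") == "ok":
        return data["total"]
    if data.get("status") == "uncertain":
        return data["reason"]
    raise ValueError(f"unexpected reply: {reply}")
```

Routing "uncertain" replies to a human (or to a prompt revision) turns edge cases into signal instead of silent failures.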
15. Future-Focused Elicitation
- Recipe: Have the model engage in a dialogue to clarify complex requirements.
- Example: "Claude, I want to create a survey. Ask me questions to gather all the details needed to design the survey."
By applying these techniques as "recipes," you can develop prompts that are clear, contextually accurate, and designed to elicit the best possible response from language models.