Stop Chatting with GPT, Start Prompt Engineering!


Hey there, fellow prompt engineers and tech enthusiasts! I’m a sophomore majoring in computing, and ChatGPT and GitHub Copilot are my right-hand tools, so I decided to dig deep into the art and science of prompt engineering. I’ve put together this comprehensive guide from my notes on Vanderbilt University’s Coursera course, Prompt Engineering for ChatGPT. If you’re into learning how to make ChatGPT sing (figuratively), this is for you. Let’s get right into it!


Application in Real-World Scenarios

Large Language Models (LLMs) like ChatGPT can streamline tasks and enhance productivity. They’re not here to replace us but to augment our capabilities.

  • Streamlining Tasks: Tools like ChatGPT can assist in writing reports, allowing professionals to focus on more rewarding aspects of their jobs.
  • Enhancing Productivity: Generating detailed assessments or analyses can save time and enhance productivity, especially in fields where documentation is key.

Implications

  • No Replacement for Professionals: ChatGPT isn’t replacing anyone but is a tool to augment professional work.
  • Focus on High-Value Activities: Automating repetitive tasks lets professionals spend more time on high-value activities, like client interaction.

What Are Large Language Models?

LLMs like ChatGPT are advanced AI trained on vast text data. They predict the next word or token in a sequence to generate coherent responses.

  • Word-by-Word Generation: They predict the next word based on the input given.
  • Training Process: Trained using unsupervised learning on massive text data from the internet.

Key Concepts for Designing Prompts

  • Context Understanding: LLMs use the context provided to generate relevant responses.
  • Variation in Outputs: Outputs can vary due to inherent randomness, allowing for creative and diverse responses.

Implications for Prompt Engineering

  • Rapid Evolution: The field is advancing quickly with new models and enhanced capabilities.
  • Experimentation and Creativity: Effective use of LLMs requires openness to experimentation.
  • Managing Expectations: It’s essential to manage expectations regarding the accuracy and consistency of outputs.

Practical Tips for Using LLMs

  • Prompt Specificity: Specificity and context in prompts influence the quality and relevance of the output.
  • Incorporating New Information: Explicitly include relevant details in the prompt for accurate responses.
  • Understanding Limitations: Recognize the limitations of LLMs, including their dependence on training data.

Embracing Variability

LLMs incorporate randomness, which is great for creative tasks but makes consistent outputs harder to guarantee.

Prompt Engineering: A Strategy for Control

Prompt engineering aims to direct the model toward more predictable and useful responses.

Techniques and Challenges

  • Constraining Responses: For binary tasks (yes/no), nudge the model towards straightforward answers.
  • Dealing with Randomness: Accept and manage randomness effectively.

Understanding Prompts

What Is a Prompt?

  • Understanding the Concept: A prompt is more than just a question; it’s a call to action for the model to generate output.
  • Temporal Dimension: Prompts can influence current and future interactions.
  • Memory and Information Enhancement: They can serve as memory cues for the model.

Intuition Behind Prompts

  • Understanding Patterns: Patterns in prompts shape the model’s response.
  • Effect of Altering Patterns: Changing the pattern disrupts the prediction process, leading to varied responses.
  • Specificity in Prompts: Specific words and phrases guide the model.
  • Adapting Prompts: Modify language or specify output structures to influence behavior.

Everyone Can Program with Prompts

  • Prompts as Programs: Prompts can function as instructions for ChatGPT.
  • Dynamic Programming: Users can dynamically adjust prompts to refine behavior.

The Persona Pattern

  • Overview: Instructing a model to adopt a specific persona for specialized outputs.
  • Examples and Applications: Use personas to match desired outcomes.
  • Strategic Implementation: Choose and clearly define personas in prompts.
Act as Persona X
Perform task Y
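
The two-line template above can be wrapped in a small prompt builder. A minimal sketch in Python, where the persona and task strings are hypothetical examples of my own:

```python
def persona_prompt(persona: str, task: str) -> str:
    """Persona pattern: act as persona X, then perform task Y."""
    return f"Act as {persona}.\nPerform the following task: {task}"

# Hypothetical persona and task, purely for illustration.
prompt = persona_prompt(
    "a skeptical code reviewer",
    "review this function for edge cases",
)
print(prompt)
```

Swapping the persona string is all it takes to change the entire character of the output.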

Advanced Prompt Engineering Techniques

Cognitive Verifier Pattern

  1. Overview: Breaks down complex questions into smaller subproblems.
  2. Pattern Steps: Generate additional questions to understand and address the main problem.
  3. Example Usage:
Original Question: "How many mosquitoes probably live in my front yard?"
Subquestions:
What is the size of your front yard?
What is the climate like in your area?
What time of year is it?
Are there any bodies of water nearby?
Are there plants that attract mosquitoes?
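
The decomposition step can be requested directly in the prompt itself. A minimal sketch that reuses the mosquito example; the exact instruction wording is my own, not the course's:

```python
def cognitive_verifier_prompt(question: str, n: int = 3) -> str:
    """Cognitive Verifier: ask the model to decompose the question first."""
    return (
        f"When I ask you a question, generate {n} additional questions that "
        "would help you answer it more accurately. When I have answered "
        "them, combine the answers to produce the final answer to my "
        "original question.\n\n"
        f"Question: {question}"
    )

prompt = cognitive_verifier_prompt(
    "How many mosquitoes probably live in my front yard?")
print(prompt)
```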

Flipped Interaction Pattern

  • Reverse Interaction: The LLM asks questions, and the user answers.
  • Applications: Useful for problem-solving and quizzing.
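
A flipped interaction is set up by telling the model who asks and when to stop. A minimal sketch, with a hypothetical goal and stopping condition of my own:

```python
def flipped_interaction_prompt(goal: str, stop: str) -> str:
    """Flipped Interaction: the model asks the questions, the user answers."""
    return (
        f"I would like you to ask me questions to {goal}. "
        f"Ask me one question at a time, and continue until {stop}. "
        "Then tell me your conclusion."
    )

# Hypothetical example: diagnosing a hardware problem interactively.
prompt = flipped_interaction_prompt(
    "diagnose why my laptop will not boot",
    "you are confident of the cause",
)
print(prompt)
```

Asking for one question at a time keeps the interaction focused instead of producing one long questionnaire.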

Few-Shot Prompting

  • Pattern Learning: Provide examples of input and expected output.
Input: "The movie was good but a bit long." Output: Sentiment - Neutral
Input: "I didn't really like this book." Output: Sentiment - Negative
Input: "I love this book." Output: Sentiment - Positive
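
The examples above can be assembled programmatically. A minimal sketch that reuses the sentiment examples from this section; the final input sentence is a new, hypothetical one:

```python
def few_shot_prompt(examples, new_input):
    """Few-shot: show (input, label) pairs, then leave the last output blank."""
    lines = [f'Input: "{text}" Output: Sentiment - {label}'
             for text, label in examples]
    lines.append(f'Input: "{new_input}" Output:')
    return "\n".join(lines)

examples = [
    ("The movie was good but a bit long.", "Neutral"),
    ("I didn't really like this book.", "Negative"),
    ("I love this book.", "Positive"),
]
print(few_shot_prompt(examples, "The soundtrack was fantastic."))
```

Ending the prompt with a blank `Output:` invites the model to complete the pattern.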

Writing Effective Few-Shot Examples

Keep every example in an identical format, make the examples representative of the outputs you want, and cover the range of cases the model should handle.

Chain of Thought Prompting

  • Importance: Demonstrating reasoning and steps for problem-solving.
  • Chain of Thought: Encourages logical thinking.
  • Example Scenarios:
Bike race: break down the calculation of total miles ridden.
Staging process: calculate the number of groups needed.
Spaceship: consider the absence of gravity and its implications.
In each case, the model is encouraged to reason in detail before giving its answer.
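
A chain-of-thought prompt typically includes one worked example with explicit reasoning before the real question. A minimal sketch, using a bike-race example with numbers of my own:

```python
def chain_of_thought_prompt(question: str) -> str:
    """Chain of Thought: show worked reasoning, then pose the real question."""
    worked_example = (
        "Q: I ride 5 miles to a race, do 4 laps of a 2-mile course, then "
        "ride 5 miles home. How many miles do I ride in total?\n"
        "A: Reasoning: 5 + (4 * 2) + 5 = 18. Answer: 18 miles.\n"
    )
    return worked_example + f"\nQ: {question}\nA: Reasoning:"

print(chain_of_thought_prompt(
    "A spaceship has no gravity. What happens to a dropped wrench?"))
```

Ending on `A: Reasoning:` nudges the model to show its steps before committing to an answer.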

ReAct Prompting

  • External Tools: Incorporate external tools (e.g., web search, videos) into problem-solving.
  • Example Task: Calculate the arrival time for a BMX race.
Example Steps:
Think: Identify the first step in the process (e.g., finding the race start time).
Action (Search): Perform a web search for relevant information.
Result: Provide the obtained information.
Think: Decide the next step based on the result.
Action (Video): Watch a video to gather additional data.
Result: Record the relevant information from the video.
Think: Analyze the collected data and calculate the arrival time.
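
The Think/Action/Result loop can be set up with a single instruction block. A minimal sketch, where the tool names (`SEARCH`, `VIDEO`) are placeholders of my own:

```python
REACT_INSTRUCTIONS = (
    "Solve the task below step by step. At each step, output one of:\n"
    "Think: <your reasoning about what to do next>\n"
    "Action: <a tool call, e.g. SEARCH[query] or VIDEO[url]>\n"
    "After each Action, stop and wait for me to paste in the Result.\n"
    "When you have enough information, output: Answer: <final answer>\n"
)

def react_prompt(task: str) -> str:
    """ReAct: interleave reasoning with external tool use."""
    return REACT_INSTRUCTIONS + f"\nTask: {task}"

prompt = react_prompt("Work out when I need to leave home to make the "
                      "start of Saturday's BMX race.")
print(prompt)
```

The user (or a wrapper script) plays the role of the tools, pasting real search results back in as `Result:` lines.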

Using Patterns for Enhanced Interaction

Game Play Pattern

  • Initiate the Game: Start by telling the LLM you want to play.
  • Define Rules: Outline the rules.
  • Provide Context: Give specific contexts for tasks.
  • Start the Game: Ask the LLM to give you the first task.
  • Write the Prompt: Solve the task based on the prompt.
  • Evaluate Output: The LLM generates an output and provides feedback.
  • Iterate and Improve: Use feedback to improve prompts.
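
The steps above can be condensed into a single opening prompt. A minimal sketch, with a topic and rules of my own choosing:

```python
def game_prompt(topic: str, rules: str) -> str:
    """Game Play pattern: the model referees a practice game."""
    return (
        f"Let's play a game to practice {topic}.\n"
        f"Rules: {rules}\n"
        "Give me the first task. After I respond, evaluate my answer, give "
        "me feedback, and then give me the next task."
    )

# Hypothetical game for practicing prompt writing itself.
prompt = game_prompt(
    "writing few-shot prompts",
    "each task gives me a classification problem; I must write a prompt "
    "that solves it, and you grade my prompt from 1 to 10",
)
print(prompt)
```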

Template Pattern

  • Define the Template: Describe the format and placeholders.
  • Describe Placeholders: Use capitalized words for placeholders.
  • Maintain Formatting: Ensure the model preserves formatting.
  • Provide Data: Supply content for processing.
I'm going to give you a template for your output. Capitalized words are my placeholders. Fill in my placeholders with your output and please preserve the overall formatting of my template.

My template is:
# NAME
## Executive Summary
EXECUTIVE_SUMMARY
## Full Description
FULL_DESCRIPTION

Meta Language Creation Pattern

  • Create Shorthand Language: Develop concise notation for the domain.
  • Explain Notation: Describe what each part means.
  • Use Examples: Provide examples of shorthand and full descriptions.
  • Test Shorthand: Use shorthand to interact with the LLM.
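
Putting the four steps together gives one prompt that defines, explains, and then uses the shorthand. A minimal sketch using an invented travel notation (the shorthand is mine, not from the course):

```python
def meta_language_prompt(trip: str) -> str:
    """Meta Language Creation: teach the model a shorthand, then use it."""
    return (
        'When I write "A -> B (Nh)", I mean a leg of a trip from city A to '
        "city B taking N hours. When I give you a trip in this shorthand, "
        "describe the full itinerary in plain English.\n\n"
        f"Trip: {trip}"
    )

prompt = meta_language_prompt("Singapore -> Kuala Lumpur (5h)")
print(prompt)
```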

Recipe Pattern

  • Define the Goal: State what you want to achieve.
  • Provide Known Steps: List known steps or information.
  • Indicate Gaps: Use placeholders for the LLM to fill gaps.
  • Complete the Recipe: Ask the LLM to complete the process.
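
These four steps map directly onto a prompt builder. A minimal sketch, where the goal and steps are hypothetical and `"..."` marks a gap for the model to fill:

```python
def recipe_prompt(goal: str, known_steps: list[str]) -> str:
    """Recipe pattern: give the goal and partial steps; the model fills gaps."""
    steps = "\n".join(f"- {s}" for s in known_steps)
    return (
        f"I am trying to {goal}. I know that I need to:\n{steps}\n"
        "Provide a complete sequence of steps, filling in any missing "
        "steps and identifying any unnecessary ones."
    )

prompt = recipe_prompt(
    "deploy a small web app",
    ["write the code", "...", "point my domain at the server"],
)
print(prompt)
```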

Optimizing Prompts with Patterns

Ask for Input Pattern

  • Defining Rules: Establish rules for the model to follow.
  • Challenges: Prevent premature responses.
  • Ask for Input: Instruct the model to ask for the next input after rules.
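
The trick is simply to end the rules with an explicit request for input. A minimal sketch, with a hypothetical rule of my own:

```python
def ask_for_input_prompt(rules: str, input_name: str) -> str:
    """Ask for Input: state the rules, then prevent a premature answer by
    asking the model to request the first input."""
    return f"{rules}\n\nAsk me for the first {input_name}."

prompt = ask_for_input_prompt(
    "Whenever I give you a sentence, translate it into formal English "
    "and explain any slang it contains.",
    "sentence",
)
print(prompt)
```

Without the final line, the model often starts generating sentences of its own instead of waiting for yours.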

Outline Expansion Pattern

  • Limitations: LLMs have input and output size limits.
  • Working with Pieces: Break down large tasks.
  • Outline Expansion: Generate an initial outline and expand on points.
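
The expand-one-point-at-a-time loop can be seeded with a single prompt. A minimal sketch, with a hypothetical topic:

```python
def outline_expansion_prompt(topic: str) -> str:
    """Outline Expansion: work within size limits by expanding one bullet
    point at a time."""
    return (
        f"Create a bullet-point outline for {topic}. "
        "After showing the outline, ask me which bullet point to expand, "
        "and then produce a more detailed outline for just that point."
    )

prompt = outline_expansion_prompt("a blog post on prompt engineering")
print(prompt)
```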

Menu Actions Pattern

  • Menu Actions Concept: Similar to software menus for streamlining tasks.
  • Defining Actions: Predefine actions for easy reuse.
  • Running Actions: Perform tasks without retyping prompts.
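
A menu is just a set of predefined command-to-action rules stated up front. A minimal sketch, where the menu entries are hypothetical examples of my own:

```python
MENU = {
    "write <topic>": "write one paragraph about <topic>",
    "summarize": "summarize everything written so far",
    "outline <topic>": "produce a bullet-point outline for <topic>",
}

def menu_actions_prompt(menu: dict) -> str:
    """Menu Actions: predefine reusable commands, then ask for the first one."""
    rules = "\n".join(f'Whenever I type "{cmd}", you will {action}.'
                      for cmd, action in menu.items())
    return rules + "\nAsk me for the first action."

prompt = menu_actions_prompt(MENU)
print(prompt)
```

After this setup, typing `summarize` runs the whole action without retyping the full prompt.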

Fact Check List Pattern

  • Limitations: LLMs can produce inaccurate outputs.
  • Identifying Facts: Verify the accuracy of generated information.
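
A fact-check list can be requested by appending one instruction to any request. A minimal sketch, with a hypothetical request:

```python
def fact_check_prompt(request: str) -> str:
    """Fact Check List: ask for the facts the answer depends on."""
    return (
        f"{request}\n\n"
        "After your answer, add a FACT CHECK section listing the specific "
        "facts your answer depends on, so that I can verify each one."
    )

prompt = fact_check_prompt("Summarize the health benefits of cycling.")
print(prompt)
```

The list doesn't make the answer correct, but it tells you exactly which claims to verify.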

That’s a wrap! I hope this guide helps you navigate the exciting world of prompt engineering. Remember, practice makes perfect, so keep experimenting and refining your prompts. Happy engineering!

Feel free to reach out if you have any questions or need further clarification on any of the topics. Let’s learn and grow together!

👋 Connect with Me

I’m Javian Ng, an aspiring Full-Stack Infrastructure Architect & LLM Solutions Engineer based in Singapore. I love building scalable infrastructure and AI systems.

Feel free to reach out or explore more about my projects and experiences.