
Prompt Engineering Course | by OpenAI

DISCLOSURE: The README in this repository, while comprehensive and educational about the course content, may ironically fail to fulfill its intended role by not exclusively explaining the included code and instead providing an overly verbose course summary.

What are these notes?

  • These notes are from the ChatGPT prompt engineering course by DeepLearning.AI. The course aims to improve prompt writing for large language models and to explore their capabilities.

What are the goals of the course material?

  • Develop precise, reliable, and validated NLP prompts for models like ChatGPT.
  • Gain deeper insights into the potential and constraints of natural language processing models.

Introduction

What is a Large Language Model (LLM)?

  • A large language model is an AI model that produces human-like text based on the data it was trained on. Although it does not truly understand or hold opinions (for now), it is helpful for tasks like writing, coding, and language learning.

Two types of Large Language Models (LLM)

  • Base LLM: Predicts the next word based on its training data.
 Example 1 - Prompt: Once upon a time, there was a unicorn | Answer: that lived in a magical forest with all her unicorn friends.

Or

Example 2 - Prompt: What is the capital of France? | Answer: What is France's largest city? What is France's population? What is the currency of France?
  • Instruction Tuned LLM: Tries to follow instructions. It is fine-tuned on instructions and good attempts at following them, typically using RLHF (Reinforcement Learning from Human Feedback), so that it is helpful, honest, and harmless.
Example: Prompt: What is the capital of France? | Answer: The capital of France is Paris.

Guidelines for Prompting

This is the setup to load the API key and relevant Python libraries.

To install the OpenAI Python library:

!pip install openai

The library needs to be configured with your account's secret key, which is available on the website.

You can either set it as the OPENAI_API_KEY environment variable in your shell before using the library:

export OPENAI_API_KEY='sk-...'

Or, set openai.api_key to its value:

import openai
openai.api_key = "sk-..."

This code loads the OpenAI API key for you:

import openai
import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())

openai.api_key  = os.getenv('OPENAI_API_KEY')

And this is a helper function that makes it easier to use prompts and look at the generated outputs:

# Note: this helper uses the pre-1.0 openai Python library interface (openai.ChatCompletion).
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]

Prompting Principles

  • Principle 1: Write clear and specific instructions.
  • Principle 2: Give the model time to "think".

Here are some tactics to help you achieve these principles:

For principle 1:

  • Tactic 1: Use delimiters to clearly indicate distinct parts of the input. Delimiters can be anything: ```, """, < >, <tag> </tag>, :.
  • Tactic 2: Ask for a structured output, such as JSON or HTML. Example: Provide them in JSON format with the following keys: book_id, title, author, genre. (A sketch combining Tactics 1 and 2 follows this list.)
  • Tactic 3: Ask the model to check whether conditions are satisfied. Example: If the text does not contain X, then do Y.
  • Tactic 4: "Few-shot" prompting, where the model is given a few examples of a task before being asked to perform a similar one. Example: Your task is to answer in a consistent style. X: blah. Y: blah blah blah. X: blah blah.
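A minimal sketch of Tactics 1 and 2 together, using the get_completion helper defined above; the sample text and the requested JSON keys are illustrative assumptions, not from the course:

text = """
The quick brown fox jumps over the lazy dog. \
The dog, unimpressed, goes back to sleep.
"""

prompt = f"""
Summarize the text delimited by triple backticks, and provide \
the result in JSON format with the following keys: summary, animals.

```{text}```
"""

response = get_completion(prompt)
print(response)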

For principle 2:

  • Tactic 1: Specify the steps required to complete a task. Example: Perform the following actions: 1 - Summarize blah. 2 - Translate blah blah blah. 3 - List and output blah blah. (A sketch combining Tactics 1 and 2 follows this list.)

  • Tactic 2: Ask for output in a specified format. Example: Text: 'text to summarize', Summary: 'summary', Translation: 'summary translation', Names: 'list of names'.

  • Tactic 3: Instruct the model to work out its own solution before rushing to a conclusion.
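A minimal sketch of Tactics 1 and 2 for this principle, again using the get_completion helper; the story text and the output labels are illustrative assumptions:

text = """
In a charming village, siblings Jack and Jill set out to \
fetch water from a hilltop well. While climbing, Jack tripped \
and tumbled down the hill, with Jill following suit.
"""

prompt = f"""
Perform the following actions:
1 - Summarize the text delimited by triple backticks in one sentence.
2 - Translate the summary into French.
3 - List each name that appears in the French summary.

Use the following format:
Summary: <summary>
Translation: <summary translation>
Names: <list of names>

```{text}```
"""

response = get_completion(prompt)
print(response)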

Model Limitations: Hallucinations

Hallucinations in AI refer to the generation of plausible but incorrect or unverifiable information by the model. Example: If you ask an AI model, "What color is a Math Fairy's dress?" and the AI responds with "A Math Fairy's dress is typically blue with equations written all over it.", this would be considered a hallucination. In reality, a "Math Fairy" does not exist, so the AI is inventing or "hallucinating" this information.
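One common tactic for reducing hallucinations is to ground the model in a source text: ask it to first find relevant quotes and then answer using only those quotes. A rough sketch, assuming a source_text string you supply:

source_text = """<paste a trusted reference document here>"""

prompt = f"""
First find any quotes in the text delimited by triple backticks \
that are relevant to the question below. Then answer the question \
using only those quotes. If no relevant quotes are found, reply \
with "I don't know."

Question: What color is a Math Fairy's dress?

```{source_text}```
"""

response = get_completion(prompt)
print(response)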


Iterative Prompt Development

This means iteratively refining the instructions you give the language model until you get the results you want.

Issues a user may encounter

  • Issue 1: The text retrieved is too long.
    • Solution: Limit the number of words/sentences/characters. Example: "Your task is X. Use at most 50 words."
  • Issue 2: The text retrieved focuses on the wrong details.
    • Solution: Include in your prompt to focus on the aspects that are relevant to the intended audience. Example: "Your task is X. The X is intended for Y audience, so X should focus on Z."
  • Issue 3: The text retrieved requires more specific information.
    • Solution: Be specific about the output you want. (A sketch addressing all three issues follows this list.)
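A minimal sketch of one iteration that combines the three solutions above; the fact_sheet variable, the furniture-retailer audience, and the request for dimensions are placeholder assumptions:

fact_sheet = """<paste a product fact sheet here>"""

prompt = f"""
Your task is to write a product description based on the fact \
sheet delimited by triple backticks.

The description is intended for furniture retailers, so focus on \
the materials and technical details. At the end, include the \
product dimensions. Use at most 50 words.

```{fact_sheet}```
"""

response = get_completion(prompt)
print(response)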

Summarizing

  • Request to summarize with a word limit. Example: Summarize the following text, delimited by curly braces, in at most 30 words. Review: { text to summarize }.
  • Summarize with a focus on something specific. Example: Summarize X, and focus on Y.
  • Depending on the kind of text you want back, try using the word "extract" instead of "summarize" to get only information that is directly relevant to the input text. (An extract variant is sketched after the loop example below.)
  • Use a for loop to summarize multiple long texts in succession. Example:
# Assumes `reviews` is a list of product review strings.
for i in range(len(reviews)):
    prompt = f"""
    Your task is to generate a short summary of a product \
    review from an ecommerce site.

    Summarize the review below, delimited by triple \
    backticks, in at most 20 words.

    Review: ```{reviews[i]}```
    """

    response = get_completion(prompt)
    print(i, response, "\n")
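And a sketch of the "extract" variant for a single review; the shipping-department focus is an illustrative assumption:

prompt = f"""
Your task is to extract relevant information from a product \
review in order to give feedback to the shipping department.

From the review below, delimited by triple backticks, extract \
the information relevant to shipping and delivery. Use at most \
30 words.

Review: ```{reviews[0]}```
"""

response = get_completion(prompt)
print(response)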

Acknowledgements