LLMs Introduction: What they are and how to leverage them


This repository contains the content for the hands-on workshop.

This workshop is an introduction to Generative AI. We will use the OpenAI API to generate text and surface it in a basic front-end implementation.

Prerequisites

Optional

  • OpenAI API Key

We'll be providing a key for the workshop, but you can also sign up for a free key at https://beta.openai.com/signup/. We will delete the key after this workshop.

Intro


What is Generative AI?

Generative AI refers to artificial intelligence systems capable of creating new content, such as text, images, or music, based on patterns and examples in the data they are trained on. Generative AI models learn the patterns and structure of their input training data and then generate new data with similar characteristics.

What is an LLM?

A Large Language Model (LLM) is a type of AI that can process and produce natural language text. It learns from a massive amount of text data such as books, articles, and web pages to discover patterns and rules of language from them. Examples of LLMs include GPT (Generative Pre-trained Transformer) models like GPT-3/4.

What is prompt engineering?

Prompt engineering is the process of designing and refining the input prompts to an LLM to achieve the desired output. It is a crucial step in leveraging LLMs to generate high-quality content.

Getting Started


For this workshop we will use a CodeSandbox environment. Please FORK the sandbox to your account. We’ll be generating content by sending a basic HTTP request to an OpenAI endpoint, using the gpt-3.5-turbo or gpt-4 models.

(Screenshot: app.png)

Generate text

Steps:

  1. Play with configuration
  2. Customize the user input: keyword and ingredients
  3. Generate a pizza name
  4. Improve the results by refining the prompt, then regenerate the pizza name

Prompt Engineering

Some best practices for prompt engineering:

  • Give the model a persona
  • Be as specific as possible (detailed context, desired format, output length, level of details, etc.)
  • Provide examples of the desired output (sample text, format, templates, graph, etc.)
  • Break down the instructions into smaller steps
  • Prompt Iteratively: Prompt, Evaluate, Repeat

Prompting formula for generating text:

Role + Goal + Specify the audience + Set style/tone/restrictions + Format = 🤍
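As a purely illustrative example (the wording below is made up for this sketch, not taken from the workshop code), the formula might expand into a system prompt assembled from its five pieces:

```javascript
// Illustrative only: one way to assemble a system prompt from the
// Role + Goal + Audience + Style/Restrictions + Format pieces.
const role = "You are a chef who specializes in authentic Italian pizza.";
const goal = "Your goal is to generate new pizza names.";
const audience = "The names are for a playful, family-friendly menu.";
const style = "Keep them creative and fantasy-like; avoid the word 'pizza'.";
const format = "Return exactly one name, at most 3 words, no explanation.";

const systemPrompt = [role, goal, audience, style, format].join("\n");
```

Keeping each piece in its own variable makes it easy to iterate on one constraint at a time, which fits the Prompt, Evaluate, Repeat loop above.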

Prompting

const defaultPrompt =
  "You are a chef who specializes in delicious authentic Italian pizza.\n" +
  "Your goal is to generate new pizza names.\n" +
  "The pizza names should be creative, fantasy-like and based on user input.\n";

Exercise: Improve the response by refining the prompt. Try prompts like these:

The pizza name should not include the word PIZZA.
The pizza name should mention a diminutive of one of the user's ingredients.
The pizza names should not be longer than 3 words.

OpenAI Chat Parameters

body: JSON.stringify({
  model: "gpt-3.5-turbo",
  max_tokens: 256,
  temperature: 0.5,
  messages: [
    {
      role: "system",
      content: defaultPrompt,
    },
    {
      role: "user",
      content: `Generate a pizza name using the keyword: ${keyword} and the following ingredients: ${ingredients}`,
    },
  ],
}),
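The snippet above is only the body option of the request. For context, a complete call might look like the following sketch. The buildRequest helper name and the OPENAI_API_KEY environment variable are assumptions for illustration; the endpoint shown is the standard OpenAI chat completions URL.

```javascript
// Minimal sketch of the full request around the body shown above.
// Assumes OPENAI_API_KEY is set in the environment.
const defaultPrompt =
  "You are a chef who specializes in delicious authentic Italian pizza.\n" +
  "Your goal is to generate new pizza names.\n" +
  "The pizza names should be creative, fantasy-like and based on user input.\n";

function buildRequest(keyword, ingredients) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      max_tokens: 256,
      temperature: 0.5,
      messages: [
        { role: "system", content: defaultPrompt },
        {
          role: "user",
          content: `Generate a pizza name using the keyword: ${keyword} and the following ingredients: ${ingredients}`,
        },
      ],
    }),
  };
}

// Usage (network call, not run here):
// const res = await fetch("https://api.openai.com/v1/chat/completions",
//   buildRequest("unicorn", "mushrooms, spinach"));
// const data = await res.json();
// console.log(data.choices[0].message.content);
```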

(Screenshot: configuration.png)

Model

The model parameter specifies the model to use for generating the response. More info about different models: https://platform.openai.com/docs/models/models

Max Tokens

The max_tokens parameter controls how many tokens the model will generate. The more tokens, the longer the generated text will be. A token can be as short as one character or as long as one word in English. For instance, both “a” and “apple” represent one token. Tokenize tool: https://platform.openai.com/tokenizer

Temperature

The temperature parameter influences the randomness of the generated responses. Possible values are between 0 and 2. A lower value, such as 0.2, makes the answers more focused and deterministic, while a higher value, like 0.8, makes them more creative.

Messages

This parameter is an array of message objects, each having a role.

Roles:

  • system - high-level instructions, such as prompts or context. E.g. You are a chef
  • user - messages generated by the user, such as queries or prompts. E.g. Generate a pizza name using the keyword: "unicorn".
  • assistant - the model’s response. E.g. Kitten’s Whisker Whisper: Prosciutto, Mushroom, and Spinach Fantasia
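The three roles combine into a single messages array on each request. As a hedged sketch (the contents below are made up for illustration), a follow-up request echoes the model’s earlier reply back as an assistant message so the model has conversational context:

```javascript
// Sketch of a follow-up request: the assistant message carries the
// model's previous answer so the new user message can refer to it.
const messages = [
  { role: "system", content: "You are a chef who specializes in Italian pizza." },
  { role: "user", content: 'Generate a pizza name using the keyword: "unicorn".' },
  { role: "assistant", content: "Unicorn Dream Fantasia" },
  { role: "user", content: "Make it shorter and do not use the word Fantasia." },
];
```

The API is stateless, so any earlier turns you want the model to remember must be resent in this array.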

Exercise: Try changing the temperature and the model and see how the output changes.

Known limitations

  • Rate limit reached

    This error (429: 'Too Many Requests' or RateLimitError) means you have exceeded your permitted usage rate. Options:

    • Wait until your rate limit resets and retry.
    • Send fewer tokens or requests, or slow down.
    • Increase the rate limit.
  • Conversational responses

(Screenshot: limitations_conversation.png)

  • Use prompting to redirect the model towards the answers you're expecting. Try prompts like:
Provide concise results without additional explanations or apologies.
Give me the information directly without any introductory sentences.
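The wait-and-retry advice for rate limits can be sketched as a small wrapper around the request function. This is an assumption-laden sketch, not workshop code: the attempt count and delays are arbitrary, and production code should also honor the Retry-After header when the API returns one.

```javascript
// Sketch: retry a request with exponential backoff when it returns
// HTTP 429 (rate limited). makeRequest is any async function that
// resolves to a response-like object with a numeric `status`.
async function withRetry(makeRequest, maxAttempts = 3, baseDelayMs = 500) {
  let res;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    res = await makeRequest();
    if (res.status !== 429) return res; // success or a non-rate-limit error
    if (attempt < maxAttempts) {
      // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
  return res; // still 429 after all attempts; surface it to the caller
}
```

Usage would wrap the fetch call, e.g. `withRetry(() => fetch(url, options))`.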

Exercise: Ask a question in the "improve" input. Analyze the answer. Then change the prompt and regenerate.

Resources


https://platform.openai.com/

https://github.com/openai/openai-cookbook
