Create Prompt_Engineering_with_Llama_3_On_Amazon_Bedrock.ipynb #515

Open · wants to merge 5 commits into main

Conversation

ejsd1989 (Contributor) commented:

This is an updated notebook for prompt engineering with Llama 3 on AWS using Bedrock. It also includes a brief intro example using Llama Guard 2.

Before submitting

  • [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [x] Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

Thanks for contributing 🎉!

"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
Contributor:

Clear the output of the pip install cell before pushing the code to the repo.
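For example, one way to keep the install output out of the committed notebook (a sketch; the package list here is an assumption, not the PR's actual install cell):

```python
%%capture
# Capture the install cell's console output so the noisy pip log is not
# committed with the notebook; the package list below is illustrative only.
%pip install --quiet langchain boto3
```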

"from langchain.schema.messages import get_buffer_string\n",
"import os\n",
"\n",
"# bedrock = boto3.client(service_name='bedrock', region_name='us-east-1')\n",
Contributor:

It's a bit unclear why these 3 lines are commented out. Can we add a short description of why they exist here?
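One option would be to either delete the line or keep it with a note on when an explicit client is needed. A minimal sketch (the region and reliance on default credentials are assumptions):

```python
import boto3

# LangChain's Bedrock wrapper can create its own client from the default AWS
# credentials/region, so an explicit client is typically only needed when
# overriding those defaults. Note that model invocation goes through the
# "bedrock-runtime" service; "bedrock" is the control-plane API.
bedrock_runtime = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")
```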

") -> str:\n",
" llm = Bedrock(model_id=model)\n",
" prompt_str = ''.join(prompt)\n",
" if not prompt_str.endswith(\"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\"):\n",
Contributor:

Let's extract this string to a variable and name it appropriately so developers know why we are doing this check explicitly. (We are doing this to determine whether the agent must respond or not. Is that correct?)
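Something like the following could make the intent self-documenting (a sketch; the constant and helper names are just suggestions, and it assumes the elided next line appends the suffix when missing):

```python
# Hypothetical constant name: the Llama 3 header tokens that hand the turn to
# the assistant, i.e. "generate the assistant's reply now".
ASSISTANT_TURN_SUFFIX = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>"

def ensure_assistant_turn(prompt_str: str) -> str:
    """Append the assistant-turn header if the prompt doesn't already end with it."""
    if not prompt_str.endswith(ASSISTANT_TURN_SUFFIX):
        prompt_str += ASSISTANT_TURN_SUFFIX
    return prompt_str
```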

"\n",
"def complete_and_print(prompt: str, model: str = DEFAULT_MODEL, verbose: bool = True):\n",
" print(f'==============\\n{prompt}\\n==============')\n",
" if not prompt.endswith(\"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\"):\n",
Contributor:

The variable used above can be reused here as well.
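i.e. something along these lines, reusing the helper sketched above (assumes the suggested names; the rest of the body is elided):

```python
def complete_and_print(prompt: str, model: str = DEFAULT_MODEL, verbose: bool = True):
    print(f'==============\n{prompt}\n==============')
    # Same named check as in the completion helper above
    prompt = ensure_assistant_turn(prompt)
    ...
```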

"name": "stdout",
"output_type": "stream",
"text": [
"==============\n",
Contributor:

I see you have saved the output of these prompts across the notebook. Ignore my first comment if that's how you are creating all the notebooks. If it's not the standard, let's clear this output as well.

@albertodepaola (Contributor) left a comment:

Looks great! Added a couple of comments on the Llama Guard section. Also, do we want to keep the Llama 2 version of this notebook?

"# Now before we decide to return the response from the inference, we can run the\n",
"# generated output through Llama Guard again\n",
"\n",
"messages_input = [{\"role\": \"user\", \"content\": bedrock_output}]\n",
Contributor:

Here it should be a role of type "Agent".
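i.e. something like this (sketch), so Llama Guard evaluates the text as an assistant response rather than a user message:

```python
# Label the model's reply as the "Agent" turn for the response-side check.
messages_input = [{"role": "Agent", "content": bedrock_output}]
```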

"# generated output through Llama Guard again\n",
"\n",
"messages_input = [{\"role\": \"user\", \"content\": bedrock_output}]\n",
"payload_input_guard = {\"inputs\": format_guard_messages(messages_input)}\n",
Contributor:

Ideally, the user prompt should be in this list as well, to showcase that when evaluating the model output, the user input is also taken into consideration.
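Putting both suggestions together, the output check could look roughly like this (a sketch; `user_prompt` stands in for whatever variable holds the original user message in the notebook):

```python
# Evaluate the model's reply in the context of the user turn that produced it.
messages_input = [
    {"role": "user", "content": user_prompt},      # original user message (variable name assumed)
    {"role": "Agent", "content": bedrock_output},  # model output being checked
]
payload_input_guard = {"inputs": format_guard_messages(messages_input)}
```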

"source": [
"TASK = \"Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories.\"\n",
"\n",
"INSTRUCTION = \"\"\"\\\n",
Contributor:

What do you think about adding the build_default_prompt methods from the utils file? We can add a small section on how to use those methods to customize categories as well, but for default ones, we can leverage the existing methods.

Contributor (Author):

Yeah, that would be great. I believe I tested some of our utils file when I hacked this together, but opted for this approach for speed.

Contributor:

Gotcha! I think you should be able to import the build_default_prompt function from here to build the prompt quickly. The import works as with any package, provided the recipes are installed:
from llama_recipes.inference.prompt_format_utils import build_default_prompt, create_conversation, LlamaGuardVersion
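A rough usage sketch (assuming the helper signatures in llama_recipes.inference.prompt_format_utils at the time of writing, with `user_prompt` standing in for the user message variable):

```python
from llama_recipes.inference.prompt_format_utils import (
    build_default_prompt,
    create_conversation,
    LlamaGuardVersion,
    AgentType,  # assumption: the User/Agent enum lives in the same module
)

# Build alternating user/agent turns from raw strings, then format the full
# Llama Guard 2 prompt with the default category definitions.
conversation = create_conversation([user_prompt, bedrock_output])
guard_prompt = build_default_prompt(
    AgentType.AGENT,  # evaluating the agent's (model's) reply
    conversation,
    LlamaGuardVersion.LLAMA_GUARD_2,
)
```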
