Commit 02f6a08

feat (@ai-sdk/amazon-bedrock): add Amazon Bedrock provider (#2045)

Authored by lgrammel and jon-spaeth on Jun 21, 2024
Co-authored-by: Jon Spaeth <jon.spaeth@caylent.com>
1 parent 9938468 commit 02f6a08

34 files changed: +2914 −68 lines

.changeset/red-panthers-study.md (+5)

```md
---
'@ai-sdk/provider-utils': patch
---

feat (provider-utils): add convertArrayToAsyncIterable test helper
```
.changeset/tame-ducks-drum.md (+5)

```md
---
'@ai-sdk/amazon-bedrock': patch
---

feat (@ai-sdk/amazon-bedrock): add Amazon Bedrock provider
```
New file (+170 lines):

---
title: Amazon Bedrock
description: Learn how to use the Amazon Bedrock provider.
---

# Amazon Bedrock Provider

The Amazon Bedrock provider for the [Vercel AI SDK](https://sdk.vercel.ai/docs) contains language model support for the [Amazon Bedrock](https://aws.amazon.com/bedrock) APIs.

## Setup

The Bedrock provider is available in the `@ai-sdk/amazon-bedrock` module. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn']}>
  <Tab>
    <Snippet text="pnpm install @ai-sdk/amazon-bedrock" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ai-sdk/amazon-bedrock" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ai-sdk/amazon-bedrock" dark />
  </Tab>
</Tabs>

### Prerequisites

Access to Amazon Bedrock foundation models is not granted by default. To gain access to a foundation model, an IAM user with sufficient permissions must request access through the console. Once access is granted for a model, it is available to all users in the account.

See the [Model Access Docs](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) for more information.

### Authentication

**Step 1: Create an AWS access key and secret key**

To get started, you'll need to create an AWS access key and secret key. Here's how:

**Log in to the AWS Management Console**

- Go to the [AWS Management Console](https://console.aws.amazon.com/) and log in with your AWS account credentials.

**Create an IAM user**

- Navigate to the [IAM dashboard](https://console.aws.amazon.com/iam/home) and click "Users" in the left-hand navigation menu.
- Click "Create user" and fill in the required details to create a new IAM user.
- Make sure to select "Programmatic access" as the access type.
- The user account needs the `AmazonBedrockFullAccess` policy attached to it.

**Create an access key**

- Click the "Security credentials" tab and then click "Create access key".
- Click "Create access key" to generate a new access key pair.
- Download the `.csv` file containing the access key ID and secret access key.

**Step 2: Configure the access key and secret key**

Within your project, add a `.env` file if you don't already have one. This file is used to set the access key and secret key as environment variables. Add the following lines to the `.env` file:

```makefile
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
AWS_REGION=YOUR_REGION
```

<Note>
  Many frameworks such as [Next.js](https://nextjs.org/) load the `.env` file
  automatically. If you're using a different framework, you may need to load the
  `.env` file manually using a package like
  [`dotenv`](https://github.com/motdotla/dotenv).
</Note>

Remember to replace `YOUR_ACCESS_KEY_ID`, `YOUR_SECRET_ACCESS_KEY`, and `YOUR_REGION` with the actual values from your AWS account.

## Provider Instance

You can import the default provider instance `bedrock` from `@ai-sdk/amazon-bedrock`:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
```

If you need a customized setup, you can import `createAmazonBedrock` from `@ai-sdk/amazon-bedrock` and create a provider instance with your settings:

```ts
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

const bedrock = createAmazonBedrock({
  region: 'us-east-1',
  accessKeyId: 'xxxxxxxxx',
  secretAccessKey: 'xxxxxxxxx',
});
```

You can use the following optional settings to customize the Amazon Bedrock provider instance:

- **region** _string_

  The AWS region that you want to use for the API calls.
  It uses the `AWS_REGION` environment variable by default.

- **accessKeyId** _string_

  The AWS access key ID that you want to use for the API calls.
  It uses the `AWS_ACCESS_KEY_ID` environment variable by default.

- **secretAccessKey** _string_

  The AWS secret access key that you want to use for the API calls.
  It uses the `AWS_SECRET_ACCESS_KEY` environment variable by default.
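The fallback behavior described above (explicit setting first, environment variable otherwise) can be sketched as follows. This is a minimal illustration; `resolveSetting` is a hypothetical helper, not part of the package's API:

```typescript
// Hypothetical helper illustrating the documented fallback order:
// an explicitly passed setting wins, otherwise the environment variable is used.
function resolveSetting(
  explicit: string | undefined,
  envVarName: string,
): string {
  const value = explicit ?? process.env[envVarName];
  if (value == null) {
    throw new Error(`Setting missing: pass it explicitly or set ${envVarName}`);
  }
  return value;
}

// Example: region falls back to AWS_REGION when not passed explicitly.
process.env.AWS_REGION = 'us-east-1';
const fromEnv = resolveSetting(undefined, 'AWS_REGION'); // env value
const explicit = resolveSetting('eu-west-1', 'AWS_REGION'); // explicit value wins
console.log(fromEnv, explicit);
```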
## Language Models

You can create models that call the Bedrock API using the provider instance.
The first argument is the model id, e.g. `meta.llama3-70b-instruct-v1:0`.

```ts
const model = bedrock('meta.llama3-70b-instruct-v1:0');
```

Amazon Bedrock models also support some model-specific settings that are not part of the [standard call settings](/docs/ai-sdk-core/settings).
You can pass them as an options argument:

```ts
const model = bedrock('anthropic.claude-3-sonnet-20240229-v1:0', {
  additionalModelRequestFields: { top_k: 350 },
});
```

Documentation for additional settings based on the selected model can be found in the [Amazon Bedrock Inference Parameter Documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html).
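Conceptually, these settings are forwarded to the Bedrock Converse request alongside the standard inference configuration. A minimal sketch of that merge (an illustration with hypothetical, trimmed-down types, not the provider's actual implementation):

```typescript
// Hypothetical shapes, trimmed to the fields relevant here (assumption).
type CallSettings = { maxTokens?: number; temperature?: number; topP?: number };
type ModelSettings = { additionalModelRequestFields?: Record<string, unknown> };

// Illustrative merge: standard call settings map to inferenceConfig,
// model-specific fields are passed through to the request body unchanged.
function buildConverseRequest(
  modelId: string,
  call: CallSettings,
  settings: ModelSettings,
) {
  return {
    modelId,
    inferenceConfig: {
      maxTokens: call.maxTokens,
      temperature: call.temperature,
      topP: call.topP,
    },
    additionalModelRequestFields: settings.additionalModelRequestFields,
  };
}

const request = buildConverseRequest(
  'anthropic.claude-3-sonnet-20240229-v1:0',
  { maxTokens: 100, temperature: 0.5 },
  { additionalModelRequestFields: { top_k: 350 } },
);
```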
### Example

You can use Amazon Bedrock language models to generate text with the `generateText` function:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const { text } = await generateText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

Amazon Bedrock language models can also be used in the `streamText` function
(see [AI SDK Core](/docs/ai-sdk-core)).

### Model Capabilities

> Note: This model list is ever-changing and may not be complete. Refer to the [Amazon Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html#conversation-inference-supported-models-features) for up-to-date information.

| Model                                       | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
| ------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| `amazon.titan-tg1-large`                    | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `amazon.titan-text-express-v1`              | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `anthropic.claude-v2:1`                     | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `anthropic.claude-3-sonnet-20240229-v1:0`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `anthropic.claude-3-haiku-20240307-v1:0`    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `anthropic.claude-3-opus-20240229-v1:0`     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `anthropic.claude-3-5-sonnet-20240620-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `cohere.command-r-v1:0`                     | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Cross size={18} /> |
| `cohere.command-r-plus-v1:0`                | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Cross size={18} /> |
| `meta.llama2-13b-chat-v1`                   | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `meta.llama2-70b-chat-v1`                   | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `meta.llama3-8b-instruct-v1:0`              | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `meta.llama3-70b-instruct-v1:0`             | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `mistral.mistral-7b-instruct-v0:2`          | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `mistral.mixtral-8x7b-instruct-v0:1`        | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `mistral.mistral-large-2402-v1:0`           | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
| `mistral.mistral-small-2402-v1:0`           | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
examples/ai-core/.env.example (+11 −5)

```diff
 ANTHROPIC_API_KEY=""
-OPENAI_API_KEY=""
-MISTRAL_API_KEY=""
+AWS_ACCESS_KEY_ID=""
+AWS_SECRET_ACCESS_KEY=""
+AWS_REGION=""
+AZURE_API_KEY=""
+AZURE_RESOURCE_NAME=""
+COHERE_API_KEY=""
+FIREWORKS_API_KEY=""
 GOOGLE_GENERATIVE_AI_API_KEY=""
-GOOGLE_VERTEX_PROJECT=""
 GOOGLE_VERTEX_LOCATION=""
-FIREWORKS_API_KEY=""
+GOOGLE_VERTEX_PROJECT=""
 GROQ_API_KEY=""
-PERPLEXITY_API_KEY=""
+MISTRAL_API_KEY=""
+OPENAI_API_KEY=""
+PERPLEXITY_API_KEY=""
```
examples/ai-core/package.json (+2 −1)

```diff
@@ -10,6 +10,7 @@
     "@ai-sdk/google-vertex": "latest",
     "@ai-sdk/mistral": "latest",
     "@ai-sdk/openai": "latest",
+    "@ai-sdk/amazon-bedrock": "latest",
     "ai": "latest",
     "dotenv": "16.4.5",
     "mathjs": "12.4.2",
@@ -24,4 +25,4 @@
     "tsx": "4.7.1",
     "typescript": "5.1.3"
   }
-}
+}
```
New file (+59 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { CoreMessage, generateText } from 'ai';
import dotenv from 'dotenv';
import * as readline from 'node:readline/promises';
import { weatherTool } from '../tools/weather-tool';

dotenv.config();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  let toolResponseAvailable = false;

  while (true) {
    if (!toolResponseAvailable) {
      const userInput = await terminal.question('You: ');
      messages.push({ role: 'user', content: userInput });
    }

    const { text, toolCalls, toolResults, responseMessages } =
      await generateText({
        model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
        tools: { weatherTool },
        system: `You are a helpful, respectful and honest assistant. If the weather is requested, use the weather tool.`,
        messages,
      });

    toolResponseAvailable = false;

    if (text) {
      process.stdout.write(`\nAssistant: ${text}`);
    }

    for (const { toolName, args } of toolCalls) {
      process.stdout.write(
        `\nTool call: '${toolName}' ${JSON.stringify(args)}`,
      );
    }

    for (const { toolName, result } of toolResults) {
      process.stdout.write(
        `\nTool response: '${toolName}' ${JSON.stringify(result)}`,
      );
    }

    process.stdout.write('\n\n');

    messages.push(...responseMessages);

    toolResponseAvailable = toolCalls.length > 0;
  }
}

main().catch(console.error);
```
New file (+29 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';
import dotenv from 'dotenv';

dotenv.config();

async function main() {
  const result = await generateText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    maxTokens: 512,
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Describe the image in detail.' },
          {
            type: 'image',
            image:
              'https://github.com/vercel/ai/blob/main/examples/ai-core/data/comic-cat.png?raw=true',
          },
        ],
      },
    ],
  });

  console.log(result.text);
}

main().catch(console.error);
```
New file (+26 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';
import dotenv from 'dotenv';
import fs from 'node:fs';

dotenv.config();

async function main() {
  const result = await generateText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    maxTokens: 512,
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Describe the image in detail.' },
          { type: 'image', image: fs.readFileSync('./data/comic-cat.png') },
        ],
      },
    ],
  });

  console.log(result.text);
}

main().catch(console.error);
```
New file (+60 lines):

```ts
import { generateText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import { weatherTool } from '../tools/weather-tool';
import { bedrock } from '@ai-sdk/amazon-bedrock';

dotenv.config();

async function main() {
  const result = await generateText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    maxTokens: 512,
    tools: {
      weather: weatherTool,
      cityAttractions: tool({
        parameters: z.object({ city: z.string() }),
      }),
    },
    prompt:
      'What is the weather in San Francisco and what attractions should I visit?',
  });

  // typed tool calls:
  for (const toolCall of result.toolCalls) {
    switch (toolCall.toolName) {
      case 'cityAttractions': {
        toolCall.args.city; // string
        break;
      }

      case 'weather': {
        toolCall.args.location; // string
        break;
      }
    }
  }

  // typed tool results for tools with execute method:
  for (const toolResult of result.toolResults) {
    switch (toolResult.toolName) {
      // NOT AVAILABLE (NO EXECUTE METHOD)
      // case 'cityAttractions': {
      //   toolResult.args.city; // string
      //   toolResult.result;
      //   break;
      // }

      case 'weather': {
        toolResult.args.location; // string
        toolResult.result.location; // string
        toolResult.result.temperature; // number
        break;
      }
    }
  }

  console.log(JSON.stringify(result, null, 2));
}

main().catch(console.error);
```
New file (+30 lines):

```ts
import { generateText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import { weatherTool } from '../tools/weather-tool';
import { bedrock } from '@ai-sdk/amazon-bedrock';

dotenv.config();

async function main() {
  const result = await generateText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    maxTokens: 512,
    tools: {
      weather: weatherTool,
      cityAttractions: tool({
        parameters: z.object({ city: z.string() }),
      }),
    },
    toolChoice: {
      type: 'tool',
      toolName: 'weather',
    },
    prompt:
      'What is the weather in San Francisco and what attractions should I visit?',
  });

  console.log(JSON.stringify(result, null, 2));
}

main().catch(console.error);
```
New file (+19 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';
import dotenv from 'dotenv';

dotenv.config();

async function main() {
  const result = await generateText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  console.log(result.text);
  console.log();
  console.log('Token usage:', result.usage);
  console.log('Finish reason:', result.finishReason);
}

main().catch(console.error);
```
New file (+39 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { CoreMessage, streamText } from 'ai';
import dotenv from 'dotenv';
import * as readline from 'node:readline/promises';

dotenv.config();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = await streamText({
      model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
      system: `You are a helpful, respectful and honest assistant.`,
      messages,
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```
New file (+74 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import { weatherTool } from '../tools/weather-tool';

dotenv.config();

async function main() {
  const result = await streamText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    tools: {
      weather: weatherTool,
      cityAttractions: {
        parameters: z.object({ city: z.string() }),
      },
    },
    prompt: 'What is the weather in San Francisco?',
  });

  for await (const part of result.fullStream) {
    switch (part.type) {
      case 'text-delta': {
        console.log('Text delta:', part.textDelta);
        break;
      }

      case 'tool-call': {
        switch (part.toolName) {
          case 'cityAttractions': {
            console.log('TOOL CALL cityAttractions');
            console.log(`city: ${part.args.city}`); // string
            break;
          }

          case 'weather': {
            console.log('TOOL CALL weather');
            console.log(`location: ${part.args.location}`); // string
            break;
          }
        }

        break;
      }

      case 'tool-result': {
        switch (part.toolName) {
          // NOT AVAILABLE (NO EXECUTE METHOD)
          // case 'cityAttractions': {
          //   console.log('TOOL RESULT cityAttractions');
          //   console.log(`city: ${part.args.city}`); // string
          //   console.log(`result: ${part.result}`);
          //   break;
          // }

          case 'weather': {
            console.log('TOOL RESULT weather');
            console.log(`location: ${part.args.location}`); // string
            console.log(`temperature: ${part.result.temperature}`); // number
            break;
          }
        }

        break;
      }

      case 'error':
        console.error('Error:', part.error);
        break;
    }
  }
}

main().catch(console.error);
```
New file (+28 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';
import dotenv from 'dotenv';
import fs from 'node:fs';

dotenv.config();

async function main() {
  const result = await streamText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    maxTokens: 512,
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Describe the image in detail.' },
          { type: 'image', image: fs.readFileSync('./data/comic-cat.png') },
        ],
      },
    ],
  });

  for await (const textPart of result.textStream) {
    process.stdout.write(textPart);
  }
}

main().catch(console.error);
```
New file (+22 lines):

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';
import dotenv from 'dotenv';

dotenv.config();

async function main() {
  const result = await streamText({
    model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
    prompt: 'Invent a new holiday and describe its traditions.',
  });

  for await (const textPart of result.textStream) {
    process.stdout.write(textPart);
  }

  console.log();
  console.log('Token usage:', await result.usage);
  console.log('Finish reason:', await result.finishReason);
}

main().catch(console.error);
```

packages/amazon-bedrock/README.md (+36)

# Vercel AI SDK - Amazon Bedrock Provider

The **[Amazon Bedrock provider](https://sdk.vercel.ai/providers/ai-sdk-providers/amazon-bedrock)** for the [Vercel AI SDK](https://sdk.vercel.ai/docs)
contains language model support for the Amazon Bedrock [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html).

## Setup

The Amazon Bedrock provider is available in the `@ai-sdk/amazon-bedrock` module. You can install it with:

```bash
npm i @ai-sdk/amazon-bedrock
```

## Provider Instance

You can import the default provider instance `bedrock` from `@ai-sdk/amazon-bedrock`:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
```

## Example

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const { text } = await generateText({
  model: bedrock('meta.llama3-8b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

## Documentation

Please check out the **[Amazon Bedrock provider documentation](https://sdk.vercel.ai/providers/ai-sdk-providers/amazon-bedrock)** for more information.

packages/amazon-bedrock/package.json (+65)

```json
{
  "name": "@ai-sdk/amazon-bedrock",
  "version": "0.0.0",
  "license": "Apache-2.0",
  "sideEffects": false,
  "main": "./dist/index.js",
  "module": "./dist/index.mjs",
  "types": "./dist/index.d.ts",
  "files": [
    "dist/**/*"
  ],
  "scripts": {
    "build": "tsup",
    "clean": "rm -rf dist",
    "dev": "tsup --watch",
    "lint": "eslint \"./**/*.ts*\"",
    "type-check": "tsc --noEmit",
    "prettier-check": "prettier --check \"./**/*.ts*\"",
    "test": "pnpm test:node && pnpm test:edge",
    "test:edge": "vitest --config vitest.edge.config.js --run",
    "test:node": "vitest --config vitest.node.config.js --run"
  },
  "exports": {
    "./package.json": "./package.json",
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.js"
    }
  },
  "dependencies": {
    "@ai-sdk/provider": "0.0.10",
    "@ai-sdk/provider-utils": "0.0.15",
    "@aws-sdk/client-bedrock-runtime": "3.602.0"
  },
  "devDependencies": {
    "@smithy/types": "^3.1.0",
    "@types/node": "^18",
    "@vercel/ai-tsconfig": "workspace:*",
    "aws-sdk-client-mock": "^4.0.1",
    "tsup": "^8",
    "typescript": "5.1.3",
    "zod": "3.23.8"
  },
  "peerDependencies": {
    "zod": "^3.0.0"
  },
  "engines": {
    "node": ">=18"
  },
  "publishConfig": {
    "access": "public"
  },
  "homepage": "https://sdk.vercel.ai/docs",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/vercel/ai.git"
  },
  "bugs": {
    "url": "https://github.com/vercel/ai/issues"
  },
  "keywords": [
    "ai"
  ]
}
```
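The `exports` map in this manifest means Node resolves `dist/index.mjs` for ESM `import` and `dist/index.js` for CommonJS `require`, while TypeScript reads the `types` condition. A toy resolver illustrating that conditional-exports lookup (a deliberate simplification of Node's actual algorithm):

```typescript
// Simplified conditional-exports resolution (illustration only; Node's real
// algorithm walks conditions in declaration order and supports more keys).
type ExportConditions = { types?: string; import?: string; require?: string };

function resolveExport(
  conditions: ExportConditions,
  mode: 'import' | 'require',
): string | undefined {
  // 'types' is consumed by TypeScript tooling, not by Node's runtime resolver.
  return mode === 'import' ? conditions.import : conditions.require;
}

// The "." entry from the manifest above:
const entry: ExportConditions = {
  types: './dist/index.d.ts',
  import: './dist/index.mjs',
  require: './dist/index.js',
};
```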
New file (+336 lines):

```ts
import { LanguageModelV1Prompt } from '@ai-sdk/provider';
import { mockClient } from 'aws-sdk-client-mock';
import { createAmazonBedrock } from './bedrock-provider';
import {
  BedrockRuntimeClient,
  ConverseCommand,
  ConverseStreamCommand,
  ConverseStreamOutput,
  StopReason,
} from '@aws-sdk/client-bedrock-runtime';
import {
  convertArrayToAsyncIterable,
  convertReadableStreamToArray,
} from '@ai-sdk/provider-utils/test';

const TEST_PROMPT: LanguageModelV1Prompt = [
  { role: 'system', content: 'System Prompt' },
  { role: 'user', content: [{ type: 'text', text: 'Hello' }] },
];

const bedrockMock = mockClient(BedrockRuntimeClient);

const provider = createAmazonBedrock({
  region: 'us-east-1',
  accessKeyId: 'test-access-key',
  secretAccessKey: 'test-secret-key',
});

const model = provider('anthropic.claude-3-haiku-20240307-v1:0');

describe('doGenerate', () => {
  beforeEach(() => {
    bedrockMock.reset();
  });

  it('should extract text response', async () => {
    bedrockMock.on(ConverseCommand).resolves({
      output: {
        message: { role: 'assistant', content: [{ text: 'Hello, World!' }] },
      },
    });

    const { text } = await model.doGenerate({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(text).toStrictEqual('Hello, World!');
  });

  it('should extract usage', async () => {
    bedrockMock.on(ConverseCommand).resolves({
      usage: { inputTokens: 4, outputTokens: 34, totalTokens: 38 },
    });

    const { usage } = await model.doGenerate({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(usage).toStrictEqual({
      promptTokens: 4,
      completionTokens: 34,
    });
  });

  it('should extract finish reason', async () => {
    bedrockMock.on(ConverseCommand).resolves({
      stopReason: 'stop_sequence',
    });

    const response = await model.doGenerate({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(response.finishReason).toStrictEqual('stop');
  });

  it('should support unknown finish reason', async () => {
    bedrockMock.on(ConverseCommand).resolves({
      stopReason: 'eos' as StopReason,
    });

    const response = await model.doGenerate({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(response.finishReason).toStrictEqual('unknown');
  });

  it('should pass the model and the messages', async () => {
    bedrockMock.on(ConverseCommand).resolves({
      output: {
        message: { role: 'assistant', content: [{ text: 'Testing' }] },
      },
    });

    await model.doGenerate({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(
      bedrockMock.commandCalls(ConverseCommand, {
        modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
        messages: [{ role: 'user', content: [{ text: 'Hello' }] }],
      }).length,
    ).toBe(1);
  });

  it('should pass settings', async () => {
    bedrockMock.on(ConverseCommand).resolves({
      output: {
        message: { role: 'assistant', content: [{ text: 'Testing' }] },
      },
    });

    await provider('amazon.titan-tg1-large', {
      additionalModelRequestFields: { top_k: 10 },
    }).doGenerate({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
      maxTokens: 100,
      temperature: 0.5,
      topP: 0.5,
    });

    expect(
      bedrockMock.commandCalls(ConverseCommand, {
        modelId: 'amazon.titan-tg1-large',
        messages: [{ role: 'user', content: [{ text: 'Hello' }] }],
        additionalModelRequestFields: { top_k: 10 },
        system: [{ text: 'System Prompt' }],
        inferenceConfig: {
          maxTokens: 100,
          temperature: 0.5,
          topP: 0.5,
        },
      }).length,
    ).toBe(1);
  });
});

describe('doStream', () => {
  beforeEach(() => {
    bedrockMock.reset();
  });

  it('should stream text deltas', async () => {
    const streamData: ConverseStreamOutput[] = [
      { contentBlockDelta: { contentBlockIndex: 0, delta: { text: 'Hello' } } },
      { contentBlockDelta: { contentBlockIndex: 1, delta: { text: ', ' } } },
      {
        contentBlockDelta: { contentBlockIndex: 2, delta: { text: 'World!' } },
      },
      {
        metadata: {
          usage: { inputTokens: 4, outputTokens: 34, totalTokens: 38 },
          metrics: { latencyMs: 10 },
        },
      },
      {
        messageStop: { stopReason: 'stop_sequence' },
      },
    ];

    bedrockMock.on(ConverseStreamCommand).resolves({
      stream: convertArrayToAsyncIterable(streamData),
    });

    const { stream } = await model.doStream({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(await convertReadableStreamToArray(stream)).toStrictEqual([
      { type: 'text-delta', textDelta: 'Hello' },
      { type: 'text-delta', textDelta: ', ' },
      { type: 'text-delta', textDelta: 'World!' },
      {
        type: 'finish',
        finishReason: 'stop',
        usage: { promptTokens: 4, completionTokens: 34 },
      },
    ]);
  });

  it('should stream tool deltas', async () => {
    const streamData: ConverseStreamOutput[] = [
      {
        contentBlockStart: {
          contentBlockIndex: 0,
          start: { toolUse: { toolUseId: 'tool-use-id', name: 'test-tool' } },
        },
      },
      {
        contentBlockDelta: {
          contentBlockIndex: 1,
          delta: { toolUse: { input: '{"value":' } },
        },
      },
      {
        contentBlockDelta: {
          contentBlockIndex: 2,
          delta: { toolUse: { input: '"Sparkle Day"}' } },
        },
      },
      { contentBlockStop: { contentBlockIndex: 3 } },
      { messageStop: { stopReason: 'tool_use' } },
    ];

    bedrockMock.on(ConverseStreamCommand).resolves({
      stream: convertArrayToAsyncIterable(streamData),
    });

    const { stream } = await model.doStream({
      inputFormat: 'prompt',
      mode: {
        type: 'regular',
        tools: [
          {
            type: 'function',
            name: 'test-tool',
            parameters: {
              type: 'object',
              properties: { value: { type: 'string' } },
              required: ['value'],
              additionalProperties: false,
              $schema: 'http://json-schema.org/draft-07/schema#',
            },
          },
        ],
        toolChoice: { type: 'tool', toolName: 'test-tool' },
      },
      prompt: TEST_PROMPT,
    });

    expect(await convertReadableStreamToArray(stream)).toStrictEqual([
      {
        type: 'tool-call-delta',
        toolCallId: 'tool-use-id',
        toolCallType: 'function',
        toolName: 'test-tool',
        argsTextDelta: '{"value":',
      },
      {
        type: 'tool-call-delta',
        toolCallId: 'tool-use-id',
        toolCallType: 'function',
        toolName: 'test-tool',
        argsTextDelta: '"Sparkle Day"}',
      },
      {
        type: 'tool-call',
        toolCallId: 'tool-use-id',
        toolCallType: 'function',
        toolName: 'test-tool',
        args: '{"value":"Sparkle Day"}',
      },
      {
        type: 'finish',
        finishReason: 'tool-calls',
        usage: { promptTokens: NaN, completionTokens: NaN },
      },
    ]);
  });

  it('should handle error stream parts', async () => {
    bedrockMock.on(ConverseStreamCommand).resolves({
      stream: convertArrayToAsyncIterable([
        {
          internalServerException: {
            message: 'Internal Server Error',
            name: 'InternalServerException',
            $fault: 'server',
            $metadata: {},
          },
        },
      ]),
    });

    const { stream } = await model.doStream({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(await convertReadableStreamToArray(stream)).toStrictEqual([
      {
        type: 'error',
        error: {
          message: 'Internal Server Error',
          name: 'InternalServerException',
          $fault: 'server',
          $metadata: {},
        },
      },
      {
        finishReason: 'error',
        type: 'finish',
        usage: {
          completionTokens: NaN,
          promptTokens: NaN,
        },
      },
    ]);
  });

  it('should pass the messages and the model', async () => {
    bedrockMock.on(ConverseStreamCommand).resolves({
      stream: convertArrayToAsyncIterable([]),
    });

    await model.doStream({
      inputFormat: 'prompt',
      mode: { type: 'regular' },
      prompt: TEST_PROMPT,
    });

    expect(
      bedrockMock.commandCalls(ConverseStreamCommand, {
        modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
        messages: [{ role: 'user', content: [{ text: 'Hello' }] }],
      }).length,
    ).toBe(1);
  });
});
```
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,367 @@
1+
import {
2+
LanguageModelV1,
3+
LanguageModelV1CallWarning,
4+
LanguageModelV1FinishReason,
5+
LanguageModelV1StreamPart,
6+
UnsupportedFunctionalityError,
7+
} from '@ai-sdk/provider';
8+
import { ParseResult } from '@ai-sdk/provider-utils';
9+
import {
10+
BedrockRuntimeClient,
11+
ConverseCommand,
12+
ConverseCommandInput,
13+
ConverseStreamCommand,
14+
ConverseStreamOutput,
15+
Tool,
16+
ToolConfiguration,
17+
} from '@aws-sdk/client-bedrock-runtime';
18+
import {
19+
BedrockChatModelId,
20+
BedrockChatSettings,
21+
} from './bedrock-chat-settings';
22+
import { convertToBedrockChatMessages } from './convert-to-bedrock-chat-messages';
23+
import { mapBedrockFinishReason } from './map-bedrock-finish-reason';
24+
25+
type BedrockChatConfig = {
26+
client: BedrockRuntimeClient;
27+
generateId: () => string;
28+
};
29+
30+
export class BedrockChatLanguageModel implements LanguageModelV1 {
31+
readonly specificationVersion = 'v1';
32+
readonly provider = 'amazon-bedrock';
33+
readonly defaultObjectGenerationMode = 'tool';
34+
35+
readonly modelId: BedrockChatModelId;
36+
readonly settings: BedrockChatSettings;
37+
38+
private readonly config: BedrockChatConfig;
39+
40+
constructor(
41+
modelId: BedrockChatModelId,
42+
settings: BedrockChatSettings,
43+
config: BedrockChatConfig,
44+
) {
45+
this.modelId = modelId;
46+
this.settings = settings;
47+
this.config = config;
48+
}
49+
50+
private async getArgs({
51+
mode,
52+
prompt,
53+
maxTokens,
54+
temperature,
55+
topP,
56+
frequencyPenalty,
57+
presencePenalty,
58+
seed,
59+
}: Parameters<LanguageModelV1['doGenerate']>[0]) {
60+
const type = mode.type;
61+
62+
const warnings: LanguageModelV1CallWarning[] = [];
63+
64+
if (frequencyPenalty != null) {
65+
warnings.push({
66+
type: 'unsupported-setting',
67+
setting: 'frequencyPenalty',
68+
});
69+
}
70+
71+
if (presencePenalty != null) {
72+
warnings.push({
73+
type: 'unsupported-setting',
74+
setting: 'presencePenalty',
75+
});
76+
}
77+
78+
if (seed != null) {
79+
warnings.push({
80+
type: 'unsupported-setting',
81+
setting: 'seed',
82+
});
83+
}
84+
85+
const { system, messages } = await convertToBedrockChatMessages({ prompt });
86+
87+
const baseArgs: ConverseCommandInput = {
88+
modelId: this.modelId,
89+
system: system ? [{ text: system }] : undefined,
90+
additionalModelRequestFields: this.settings.additionalModelRequestFields,
91+
inferenceConfig: {
92+
maxTokens,
93+
temperature,
94+
topP,
95+
},
96+
messages,
97+
};
98+
99+
switch (type) {
100+
case 'regular': {
101+
const toolConfig = prepareToolsAndToolChoice(mode);
102+
103+
return {
104+
...baseArgs,
105+
...(toolConfig.tools?.length ? { toolConfig } : {}),
106+
} satisfies ConverseCommandInput;
107+
}
108+
109+
case 'object-json': {
110+
throw new UnsupportedFunctionalityError({
111+
functionality: 'json-mode object generation',
112+
});
113+
}
114+
115+
case 'object-tool': {
116+
return {
117+
...baseArgs,
118+
toolConfig: {
119+
tools: [
120+
{
121+
toolSpec: {
122+
name: mode.tool.name,
123+
description: mode.tool.description,
124+
inputSchema: { json: mode.tool.parameters as any },
125+
},
126+
},
127+
],
128+
toolChoice: { tool: { name: mode.tool.name } },
129+
},
130+
} satisfies ConverseCommandInput;
131+
}
132+
133+
case 'object-grammar': {
134+
throw new UnsupportedFunctionalityError({
135+
functionality: 'grammar-mode object generation',
136+
});
137+
}
138+
139+
default: {
140+
const _exhaustiveCheck: never = type;
141+
throw new Error(`Unsupported type: ${_exhaustiveCheck}`);
142+
}
143+
}
144+
}
145+
146+
async doGenerate(
147+
options: Parameters<LanguageModelV1['doGenerate']>[0],
148+
): Promise<Awaited<ReturnType<LanguageModelV1['doGenerate']>>> {
149+
const args = await this.getArgs(options);
150+
151+
const response = await this.config.client.send(new ConverseCommand(args));
152+
153+
const { messages: rawPrompt, ...rawSettings } = args;
154+
155+
return {
156+
text:
157+
response.output?.message?.content
158+
?.map(part => part.text ?? '')
159+
.join('') ?? undefined,
160+
toolCalls: response.output?.message?.content
161+
?.filter(part => !!part.toolUse)
162+
?.map(part => ({
163+
toolCallType: 'function',
164+
toolCallId: part.toolUse?.toolUseId ?? this.config.generateId(),
165+
toolName: part.toolUse?.name ?? `tool-${this.config.generateId()}`,
166+
args: JSON.stringify(part.toolUse?.input ?? ''),
167+
})),
168+
finishReason: mapBedrockFinishReason(response.stopReason),
169+
usage: {
170+
promptTokens: response.usage?.inputTokens ?? Number.NaN,
171+
completionTokens: response.usage?.outputTokens ?? Number.NaN,
172+
},
173+
rawCall: { rawPrompt, rawSettings },
174+
warnings: [],
175+
};
176+
}
177+
178+
async doStream(
179+
options: Parameters<LanguageModelV1['doStream']>[0],
180+
): Promise<Awaited<ReturnType<LanguageModelV1['doStream']>>> {
181+
const args = await this.getArgs(options);
182+
183+
const response = await this.config.client.send(
184+
new ConverseStreamCommand({ ...args }),
185+
);
186+
187+
const { messages: rawPrompt, ...rawSettings } = args;
188+
189+
let finishReason: LanguageModelV1FinishReason = 'other';
190+
let usage: { promptTokens: number; completionTokens: number } = {
191+
promptTokens: Number.NaN,
192+
completionTokens: Number.NaN,
193+
};
194+
195+
if (!response.stream) {
196+
throw new Error('No stream found');
197+
}
198+
199+
const stream = new ReadableStream<any>({
200+
async start(controller) {
201+
for await (const chunk of response.stream!) {
202+
controller.enqueue({ success: true, value: chunk });
203+
}
204+
controller.close();
205+
},
206+
});
207+
208+
let toolName = '';
209+
let toolCallId = '';
210+
let toolCallArgs = '';
211+
212+
return {
213+
stream: stream.pipeThrough(
214+
new TransformStream<
215+
ParseResult<ConverseStreamOutput>,
216+
LanguageModelV1StreamPart
217+
>({
218+
transform(chunk, controller) {
219+
function enqueueError(error: Error) {
220+
finishReason = 'error';
221+
controller.enqueue({ type: 'error', error });
222+
}
223+
224+
// handle failed chunk parsing / validation:
225+
if (!chunk.success) {
226+
enqueueError(chunk.error);
227+
return;
228+
}
229+
230+
const value = chunk.value;
231+
232+
// handle errors:
233+
if (value.internalServerException) {
234+
enqueueError(value.internalServerException);
235+
return;
236+
}
237+
if (value.modelStreamErrorException) {
238+
enqueueError(value.modelStreamErrorException);
239+
return;
240+
}
241+
if (value.throttlingException) {
242+
enqueueError(value.throttlingException);
243+
return;
244+
}
245+
if (value.validationException) {
246+
enqueueError(value.validationException);
247+
return;
248+
}
249+
250+
if (value.messageStop) {
251+
finishReason = mapBedrockFinishReason(
252+
value.messageStop.stopReason,
253+
);
254+
}
255+
256+
if (value.metadata) {
257+
usage = {
258+
promptTokens: value.metadata.usage?.inputTokens ?? Number.NaN,
259+
completionTokens:
260+
value.metadata.usage?.outputTokens ?? Number.NaN,
261+
};
262+
}
263+
264+
if (value.contentBlockDelta?.delta?.text) {
265+
controller.enqueue({
266+
type: 'text-delta',
267+
textDelta: value.contentBlockDelta.delta.text,
268+
});
269+
}
270+
271+
if (value.contentBlockStart?.start?.toolUse) {
272+
// store the tool name and id for the next chunk
273+
const toolUse = value.contentBlockStart.start.toolUse;
274+
toolName = toolUse.name ?? '';
275+
toolCallId = toolUse.toolUseId ?? '';
276+
}
277+
278+
if (value.contentBlockDelta?.delta?.toolUse) {
279+
// continue to get the chunks of the tool call args
280+
toolCallArgs += value.contentBlockDelta.delta.toolUse.input ?? '';
281+
282+
controller.enqueue({
283+
type: 'tool-call-delta',
284+
toolCallType: 'function',
285+
toolCallId,
286+
toolName,
287+
argsTextDelta:
288+
value.contentBlockDelta.delta.toolUse.input ?? '',
289+
});
290+
}
291+
292+
// if the content is done and a tool call was made, send it
293+
if (value.contentBlockStop && toolCallArgs.length > 0) {
294+
controller.enqueue({
295+
type: 'tool-call',
296+
toolCallType: 'function',
297+
toolCallId,
298+
toolName,
299+
args: toolCallArgs,
300+
});
301+
}
302+
},
303+
304+
flush(controller) {
305+
controller.enqueue({
306+
type: 'finish',
307+
finishReason,
308+
usage,
309+
});
310+
},
311+
}),
312+
),
313+
rawCall: { rawPrompt, rawSettings },
314+
warnings: [],
315+
};
316+
}
317+
}
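For streamed tool calls, the `TransformStream` above concatenates each `contentBlockDelta.delta.toolUse.input` fragment into `toolCallArgs` and emits the complete argument string once `contentBlockStop` arrives. A minimal standalone sketch of that accumulation, using hypothetical delta fragments shaped like the ones in the tests:

```typescript
// Hypothetical delta fragments, shaped like Bedrock's
// contentBlockDelta.delta.toolUse.input values for a streamed tool call.
const argsTextDeltas = ['{"value":', '"Sparkle Day"}'];

let toolCallArgs = '';
for (const delta of argsTextDeltas) {
  // each delta is a partial slice of the tool call's JSON arguments
  toolCallArgs += delta;
}

// at contentBlockStop, the concatenated fragments form a complete JSON document:
console.log(JSON.parse(toolCallArgs)); // { value: 'Sparkle Day' }
```

This is why the stream emits `tool-call-delta` parts eagerly but only emits the final `tool-call` part (with parseable `args`) after the content block stops.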
318+
319+
function prepareToolsAndToolChoice(
320+
mode: Parameters<LanguageModelV1['doGenerate']>[0]['mode'] & {
321+
type: 'regular';
322+
},
323+
): ToolConfiguration {
324+
// when the tools array is empty, change it to undefined to prevent errors:
325+
const tools = mode.tools?.length ? mode.tools : undefined;
326+
327+
if (tools == null) {
328+
return { tools: undefined, toolChoice: undefined };
329+
}
330+
331+
const mappedTools: Tool[] = tools.map(tool => ({
332+
toolSpec: {
333+
name: tool.name,
334+
description: tool.description,
335+
inputSchema: {
336+
json: tool.parameters as any,
337+
},
338+
},
339+
}));
340+
341+
const toolChoice = mode.toolChoice;
342+
343+
if (toolChoice == null) {
344+
return { tools: mappedTools, toolChoice: undefined };
345+
}
346+
347+
const type = toolChoice.type;
348+
349+
switch (type) {
350+
case 'auto':
351+
return { tools: mappedTools, toolChoice: { auto: {} } };
352+
case 'required':
353+
return { tools: mappedTools, toolChoice: { any: {} } };
354+
case 'none':
355+
// Bedrock does not support 'none' tool choice, so we remove the tools:
356+
return { tools: undefined, toolChoice: undefined };
357+
case 'tool':
358+
return {
359+
tools: mappedTools,
360+
toolChoice: { tool: { name: toolChoice.toolName } },
361+
};
362+
default: {
363+
const _exhaustiveCheck: never = type;
364+
throw new Error(`Unsupported tool choice type: ${_exhaustiveCheck}`);
365+
}
366+
}
367+
}
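The `toolChoice` translation above (`auto` → `{ auto: {} }`, `required` → `{ any: {} }`, `none` → drop the tools entirely, `tool` → `{ tool: { name } }`) can be sketched without the AWS SDK types. The `SimpleToolChoice` type and `mapToolChoice` helper below are hypothetical stand-ins for illustration, not part of the provider:

```typescript
// Hypothetical stand-in for the LanguageModelV1 tool-choice union.
type SimpleToolChoice =
  | { type: 'auto' }
  | { type: 'required' }
  | { type: 'none' }
  | { type: 'tool'; toolName: string };

function mapToolChoice(choice: SimpleToolChoice): object | undefined {
  switch (choice.type) {
    case 'auto':
      return { auto: {} };
    case 'required':
      return { any: {} }; // Bedrock spells "required" as "any"
    case 'none':
      return undefined; // Bedrock has no 'none'; the provider drops the tools instead
    case 'tool':
      return { tool: { name: choice.toolName } };
  }
}

console.log(mapToolChoice({ type: 'required' })); // { any: {} }
console.log(mapToolChoice({ type: 'tool', toolName: 'test-tool' }));
// { tool: { name: 'test-tool' } }
```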
@@ -0,0 +1,22 @@
1+
import { ContentBlock } from '@aws-sdk/client-bedrock-runtime';
2+
3+
export type BedrockMessagesPrompt = {
4+
system?: string;
5+
messages: BedrockMessages;
6+
};
7+
8+
export type BedrockMessages = Array<ChatCompletionMessageParam>;
9+
10+
export type ChatCompletionMessageParam =
11+
| ChatCompletionUserMessageParam
12+
| ChatCompletionAssistantMessageParam;
13+
14+
export interface ChatCompletionUserMessageParam {
15+
role: 'user';
16+
content: Array<ContentBlock>;
17+
}
18+
19+
export interface ChatCompletionAssistantMessageParam {
20+
role: 'assistant';
21+
content: Array<ContentBlock>;
22+
}
@@ -0,0 +1,29 @@
1+
// https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
2+
export type BedrockChatModelId =
3+
| 'amazon.titan-tg1-large'
4+
| 'amazon.titan-text-express-v1'
5+
| 'anthropic.claude-v2:1'
6+
| 'anthropic.claude-3-sonnet-20240229-v1:0'
7+
| 'anthropic.claude-3-5-sonnet-20240620-v1:0'
8+
| 'anthropic.claude-3-haiku-20240307-v1:0'
9+
| 'anthropic.claude-3-opus-20240229-v1:0'
10+
| 'cohere.command-r-v1:0'
11+
| 'cohere.command-r-plus-v1:0'
12+
| 'meta.llama2-13b-chat-v1'
13+
| 'meta.llama2-70b-chat-v1'
14+
| 'meta.llama3-8b-instruct-v1:0'
15+
| 'meta.llama3-70b-instruct-v1:0'
16+
| 'mistral.mistral-7b-instruct-v0:2'
17+
| 'mistral.mixtral-8x7b-instruct-v0:1'
18+
| 'mistral.mistral-large-2402-v1:0'
19+
| 'mistral.mistral-small-2402-v1:0'
20+
| (string & {});
21+
22+
export interface BedrockChatSettings {
23+
/**
24+
Additional inference parameters that the model supports,
25+
beyond the base set of inference parameters that Converse
26+
supports in the inferenceConfig field
27+
*/
28+
additionalModelRequestFields?: Record<string, any>;
29+
}
@@ -0,0 +1,93 @@
1+
import { generateId, loadSetting } from '@ai-sdk/provider-utils';
2+
import { BedrockRuntimeClient } from '@aws-sdk/client-bedrock-runtime';
3+
import { BedrockChatLanguageModel } from './bedrock-chat-language-model';
4+
import {
5+
BedrockChatModelId,
6+
BedrockChatSettings,
7+
} from './bedrock-chat-settings';
8+
9+
export interface AmazonBedrockProviderSettings {
10+
region?: string;
11+
accessKeyId?: string;
12+
secretAccessKey?: string;
13+
14+
// for testing
15+
generateId?: () => string;
16+
}
17+
18+
export interface AmazonBedrockProvider {
19+
(
20+
modelId: BedrockChatModelId,
21+
settings?: BedrockChatSettings,
22+
): BedrockChatLanguageModel;
23+
24+
languageModel(
25+
modelId: BedrockChatModelId,
26+
settings?: BedrockChatSettings,
27+
): BedrockChatLanguageModel;
28+
}
29+
30+
/**
31+
Create an Amazon Bedrock provider instance.
32+
*/
33+
export function createAmazonBedrock(
34+
options: AmazonBedrockProviderSettings = {},
35+
): AmazonBedrockProvider {
36+
const createBedrockRuntimeClient = () => {
37+
const config = {
38+
region: loadSetting({
39+
settingValue: options.region,
40+
settingName: 'region',
41+
environmentVariableName: 'AWS_REGION',
42+
description: 'AWS region',
43+
}),
44+
credentials: {
45+
accessKeyId: loadSetting({
46+
settingValue: options.accessKeyId,
47+
settingName: 'accessKeyId',
48+
environmentVariableName: 'AWS_ACCESS_KEY_ID',
49+
description: 'AWS access key ID',
50+
}),
51+
secretAccessKey: loadSetting({
52+
settingValue: options.secretAccessKey,
53+
settingName: 'secretAccessKey',
54+
environmentVariableName: 'AWS_SECRET_ACCESS_KEY',
55+
description: 'AWS secret access key',
56+
}),
57+
},
58+
};
59+
60+
return new BedrockRuntimeClient(config);
61+
};
62+
63+
const createChatModel = (
64+
modelId: BedrockChatModelId,
65+
settings: BedrockChatSettings = {},
66+
) =>
67+
new BedrockChatLanguageModel(modelId, settings, {
68+
client: createBedrockRuntimeClient(),
69+
generateId,
70+
});
71+
72+
const provider = function (
73+
modelId: BedrockChatModelId,
74+
settings?: BedrockChatSettings,
75+
) {
76+
if (new.target) {
77+
throw new Error(
78+
'The Amazon Bedrock model function cannot be called with the new keyword.',
79+
);
80+
}
81+
82+
return createChatModel(modelId, settings);
83+
};
84+
85+
provider.languageModel = createChatModel;
86+
87+
return provider as AmazonBedrockProvider;
88+
}
89+
90+
/**
91+
Default Bedrock provider instance.
92+
*/
93+
export const bedrock = createAmazonBedrock();
@@ -0,0 +1,107 @@
1+
import { convertToBedrockChatMessages } from './convert-to-bedrock-chat-messages';
2+
3+
describe('user messages', () => {
4+
it('should convert messages with image and text parts to multiple parts', async () => {
5+
const { messages } = await convertToBedrockChatMessages({
6+
prompt: [
7+
{
8+
role: 'user',
9+
content: [
10+
{ type: 'text', text: 'Hello' },
11+
{
12+
type: 'image',
13+
image: new Uint8Array([0, 1, 2, 3]),
14+
mimeType: 'image/png',
15+
},
16+
],
17+
},
18+
],
19+
});
20+
21+
expect(messages).toEqual([
22+
{
23+
role: 'user',
24+
content: [
25+
{ text: 'Hello' },
26+
{
27+
image: {
28+
format: 'png',
29+
source: { bytes: new Uint8Array([0, 1, 2, 3]) },
30+
},
31+
},
32+
],
33+
},
34+
]);
35+
});
36+
37+
it('should download images for user image parts with URLs', async () => {
38+
const result = await convertToBedrockChatMessages({
39+
prompt: [
40+
{
41+
role: 'user',
42+
content: [
43+
{
44+
type: 'image',
45+
image: new URL('https://example.com/image.png'),
46+
},
47+
],
48+
},
49+
],
50+
downloadImplementation: async ({ url }) => {
51+
expect(url).toEqual(new URL('https://example.com/image.png'));
52+
53+
return {
54+
data: new Uint8Array([0, 1, 2, 3]),
55+
mimeType: 'image/png',
56+
};
57+
},
58+
});
59+
60+
expect(result).toEqual({
61+
messages: [
62+
{
63+
role: 'user',
64+
content: [
65+
{
66+
image: {
67+
format: 'png',
68+
source: { bytes: new Uint8Array([0, 1, 2, 3]) },
69+
},
70+
},
71+
],
72+
},
73+
],
74+
system: undefined,
75+
});
76+
});
77+
78+
it('should extract the system message', async () => {
79+
const { system } = await convertToBedrockChatMessages({
80+
prompt: [
81+
{
82+
role: 'system',
83+
content: 'Hello',
84+
},
85+
],
86+
});
87+
88+
expect(system).toEqual('Hello');
89+
});
90+
91+
it('should throw an error if multiple system messages are provided', async () => {
92+
expect(() =>
93+
convertToBedrockChatMessages({
94+
prompt: [
95+
{
96+
role: 'system',
97+
content: 'Hello',
98+
},
99+
{
100+
role: 'system',
101+
content: 'World',
102+
},
103+
],
104+
}),
105+
).rejects.toThrowError();
106+
});
107+
});
@@ -0,0 +1,143 @@
1+
import {
2+
LanguageModelV1Prompt,
3+
UnsupportedFunctionalityError,
4+
} from '@ai-sdk/provider';
5+
import { BedrockMessages, BedrockMessagesPrompt } from './bedrock-chat-prompt';
6+
import { ContentBlock, ImageFormat } from '@aws-sdk/client-bedrock-runtime';
7+
import { download } from '@ai-sdk/provider-utils';
8+
9+
type ConvertToBedrockChatMessagesArgs = {
10+
prompt: LanguageModelV1Prompt;
11+
downloadImplementation?: typeof download;
12+
};
13+
14+
export async function convertToBedrockChatMessages({
15+
prompt,
16+
downloadImplementation = download,
17+
}: ConvertToBedrockChatMessagesArgs): Promise<BedrockMessagesPrompt> {
18+
let system: string | undefined = undefined;
19+
const messages: BedrockMessages = [];
20+
21+
for (const { role, content } of prompt) {
22+
switch (role) {
23+
case 'system': {
24+
if (system != null) {
25+
throw new UnsupportedFunctionalityError({
26+
functionality: 'Multiple system messages',
27+
});
28+
}
29+
30+
system = content;
31+
break;
32+
}
33+
34+
case 'user': {
35+
const bedrockMessageContent: ContentBlock[] = [];
36+
37+
for (const part of content) {
38+
switch (part.type) {
39+
case 'text': {
40+
bedrockMessageContent.push({ text: part.text });
41+
break;
42+
}
43+
44+
case 'image': {
45+
let data: Uint8Array;
46+
let mimeType: string | undefined;
47+
48+
if (part.image instanceof URL) {
49+
const downloadResult = await downloadImplementation({
50+
url: part.image,
51+
});
52+
53+
data = downloadResult.data;
54+
mimeType = downloadResult.mimeType;
55+
} else {
56+
data = part.image;
57+
mimeType = part.mimeType;
58+
}
59+
60+
bedrockMessageContent.push({
61+
image: {
62+
format: (mimeType ?? part.mimeType)?.split(
63+
'/',
64+
)?.[1] as ImageFormat,
65+
source: {
66+
bytes: data ?? (part.image as Uint8Array),
67+
},
68+
},
69+
});
70+
break;
71+
}
72+
}
73+
}
74+
75+
messages.push({
76+
role: 'user',
77+
content: bedrockMessageContent,
78+
});
79+
80+
break;
81+
}
82+
83+
case 'assistant': {
84+
const toolUse: Array<{
85+
toolUseId: string;
86+
name: string;
87+
input: any;
88+
}> = [];
89+
90+
let text = '';
91+
for (const part of content) {
92+
switch (part.type) {
93+
case 'text': {
94+
text += part.text;
95+
break;
96+
}
97+
case 'tool-call': {
98+
toolUse.push({
99+
toolUseId: part.toolCallId,
100+
name: part.toolName,
101+
input: part.args,
102+
});
103+
break;
104+
}
105+
default: {
106+
const _exhaustiveCheck: never = part;
107+
throw new Error(`Unsupported part: ${_exhaustiveCheck}`);
108+
}
109+
}
110+
}
111+
112+
messages.push({
113+
role: 'assistant',
114+
content: [
115+
...(text ? [{ text }] : []),
116+
...toolUse.map(toolUse => ({ toolUse })),
117+
],
118+
});
119+
120+
break;
121+
}
122+
123+
case 'tool':
124+
messages.push({
125+
role: 'user',
126+
content: content.map(part => ({
127+
toolResult: {
128+
toolUseId: part.toolCallId,
129+
status: part.isError ? 'error' : 'success',
130+
content: [{ text: JSON.stringify(part.result) }],
131+
},
132+
})),
133+
});
134+
break;
135+
136+
default: {
137+
throw new Error(`Unsupported role: ${role}`);
138+
}
139+
}
140+
}
141+
142+
return { system, messages };
143+
}

packages/amazon-bedrock/src/index.ts

+1
@@ -0,0 +1 @@
1+
export * from './bedrock-provider';
@@ -0,0 +1,20 @@
1+
import { LanguageModelV1FinishReason } from '@ai-sdk/provider';
2+
import { StopReason } from '@aws-sdk/client-bedrock-runtime';
3+
4+
export function mapBedrockFinishReason(
5+
finishReason?: StopReason,
6+
): LanguageModelV1FinishReason {
7+
switch (finishReason) {
8+
case 'stop_sequence':
9+
case 'end_turn':
10+
return 'stop';
11+
case 'max_tokens':
12+
return 'length';
13+
case 'content_filtered':
14+
return 'content-filter';
15+
case 'tool_use':
16+
return 'tool-calls';
17+
default:
18+
return 'unknown';
19+
}
20+
}
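Since the mapping is a plain switch over Bedrock's `StopReason` strings, it can be exercised standalone. Below is an inline copy with the parameter widened to `string`, so the snippet runs without `@aws-sdk/client-bedrock-runtime`:

```typescript
type FinishReason = 'stop' | 'length' | 'content-filter' | 'tool-calls' | 'unknown';

// Inline copy of mapBedrockFinishReason above, with StopReason widened to
// string so the snippet has no AWS SDK dependency.
function mapBedrockFinishReason(finishReason?: string): FinishReason {
  switch (finishReason) {
    case 'stop_sequence':
    case 'end_turn':
      return 'stop';
    case 'max_tokens':
      return 'length';
    case 'content_filtered':
      return 'content-filter';
    case 'tool_use':
      return 'tool-calls';
    default:
      return 'unknown';
  }
}

console.log(mapBedrockFinishReason('tool_use')); // tool-calls
console.log(mapBedrockFinishReason(undefined)); // unknown
```

The `undefined` fallthrough to `'unknown'` matters because `doGenerate` passes `response.stopReason`, which the SDK types as optional.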

packages/amazon-bedrock/tsconfig.json

+9
@@ -0,0 +1,9 @@
1+
{
2+
"extends": "./node_modules/@vercel/ai-tsconfig/react-library.json",
3+
"compilerOptions": {
4+
"target": "ES2018",
5+
"stripInternal": true
6+
},
7+
"include": ["."],
8+
"exclude": ["dist", "build", "node_modules"]
9+
}
+10
@@ -0,0 +1,10 @@
1+
import { defineConfig } from 'tsup';
2+
3+
export default defineConfig([
4+
{
5+
entry: ['src/index.ts'],
6+
format: ['cjs', 'esm'],
7+
dts: true,
8+
sourcemap: true,
9+
},
10+
]);

packages/amazon-bedrock/turbo.json

+8
@@ -0,0 +1,8 @@
1+
{
2+
"extends": ["//"],
3+
"pipeline": {
4+
"build": {
5+
"outputs": ["**/dist/**"]
6+
}
7+
}
8+
}
@@ -0,0 +1,10 @@
1+
import { defineConfig } from 'vite';
2+
3+
// https://vitejs.dev/config/
4+
export default defineConfig({
5+
test: {
6+
environment: 'edge-runtime',
7+
globals: true,
8+
include: ['**/*.test.ts', '**/*.test.tsx'],
9+
},
10+
});
@@ -0,0 +1,10 @@
1+
import { defineConfig } from 'vite';
2+
3+
// https://vitejs.dev/config/
4+
export default defineConfig({
5+
test: {
6+
environment: 'node',
7+
globals: true,
8+
include: ['**/*.test.ts', '**/*.test.tsx'],
9+
},
10+
});
@@ -0,0 +1,9 @@
1+
export function convertArrayToAsyncIterable<T>(values: T[]): AsyncIterable<T> {
2+
return {
3+
async *[Symbol.asyncIterator]() {
4+
for (const value of values) {
5+
yield value;
6+
}
7+
},
8+
};
9+
}
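This helper is the async-iterable counterpart of `convertArrayToReadableStream`: the Bedrock tests above pass its result as `stream:` on the mocked `ConverseStreamCommand` response so that `doStream` can consume it with `for await`. A minimal usage sketch (the helper is copied inline, and `collect` is a hypothetical drain function, so the snippet is self-contained):

```typescript
// Inline copy of the helper above so the snippet is self-contained.
function convertArrayToAsyncIterable<T>(values: T[]): AsyncIterable<T> {
  return {
    async *[Symbol.asyncIterator]() {
      for (const value of values) {
        yield value;
      }
    },
  };
}

// Hypothetical drain function: read an async iterable back into an array.
async function collect<T>(iterable: AsyncIterable<T>): Promise<T[]> {
  const result: T[] = [];
  for await (const value of iterable) {
    result.push(value);
  }
  return result;
}

collect(convertArrayToAsyncIterable([1, 2, 3])).then(values => {
  console.log(values); // [ 1, 2, 3 ]
});
```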

packages/provider-utils/src/test/index.ts

+1
@@ -1,4 +1,5 @@
11
export * from './convert-array-to-readable-stream';
2+
export * from './convert-array-to-async-iterable';
23
export * from './convert-async-iterable-to-array';
34
export * from './convert-readable-stream-to-array';
45
export * from './json-test-server';

pnpm-lock.yaml

+1,059-62
