Comparing changes

base repository: vercel/ai
base: ai@3.2.28
head repository: vercel/ai
compare: ai@3.2.29
  • 5 commits
  • 26 files changed
  • 4 contributors

Commits on Jul 18, 2024

  1. fix (ai/core): race condition in mergeStreams (#2325)
     lgrammel authored Jul 18, 2024 · e710b38 (see the stream-merging sketch after this list)
  2. feat (ai/core): introduce stream data support in toAIStreamResponse (#…
     lgrammel authored Jul 18, 2024 · 6078a69
  3. docs: add core message reference (#2330)
     nicoalbanese authored Jul 18, 2024 · cb1de2d
  4. feat (provider/google-vertex): change vertexai library into peer dependency (#2331)
     lgrammel authored Jul 18, 2024 · 0eabc79
  5. Version Packages (#2328)
     Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
     github-actions[bot] authored Jul 18, 2024 · c53f301
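Commit 1 refers to `mergeStreams`, the utility in `packages/core/core/util/merge-streams.ts` (touched in the file list below) that combines two `ReadableStream`s into one. The following is a generic, hypothetical sketch of stream merging for orientation only; it is not the library's implementation, and the specific race fixed by #2325 is not visible in this summary:

```ts
// Hypothetical sketch of merging two ReadableStreams: chunks from both
// inputs are forwarded into one output, which closes only after both end.
// This is NOT the ai package's mergeStreams implementation.
function mergeStreamsSketch<T>(
  a: ReadableStream<T>,
  b: ReadableStream<T>,
): ReadableStream<T> {
  return new ReadableStream<T>({
    start(controller) {
      let openInputs = 2;
      const drain = async (stream: ReadableStream<T>) => {
        const reader = stream.getReader();
        try {
          while (true) {
            const { value, done } = await reader.read();
            if (done) break;
            controller.enqueue(value);
          }
        } finally {
          reader.releaseLock();
        }
        // Close the merged output once both inputs are exhausted.
        if (--openInputs === 0) controller.close();
      };
      drain(a).catch(err => controller.error(err));
      drain(b).catch(err => controller.error(err));
    },
  });
}
```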
Showing with 559 additions and 175 deletions.
  1. +9 −14 content/docs/02-getting-started/02-nextjs-app-router.mdx
  2. +9 −14 content/docs/02-getting-started/03-nextjs-pages-router.mdx
  3. +8 −13 content/docs/02-getting-started/04-svelte.mdx
  4. +10 −15 content/docs/02-getting-started/05-nuxt.mdx
  5. +2 −2 content/docs/05-ai-sdk-ui/05-completion.mdx
  6. +20 −15 content/docs/05-ai-sdk-ui/20-streaming-data.mdx
  7. +2 −2 content/docs/06-advanced/06-rate-limiting.mdx
  8. +39 −1 content/docs/07-reference/ai-sdk-core/02-stream-text.mdx
  9. +151 −0 content/docs/07-reference/ai-sdk-core/30-core-message.mdx
  10. +56 −0 content/docs/07-reference/ai-sdk-ui/31-convert-to-core-message.mdx
  11. +7 −12 examples/next-openai/app/api/completion/route.ts
  12. +10 −16 examples/next-openai/app/api/use-chat-streamdata/route.ts
  13. +3 −3 examples/solidstart-openai/src/routes/api/use-chat-vision/index.ts
  14. +7 −12 examples/sveltekit-openai/src/routes/api/completion/+server.ts
  15. +7 −0 packages/core/CHANGELOG.md
  16. +96 −28 packages/core/core/generate-text/stream-text.test.ts
  17. +39 −6 packages/core/core/generate-text/stream-text.ts
  18. +40 −0 packages/core/core/util/merge-streams.test.ts
  19. +4 −4 packages/core/core/util/merge-streams.ts
  20. +1 −1 packages/core/package.json
  21. +8 −0 packages/core/tests/e2e/next-server/CHANGELOG.md
  22. +6 −0 packages/google-vertex/CHANGELOG.md
  23. +5 −2 packages/google-vertex/package.json
  24. +9 −0 packages/provider-utils/src/test/convert-response-stream-to-array.ts
  25. +1 −0 packages/provider-utils/src/test/index.ts
  26. +10 −15 pnpm-lock.yaml
23 changes: 9 additions & 14 deletions content/docs/02-getting-started/02-nextjs-app-router.mdx
@@ -179,42 +179,37 @@ Depending on your use case, you may want to stream additional data alongside the

Make the following changes to your Route Handler (`app/api/chat/route.ts`):

-```ts filename="app/api/chat/route.ts" highlight="2,15-25"
+```ts filename="app/api/chat/route.ts" highlight="2,10-11,16-18,21"
 import { openai } from '@ai-sdk/openai';
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+import { streamText, StreamData } from 'ai';

 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30;

 export async function POST(req: Request) {
   const { messages } = await req.json();

-  const result = await streamText({
-    model: openai('gpt-4-turbo'),
-    messages,
-  });
-
   const data = new StreamData();

   data.append({ test: 'value' });

-  const stream = result.toAIStream({
-    onFinal(_) {
+  const result = await streamText({
+    model: openai('gpt-4-turbo'),
+    messages,
+    onFinish() {
       data.close();
     },
   });

-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }
 ```

In this code, you:

1. Create a new instance of `StreamData`.
2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`.
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.

### Update your frontend
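The frontend half of this change is not shown in the hunk above. For orientation, here is a minimal sketch of a client page that reads the streamed values, assuming the `useChat` hook from `ai/react` (the `data` array is the documented return value; the component markup is illustrative):

```tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  // `data` accumulates the values appended via StreamData on the server.
  const { messages, data, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {data && <pre>{JSON.stringify(data, null, 2)}</pre>}
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```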

23 changes: 9 additions & 14 deletions content/docs/02-getting-started/03-nextjs-pages-router.mdx
@@ -178,42 +178,37 @@ Depending on your use case, you may want to stream additional data alongside the

Make the following changes to your Route Handler (`app/api/chat/route.ts`)

-```ts filename="app/api/chat/route.ts" highlight="2,15-25"
+```ts filename="app/api/chat/route.ts" highlight="2,10-11,16-18,21"
 import { openai } from '@ai-sdk/openai';
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+import { streamText, StreamData } from 'ai';

 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30;

 export async function POST(req: Request) {
   const { messages } = await req.json();

-  const result = await streamText({
-    model: openai('gpt-4-turbo'),
-    messages,
-  });
-
   const data = new StreamData();

   data.append({ test: 'value' });

-  const stream = result.toAIStream({
-    onFinal(_) {
+  const result = await streamText({
+    model: openai('gpt-4-turbo'),
+    messages,
+    onFinish() {
       data.close();
     },
   });

-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }
 ```

In this code, you:

1. Create a new instance of `StreamData`.
2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`.
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.

### Update your frontend

21 changes: 8 additions & 13 deletions content/docs/02-getting-started/04-svelte.mdx
@@ -162,7 +162,7 @@ Depending on your use case, you may want to stream additional data alongside the

Make the following changes to your POST endpoint (`src/routes/api/chat/+server.ts`)

-```ts filename="src/routes/api/chat/+server.ts" highlight="2,19-29"
+```ts filename="src/routes/api/chat/+server.ts" highlight="2,14-15,19-21,25"
 import { createOpenAI } from '@ai-sdk/openai';
 import { StreamData, StreamingTextResponse, streamText } from 'ai';
 import type { RequestHandler } from './$types';
@@ -176,32 +176,27 @@ const openai = createOpenAI({
 export const POST = (async ({ request }) => {
   const { messages } = await request.json();

-  const result = await streamText({
-    model: openai('gpt-3.5-turbo'),
-    messages,
-  });
-
   const data = new StreamData();

   data.append({ test: 'value' });

-  const stream = result.toAIStream({
-    onFinal(_) {
+  const result = await streamText({
+    model: openai('gpt-3.5-turbo'),
+    onFinish() {
       data.close();
     },
+    messages,
   });

-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }) satisfies RequestHandler;
 ```

In this code, you:

1. Create a new instance of `StreamData`.
2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.

### Update your frontend

25 changes: 10 additions & 15 deletions content/docs/02-getting-started/05-nuxt.mdx
@@ -109,7 +109,7 @@ export default defineLazyEventHandler(async () => {
       messages,
     });

-    return new StreamingTextResponse(result.toAIStream());
+    return result.toAIStreamResponse();
   });
 });
 ```
@@ -176,8 +176,8 @@ Depending on your use case, you may want to stream additional data alongside the

Make the following changes to your API route (`pages/api/chat.ts`)

-```ts filename="server/api/chat.ts" highlight="1,19-29"
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+```ts filename="server/api/chat.ts"
+import { streamText, StreamData } from 'ai';
 import { createOpenAI } from '@ai-sdk/openai';

export default defineLazyEventHandler(async () => {
@@ -190,22 +190,18 @@ export default defineLazyEventHandler(async () => {
   return defineEventHandler(async (event: any) => {
     const { messages } = await readBody(event);

-    const result = await streamText({
-      model: openai('gpt-4-turbo'),
-      messages,
-    });
-
     const data = new StreamData();

     data.append({ test: 'value' });

-    const stream = result.toAIStream({
-      onFinal(_) {
+    const result = await streamText({
+      model: openai('gpt-4-turbo'),
+      onFinish() {
         data.close();
       },
+      messages,
     });

-    return new StreamingTextResponse(stream, {}, data);
+    return result.toAIStreamResponse({ data });
   });
 });
 ```
@@ -214,9 +210,8 @@ In this code, you:

1. Create a new instance of `StreamData`.
2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.

### Update your frontend

4 changes: 2 additions & 2 deletions content/docs/05-ai-sdk-ui/05-completion.mdx
@@ -37,7 +37,7 @@ export default function Page() {
```

```ts filename='app/api/completion/route.ts'
-import { StreamingTextResponse, streamText } from 'ai';
+import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Allow streaming responses up to 30 seconds
@@ -51,7 +51,7 @@ export async function POST(req: Request) {
prompt,
});

-return new StreamingTextResponse(result.toAIStream());
+return result.toAIStreamResponse();
}
```
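On the client, this route is typically consumed with the `useCompletion` hook. A minimal sketch, assuming the hook from `ai/react` and its default `/api/completion` endpoint:

```tsx
'use client';

import { useCompletion } from 'ai/react';

export default function Page() {
  // By default, the hook POSTs the prompt to /api/completion.
  const { completion, input, handleInputChange, handleSubmit } = useCompletion();

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <div>{completion}</div>
    </form>
  );
}
```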

35 changes: 20 additions & 15 deletions content/docs/05-ai-sdk-ui/20-streaming-data.mdx
@@ -5,15 +5,24 @@ description: Welcome to the Vercel AI SDK documentation!

# Streaming Data

-Depending on your use case, you may want to stream additional data alongside the model's response. This can be achieved with [`StreamData`](/docs/reference/stream-helpers/stream-data).
+Depending on your use case, you may want to stream additional data alongside the model's response.
+This can be achieved with [`StreamData`](/docs/reference/stream-helpers/stream-data).

## What is StreamData

-The `StreamData` class allows you to stream arbitrary data to the client alongside your LLM response. This can be particularly useful in applications that need to augment AI responses with metadata, auxiliary information, or custom data structures that are relevant to the ongoing interaction.
+The `StreamData` class allows you to stream arbitrary data to the client alongside your LLM response.
+This can be particularly useful in applications that need to augment AI responses with metadata, auxiliary information,
+or custom data structures that are relevant to the ongoing interaction.

## How To Use StreamData

-To use `StreamData`, create a `StreamData` value on the server, append some data and then return it alongside the model response with [`StreamingTextResponse`](/docs/reference/stream-helpers/streaming-text-response). On the client, the [`useChat`](/docs/reference/ai-sdk-ui/use-chat) hook returns `data`, which will contain the additional data.
+To use `StreamData`, create a `StreamData` value on the server,
+append some data, and then include it in `toAIStreamResponse`.
+
+You need to call `close()` on the `StreamData` object to ensure the data is sent to the client.
+This can best be done in the `onFinish` callback of `streamText`.
+
+On the client, the [`useChat`](/docs/reference/ai-sdk-ui/use-chat) hook returns `data`, which will contain the additional data.

### On the server

@@ -22,9 +31,9 @@ To use `StreamData`, create a `StreamData` value on the server, append some data
any framework.
</Note>

-```tsx highlight="17-31"
+```tsx
 import { openai } from '@ai-sdk/openai';
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+import { streamText, StreamData } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
@@ -33,27 +42,23 @@ export async function POST(req: Request) {
// Extract the `messages` from the body of the request
const { messages } = await req.json();

-  // Call the language model
-  const result = await streamText({
-    model: openai('gpt-4-turbo'),
-    messages,
-  });
-
   // Create a new StreamData
   const data = new StreamData();

   // Append additional data
   data.append({ test: 'value' });

-  // Convert the response into a friendly text-stream
-  const stream = result.toAIStream({
-    onFinal(_) {
+  // Call the language model
+  const result = await streamText({
+    model: openai('gpt-4-turbo'),
+    onFinish() {
       data.close();
     },
+    messages,
   });

-  // Respond with the stream and additional StreamData
-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
}
```
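Because `onFinish` fires after the model response completes, it is also a natural place to append final metadata before closing the stream data. A sketch under stated assumptions: the `onFinish` event's `usage` field (with `totalTokens`) follows the 3.2 `streamText` API, but treat the exact event shape as an assumption:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, StreamData } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const data = new StreamData();

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
    onFinish({ usage }) {
      // Assumed event shape: `usage` carries token counts such as
      // `totalTokens`. Append final metadata, then close the stream data
      // so the client receives everything.
      data.append({ totalTokens: usage.totalTokens });
      data.close();
    },
  });

  return result.toAIStreamResponse({ data });
}
```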

4 changes: 2 additions & 2 deletions content/docs/06-advanced/06-rate-limiting.mdx
@@ -19,7 +19,7 @@ and [Upstash Ratelimit](https://github.com/upstash/ratelimit).
```tsx filename='app/api/generate/route.ts'
import kv from '@vercel/kv';
import { openai } from '@ai-sdk/openai';
-import { OpenAIStream, StreamingTextResponse } from 'ai';
+import { StreamingTextResponse } from 'ai';
import { Ratelimit } from '@upstash/ratelimit';
import { NextRequest } from 'next/server';

@@ -49,7 +49,7 @@ export async function POST(req: NextRequest) {
messages,
});

-return new StreamingTextResponse(result.toAIStream());
+return result.toAIStreamResponse();
}
```
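The elided middle of this example is the rate-limit gate itself. The sketch below fills that gap under stated assumptions: `Ratelimit.fixedWindow` and `ratelimit.limit()` are Upstash Ratelimit APIs, while the window size, the IP key, and `req.ip` (populated on Vercel deployments) are illustrative choices, not part of this diff:

```ts
import kv from '@vercel/kv';
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { Ratelimit } from '@upstash/ratelimit';
import { NextRequest } from 'next/server';

// Illustrative policy: 5 requests per 30-second fixed window, keyed by IP.
const ratelimit = new Ratelimit({
  redis: kv,
  limiter: Ratelimit.fixedWindow(5, '30s'),
});

export async function POST(req: NextRequest) {
  const ip = req.ip ?? 'anonymous';
  const { success } = await ratelimit.limit(ip);

  if (!success) {
    // Reject before invoking the model.
    return new Response('Rate limited!', { status: 429 });
  }

  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });

  return result.toAIStreamResponse();
}
```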

40 changes: 39 additions & 1 deletion content/docs/07-reference/ai-sdk-core/02-stream-text.mdx
@@ -761,9 +761,47 @@ for await (const textPart of textStream) {
},
{
name: 'toAIStreamResponse',
-type: '(init?: ResponseInit) => Response',
+type: '(options?: ToAIStreamOptions) => Response',
description:
'Converts the result to a streamed response object with a stream data part stream. It can be used with the `useChat` and `useCompletion` hooks.',
+properties: [
+  {
+    type: 'ToAIStreamOptions',
+    parameters: [
+      {
+        name: 'init',
+        type: 'ResponseInit',
+        optional: true,
+        description: 'The response init options.',
+        properties: [
+          {
+            type: 'ResponseInit',
+            parameters: [
+              {
+                name: 'status',
+                type: 'number',
+                optional: true,
+                description: 'The response status code.',
+              },
+              {
+                name: 'headers',
+                type: 'Record<string, string>',
+                optional: true,
+                description: 'The response headers.',
+              },
+            ],
+          },
+        ],
+      },
+      {
+        name: 'data',
+        type: 'StreamData',
+        optional: true,
+        description: 'The stream data object.',
+      },
+    ],
+  },
+],
},
{
name: 'toTextStreamResponse',
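Putting the documented `ToAIStreamOptions` fields together, a handler can set response init values and attach stream data in one call. A sketch that assumes only the option shape listed above (`init` with `status`/`headers`, plus `data`); the header name and value are illustrative:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, StreamData } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const data = new StreamData();
  data.append({ test: 'value' });

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
    onFinish() {
      data.close();
    },
  });

  // Both options are optional per the reference entry above.
  return result.toAIStreamResponse({
    init: {
      status: 200,
      headers: { 'X-Example': 'streaming' },
    },
    data,
  });
}
```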
Loading