Commit 6078a69

feat (ai/core): introduce stream data support in toAIStreamResponse (#2327)

Authored Jul 18, 2024 · 1 parent e710b38 · commit 6078a69

File tree

17 files changed: +276 −153 lines

.changeset/six-starfishes-count.md (+5)

````diff
@@ -0,0 +1,5 @@
+---
+'ai': patch
+---
+
+feat (ai/core): introduce stream data support in toAIStreamResponse
````
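Taken together, the diffs below replace the two-step `toAIStream` + `StreamingTextResponse` pattern with a single call. A condensed before/after, distilled from the documentation changes in this commit:

```ts
// Before: build an AI stream, close the StreamData in onFinal,
// and wrap both in a StreamingTextResponse.
const stream = result.toAIStream({
  onFinal(_) {
    data.close();
  },
});
return new StreamingTextResponse(stream, {}, data);

// After: close the StreamData in streamText's onFinish
// and hand it directly to toAIStreamResponse.
return result.toAIStreamResponse({ data });
```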

content/docs/02-getting-started/02-nextjs-app-router.mdx (+9 −14)

````diff
@@ -179,42 +179,37 @@ Depending on your use case, you may want to stream additional data alongside the
 
 Make the following changes to your Route Handler (`app/api/chat/route.ts`):
 
-```ts filename="app/api/chat/route.ts" highlight="2,15-25"
+```ts filename="app/api/chat/route.ts" highlight="2,10-11,16-18,21"
 import { openai } from '@ai-sdk/openai';
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+import { streamText, StreamData } from 'ai';
 
 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30;
 
 export async function POST(req: Request) {
   const { messages } = await req.json();
 
-  const result = await streamText({
-    model: openai('gpt-4-turbo'),
-    messages,
-  });
-
   const data = new StreamData();
-
   data.append({ test: 'value' });
 
-  const stream = result.toAIStream({
-    onFinal(_) {
+  const result = await streamText({
+    model: openai('gpt-4-turbo'),
+    messages,
+    onFinish() {
       data.close();
     },
   });
 
-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }
 ```
 
 In this code, you:
 
 1. Create a new instance of `StreamData`.
 2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`.
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.
 
 ### Update your frontend
 
````
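The frontend changes themselves are not shown in this extract. A minimal sketch of the client side, assuming the `useChat` hook from `ai/react` (the streaming-data docs below note that `useChat` exposes the streamed values via its `data` field):

```tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  // `data` collects the values appended to StreamData on the server.
  const { messages, data, input, handleInputChange, handleSubmit } = useChat();

  return (
    <>
      <pre>{JSON.stringify(data, null, 2)}</pre>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </>
  );
}
```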

content/docs/02-getting-started/03-nextjs-pages-router.mdx (+9 −14)

````diff
@@ -178,42 +178,37 @@ Depending on your use case, you may want to stream additional data alongside the
 
 Make the following changes to your Route Handler (`app/api/chat/route.ts`)
 
-```ts filename="app/api/chat/route.ts" highlight="2,15-25"
+```ts filename="app/api/chat/route.ts" highlight="2,10-11,16-18,21"
 import { openai } from '@ai-sdk/openai';
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+import { streamText, StreamData } from 'ai';
 
 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30;
 
 export async function POST(req: Request) {
   const { messages } = await req.json();
 
-  const result = await streamText({
-    model: openai('gpt-4-turbo'),
-    messages,
-  });
-
   const data = new StreamData();
-
   data.append({ test: 'value' });
 
-  const stream = result.toAIStream({
-    onFinal(_) {
+  const result = await streamText({
+    model: openai('gpt-4-turbo'),
+    messages,
+    onFinish() {
       data.close();
     },
   });
 
-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }
 ```
 
 In this code, you:
 
 1. Create a new instance of `StreamData`.
 2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`.
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.
 
 ### Update your frontend
 
````

content/docs/02-getting-started/04-svelte.mdx (+8 −13)

````diff
@@ -162,7 +162,7 @@ Depending on your use case, you may want to stream additional data alongside the
 
 Make the following changes to your POST endpoint (`src/routes/api/chat/+server.ts`)
 
-```ts filename="src/routes/api/chat/+server.ts" highlight="2,19-29"
+```ts filename="src/routes/api/chat/+server.ts" highlight="2,14-15,19-21,25"
 import { createOpenAI } from '@ai-sdk/openai';
 import { StreamData, StreamingTextResponse, streamText } from 'ai';
 import type { RequestHandler } from './$types';
@@ -176,32 +176,27 @@ const openai = createOpenAI({
 export const POST = (async ({ request }) => {
   const { messages } = await request.json();
 
-  const result = await streamText({
-    model: openai('gpt-3.5-turbo'),
-    messages,
-  });
-
   const data = new StreamData();
-
   data.append({ test: 'value' });
 
-  const stream = result.toAIStream({
-    onFinal(_) {
+  const result = await streamText({
+    model: openai('gpt-3.5-turbo'),
+    onFinish() {
       data.close();
     },
+    messages,
   });
 
-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }) satisfies RequestHandler;
 ```
 
 In this code, you:
 
 1. Create a new instance of `StreamData`.
 2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.
 
 ### Update your frontend
 
````

content/docs/02-getting-started/05-nuxt.mdx (+10 −15)

````diff
@@ -109,7 +109,7 @@ export default defineLazyEventHandler(async () => {
       messages,
     });
 
-    return new StreamingTextResponse(result.toAIStream());
+    return result.toAIStreamResponse();
   });
 });
 ```
@@ -176,8 +176,8 @@ Depending on your use case, you may want to stream additional data alongside the
 
 Make the following changes to your API route (`pages/api/chat.ts`)
 
-```ts filename="server/api/chat.ts" highlight="1,19-29"
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+```ts filename="server/api/chat.ts"
+import { streamText, StreamData } from 'ai';
 import { createOpenAI } from '@ai-sdk/openai';
 
 export default defineLazyEventHandler(async () => {
@@ -190,22 +190,18 @@ export default defineLazyEventHandler(async () => {
   return defineEventHandler(async (event: any) => {
     const { messages } = await readBody(event);
 
-    const result = await streamText({
-      model: openai('gpt-4-turbo'),
-      messages,
-    });
-
     const data = new StreamData();
-
     data.append({ test: 'value' });
 
-    const stream = result.toAIStream({
-      onFinal(_) {
+    const result = await streamText({
+      model: openai('gpt-4-turbo'),
+      onFinish() {
        data.close();
      },
+      messages,
    });
 
-    return new StreamingTextResponse(stream, {}, data);
+    return result.toAIStreamResponse({ data });
   });
 });
 ```
@@ -214,9 +210,8 @@ In this code, you:
 
 1. Create a new instance of `StreamData`.
 2. Append the data you want to stream alongside the model's response.
-3. Create a new AI stream with the `toAIStream` method on the `StreamTextResult` object.
-4. Listen for the `onFinal` callback on the AI Stream created above.
-5. Pass the data alongside the stream to the new `StreamingTextResponse`
+3. Listen for the `onFinish` callback on `streamText` and close the stream data.
+4. Pass the data into the `toAIStreamResponse` method.
 
 ### Update your frontend
 
````

content/docs/05-ai-sdk-ui/05-completion.mdx (+2 −2)

````diff
@@ -37,7 +37,7 @@ export default function Page() {
 ```
 
 ```ts filename='app/api/completion/route.ts'
-import { StreamingTextResponse, streamText } from 'ai';
+import { streamText } from 'ai';
 import { openai } from '@ai-sdk/openai';
 
 // Allow streaming responses up to 30 seconds
@@ -51,7 +51,7 @@ export async function POST(req: Request) {
     prompt,
   });
 
-  return new StreamingTextResponse(result.toAIStream());
+  return result.toAIStreamResponse();
 }
 ```
````
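The matching client component is referenced by the first hunk (`export default function Page()`) but not included in this extract. A minimal sketch, assuming the `useCompletion` hook from `ai/react`:

```tsx
'use client';

import { useCompletion } from 'ai/react';

export default function Page() {
  // useCompletion targets /api/completion by default,
  // matching the route handler above.
  const { completion, input, handleInputChange, handleSubmit } =
    useCompletion();

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <div>{completion}</div>
    </form>
  );
}
```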

content/docs/05-ai-sdk-ui/20-streaming-data.mdx (+20 −15)

````diff
@@ -5,15 +5,24 @@ description: Welcome to the Vercel AI SDK documentation!
 
 # Streaming Data
 
-Depending on your use case, you may want to stream additional data alongside the model's response. This can be achieved with [`StreamData`](/docs/reference/stream-helpers/stream-data).
+Depending on your use case, you may want to stream additional data alongside the model's response.
+This can be achieved with [`StreamData`](/docs/reference/stream-helpers/stream-data).
 
 ## What is StreamData
 
-The `StreamData` class allows you to stream arbitrary data to the client alongside your LLM response. This can be particularly useful in applications that need to augment AI responses with metadata, auxiliary information, or custom data structures that are relevant to the ongoing interaction.
+The `StreamData` class allows you to stream arbitrary data to the client alongside your LLM response.
+This can be particularly useful in applications that need to augment AI responses with metadata, auxiliary information,
+or custom data structures that are relevant to the ongoing interaction.
 
 ## How To Use StreamData
 
-To use `StreamData`, create a `StreamData` value on the server, append some data and then return it alongside the model response with [`StreamingTextResponse`](/docs/reference/stream-helpers/streaming-text-response). On the client, the [`useChat`](/docs/reference/ai-sdk-ui/use-chat) hook returns `data`, which will contain the additional data.
+To use `StreamData`, create a `StreamData` value on the server,
+append some data, and then include it in `toAIStreamResponse`.
+
+You need to call `close()` on the `StreamData` object to ensure the data is sent to the client.
+This can best be done in the `onFinish` callback of `streamText`.
+
+On the client, the [`useChat`](/docs/reference/ai-sdk-ui/use-chat) hook returns `data`, which will contain the additional data.
 
 ### On the server
 
@@ -22,9 +31,9 @@ To use `StreamData`, create a `StreamData` value on the server, append some data
   any framework.
 </Note>
 
-```tsx highlight="17-31"
+```tsx
 import { openai } from '@ai-sdk/openai';
-import { StreamingTextResponse, streamText, StreamData } from 'ai';
+import { streamText, StreamData } from 'ai';
 
 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30;
@@ -33,27 +42,23 @@ export async function POST(req: Request) {
   // Extract the `messages` from the body of the request
   const { messages } = await req.json();
 
-  // Call the language model
-  const result = await streamText({
-    model: openai('gpt-4-turbo'),
-    messages,
-  });
-
   // Create a new StreamData
   const data = new StreamData();
 
   // Append additional data
   data.append({ test: 'value' });
 
-  // Convert the response into a friendly text-stream
-  const stream = result.toAIStream({
-    onFinal(_) {
+  // Call the language model
+  const result = await streamText({
+    model: openai('gpt-4-turbo'),
+    onFinish() {
       data.close();
     },
+    messages,
   });
 
   // Respond with the stream and additional StreamData
-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }
 ```
````

content/docs/06-advanced/06-rate-limiting.mdx (+2 −2)

````diff
@@ -19,7 +19,7 @@ and [Upstash Ratelimit](https://github.com/upstash/ratelimit).
 ```tsx filename='app/api/generate/route.ts'
 import kv from '@vercel/kv';
 import { openai } from '@ai-sdk/openai';
-import { OpenAIStream, StreamingTextResponse } from 'ai';
+import { StreamingTextResponse } from 'ai';
 import { Ratelimit } from '@upstash/ratelimit';
 import { NextRequest } from 'next/server';
 
@@ -49,7 +49,7 @@ export async function POST(req: NextRequest) {
     messages,
   });
 
-  return new StreamingTextResponse(result.toAIStream());
+  return result.toAIStreamResponse();
 }
 ```
````
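The rate-limit check itself sits between the two hunks and is elided from this extract. A hypothetical sketch of how such a check is typically wired with `@upstash/ratelimit` (the `ratelimit` name and window values are assumptions, not the file's actual code):

```ts
// Hypothetical: shape of the elided rate-limit check, not the file's actual contents.
const ratelimit = new Ratelimit({
  redis: kv,
  limiter: Ratelimit.fixedWindow(5, '30 s'), // assumed limit: 5 requests per 30s
});

export async function POST(req: NextRequest) {
  const ip = req.ip ?? 'ip';
  const { success } = await ratelimit.limit(ip);

  if (!success) {
    return new Response('Rate limited!', { status: 429 });
  }

  // ...call streamText and return result.toAIStreamResponse() as above
}
```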

content/docs/07-reference/ai-sdk-core/02-stream-text.mdx (+39 −1)

````diff
@@ -761,9 +761,47 @@ for await (const textPart of textStream) {
   },
   {
     name: 'toAIStreamResponse',
-    type: '(init?: ResponseInit) => Response',
+    type: '(options?: ToAIStreamOptions) => Response',
     description:
       'Converts the result to a streamed response object with a stream data part stream. It can be used with the `useChat` and `useCompletion` hooks.',
+    properties: [
+      {
+        type: 'ToAIStreamOptions',
+        parameters: [
+          {
+            name: 'init',
+            type: 'ResponseInit',
+            optional: true,
+            description: 'The response init options.',
+            properties: [
+              {
+                type: 'ResponseInit',
+                parameters: [
+                  {
+                    name: 'status',
+                    type: 'number',
+                    optional: true,
+                    description: 'The response status code.',
+                  },
+                  {
+                    name: 'headers',
+                    type: 'Record<string, string>',
+                    optional: true,
+                    description: 'The response headers.',
+                  },
+                ],
+              },
+            ],
+          },
+          {
+            name: 'data',
+            type: 'StreamData',
+            optional: true,
+            description: 'The stream data object.',
+          },
+        ],
+      },
+    ],
   },
   {
     name: 'toTextStreamResponse',
````
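Per the documented `ToAIStreamOptions` shape (and the tests further down), both call forms below should be accepted; the second, passing a `ResponseInit` directly, is marked deprecated in the implementation diff:

```ts
// New options object: stream data plus optional ResponseInit under `init`.
return result.toAIStreamResponse({
  data,
  init: { headers: { 'custom-header': 'custom-value' } },
});

// Deprecated: a plain ResponseInit, as before.
return result.toAIStreamResponse({ status: 201, statusText: 'foo' });
```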
(unnamed example file)

````diff
@@ -1,5 +1,5 @@
 import { openai } from '@ai-sdk/openai';
-import { StreamData, StreamingTextResponse, streamText } from 'ai';
+import { StreamData, streamText } from 'ai';
 
 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30;
@@ -8,25 +8,20 @@ export async function POST(req: Request) {
   // Extract the `prompt` from the body of the request
   const { prompt } = await req.json();
 
-  const result = await streamText({
-    model: openai('gpt-3.5-turbo-instruct'),
-    maxTokens: 2000,
-    prompt,
-  });
-
   // optional: use stream data
   const data = new StreamData();
-
   data.append('call started');
 
-  // Convert the response to an AI data stream
-  const stream = result.toAIStream({
-    onFinal(completion) {
+  const result = await streamText({
+    model: openai('gpt-3.5-turbo-instruct'),
+    maxTokens: 2000,
+    prompt,
+    onFinish: () => {
       data.append('call completed');
       data.close();
     },
   });
 
   // Respond with the stream
-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }
````
(unnamed example file)

````diff
@@ -1,30 +1,24 @@
 import { openai } from '@ai-sdk/openai';
-import { StreamData, StreamingTextResponse, streamText } from 'ai';
+import { StreamData, streamText } from 'ai';
 
 // Allow streaming responses up to 30 seconds
 export const maxDuration = 30;
 
 export async function POST(req: Request) {
   const { messages } = await req.json();
 
+  // optional: use stream data
+  const data = new StreamData();
+  data.append('initialized call');
+
   const result = await streamText({
     model: openai('gpt-4-turbo'),
     messages,
+    onFinish() {
+      data.append('call completed');
+      data.close();
+    },
   });
 
-  // optional: use stream data
-  const data = new StreamData();
-
-  data.append('initialized call');
-
-  return new StreamingTextResponse(
-    result.toAIStream({
-      onFinal() {
-        data.append('call completed');
-        data.close();
-      },
-    }),
-    {},
-    data,
-  );
+  return result.toAIStreamResponse({ data });
 }
````

examples/solidstart-openai/src/routes/api/use-chat-vision/index.ts (+3 −3)

````diff
@@ -1,6 +1,6 @@
-import { StreamingTextResponse, convertToCoreMessages, streamText } from 'ai';
-import { APIEvent } from '@solidjs/start/server';
 import { openai } from '@ai-sdk/openai';
+import { APIEvent } from '@solidjs/start/server';
+import { convertToCoreMessages, streamText } from 'ai';
 
 export const POST = async (event: APIEvent) => {
   // 'data' contains the additional data that you have sent:
@@ -27,5 +27,5 @@ export const POST = async (event: APIEvent) => {
   });
 
   // Respond with the stream
-  return new StreamingTextResponse(result.toAIStream());
+  return result.toAIStreamResponse();
 };
````
(unnamed example file)

````diff
@@ -1,5 +1,5 @@
 import { createOpenAI } from '@ai-sdk/openai';
-import { StreamData, StreamingTextResponse, streamText } from 'ai';
+import { StreamData, streamText } from 'ai';
 
 import { env } from '$env/dynamic/private';
 // You may want to replace the above with a static private env variable
@@ -17,24 +17,19 @@ export const POST = (async ({ request }) => {
   // Extract the `prompt` from the body of the request
   const { prompt } = await request.json();
 
-  // Ask OpenAI for a streaming chat completion given the prompt
-  const result = await streamText({
-    model: openai('gpt-4-turbo-preview'),
-    prompt,
-  });
-
   // optional: use stream data
   const data = new StreamData();
-
   data.append({ test: 'value' });
 
-  // Convert the response into a friendly text-stream
-  const stream = result.toAIStream({
-    onFinal(completion) {
+  // Ask OpenAI for a streaming chat completion given the prompt
+  const result = await streamText({
+    model: openai('gpt-4-turbo-preview'),
+    prompt,
+    onFinish() {
       data.close();
     },
   });
 
   // Respond with the stream
-  return new StreamingTextResponse(stream, {}, data);
+  return result.toAIStreamResponse({ data });
 }) satisfies RequestHandler;
````

packages/core/core/generate-text/stream-text.test.ts (+96 −28)

````diff
@@ -2,15 +2,16 @@ import {
   convertArrayToReadableStream,
   convertAsyncIterableToArray,
   convertReadableStreamToArray,
+  convertResponseStreamToArray,
 } from '@ai-sdk/provider-utils/test';
 import assert from 'node:assert';
 import { z } from 'zod';
-import { formatStreamPart } from '../../streams';
+import { StreamData, formatStreamPart } from '../../streams';
+import { setTestTracer } from '../telemetry/get-tracer';
 import { MockLanguageModelV1 } from '../test/mock-language-model-v1';
 import { createMockServerResponse } from '../test/mock-server-response';
-import { streamText } from './stream-text';
 import { MockTracer } from '../test/mock-tracer';
-import { setTestTracer } from '../telemetry/get-tracer';
+import { streamText } from './stream-text';
 
 describe('result.textStream', () => {
   it('should send text deltas', async () => {
@@ -921,16 +922,14 @@ describe('result.toAIStreamResponse', () => {
   it('should create a Response with a stream data stream', async () => {
     const result = await streamText({
       model: new MockLanguageModelV1({
-        doStream: async ({ prompt, mode }) => {
-          return {
-            stream: convertArrayToReadableStream([
-              { type: 'text-delta', textDelta: 'Hello' },
-              { type: 'text-delta', textDelta: ', ' },
-              { type: 'text-delta', textDelta: 'world!' },
-            ]),
-            rawCall: { rawPrompt: 'prompt', rawSettings: {} },
-          };
-        },
+        doStream: async () => ({
+          stream: convertArrayToReadableStream([
+            { type: 'text-delta', textDelta: 'Hello' },
+            { type: 'text-delta', textDelta: ', ' },
+            { type: 'text-delta', textDelta: 'world!' },
+          ]),
+          rawCall: { rawPrompt: 'prompt', rawSettings: {} },
+        }),
       }),
       prompt: 'test-input',
     });
@@ -943,16 +942,86 @@
       'text/plain; charset=utf-8',
     );
 
-    // Read the chunks into an array
-    const reader = response.body!.getReader();
-    const chunks = [];
-    while (true) {
-      const { value, done } = await reader.read();
-      if (done) break;
-      chunks.push(new TextDecoder().decode(value));
-    }
+    assert.deepStrictEqual(await convertResponseStreamToArray(response), [
+      '0:"Hello"\n',
+      '0:", "\n',
+      '0:"world!"\n',
+    ]);
+  });
+
+  it('should create a Response with a stream data stream and custom headers', async () => {
+    const result = await streamText({
+      model: new MockLanguageModelV1({
+        doStream: async () => ({
+          stream: convertArrayToReadableStream([
+            { type: 'text-delta', textDelta: 'Hello' },
+            { type: 'text-delta', textDelta: ', ' },
+            { type: 'text-delta', textDelta: 'world!' },
+          ]),
+          rawCall: { rawPrompt: 'prompt', rawSettings: {} },
+        }),
+      }),
+      prompt: 'test-input',
+    });
+
+    const response = result.toAIStreamResponse({
+      status: 201,
+      statusText: 'foo',
+      headers: {
+        'custom-header': 'custom-value',
+      },
+    });
+
+    assert.strictEqual(response.status, 201);
+    assert.strictEqual(response.statusText, 'foo');
+    assert.strictEqual(
+      response.headers.get('Content-Type'),
+      'text/plain; charset=utf-8',
+    );
+    assert.strictEqual(response.headers.get('custom-header'), 'custom-value');
+
+    assert.deepStrictEqual(await convertResponseStreamToArray(response), [
+      '0:"Hello"\n',
+      '0:", "\n',
+      '0:"world!"\n',
+    ]);
+  });
+
+  it('should support merging with existing stream data', async () => {
+    const result = await streamText({
+      model: new MockLanguageModelV1({
+        doStream: async () => ({
+          stream: convertArrayToReadableStream([
+            { type: 'text-delta', textDelta: 'Hello' },
+            { type: 'text-delta', textDelta: ', ' },
+            { type: 'text-delta', textDelta: 'world!' },
+          ]),
+          rawCall: { rawPrompt: 'prompt', rawSettings: {} },
+        }),
+      }),
+      prompt: 'test-input',
+    });
+
+    const streamData = new StreamData();
+    streamData.append('stream-data-value');
+    streamData.close();
+
+    const response = result.toAIStreamResponse({ data: streamData });
+
+    assert.strictEqual(response.status, 200);
+    assert.strictEqual(
+      response.headers.get('Content-Type'),
+      'text/plain; charset=utf-8',
+    );
+
+    const chunks = await convertResponseStreamToArray(response);
 
-    assert.deepStrictEqual(chunks, ['0:"Hello"\n', '0:", "\n', '0:"world!"\n']);
+    assert.deepStrictEqual(chunks, [
+      '2:["stream-data-value"]\n',
+      '0:"Hello"\n',
+      '0:", "\n',
+      '0:"world!"\n',
+    ]);
   });
 });
 
@@ -982,12 +1051,11 @@ describe('result.toTextStreamResponse', () => {
       'text/plain; charset=utf-8',
     );
 
-    assert.deepStrictEqual(
-      await convertReadableStreamToArray(
-        response.body!.pipeThrough(new TextDecoderStream()),
-      ),
-      ['Hello', ', ', 'world!'],
-    );
+    assert.deepStrictEqual(await convertResponseStreamToArray(response), [
+      'Hello',
+      ', ',
+      'world!',
+    ]);
   });
 });
````
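The expected chunks in these tests follow the AI stream part framing: text deltas are serialized as `0:`-prefixed parts and stream data as `2:`-prefixed parts, which is what `formatStreamPart` (imported at the top of the test file) produces. For illustration, assuming `formatStreamPart` behaves as these assertions imply:

```ts
import { formatStreamPart } from 'ai';

formatStreamPart('text', 'Hello');
// => '0:"Hello"\n'

formatStreamPart('data', ['stream-data-value']);
// => '2:["stream-data-value"]\n'
```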

packages/core/core/generate-text/stream-text.ts (+39 −6)

````diff
@@ -2,7 +2,7 @@ import { Span } from '@opentelemetry/api';
 import { ServerResponse } from 'node:http';
 import {
   AIStreamCallbacksAndOptions,
-  StreamingTextResponse,
+  StreamData,
   formatStreamPart,
 } from '../../streams';
 import { CallSettings } from '../prompt/call-settings';
@@ -28,6 +28,7 @@ import {
   AsyncIterableStream,
   createAsyncIterableStream,
 } from '../util/async-iterable-stream';
+import { mergeStreams } from '../util/merge-streams';
 import { prepareResponseHeaders } from '../util/prepare-response-headers';
 import { retryWithExponentialBackoff } from '../util/retry-with-exponential-backoff';
 import { runToolsTransformation } from './run-tools-transformation';
@@ -592,7 +593,7 @@ Stream callbacks that will be called when the stream emits events.
     },
   });
 
-  const streamDataTransformer = new TransformStream<
+  const streamPartsTransformer = new TransformStream<
     TextStreamPart<TOOLS>,
     string
   >({
@@ -654,7 +655,7 @@ Stream callbacks that will be called when the stream emits events.
 
     return this.fullStream
       .pipeThrough(callbackTransformer)
-      .pipeThrough(streamDataTransformer)
+      .pipeThrough(streamPartsTransformer)
      .pipeThrough(new TextEncoderStream());
   }
 
@@ -736,12 +737,44 @@ writes each text delta as a separate chunk.
 Converts the result to a streamed response object with a stream data part stream.
 It can be used with the `useChat` and `useCompletion` hooks.
 
-@param init Optional headers.
+@param options An object with an init property (ResponseInit) and a data property.
+You can also pass in a ResponseInit directly (deprecated).
 
 @return A response object.
 */
-  toAIStreamResponse(init?: ResponseInit): Response {
-    return new StreamingTextResponse(this.toAIStream(), init);
+  toAIStreamResponse(
+    options?: ResponseInit | { init?: ResponseInit; data?: StreamData },
+  ): Response {
+    const init: ResponseInit | undefined =
+      options == null
+        ? undefined
+        : 'init' in options
+          ? options.init
+          : {
+              headers: 'headers' in options ? options.headers : undefined,
+              status: 'status' in options ? options.status : undefined,
+              statusText:
+                'statusText' in options ? options.statusText : undefined,
+            };
+
+    const data: StreamData | undefined =
+      options == null
+        ? undefined
+        : 'data' in options
+          ? options.data
+          : undefined;
+
+    const stream = data
+      ? mergeStreams(data.stream, this.toAIStream())
+      : this.toAIStream();
+
+    return new Response(stream, {
+      status: init?.status ?? 200,
+      statusText: init?.statusText,
+      headers: prepareResponseHeaders(init, {
+        contentType: 'text/plain; charset=utf-8',
+      }),
+    });
   }
 
   /**
````
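`mergeStreams` is imported from `../util/merge-streams`, but its implementation is not part of this diff. A minimal sketch of what a stream merger along these lines could look like (an assumption for illustration, not the package's actual code):

```ts
// Hypothetical sketch: forward chunks from both streams as they arrive,
// closing the merged stream once both inputs are done. The real
// `mergeStreams` may differ, e.g. in its ordering guarantees.
function mergeStreamsSketch<T>(
  a: ReadableStream<T>,
  b: ReadableStream<T>,
): ReadableStream<T> {
  return new ReadableStream<T>({
    start(controller) {
      let open = 2;
      const pump = async (stream: ReadableStream<T>) => {
        const reader = stream.getReader();
        try {
          while (true) {
            const { value, done } = await reader.read();
            if (done) break;
            controller.enqueue(value);
          }
          if (--open === 0) controller.close();
        } catch (error) {
          controller.error(error);
        }
      };
      void pump(a);
      void pump(b);
    },
  });
}
```

This is consistent with the merging test above, where the already-closed `StreamData`'s `2:` part arrives ahead of the text parts.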
packages/provider-utils/src/test/convert-response-stream-to-array.ts (+9)

````diff
@@ -0,0 +1,9 @@
+import { convertReadableStreamToArray } from './convert-readable-stream-to-array';
+
+export async function convertResponseStreamToArray(
+  response: Response,
+): Promise<string[]> {
+  return convertReadableStreamToArray(
+    response.body!.pipeThrough(new TextDecoderStream()),
+  );
+}
````

packages/provider-utils/src/test/index.ts (+1)

````diff
@@ -2,6 +2,7 @@ export * from './convert-array-to-async-iterable';
 export * from './convert-array-to-readable-stream';
 export * from './convert-async-iterable-to-array';
 export * from './convert-readable-stream-to-array';
+export * from './convert-response-stream-to-array';
 export * from './json-test-server';
 export * from './streaming-test-server';
 export * from './test-server';
````
