Commit 008725e (authored Jun 19, 2024)

feat (ui/react): add experimental_useObject hook (#2019)

1 parent: 520fb2d

37 files changed: +1070 −60 lines
‎.changeset/clever-numbers-applaud.md (new file, +5)

---
'ai': patch
---

feat (ai): add textStream, toTextStreamResponse(), and pipeTextStreamToResponse() to streamObject

‎.changeset/metal-dots-burn.md (new file, +5)

---
'@ai-sdk/react': patch
---

feat (@ai-sdk/react): add experimental_useObject to @ai-sdk/react

‎.changeset/tough-chicken-add.md (new file, +5)

---
'@ai-sdk/ui-utils': patch
---

chore (@ai-sdk/ui-utils): move functions

‎content/docs/05-ai-sdk-ui/01-overview.mdx (+5 −3)

@@ -9,9 +9,10 @@ Vercel AI SDK UI is designed to help you build interactive chat, completion, and
 Vercel AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently. With three main hooks — **`useChat`**, **`useCompletion`**, and **`useAssistant`** — you can incorporate real-time chat capabilities, text completions, and interactive assistant features into your app.

-- **`useChat`** offers real-time streaming of chat messages, abstracting state management for inputs, messages, loading, and errors, allowing for seamless integration into any UI design.
-- **`useCompletion`** enables you to handle text completions in your applications, managing chat input state and automatically updating the UI as new completions are streamed from your AI provider.
-- **`useAssistant`** is designed to facilitate interaction with OpenAI-compatible assistant APIs, managing UI state and updating it automatically as responses are streamed.
+- **[`useChat`](/docs/reference/ai-sdk-ui/use-chat)** offers real-time streaming of chat messages, abstracting state management for inputs, messages, loading, and errors, allowing for seamless integration into any UI design.
+- **[`useCompletion`](/docs/reference/ai-sdk-ui/use-completion)** enables you to handle text completions in your applications, managing chat input state and automatically updating the UI as new completions are streamed from your AI provider.
+- **[`useObject`](/docs/reference/ai-sdk-ui/use-object)** is a hook that allows you to consume streamed JSON objects, providing a simple way to handle and display structured data in your application.
+- **[`useAssistant`](/docs/reference/ai-sdk-ui/use-assistant)** is designed to facilitate interaction with OpenAI-compatible assistant APIs, managing UI state and updating it automatically as responses are streamed.

 These hooks are designed to reduce the complexity and time required to implement AI interactions, letting you focus on creating exceptional user experiences.

@@ -25,6 +26,7 @@ Here is a comparison of the supported functions across these frameworks:
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) tool calling | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useCompletion](/docs/reference/ai-sdk-ui/use-completion) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| [useObject](/docs/reference/ai-sdk-ui/use-object) | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useAssistant](/docs/reference/ai-sdk-ui/use-assistant) | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |

 <Note>
(new file, +115 lines)

---
title: Object Generation
description: Learn how to use the useObject hook.
---

# Object Generation

<Note>`useObject` is an experimental feature and only available in React.</Note>

The [`useObject`](/docs/reference/ai-sdk-ui/use-object) hook allows you to create interfaces that represent a structured JSON object that is being streamed.

In this guide, you will learn how to use the `useObject` hook in your application to generate UIs for structured data on the fly.

## Example

The example shows a small notifications demo app that generates fake notifications in real-time.

### Schema

It is helpful to set up the schema in a separate file that is imported on both the client and server.

```ts filename='app/api/use-object/schema.ts'
import { DeepPartial } from 'ai';
import { z } from 'zod';

// define a schema for the notifications
export const notificationSchema = z.object({
  notifications: z.array(
    z.object({
      name: z.string().describe('Name of a fictional person.'),
      message: z.string().describe('Message. Do not use emojis or links.'),
      minutesAgo: z.number(),
    }),
  ),
});

// define a type for the partial notifications during generation
export type PartialNotification = DeepPartial<typeof notificationSchema>;
```
### Client

The client uses [`useObject`](/docs/reference/ai-sdk-ui/use-object) to stream the object generation process.

The results are partial and are displayed as they are received.
Please note the code for handling `undefined` values in the JSX.

```tsx filename='app/page.tsx'
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { setInput, object } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button
        onClick={async () => {
          setInput('Messages during finals week.');
        }}
      >
        Generate notifications
      </button>

      <div>
        {object?.notifications?.map((notification, index) => (
          <div key={index}>
            <div>
              <div>
                <p>{notification?.name}</p>
                <p>
                  {notification?.minutesAgo}
                  {notification?.minutesAgo != null ? ' minutes ago' : ''}
                </p>
              </div>
              <p>{notification?.message}</p>
            </div>
          </div>
        ))}
      </div>
    </div>
  );
}
```
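Because the object streams in incrementally, any field may be `undefined` at a given moment, which is why the JSX above guards every access. A minimal sketch of the kind of partial states the hook surfaces (plain TypeScript, independent of the SDK; chunk boundaries are hypothetical):

```typescript
// Hypothetical sequence of partial objects as they might arrive while the
// model streams the notifications JSON. Field names match the
// notificationSchema above.
type PartialState = {
  notifications?: Array<
    { name?: string; message?: string; minutesAgo?: number } | undefined
  >;
};

const states: PartialState[] = [
  {},
  { notifications: [{}] },
  { notifications: [{ name: 'Jamie Roberts' }] },
  {
    notifications: [
      { name: 'Jamie Roberts', message: 'Hey!', minutesAgo: 15 },
    ],
  },
];

// Rendering code must tolerate missing fields at every step:
const labels = states.map(s => s.notifications?.[0]?.name ?? '(loading)');
```

Each render sees a deeper, but never guaranteed-complete, version of the final object.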
### Server

On the server, we use [`streamObject`](/docs/reference/ai-sdk-core/stream-object) to stream the object generation process.

```typescript filename='app/api/use-object/route.ts'
import { openai } from '@ai-sdk/openai'
import { streamObject } from 'ai'
import { notificationSchema } from './schema'

// Allow streaming responses up to 30 seconds
export const maxDuration = 30

export async function POST(req: Request) {
  const context = await req.json()

  const result = await streamObject({
    model: openai('gpt-4-turbo'),
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  })

  return result.toTextStreamResponse()
}
```
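The route above returns the raw JSON as a stream of text chunks; only once the stream finishes is the accumulated text guaranteed to be valid JSON. A minimal sketch of that accumulation (the chunk values mirror this commit's test fixtures; the helper name is illustrative, not SDK API):

```typescript
// Join streamed text chunks and parse the final result. Parsing most
// prefixes of the stream would fail; the full accumulated text is valid JSON.
function accumulate(chunks: string[]): { content: string } {
  return JSON.parse(chunks.join(''));
}

// Chunk boundaries taken from this commit's stream-object tests:
const chunks = ['{ ', '"content": "Hello, ', 'world', '!"', ' }'];
const parsed = accumulate(chunks);
```

On the client, `useObject` does the incremental version of this: it repairs the JSON prefix after each chunk to produce the partial objects shown earlier.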

‎content/docs/05-ai-sdk-ui/index.mdx (+5)

@@ -28,6 +28,11 @@ description: Learn about the Vercel AI SDK UI.
       description: 'Learn how to integrate an interface for text completion.',
       href: '/docs/ai-sdk-ui/completion',
     },
+    {
+      title: 'Object Generation',
+      description: 'Learn how to integrate an interface for object generation.',
+      href: '/docs/ai-sdk-ui/object-generation',
+    },
     {
       title: 'OpenAI Assistants',
       description: 'Learn how to integrate an interface for OpenAI Assistants.',

‎content/docs/07-reference/ai-sdk-core/04-stream-object.mdx (+40 −3)

@@ -429,12 +429,19 @@ for await (const partialObject of partialObjectStream) {
       name: 'partialObjectStream',
       type: 'AsyncIterableStream<DeepPartial<T>>',
       description:
-        'Note that the partial object is not validated. If you want to be certain that the actual content matches your schema, you need to implement your own validation for partial results.',
+        'Stream of partial objects. It gets more complete as the stream progresses. Note that the partial object is not validated. If you want to be certain that the actual content matches your schema, you need to implement your own validation for partial results.',
+    },
+    {
+      name: 'textStream',
+      type: 'AsyncIterableStream<string>',
+      description:
+        'Text stream of the JSON representation of the generated object. It contains text chunks. When the stream is finished, the object is valid JSON that can be parsed.',
     },
     {
       name: 'fullStream',
       type: 'AsyncIterableStream<ObjectStreamPart<T>>',
-      description: 'The full stream of the object.',
+      description:
+        'Stream of different types of events, including partial objects, errors, and finish events.',
       properties: [
         {
           type: 'ObjectPart',
@@ -450,6 +457,20 @@ for await (const partialObject of partialObjectStream) {
         },
       ],
     },
+    {
+      type: 'TextDeltaPart',
+      parameters: [
+        {
+          name: 'type',
+          type: "'text-delta'",
+        },
+        {
+          name: 'textDelta',
+          type: 'string',
+          description: 'The text delta for the underlying raw JSON text.',
+        },
+      ],
+    },
     {
       type: 'ErrorPart',
       parameters: [
@@ -514,6 +535,18 @@ for await (const partialObject of partialObjectStream) {
       description:
         'Warnings from the model provider (e.g. unsupported settings).',
     },
+    {
+      name: 'pipeTextStreamToResponse',
+      type: '(response: ServerResponse, init?: { headers?: Record<string, string>; status?: number }) => void',
+      description:
+        'Writes text delta output to a Node.js response-like object. It sets a `Content-Type` header to `text/plain; charset=utf-8` and writes each text delta as a separate chunk.',
+    },
+    {
+      name: 'toTextStreamResponse',
+      type: '(init?: ResponseInit) => Response',
+      description:
+        'Creates a simple text stream response. Each text delta is encoded as UTF-8 and sent as a separate chunk. Non-text-delta events are ignored.',
+    },
   ]}
/>

@@ -522,9 +555,13 @@ for await (const partialObject of partialObjectStream) {
   <ExampleLinks
     examples={[
       {
-        title: 'Streaming Object Generation (Next.js App Router)',
+        title: 'Streaming Object Generation with RSC',
        link: '/examples/next-app/basics/streaming-object-generation',
       },
+      {
+        title: 'Streaming Object Generation with useObject',
+        link: '/examples/next-pages/basics/streaming-object-generation',
+      },
       {
         title: 'Streaming Partial Objects',
         link: '/examples/node/streaming-structured-data/stream-object',
‎content/docs/07-reference/ai-sdk-ui/02-use-completion.mdx (+1 −1)

@@ -5,7 +5,7 @@ description: API reference for the useCompletion hook.

 # `useCompletion()`

-Allows you to create text completion based capibilities for your application. It enables the streaming of text completions from your AI provider, manages the state for chat input, and updates the UI automatically as new messages are received.
+Allows you to create text completion based capabilities for your application. It enables the streaming of text completions from your AI provider, manages the state for chat input, and updates the UI automatically as new messages are received.

 ## Import

(new file, +107 lines)

---
title: useObject
description: API reference for the useObject hook.
---

# `experimental_useObject()`

<Note>`useObject` is an experimental feature and only available in React.</Note>

Allows you to consume text streams that represent a JSON object and parse them into a complete object based on a Zod schema.
You can use it together with [`streamObject`](/docs/reference/ai-sdk-core/stream-object) in the backend.

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { z } from 'zod';

export default function Page() {
  const { setInput, object } = useObject({
    api: '/api/use-object',
    schema: z.object({ content: z.string() }),
  });

  return (
    <div>
      <button onClick={() => setInput('example input')}>Generate</button>
      {object?.content && <p>{object.content}</p>}
    </div>
  );
}
```
## Import

<Snippet
  text="import { experimental_useObject as useObject } from '@ai-sdk/react'"
  dark
  prompt={false}
/>

## API Signature

### Parameters

<PropertiesTable
  content={[
    {
      name: 'api',
      type: 'string',
      description:
        'The API endpoint. It should stream JSON that matches the schema as chunked text.',
    },
    {
      name: 'schema',
      type: 'ZodSchema<RESULT>',
      description:
        'A Zod schema that defines the shape of the complete object.',
    },
    {
      name: 'id?',
      type: 'string',
      description:
        'A unique identifier. If not provided, a random one will be generated.',
    },
    {
      name: 'initialValue?',
      type: 'DeepPartial<RESULT> | undefined',
      description: 'An optional value for the initial object.',
    },
  ]}
/>
### Returns

<PropertiesTable
  content={[
    {
      name: 'setInput',
      type: '(input: INPUT) => void',
      description: 'Calls the API with the provided input as JSON body.',
    },
    {
      name: 'object',
      type: 'DeepPartial<RESULT> | undefined',
      description:
        'The current value for the generated object. Updated as the API streams JSON chunks.',
    },
    {
      name: 'error',
      type: 'undefined | unknown',
      description: 'The error object if the API call fails.',
    },
  ]}
/>

## Examples

<ExampleLinks
  examples={[
    {
      title: 'Streaming Object Generation with useObject',
      link: '/examples/next-pages/basics/streaming-object-generation',
    },
  ]}
/>
‎content/docs/07-reference/ai-sdk-ui/index.mdx (+7 −1)

@@ -24,6 +24,11 @@ AI SDK UI contains the following hooks:
         'Use a hook to interact with language models in a completion interface.',
       href: '/docs/reference/ai-sdk-ui/use-completion',
     },
+    {
+      title: 'useObject',
+      description: 'Use a hook for consuming streamed JSON objects.',
+      href: '/docs/reference/ai-sdk-ui/use-object',
+    },
     {
       title: 'useAssistant',
       description: 'Use a hook to interact with OpenAI assistants.',
@@ -46,14 +51,15 @@ It also contains the following helper functions:

 ## UI Framework Support

-AI SDK UI supports several frameworks: [React](https://react.dev/), [Svelte](https://svelte.dev/), [Vue.js](https://vuejs.org/), and [SolidJS](https://www.solidjs.com/).
+AI SDK UI supports the following frameworks: [React](https://react.dev/), [Svelte](https://svelte.dev/), [Vue.js](https://vuejs.org/), and [SolidJS](https://www.solidjs.com/).
 Here is a comparison of the supported functions across these frameworks:

 | Function | React | Svelte | Vue.js | SolidJS |
 | ---------------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) tool calling | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useCompletion](/docs/reference/ai-sdk-ui/use-completion) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| [useObject](/docs/reference/ai-sdk-ui/use-object) | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useAssistant](/docs/reference/ai-sdk-ui/use-assistant) | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |

 <Note>
(new file, +137 lines)

---
title: Streaming Object Generation
description: Learn to stream object generations using the Vercel AI SDK in your Next.js App Router application
---

# Stream Object Generation

Object generation can sometimes take a long time to complete, especially when you're generating a large schema.
In such cases, it is useful to stream the object generation process to the client in real-time.
This allows the client to display the generated object as it is being generated,
rather than having users wait for it to complete before displaying the result.

<Browser>
  <ObjectGeneration
    stream
    object={{
      notifications: [
        {
          name: 'Jamie Roberts',
          message: "Hey! How's the study grind going? Need a coffee boost?",
          minutesAgo: 15,
        },
        {
          name: 'Prof. Morgan',
          message:
            'Reminder: Your term paper is due promptly at 8 AM tomorrow. Please ensure it meets the submission guidelines outlined.',
          minutesAgo: 46,
        },
        {
          name: 'Alex Chen',
          message:
            "Dude, urgent! Borrow your notes for tomorrow's exam? I swear mine got eaten by my dog!",
          minutesAgo: 30,
        },
      ],
    }}
  />
</Browser>

## Schema

It is helpful to set up the schema in a separate file that is imported on both the client and server.

```ts filename='app/api/use-object/schema.ts'
import { DeepPartial } from 'ai';
import { z } from 'zod';

// define a schema for the notifications
export const notificationSchema = z.object({
  notifications: z.array(
    z.object({
      name: z.string().describe('Name of a fictional person.'),
      message: z.string().describe('Message. Do not use emojis or links.'),
      minutesAgo: z.number(),
    }),
  ),
});

// define a type for the partial notifications during generation
export type PartialNotification = DeepPartial<typeof notificationSchema>;
```

## Client

The client uses [`useObject`](/docs/reference/ai-sdk-ui/use-object) to stream the object generation process.

The results are partial and are displayed as they are received.
Please note the code for handling `undefined` values in the JSX.

```tsx filename='app/page.tsx'
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { setInput, object } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button
        onClick={async () => {
          setInput('Messages during finals week.');
        }}
      >
        Generate notifications
      </button>

      <div>
        {object?.notifications?.map((notification, index) => (
          <div key={index}>
            <div>
              <div>
                <p>{notification?.name}</p>
                <p>
                  {notification?.minutesAgo}
                  {notification?.minutesAgo != null ? ' minutes ago' : ''}
                </p>
              </div>
              <p>{notification?.message}</p>
            </div>
          </div>
        ))}
      </div>
    </div>
  );
}
```

## Server

On the server, we use [`streamObject`](/docs/reference/ai-sdk-core/stream-object) to stream the object generation process.

```typescript filename='app/api/use-object/route.ts'
import { openai } from '@ai-sdk/openai'
import { streamObject } from 'ai'
import { notificationSchema } from './schema'

// Allow streaming responses up to 30 seconds
export const maxDuration = 30

export async function POST(req: Request) {
  const context = await req.json()

  const result = await streamObject({
    model: openai('gpt-4-turbo'),
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  })

  return result.toTextStreamResponse()
}
```
‎content/examples/02-next-pages/01-basics/index.mdx (+5)

@@ -26,5 +26,10 @@ Beyond text, you will also learn to generate structured data by providing a sche
       description: 'Learn how to generate structured data.',
       href: '/examples/next-pages/basics/generating-object',
     },
+    {
+      title: 'Stream Object Generation',
+      description: 'Learn how to stream structured data generation.',
+      href: '/examples/next-pages/basics/streaming-object-generation',
+    },
   ]}
/>
(new file, +18 lines)

import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = await streamObject({
    model: openai('gpt-4-turbo'),
    prompt: `Generate 3 notifications for a messages app in this context: ${context}`,
    schema: notificationSchema,
  });

  return result.toTextStreamResponse();
}
(new file, +16 lines)

import { DeepPartial } from 'ai';
import { z } from 'zod';

// define a schema for the notifications
export const notificationSchema = z.object({
  notifications: z.array(
    z.object({
      name: z.string().describe('Name of a fictional person.'),
      message: z.string().describe('Message. Do not use emojis or links.'),
      minutesAgo: z.number(),
    }),
  ),
});

// define a type for the partial notifications during generation
export type PartialNotification = DeepPartial<typeof notificationSchema>;

‎examples/next-openai/app/stream-object/schema.ts (+1 −3)

@@ -13,6 +13,4 @@ export const notificationSchema = z.object({
 });

 // define a type for the partial notifications during generation
-export type PartialNotification = DeepPartial<
-  z.infer<typeof notificationSchema>
->;
+export type PartialNotification = DeepPartial<typeof notificationSchema>;
(new file, +46 lines)

'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { notificationSchema } from '../api/use-object/schema';

export default function Page() {
  const { setInput, object } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div className="flex flex-col items-center min-h-screen p-4 m-4">
      <button
        className="px-4 py-2 mt-4 text-white bg-blue-500 rounded-md"
        onClick={async () => {
          setInput('Messages during finals week.');
        }}
      >
        Generate notifications
      </button>

      <div className="flex flex-col gap-4 mt-4">
        {object?.notifications?.map((notification, index) => (
          <div
            className="flex items-start gap-4 p-4 bg-gray-100 rounded-md dark:bg-gray-800"
            key={index}
          >
            <div className="flex-1 space-y-1">
              <div className="flex items-center justify-between">
                <p className="font-medium">{notification?.name}</p>
                <p className="text-sm text-gray-500 dark:text-gray-400">
                  {notification?.minutesAgo}
                  {notification?.minutesAgo != null ? ' minutes ago' : ''}
                </p>
              </div>
              <p className="text-gray-700 dark:text-gray-300">
                {notification?.message}
              </p>
            </div>
          </div>
        ))}
      </div>
    </div>
  );
}

‎packages/core/core/generate-object/stream-object.test.ts (+175 −16)

@@ -2,11 +2,13 @@ import { TypeValidationError } from '@ai-sdk/provider';
 import {
   convertArrayToReadableStream,
   convertAsyncIterableToArray,
+  convertReadableStreamToArray,
 } from '@ai-sdk/provider-utils/test';
 import assert from 'node:assert';
 import { z } from 'zod';
 import { MockLanguageModelV1 } from '../test/mock-language-model-v1';
 import { streamObject } from './stream-object';
+import { createMockServerResponse } from '../test/mock-server-response';

 describe('result.objectStream', () => {
   it('should send object deltas with json mode', async () => {
@@ -148,18 +150,6 @@ describe('result.fullStream', () => {
     const result = await streamObject({
       model: new MockLanguageModelV1({
         doStream: async ({ prompt, mode }) => {
-          assert.deepStrictEqual(mode, { type: 'object-json' });
-          assert.deepStrictEqual(prompt, [
-            {
-              role: 'system',
-              content:
-                'JSON schema:\n' +
-                '{"type":"object","properties":{"content":{"type":"string"}},"required":["content"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}\n' +
-                'You MUST answer with a JSON object that matches the JSON schema above.',
-            },
-            { role: 'user', content: [{ type: 'text', text: 'prompt' }] },
-          ]);
-
           return {
             stream: convertArrayToReadableStream([
               { type: 'text-delta', textDelta: '{ ' },
@@ -187,10 +177,42 @@ describe('result.fullStream', () => {
     assert.deepStrictEqual(
       await convertAsyncIterableToArray(result.fullStream),
       [
-        { type: 'object', object: {} },
-        { type: 'object', object: { content: 'Hello, ' } },
-        { type: 'object', object: { content: 'Hello, world' } },
-        { type: 'object', object: { content: 'Hello, world!' } },
+        {
+          type: 'object',
+          object: {},
+        },
+        {
+          type: 'text-delta',
+          textDelta: '{ ',
+        },
+        {
+          type: 'object',
+          object: { content: 'Hello, ' },
+        },
+        {
+          type: 'text-delta',
+          textDelta: '"content": "Hello, ',
+        },
+        {
+          type: 'object',
+          object: { content: 'Hello, world' },
+        },
+        {
+          type: 'text-delta',
+          textDelta: 'world',
+        },
+        {
+          type: 'object',
+          object: { content: 'Hello, world!' },
+        },
+        {
+          type: 'text-delta',
+          textDelta: '!"',
+        },
+        {
+          type: 'text-delta',
+          textDelta: ' }',
+        },
         {
           type: 'finish',
           finishReason: 'stop',
@@ -208,6 +230,143 @@
   });
 });

+describe('result.textStream', () => {
+  it('should send text stream', async () => {
+    const result = await streamObject({
+      model: new MockLanguageModelV1({
+        doStream: async ({ prompt, mode }) => {
+          return {
+            stream: convertArrayToReadableStream([
+              { type: 'text-delta', textDelta: '{ ' },
+              { type: 'text-delta', textDelta: '"content": ' },
+              { type: 'text-delta', textDelta: `"Hello, ` },
+              { type: 'text-delta', textDelta: `world` },
+              { type: 'text-delta', textDelta: `!"` },
+              { type: 'text-delta', textDelta: ' }' },
+              {
+                type: 'finish',
+                finishReason: 'stop',
+                usage: { completionTokens: 10, promptTokens: 2 },
+              },
+            ]),
+            rawCall: { rawPrompt: 'prompt', rawSettings: {} },
+          };
+        },
+      }),
+      schema: z.object({ content: z.string() }),
+      mode: 'json',
+      prompt: 'prompt',
+    });
+
+    assert.deepStrictEqual(
+      await convertAsyncIterableToArray(result.textStream),
+      ['{ ', '"content": "Hello, ', 'world', '!"', ' }'],
+    );
+  });
+});
+
+describe('result.toTextStreamResponse', () => {
+  it('should create a Response with a text stream', async () => {
+    const result = await streamObject({
+      model: new MockLanguageModelV1({
+        doStream: async ({ prompt, mode }) => {
+          return {
+            stream: convertArrayToReadableStream([
+              { type: 'text-delta', textDelta: '{ ' },
+              { type: 'text-delta', textDelta: '"content": ' },
+              { type: 'text-delta', textDelta: `"Hello, ` },
+              { type: 'text-delta', textDelta: `world` },
+              { type: 'text-delta', textDelta: `!"` },
+              { type: 'text-delta', textDelta: ' }' },
+              {
+                type: 'finish',
+                finishReason: 'stop',
+                usage: { completionTokens: 10, promptTokens: 2 },
+              },
+            ]),
+            rawCall: { rawPrompt: 'prompt', rawSettings: {} },
+          };
+        },
+      }),
+      schema: z.object({ content: z.string() }),
+      mode: 'json',
+      prompt: 'prompt',
+    });
+
+    const response = result.toTextStreamResponse();
+
+    assert.strictEqual(response.status, 200);
+    assert.strictEqual(
+      response.headers.get('Content-Type'),
+      'text/plain; charset=utf-8',
+    );
+
+    assert.deepStrictEqual(
+      await convertReadableStreamToArray(
+        response.body!.pipeThrough(new TextDecoderStream()),
+      ),
+      ['{ ', '"content": "Hello, ', 'world', '!"', ' }'],
+    );
+  });
+});
+
+describe('result.pipeTextStreamToResponse', async () => {
+  it('should write text deltas to a Node.js response-like object', async () => {
+    const mockResponse = createMockServerResponse();
+
+    const result = await streamObject({
+      model: new MockLanguageModelV1({
+        doStream: async ({ prompt, mode }) => {
+          return {
+            stream: convertArrayToReadableStream([
+              { type: 'text-delta', textDelta: '{ ' },
+              { type: 'text-delta', textDelta: '"content": ' },
+              { type: 'text-delta', textDelta: `"Hello, ` },
+              { type: 'text-delta', textDelta: `world` },
+              { type: 'text-delta', textDelta: `!"` },
+              { type: 'text-delta', textDelta: ' }' },
+              {
+                type: 'finish',
+                finishReason: 'stop',
+                usage: { completionTokens: 10, promptTokens: 2 },
+              },
+            ]),
+            rawCall: { rawPrompt: 'prompt', rawSettings: {} },
+          };
+        },
+      }),
+      schema: z.object({ content: z.string() }),
+      mode: 'json',
+      prompt: 'prompt',
+    });
+
+    result.pipeTextStreamToResponse(mockResponse);
+
+    // Wait for the stream to finish writing to the mock response
+    await new Promise(resolve => {
+      const checkIfEnded = () => {
+        if (mockResponse.ended) {
+          resolve(undefined);
+        } else {
+          setImmediate(checkIfEnded);
+        }
+      };
+      checkIfEnded();
+    });
+
+    const decoder = new TextDecoder();
+
+    assert.strictEqual(mockResponse.statusCode, 200);
+    assert.deepStrictEqual(mockResponse.headers, {
+      'Content-Type': 'text/plain; charset=utf-8',
+    });
+    assert.deepStrictEqual(
+      mockResponse.writtenChunks.map(chunk => decoder.decode(chunk)),
+      ['{ ', '"content": "Hello, ', 'world', '!"', ' }'],
+    );
+  });
+});
+
 describe('result.usage', () => {
   it('should resolve with token usage', async () => {
     const result = await streamObject({

‎packages/core/core/generate-object/stream-object.ts
+126-5

@@ -2,6 +2,12 @@ import {
   LanguageModelV1CallOptions,
   LanguageModelV1StreamPart,
 } from '@ai-sdk/provider';
+import { safeValidateTypes } from '@ai-sdk/provider-utils';
+import {
+  DeepPartial,
+  isDeepEqualData,
+  parsePartialJson,
+} from '@ai-sdk/ui-utils';
 import { z } from 'zod';
 import { TokenUsage, calculateTokenUsage } from '../generate-text/token-usage';
 import { CallSettings } from '../prompt/call-settings';
@@ -15,12 +21,10 @@ import {
   createAsyncIterableStream,
 } from '../util/async-iterable-stream';
 import { convertZodToJSONSchema } from '../util/convert-zod-to-json-schema';
-import { DeepPartial } from '../util/deep-partial';
-import { isDeepEqualData } from '../util/is-deep-equal-data';
-import { parsePartialJson } from '../util/parse-partial-json';
 import { retryWithExponentialBackoff } from '../util/retry-with-exponential-backoff';
 import { injectJsonSchemaIntoSystem } from './inject-json-schema-into-system';
-import { safeValidateTypes } from '@ai-sdk/provider-utils';
+import { prepareResponseHeaders } from '../util/prepare-response-headers';
+import { ServerResponse } from 'http';
 
 /**
 Generate a structured, typed object for a given prompt and schema using a language model.
@@ -290,6 +294,10 @@ export type ObjectStreamPart<T> =
   | {
       type: 'object';
      object: DeepPartial<T>;
+    }
+  | {
+      type: 'text-delta';
+      textDelta: string;
     };
 
 /**
@@ -362,6 +370,7 @@ Response headers.
 
     // pipe chunks through a transformation stream that extracts metadata:
     let accumulatedText = '';
+    let delta = '';
     let latestObject: DeepPartial<T> | undefined = undefined;
 
     this.originalStream = stream.pipeThrough(
@@ -370,6 +379,7 @@ Response headers.
         // process partial text chunks
         if (typeof chunk === 'string') {
           accumulatedText += chunk;
+          delta += chunk;
 
           const currentObject = parsePartialJson(
             accumulatedText,
@@ -378,14 +388,32 @@ Response headers.
           if (!isDeepEqualData(latestObject, currentObject)) {
             latestObject = currentObject;
 
-            controller.enqueue({ type: 'object', object: currentObject });
+            controller.enqueue({
+              type: 'object',
+              object: currentObject,
+            });
+
+            controller.enqueue({
+              type: 'text-delta',
+              textDelta: delta,
+            });
+
+            delta = '';
           }
 
           return;
         }
 
         switch (chunk.type) {
           case 'finish': {
+            // send final text delta:
+            if (delta !== '') {
+              controller.enqueue({
+                type: 'text-delta',
+                textDelta: delta,
+              });
+            }
+
             // store usage for promises and onFinish callback:
             usage = calculateTokenUsage(chunk.usage);
 
@@ -441,6 +469,12 @@ Response headers.
     );
   }
 
+  /**
+Stream of partial objects. It gets more complete as the stream progresses.
+
+Note that the partial object is not validated.
+If you want to be certain that the actual content matches your schema, you need to implement your own validation for partial results.
+   */
   get partialObjectStream(): AsyncIterableStream<DeepPartial<T>> {
     return createAsyncIterableStream(this.originalStream, {
       transform(chunk, controller) {
@@ -449,6 +483,7 @@ Response headers.
           controller.enqueue(chunk.object);
           break;
 
+        case 'text-delta':
         case 'finish':
           break;
 
@@ -465,13 +500,99 @@ Response headers.
     });
   }
 
+  /**
+Text stream of the JSON representation of the generated object. It contains text chunks.
+When the stream is finished, the object is valid JSON that can be parsed.
+   */
+  get textStream(): AsyncIterableStream<string> {
+    return createAsyncIterableStream(this.originalStream, {
+      transform(chunk, controller) {
+        switch (chunk.type) {
+          case 'text-delta':
+            controller.enqueue(chunk.textDelta);
+            break;
+
+          case 'object':
+          case 'finish':
+            break;
+
+          case 'error':
+            controller.error(chunk.error);
+            break;
+
+          default: {
+            const _exhaustiveCheck: never = chunk;
+            throw new Error(`Unsupported chunk type: ${_exhaustiveCheck}`);
+          }
+        }
+      },
+    });
+  }
+
+  /**
+Stream of different types of events, including partial objects, errors, and finish events.
+   */
   get fullStream(): AsyncIterableStream<ObjectStreamPart<T>> {
     return createAsyncIterableStream(this.originalStream, {
       transform(chunk, controller) {
         controller.enqueue(chunk);
       },
     });
   }
+
+  /**
+Writes text delta output to a Node.js response-like object.
+It sets a `Content-Type` header to `text/plain; charset=utf-8` and
+writes each text delta as a separate chunk.
+
+@param response A Node.js response-like object (ServerResponse).
+@param init Optional headers and status code.
+   */
+  pipeTextStreamToResponse(
+    response: ServerResponse,
+    init?: { headers?: Record<string, string>; status?: number },
+  ) {
+    response.writeHead(init?.status ?? 200, {
+      'Content-Type': 'text/plain; charset=utf-8',
+      ...init?.headers,
+    });
+
+    const reader = this.textStream
+      .pipeThrough(new TextEncoderStream())
+      .getReader();
+
+    const read = async () => {
+      try {
+        while (true) {
+          const { done, value } = await reader.read();
+          if (done) break;
+          response.write(value);
+        }
+      } catch (error) {
+        throw error;
+      } finally {
+        response.end();
+      }
+    };
+
+    read();
+  }
+
+  /**
+Creates a simple text stream response.
+Each text delta is encoded as UTF-8 and sent as a separate chunk.
+Non-text-delta events are ignored.
+
+@param init Optional headers and status code.
+   */
+  toTextStreamResponse(init?: ResponseInit): Response {
+    return new Response(this.textStream.pipeThrough(new TextEncoderStream()), {
+      status: init?.status ?? 200,
+      headers: prepareResponseHeaders(init, {
+        contentType: 'text/plain; charset=utf-8',
+      }),
+    });
+  }
 }
 
 /**
/**

‎packages/core/core/generate-text/stream-text.test.ts
+6-10

@@ -570,16 +570,12 @@ describe('result.toTextStreamResponse', () => {
       'text/plain; charset=utf-8',
     );
 
-    // Read the chunks into an array
-    const reader = response.body!.getReader();
-    const chunks = [];
-    while (true) {
-      const { value, done } = await reader.read();
-      if (done) break;
-      chunks.push(new TextDecoder().decode(value));
-    }
-
-    assert.deepStrictEqual(chunks, ['Hello', ', ', 'world!']);
+    assert.deepStrictEqual(
+      await convertReadableStreamToArray(
+        response.body!.pipeThrough(new TextDecoderStream()),
+      ),
+      ['Hello', ', ', 'world!'],
+    );
   });
 });

‎packages/core/core/index.ts
+1-1

@@ -5,5 +5,5 @@ export * from './prompt';
 export * from './registry';
 export * from './tool';
 export * from './types';
-export type { DeepPartial } from './util/deep-partial';
+export type { DeepPartial } from '@ai-sdk/ui-utils';
 export { cosineSimilarity } from './util/cosine-similarity';

‎packages/react/package.json
+7-2

@@ -43,14 +43,19 @@
     "msw": "2.0.9",
     "react-dom": "^18",
     "tsup": "^7.2.0",
-    "typescript": "5.1.3"
+    "typescript": "5.1.3",
+    "zod": "3.23.8"
   },
   "peerDependencies": {
-    "react": "^18 || ^19"
+    "react": "^18 || ^19",
+    "zod": "^3.0.0"
   },
   "peerDependenciesMeta": {
     "react": {
       "optional": true
+    },
+    "zod": {
+      "optional": true
     }
   },
   "engines": {

‎packages/react/src/index.ts
+2-1

@@ -1,3 +1,4 @@
+export * from './use-assistant';
 export * from './use-chat';
 export * from './use-completion';
-export * from './use-assistant';
+export * from './use-object';

‎packages/react/src/use-assistant.ui.test.tsx
+1-1

@@ -18,7 +18,7 @@ describe('stream data stream', () => {
     <div>
       <div data-testid="status">{status}</div>
       {messages.map((m, idx) => (
-        <div data-testid={`message-${idx}`} key={m.id}>
+        <div data-testid={`message-${idx}`} key={idx}>
           {m.role === 'user' ? 'User: ' : 'AI: '}
           {m.content}
         </div>

‎packages/react/src/use-object.ts
+123

@@ -0,0 +1,123 @@
+import {
+  DeepPartial,
+  isDeepEqualData,
+  parsePartialJson,
+} from '@ai-sdk/ui-utils';
+import { useId, useState } from 'react';
+import useSWR from 'swr';
+import z from 'zod';
+
+export type Experimental_UseObjectOptions<RESULT> = {
+  /**
+   * The API endpoint. It should stream JSON that matches the schema as chunked text.
+   */
+  api: string;
+
+  /**
+   * A Zod schema that defines the shape of the complete object.
+   */
+  schema: z.Schema<RESULT>;
+
+  /**
+   * An unique identifier. If not provided, a random one will be
+   * generated. When provided, the `useObject` hook with the same `id` will
+   * have shared states across components.
+   */
+  id?: string;
+
+  /**
+   * An optional value for the initial object.
+   */
+  initialValue?: DeepPartial<RESULT>;
+};
+
+export type Experimental_UseObjectHelpers<RESULT, INPUT> = {
+  /**
+   * Calls the API with the provided input as JSON body.
+   */
+  setInput: (input: INPUT) => void;
+
+  /**
+   * The current value for the generated object. Updated as the API streams JSON chunks.
+   */
+  object: DeepPartial<RESULT> | undefined;
+
+  /**
+   * The error object of the API request if any.
+   */
+  error: undefined | unknown;
+};
+
+function useObject<RESULT, INPUT = any>({
+  api,
+  id,
+  schema, // required, in the future we will use it for validation
+  initialValue,
+}: Experimental_UseObjectOptions<RESULT>): Experimental_UseObjectHelpers<
+  RESULT,
+  INPUT
+> {
+  // Generate an unique id if not provided.
+  const hookId = useId();
+  const completionId = id ?? hookId;
+
+  // Store the completion state in SWR, using the completionId as the key to share states.
+  const { data, mutate } = useSWR<DeepPartial<RESULT>>(
+    [api, completionId],
+    null,
+    { fallbackData: initialValue },
+  );
+
+  const [error, setError] = useState<undefined | unknown>(undefined);
+
+  return {
+    async setInput(input) {
+      try {
+        const response = await fetch(api, {
+          method: 'POST',
+          headers: { 'Content-Type': 'application/json' },
+          body: JSON.stringify(input),
+        });
+
+        if (!response.ok) {
+          throw new Error(
+            (await response.text()) ?? 'Failed to fetch the response.',
+          );
+        }
+
+        if (response.body == null) {
+          throw new Error('The response body is empty.');
+        }
+
+        let accumulatedText = '';
+        let latestObject: DeepPartial<RESULT> | undefined = undefined;
+
+        response.body!.pipeThrough(new TextDecoderStream()).pipeTo(
+          new WritableStream<string>({
+            write(chunk) {
+              accumulatedText += chunk;
+
+              const currentObject = parsePartialJson(
+                accumulatedText,
+              ) as DeepPartial<RESULT>;
+
+              if (!isDeepEqualData(latestObject, currentObject)) {
+                latestObject = currentObject;
+
+                mutate(currentObject);
+              }
+            },
+          }),
+        );
+
+        setError(undefined);
+      } catch (error) {
+        setError(error);
+      }
+    },
+    object: data,
+    error,
+  };
}
+
+export const experimental_useObject = useObject;
+79

@@ -0,0 +1,79 @@
+import { mockFetchDataStream, mockFetchError } from '@ai-sdk/ui-utils/test';
+import '@testing-library/jest-dom/vitest';
+import { cleanup, render, screen } from '@testing-library/react';
+import userEvent from '@testing-library/user-event';
+import { z } from 'zod';
+import { experimental_useObject } from './use-object';
+
+describe('text stream', () => {
+  const TestComponent = () => {
+    const { object, error, setInput } = experimental_useObject({
+      api: '/api/use-object',
+      schema: z.object({ content: z.string() }),
+    });
+
+    return (
+      <div>
+        <div data-testid="object">{JSON.stringify(object)}</div>
+        <div data-testid="error">{error?.toString()}</div>
+        <button
+          data-testid="submit-button"
+          onClick={async () => setInput('test-input')}
+        >
+          Generate
+        </button>
+      </div>
+    );
+  };
+
+  beforeEach(() => {
+    render(<TestComponent />);
+  });
+
+  afterEach(() => {
+    vi.restoreAllMocks();
+    cleanup();
+  });
+
+  describe("when the API returns 'Hello, world!'", () => {
+    let mockFetch: ReturnType<typeof mockFetchDataStream>;
+
+    beforeEach(async () => {
+      mockFetch = mockFetchDataStream({
+        url: 'https://example.com/api/use-object',
+        chunks: ['{ ', '"content": "Hello, ', 'world', '!"'],
+      });
+
+      await userEvent.click(screen.getByTestId('submit-button'));
+    });
+
+    it('should render stream', async () => {
+      await screen.findByTestId('object');
+      expect(screen.getByTestId('object')).toHaveTextContent(
+        JSON.stringify({ content: 'Hello, world!' }),
+      );
+    });
+
+    it("should send 'test' to the API", async () => {
+      expect(await mockFetch.requestBody).toBe(JSON.stringify('test-input'));
+    });
+
+    it('should not have an error', async () => {
+      await screen.findByTestId('error');
+      expect(screen.getByTestId('error')).toBeEmptyDOMElement();
+    });
+  });
+
+  describe('when the API returns a 404', () => {
+    beforeEach(async () => {
+      mockFetchError({ statusCode: 404, errorMessage: 'Not found' });
+
+      await userEvent.click(screen.getByTestId('submit-button'));
+    });
+
+    it('should render error', async () => {
+      await screen.findByTestId('error');
+      expect(screen.getByTestId('error')).toHaveTextContent('Error: Not found');
+    });
+  });
+});

‎packages/ui-utils/package.json
+12-2

@@ -36,14 +36,24 @@
     }
   },
   "dependencies": {
-    "@ai-sdk/provider-utils": "0.0.15"
+    "@ai-sdk/provider-utils": "0.0.15",
+    "secure-json-parse": "2.7.0"
   },
   "devDependencies": {
     "@types/react": "^18",
     "@types/node": "^18",
     "@vercel/ai-tsconfig": "workspace:*",
     "tsup": "^8",
-    "typescript": "5.1.3"
+    "typescript": "5.1.3",
+    "zod": "3.23.8"
+  },
+  "peerDependencies": {
+    "zod": "^3.0.0"
+  },
+  "peerDependenciesMeta": {
+    "zod": {
+      "optional": true
+    }
   },
   "engines": {
     "node": ">=18"
File renamed without changes.
File renamed without changes.

‎packages/ui-utils/src/index.ts
+3

@@ -7,7 +7,10 @@ export { generateId } from '@ai-sdk/provider-utils';
 export { callChatApi } from './call-chat-api';
 export { callCompletionApi } from './call-completion-api';
 export { createChunkDecoder } from './create-chunk-decoder';
+export type { DeepPartial } from './deep-partial';
+export { isDeepEqualData } from './is-deep-equal-data';
 export { parseComplexResponse } from './parse-complex-response';
+export { parsePartialJson } from './parse-partial-json';
 export { processChatStream } from './process-chat-stream';
 export { readDataStream } from './read-data-stream';
 export { formatStreamPart, parseStreamPart } from './stream-parts';

‎packages/ui-utils/src/test/mock-fetch.ts
+8-11

@@ -77,18 +77,15 @@ export function mockFetchDataStreamWithGenerator({
     ok: true,
     status: 200,
     bodyUsed: false,
-    body: {
-      getReader() {
-        return {
-          read() {
-            return Promise.resolve(chunkGenerator.next());
-          },
-          releaseLock() {},
-          cancel() {},
-        };
+    body: new ReadableStream({
+      async start(controller) {
+        for await (const chunk of chunkGenerator) {
+          controller.enqueue(chunk);
+        }
+        controller.close();
       },
-    },
-  } as unknown as Response;
+    }),
+  } as Response;
 });
 
 return {
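The mock above swaps a hand-rolled `getReader()` object for a real `ReadableStream` driven by the chunk generator, which is why the cast relaxes from `as unknown as Response` to `as Response`. The same wrapping pattern works for any generator; a small self-contained sketch:

```typescript
// Wrap a (sync or async) generator in a real ReadableStream so consumers can
// use getReader(), pipeThrough(), pipeTo(), etc. — the pattern the mock uses.
function streamFromGenerator<T>(
  generator: AsyncGenerator<T> | Generator<T>,
): ReadableStream<T> {
  return new ReadableStream<T>({
    async start(controller) {
      for await (const chunk of generator) {
        controller.enqueue(chunk);
      }
      controller.close();
    },
  });
}

// Drain a stream into an array (the read loop a consumer would run).
async function readAll<T>(stream: ReadableStream<T>): Promise<T[]> {
  const result: T[] = [];
  const reader = stream.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    result.push(value as T);
  }
  return result;
}
```

Because the body is now a genuine `ReadableStream`, stream combinators like `pipeThrough(new TextDecoderStream())` work on the mock exactly as they do on a real `fetch` response.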

‎pnpm-lock.yaml
+9

Some generated files are not rendered by default.
