Commit 99ddbb7

feat (ai/react): add experimental support for managing attachments to useChat (#2226)

Authored Jul 16, 2024
1 parent 0589428 · commit 99ddbb7

22 files changed: +1371 −40 lines
‎.changeset/red-spiders-reflect.md

+8
@@ -0,0 +1,8 @@

```md
---
'next-openai': patch
'@ai-sdk/ui-utils': patch
'@ai-sdk/react': patch
'ai': patch
---

feat (ai/react): add experimental support for managing attachments to useChat
```

‎content/docs/05-ai-sdk-ui/01-overview.mdx

+1
```diff
@@ -25,6 +25,7 @@ Here is a comparison of the supported functions across these frameworks:
 | ---------------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat)              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) tool calling | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> |
+| [useChat](/docs/reference/ai-sdk-ui/use-chat) attachments  | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useCompletion](/docs/reference/ai-sdk-ui/use-completion)  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useObject](/docs/reference/ai-sdk-ui/use-object)          | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useAssistant](/docs/reference/ai-sdk-ui/use-assistant)    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
```

‎content/docs/05-ai-sdk-ui/02-chatbot.mdx

+167
@@ -349,3 +349,170 @@ export async function POST(req: Request) {

````mdx
## Attachments (Experimental)

The `useChat` hook supports sending attachments along with a message as well as rendering them on the client. This can be useful for building applications that involve sending images, files, or other media content to the AI provider.

> **Note:** Attachments are currently only available for React frameworks.

There are two ways to send attachments with a message: by providing a `FileList` object or a list of URLs to the `handleSubmit` function.

### FileList

By using `FileList`, you can send multiple files as attachments along with a message using the file input element. The `useChat` hook will automatically convert them into data URLs and send them to the AI provider.

> **Note:** Currently, only `image/*` and `text/*` content types get automatically converted into [multi-modal content parts](https://sdk.vercel.ai/docs/foundations/prompts#multi-modal-messages). You will need to handle other content types manually.

```tsx filename="app/page.tsx"
'use client';

import { useChat } from 'ai/react';
import { useRef, useState } from 'react';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading } =
    useChat();

  const [files, setFiles] = useState<FileList | undefined>(undefined);
  const fileInputRef = useRef<HTMLInputElement>(null);

  return (
    <div>
      <div>
        {messages.map(message => (
          <div key={message.id}>
            <div>{`${message.role}: `}</div>

            <div>
              {message.content}

              <div>
                {message.experimental_attachments
                  ?.filter(attachment =>
                    attachment.contentType?.startsWith('image/'),
                  )
                  .map((attachment, index) => (
                    <img
                      key={`${message.id}-${index}`}
                      src={attachment.url}
                      alt={attachment.name}
                    />
                  ))}
              </div>
            </div>
          </div>
        ))}
      </div>

      <form
        onSubmit={event => {
          handleSubmit(event, {
            experimental_attachments: files,
          });

          setFiles(undefined);

          if (fileInputRef.current) {
            fileInputRef.current.value = '';
          }
        }}
      >
        <input
          type="file"
          onChange={event => {
            if (event.target.files) {
              setFiles(event.target.files);
            }
          }}
          multiple
          ref={fileInputRef}
        />
        <input
          value={input}
          placeholder="Send message..."
          onChange={handleInputChange}
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
```

### URLs

You can also send URLs as attachments along with a message. This can be useful for sending links to external resources or media content.

> **Note:** The URL can also be a data URL, which is a base64-encoded string that represents the content of a file. Currently, only `image/*` content types get automatically converted into [multi-modal content parts](https://sdk.vercel.ai/docs/foundations/prompts#multi-modal-messages). You will need to handle other content types manually.

```tsx filename="app/page.tsx"
'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';
import { Attachment } from '@ai-sdk/ui-utils';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading } =
    useChat();

  const [attachments] = useState<Attachment[]>([
    {
      name: 'earth.png',
      contentType: 'image/png',
      url: 'https://example.com/earth.png',
    },
    {
      name: 'moon.png',
      contentType: 'image/png',
      url: 'data:image/png;base64,iVBORw0KGgo...',
    },
  ]);

  return (
    <div>
      <div>
        {messages.map(message => (
          <div key={message.id}>
            <div>{`${message.role}: `}</div>

            <div>
              {message.content}

              <div>
                {message.experimental_attachments
                  ?.filter(attachment =>
                    attachment.contentType?.startsWith('image/'),
                  )
                  .map((attachment, index) => (
                    <img
                      key={`${message.id}-${index}`}
                      src={attachment.url}
                      alt={attachment.name}
                    />
                  ))}
              </div>
            </div>
          </div>
        ))}
      </div>

      <form
        onSubmit={event => {
          handleSubmit(event, {
            experimental_attachments: attachments,
          });
        }}
      >
        <input
          value={input}
          placeholder="Send message..."
          onChange={handleInputChange}
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
```
````
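For the `FileList` path, `useChat` converts each file into a data URL before sending. A minimal sketch of that shape (`toDataUrl` is an illustrative helper, not an SDK export):

```typescript
// Sketch: build a `data:<mime>;base64,<content>` URL from raw bytes,
// the same shape useChat produces for FileList attachments.
// `toDataUrl` is a hypothetical helper, not part of the SDK.
function toDataUrl(bytes: Uint8Array, mimeType: string): string {
  const base64 = Buffer.from(bytes).toString('base64');
  return `data:${mimeType};base64,${base64}`;
}

// [116, 101, 115, 116] is "test" as UTF-8 bytes
console.log(toDataUrl(new Uint8Array([116, 101, 115, 116]), 'text/plain'));
// data:text/plain;base64,dGVzdA==
```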

‎content/docs/07-reference/ai-sdk-ui/01-use-chat.mdx

+124
```diff
@@ -158,6 +158,89 @@ Allows you to easily create a conversational user interface for your chatbot app
       name: 'messages',
       type: 'Message[]',
       description: 'The current array of chat messages.',
+      properties: [
+        {
+          type: 'Message',
+          parameters: [
+            {
+              name: 'id',
+              type: 'string',
+              description: 'The unique identifier of the message.',
+            },
+            {
+              name: 'role',
+              type: "'system' | 'user' | 'assistant' | 'data'",
+              description: 'The role of the message.',
+            },
+            {
+              name: 'content',
+              type: 'string',
+              description: 'The content of the message.',
+            },
+            {
+              name: 'createdAt',
+              type: 'Date',
+              isOptional: true,
+              description: 'The creation date of the message.',
+            },
+            {
+              name: 'name',
+              type: 'string',
+              isOptional: true,
+              description: 'The name of the message.',
+            },
+            {
+              name: 'data',
+              type: 'JSONValue',
+              isOptional: true,
+              description: 'Additional data sent along with the message.',
+            },
+            {
+              name: 'annotations',
+              type: 'Array<JSONValue>',
+              isOptional: true,
+              description:
+                'Additional annotations sent along with the message.',
+            },
+            {
+              name: 'experimental_attachments',
+              type: 'Array<Attachment>',
+              isOptional: true,
+              description:
+                'Additional attachments sent along with the message.',
+              properties: [
+                {
+                  type: 'Attachment',
+                  description:
+                    'An attachment object that can be used to describe the metadata of the file.',
+                  parameters: [
+                    {
+                      name: 'name',
+                      type: 'string',
+                      isOptional: true,
+                      description:
+                        'The name of the attachment, usually the file name.',
+                    },
+                    {
+                      name: 'contentType',
+                      type: 'string',
+                      isOptional: true,
+                      description:
+                        'A string indicating the media type of the file.',
+                    },
+                    {
+                      name: 'url',
+                      type: 'string',
+                      description:
+                        'The URL of the attachment. It can either be a URL to a hosted file or a Data URL.',
+                    },
+                  ],
+                },
+              ],
+            },
+          ],
+        },
+      ],
     },
     {
       name: 'error',
@@ -253,6 +336,47 @@ Allows you to easily create a conversational user interface for your chatbot app
           type: 'JSONValue',
           description: 'Additional data to be sent to the API endpoint.',
         },
+        {
+          name: 'experimental_attachments',
+          type: 'FileList | Array<Attachment>',
+          isOptional: true,
+          description:
+            'An array of attachments to be sent to the API endpoint.',
+          properties: [
+            {
+              type: 'FileList',
+              description:
+                "A list of files that have been selected by the user using an <input type='file'> element. It's also used for a list of files dropped into web content when using the drag and drop API.",
+            },
+            {
+              type: 'Attachment',
+              description:
+                'An attachment object that can be used to describe the metadata of the file.',
+              parameters: [
+                {
+                  name: 'name',
+                  type: 'string',
+                  isOptional: true,
+                  description:
+                    'The name of the attachment, usually the file name.',
+                },
+                {
+                  name: 'contentType',
+                  type: 'string',
+                  isOptional: true,
+                  description:
+                    'A string indicating the media type of the file.',
+                },
+                {
+                  name: 'url',
+                  type: 'string',
+                  description:
+                    'The URL of the attachment. It can either be a URL to a hosted file or a Data URL.',
+                },
+              ],
+            },
+          ],
+        },
       ],
     },
   ],
```
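The `Attachment` parameters documented above correspond to this TypeScript shape (a reconstruction from the reference table; the SDK exports its own definition from `@ai-sdk/ui-utils`):

```typescript
// Attachment shape as documented in the reference table above.
// Reconstructed for illustration; not the SDK source.
interface Attachment {
  /** The name of the attachment, usually the file name. */
  name?: string;
  /** A string indicating the media type of the file. */
  contentType?: string;
  /** URL of the attachment: a hosted file URL or a data URL. */
  url: string;
}

const attachment: Attachment = {
  name: 'earth.png',
  contentType: 'image/png',
  url: 'https://example.com/earth.png',
};

console.log(attachment.contentType?.startsWith('image/'));
// true
```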

‎content/docs/07-reference/ai-sdk-ui/index.mdx

+1
```diff
@@ -58,6 +58,7 @@ Here is a comparison of the supported functions across these frameworks:
 | ---------------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat)              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) tool calling | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> |
+| [useChat](/docs/reference/ai-sdk-ui/use-chat) attachments  | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useCompletion](/docs/reference/ai-sdk-ui/use-completion)  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useObject](/docs/reference/ai-sdk-ui/use-object)          | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useAssistant](/docs/reference/ai-sdk-ui/use-assistant)    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
```
+11 −1

```diff
@@ -1 +1,11 @@
-OPENAI_API_KEY=xxxxxxx
+# You must first activate a Billing Account here: https://platform.openai.com/account/billing/overview
+# Then get your OpenAI API Key here: https://platform.openai.com/account/api-keys
+OPENAI_API_KEY=xxxxxxx
+
+# You must first create an OpenAI Assistant here: https://platform.openai.com/assistants
+# Then get your Assistant ID here: https://platform.openai.com/assistants
+ASSISTANT_ID=xxxxxxx
+
+# If you choose to use external files for attachments, you will need to configure a Vercel Blob Store.
+# Instructions to create a Vercel Blob Store here: https://vercel.com/docs/storage/vercel-blob
+BLOB_READ_WRITE_TOKEN=xxxxxxx
```

‎examples/next-openai/README.md

+5-4
```diff
@@ -6,7 +6,7 @@ This example shows how to use the [Vercel AI SDK](https://sdk.vercel.ai/docs) wi
 
 Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme&utm_campaign=ai-sdk-example):
 
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fvercel%2Fai%2Ftree%2Fmain%2Fexamples%2Fnext-openai&env=OPENAI_API_KEY&envDescription=OpenAI%20API%20Key&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=vercel-ai-chat-openai&repository-name=vercel-ai-chat-openai)
+[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https://github.com/vercel/ai/tree/main/examples/next-openai&env=OPENAI_API_KEY,ASSISTANT_ID&envDescription=Learn more about how to get these environment variables&envLink=https://github.com/vercel/ai/blob/main/examples/next-openai/.env.local.example&project-name=ai-sdk-next-openai&repository-name=ai-sdk-next-openai&stores=[{"type":"blob"}])
 
 ## How to use
 
@@ -28,9 +28,10 @@ To run the example locally you need to:
 
 1. Sign up at [OpenAI's Developer Platform](https://platform.openai.com/signup).
 2. Go to [OpenAI's dashboard](https://platform.openai.com/account/api-keys) and create an API KEY.
-3. Set the required OpenAI environment variable as the token value as shown [the example env file](./.env.local.example) but in a new file called `.env.local`
-4. `pnpm install` to install the required dependencies.
-5. `pnpm dev` to launch the development server.
+3. If you choose to use external files for attachments, then create a [Vercel Blob Store](https://vercel.com/docs/storage/vercel-blob).
+4. Set the required environment variable as the token value as shown [the example env file](./.env.local.example) but in a new file called `.env.local`
+5. `pnpm install` to install the required dependencies.
+6. `pnpm dev` to launch the development server.
 
 ## Learn More
```

‎examples/next-openai/app/api/chat/route.ts

+1-1
```diff
@@ -10,7 +10,7 @@ export async function POST(req: Request) {
 
   // Call the language model
   const result = await streamText({
-    model: openai('gpt-4-turbo'),
+    model: openai('gpt-4o'),
     messages: convertToCoreMessages(messages),
     async onFinish({ text, toolCalls, toolResults, usage, finishReason }) {
       // implement your own logic here, e.g. for storing messages
```
@@ -0,0 +1,61 @@

```ts
import { handleUpload, type HandleUploadBody } from '@vercel/blob/client';
import { NextResponse } from 'next/server';

/*
 * This route is used to upload files to Vercel's Blob Storage.
 * Example from https://vercel.com/docs/storage/vercel-blob/client-upload#create-a-client-upload-route
 */
export async function POST(request: Request): Promise<NextResponse> {
  const body = (await request.json()) as HandleUploadBody;

  try {
    const jsonResponse = await handleUpload({
      body,
      request,
      onBeforeGenerateToken: async (
        pathname,
        /* clientPayload */
      ) => {
        // Generate a client token for the browser to upload the file
        // ⚠️ Authenticate and authorize users before generating the token.
        // Otherwise, you're allowing anonymous uploads.

        return {
          allowedContentTypes: [
            'image/jpeg',
            'image/png',
            'image/gif',
            'application/pdf',
            'text/plain',
          ],
          tokenPayload: JSON.stringify({
            // optional, sent to your server on upload completion
            // you could pass a user id from auth, or a value from clientPayload
          }),
        };
      },
      onUploadCompleted: async ({ blob, tokenPayload }) => {
        // Get notified of client upload completion
        // ⚠️ This will not work on `localhost` websites,
        // Use ngrok or similar to get the full upload flow

        console.log('file upload completed', blob, tokenPayload);

        try {
          // Run any logic after the file upload completed
          // const { userId } = JSON.parse(tokenPayload);
          // await db.update({ avatar: blob.url, userId });
        } catch (error) {
          throw new Error('Could not complete operation');
        }
      },
    });

    return NextResponse.json(jsonResponse);
  } catch (error) {
    return NextResponse.json(
      { error: (error as Error).message },
      { status: 400 }, // The webhook will retry 5 times waiting for a 200
    );
  }
}
```
@@ -0,0 +1,115 @@

```tsx
'use client';

/* eslint-disable @next/next/no-img-element */
import { useChat } from 'ai/react';
import { useRef, useState } from 'react';
import { upload } from '@vercel/blob/client';
import { Attachment } from '@ai-sdk/ui-utils';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading } =
    useChat({
      api: '/api/chat',
    });

  const [attachments, setAttachments] = useState<Attachment[]>([]);
  const [isUploading, setIsUploading] = useState<boolean>(false);
  const fileInputRef = useRef<HTMLInputElement>(null);

  return (
    <div className="flex flex-col gap-2">
      <div className="flex flex-col p-2 gap-2">
        {messages.map(message => (
          <div key={message.id} className="flex flex-row gap-2">
            <div className="w-24 text-zinc-500 flex-shrink-0">{`${message.role}: `}</div>

            <div className="flex flex-col gap-2">
              {message.content}

              <div className="flex flex-row gap-2">
                {message.experimental_attachments?.map((attachment, index) => (
                  <img
                    key={`${message.id}-${index}`}
                    className="w-24 rounded-md"
                    src={attachment.url}
                    alt={attachment.name}
                  />
                ))}
              </div>
            </div>
          </div>
        ))}
      </div>

      <form
        onSubmit={event => {
          if (isUploading) {
            alert('Please wait for the files to finish uploading.');
            return;
          }

          handleSubmit(event, {
            experimental_attachments: attachments,
          });

          setAttachments([]);

          if (fileInputRef.current) {
            fileInputRef.current.value = '';
          }
        }}
        className="flex flex-col gap-2 fixed bottom-0 p-2 w-full"
      >
        <div className="flex flex-row gap-2 fixed right-2 bottom-14 items-end">
          {Array.from(attachments)
            .filter(attachment => attachment.contentType?.startsWith('image/'))
            .map(attachment => (
              <div key={attachment.name}>
                <img
                  className="w-24 rounded-md"
                  src={attachment.url}
                  alt={attachment.name}
                />
                <span className="text-sm text-zinc-500">{attachment.name}</span>
              </div>
            ))}
        </div>
        <input
          type="file"
          onChange={async event => {
            if (event.target.files) {
              setIsUploading(true);

              for (const file of Array.from(event.target.files)) {
                const blob = await upload(file.name, file, {
                  access: 'public',
                  handleUploadUrl: '/api/file',
                });

                setAttachments(prevAttachments => [
                  ...prevAttachments,
                  {
                    name: file.name,
                    contentType: blob.contentType,
                    url: blob.url,
                  },
                ]);
              }

              setIsUploading(false);
            }
          }}
          multiple
          ref={fileInputRef}
        />
        <input
          value={input}
          placeholder="Send message..."
          onChange={handleInputChange}
          className="bg-zinc-100 w-full p-2"
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
```
@@ -0,0 +1,113 @@

```tsx
'use client';

/* eslint-disable @next/next/no-img-element */
import { getTextFromDataUrl } from '@ai-sdk/ui-utils';
import { useChat } from 'ai/react';
import { useRef, useState } from 'react';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading } =
    useChat({
      api: '/api/chat',
    });

  const [files, setFiles] = useState<FileList | undefined>(undefined);
  const fileInputRef = useRef<HTMLInputElement>(null);

  return (
    <div className="flex flex-col gap-2">
      <div className="flex flex-col p-2 gap-2">
        {messages.map(message => (
          <div key={message.id} className="flex flex-row gap-2">
            <div className="w-24 text-zinc-500 flex-shrink-0">{`${message.role}: `}</div>

            <div className="flex flex-col gap-2">
              {message.content}

              <div className="flex flex-row gap-2">
                {message.experimental_attachments?.map((attachment, index) =>
                  attachment.contentType?.includes('image/') ? (
                    <img
                      key={`${message.id}-${index}`}
                      className="w-24 rounded-md"
                      src={attachment.url}
                      alt={attachment.name}
                    />
                  ) : attachment.contentType?.includes('text/') ? (
                    <div className="w-32 h-24 rounded-md text-xs ellipsis overflow-hidden p-2 text-zinc-500 border">
                      {getTextFromDataUrl(attachment.url)}
                    </div>
                  ) : null,
                )}
              </div>
            </div>
          </div>
        ))}
      </div>

      <form
        onSubmit={event => {
          handleSubmit(event, {
            experimental_attachments: files,
          });
          setFiles(undefined);

          if (fileInputRef.current) {
            fileInputRef.current.value = '';
          }
        }}
        className="flex flex-col gap-2 fixed bottom-0 p-2 w-full"
      >
        <div className="flex flex-row gap-2 fixed right-2 bottom-14 items-end">
          {files
            ? Array.from(files).map(attachment => {
                const { type } = attachment;

                if (type.startsWith('image/')) {
                  return (
                    <div key={attachment.name}>
                      <img
                        className="w-24 rounded-md"
                        src={URL.createObjectURL(attachment)}
                        alt={attachment.name}
                      />
                      <span className="text-sm text-zinc-500">
                        {attachment.name}
                      </span>
                    </div>
                  );
                } else if (type.startsWith('text/')) {
                  return (
                    <div
                      key={attachment.name}
                      className="w-24 text-zinc-500 flex-shrink-0 text-sm flex flex-col gap-1"
                    >
                      <div className="w-16 h-20 bg-zinc-100 rounded-md" />
                      {attachment.name}
                    </div>
                  );
                }
              })
            : ''}
        </div>
        <input
          type="file"
          onChange={event => {
            if (event.target.files) {
              setFiles(event.target.files);
            }
          }}
          multiple
          ref={fileInputRef}
        />
        <input
          value={input}
          placeholder="Send message..."
          onChange={handleInputChange}
          className="bg-zinc-100 w-full p-2"
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
```

‎examples/next-openai/package.json

+2
```diff
@@ -11,6 +11,8 @@
   "dependencies": {
     "@ai-sdk/openai": "latest",
     "@ai-sdk/react": "latest",
+    "@ai-sdk/ui-utils": "latest",
+    "@vercel/blob": "^0.23.4",
     "ai": "latest",
     "next": "latest",
     "openai": "4.52.6",
```
@@ -0,0 +1,76 @@

```ts
import { Attachment } from '@ai-sdk/ui-utils';
import { ImagePart, TextPart } from './content-part';
import {
  convertDataContentToUint8Array,
  convertUint8ArrayToText,
} from './data-content';

type ContentPart = TextPart | ImagePart;

/**
 * Converts a list of attachments to a list of content parts
 * for consumption by ai/core functions.
 * Currently only supports images and text attachments.
 */
export function attachmentsToParts(attachments: Attachment[]): ContentPart[] {
  const parts: ContentPart[] = [];

  for (const attachment of attachments) {
    let url;

    try {
      url = new URL(attachment.url);
    } catch (error) {
      throw new Error(`Invalid URL: ${attachment.url}`);
    }

    switch (url.protocol) {
      case 'http:':
      case 'https:': {
        if (attachment.contentType?.startsWith('image/')) {
          parts.push({ type: 'image', image: url });
        }
        break;
      }

      case 'data:': {
        let header;
        let base64Content;
        let mimeType;

        try {
          [header, base64Content] = attachment.url.split(',');
          mimeType = header.split(';')[0].split(':')[1];
        } catch (error) {
          throw new Error(`Error processing data URL: ${attachment.url}`);
        }

        if (mimeType == null || base64Content == null) {
          throw new Error(`Invalid data URL format: ${attachment.url}`);
        }

        if (attachment.contentType?.startsWith('image/')) {
          parts.push({
            type: 'image',
            image: convertDataContentToUint8Array(base64Content),
          });
        } else if (attachment.contentType?.startsWith('text/')) {
          parts.push({
            type: 'text',
            text: convertUint8ArrayToText(
              convertDataContentToUint8Array(base64Content),
            ),
          });
        }

        break;
      }

      default: {
        throw new Error(`Unsupported URL protocol: ${url.protocol}`);
      }
    }
  }

  return parts;
}
```
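The data-URL branch above extracts the MIME type by splitting on the first comma and then on `;` and `:`. That parsing, isolated as a standalone sketch (`parseDataUrl` is a hypothetical helper, not part of the SDK):

```typescript
// Isolated sketch of the data URL parsing used in attachmentsToParts.
// `parseDataUrl` is a hypothetical helper, not an SDK export.
function parseDataUrl(url: string): {
  mimeType: string;
  base64Content: string;
} {
  // 'data:image/png;base64,AAAA' -> header 'data:image/png;base64', content 'AAAA'
  const [header, base64Content] = url.split(',');
  const mimeType = header.split(';')[0].split(':')[1];

  if (mimeType == null || base64Content == null) {
    throw new Error(`Invalid data URL format: ${url}`);
  }

  return { mimeType, base64Content };
}

console.log(parseDataUrl('data:image/png;base64,iVBORw0KGgo').mimeType);
// image/png
```

A URL with no comma (e.g. `data:image/jpg;base64`) leaves `base64Content` undefined and throws, which matches the `Invalid data URL format` case covered by the test file below.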
@@ -0,0 +1,195 @@

```ts
import { convertToCoreMessages } from './convert-to-core-messages';
import { Attachment } from '@ai-sdk/ui-utils';
import { ToolResult } from '../generate-text/tool-result';

describe('convertToCoreMessages', () => {
  describe('user message', () => {
    it('should convert a simple user message', () => {
      const result = convertToCoreMessages([
        { role: 'user', content: 'Hello, AI!' },
      ]);

      expect(result).toEqual([{ role: 'user', content: 'Hello, AI!' }]);
    });

    it('should handle user message with attachments', () => {
      const attachment: Attachment = {
        contentType: 'image/jpeg',
        url: 'https://example.com/image.jpg',
      };

      const result = convertToCoreMessages([
        {
          role: 'user',
          content: 'Check this image',
          experimental_attachments: [attachment],
        },
      ]);

      expect(result).toEqual([
        {
          role: 'user',
          content: [
            { type: 'text', text: 'Check this image' },
            { type: 'image', image: new URL('https://example.com/image.jpg') },
          ],
        },
      ]);
    });

    it('should handle user message with attachment URLs', () => {
      const attachment: Attachment = {
        contentType: 'image/jpeg',
        url: 'data:image/jpg;base64,dGVzdA==',
      };

      const result = convertToCoreMessages([
        {
          role: 'user',
          content: 'Check this image',
          experimental_attachments: [attachment],
        },
      ]);

      expect(result).toEqual([
        {
          role: 'user',
          content: [
            { type: 'text', text: 'Check this image' },
            { type: 'image', image: new Uint8Array([116, 101, 115, 116]) },
          ],
        },
      ]);
    });

    it('should throw an error for invalid attachment URLs', () => {
      const attachment: Attachment = {
        contentType: 'image/jpeg',
        url: 'invalid-url',
      };

      expect(() => {
        convertToCoreMessages([
          {
            role: 'user',
            content: 'Check this image',
            experimental_attachments: [attachment],
          },
        ]);
      }).toThrow('Invalid URL: invalid-url');
    });

    it('should throw an error for invalid data URL format', () => {
      const attachment: Attachment = {
        contentType: 'image/jpeg',
        url: 'data:image/jpg;base64',
      };

      expect(() => {
        convertToCoreMessages([
          {
            role: 'user',
            content: 'Check this image',
            experimental_attachments: [attachment],
          },
        ]);
      }).toThrow(`Invalid data URL format: ${attachment.url}`);
    });

    it('should throw an error for unsupported attachment protocols', () => {
      const attachment: Attachment = {
        contentType: 'image/jpeg',
        url: 'ftp://example.com/image.jpg',
      };

      expect(() => {
        convertToCoreMessages([
          {
            role: 'user',
            content: 'Check this image',
            experimental_attachments: [attachment],
          },
        ]);
      }).toThrow('Unsupported URL protocol: ftp:');
    });
  });

  describe('assistant message', () => {
    it('should convert a simple assistant message', () => {
      const result = convertToCoreMessages([
        { role: 'assistant', content: 'Hello, human!' },
      ]);

      expect(result).toEqual([{ role: 'assistant', content: 'Hello, human!' }]);
    });

    it('should handle assistant message with tool invocations', () => {
      const toolInvocation: ToolResult<string, unknown, unknown> = {
        toolCallId: 'call1',
        toolName: 'calculator',
        args: { operation: 'add', numbers: [1, 2] },
        result: '3',
      };
      const result = convertToCoreMessages([
        {
          role: 'assistant',
          content: 'Let me calculate that for you.',
          toolInvocations: [toolInvocation],
        },
      ]);

      expect(result).toEqual([
        {
          role: 'assistant',
          content: [
            { type: 'text', text: 'Let me calculate that for you.' },
            {
              type: 'tool-call',
              toolCallId: 'call1',
              toolName: 'calculator',
              args: { operation: 'add', numbers: [1, 2] },
            },
          ],
        },
        {
          role: 'tool',
          content: [
            {
              type: 'tool-result',
              toolCallId: 'call1',
              toolName: 'calculator',
              args: { operation: 'add', numbers: [1, 2] },
              result: '3',
            },
          ],
        },
      ]);
    });
  });

  describe('multiple messages', () => {
    it('should handle a conversation with multiple messages', () => {
      const result = convertToCoreMessages([
        { role: 'user', content: "What's the weather like?" },
        { role: 'assistant', content: "I'll check that for you." },
        { role: 'user', content: 'Thanks!' },
      ]);

      expect(result).toEqual([
        { role: 'user', content: "What's the weather like?" },
        { role: 'assistant', content: "I'll check that for you." },
        { role: 'user', content: 'Thanks!' },
      ]);
    });
  });

  describe('error handling', () => {
    it('should throw an error for unhandled roles', () => {
      expect(() => {
        convertToCoreMessages([
          { role: 'system' as any, content: 'System message' },
        ]);
      }).toThrow('Unhandled role: system');
    });
  });
});
```

‎packages/core/core/prompt/convert-to-core-messages.ts

+18-2
```diff
@@ -1,5 +1,7 @@
+import { Attachment } from '@ai-sdk/ui-utils';
 import { ToolResult } from '../generate-text/tool-result';
 import { CoreMessage } from '../prompt';
+import { attachmentsToParts } from './attachments-to-parts';
 
 /**
 Converts an array of messages from useChat into an array of CoreMessages that can be used
@@ -10,14 +12,28 @@ export function convertToCoreMessages(
     role: 'user' | 'assistant';
     content: string;
     toolInvocations?: Array<ToolResult<string, unknown, unknown>>;
+    experimental_attachments?: Attachment[];
   }>,
 ) {
   const coreMessages: CoreMessage[] = [];
 
-  for (const { role, content, toolInvocations } of messages) {
+  for (const {
+    role,
+    content,
+    toolInvocations,
+    experimental_attachments,
+  } of messages) {
     switch (role) {
       case 'user': {
-        coreMessages.push({ role: 'user', content });
+        coreMessages.push({
+          role: 'user',
+          content: experimental_attachments
+            ? [
+                { type: 'text', text: content },
+                ...attachmentsToParts(experimental_attachments),
+              ]
+            : content,
+        });
         break;
       }
 
```
‎packages/core/core/prompt/data-content.ts

+15-1
@@ -46,7 +46,7 @@ export function convertDataContentToUint8Array(
   } catch (error) {
     throw new InvalidDataContentError({
       message:
-        'Invalid data content. Content string is not a base64-encoded image.',
+        'Invalid data content. Content string is not a base64-encoded media.',
       content,
       cause: error,
     });
@@ -59,3 +59,17 @@ export function convertDataContentToUint8Array(
 
   throw new InvalidDataContentError({ content });
 }
+
+/**
+ * Converts a Uint8Array to a string of text.
+ *
+ * @param uint8Array - The Uint8Array to convert.
+ * @returns The converted string.
+ */
+export function convertUint8ArrayToText(uint8Array: Uint8Array): string {
+  try {
+    return new TextDecoder().decode(uint8Array);
+  } catch (error) {
+    throw new Error('Error decoding Uint8Array to text');
+  }
+}
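The new helper is self-contained and can be exercised directly; this sketch copies its body verbatim (`TextDecoder`/`TextEncoder` are available in modern browsers and recent Node versions):

```typescript
// Copied from the diff above: decode a Uint8Array to text, wrapping failures.
function convertUint8ArrayToText(uint8Array: Uint8Array): string {
  try {
    return new TextDecoder().decode(uint8Array);
  } catch (error) {
    throw new Error('Error decoding Uint8Array to text');
  }
}

const bytes = new TextEncoder().encode('hello attachment');
console.log(convertUint8ArrayToText(bytes)); // 'hello attachment'
```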

‎packages/react/src/use-chat.ts

+48-1
@@ -1,6 +1,7 @@
 import type {
   ChatRequest,
   ChatRequestOptions,
+  Attachment,
   CreateMessage,
   FetchFunction,
   IdGenerator,
@@ -108,6 +109,7 @@ const getStreamedResponse = async (
     ({
       role,
       content,
+      experimental_attachments,
       name,
       data,
       annotations,
@@ -118,6 +120,9 @@ const getStreamedResponse = async (
     }) => ({
       role,
       content,
+      ...(experimental_attachments !== undefined && {
+        experimental_attachments,
+      }),
       ...(name !== undefined && { name }),
       ...(data !== undefined && { data }),
       ...(annotations !== undefined && { annotations }),
@@ -513,7 +518,7 @@ By default, it's set to 0, which will disable the feature.
   const [input, setInput] = useState(initialInput);
 
   const handleSubmit = useCallback(
-    (
+    async (
       event?: { preventDefault?: () => void },
       options: ChatRequestOptions = {},
       metadata?: Object,
@@ -527,6 +532,44 @@ By default, it's set to 0, which will disable the feature.
 
       event?.preventDefault?.();
 
+      const attachmentsForRequest: Attachment[] = [];
+      const attachmentsFromOptions = options.experimental_attachments;
+
+      if (attachmentsFromOptions) {
+        if (attachmentsFromOptions instanceof FileList) {
+          for (const attachment of Array.from(attachmentsFromOptions)) {
+            const { name, type } = attachment;
+
+            const dataUrl = await new Promise<string>((resolve, reject) => {
+              const reader = new FileReader();
+              reader.onload = readerEvent => {
+                resolve(readerEvent.target?.result as string);
+              };
+              reader.onerror = error => reject(error);
+              reader.readAsDataURL(attachment);
+            });
+
+            attachmentsForRequest.push({
+              name,
+              contentType: type,
+              url: dataUrl,
+            });
+          }
+        } else if (Array.isArray(attachmentsFromOptions)) {
+          for (const file of attachmentsFromOptions) {
+            const { name, url, contentType } = file;
+
+            attachmentsForRequest.push({
+              name,
+              contentType,
+              url,
+            });
+          }
+        } else {
+          throw new Error('Invalid attachments type');
+        }
+      }
+
       const requestOptions = {
         headers: options.headers ?? options.options?.headers,
         body: options.body ?? options.options?.body,
@@ -538,6 +581,10 @@ By default, it's set to 0, which will disable the feature.
               id: generateId(),
               role: 'user',
               content: input,
+              experimental_attachments:
+                attachmentsForRequest.length > 0
+                  ? attachmentsForRequest
+                  : undefined,
             })
           : messagesRef.current,
         options: requestOptions,
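In the browser, `FileReader.readAsDataURL` produces the `url` string for each `FileList` entry. The shape of that string can be approximated outside the browser with `Buffer` (a Node-side sketch of the format, not the code path used above):

```typescript
// Node-side approximation of what FileReader.readAsDataURL yields for a file:
// a data URL embedding the base64-encoded content with its MIME type.
function toDataUrl(content: string, contentType: string): string {
  const base64 = Buffer.from(content, 'utf-8').toString('base64');
  return `data:${contentType};base64,${base64}`;
}

// Matches the Attachment pushed into attachmentsForRequest for a text file.
const attachment = {
  name: 'test.txt',
  contentType: 'text/plain',
  url: toDataUrl('test file content', 'text/plain'),
};

console.log(attachment.url);
// 'data:text/plain;base64,dGVzdCBmaWxlIGNvbnRlbnQ='
```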

‎packages/react/src/use-chat.ui.test.tsx

+298-5
@@ -1,14 +1,21 @@
+/* eslint-disable @next/next/no-img-element */
 import {
   mockFetchDataStream,
   mockFetchDataStreamWithGenerator,
   mockFetchError,
 } from '@ai-sdk/ui-utils/test';
 import '@testing-library/jest-dom/vitest';
-import { cleanup, findByText, render, screen } from '@testing-library/react';
+import {
+  act,
+  cleanup,
+  findByText,
+  render,
+  screen,
+} from '@testing-library/react';
 import userEvent from '@testing-library/user-event';
-import React from 'react';
+import React, { useRef, useState } from 'react';
 import { useChat } from './use-chat';
-import { formatStreamPart } from '@ai-sdk/ui-utils';
+import { formatStreamPart, getTextFromDataUrl } from '@ai-sdk/ui-utils';
 
 describe('stream data stream', () => {
   const TestComponent = () => {
@@ -227,12 +234,11 @@ describe('form actions', () => {
           </div>
         ))}
 
-        <form onSubmit={handleSubmit} className="fixed bottom-0 w-full p-2">
+        <form onSubmit={handleSubmit}>
           <input
             value={input}
             placeholder="Send message..."
             onChange={handleInputChange}
-            className="w-full p-2 bg-zinc-100"
             disabled={isLoading}
             data-testid="do-input"
           />
@@ -556,3 +562,290 @@ describe('maxToolRoundtrips', () => {
     });
   });
 });
+
+describe('file attachments with data url', () => {
+  const TestComponent = () => {
+    const { messages, handleSubmit, handleInputChange, isLoading, input } =
+      useChat({
+        api: '/api/stream-chat',
+      });
+
+    const [attachments, setAttachments] = useState<FileList | undefined>(
+      undefined,
+    );
+    const fileInputRef = useRef<HTMLInputElement>(null);
+
+    return (
+      <div>
+        {messages.map((m, idx) => (
+          <div data-testid={`message-${idx}`} key={m.id}>
+            {m.role === 'user' ? 'User: ' : 'AI: '}
+            {m.content}
+            {m.experimental_attachments?.map(attachment => {
+              if (attachment.contentType?.startsWith('image/')) {
+                return (
+                  <img
+                    key={attachment.name}
+                    src={attachment.url}
+                    alt={attachment.name}
+                    data-testid={`attachment-${idx}`}
+                  />
+                );
+              } else if (attachment.contentType?.startsWith('text/')) {
+                return (
+                  <div key={attachment.name} data-testid={`attachment-${idx}`}>
+                    {getTextFromDataUrl(attachment.url)}
+                  </div>
+                );
+              }
+            })}
+          </div>
+        ))}
+
+        <form
+          onSubmit={event => {
+            handleSubmit(event, {
+              experimental_attachments: attachments,
+            });
+            setAttachments(undefined);
+            if (fileInputRef.current) {
+              fileInputRef.current.value = '';
+            }
+          }}
+          data-testid="chat-form"
+        >
+          <input
+            type="file"
+            onChange={event => {
+              if (event.target.files) {
+                setAttachments(event.target.files);
+              }
+            }}
+            multiple
+            ref={fileInputRef}
+            data-testid="file-input"
+          />
+          <input
+            value={input}
+            onChange={handleInputChange}
+            disabled={isLoading}
+            data-testid="message-input"
+          />
+          <button type="submit" data-testid="submit-button">
+            Send
+          </button>
+        </form>
+      </div>
+    );
+  };
+
+  beforeEach(() => {
+    render(<TestComponent />);
+  });
+
+  afterEach(() => {
+    vi.restoreAllMocks();
+    cleanup();
+  });
+
+  it('should handle text file attachment and submission', async () => {
+    const file = new File(['test file content'], 'test.txt', {
+      type: 'text/plain',
+    });
+
+    const { requestBody } = mockFetchDataStream({
+      url: '/api/stream-chat',
+      chunks: ['0:"Response to message with text attachment"\n'],
+    });
+
+    const fileInput = screen.getByTestId('file-input');
+    await userEvent.upload(fileInput, file);
+
+    const messageInput = screen.getByTestId('message-input');
+    await userEvent.type(messageInput, 'Message with text attachment');
+
+    const submitButton = screen.getByTestId('submit-button');
+    await userEvent.click(submitButton);
+
+    const sentBody = JSON.parse((await requestBody) as string);
+    expect(sentBody.messages[0].content).toBe('Message with text attachment');
+    expect(sentBody.messages[0].experimental_attachments).toBeDefined();
+    expect(sentBody.messages[0].experimental_attachments.length).toBe(1);
+    expect(sentBody.messages[0].experimental_attachments[0].name).toBe(
+      'test.txt',
+    );
+
+    await screen.findByTestId('message-0');
+    expect(screen.getByTestId('message-0')).toHaveTextContent(
+      'User: Message with text attachment',
+    );
+
+    await screen.findByTestId('attachment-0');
+    expect(screen.getByTestId('attachment-0')).toHaveTextContent(
+      'test file content',
+    );
+
+    await screen.findByTestId('message-1');
+    expect(screen.getByTestId('message-1')).toHaveTextContent(
+      'AI: Response to message with text attachment',
+    );
+  });
+
+  // image file
+
+  it('should handle image file attachment and submission', async () => {
+    const file = new File(['test image content'], 'test.png', {
+      type: 'image/png',
+    });
+
+    const { requestBody } = mockFetchDataStream({
+      url: '/api/stream-chat',
+      chunks: ['0:"Response to message with image attachment"\n'],
+    });
+
+    const fileInput = screen.getByTestId('file-input');
+    await userEvent.upload(fileInput, file);
+
+    const messageInput = screen.getByTestId('message-input');
+    await userEvent.type(messageInput, 'Message with image attachment');
+
+    const submitButton = screen.getByTestId('submit-button');
+    await userEvent.click(submitButton);
+
+    const sentBody = JSON.parse((await requestBody) as string);
+    expect(sentBody.messages[0].content).toBe('Message with image attachment');
+    expect(sentBody.messages[0].experimental_attachments).toBeDefined();
+    expect(sentBody.messages[0].experimental_attachments.length).toBe(1);
+    expect(sentBody.messages[0].experimental_attachments[0].name).toBe(
+      'test.png',
+    );
+
+    await screen.findByTestId('message-0');
+    expect(screen.getByTestId('message-0')).toHaveTextContent(
+      'User: Message with image attachment',
+    );
+
+    await screen.findByTestId('attachment-0');
+    expect(screen.getByTestId('attachment-0')).toHaveAttribute(
+      'src',
+      expect.stringContaining('data:image/png;base64'),
+    );
+
+    await screen.findByTestId('message-1');
+    expect(screen.getByTestId('message-1')).toHaveTextContent(
+      'AI: Response to message with image attachment',
+    );
+  });
+});
+
+describe('file attachments with url', () => {
+  const TestComponent = () => {
+    const { messages, handleSubmit, handleInputChange, isLoading, input } =
+      useChat({
+        api: '/api/stream-chat',
+      });
+
+    return (
+      <div>
+        {messages.map((m, idx) => (
+          <div data-testid={`message-${idx}`} key={m.id}>
+            {m.role === 'user' ? 'User: ' : 'AI: '}
+            {m.content}
+            {m.experimental_attachments?.map(attachment => {
+              if (attachment.contentType?.startsWith('image/')) {
+                return (
+                  <img
+                    key={attachment.name}
+                    src={attachment.url}
+                    alt={attachment.name}
+                    data-testid={`attachment-${idx}`}
+                  />
+                );
+              } else if (attachment.contentType?.startsWith('text/')) {
+                return (
+                  <div key={attachment.name} data-testid={`attachment-${idx}`}>
+                    {Buffer.from(
+                      attachment.url.split(',')[1],
+                      'base64',
+                    ).toString('utf-8')}
+                  </div>
+                );
+              }
+            })}
+          </div>
+        ))}
+
+        <form
+          onSubmit={event => {
+            handleSubmit(event, {
+              experimental_attachments: [
+                {
+                  name: 'test.png',
+                  contentType: 'image/png',
+                  url: 'https://example.com/image.png',
+                },
+              ],
+            });
+          }}
+          data-testid="chat-form"
+        >
+          <input
+            value={input}
+            onChange={handleInputChange}
+            disabled={isLoading}
+            data-testid="message-input"
+          />
+          <button type="submit" data-testid="submit-button">
+            Send
+          </button>
+        </form>
+      </div>
+    );
+  };
+
+  beforeEach(() => {
+    render(<TestComponent />);
+  });
+
+  afterEach(() => {
+    vi.restoreAllMocks();
+    cleanup();
+  });
+
+  it('should handle image file attachment and submission', async () => {
+    const { requestBody } = mockFetchDataStream({
+      url: '/api/stream-chat',
+      chunks: ['0:"Response to message with image attachment"\n'],
+    });
+
+    const messageInput = screen.getByTestId('message-input');
+    await userEvent.type(messageInput, 'Message with image attachment');
+
+    const submitButton = screen.getByTestId('submit-button');
+    await userEvent.click(submitButton);
+
+    const sentBody = JSON.parse((await requestBody) as string);
+    expect(sentBody.messages[0].content).toBe('Message with image attachment');
+    expect(sentBody.messages[0].experimental_attachments).toBeDefined();
+    expect(sentBody.messages[0].experimental_attachments.length).toBe(1);
+    expect(sentBody.messages[0].experimental_attachments[0].name).toBe(
+      'test.png',
+    );
+
+    await screen.findByTestId('message-0');
+    expect(screen.getByTestId('message-0')).toHaveTextContent(
+      'User: Message with image attachment',
+    );
+
+    await screen.findByTestId('attachment-0');
+    expect(screen.getByTestId('attachment-0')).toHaveAttribute(
+      'src',
+      expect.stringContaining('https://example.com/image.png'),
+    );
+
+    await screen.findByTestId('message-1');
+    expect(screen.getByTestId('message-1')).toHaveTextContent(
+      'AI: Response to message with image attachment',
+    );
+  });
+});

‎packages/ui-utils/src/data-url.ts

+17
@@ -0,0 +1,17 @@
+/**
+ * Converts a data URL of type text/* to a text string.
+ */
+export function getTextFromDataUrl(dataUrl: string): string {
+  const [header, base64Content] = dataUrl.split(',');
+  const mimeType = header.split(';')[0].split(':')[1];
+
+  if (mimeType == null || base64Content == null) {
+    throw new Error('Invalid data URL format');
+  }
+
+  try {
+    return window.atob(base64Content);
+  } catch (error) {
+    throw new Error(`Error decoding data URL`);
+  }
+}
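To see the decoder in action outside the browser, the same logic can be run with the global `atob` (available in Node 16+); this sketch only swaps `window.atob` for `atob`, otherwise it mirrors the function above:

```typescript
// Same logic as getTextFromDataUrl above, with window.atob replaced by the
// global atob so the sketch also runs under Node 16+.
function getTextFromDataUrl(dataUrl: string): string {
  const [header, base64Content] = dataUrl.split(',');
  const mimeType = header.split(';')[0].split(':')[1];

  if (mimeType == null || base64Content == null) {
    throw new Error('Invalid data URL format');
  }

  try {
    return atob(base64Content);
  } catch (error) {
    throw new Error('Error decoding data URL');
  }
}

console.log(getTextFromDataUrl('data:text/plain;base64,dGVzdCBmaWxlIGNvbnRlbnQ='));
// 'test file content'
```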

‎packages/ui-utils/src/index.ts

+1
@@ -15,3 +15,4 @@ export { processChatStream } from './process-chat-stream';
 export { readDataStream } from './read-data-stream';
 export { formatStreamPart, parseStreamPart } from './stream-parts';
 export type { StreamPart, StreamString } from './stream-parts';
+export { getTextFromDataUrl } from './data-url';

‎packages/ui-utils/src/types.ts

+31
@@ -117,6 +117,27 @@ export type ToolInvocation =
   | CoreToolCall<string, any>
   | CoreToolResult<string, any, any>;
 
+/**
+ * An attachment that can be sent along with a message.
+ */
+export interface Attachment {
+  /**
+   * The name of the attachment, usually the file name.
+   */
+  name?: string;
+
+  /**
+   * A string indicating the [media type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type).
+   * By default, it's extracted from the pathname's extension.
+   */
+  contentType?: string;
+
+  /**
+   * The URL of the attachment. It can either be a URL to a hosted file or a [Data URL](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs).
+   */
+  url: string;
+}
+
 /**
  * AI SDK UI Messages. They are used in the client and to communicate between the frontend and the API routes.
  */
@@ -126,6 +147,11 @@ export interface Message {
 
   content: string;
 
+  /**
+   * Additional attachments to be sent along with the message.
+   */
+  experimental_attachments?: Attachment[];
+
   /**
    * @deprecated Use AI SDK 3.1 `toolInvocations` instead.
    */
@@ -282,6 +308,11 @@ Additional data to be sent to the API endpoint.
 */
   data?: JSONValue;
 
+  /**
+   * Additional files to be sent to the server.
+   */
+  experimental_attachments?: FileList | Array<Attachment>;
+
   /**
 The options to be passed to the fetch call.
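Both URL flavors that the `Attachment.url` field allows are shown below with hypothetical example values (`url` is the only required field):

```typescript
// Local copy of the interface from the diff above, for a self-contained sketch.
interface Attachment {
  name?: string;
  contentType?: string;
  url: string;
}

// A hosted file, referenced by plain URL...
const hosted: Attachment = {
  name: 'image.png',
  contentType: 'image/png',
  url: 'https://example.com/image.png',
};

// ...and an inline data URL ('hello' base64-encoded).
const inline: Attachment = {
  name: 'note.txt',
  contentType: 'text/plain',
  url: 'data:text/plain;base64,aGVsbG8=',
};

console.log([hosted, inline].every(a => a.url.length > 0)); // true
```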

‎pnpm-lock.yaml

+63-25
Some generated files are not rendered by default.
