content/docs/03-ai-sdk-core/03-prompts.mdx (+60 −3)
@@ -70,21 +70,78 @@ const result = await generateText({
 <Note>
   Multi-modal refers to interacting with a model across different data types
-  (text, images, sound etc.).
+  such as text, image, or audio data.
 </Note>

+Instead of sending text in the `content` property, you can send an array of parts that include text and other data types.
+Currently, image and text parts are supported.
+
 For models that support multi-modal inputs, user messages can include images. An `image` can be a base64-encoded image (`string`), an `ArrayBuffer`, a `Uint8Array`,
 a `Buffer`, or a `URL` object. It is possible to mix text and multiple images.

-```ts highlight="3-11"
+<Note type="warning">
+  Not all models support all types of multi-modal inputs. Check the model's
+  capabilities before using this feature.
+</Note>
+
+#### Example: Buffer images
+
+```ts highlight="8-11"
 const result = await generateText({
   model,
   messages: [
     {
       role: 'user',
       content: [
         { type: 'text', text: 'Describe the image in detail.' },
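The example above is truncated before the image part. As a sketch of how the mixed `content` array described in the diff can be completed (the `generateText` call and model are omitted here; the placeholder `Buffer` bytes stand in for real image data you would normally read from a file):

```typescript
// A user message mixing a text part with an image part, following the
// part shapes named in the docs above. The image bytes are a placeholder;
// in practice you would use e.g. fs.readFileSync('image.png').
const imageBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // placeholder bytes

const messages = [
  {
    role: 'user',
    content: [
      { type: 'text', text: 'Describe the image in detail.' },
      { type: 'image', image: imageBytes },
    ],
  },
];
```

Per the surrounding text, the `image` value could equally be a base64 `string`, an `ArrayBuffer`, a `Uint8Array`, or a `URL` object, depending on where the image data comes from.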