Comparing changes

base repository: vercel/ai
base: ai@3.2.23
head repository: vercel/ai
compare: ai@3.2.24
  • 7 commits
  • 17 files changed
  • 4 contributors

Commits on Jul 15, 2024

  1. fix (docs): fix packages/core readme typos (#2273)

    suemor233 authored Jul 15, 2024 · 9013b13

  2. fix (docs) (#2279)

    lgrammel authored Jul 15, 2024 · 018d6fc

  3. feat (docs): document replacing last message on error (#2280)

    lgrammel authored Jul 15, 2024 · ea1d64e

  4. feat (docs): error handling for generateObject (#2281)

    lgrammel authored Jul 15, 2024 · e7ed54a

  5. chore (docs): improve generateObject error handling example (#2282)

    lgrammel authored Jul 15, 2024 · 91fcd45

  6. feat (ai/core): add roundtrips property to generateText result (#2283)

    lgrammel authored Jul 15, 2024 · f041c05

  7. Version Packages (#2284)

    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    github-actions[bot] authored Jul 15, 2024 · 5faa581
56 changes: 55 additions & 1 deletion content/docs/03-ai-sdk-core/10-generating-structured-data.mdx
@@ -63,7 +63,10 @@ for await (const partialObject of partialObjectStream) {

You can use `streamObject` to stream generated UIs in combination with React Server Components (see [Generative UI](../ai-sdk-rsc)) or the [`useObject`](/docs/reference/ai-sdk-ui/use-object) hook.

## Guide
## Schema Writing Tips

The mapping from Zod schemas to LLM inputs (typically JSON schema) is not one-to-one, so it is not always straightforward.
Please check out the following tips and the [Prompt Engineering with Tools](/docs/ai-sdk-core/tools-and-tool-calling#prompt-engineering-with-tools) guide.
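
For illustration (a sketch that is not part of this changeset, using a hypothetical `productSchema`), Zod's `.describe()` is one way to attach per-field guidance that is carried into the generated JSON schema:

```ts
import { z } from 'zod';

// Hypothetical schema for illustration only.
// The .describe() annotations are included in the JSON schema that is
// sent to the model, giving it extra guidance for each field.
const productSchema = z.object({
  name: z.string().describe('Short product name, at most five words'),
  priceUsd: z.number().describe('Price in US dollars'),
});
```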

### Generating Arrays

@@ -108,3 +111,54 @@ const result = await generateObject({
prompt: 'Generate a fake user profile for testing.',
});
```

## Error Handling

When you use `generateObject`, errors are thrown when the model fails to generate proper JSON (`JSONParseError`)
or when the generated JSON does not match the schema (`TypeValidationError`).
Both error types contain additional information, e.g. the generated text or the invalid value.

You can use this information, for example, to design a function that safely processes the result object and also returns values in error cases:

```ts
import { openai } from '@ai-sdk/openai';
import { JSONParseError, TypeValidationError, generateObject } from 'ai';
import { z } from 'zod';

const recipeSchema = z.object({
  recipe: z.object({
    name: z.string(),
    ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
    steps: z.array(z.string()),
  }),
});

type Recipe = z.infer<typeof recipeSchema>;

async function generateRecipe(
  food: string,
): Promise<
  | { type: 'success'; recipe: Recipe }
  | { type: 'parse-error'; text: string }
  | { type: 'validation-error'; value: unknown }
  | { type: 'unknown-error'; error: unknown }
> {
  try {
    const result = await generateObject({
      model: openai('gpt-4-turbo'),
      schema: recipeSchema,
      prompt: `Generate a ${food} recipe.`,
    });

    return { type: 'success', recipe: result.object };
  } catch (error) {
    if (TypeValidationError.isTypeValidationError(error)) {
      return { type: 'validation-error', value: error.value };
    } else if (JSONParseError.isJSONParseError(error)) {
      return { type: 'parse-error', text: error.text };
    } else {
      return { type: 'unknown-error', error };
    }
  }
}
```
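
A hypothetical call site for the `generateRecipe` helper above, using only the names defined in that example:

```ts
const result = await generateRecipe('lasagna');

if (result.type === 'success') {
  console.log(result.recipe.recipe.name); // Recipe wraps a `recipe` object
} else {
  // parse-error, validation-error, or unknown-error
  console.error(result.type);
}
```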
18 changes: 17 additions & 1 deletion content/docs/03-ai-sdk-core/17-agents.mdx
@@ -12,7 +12,7 @@ One approach to implementing agents is to allow the LLM to choose the next step
With `generateText`, you can combine [tools](/docs/ai-sdk-core/tools-and-tool-calling) with `maxToolRoundtrips`.
This makes it possible to implement basic agents that reason at each step and make decisions based on the context.

## Example
### Example

This example demonstrates how to create an agent that solves math problems.
It has a calculator tool (using [math.js](https://mathjs.org/)) that it can call to evaluate mathematical expressions.
@@ -55,3 +55,19 @@ const { text: answer } = await generateText({

console.log(`ANSWER: ${answer}`);
```

## Accessing information from all roundtrips

Calling `generateText` with `maxToolRoundtrips` can result in several calls to the LLM (roundtrips).
You can access information from all roundtrips by using the `roundtrips` property of the response.

```ts
const { roundtrips } = await generateText({
  model: openai('gpt-4-turbo'),
  maxToolRoundtrips: 10,
  // ...
});

// extract all tool calls from the roundtrips:
const allToolCalls = roundtrips.flatMap(roundtrip => roundtrip.toolCalls);
```
4 changes: 3 additions & 1 deletion content/docs/05-ai-sdk-ui/02-chatbot.mdx
@@ -137,7 +137,7 @@ export default function Page() {
### Error State

Similarly, the `error` state reflects the error object thrown during the fetch request.
It can be used to display an error message, disable the submit button, or show a retry button.
It can be used to display an error message, disable the submit button, or show a retry button:

<Note>
We recommend showing a generic error message to the user, such as "Something
@@ -185,6 +185,8 @@ export default function Chat() {
}
```

Please also see the [error handling](/docs/ai-sdk-ui/error-handling) guide for more information.
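
For illustration only (a hypothetical fragment, not part of this diff), the `error` object can also be used to disable the submit input while an error is present:

```tsx
<form onSubmit={handleSubmit}>
  {/* Hypothetical markup: block submission while an error is present */}
  <input value={input} onChange={handleInputChange} disabled={error != null} />
</form>
```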

### Modify messages

Sometimes, you may want to directly modify some existing messages. For example, a delete button can be added to each message to allow users to remove them from the chat history.
6 changes: 3 additions & 3 deletions content/docs/05-ai-sdk-ui/03-chatbot-with-tool-calling.mdx
@@ -119,7 +119,7 @@ There are three things worth mentioning:
3. The [`maxToolRoundtrips`](/docs/reference/ai-sdk-ui/use-chat#max-tool-roundtrips) option is set to 5.
This enables several tool use iterations between the client and the server.

```tsx filename='app/page.tsx' highlight="9,12,26"
```tsx filename='app/page.tsx' highlight="9,12,31"
'use client';

import { ToolInvocation } from 'ai';
@@ -145,7 +145,7 @@ export default function Chat() {
});

return (
<div>
<>
{messages?.map((m: Message) => (
<div key={m.id}>
<strong>{m.role}:</strong>
@@ -191,7 +191,7 @@ export default function Chat() {
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
</form>
</div>
</>
);
}
```
89 changes: 65 additions & 24 deletions content/docs/05-ai-sdk-ui/21-error-handling.mdx
@@ -5,25 +5,12 @@ description: Learn how to handle errors in the AI SDK UI

# Error Handling

### Error Handling Callback

Errors can be processed by passing an [`onError`](/docs/reference/ai-sdk-ui/use-chat#on-error) callback function as an option to the [`useChat`](/docs/reference/ai-sdk-ui/use-chat), [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion) or [`useAssistant`](/docs/reference/ai-sdk-ui/use-assistant) hooks.
The callback function receives an error object as an argument.

```tsx file="app/page.tsx" highlight="7-10"
import { useChat } from 'ai/react';

export default function Page() {
  const {
    /* ... */
  } = useChat({
    onError: error => {
      // handle error
      console.error(error);
    },
  });
}
```

### useChat: Keep Last Message on Error

`useChat` has a `keepLastMessageOnError` option that defaults to `false`.
This option can be enabled to keep the last message on error.
We will make this the default behavior in the next major release,
and recommend enabling it.
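
A minimal sketch of enabling the option (imports and component body omitted):

```tsx
const { messages, input, handleInputChange, handleSubmit, error } = useChat({
  keepLastMessageOnError: true,
});
```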

### Error Helper Object

@@ -36,7 +23,7 @@ You can use the error object to show an error message, disable the submit button
server.
</Note>

```tsx file="app/page.tsx" highlight="7,19-26,32"
```tsx file="app/page.tsx" highlight="8,19-26,32"
'use client';

import { useChat } from 'ai/react';
@@ -76,12 +63,66 @@ export default function Chat() {
}
```

### useChat: Keep Last Message on Error
#### Alternative: replace last message

`useChat` has a `keepLastMessageOnError` option that defaults to `false`.
This option can be enabled to keep the last message on error.
We will make this the default behavior in the next major release.
Please enable it and update your error handling/resubmit behavior.
Alternatively, you can write a custom submit handler that replaces the last message when an error is present.

```tsx file="app/page.tsx" highlight="8,11-16,29"
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, setMessages, input, handleInputChange, handleSubmit, error } =
    useChat({
      keepLastMessageOnError: true,
    });

  function customSubmit(event: React.FormEvent<HTMLFormElement>) {
    if (error != null) {
      setMessages(messages.slice(0, -1)); // remove last message
    }

    handleSubmit(event);
  }

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}

      {error && <div>An error occurred.</div>}

      <form onSubmit={customSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```

### Error Handling Callback

Errors can be processed by passing an [`onError`](/docs/reference/ai-sdk-ui/use-chat#on-error) callback function as an option to the [`useChat`](/docs/reference/ai-sdk-ui/use-chat), [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion) or [`useAssistant`](/docs/reference/ai-sdk-ui/use-assistant) hooks.
The callback function receives an error object as an argument.

```tsx file="app/page.tsx" highlight="7-10"
import { useChat } from 'ai/react';

export default function Page() {
  const {
    /* ... */
  } = useChat({
    onError: error => {
      // handle error
      console.error(error);
    },
  });
}
```

### Injecting Errors for Testing

89 changes: 88 additions & 1 deletion content/docs/07-reference/ai-sdk-core/01-generate-text.mdx
@@ -436,7 +436,7 @@ console.log(text);
type: 'RawResponse',
parameters: [
{
name: 'header',
name: 'headers',
optional: true,
type: 'Record<string, string>',
description: 'Response headers.',
@@ -457,6 +457,93 @@ console.log(text);
description:
'The response messages that were generated during the call. It consists of an assistant message, potentially containing tool calls. When there are tool results, there is an additional tool message with the tool results that are available. If there are tools that do not have execute functions, they are not included in the tool results and need to be added separately.',
},
{
  name: 'roundtrips',
  type: 'Array<Roundtrip>',
  description:
    'Response information for every roundtrip. You can use this to get information about intermediate steps, such as the tool calls or the response headers.',
  properties: [
    {
      type: 'Roundtrip',
      parameters: [
        {
          name: 'text',
          type: 'string',
          description: 'The generated text by the model.',
        },
        {
          name: 'toolCalls',
          type: 'array',
          description: 'A list of tool calls made by the model.',
        },
        {
          name: 'toolResults',
          type: 'array',
          description:
            'A list of tool results returned as responses to earlier tool calls.',
        },
        {
          name: 'finishReason',
          type: "'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other' | 'unknown'",
          description: 'The reason the model finished generating the text.',
        },
        {
          name: 'usage',
          type: 'CompletionTokenUsage',
          description: 'The token usage of the generated text.',
          properties: [
            {
              type: 'CompletionTokenUsage',
              parameters: [
                {
                  name: 'promptTokens',
                  type: 'number',
                  description: 'The total number of tokens in the prompt.',
                },
                {
                  name: 'completionTokens',
                  type: 'number',
                  description:
                    'The total number of tokens in the completion.',
                },
                {
                  name: 'totalTokens',
                  type: 'number',
                  description: 'The total number of tokens generated.',
                },
              ],
            },
          ],
        },
        {
          name: 'rawResponse',
          type: 'RawResponse',
          optional: true,
          description: 'Optional raw response data.',
          properties: [
            {
              type: 'RawResponse',
              parameters: [
                {
                  name: 'headers',
                  optional: true,
                  type: 'Record<string, string>',
                  description: 'Response headers.',
                },
              ],
            },
          ],
        },
        {
          name: 'warnings',
          type: 'Warning[] | undefined',
          description:
            'Warnings from the model provider (e.g. unsupported settings).',
        },
      ],
    },
  ],
},
]}
/>
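
As a rough sketch of consuming the `roundtrips` fields documented above (assuming the property shapes listed in this table; the prompt is omitted as in the docs examples):

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { roundtrips } = await generateText({
  model: openai('gpt-4-turbo'),
  maxToolRoundtrips: 5,
  // ...
});

// Sum the token usage reported for each roundtrip:
const totalTokens = roundtrips.reduce(
  (sum, roundtrip) => sum + roundtrip.usage.totalTokens,
  0,
);
```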

4 changes: 2 additions & 2 deletions content/docs/07-reference/ai-sdk-core/02-stream-text.mdx
@@ -439,7 +439,7 @@ for await (const textPart of textStream) {
type: 'RawResponse',
parameters: [
{
name: 'header',
name: 'headers',
optional: true,
type: 'Record<string, string>',
description: 'Response headers.',
@@ -521,7 +521,7 @@ for await (const textPart of textStream) {
type: 'RawResponse',
parameters: [
{
name: 'header',
name: 'headers',
optional: true,
type: 'Record<string, string>',
description: 'Response headers.',
@@ -364,7 +364,7 @@ console.log(JSON.stringify(object, null, 2));
type: 'RawResponse',
parameters: [
{
name: 'header',
name: 'headers',
optional: true,
type: 'Record<string, string>',
description: 'Response headers.',