
Comparing changes

base repository: vercel/ai
base: ai@3.0.0
head repository: vercel/ai
compare: ai@3.0.1

Commits on Feb 29, 2024

  1. examples: Add ai/rsc example (#1010) (MaxLeiter, 243d8d0)
  2. Update README.md (#1012) (jaredpalmer, 6e4ac5e)
  3. Update layout.tsx (jaredpalmer, 14b95bc)
  4. Update README.md (jaredpalmer, af028e0)
  5. Fix turbo.json declaration of env vars (jaredpalmer, 0e06ecb)
  6. eslint: fix useEffect dependencies in examples (MaxLeiter, 1860d52)
  7. update readme (shuding, 4a1c9dc)

Commits on Mar 1, 2024

  1. docs/concepts/ai-rscs: Add some recipes, minor cleanup (#1013) (MaxLeiter, 1d0a0cc)
  2. Improve Gen UI docs, group providers/gen ui in API reference, misc improvements (#1014) (3 people, co-authored by Jared Palmer and Shu Ding, c65f654)
  3. 3.0 docs tweaks (#1016) (jaredpalmer, 46aab53)
  4. Add onText callback. (#1015) (lgrammel, 0bb0810)
  5. Improve docs (#1018) (shuding, 5841945)
  6. Update tools.mdx (#1019) (jaredpalmer, bac5101)
  7. Update empty-screen.tsx (#1020) (jaredpalmer, 0d7f3fe)
  8. Add onText changeset. (#1017) (lgrammel, b88778f)
  9. Fix render() generator cases and use onText (#1021) (shuding and MaxLeiter, ac20a25)
  10. Fix specifier in example (jaredpalmer, 5708e2c)
  11. Update README.md (jaredpalmer, 2e43cc3)
  12. Fix example (jaredpalmer, 893688a)
  13. Fix example version (jaredpalmer, 5f13867)
  14. Version Packages (#1022) (github-actions[bot], 8329858)
Showing with 5,026 additions and 445 deletions.
  1. +2 −10 docs/pages/docs/api-reference/_meta.json
  2. +2 −0 docs/pages/docs/api-reference/generative-ui/_meta.json
  3. +9 −6 docs/pages/docs/api-reference/{ → generative-ui}/create-ai.mdx
  4. 0 docs/pages/docs/api-reference/{ → generative-ui}/create-streamable-ui.mdx
  5. 0 docs/pages/docs/api-reference/{ → generative-ui}/create-streamable-value.mdx
  6. +1 −1 docs/pages/docs/api-reference/{ → generative-ui}/get-ai-state.mdx
  7. +2 −2 docs/pages/docs/api-reference/{ → generative-ui}/get-mutable-ai-state.mdx
  8. +221 −0 docs/pages/docs/api-reference/generative-ui/render.mdx
  9. 0 docs/pages/docs/api-reference/{ → generative-ui}/stream-data.mdx
  10. +58 −0 docs/pages/docs/api-reference/generative-ui/use-actions.mdx
  11. +9 −7 docs/pages/docs/api-reference/{ → generative-ui}/use-ai-state.mdx
  12. +4 −1 docs/pages/docs/api-reference/{ → generative-ui}/use-ui-state.mdx
  13. +12 −0 docs/pages/docs/api-reference/providers.mdx
  14. +12 −0 docs/pages/docs/api-reference/providers/_meta.json
  15. 0 docs/pages/docs/api-reference/{ → providers}/anthropic-stream.mdx
  16. 0 docs/pages/docs/api-reference/{ → providers}/assistant-response.mdx
  17. 0 docs/pages/docs/api-reference/{ → providers}/aws-bedrock-stream.mdx
  18. 0 docs/pages/docs/api-reference/{ → providers}/cohere-stream.mdx
  19. 0 docs/pages/docs/api-reference/{ → providers}/google-generative-ai-stream.mdx
  20. 0 docs/pages/docs/api-reference/{ → providers}/huggingface-stream.mdx
  21. 0 docs/pages/docs/api-reference/{ → providers}/inkeep-stream.mdx
  22. 0 docs/pages/docs/api-reference/{ → providers}/langchain-stream.mdx
  23. 0 docs/pages/docs/api-reference/{ → providers}/mistral-stream.mdx
  24. 0 docs/pages/docs/api-reference/{ → providers}/openai-stream.mdx
  25. 0 docs/pages/docs/api-reference/{ → providers}/replicate-stream.mdx
  26. +5 −2 docs/pages/docs/api-reference/streaming-react-response.mdx
  27. +0 −74 docs/pages/docs/api-reference/use-actions.mdx
  28. +1 −1 docs/pages/docs/concepts/_meta.json
  29. +209 −71 docs/pages/docs/concepts/ai-rsc.mdx
  30. +3 −2 docs/pages/docs/concepts/tools.mdx
  31. +1 −1 docs/pages/docs/index.mdx
  32. +54 −0 examples/next-ai-rsc/README.md
  33. +380 −0 examples/next-ai-rsc/app/action.tsx
  34. +59 −0 examples/next-ai-rsc/app/globals.css
  35. +79 −0 examples/next-ai-rsc/app/layout.tsx
  36. +183 −0 examples/next-ai-rsc/app/page.tsx
  37. +17 −0 examples/next-ai-rsc/components.json
  38. +17 −0 examples/next-ai-rsc/components/chat-list.tsx
  39. +73 −0 examples/next-ai-rsc/components/empty-screen.tsx
  40. +29 −0 examples/next-ai-rsc/components/external-link.tsx
  41. +20 −0 examples/next-ai-rsc/components/footer.tsx
  42. +50 −0 examples/next-ai-rsc/components/header.tsx
  43. +30 −0 examples/next-ai-rsc/components/llm-stocks/event.tsx
  44. +15 −0 examples/next-ai-rsc/components/llm-stocks/events-skeleton.tsx
  45. +38 −0 examples/next-ai-rsc/components/llm-stocks/index.tsx
  46. +72 −0 examples/next-ai-rsc/components/llm-stocks/message.tsx
  47. +16 −0 examples/next-ai-rsc/components/llm-stocks/spinner.tsx
  48. +134 −0 examples/next-ai-rsc/components/llm-stocks/stock-purchase.tsx
  49. +22 −0 examples/next-ai-rsc/components/llm-stocks/stock-skeleton.tsx
  50. +201 −0 examples/next-ai-rsc/components/llm-stocks/stock.tsx
  51. +9 −0 examples/next-ai-rsc/components/llm-stocks/stocks-skeleton.tsx
  52. +53 −0 examples/next-ai-rsc/components/llm-stocks/stocks.tsx
  53. +111 −0 examples/next-ai-rsc/components/prompt-form.tsx
  54. +15 −0 examples/next-ai-rsc/components/providers.tsx
  55. +57 −0 examples/next-ai-rsc/components/ui/button.tsx
  56. +298 −0 examples/next-ai-rsc/components/ui/icons.tsx
  57. +25 −0 examples/next-ai-rsc/components/ui/input.tsx
  58. +26 −0 examples/next-ai-rsc/components/ui/label.tsx
  59. +31 −0 examples/next-ai-rsc/components/ui/separator.tsx
  60. +24 −0 examples/next-ai-rsc/components/ui/textarea.tsx
  61. +127 −0 examples/next-ai-rsc/components/ui/toast.tsx
  62. +35 −0 examples/next-ai-rsc/components/ui/toaster.tsx
  63. +30 −0 examples/next-ai-rsc/components/ui/tooltip.tsx
  64. +189 −0 examples/next-ai-rsc/components/ui/use-toast.ts
  65. +28 −0 examples/next-ai-rsc/lib/hooks/chat-scroll-anchor.tsx
  66. +23 −0 examples/next-ai-rsc/lib/hooks/use-at-bottom.tsx
  67. +23 −0 examples/next-ai-rsc/lib/hooks/use-enter-submit.tsx
  68. +114 −0 examples/next-ai-rsc/lib/utils/index.tsx
  69. +25 −0 examples/next-ai-rsc/lib/utils/tool-definition.ts
  70. +5 −0 examples/next-ai-rsc/next-env.d.ts
  71. +4 −0 examples/next-ai-rsc/next.config.mjs
  72. +53 −0 examples/next-ai-rsc/package.json
  73. +5 −0 examples/next-ai-rsc/postcss.config.js
  74. BIN examples/next-ai-rsc/public/apple-touch-icon.png
  75. BIN examples/next-ai-rsc/public/favicon-16x16.png
  76. BIN examples/next-ai-rsc/public/favicon.ico
  77. +74 −0 examples/next-ai-rsc/tailwind.config.ts
  78. +27 −0 examples/next-ai-rsc/tsconfig.json
  79. +1 −1 examples/next-anthropic/package.json
  80. +1 −1 examples/next-aws-bedrock/package.json
  81. +1 −1 examples/next-fireworks/package.json
  82. +1 −1 examples/next-google-generative-ai/package.json
  83. +1 −1 examples/next-huggingface/package.json
  84. +1 −1 examples/next-inkeep/package.json
  85. +1 −1 examples/next-langchain/package.json
  86. +1 −1 examples/next-mistral/package.json
  87. +1 −1 examples/next-openai-pages/package.json
  88. +1 −1 examples/next-openai-rate-limits/package.json
  89. +1 −1 examples/next-openai/package.json
  90. +1 −1 examples/next-perplexity/package.json
  91. +1 −1 examples/nuxt-openai/package.json
  92. +1 −1 examples/solidstart-openai/package.json
  93. +1 −1 examples/sveltekit-openai/package.json
  94. +7 −0 packages/core/CHANGELOG.md
  95. +1 −60 packages/core/README.md
  96. +1 −1 packages/core/package.json
  97. +34 −17 packages/core/rsc/streamable.tsx
  98. +17 −6 packages/core/streams/ai-stream.ts
  99. +24 −1 packages/core/streams/openai-stream.test.ts
  100. +38 −10 packages/core/streams/openai-stream.ts
  101. +1,436 −156 pnpm-lock.yaml
  102. +2 −1 turbo.json
12 changes: 2 additions & 10 deletions docs/pages/docs/api-reference/_meta.json

```diff
@@ -1,18 +1,10 @@
 {
+  "generative-ui": "Generative UI",
+  "providers": "Providers",
   "use-assistant": "experimental_useAssistant",
   "use-chat": "useChat",
   "use-completion": "useCompletion",
   "ai-stream": "AIStream",
-  "anthropic-stream": "AnthropicStream",
-  "aws-bedrock-stream": "AWSBedrock*Stream",
-  "cohere-stream": "CohereStream",
-  "google-generative-ai-stream": "GoogleGenerativeAIStream",
-  "huggingface-stream": "HuggingFaceStream",
-  "langchain-stream": "LangChainStream",
-  "openai-stream": "OpenAIStream",
-  "mistral-stream": "MistralStream",
-  "replicate-stream": "ReplicateStream",
-  "inkeep-stream": "InkeepStream",
   "stream-data": "experimental_StreamData",
   "streaming-text-response": "StreamingTextResponse",
   "stream-to-response": "streamToResponse",
```
2 changes: 2 additions & 0 deletions docs/pages/docs/api-reference/generative-ui/_meta.json

```diff
@@ -0,0 +1,2 @@
+{
+}
```
15 changes: 9 additions & 6 deletions docs/pages/docs/api-reference/{ → generative-ui}/create-ai.mdx

```diff
@@ -10,21 +10,24 @@ import { Tabs, Tab } from 'nextra-theme-docs';
 
 ## `createAI`
 
-`createAI` is a function that creates a new `ai/rsc` instance.
+`createAI` creates a new `ai/rsc` instance.
 
 ## Parameters
 
-### `AIState`
+### `initialAIState`: [`AIState`](/docs/concepts/ai-rsc#aistate)
 
-`AIState` is a JSON representation of all the context the LLM needs to read. Usually for a chat app, `AIState` contains the textual conversation history between the user and the assistant. In practice, it can also be used to store other values and meta information such as `createdAt` of each message. `AIState` by default, can be accessed/modified on both Server and Client.
+`AIState` is a JSON representation of all the context the LLM needs to read. Usually for a chat app, `AIState` contains the textual conversation history between the user and the assistant. It can also be used to store other values and information such as a `createdAt` field for each message, or a `chatId` field besides all the messages.
+`AIState` by default, can be accessed/modified on both Server and Client, so the values must be serializable.
 
-### `UIState`
+### `initialUIState`: [`UIState`](/docs/concepts/ai-rsc#uistate)
 
-`UIState` is what the application uses to display the UI. It is a fully client-side state (very similar to `useState`) and can keep data and UI elements returned by the LLM. This state can be anything, but can't be accessed on the server.
+`UIState` is what the application uses to display the UI. It is a client-side state (very similar to `useState`) and contains data and UI elements returned by the LLM and your Server Actions. Unlike `AIState`, `UIState` can only be accessed on the client,
+but this means that it can contain non-serializable values such as functions and React nodes.
 
 ## Returns
 
-The method returns an `<AI>` instance, which is a provider component that is used to wrap a part of your component tree and pass the context value down the tree. Any component that needs the context value can access it, no matter how deep it is in the component tree.
+The method returns an `<AI>` context provider that can be used to wrap the parts of your tree that use
+the client-side hooks [`useAIState`](/docs/api-reference/use-ai-state) and [`useUIState`](/docs/api-reference/use-ui-state).
 
 ## Example
```

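The example body itself is elided by the diff context above. As a hedged sketch of what wiring `createAI` can look like (assuming a `submitUserMessage` Server Action like the ones shown elsewhere in this comparison; the file layout and state types are illustrative):

```tsx
// app/action.tsx (illustrative file layout)
import type { ReactNode } from "react";
import { createAI } from "ai/rsc";
import { submitUserMessage } from "./submit-user-message";

export const AI = createAI({
  // Serializable state shared between server and client.
  initialAIState: [] as { role: string; content: string }[],
  // Client-only state; may hold non-serializable values such as React nodes.
  initialUIState: [] as { id: number; display: ReactNode }[],
  actions: { submitUserMessage },
});
```

The exported `<AI>` provider is then used to wrap the component tree (for example in `app/layout.tsx`) so that `useAIState` and `useUIState` can be called anywhere below it.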
2 changes: 1 addition & 1 deletion docs/pages/docs/api-reference/{ → generative-ui}/get-ai-state.mdx

```diff
@@ -10,7 +10,7 @@ import { Tabs, Tab } from 'nextra-theme-docs';
 
 ## `getAIState`
 
-`getAIState` is called within a Server Action to get the current [AI state](../concepts/ai-rsc#ai-state). If `key` is provided, it will return the value of the specified key in the AI state if it's an object. If it's not an object, it will throw an error.
+`getAIState` is called within a Server Action to get the current [AI state](/docs/concepts/ai-rsc#ai-state). If `key` is provided, it will return the value of the specified key in the AI state if it's an object. If it's not an object, it will throw an error.
 
 The AI state returned is read-only so if you want to make updates to it, you should use [getMutableAIState](./get-mutable-ai-state).
```

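A short sketch of both call forms inside a Server Action (this assumes the AI state is an object with a `messages` key; the names are illustrative):

```tsx
import { getAIState } from "ai/rsc";

async function getMessageCount() {
  "use server";

  // Read the entire (read-only) AI state...
  const state = getAIState();

  // ...or, when the AI state is an object, read a single key.
  // This throws if the AI state is not an object.
  const messages = getAIState("messages");

  return messages.length;
}
```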
4 changes: 2 additions & 2 deletions docs/pages/docs/api-reference/{ → generative-ui}/get-mutable-ai-state.mdx

```diff
@@ -10,7 +10,7 @@ import { Tabs, Tab } from 'nextra-theme-docs';
 
 ## `getMutableAIState`
 
-`getMutableAIState` is called within a Server Action to make updates to the [AI state](../concepts/ai-rsc#aistate).
+`getMutableAIState` is called within a Server Action to make updates to the [AI state](/docs/concepts/ai-rsc#aistate).
 
 ## Methods
 
@@ -24,7 +24,7 @@ The `update` method updates the AI state with a new value.
 
 ### `done`
 
-You must call `.done()` when you're finished updating the AI state.
+You must call `.done()` when you're finished updating the AI state. This "seals" the AI state and marks it ready to be synced with the client as well as external storage.
 
 ## Examples
```

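A short sketch of the `update`/`done` lifecycle inside a Server Action (the message shape is illustrative):

```tsx
import { getMutableAIState } from "ai/rsc";

async function recordAnswer(answer: string) {
  "use server";

  const aiState = getMutableAIState();

  // Stage intermediate updates while work is in progress...
  aiState.update([...aiState.get(), { role: "user", content: answer }]);

  // ...then seal the state once you're finished, so it can be synced
  // with the client and any external storage.
  aiState.done([...aiState.get(), { role: "assistant", content: "Recorded." }]);
}
```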
221 changes: 221 additions & 0 deletions docs/pages/docs/api-reference/generative-ui/render.mdx
---
title: render
---

import { Callout, Tabs, Tab, Steps } from 'nextra-theme-docs';
import { UIPreviewCard, Card } from '@/components/home/card';
import { Browser } from '@/components/home/browser';
import { EventPlanning } from '@/components/home/event-planning';
import { Searching } from '@/components/home/searching';

# render()

The `render` function is a helper for creating streamable UIs from an LLM response.

Note: this function only supports OpenAI SDK-compatible models/providers.

## Streaming Text

By default, `render` will stream the text content of the LLM response wrapped with a React Fragment `<>` tag.

<Tabs items={['Next.js (App Router)']}>
<Tab>
```tsx filename="app/actions.tsx"
import { OpenAI } from "openai";
import { render, getMutableAIState } from "ai/rsc";

const openai = new OpenAI();

async function submitUserMessage(content: string) {
  "use server";

  const aiState = getMutableAIState();

  // Update AI state with the new message.
  aiState.update([
    ...aiState.get(),
    {
      role: "user",
      content,
    },
  ]);

  // render() returns a stream of UI components
  const ui = render({
    model: "gpt-4-turbo",
    provider: openai,
    // You may want to construct messages from your AI state
    messages: [
      { role: "system", content: "You are a flight assistant" },
      { role: "user", content },
    ],
  });

  return {
    id: Date.now(),
    // You can render ui on the client with something like `{message.display}`
    display: ui,
  };
}
```
</Tab>

</Tabs>

You can also customize the React component streamed for text responses by using the `text` key.

<Tabs items={['Next.js (App Router)']}>
<Tab>
```tsx filename="app/actions.tsx"
async function submitUserMessage(content: string) {
  "use server";

  // ... same as above

  const ui = render({
    // ... same as above

    // `text` is called when the model returns a text response (as opposed to a tool call)
    text: ({ content, done }) => {
      // Text is streamed from the LLM, so only close the stream with .done() when it's complete.
      // done() marks the state as available for the client to access.
      if (done) {
        aiState.done([
          ...aiState.get(),
          {
            role: "assistant",
            content,
          },
        ]);
      }

      return <div>{content}</div>;
    },
  });

  return {
    id: Date.now(),
    // You can render UI on the client with something like `{message.display}`; the
    // result returned from `text` is displayed on the client and streamed
    // in as it is returned from the model.
    display: ui,
  };
}
```
</Tab>

</Tabs>

## Tools and Function Calls with React Server Components

The `render` function allows you to map [OpenAI-compatible model/providers with Function Calls and Assistants Tools](https://platform.openai.com/docs/guides/function-calling) to [React Server Components](https://vercel.com/blog/understanding-react-server-components) using the `tools` key.
Note that both `text` and `tools` can be specified at the same time, with `text` acting as the fallback for when no function is called by the model.

If you use other models, you can [prompt engineer](/docs/concepts/prompt-engineering) them
into returning structured data and manually handle the streaming UI with [`createStreamableUI`](./create-streamable-ui) and [`createStreamableValue`](./create-streamable-value).

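For the manual path, a hedged sketch of streaming UI yourself with `createStreamableUI` (here `findFlights`, `Spinner`, and `FlightList` are app-defined assumptions, not part of the SDK):

```tsx
import { createStreamableUI } from "ai/rsc";

async function searchFlights(query: string) {
  "use server";

  // Start the stream with a loading state.
  const uiStream = createStreamableUI(<Spinner />);

  // Do the work without blocking the response; close the stream when results arrive.
  (async () => {
    const results = await findFlights(query);
    uiStream.done(<FlightList flights={results} />);
  })();

  return { id: Date.now(), display: uiStream.value };
}
```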
Tool/Function schema definitions use [Zod](https://zod.dev/) schemas to specify the function parameters. `render` will automatically validate the model's function call against the schema and throw an error if the call is invalid.

To install Zod, run:

```sh
pnpm install zod
```

Each tool specified also accepts a nested `render` function for returning React components. There are a few different signatures you can use:

- `() => ReactNode` - a function that returns a React Node
- `async () => ReactNode` - an async function that returns a React Node
- `function* () {}` - a generator function that can yield and return React Nodes
- `async function* () {}` - an async generator function that can yield and return React Nodes

If you use a generator signature, you can `yield` React Nodes and they will be sent as distinct updates to the client. This is very powerful for loading states and agentic, multi-step behaviors.

<Tabs items={['Next.js (App Router)']}>
<Tab>
```tsx filename="app/actions.tsx"
import { OpenAI } from "openai";
import { render, getMutableAIState } from "ai/rsc";
import { z } from "zod";

const openai = new OpenAI();

async function submitUserMessage(content: string) {
  "use server";

  const aiState = getMutableAIState();

  // Update AI state with the new message.
  aiState.update([
    ...aiState.get(),
    {
      role: "user",
      content,
    },
  ]);

  // render() returns a stream of UI components
  const ui = render({
    model: "gpt-4-turbo",
    provider: openai,
    // You may want to construct messages from your AI state
    messages: [
      { role: "system", content: "You are a flight assistant" },
      { role: "user", content },
    ],
    // `text` is called when the model returns a text response (as opposed to a tool call)
    text: ({ content, done }) => {
      // Only close the stream with .done() when the text is complete.
      // done() marks the state as available for the client to access.
      if (done) {
        aiState.done([
          ...aiState.get(),
          {
            role: "assistant",
            content,
          },
        ]);
      }

      return <div>{content}</div>;
    },
    tools: {
      get_flight_info: {
        description: "Get the information for a flight",
        parameters: z
          .object({
            flightNumber: z.string().describe("the number of the flight"),
          })
          .required(),
        // flightNumber is inferred from the parameters passed above.
        // <Spinner/>, getFlightInfo, and <FlightCard/> are defined elsewhere
        // in your application.
        render: async function* ({ flightNumber }) {
          yield <Spinner />;
          const flightInfo = await getFlightInfo(flightNumber);

          aiState.done([
            ...aiState.get(),
            {
              role: "function",
              name: "get_flight_info",
              // Content can be any string to provide context to the LLM in the rest of the conversation
              content: JSON.stringify(flightInfo),
            },
          ]);

          return <FlightCard flightInfo={flightInfo} />;
        },
      },
    },
  });

  return {
    id: Date.now(),
    // You can render UI on the client with something like `{message.display}`; the
    // result yielded from `render` or returned from `text` is displayed on the client
    // and streamed in as it is returned from the model.
    display: ui,
  };
}
```
</Tab>

</Tabs>
58 changes: 58 additions & 0 deletions docs/pages/docs/api-reference/generative-ui/use-actions.mdx
---
title: useActions
layout:
toc: false
---

import { Tabs, Tab } from 'nextra-theme-docs';

# useActions

## `useActions`

`useActions` is a hook that lets you access your Server Actions from the client. You must access these actions through this hook, because they are patched as they pass through the context; if you import them directly instead of using `useActions`, you may run into a "Cannot find Client Component" error.
This is particularly useful for building interfaces that require user interactions with the server.

<Tabs items={['Next.js (App Router)']}>
<Tab>
```tsx filename="app/action.tsx"
import { createStreamableUI } from "ai/rsc";

async function viewStock(symbol: string) {
  "use server";

  // getStockPrice is a helper defined elsewhere in your application.
  const price = getStockPrice(symbol);
  const uiStream = createStreamableUI();

  uiStream.done(
    <div>
      {symbol}: {price}
    </div>
  );

  return {
    id: Date.now(),
    display: uiStream.value,
  };
}
```

```tsx filename="app/components/button.tsx"
"use client";

import { useActions, useUIState } from "ai/rsc";

export function Button() {
  const { viewStock } = useActions();
  const [messages, setMessages] = useUIState();

  return (
    <button
      onClick={async () => {
        const newMessage = await viewStock('NVDA');

        setMessages((currentMessages: any) => [...currentMessages, newMessage]);
      }}
    >
      View Stock
    </button>
  );
}
```

</Tab>
</Tabs>