
Comparing changes

Choose two branches to see what's changed or to start a new pull request.

base repository: vercel/ai
base: ai@2.2.23
head repository: vercel/ai
compare: ai@2.2.24
  • 6 commits
  • 48 files changed
  • 6 contributors

Commits on Nov 15, 2023

  1. Docs: experimental_StreamingReactResponse (#743)

    Co-authored-by: Max Leiter <maxwell.leiter@gmail.com>
    lgrammel and MaxLeiter authored Nov 15, 2023
    a176761
  2. Add types for data stream lines. (#741)

    Co-authored-by: Max Leiter <max.leiter@vercel.com>
    lgrammel and MaxLeiter authored Nov 15, 2023
    3e2299e

Commits on Nov 16, 2023

  1. fd7d445
  2. Introduce useAssistant (experimental). (#728)

    Co-authored-by: Safi Nettah <nettah.safi@protonmail.com>
    Co-authored-by: Max Leiter <max.leiter@vercel.com>
    3 people authored Nov 16, 2023
    69ca8f5
  3. Solid.js: Add complex response parsing and StreamData support to useChat (#738)
    
    Co-authored-by: Max Leiter <max.leiter@vercel.com>
    lgrammel and MaxLeiter authored Nov 16, 2023
    70bd2ac
  4. Version Packages (#747)

    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
    github-actions[bot] authored Nov 16, 2023
    785a245
Showing 48 changed files with 1,809 additions and 550 deletions.
  1. +156 −0 docs/pages/docs/api-reference/streaming-react-response.md
  2. +5 −2 docs/pages/docs/api-reference/use-chat.mdx
  3. +82 −0 docs/pages/docs/guides/providers/openai.mdx
  4. +1 −1 examples/next-anthropic/package.json
  5. +1 −1 examples/next-fireworks/package.json
  6. +1 −1 examples/next-huggingface/package.json
  7. +1 −1 examples/next-langchain/package.json
  8. +1 −1 examples/next-openai-rate-limits/package.json
  9. +63 −0 examples/next-openai/app/api/assistant/assistant-setup.md
  10. +140 −0 examples/next-openai/app/api/assistant/route.ts
  11. +66 −0 examples/next-openai/app/assistant/page.tsx
  12. +2 −3 examples/next-openai/app/stream-react-response/action.tsx
  13. +1 −1 examples/next-openai/app/vision/page.tsx
  14. +1 −1 examples/next-openai/package.json
  15. +1 −1 examples/nuxt-openai/package.json
  16. +1 −1 examples/solidstart-openai/package.json
  17. +97 −0 examples/solidstart-openai/src/routes/api/chat-with-functions/index.ts
  18. +88 −0 examples/solidstart-openai/src/routes/function-calling/index.tsx
  19. +1 −1 examples/sveltekit-openai/package.json
  20. +8 −0 packages/core/CHANGELOG.md
  21. +1 −1 packages/core/package.json
  22. +1 −0 packages/core/react/index.ts
  23. +109 −0 packages/core/react/use-assistant.ts
  24. +51 −209 packages/core/react/use-chat.ts
  25. +5 −0 packages/core/react/use-chat.ui.test.tsx
  26. +134 −0 packages/core/shared/call-api.ts
  27. +101 −31 packages/core/{react → shared}/parse-complex-response.test.ts
  28. +36 −39 packages/core/{react → shared}/parse-complex-response.ts
  29. +87 −0 packages/core/shared/process-chat-stream.ts
  30. +25 −0 packages/core/shared/process-message-stream.ts
  31. +65 −0 packages/core/shared/stream-parts.test.ts
  32. +267 −0 packages/core/shared/stream-parts.ts
  33. +11 −0 packages/core/shared/types.ts
  34. +16 −45 packages/core/shared/utils.test.ts
  35. +12 −82 packages/core/shared/utils.ts
  36. +75 −99 packages/core/solid/use-chat.ts
  37. +60 −0 packages/core/streams/assistant-response.ts
  38. +1 −1 packages/core/streams/cohere-stream.test.ts
  39. +1 −1 packages/core/streams/huggingface-stream.test.ts
  40. +1 −0 packages/core/streams/index.ts
  41. +2 −2 packages/core/streams/langchain-stream.test.ts
  42. +5 −5 packages/core/streams/openai-stream.test.tsx
  43. +9 −4 packages/core/streams/openai-stream.ts
  44. +1 −1 packages/core/streams/replicate-stream.test.ts
  45. +4 −4 packages/core/streams/stream-data.ts
  46. +1 −1 packages/core/streams/streaming-react-response.ts
  47. +9 −9 pnpm-lock.yaml
  48. +2 −1 turbo.json
156 changes: 156 additions & 0 deletions docs/pages/docs/api-reference/streaming-react-response.md
@@ -0,0 +1,156 @@
---
title: experimental_StreamingReactResponse
layout:
  toc: false
---

# `experimental_StreamingReactResponse`

The `experimental_StreamingReactResponse` class allows you to stream React component responses.

<Callout>
The `experimental_` prefix indicates that the API is not yet stable and may
change in the future without a major version bump.

It is currently only supported via `ai/react`'s `useChat` hook.

</Callout>

## `experimental_StreamingReactResponse(res: ReadableStream, options?: ResponseOptions): Response` [#streamingreactresponse]

The `experimental_StreamingReactResponse` class is designed to facilitate streaming React responses in a server action environment. It can handle and stream both raw content and data payloads, including special UI payloads, through nested promises.

## Parameters

### `res: ReadableStream`

This parameter should be a `ReadableStream`, which encapsulates the HTTP response's content. It represents the stream from which the response is read and processed.

### `options?: {ui?: Function, data?: experimental_StreamData}`

This optional parameter allows additional configurations for rendering React components and handling streamed data.

The options object can include:

- `ui?: (message: {content: string, data?: JSONValue[] | undefined}) => UINode | Promise<UINode>`: A function that receives a message object with `content` and optional `data` fields. This function should return a React component (as `UINode`) for each chunk in the stream. The `data` attribute in the message is available when the `data` option is configured to include stream data.
- `data?: experimental_StreamData`: An instance of `experimental_StreamData` used to process and stream data along with the response.

## Returns

The response body resolves to a `Promise<ReactResponseRow>`: each row carries the rendered UI components (`ui`), the accumulated raw content (`content`), and a `next` property pointing to the subsequent row, or `null` if it is the last row. This structure allows continuous streaming and rendering of data in a React-based UI.
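The row chaining described above can be sketched as a small TypeScript type. This is a hypothetical shape for illustration only; the actual internal types in the `ai` package may differ.

```typescript
// Hypothetical sketch of the row shape described above.
type UINode = unknown; // stands in for a React node in this sketch

interface ReactResponseRow {
  ui: UINode; // rendered React component for this chunk
  content: string; // accumulated raw text content so far
  next: Promise<ReactResponseRow> | null; // null on the last row
}

// Walking the chain: each resolved row is rendered, then we await `next`
// until it is null. Returns the final accumulated content.
async function consumeRows(first: Promise<ReactResponseRow>): Promise<string> {
  let row: ReactResponseRow | null = await first;
  let finalContent = '';
  while (row !== null) {
    finalContent = row.content; // content is cumulative per row
    row = row.next === null ? null : await row.next;
  }
  return finalContent;
}
```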

## Example

### Server-Side Implementation

This example shows `experimental_StreamingReactResponse` used within a server action in a Next.js environment. Note that this functionality is only available in server actions. For more details, see the [Next.js Documentation on Server Actions](https://nextjs.org/docs/app/api-reference/functions/server-actions).

In this server-side script, the `handler` function interacts with the OpenAI API to process and stream chat completions. The responses are streamed using `OpenAIStream` and subsequently managed by `experimental_StreamingReactResponse` for dynamic React component rendering.

Server:

```tsx
// app/stream-react-response/action.tsx
'use server';

import OpenAI from 'openai';
import { OpenAIStream, experimental_StreamingReactResponse, Message } from 'ai';
import { Counter } from './counter';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

export async function handler({ messages }: { messages: Message[] }) {
  // Request the OpenAI API for the response based on the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: messages.map(m => ({
      role: m.role,
      content: m.content,
    })),
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Respond with the stream
  return new experimental_StreamingReactResponse(stream, {
    ui({ content }) {
      return (
        <div className="italic text-red-800">
          Visit Next.js docs at{' '}
          <a
            href="https://nextjs.org/docs"
            target="_blank"
            className="underline"
          >
            https://nextjs.org/docs
          </a>
        </div>
      );
    },
  });
}
```

### Client-Side Setup

On the client side, the `useChat` hook from `ai/react` is used to manage the chat interaction. The `Chat` component below renders the chat interface, including the input form and message display. It integrates the server-side stream, displaying messages and UI elements as they are received.

```tsx
// app/stream-react-response/page.tsx

import { handler } from './action';
import { Chat } from './chat';

export const runtime = 'edge';

export default function Page() {
  return <Chat handler={handler} />;
}
```

```tsx
// app/stream-react-response/chat.tsx

'use client';

import { useChat } from 'ai/react';

export function Chat({ handler }: { handler: any }) {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: handler,
  });

  return (
    <div className="container mx-auto p-4">
      <ul>
        {messages.map((m, index) => (
          <li key={index}>
            {m.role === 'user' ? 'User: ' : 'AI: '}
            {m.role === 'user' ? m.content : m.ui}
          </li>
        ))}
      </ul>

      <form
        className="flex gap-2 fixed bottom-0 left-0 w-full p-4 border-t"
        onSubmit={handleSubmit}
      >
        <input
          className="border border-gray-500 rounded p-2 w-full"
          placeholder="what is Next.js..."
          value={input}
          onChange={handleInputChange}
          autoFocus
        />
        <button type="submit" className="bg-black text-white rounded px-4">
          Send
        </button>
      </form>
    </div>
  );
}
```
7 changes: 5 additions & 2 deletions docs/pages/docs/api-reference/use-chat.mdx
@@ -289,8 +289,11 @@ The `useChat` function returns an object containing several helper methods and v
],
[
'handleSubmit',
'(e: React.FormEvent<HTMLFormElement>) => void',
'Form submission handler that automatically resets the input field and appends a user message.',
'(e: React.FormEvent<HTMLFormElement>, options: ChatRequestOptions) => void',
'Form submission handler that automatically resets the input field and appends a user message.' +
'The chat request options can have a `data` property ' +
'that you can use to send an extra body object to the API endpoint in addition to the `messages` array.',
,
],
[
'isLoading',
82 changes: 82 additions & 0 deletions docs/pages/docs/guides/providers/openai.mdx
@@ -311,3 +311,85 @@ const openai = new OpenAI({
The OpenAI SDK now supports Edge Runtime out of the box, so we recommend using
the official `openai` library instead.
</Callout>

## Guide: Using Images with GPT-4 Vision and useChat

You can use the extra `data` property of `handleSubmit` to send additional data,
such as an image URL or a base64-encoded image, to the server.

```tsx filename="app/page.tsx" showLineNumbers
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.length > 0
        ? messages.map(m => (
            <div key={m.id} className="whitespace-pre-wrap">
              {m.role === 'user' ? 'User: ' : 'AI: '}
              {m.content}
            </div>
          ))
        : null}

      <form
        onSubmit={e => {
          handleSubmit(e, {
            // data could contain the URL of an image that was uploaded to your service, or a base64 encoded image:
            data: {
              imageUrl:
                'https://upload.wikimedia.org/wikipedia/commons/thumb/3/3c/Field_sparrow_in_CP_%2841484%29_%28cropped%29.jpg/733px-Field_sparrow_in_CP_%2841484%29_%28cropped%29.jpg',
            },
          });
        }}
      >
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="What does the image show..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```

On the server, you can pass that information to GPT-4 Vision.

```tsx filename="app/api/chat/route.ts" showLineNumbers
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
});

export async function POST(req: Request) {
  // 'data' contains the additional data that you have sent:
  const { messages, data } = await req.json();

  const initialMessages = messages.slice(0, -1);
  const currentMessage = messages[messages.length - 1];

  const response = await openai.chat.completions.create({
    model: 'gpt-4-vision-preview', // use a GPT-4 vision model
    stream: true,
    max_tokens: 150,
    messages: [
      ...initialMessages,
      {
        ...currentMessage,
        content: [
          { type: 'text', text: currentMessage.content },

          // forward the image information to OpenAI:
          {
            type: 'image_url',
            image_url: data.imageUrl,
          },
        ],
      },
    ],
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```
2 changes: 1 addition & 1 deletion examples/next-anthropic/package.json
@@ -10,7 +10,7 @@
},
"dependencies": {
"@anthropic-ai/sdk": "^0.6.2",
"ai": "2.2.23",
"ai": "2.2.24",
"next": "13.4.12",
"react": "18.2.0",
"react-dom": "^18.2.0"
2 changes: 1 addition & 1 deletion examples/next-fireworks/package.json
@@ -9,7 +9,7 @@
"lint": "next lint"
},
"dependencies": {
"ai": "2.2.23",
"ai": "2.2.24",
"next": "13.4.12",
"openai": "4.16.1",
"react": "18.2.0",
2 changes: 1 addition & 1 deletion examples/next-huggingface/package.json
@@ -11,7 +11,7 @@
"dependencies": {
"@huggingface/inference": "^2.5.1",
"next": "13.4.12",
"ai": "2.2.23",
"ai": "2.2.24",
"react": "18.2.0",
"react-dom": "^18.2.0"
},
2 changes: 1 addition & 1 deletion examples/next-langchain/package.json
@@ -9,7 +9,7 @@
"lint": "next lint"
},
"dependencies": {
"ai": "2.2.23",
"ai": "2.2.24",
"langchain": "^0.0.129",
"next": "13.4.12",
"react": "18.2.0",
2 changes: 1 addition & 1 deletion examples/next-openai-rate-limits/package.json
@@ -11,7 +11,7 @@
"dependencies": {
"@upstash/ratelimit": "^0.4.3",
"@vercel/kv": "^0.2.2",
"ai": "2.2.23",
"ai": "2.2.24",
"next": "13.4.12",
"openai": "4.16.1",
"react": "18.2.0",
63 changes: 63 additions & 0 deletions examples/next-openai/app/api/assistant/assistant-setup.md
@@ -0,0 +1,63 @@
# Home Automation Assistant Example

## Setup

### Create OpenAI Assistant

[OpenAI Assistant Website](https://platform.openai.com/assistants)

Create a new assistant. Enable Code interpreter. Add the following functions and instructions to the assistant.

Then add the assistant ID to the `.env.local` file as `ASSISTANT_ID=your-assistant-id`.

### Instructions

```
You are an assistant with access to a home automation system. You can get and set the temperature in the bedroom, home office, living room, kitchen and bathroom.
The system uses temperature in Celsius. If the user requests Fahrenheit, you should convert the temperature to Fahrenheit.
```
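The instructions above ask the assistant to convert to Fahrenheit when the user requests it. The conversion the assistant is expected to perform is the standard formula, sketched here for reference:

```typescript
// Standard Celsius-to-Fahrenheit conversion the assistant is asked to apply
// on request (the home automation system itself stores Celsius).
function celsiusToFahrenheit(celsius: number): number {
  return celsius * (9 / 5) + 32;
}
```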

### getRoomTemperature function

```json
{
  "name": "getRoomTemperature",
  "description": "Get the temperature in a room",
  "parameters": {
    "type": "object",
    "properties": {
      "room": {
        "type": "string",
        "enum": ["bedroom", "home office", "living room", "kitchen", "bathroom"]
      }
    },
    "required": ["room"]
  }
}
```

### setRoomTemperature function

```json
{
  "name": "setRoomTemperature",
  "description": "Set the temperature in a room",
  "parameters": {
    "type": "object",
    "properties": {
      "room": {
        "type": "string",
        "enum": ["bedroom", "home office", "living room", "kitchen", "bathroom"]
      },
      "temperature": { "type": "number" }
    },
    "required": ["room", "temperature"]
  }
}
```
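For illustration, here is a hypothetical in-memory implementation of the two functions defined above. The names and parameter shapes follow the JSON schemas, but the storage object and initial temperatures are invented for this sketch — a real handler would read and write state through whatever interface your home automation system exposes.

```typescript
// Hypothetical room-temperature store for this sketch (values in Celsius).
type Room = 'bedroom' | 'home office' | 'living room' | 'kitchen' | 'bathroom';

const temperatures: Record<Room, number> = {
  bedroom: 20,
  'home office': 21,
  'living room': 21,
  kitchen: 22,
  bathroom: 23,
};

// Matches the getRoomTemperature schema: { room } -> current temperature.
function getRoomTemperature({ room }: { room: Room }): number {
  return temperatures[room];
}

// Matches the setRoomTemperature schema: { room, temperature } -> new value.
function setRoomTemperature({
  room,
  temperature,
}: {
  room: Room;
  temperature: number;
}): number {
  temperatures[room] = temperature;
  return temperature;
}
```

The assistant route would call these when the run requires tool outputs, returning each result as the function's output.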

## Run

1. Run `pnpm run dev` in `examples/next-openai`
2. Go to http://localhost:3000/assistant