<Note>
This feature is experimental and may change in the future. Currently, only the
`generateText` and `streamText` functions support telemetry.
</Note>
The Vercel AI SDK uses [OpenTelemetry](https://opentelemetry.io/) to collect telemetry data.
OpenTelemetry is an open-source observability framework designed to provide
standardized instrumentation for collecting telemetry data.
## Enabling telemetry
For Next.js applications, please follow the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry) to enable telemetry first.
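In practice, that guide amounts to adding an `instrumentation.ts` file that registers an OpenTelemetry provider. A minimal sketch, assuming the `@vercel/otel` helper package (the service name is just an illustrative placeholder):

```ts
// instrumentation.ts (in the project root or src/ directory)
import { registerOTel } from '@vercel/otel';

export function register() {
  // Registers a tracer provider so that spans recorded by the AI SDK are exported.
  registerOTel('my-next-app');
}
```

Depending on your Next.js version, you may also need to enable `experimental.instrumentationHook` in `next.config.js`.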
You can then use the `experimental_telemetry` option to enable telemetry on specific `streamText` and `generateText` calls while
the feature is experimental:
```ts highlight="4"
const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});
```
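Enabling the option records spans through OpenTelemetry, so a tracer provider and exporter must be registered for the data to show up anywhere (the Next.js guide above takes care of this). Outside of Next.js, a minimal Node.js sketch that prints spans to the console, assuming the SDK picks up the globally registered OpenTelemetry tracer provider:

```ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import {
  ConsoleSpanExporter,
  SimpleSpanProcessor,
} from '@opentelemetry/sdk-trace-base';

// Print every finished span (including the ai.* spans) to stdout.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();
```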
## Telemetry Metadata
You can provide a `functionId` to identify the function that the telemetry data is for,
and `metadata` to include additional information in the telemetry data.
```ts highlight="6-10"
const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-awesome-function',
    metadata: {
      something: 'custom',
      someOtherThing: 'other-value',
    },
  },
});
```
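With this configuration, the resulting spans carry the function id and metadata as attributes, e.g. `ai.telemetry.functionId: 'my-awesome-function'` and `ai.telemetry.metadata.something: 'custom'` (see [Basic span information](#basic-span-information)), which makes it easy to filter and group traces in your observability backend.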
## Collected Data
### generateText function
`generateText` records 3 types of spans:
- `ai.generateText`: the full length of the generateText call. It contains 1 or more `ai.generateText.doGenerate` spans.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.generateText`
  - `ai.prompt`: the prompt that was used when calling `generateText`
  - `ai.settings.maxToolRoundtrips`: the maximum number of tool roundtrips that were set
- `ai.generateText.doGenerate`: a provider doGenerate call. It can contain `ai.toolCall` spans.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.generateText`
  - `ai.prompt.format`: the format of the prompt
  - `ai.prompt.messages`: the messages that were passed into the provider
- `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.
### streamText function
`streamText` records 3 types of spans:
- `ai.streamText`: the full length of the streamText call. It contains an `ai.streamText.doStream` span.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.streamText`
  - `ai.prompt`: the prompt that was used when calling `streamText`
- `ai.streamText.doStream`: a provider doStream call.
  This span contains an `ai.stream.firstChunk` event that is emitted when the first chunk of the stream is received.
  The `doStream` span can also contain `ai.toolCall` spans.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.streamText`
  - `ai.prompt.format`: the format of the prompt
  - `ai.prompt.messages`: the messages that were passed into the provider
- `ai.toolCall`: a tool call that is made as part of the streamText call. See [Tool call spans](#tool-call-spans) for more details.
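Telemetry is enabled the same way for streaming. A short sketch, assuming the same OpenAI provider setup as in the earlier examples (the `stream-story` function id is just an illustrative name); the `ai.streamText` span covers the full duration of the stream:

```ts
const result = await streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true, functionId: 'stream-story' },
});

// The ai.streamText span ends once the stream has been fully consumed.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```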
## Span Details
### Basic span information
Many spans (`ai.generateText`, `ai.generateText.doGenerate`, `ai.streamText`, `ai.streamText.doStream`) contain the following attributes:
- `ai.finishReason`: the reason why the generation finished
- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.result.text`: the text that was generated
- `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
- `ai.settings.maxRetries`: the maximum number of retries that were set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.completionTokens`: the number of completion tokens that were used
- `ai.usage.promptTokens`: the number of prompt tokens that were used
- `resource.name`: the functionId that was set through `telemetry.functionId`
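If you want to act on these attributes programmatically, for example to log token usage per function id, one option is a custom span processor. A minimal sketch, assuming the standard OpenTelemetry SDK interfaces; the attribute names come from the list above:

```ts
import { Context } from '@opentelemetry/api';
import { Span, SpanProcessor, ReadableSpan } from '@opentelemetry/sdk-trace-base';

// Logs prompt/completion token usage for every finished AI SDK span.
class AiUsageLogger implements SpanProcessor {
  onStart(_span: Span, _parentContext: Context): void {}

  onEnd(span: ReadableSpan): void {
    if (!span.name.startsWith('ai.')) return;

    const promptTokens = span.attributes['ai.usage.promptTokens'];
    const completionTokens = span.attributes['ai.usage.completionTokens'];
    const functionId = span.attributes['ai.telemetry.functionId'];

    if (promptTokens !== undefined) {
      console.log(
        `${functionId ?? span.name}: ${promptTokens} prompt / ${completionTokens} completion tokens`,
      );
    }
  }

  shutdown(): Promise<void> {
    return Promise.resolve();
  }

  forceFlush(): Promise<void> {
    return Promise.resolve();
  }
}
```

Register it on your tracer provider, e.g. with `provider.addSpanProcessor(new AiUsageLogger())`.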
### Tool call spans
Tool call spans (`ai.toolCall`) contain the following attributes:
- `ai.toolCall.name`: the name of the tool
- `ai.toolCall.id`: the id of the tool call
- `ai.toolCall.args`: the parameters of the tool call
- `ai.toolCall.result`: the result of the tool call. Only available if the tool call is successful and the result is serializable.
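For reference, a call that produces `ai.toolCall` spans looks like the sketch below, assuming the `tool` helper from the SDK and a hypothetical `weather` tool (any tool name and schema works the same way):

```ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    // Each executed tool call is recorded as an ai.toolCall span with
    // ai.toolCall.name, ai.toolCall.id, ai.toolCall.args, and ai.toolCall.result.
    weather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperature: 72 }),
    }),
  },
  maxToolRoundtrips: 2,
  experimental_telemetry: { isEnabled: true },
});
```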
# Vercel AI SDK, Next.js, and OpenAI Chat Telemetry Example
This example shows how to use the [Vercel AI SDK](https://sdk.vercel.ai/docs) with [Next.js](https://nextjs.org/) and [OpenAI](https://openai.com) to create a ChatGPT-like AI-powered streaming chat bot.
## Deploy your own
Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme&utm_campaign=ai-sdk-example):
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fvercel%2Fai%2Ftree%2Fmain%2Fexamples%2Fnext-openai-telemetry&env=OPENAI_API_KEY&envDescription=OpenAI%20API%20Key&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=vercel-ai-chat-openai-telemetry&repository-name=vercel-ai-chat-openai-telemetry)
## How to use
Execute [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) with [npm](https://docs.npmjs.com/cli/init), [Yarn](https://yarnpkg.com/lang/en/docs/cli/create/), or [pnpm](https://pnpm.io) to bootstrap the example.
To run the example locally you need to:

1. Sign up at [OpenAI's Developer Platform](https://platform.openai.com/signup).
2. Go to [OpenAI's dashboard](https://platform.openai.com/account/api-keys) and create an API key.
3. Set the required OpenAI environment variable to that key, as shown in [the example env file](./.env.local.example), but in a new file called `.env.local`.
4. `pnpm install` to install the required dependencies.
5. `pnpm dev` to launch the development server.
## Learn More
To learn more about OpenAI, Next.js, and the Vercel AI SDK take a look at the following resources:
- [Vercel AI SDK docs](https://sdk.vercel.ai/docs)
- [Vercel AI Playground](https://play.vercel.ai)
- [OpenAI Documentation](https://platform.openai.com/docs) - learn about OpenAI features and API.
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.