Commit 671331b · authored Jul 11, 2024

feat (core): add experimental OpenTelemetry support for generateText and streamText (#1884)

1 parent 0675d71 · commit 671331b

35 files changed: +2856 -292 lines
 

‎.changeset/new-bugs-admire.md

+5 (new file)

```md
---
'ai': patch
---

feat (core): add experimental OpenTelemetry support for generateText and streamText
```
New file (+113): Telemetry documentation page

---
title: Telemetry
description: Using OpenTelemetry with AI SDK Core
---

# Telemetry

<Note type="warning">
  This feature is experimental and may change in the future. Currently, only the
  `generateText` and `streamText` functions support telemetry.
</Note>

The Vercel AI SDK uses [OpenTelemetry](https://opentelemetry.io/) to collect telemetry data.
OpenTelemetry is an open-source observability framework designed to provide
standardized instrumentation for collecting telemetry data.

## Enabling telemetry

For Next.js applications, please follow the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry) to enable telemetry first.
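As a sketch of that setup (mirroring the `instrumentation.ts` and `next.config.js` files of the example app added in this commit), OpenTelemetry is registered through `@vercel/otel`:

```ts
// instrumentation.ts (minimal sketch, mirroring the example app in this commit)
import { registerOTel } from '@vercel/otel';

export function register() {
  // Registers an OpenTelemetry tracer provider for the Next.js app.
  registerOTel({ serviceName: 'next-app' });
}
```

The example app also sets `experimental.instrumentationHook: true` in `next.config.js` so that Next.js loads the instrumentation file.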
You can then use the `experimental_telemetry` option to enable telemetry on specific `streamText` and `generateText` calls while the feature is experimental:

```ts highlight="4"
const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});
```
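Outside of Next.js, you can register your own tracer with the OpenTelemetry Node SDK before calling the AI SDK. A minimal sketch, based on the `examples/ai-core` scripts added in this commit (it assumes the `@opentelemetry/sdk-node`, `@opentelemetry/sdk-trace-node`, and `@opentelemetry/auto-instrumentations-node` packages and an OpenAI API key in the environment):

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';

// Register a tracer provider that prints recorded spans to the console.
const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

async function main() {
  const result = await generateText({
    model: openai('gpt-3.5-turbo'),
    prompt: 'Invent a new holiday and describe its traditions.',
    experimental_telemetry: { isEnabled: true },
  });

  console.log(result.text);

  // Flush pending spans and shut the SDK down when done.
  await sdk.shutdown();
}

main().catch(console.error);
```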
## Telemetry Metadata

You can provide a `functionId` to identify the function that the telemetry data is for,
and `metadata` to include additional information in the telemetry data.

```ts highlight="6-10"
const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-awesome-function',
    metadata: {
      something: 'custom',
      someOtherThing: 'other-value',
    },
  },
});
```
## Collected Data

### generateText function

`generateText` records 3 types of spans:

- `ai.generateText`: the full length of the generateText call. It contains one or more `ai.generateText.doGenerate` spans.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.generateText`
  - `ai.prompt`: the prompt that was used when calling `generateText`
  - `ai.settings.maxToolRoundtrips`: the maximum number of tool roundtrips that were set
- `ai.generateText.doGenerate`: a provider doGenerate call. It can contain `ai.toolCall` spans.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.generateText`
  - `ai.prompt.format`: the format of the prompt
  - `ai.prompt.messages`: the messages that were passed into the provider
- `ai.toolCall`: a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.

### streamText function

`streamText` records 3 types of spans:

- `ai.streamText`: the full length of the streamText call. It contains an `ai.streamText.doStream` span.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.streamText`
  - `ai.prompt`: the prompt that was used when calling `streamText`
- `ai.streamText.doStream`: a provider doStream call.
  This span contains an `ai.stream.firstChunk` event that is emitted when the first chunk of the stream is received.
  The `doStream` span can also contain `ai.toolCall` spans.
  It contains the [basic span information](#basic-span-information) and the following attributes:
  - `operation.name`: `ai.streamText`
  - `ai.prompt.format`: the format of the prompt
  - `ai.prompt.messages`: the messages that were passed into the provider
- `ai.toolCall`: a tool call that is made as part of the streamText call. See [Tool call spans](#tool-call-spans) for more details.
## Span Details

### Basic span information

Many spans (`ai.generateText`, `ai.generateText.doGenerate`, `ai.streamText`, `ai.streamText.doStream`) contain the following attributes:

- `ai.finishReason`: the reason why the generation finished
- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.result.text`: the text that was generated
- `ai.result.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
- `ai.settings.maxRetries`: the maximum number of retries that were set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.completionTokens`: the number of completion tokens that were used
- `ai.usage.promptTokens`: the number of prompt tokens that were used
- `resource.name`: the functionId that was set through `telemetry.functionId`
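For illustration, the unit tests added in this commit expect an `ai.generateText` span to carry attributes along these lines (a sketch; the exact values depend on the model, settings, headers, and metadata you pass):

```ts
// Sketch of recorded span attributes, based on the test expectations in this commit.
const exampleGenerateTextSpanAttributes = {
  'operation.name': 'ai.generateText',
  'resource.name': 'test-function-id',
  'ai.model.provider': 'mock-provider',
  'ai.model.id': 'mock-model-id',
  'ai.prompt': '{"prompt":"prompt"}',
  'ai.settings.maxToolRoundtrips': 0,
  'ai.telemetry.functionId': 'test-function-id',
  'ai.telemetry.metadata.test1': 'value1',
  'ai.request.headers.header1': 'value1',
  'ai.finishReason': 'stop',
  'ai.result.text': 'Hello, world!',
  'ai.usage.promptTokens': 10,
  'ai.usage.completionTokens': 20,
};
```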
### Tool call spans

Tool call spans (`ai.toolCall`) contain the following attributes:

- `ai.toolCall.name`: the name of the tool
- `ai.toolCall.id`: the id of the tool call
- `ai.toolCall.args`: the parameters of the tool call
- `ai.toolCall.result`: the result of the tool call. Only available if the tool call is successful and the result is serializable.
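The tests added in this commit illustrate what such a span records for a successful call of a tool named `tool1` (a sketch; the values depend on the tool and its arguments):

```ts
// Sketch of the attributes on an `ai.toolCall` span, taken from the test expectations.
const exampleToolCallSpanAttributes = {
  'ai.toolCall.name': 'tool1',
  'ai.toolCall.id': 'call-1',
  'ai.toolCall.args': '{"value":"value"}',
  'ai.toolCall.result': '"result1"', // stringified JSON; omitted when not serializable
};
```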

‎content/docs/07-reference/ai-sdk-core/01-generate-text.mdx

+34

The `generateText` reference gains an `experimental_telemetry` entry in its parameters list, inserted after the `maxToolRoundtrips` description:

```ts
{
  name: 'experimental_telemetry',
  type: 'TelemetrySettings',
  isOptional: true,
  description: 'Telemetry configuration. Experimental feature.',
  properties: [
    {
      type: 'TelemetrySettings',
      parameters: [
        {
          name: 'isEnabled',
          type: 'boolean',
          isOptional: true,
          description:
            'Enable or disable telemetry. Disabled by default while experimental.',
        },
        {
          name: 'functionId',
          type: 'string',
          isOptional: true,
          description:
            'Identifier for this function. Used to group telemetry data by function.',
        },
        {
          name: 'metadata',
          isOptional: true,
          type: 'Record<string, string | number | boolean | Array<null | undefined | string> | Array<null | undefined | number> | Array<null | undefined | boolean>>',
          description:
            'Additional information to include in the telemetry data.',
        },
      ],
    },
  ],
},
```
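As a usage sketch (mirroring the API route added in this commit), the settings are passed directly to the `generateText` call:

```ts
const { text } = await generateText({
  model: openai('gpt-4-turbo'),
  maxTokens: 100,
  prompt: 'Why is the sky blue?',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'example-function-id',
    metadata: { example: 'value' },
  },
});
```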

‎content/docs/07-reference/ai-sdk-core/02-stream-text.mdx

+34

The `streamText` reference gains the same `experimental_telemetry` entry as the `generateText` reference above (identical `isEnabled`, `functionId`, and `metadata` parameters), inserted just before the `onFinish` parameter.
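A corresponding sketch for `streamText`, based on the Node.js streaming example added in this commit:

```ts
const result = await streamText({
  model: openai('gpt-3.5-turbo'),
  maxTokens: 50,
  prompt: 'Invent a new holiday and describe its traditions.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-awesome-function',
    metadata: { something: 'custom' },
  },
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```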

‎examples/ai-core/package.json

+4 -1

`@ai-sdk/amazon-bedrock` is moved to the top of the dependency list (keeping it alphabetized) and the OpenTelemetry Node.js packages are added:

```json
"@opentelemetry/sdk-node": "0.52.0",
"@opentelemetry/auto-instrumentations-node": "0.47.0",
"@opentelemetry/sdk-trace-node": "1.25.0",
```
New file (+47): `generateText` example with tools and telemetry enabled

```ts
import { openai } from '@ai-sdk/openai';
import { generateText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import { weatherTool } from '../tools/weather-tool';

dotenv.config();

import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

async function main() {
  const result = await generateText({
    model: openai('gpt-3.5-turbo'),
    maxTokens: 512,
    tools: {
      weather: weatherTool,
      cityAttractions: tool({
        parameters: z.object({ city: z.string() }),
      }),
    },
    prompt:
      'What is the weather in San Francisco and what attractions should I visit?',
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'my-awesome-function',
      metadata: {
        something: 'custom',
        someOtherThing: 'other-value',
      },
    },
  });

  console.log(JSON.stringify(result, null, 2));

  await sdk.shutdown();
}

main().catch(console.error);
```
New file (+38): `generateText` telemetry example

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import dotenv from 'dotenv';

dotenv.config();

import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';

const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

async function main() {
  const result = await generateText({
    model: openai('gpt-3.5-turbo'),
    maxTokens: 50,
    prompt: 'Invent a new holiday and describe its traditions.',
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'my-awesome-function',
      metadata: {
        something: 'custom',
        someOtherThing: 'other-value',
      },
    },
  });

  console.log(result.text);

  await sdk.shutdown();
}

main().catch(console.error);
```
New file (+40): `streamText` telemetry example

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import dotenv from 'dotenv';

dotenv.config();

import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { ConsoleSpanExporter } from '@opentelemetry/sdk-trace-node';

const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

async function main() {
  const result = await streamText({
    model: openai('gpt-3.5-turbo'),
    maxTokens: 50,
    prompt: 'Invent a new holiday and describe its traditions.',
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'my-awesome-function',
      metadata: {
        something: 'custom',
        someOtherThing: 'other-value',
      },
    },
  });

  for await (const textPart of result.textStream) {
    process.stdout.write(textPart);
  }

  await sdk.shutdown();
}

main().catch(console.error);
```
New file (+1): example environment file

```
OPENAI_API_KEY=xxxxxxx
```
New file (+35): ignore file for the Next.js telemetry example

```
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.js

# testing
/coverage

# next.js
/.next/
/out/

# production
/build

# misc
.DS_Store
*.pem

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# local env files
.env*.local

# vercel
.vercel

# typescript
*.tsbuildinfo
next-env.d.ts
```
New file (+42): README for the Next.js telemetry example

# Vercel AI SDK, Next.js, and OpenAI Chat Telemetry Example

This example shows how to use the [Vercel AI SDK](https://sdk.vercel.ai/docs) with [Next.js](https://nextjs.org/) and [OpenAI](https://openai.com) to create a ChatGPT-like AI-powered streaming chat bot.

## Deploy your own

Deploy the example using [Vercel](https://vercel.com?utm_source=github&utm_medium=readme&utm_campaign=ai-sdk-example):

[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fvercel%2Fai%2Ftree%2Fmain%2Fexamples%2Fnext-openai-telemetry&env=OPENAI_API_KEY&envDescription=OpenAI%20API%20Key&envLink=https%3A%2F%2Fplatform.openai.com%2Faccount%2Fapi-keys&project-name=vercel-ai-chat-openai-telemetry&repository-name=vercel-ai-chat-openai-telemetry)

## How to use

Execute [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app) with [npm](https://docs.npmjs.com/cli/init), [Yarn](https://yarnpkg.com/lang/en/docs/cli/create/), or [pnpm](https://pnpm.io) to bootstrap the example:

```bash
npx create-next-app --example https://github.com/vercel/ai/tree/main/examples/next-openai next-openai-app
```

```bash
yarn create next-app --example https://github.com/vercel/ai/tree/main/examples/next-openai next-openai-app
```

```bash
pnpm create next-app --example https://github.com/vercel/ai/tree/main/examples/next-openai next-openai-app
```

To run the example locally you need to:

1. Sign up at [OpenAI's Developer Platform](https://platform.openai.com/signup).
2. Go to [OpenAI's dashboard](https://platform.openai.com/account/api-keys) and create an API key.
3. Set the required OpenAI environment variable as the token value as shown in [the example env file](./.env.local.example), but in a new file called `.env.local`.
4. `pnpm install` to install the required dependencies.
5. `pnpm dev` to launch the development server.

## Learn More

To learn more about OpenAI, Next.js, and the Vercel AI SDK take a look at the following resources:

- [Vercel AI SDK docs](https://sdk.vercel.ai/docs)
- [Vercel AI Playground](https://play.vercel.ai)
- [OpenAI Documentation](https://platform.openai.com/docs) - learn about OpenAI features and API.
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
New file (+21): API route handler that calls `generateText` with telemetry enabled

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-4-turbo'),
    maxTokens: 100,
    prompt,
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'example-function-id',
      metadata: { example: 'value' },
    },
  });

  return new Response(JSON.stringify({ text }), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```
Binary file (25.3 KB) not shown.
New file (+3): global styles

```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
New file (+21): root layout

```tsx
import './globals.css';
import { Inter } from 'next/font/google';

const inter = Inter({ subsets: ['latin'] });

export const metadata = {
  title: 'Vercel AI SDK - Next.js OpenAI Examples',
  description: 'Examples of using the Vercel AI SDK with Next.js and OpenAI.',
};

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}
```
New file (+51): page component that calls the API route

```tsx
'use client';

import { useState } from 'react';

export default function Page() {
  const [generation, setGeneration] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<Error | null>(null);

  return (
    <div className="flex flex-col items-center justify-center min-h-screen p-4 bg-gray-100">
      <button
        className="px-4 py-2 font-bold text-white bg-blue-500 rounded hover:bg-blue-700"
        onClick={async () => {
          try {
            setIsLoading(true);

            const response = await fetch('/api/text', {
              method: 'POST',
              body: JSON.stringify({
                prompt: 'Why is the sky blue?',
              }),
              headers: {
                'Content-Type': 'application/json',
              },
            });

            const json = await response.json();

            setGeneration(json.text);
          } catch (error) {
            setError(error as Error);
          } finally {
            setIsLoading(false);
          }
        }}
      >
        Generate
      </button>

      {error && <div className="text-red-500">{error.message}</div>}
      <div className="mt-4">
        {isLoading ? (
          <span className="text-blue-500">Loading...</span>
        ) : (
          <span className="text-gray-800">{generation}</span>
        )}
      </div>
    </div>
  );
}
```
New file (+7): OpenTelemetry registration for the Next.js app

```ts
import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({
    serviceName: 'next-app',
  });
}
```
New file (+8): Next.js config enabling the instrumentation hook

```js
/** @type {import('next').NextConfig} */
const nextConfig = {};

nextConfig.experimental = {
  instrumentationHook: true,
};

module.exports = nextConfig;
```
New file (+36): package manifest for the Next.js telemetry example

```json
{
  "name": "next-openai-telemetry",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "dependencies": {
    "@ai-sdk/openai": "latest",
    "@ai-sdk/react": "latest",
    "@opentelemetry/api-logs": "0.52.1",
    "@opentelemetry/sdk-logs": "0.52.1",
    "@opentelemetry/instrumentation": "0.52.1",
    "@vercel/otel": "1.9.1",
    "ai": "latest",
    "next": "latest",
    "openai": "4.47.1",
    "react": "^18",
    "react-dom": "^18",
    "zod": "3.23.8"
  },
  "devDependencies": {
    "@types/node": "^18",
    "@types/react": "^18",
    "@types/react-dom": "^18",
    "autoprefixer": "^10.4.14",
    "eslint": "^7.32.0",
    "eslint-config-next": "14.2.3",
    "postcss": "^8.4.23",
    "tailwindcss": "^3.3.2",
    "typescript": "5.1.3"
  }
}
```
New file (+6): PostCSS config

```js
module.exports = {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
};
```
New file (+18): Tailwind config

```js
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    './pages/**/*.{js,ts,jsx,tsx,mdx}',
    './components/**/*.{js,ts,jsx,tsx,mdx}',
    './app/**/*.{js,ts,jsx,tsx,mdx}',
  ],
  theme: {
    extend: {
      backgroundImage: {
        'gradient-radial': 'radial-gradient(var(--tw-gradient-stops))',
        'gradient-conic':
          'conic-gradient(from 180deg at 50% 50%, var(--tw-gradient-stops))',
      },
    },
  },
  plugins: [],
};
```
New file (+28): TypeScript config

```json
{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "noEmit": true,
    "esModuleInterop": true,
    "module": "esnext",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsx": "preserve",
    "incremental": true,
    "plugins": [
      {
        "name": "next"
      }
    ],
    "paths": {
      "@/*": ["./*"]
    }
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
  "exclude": ["node_modules"]
}
```

‎packages/core/core/generate-text/generate-text.test.ts

+185
@@ -1,6 +1,8 @@
11
import assert from 'node:assert';
22
import { z } from 'zod';
3+
import { setTestTracer } from '../telemetry/get-tracer';
34
import { MockLanguageModelV1 } from '../test/mock-language-model-v1';
5+
import { MockTracer } from '../test/mock-tracer';
46
import { generateText } from './generate-text';
57

68
const dummyResponseValues = {
@@ -498,3 +500,186 @@ describe('options.headers', () => {
498500
assert.deepStrictEqual(result.text, 'Hello, world!');
499501
});
500502
});
503+
504+
describe('telemetry', () => {
505+
let tracer: MockTracer;
506+
507+
beforeEach(() => {
508+
tracer = new MockTracer();
509+
setTestTracer(tracer);
510+
});
511+
512+
afterEach(() => {
513+
setTestTracer(undefined);
514+
});
515+
516+
it('should not record any telemetry data when not explicitly enabled', async () => {
517+
await generateText({
518+
model: new MockLanguageModelV1({
519+
doGenerate: async ({}) => ({
520+
...dummyResponseValues,
521+
text: `Hello, world!`,
522+
}),
523+
}),
524+
prompt: 'prompt',
525+
});
526+
527+
assert.deepStrictEqual(tracer.jsonSpans, []);
528+
});
529+
530+
it('should record telemetry data when enabled', async () => {
531+
await generateText({
532+
model: new MockLanguageModelV1({
533+
doGenerate: async ({}) => ({
534+
...dummyResponseValues,
535+
text: `Hello, world!`,
536+
}),
537+
}),
538+
prompt: 'prompt',
539+
headers: {
540+
header1: 'value1',
541+
header2: 'value2',
542+
},
543+
experimental_telemetry: {
544+
isEnabled: true,
545+
functionId: 'test-function-id',
546+
metadata: {
547+
test1: 'value1',
548+
test2: false,
549+
},
550+
},
551+
});
552+
553+
assert.deepStrictEqual(tracer.jsonSpans, [
554+
{
555+
name: 'ai.generateText',
556+
attributes: {
557+
'ai.model.id': 'mock-model-id',
558+
'ai.model.provider': 'mock-provider',
559+
'ai.prompt': '{"prompt":"prompt"}',
560+
'ai.settings.maxRetries': undefined,
561+
'ai.settings.maxToolRoundtrips': 0,
562+
'ai.telemetry.functionId': 'test-function-id',
563+
'ai.telemetry.metadata.test1': 'value1',
564+
'ai.telemetry.metadata.test2': false,
565+
'ai.finishReason': 'stop',
566+
'ai.result.text': 'Hello, world!',
567+
'ai.result.toolCalls': undefined,
568+
'ai.usage.completionTokens': 20,
569+
'ai.usage.promptTokens': 10,
570+
'ai.request.headers.header1': 'value1',
571+
'ai.request.headers.header2': 'value2',
572+
'operation.name': 'ai.generateText',
573+
'resource.name': 'test-function-id',
574+
},
575+
events: [],
576+
},
577+
{
578+
name: 'ai.generateText.doGenerate',
579+
attributes: {
580+
'ai.model.id': 'mock-model-id',
581+
'ai.model.provider': 'mock-provider',
582+
'ai.prompt.format': 'prompt',
583+
'ai.prompt.messages':
584+
'[{"role":"user","content":[{"type":"text","text":"prompt"}]}]',
585+
'ai.settings.maxRetries': undefined,
586+
'ai.telemetry.functionId': 'test-function-id',
587+
'ai.telemetry.metadata.test1': 'value1',
588+
'ai.telemetry.metadata.test2': false,
589+
'ai.finishReason': 'stop',
590+
'ai.result.text': 'Hello, world!',
591+
'ai.result.toolCalls': undefined,
592+
'ai.usage.completionTokens': 20,
593+
'ai.usage.promptTokens': 10,
594+
'ai.request.headers.header1': 'value1',
595+
'ai.request.headers.header2': 'value2',
596+
'operation.name': 'ai.generateText',
597+
'resource.name': 'test-function-id',
598+
},
599+
events: [],
600+
},
601+
]);
602+
});
603+
604+
it('should record successful tool call', async () => {
605+
await generateText({
606+
model: new MockLanguageModelV1({
607+
doGenerate: async ({}) => ({
608+
...dummyResponseValues,
609+
toolCalls: [
610+
{
611+
toolCallType: 'function',
612+
toolCallId: 'call-1',
613+
toolName: 'tool1',
614+
args: `{ "value": "value" }`,
615+
},
616+
],
617+
}),
618+
}),
619+
tools: {
620+
tool1: {
621+
parameters: z.object({ value: z.string() }),
622+
execute: async () => 'result1',
623+
},
624+
},
625+
prompt: 'test-input',
626+
experimental_telemetry: {
627+
isEnabled: true,
628+
},
629+
});
630+
631+
assert.deepStrictEqual(tracer.jsonSpans, [
632+
{
633+
name: 'ai.generateText',
634+
attributes: {
635+
'ai.model.id': 'mock-model-id',
636+
'ai.model.provider': 'mock-provider',
637+
'ai.prompt': '{"prompt":"test-input"}',
638+
'ai.settings.maxRetries': undefined,
639+
'ai.settings.maxToolRoundtrips': 0,
640+
'ai.telemetry.functionId': undefined,
641+
'ai.finishReason': 'stop',
642+
'ai.result.text': undefined,
643+
'ai.result.toolCalls':
644+
'[{"toolCallType":"function","toolCallId":"call-1","toolName":"tool1","args":"{ \\"value\\": \\"value\\" }"}]',
645+
'ai.usage.completionTokens': 20,
646+
'ai.usage.promptTokens': 10,
647+
'operation.name': 'ai.generateText',
648+
'resource.name': undefined,
649+
},
650+
events: [],
651+
},
652+
{
653+
name: 'ai.generateText.doGenerate',
654+
attributes: {
655+
'ai.model.id': 'mock-model-id',
656+
'ai.model.provider': 'mock-provider',
657+
'ai.prompt.format': 'prompt',
658+
'ai.prompt.messages':
659+
'[{"role":"user","content":[{"type":"text","text":"test-input"}]}]',
660+
'ai.settings.maxRetries': undefined,
661+
'ai.telemetry.functionId': undefined,
662+
'ai.finishReason': 'stop',
663+
'ai.result.text': undefined,
664+
'ai.result.toolCalls':
665+
'[{"toolCallType":"function","toolCallId":"call-1","toolName":"tool1","args":"{ \\"value\\": \\"value\\" }"}]',
666+
'ai.usage.completionTokens': 20,
667+
'ai.usage.promptTokens': 10,
668+
'operation.name': 'ai.generateText',
669+
'resource.name': undefined,
670+
},
671+
events: [],
672+
},
673+
{
674+
name: 'ai.toolCall',
675+
attributes: {
676+
'ai.toolCall.name': 'tool1',
677+
'ai.toolCall.id': 'call-1',
678+
'ai.toolCall.args': '{"value":"value"}',
679+
'ai.toolCall.result': '"result1"',
680+
},
681+
events: [],
682+
},
683+
]);
684+
});
685+
});

‎packages/core/core/generate-text/generate-text.ts

+172-68
@@ -1,3 +1,4 @@
1+
import { Tracer } from '@opentelemetry/api';
12
import { CoreAssistantMessage, CoreToolMessage } from '../prompt';
23
import { CallSettings } from '../prompt/call-settings';
34
import {
@@ -8,6 +9,10 @@ import { getValidatedPrompt } from '../prompt/get-validated-prompt';
89
import { prepareCallSettings } from '../prompt/prepare-call-settings';
910
import { prepareToolsAndToolChoice } from '../prompt/prepare-tools-and-tool-choice';
1011
import { Prompt } from '../prompt/prompt';
12+
import { getBaseTelemetryAttributes } from '../telemetry/get-base-telemetry-attributes';
13+
import { getTracer } from '../telemetry/get-tracer';
14+
import { recordSpan } from '../telemetry/record-span';
15+
import { TelemetrySettings } from '../telemetry/telemetry-settings';
1116
import { CoreTool } from '../tool/tool';
1217
import {
1318
CallWarning,
@@ -75,6 +80,7 @@ export async function generateText<TOOLS extends Record<string, CoreTool>>({
7580
headers,
7681
maxAutomaticRoundtrips = 0,
7782
maxToolRoundtrips = maxAutomaticRoundtrips,
83+
experimental_telemetry: telemetry,
7884
...settings
7985
}: CallSettings &
8086
Prompt & {
@@ -111,88 +117,162 @@ case of misconfigured tools.
111117
By default, it's set to 0, which will disable the feature.
112118
*/
113119
maxToolRoundtrips?: number;
114-
}): Promise<GenerateTextResult<TOOLS>> {
115-
const retry = retryWithExponentialBackoff({ maxRetries });
116-
const validatedPrompt = getValidatedPrompt({ system, prompt, messages });
117120

118-
const mode = {
119-
type: 'regular' as const,
120-
...prepareToolsAndToolChoice({ tools, toolChoice }),
121-
};
122-
const callSettings = prepareCallSettings(settings);
123-
const promptMessages = convertToLanguageModelPrompt(validatedPrompt);
121+
/**
122+
* Optional telemetry configuration (experimental).
123+
*/
124+
experimental_telemetry?: TelemetrySettings;
125+
}): Promise<GenerateTextResult<TOOLS>> {
126+
const baseTelemetryAttributes = getBaseTelemetryAttributes({
127+
operationName: 'ai.generateText',
128+
model,
129+
telemetry,
130+
headers,
131+
settings: { ...settings, maxRetries },
132+
});
124133

125-
let currentModelResponse: Awaited<ReturnType<LanguageModel['doGenerate']>>;
126-
let currentToolCalls: ToToolCallArray<TOOLS> = [];
127-
let currentToolResults: ToToolResultArray<TOOLS> = [];
128-
let roundtrips = 0;
129-
const responseMessages: Array<CoreAssistantMessage | CoreToolMessage> = [];
134+
const tracer = getTracer({ isEnabled: telemetry?.isEnabled ?? false });
135+
return recordSpan({
136+
name: 'ai.generateText',
137+
attributes: {
138+
...baseTelemetryAttributes,
139+
// specific settings that only make sense on the outer level:
140+
'ai.prompt': JSON.stringify({ system, prompt, messages }),
141+
'ai.settings.maxToolRoundtrips': maxToolRoundtrips,
142+
},
143+
tracer,
144+
fn: async span => {
145+
const retry = retryWithExponentialBackoff({ maxRetries });
146+
const validatedPrompt = getValidatedPrompt({
147+
system,
148+
prompt,
149+
messages,
150+
});
130151

131-
do {
132-
currentModelResponse = await retry(() => {
133-
return model.doGenerate({
134-
mode,
135-
...callSettings,
152+
const mode = {
153+
type: 'regular' as const,
154+
...prepareToolsAndToolChoice({ tools, toolChoice }),
155+
};
156+
const callSettings = prepareCallSettings(settings);
157+
const promptMessages = convertToLanguageModelPrompt(validatedPrompt);
158+
159+
let currentModelResponse: Awaited<
160+
ReturnType<LanguageModel['doGenerate']>
161+
>;
162+
let currentToolCalls: ToToolCallArray<TOOLS> = [];
163+
let currentToolResults: ToToolResultArray<TOOLS> = [];
164+
let roundtrips = 0;
165+
const responseMessages: Array<CoreAssistantMessage | CoreToolMessage> =
166+
[];
167+
168+
do {
136169
// once we have a roundtrip, we need to switch to messages format:
137-
inputFormat: roundtrips === 0 ? validatedPrompt.type : 'messages',
138-
prompt: promptMessages,
139-
abortSignal,
140-
headers,
170+
const currentInputFormat =
171+
roundtrips === 0 ? validatedPrompt.type : 'messages';
172+
173+
currentModelResponse = await retry(() =>
174+
recordSpan({
175+
name: 'ai.generateText.doGenerate',
176+
attributes: {
177+
...baseTelemetryAttributes,
178+
'ai.prompt.format': currentInputFormat,
179+
'ai.prompt.messages': JSON.stringify(promptMessages),
180+
},
181+
tracer,
182+
fn: async span => {
183+
const result = await model.doGenerate({
184+
mode,
185+
...callSettings,
186+
inputFormat: currentInputFormat,
187+
prompt: promptMessages,
188+
abortSignal,
189+
headers,
190+
});
191+
192+
// Add response information to the span:
193+
span.setAttributes({
194+
'ai.finishReason': result.finishReason,
195+
'ai.usage.promptTokens': result.usage.promptTokens,
196+
'ai.usage.completionTokens': result.usage.completionTokens,
197+
'ai.result.text': result.text,
198+
'ai.result.toolCalls': JSON.stringify(result.toolCalls),
199+
});
200+
201+
return result;
202+
},
203+
}),
204+
);
205+
206+
// parse tool calls:
207+
currentToolCalls = (currentModelResponse.toolCalls ?? []).map(
208+
modelToolCall => parseToolCall({ toolCall: modelToolCall, tools }),
209+
);
210+
211+
// execute tools:
212+
currentToolResults =
213+
tools == null
214+
? []
215+
: await executeTools({
216+
toolCalls: currentToolCalls,
217+
tools,
218+
tracer,
219+
});
220+
221+
// append to messages for potential next roundtrip:
222+
const newResponseMessages = toResponseMessages({
223+
text: currentModelResponse.text ?? '',
224+
toolCalls: currentToolCalls,
225+
toolResults: currentToolResults,
226+
});
227+
responseMessages.push(...newResponseMessages);
228+
promptMessages.push(
229+
...newResponseMessages.map(convertToLanguageModelMessage),
230+
);
231+
} while (
232+
// there are tool calls:
233+
currentToolCalls.length > 0 &&
234+
// all current tool calls have results:
235+
currentToolResults.length === currentToolCalls.length &&
236+
// the number of roundtrips is less than the maximum:
237+
roundtrips++ < maxToolRoundtrips
238+
);
239+
240+
// Add response information to the span:
241+
span.setAttributes({
242+
'ai.finishReason': currentModelResponse.finishReason,
243+
'ai.usage.promptTokens': currentModelResponse.usage.promptTokens,
244+
'ai.usage.completionTokens':
245+
currentModelResponse.usage.completionTokens,
246+
'ai.result.text': currentModelResponse.text,
247+
'ai.result.toolCalls': JSON.stringify(currentModelResponse.toolCalls),
141248
});
142-
});
143249

144-
// parse tool calls:
145-
currentToolCalls = (currentModelResponse.toolCalls ?? []).map(
146-
modelToolCall => parseToolCall({ toolCall: modelToolCall, tools }),
147-
);
148-
149-
// execute tools:
150-
currentToolResults =
151-
tools == null
152-
? []
153-
: await executeTools({ toolCalls: currentToolCalls, tools });
154-
155-
// append to messages for potential next roundtrip:
156-
const newResponseMessages = toResponseMessages({
157-
text: currentModelResponse.text ?? '',
158-
toolCalls: currentToolCalls,
159-
toolResults: currentToolResults,
160-
});
161-
responseMessages.push(...newResponseMessages);
162-
promptMessages.push(
163-
...newResponseMessages.map(convertToLanguageModelMessage),
164-
);
165-
} while (
166-
// there are tool calls:
167-
currentToolCalls.length > 0 &&
168-
// all current tool calls have results:
169-
currentToolResults.length === currentToolCalls.length &&
170-
// the number of roundtrips is less than the maximum:
171-
roundtrips++ < maxToolRoundtrips
172-
);
173-
174-
return new GenerateTextResult({
175-
// Always return a string so that the caller doesn't have to check for undefined.
176-
// If they need to check if the model did not return any text,
177-
// they can check the length of the string:
178-
text: currentModelResponse.text ?? '',
179-
toolCalls: currentToolCalls,
180-
toolResults: currentToolResults,
181-
finishReason: currentModelResponse.finishReason,
182-
usage: calculateCompletionTokenUsage(currentModelResponse.usage),
183-
warnings: currentModelResponse.warnings,
184-
rawResponse: currentModelResponse.rawResponse,
185-
logprobs: currentModelResponse.logprobs,
186-
responseMessages,
250+
return new GenerateTextResult({
251+
// Always return a string so that the caller doesn't have to check for undefined.
252+
// If they need to check if the model did not return any text,
253+
// they can check the length of the string:
254+
text: currentModelResponse.text ?? '',
255+
toolCalls: currentToolCalls,
256+
toolResults: currentToolResults,
257+
finishReason: currentModelResponse.finishReason,
258+
usage: calculateCompletionTokenUsage(currentModelResponse.usage),
259+
warnings: currentModelResponse.warnings,
260+
rawResponse: currentModelResponse.rawResponse,
261+
logprobs: currentModelResponse.logprobs,
262+
responseMessages,
263+
});
264+
},
187265
});
188266
}
189267

190268
async function executeTools<TOOLS extends Record<string, CoreTool>>({
191269
toolCalls,
192270
tools,
271+
tracer,
193272
}: {
194273
toolCalls: ToToolCallArray<TOOLS>;
195274
tools: TOOLS;
275+
tracer: Tracer;
196276
}): Promise<ToToolResultArray<TOOLS>> {
197277
const toolResults = await Promise.all(
198278
toolCalls.map(async toolCall => {
@@ -202,7 +282,31 @@ async function executeTools<TOOLS extends Record<string, CoreTool>>({
202282
return undefined;
203283
}
204284

205-
const result = await tool.execute(toolCall.args);
285+
const result = await recordSpan({
286+
name: 'ai.toolCall',
287+
attributes: {
288+
'ai.toolCall.name': toolCall.toolName,
289+
'ai.toolCall.id': toolCall.toolCallId,
290+
'ai.toolCall.args': JSON.stringify(toolCall.args),
291+
},
292+
tracer,
293+
fn: async span => {
294+
const result = await tool.execute!(toolCall.args);
295+
296+
try {
297+
span.setAttributes({
298+
'ai.toolCall.result': JSON.stringify(result),
299+
});
300+
} catch (ignored) {
301+
// JSON stringify might fail if the result is not serializable,
302+
// in which case we just ignore it. In the future we might want to
303+
// add an optional serialize method to the tool interface and warn
304+
// if the result is not serializable.
305+
}
306+
307+
return result;
308+
},
309+
});
206310

207311
return {
208312
toolCallId: toolCall.toolCallId,

‎packages/core/core/generate-text/run-tools-transformation.ts

+57-31
@@ -1,5 +1,7 @@
11
import { LanguageModelV1StreamPart, NoSuchToolError } from '@ai-sdk/provider';
22
import { generateId } from '@ai-sdk/ui-utils';
3+
import { Tracer } from '@opentelemetry/api';
4+
import { recordSpan } from '../telemetry/record-span';
35
import { CoreTool } from '../tool';
46
import { calculateCompletionTokenUsage } from '../types/token-usage';
57
import { TextStreamPart } from './stream-text';
@@ -8,9 +10,11 @@ import { parseToolCall } from './tool-call';
810
export function runToolsTransformation<TOOLS extends Record<string, CoreTool>>({
911
tools,
1012
generatorStream,
13+
tracer,
1114
}: {
1215
tools?: TOOLS;
1316
generatorStream: ReadableStream<LanguageModelV1StreamPart>;
17+
tracer: Tracer;
1418
}): ReadableStream<TextStreamPart<TOOLS>> {
1519
let canClose = false;
1620
const outstandingToolCalls = new Set<string>();
@@ -82,38 +86,60 @@ export function runToolsTransformation<TOOLS extends Record<string, CoreTool>>({
8286
const toolExecutionId = generateId(); // use our own id to guarantee uniqueness
8387
outstandingToolCalls.add(toolExecutionId);
8488

85-
// Note: we don't await the tool execution here, because we want to process
86-
// the next chunk as soon as possible. This is important for the case where
87-
// the tool execution takes a long time.
88-
tool.execute(toolCall.args).then(
89-
(result: any) => {
90-
toolResultsStreamController!.enqueue({
91-
...toolCall,
92-
type: 'tool-result',
93-
result,
94-
} as any);
95-
96-
outstandingToolCalls.delete(toolExecutionId);
97-
98-
// close the tool results controller if no more outstanding tool calls
99-
if (canClose && outstandingToolCalls.size === 0) {
100-
toolResultsStreamController!.close();
101-
}
89+
// Note: we don't await the tool execution here (by leaving out 'await' on recordSpan),
90+
// because we want to process the next chunk as soon as possible.
91+
// This is important for the case where the tool execution takes a long time.
92+
recordSpan({
93+
name: 'ai.toolCall',
94+
attributes: {
95+
'ai.toolCall.name': toolCall.toolName,
96+
'ai.toolCall.id': toolCall.toolCallId,
97+
'ai.toolCall.args': JSON.stringify(toolCall.args),
10298
},
103-
(error: any) => {
104-
toolResultsStreamController!.enqueue({
105-
type: 'error',
106-
error,
107-
});
108-
109-
outstandingToolCalls.delete(toolExecutionId);
110-
111-
// close the tool results controller if no more outstanding tool calls
112-
if (canClose && outstandingToolCalls.size === 0) {
113-
toolResultsStreamController!.close();
114-
}
115-
},
116-
);
99+
tracer,
100+
fn: async span =>
101+
tool.execute!(toolCall.args).then(
102+
(result: any) => {
103+
toolResultsStreamController!.enqueue({
104+
...toolCall,
105+
type: 'tool-result',
106+
result,
107+
} as any);
108+
109+
outstandingToolCalls.delete(toolExecutionId);
110+
111+
// close the tool results controller if no more outstanding tool calls
112+
if (canClose && outstandingToolCalls.size === 0) {
113+
toolResultsStreamController!.close();
114+
}
115+
116+
// record telemetry
117+
try {
118+
span.setAttributes({
119+
'ai.toolCall.result': JSON.stringify(result),
120+
});
121+
} catch (ignored) {
122+
// JSON stringify might fail if the result is not serializable,
123+
// in which case we just ignore it. In the future we might want to
124+
// add an optional serialize method to the tool interface and warn
125+
// if the result is not serializable.
126+
}
127+
},
128+
(error: any) => {
129+
toolResultsStreamController!.enqueue({
130+
type: 'error',
131+
error,
132+
});
133+
134+
outstandingToolCalls.delete(toolExecutionId);
135+
136+
// close the tool results controller if no more outstanding tool calls
137+
if (canClose && outstandingToolCalls.size === 0) {
138+
toolResultsStreamController!.close();
139+
}
140+
},
141+
),
142+
});
117143
}
118144
} catch (error) {
119145
toolResultsStreamController!.enqueue({

‎packages/core/core/generate-text/stream-text.test.ts

+219
@@ -9,6 +9,8 @@ import { formatStreamPart } from '../../streams';
99
import { MockLanguageModelV1 } from '../test/mock-language-model-v1';
1010
import { createMockServerResponse } from '../test/mock-server-response';
1111
import { streamText } from './stream-text';
12+
import { MockTracer } from '../test/mock-tracer';
13+
import { setTestTracer } from '../telemetry/get-tracer';
1214

1315
describe('result.textStream', () => {
1416
it('should send text deltas', async () => {
@@ -1041,3 +1043,220 @@ describe('options.headers', () => {
10411043
);
10421044
});
10431045
});
1046+
1047+
describe('telemetry', () => {
1048+
let tracer: MockTracer;
1049+
1050+
beforeEach(() => {
1051+
tracer = new MockTracer();
1052+
setTestTracer(tracer);
1053+
});
1054+
1055+
afterEach(() => {
1056+
setTestTracer(undefined);
1057+
});
1058+
1059+
it('should not record any telemetry data when not explicitly enabled', async () => {
1060+
const result = await streamText({
1061+
model: new MockLanguageModelV1({
1062+
doStream: async ({}) => ({
1063+
stream: convertArrayToReadableStream([
1064+
{ type: 'text-delta', textDelta: 'Hello' },
1065+
{ type: 'text-delta', textDelta: ', ' },
1066+
{ type: 'text-delta', textDelta: `world!` },
1067+
{
1068+
type: 'finish',
1069+
finishReason: 'stop',
1070+
logprobs: undefined,
1071+
usage: { completionTokens: 20, promptTokens: 10 },
1072+
},
1073+
]),
1074+
rawCall: { rawPrompt: 'prompt', rawSettings: {} },
1075+
}),
1076+
}),
1077+
prompt: 'test-input',
1078+
});
1079+
1080+
// consume stream
1081+
await convertAsyncIterableToArray(result.textStream);
1082+
1083+
assert.deepStrictEqual(tracer.jsonSpans, []);
1084+
});
1085+
1086+
it('should record telemetry data when enabled', async () => {
1087+
const result = await streamText({
1088+
model: new MockLanguageModelV1({
1089+
doStream: async ({}) => ({
1090+
stream: convertArrayToReadableStream([
1091+
{ type: 'text-delta', textDelta: 'Hello' },
1092+
{ type: 'text-delta', textDelta: ', ' },
1093+
{ type: 'text-delta', textDelta: `world!` },
1094+
{
1095+
type: 'finish',
1096+
finishReason: 'stop',
1097+
logprobs: undefined,
1098+
usage: { completionTokens: 20, promptTokens: 10 },
1099+
},
1100+
]),
1101+
rawCall: { rawPrompt: 'prompt', rawSettings: {} },
1102+
}),
1103+
}),
1104+
prompt: 'test-input',
1105+
headers: {
1106+
header1: 'value1',
1107+
header2: 'value2',
1108+
},
1109+
experimental_telemetry: {
1110+
isEnabled: true,
1111+
functionId: 'test-function-id',
1112+
metadata: {
1113+
test1: 'value1',
1114+
test2: false,
1115+
},
1116+
},
1117+
});
1118+
1119+
// consume stream
1120+
await convertAsyncIterableToArray(result.textStream);
1121+
1122+
assert.deepStrictEqual(tracer.jsonSpans, [
1123+
{
1124+
name: 'ai.streamText',
1125+
attributes: {
1126+
'ai.model.id': 'mock-model-id',
1127+
'ai.model.provider': 'mock-provider',
1128+
'ai.prompt': '{"prompt":"test-input"}',
1129+
'ai.settings.maxRetries': undefined,
1130+
'ai.telemetry.functionId': 'test-function-id',
1131+
'ai.telemetry.metadata.test1': 'value1',
1132+
'ai.telemetry.metadata.test2': false,
1133+
'ai.finishReason': 'stop',
1134+
'ai.result.text': 'Hello, world!',
1135+
'ai.result.toolCalls': undefined,
1136+
'ai.usage.completionTokens': 20,
1137+
'ai.usage.promptTokens': 10,
1138+
'ai.request.headers.header1': 'value1',
1139+
'ai.request.headers.header2': 'value2',
1140+
'operation.name': 'ai.streamText',
1141+
'resource.name': 'test-function-id',
1142+
},
1143+
events: [],
1144+
},
1145+
{
1146+
name: 'ai.streamText.doStream',
1147+
attributes: {
1148+
'ai.model.id': 'mock-model-id',
1149+
'ai.model.provider': 'mock-provider',
1150+
'ai.prompt.format': 'prompt',
1151+
'ai.prompt.messages':
1152+
'[{"role":"user","content":[{"type":"text","text":"test-input"}]}]',
1153+
'ai.settings.maxRetries': undefined,
1154+
'ai.telemetry.functionId': 'test-function-id',
1155+
'ai.telemetry.metadata.test1': 'value1',
1156+
'ai.telemetry.metadata.test2': false,
1157+
'ai.finishReason': 'stop',
1158+
'ai.result.text': 'Hello, world!',
1159+
'ai.result.toolCalls': undefined,
1160+
'ai.usage.completionTokens': 20,
1161+
'ai.usage.promptTokens': 10,
1162+
'ai.request.headers.header1': 'value1',
1163+
'ai.request.headers.header2': 'value2',
1164+
'operation.name': 'ai.streamText',
1165+
'resource.name': 'test-function-id',
1166+
},
1167+
events: ['ai.stream.firstChunk'],
1168+
},
1169+
]);
1170+
});
1171+
1172+
it('should record successful tool call', async () => {
1173+
const result = await streamText({
1174+
model: new MockLanguageModelV1({
1175+
doStream: async ({}) => ({
1176+
stream: convertArrayToReadableStream([
1177+
{
1178+
type: 'tool-call',
1179+
toolCallType: 'function',
1180+
toolCallId: 'call-1',
1181+
toolName: 'tool1',
1182+
args: `{ "value": "value" }`,
1183+
},
1184+
{
1185+
type: 'finish',
1186+
finishReason: 'stop',
1187+
logprobs: undefined,
1188+
usage: { completionTokens: 20, promptTokens: 10 },
1189+
},
1190+
]),
1191+
rawCall: { rawPrompt: 'prompt', rawSettings: {} },
1192+
}),
1193+
}),
1194+
tools: {
1195+
tool1: {
1196+
parameters: z.object({ value: z.string() }),
1197+
execute: async ({ value }) => `${value}-result`,
1198+
},
1199+
},
1200+
prompt: 'test-input',
1201+
experimental_telemetry: {
1202+
isEnabled: true,
1203+
},
1204+
});
1205+
1206+
// consume stream
1207+
await convertAsyncIterableToArray(result.textStream);
1208+
1209+
assert.deepStrictEqual(tracer.jsonSpans, [
1210+
{
1211+
name: 'ai.streamText',
1212+
attributes: {
1213+
'ai.model.id': 'mock-model-id',
1214+
'ai.model.provider': 'mock-provider',
1215+
'ai.prompt': '{"prompt":"test-input"}',
1216+
'ai.settings.maxRetries': undefined,
1217+
'ai.telemetry.functionId': undefined,
1218+
'ai.finishReason': 'stop',
1219+
'ai.result.text': '',
1220+
'ai.result.toolCalls':
1221+
'[{"type":"tool-call","toolCallId":"call-1","toolName":"tool1","args":{"value":"value"}}]',
1222+
'ai.usage.completionTokens': 20,
1223+
'ai.usage.promptTokens': 10,
1224+
'operation.name': 'ai.streamText',
1225+
'resource.name': undefined,
1226+
},
1227+
events: [],
1228+
},
1229+
{
1230+
name: 'ai.streamText.doStream',
1231+
attributes: {
1232+
'ai.model.id': 'mock-model-id',
1233+
'ai.model.provider': 'mock-provider',
1234+
'ai.prompt.format': 'prompt',
1235+
'ai.prompt.messages':
1236+
'[{"role":"user","content":[{"type":"text","text":"test-input"}]}]',
1237+
'ai.settings.maxRetries': undefined,
1238+
'ai.telemetry.functionId': undefined,
1239+
'ai.finishReason': 'stop',
1240+
'ai.result.text': '',
1241+
'ai.result.toolCalls':
1242+
'[{"type":"tool-call","toolCallId":"call-1","toolName":"tool1","args":{"value":"value"}}]',
1243+
'ai.usage.completionTokens': 20,
1244+
'ai.usage.promptTokens': 10,
1245+
'operation.name': 'ai.streamText',
1246+
'resource.name': undefined,
1247+
},
1248+
events: ['ai.stream.firstChunk'],
1249+
},
1250+
{
1251+
name: 'ai.toolCall',
1252+
attributes: {
1253+
'ai.toolCall.name': 'tool1',
1254+
'ai.toolCall.id': 'call-1',
1255+
'ai.toolCall.args': '{"value":"value"}',
1256+
'ai.toolCall.result': '"value-result"',
1257+
},
1258+
events: [],
1259+
},
1260+
]);
1261+
});
1262+
});

‎packages/core/core/generate-text/stream-text.ts

+159-54
@@ -1,3 +1,4 @@
1+
import { Span } from '@opentelemetry/api';
12
import { ServerResponse } from 'node:http';
23
import {
34
AIStreamCallbacksAndOptions,
@@ -10,6 +11,10 @@ import { getValidatedPrompt } from '../prompt/get-validated-prompt';
1011
import { prepareCallSettings } from '../prompt/prepare-call-settings';
1112
import { prepareToolsAndToolChoice } from '../prompt/prepare-tools-and-tool-choice';
1213
import { Prompt } from '../prompt/prompt';
14+
import { getBaseTelemetryAttributes } from '../telemetry/get-base-telemetry-attributes';
15+
import { getTracer } from '../telemetry/get-tracer';
16+
import { recordSpan } from '../telemetry/record-span';
17+
import { TelemetrySettings } from '../telemetry/telemetry-settings';
1318
import { CoreTool } from '../tool';
1419
import {
1520
CallWarning,
@@ -77,6 +82,7 @@ export async function streamText<TOOLS extends Record<string, CoreTool>>({
7782
maxRetries,
7883
abortSignal,
7984
headers,
85+
experimental_telemetry: telemetry,
8086
onFinish,
8187
...settings
8288
}: CallSettings &
@@ -96,6 +102,11 @@ The tool choice strategy. Default: 'auto'.
96102
*/
97103
toolChoice?: CoreToolChoice<TOOLS>;
98104

105+
/**
106+
* Optional telemetry configuration (experimental).
107+
*/
108+
experimental_telemetry?: TelemetrySettings;
109+
99110
/**
100111
Callback that is called when the LLM response and all request tool executions
101112
(for tools that have an `execute` function) are finished.
@@ -142,30 +153,74 @@ Warnings from the model provider (e.g. unsupported settings).
142153
warnings?: CallWarning[];
143154
}) => Promise<void> | void;
144155
}): Promise<StreamTextResult<TOOLS>> {
145-
const retry = retryWithExponentialBackoff({ maxRetries });
146-
const validatedPrompt = getValidatedPrompt({ system, prompt, messages });
147-
const { stream, warnings, rawResponse } = await retry(() =>
148-
model.doStream({
149-
mode: {
150-
type: 'regular',
151-
...prepareToolsAndToolChoice({ tools, toolChoice }),
152-
},
153-
...prepareCallSettings(settings),
154-
inputFormat: validatedPrompt.type,
155-
prompt: convertToLanguageModelPrompt(validatedPrompt),
156-
abortSignal,
157-
headers,
158-
}),
159-
);
160-
161-
return new StreamTextResult({
162-
stream: runToolsTransformation({
163-
tools,
164-
generatorStream: stream,
165-
}),
166-
warnings,
167-
rawResponse,
168-
onFinish,
156+
const baseTelemetryAttributes = getBaseTelemetryAttributes({
157+
operationName: 'ai.streamText',
158+
model,
159+
telemetry,
160+
headers,
161+
settings: { ...settings, maxRetries },
162+
});
163+
164+
const tracer = getTracer({ isEnabled: telemetry?.isEnabled ?? false });
165+
166+
return recordSpan({
167+
name: 'ai.streamText',
168+
attributes: {
169+
...baseTelemetryAttributes,
170+
// specific settings that only make sense on the outer level:
171+
'ai.prompt': JSON.stringify({ system, prompt, messages }),
172+
},
173+
tracer,
174+
endWhenDone: false,
175+
fn: async rootSpan => {
176+
const retry = retryWithExponentialBackoff({ maxRetries });
177+
const validatedPrompt = getValidatedPrompt({ system, prompt, messages });
178+
const promptMessages = convertToLanguageModelPrompt(validatedPrompt);
179+
const {
180+
result: { stream, warnings, rawResponse },
181+
doStreamSpan,
182+
} = await retry(() =>
183+
recordSpan({
184+
name: 'ai.streamText.doStream',
185+
attributes: {
186+
...baseTelemetryAttributes,
187+
'ai.prompt.format': validatedPrompt.type,
188+
'ai.prompt.messages': JSON.stringify(promptMessages),
189+
},
190+
tracer,
191+
endWhenDone: false,
192+
fn: async doStreamSpan => {
193+
return {
194+
result: await model.doStream({
195+
mode: {
196+
type: 'regular',
197+
...prepareToolsAndToolChoice({ tools, toolChoice }),
198+
},
199+
...prepareCallSettings(settings),
200+
inputFormat: validatedPrompt.type,
201+
prompt: promptMessages,
202+
abortSignal,
203+
headers,
204+
}),
205+
doStreamSpan,
206+
};
207+
},
208+
}),
209+
);
210+
211+
return new StreamTextResult({
212+
stream: runToolsTransformation({
213+
tools,
214+
generatorStream: stream,
215+
tracer,
216+
}),
217+
warnings,
218+
rawResponse,
219+
onFinish,
220+
rootSpan,
221+
doStreamSpan,
222+
});
223+
},
169224
});
   }

@@ -247,13 +302,17 @@ Response headers.
     warnings,
     rawResponse,
     onFinish,
+    rootSpan,
+    doStreamSpan,
   }: {
     stream: ReadableStream<TextStreamPart<TOOLS>>;
     warnings: CallWarning[] | undefined;
     rawResponse?: {
       headers?: Record<string, string>;
     };
     onFinish?: Parameters<typeof streamText>[0]['onFinish'];
+    rootSpan: Span;
+    doStreamSpan: Span;
   }) {
     this.warnings = warnings;
     this.rawResponse = rawResponse;
@@ -303,6 +362,7 @@ Response headers.
     let text = '';
     const toolCalls: ToToolCall<TOOLS>[] = [];
     const toolResults: ToToolResult<TOOLS>[] = [];
+    let firstChunk = true;

     // pipe chunks through a transformation stream that extracts metadata:
     const self = this;
@@ -311,49 +371,92 @@ Response headers.
         async transform(chunk, controller): Promise<void> {
           controller.enqueue(chunk);

-          // Create the full text from text deltas (for onFinish callback and text promise):
-          if (chunk.type === 'text-delta') {
-            text += chunk.textDelta;
+          // Telemetry event for first chunk:
+          if (firstChunk) {
+            firstChunk = false;
+            doStreamSpan.addEvent('ai.stream.firstChunk');
           }

-          // store tool calls for onFinish callback and toolCalls promise:
-          if (chunk.type === 'tool-call') {
-            toolCalls.push(chunk);
-          }
-
-          // store tool results for onFinish callback and toolResults promise:
-          if (chunk.type === 'tool-result') {
-            toolResults.push(chunk);
-          }
-
-          // Note: tool executions might not be finished yet when the finish event is emitted.
-          if (chunk.type === 'finish') {
-            // store usage and finish reason for promises and onFinish callback:
-            usage = chunk.usage;
-            finishReason = chunk.finishReason;
-
-            // resolve promises that can be resolved now:
-            resolveUsage(usage);
-            resolveFinishReason(finishReason);
-            resolveText(text);
-            resolveToolCalls(toolCalls);
+          const chunkType = chunk.type;
+          switch (chunkType) {
+            case 'text-delta':
+              // create the full text from text deltas (for onFinish callback and text promise):
+              text += chunk.textDelta;
+              break;
+
+            case 'tool-call':
+              // store tool calls for onFinish callback and toolCalls promise:
+              toolCalls.push(chunk);
+              break;
+
+            case 'tool-result':
+              // store tool results for onFinish callback and toolResults promise:
+              toolResults.push(chunk);
+              break;
+
+            case 'finish':
+              // Note: tool executions might not be finished yet when the finish event is emitted.
+              // store usage and finish reason for promises and onFinish callback:
+              usage = chunk.usage;
+              finishReason = chunk.finishReason;
+
+              // resolve promises that can be resolved now:
+              resolveUsage(usage);
+              resolveFinishReason(finishReason);
+              resolveText(text);
+              resolveToolCalls(toolCalls);
+              break;
+
+            case 'error':
+              // ignored
+              break;
+
+            default: {
+              const exhaustiveCheck: never = chunkType;
+              throw new Error(`Unknown chunk type: ${exhaustiveCheck}`);
+            }
           }
         },

         // invoke onFinish callback and resolve toolResults promise when the stream is about to close:
         async flush(controller) {
           try {
+            const finalUsage = usage ?? {
+              promptTokens: NaN,
+              completionTokens: NaN,
+              totalTokens: NaN,
+            };
+            const finalFinishReason = finishReason ?? 'unknown';
+            const telemetryToolCalls =
+              toolCalls.length > 0 ? JSON.stringify(toolCalls) : undefined;
+
+            doStreamSpan.setAttributes({
+              'ai.finishReason': finalFinishReason,
+              'ai.usage.promptTokens': finalUsage.promptTokens,
+              'ai.usage.completionTokens': finalUsage.completionTokens,
+              'ai.result.text': text,
+              'ai.result.toolCalls': telemetryToolCalls,
+            });
+
+            // finish doStreamSpan before other operations for correct timing:
+            doStreamSpan.end();
+
+            // Add response information to the root span:
+            rootSpan.setAttributes({
+              'ai.finishReason': finalFinishReason,
+              'ai.usage.promptTokens': finalUsage.promptTokens,
+              'ai.usage.completionTokens': finalUsage.completionTokens,
+              'ai.result.text': text,
+              'ai.result.toolCalls': telemetryToolCalls,
+            });
+
             // resolve toolResults promise:
             resolveToolResults(toolResults);

             // call onFinish callback:
             await self.onFinish?.({
-              finishReason: finishReason ?? 'unknown',
-              usage: usage ?? {
-                promptTokens: NaN,
-                completionTokens: NaN,
-                totalTokens: NaN,
-              },
+              finishReason: finalFinishReason,
+              usage: finalUsage,
               text,
               toolCalls,
               // The tool results are inferred as a never[] type, because they are
@@ -366,6 +469,8 @@ Response headers.
             });
           } catch (error) {
             controller.error(error);
+          } finally {
+            rootSpan.end();
           }
         },
       }),
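For orientation, a hedged sketch of the caller side (the model id and prompt are placeholders, not from this commit). With `experimental_telemetry` enabled, the flush logic above records `ai.finishReason`, token usage, and `ai.result.*` attributes on both the doStream span and the root span, emits an `ai.stream.firstChunk` event when the first chunk arrives, and still invokes `onFinish` with the same final values:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = await streamText({
  model: openai('gpt-4o'), // placeholder model
  prompt: 'Summarize the plot of Hamlet in two sentences.',
  experimental_telemetry: { isEnabled: true },
  onFinish({ finishReason, usage, text }) {
    console.log(finishReason, usage.totalTokens, text.length);
  },
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```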
@@ -0,0 +1,51 @@
import { Attributes } from '@opentelemetry/api';
import { CallSettings } from '../prompt/call-settings';
import { LanguageModel } from '../types/language-model';
import { TelemetrySettings } from './telemetry-settings';

export function getBaseTelemetryAttributes({
  operationName,
  model,
  settings,
  telemetry,
  headers,
}: {
  operationName: string;
  model: LanguageModel;
  settings: Omit<CallSettings, 'abortSignal' | 'headers'>;
  telemetry: TelemetrySettings | undefined;
  headers: Record<string, string | undefined> | undefined;
}): Attributes {
  return {
    'ai.model.provider': model.provider,
    'ai.model.id': model.modelId,

    // settings:
    ...Object.entries(settings).reduce((attributes, [key, value]) => {
      attributes[`ai.settings.${key}`] = value;
      return attributes;
    }, {} as Attributes),

    // special telemetry information
    'operation.name': operationName,
    'resource.name': telemetry?.functionId,
    'ai.telemetry.functionId': telemetry?.functionId,

    // add metadata as attributes:
    ...Object.entries(telemetry?.metadata ?? {}).reduce(
      (attributes, [key, value]) => {
        attributes[`ai.telemetry.metadata.${key}`] = value;
        return attributes;
      },
      {} as Attributes,
    ),

    // request headers
    ...Object.entries(headers ?? {}).reduce((attributes, [key, value]) => {
      if (value !== undefined) {
        attributes[`ai.request.headers.${key}`] = value;
      }
      return attributes;
    }, {} as Attributes),
  };
}
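A hypothetical call of the helper above; the relative import path, model id, header, and `functionId` are illustrative rather than taken from this commit:

```ts
import { openai } from '@ai-sdk/openai';
import { getBaseTelemetryAttributes } from './get-base-telemetry-attributes';

// Hypothetical wiring: collect the base attributes for a generateText call.
const attributes = getBaseTelemetryAttributes({
  operationName: 'ai.generateText',
  model: openai('gpt-4o'),
  settings: { temperature: 0, maxRetries: 2 },
  telemetry: { isEnabled: true, functionId: 'invoice-summary' },
  headers: { 'x-request-id': 'req-123' },
});

// attributes now contains keys such as 'ai.model.provider', 'ai.model.id',
// 'ai.settings.temperature', 'operation.name', 'resource.name',
// 'ai.telemetry.functionId', and 'ai.request.headers.x-request-id'.
```

The returned `Attributes` object flattens model information, call settings, telemetry metadata, and request headers into the `ai.*` attribute namespaces used on the spans.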
@@ -0,0 +1,23 @@
import { Tracer, trace } from '@opentelemetry/api';
import { noopTracer } from './noop-tracer';

/**
 * Tracer variable for testing. Tests can set this to a mock tracer.
 */
let testTracer: Tracer | undefined = undefined;

export function setTestTracer(tracer: Tracer | undefined) {
  testTracer = tracer;
}

export function getTracer({ isEnabled }: { isEnabled: boolean }): Tracer {
  if (!isEnabled) {
    return noopTracer;
  }

  if (testTracer) {
    return testTracer;
  }

  return trace.getTracer('ai');
}
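Illustrative call-site wiring (the import path is assumed); the flag would normally come from the user's `experimental_telemetry` settings:

```ts
import { getTracer } from './get-tracer';

// With telemetry disabled this returns the no-op tracer, so spans can be
// started unconditionally and cost nothing:
const tracer = getTracer({ isEnabled: false });

const span = tracer.startSpan('ai.generateText');
span.end();
```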
@@ -0,0 +1,69 @@
import { Span, SpanContext, Tracer } from '@opentelemetry/api';

/**
 * Tracer implementation that does nothing (null object).
 */
export const noopTracer: Tracer = {
  startSpan(): Span {
    return noopSpan;
  },

  startActiveSpan<F extends (span: Span) => unknown>(
    name: unknown,
    arg1: unknown,
    arg2?: unknown,
    arg3?: F,
  ): ReturnType<any> {
    if (typeof arg1 === 'function') {
      return arg1(noopSpan);
    }
    if (typeof arg2 === 'function') {
      return arg2(noopSpan);
    }
    if (typeof arg3 === 'function') {
      return arg3(noopSpan);
    }
  },
};

const noopSpan: Span = {
  spanContext() {
    return noopSpanContext;
  },
  setAttribute() {
    return this;
  },
  setAttributes() {
    return this;
  },
  addEvent() {
    return this;
  },
  addLink() {
    return this;
  },
  addLinks() {
    return this;
  },
  setStatus() {
    return this;
  },
  updateName() {
    return this;
  },
  end() {
    return this;
  },
  isRecording() {
    return false;
  },
  recordException() {
    return this;
  },
};

const noopSpanContext: SpanContext = {
  traceId: '',
  spanId: '',
  traceFlags: 0,
};
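Because the null object still invokes callbacks, instrumented code paths behave the same whether telemetry is on or off. A small sketch, assuming the import path:

```ts
import { noopTracer } from './noop-tracer';

// startActiveSpan still runs the callback; the span itself is inert.
const value = await noopTracer.startActiveSpan('ignored', async span => {
  span.setAttributes({ example: true });
  return 42;
});
// value === 42, and nothing was recorded or exported.
```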
@@ -0,0 +1,48 @@
import { Attributes, Span, Tracer, SpanStatusCode } from '@opentelemetry/api';

export function recordSpan<T>({
  name,
  tracer,
  attributes,
  fn,
  endWhenDone = true,
}: {
  name: string;
  tracer: Tracer;
  attributes: Attributes;
  fn: (span: Span) => Promise<T>;
  endWhenDone?: boolean;
}) {
  return tracer.startActiveSpan(name, { attributes }, async span => {
    try {
      const result = await fn(span);

      if (endWhenDone) {
        span.end();
      }

      return result;
    } catch (error) {
      try {
        if (error instanceof Error) {
          span.recordException({
            name: error.name,
            message: error.message,
            stack: error.stack,
          });
          span.setStatus({
            code: SpanStatusCode.ERROR,
            message: error.message,
          });
        } else {
          span.setStatus({ code: SpanStatusCode.ERROR });
        }
      } finally {
        // always stop the span when there is an error:
        span.end();
      }

      throw error;
    }
  });
}
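A hypothetical use of `recordSpan` wrapping an async operation (import paths assumed). On success the span is ended after `fn` resolves; on failure the exception is recorded and the span is ended before the error is rethrown:

```ts
import { getTracer } from './get-tracer';
import { recordSpan } from './record-span';

const tracer = getTracer({ isEnabled: true });

const value = await recordSpan({
  name: 'ai.example.loadDocument',
  tracer,
  attributes: { 'ai.telemetry.functionId': 'example' },
  fn: async span => {
    span.addEvent('ai.example.started');
    return 'done';
  },
});
```

Passing `endWhenDone: false` leaves the span open for the caller to end later, presumably how the streaming code above keeps `rootSpan` and `doStreamSpan` alive until the stream's `flush` handler finishes them.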
@@ -0,0 +1,23 @@
import { AttributeValue } from '@opentelemetry/api';

/**
 * Telemetry configuration.
 */
// This is meant to be both flexible for custom app requirements (metadata)
// and extensible for standardization (example: functionId, more to come).
export type TelemetrySettings = {
  /**
   * Enable or disable telemetry. Disabled by default while experimental.
   */
  isEnabled?: boolean;

  /**
   * Identifier for this function. Used to group telemetry data by function.
   */
  functionId?: string;

  /**
   * Additional information to include in the telemetry data.
   */
  metadata?: Record<string, AttributeValue>;
};
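An example value (the identifiers are illustrative). Metadata entries must be OpenTelemetry `AttributeValue`s, i.e. strings, numbers, booleans, or homogeneous arrays of those:

```ts
import type { TelemetrySettings } from './telemetry-settings';

const telemetry: TelemetrySettings = {
  isEnabled: true,
  functionId: 'support.summarize-ticket',
  metadata: {
    userTier: 'pro',
    attempt: 1,
    internal: false,
  },
};
```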
@@ -0,0 +1,134 @@
import {
  AttributeValue,
  Attributes,
  Context,
  Span,
  SpanContext,
  SpanOptions,
  Tracer,
} from '@opentelemetry/api';

export class MockTracer implements Tracer {
  spans: MockSpan[] = [];

  get jsonSpans() {
    return this.spans.map(span => ({
      name: span.name,
      attributes: span.attributes,
      events: span.events,
    }));
  }

  startSpan(name: string, options?: SpanOptions, context?: Context): Span {
    const span = new MockSpan({
      name,
      options,
      context,
    });
    this.spans.push(span);
    return span;
  }

  startActiveSpan<F extends (span: Span) => unknown>(
    name: string,
    arg1: unknown,
    arg2?: unknown,
    arg3?: F,
  ): ReturnType<any> {
    if (typeof arg1 === 'function') {
      const span = new MockSpan({
        name,
      });
      this.spans.push(span);
      return arg1(span);
    }
    if (typeof arg2 === 'function') {
      const span = new MockSpan({
        name,
        options: arg1 as SpanOptions,
      });
      this.spans.push(span);
      return arg2(span);
    }
    if (typeof arg3 === 'function') {
      const span = new MockSpan({
        name,
        options: arg1 as SpanOptions,
        context: arg2 as Context,
      });
      this.spans.push(span);
      return arg3(span);
    }
  }
}

class MockSpan implements Span {
  name: string;
  context?: Context;
  options?: SpanOptions;
  attributes: Attributes;
  events: string[] = [];

  readonly _spanContext: SpanContext = new MockSpanContext();

  constructor({
    name,
    options,
    context,
  }: {
    name: string;
    options?: SpanOptions;
    context?: Context;
  }) {
    this.name = name;
    this.context = context;
    this.options = options;
    this.attributes = options?.attributes ?? {};
  }

  spanContext(): SpanContext {
    return this._spanContext;
  }

  setAttribute(key: string, value: AttributeValue): this {
    this.attributes = { ...this.attributes, [key]: value };
    return this;
  }

  setAttributes(attributes: Attributes): this {
    this.attributes = { ...this.attributes, ...attributes };
    return this;
  }

  addEvent(name: string): this {
    this.events.push(name);
    return this;
  }

  addLink() {
    return this;
  }

  addLinks() {
    return this;
  }

  setStatus() {
    return this;
  }

  updateName() {
    return this;
  }

  end() {
    return this;
  }

  isRecording() {
    return false;
  }

  recordException() {
    return this;
  }
}

class MockSpanContext implements SpanContext {
  traceId = 'test-trace-id';
  spanId = 'test-span-id';
  traceFlags = 0;
}
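A sketch of how a test might combine the mock with `setTestTracer` (import paths and the surrounding test framework are assumed):

```ts
import { setTestTracer } from './get-tracer';
import { MockTracer } from './mock-tracer';

// Install the mock so getTracer({ isEnabled: true }) returns it:
const tracer = new MockTracer();
setTestTracer(tracer);

// ... exercise code that starts spans through getTracer internally ...

console.log(tracer.jsonSpans);
// e.g. [{ name: 'ai.streamText', attributes: { ... }, events: ['ai.stream.firstChunk'] }]

setTestTracer(undefined); // restore the default tracer afterwards
```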

‎packages/core/package.json

@@ -82,6 +82,7 @@
     "@ai-sdk/svelte": "0.0.14",
     "@ai-sdk/ui-utils": "0.0.11",
     "@ai-sdk/vue": "0.0.13",
+    "@opentelemetry/api": "1.9.0",
     "eventsource-parser": "1.1.2",
     "jsondiffpatch": "0.6.0",
     "json-schema": "0.4.0",

‎pnpm-lock.yaml

+1,123 -138
Some generated files are not rendered by default.
