Comparing changes

base repository: openai/openai-node
base: v4.24.7
head repository: openai/openai-node
compare: v4.25.0
  • 12 commits
  • 13 files changed
  • 3 contributors

Commits on Jan 12, 2024

  1. 27134c0
  2. 041f2a6 (verified: signed with the committer's verified signature, 0x326 John Meyer)
  3. Update README.md
     Just-Moh-it authored Jan 12, 2024 (verified: signed with the committer's verified signature, 0x326 John Meyer)
     8438c2c
  4. 0cfb8a0

Commits on Jan 14, 2024

  1. Merge pull request #629 from Just-Moh-it/patch-1
     Docs: buggy code example
     atty-openai authored Jan 14, 2024
     89cea27

Commits on Jan 21, 2024

  1. af88d16
  2. 33aca55
  3. 74e06ff
  4. fa02e48
  5. 00a2d43
  6. ebc786b
  7. release: 4.25.0
     stainless-bot committed Jan 21, 2024
     b6e7177
.release-please-manifest.json (2 changes: 1 addition & 1 deletion)

@@ -1,3 +1,3 @@
 {
-  ".": "4.24.7"
+  ".": "4.25.0"
 }
CHANGELOG.md (21 changes: 21 additions & 0 deletions)

@@ -1,5 +1,26 @@
 # Changelog
 
+## 4.25.0 (2024-01-21)
+
+Full Changelog: [v4.24.7...v4.25.0](https://github.com/openai/openai-node/compare/v4.24.7...v4.25.0)
+
+### Features
+
+* **api:** add usage to runs and run steps ([#640](https://github.com/openai/openai-node/issues/640)) ([3caa416](https://github.com/openai/openai-node/commit/3caa4166b8abb5bffb4c8be1495834b7f16af32d))
+
+
+### Bug Fixes
+
+* allow body type in RequestOptions to be null ([#637](https://github.com/openai/openai-node/issues/637)) ([c4f8a36](https://github.com/openai/openai-node/commit/c4f8a3698dc1d80439131c5097975d6a5db1b4e2))
+* handle system_fingerprint in streaming helpers ([#636](https://github.com/openai/openai-node/issues/636)) ([f273530](https://github.com/openai/openai-node/commit/f273530ac491300842aef463852821a1a27805fb))
+* **types:** accept undefined for optional client options ([#635](https://github.com/openai/openai-node/issues/635)) ([e48cd57](https://github.com/openai/openai-node/commit/e48cd57931cd0e81a77b55653cb1f663111dd733))
+
+
+### Chores
+
+* **internal:** debug logging for retries; speculative retry-after-ms support ([#633](https://github.com/openai/openai-node/issues/633)) ([fd64971](https://github.com/openai/openai-node/commit/fd64971612d1d7fcbd8a63885d333485bff68ab1))
+* **internal:** update comment ([#631](https://github.com/openai/openai-node/issues/631)) ([e109d40](https://github.com/openai/openai-node/commit/e109d40a5c02c5bf4586e54d92bf0e355d254c1b))
+
 ## 4.24.7 (2024-01-13)
 
 Full Changelog: [v4.24.6...v4.24.7](https://github.com/openai/openai-node/compare/v4.24.6...v4.24.7)
README.md (6 changes: 3 additions & 3 deletions)

@@ -21,7 +21,7 @@ You can import in Deno via:
 <!-- x-release-please-start-version -->
 
 ```ts
-import OpenAI from 'https://deno.land/x/openai@v4.24.7/mod.ts';
+import OpenAI from 'https://deno.land/x/openai@v4.25.0/mod.ts';
 ```
 
 <!-- x-release-please-end -->
@@ -434,8 +434,8 @@ import { fetch } from 'undici'; // as one example
 import OpenAI from 'openai';
 
 const client = new OpenAI({
-  fetch: (url: RequestInfo, init?: RequestInfo): Response => {
-    console.log('About to make request', url, init);
+  fetch: async (url: RequestInfo, init?: RequestInfo): Promise<Response> => {
+    console.log('About to make a request', url, init);
     const response = await fetch(url, init);
     console.log('Got response', response);
     return response;
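
To see the corrected README middleware in one runnable piece, here is a self-contained sketch. The `RequestInit` type on the second parameter is our substitution (the README itself writes `RequestInfo`, which reads like a typo), and a global `fetch` (Node 18+) is assumed:

```ts
import OpenAI from 'openai';

// Sketch of the fixed README example: the wrapper must be `async` and return
// `Promise<Response>`, because it awaits the inner fetch call. The pre-4.25.0
// version omitted both and did not compile.
const client = new OpenAI({
  fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
    console.log('About to make a request', url, init);
    const response = await fetch(url, init); // global fetch; swap in undici if preferred
    console.log('Got response', response);
    return response;
  },
});
```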
build-deno (2 changes: 1 addition & 1 deletion)

@@ -14,7 +14,7 @@ This is a build produced from https://github.com/openai/openai-node – please g
 Usage:
 \`\`\`ts
-import OpenAI from "https://deno.land/x/openai@v4.24.7/mod.ts";
+import OpenAI from "https://deno.land/x/openai@v4.25.0/mod.ts";
 const client = new OpenAI();
 \`\`\`
package.json (2 changes: 1 addition & 1 deletion)

@@ -1,6 +1,6 @@
 {
   "name": "openai",
-  "version": "4.24.7",
+  "version": "4.25.0",
   "description": "The official TypeScript library for the OpenAI API",
   "author": "OpenAI <support@openai.com>",
   "types": "dist/index.d.ts",
src/core.ts (30 changes: 19 additions & 11 deletions)

@@ -417,14 +417,17 @@ export abstract class APIClient {
 
     if (!response.ok) {
       if (retriesRemaining && this.shouldRetry(response)) {
+        const retryMessage = `retrying, ${retriesRemaining} attempts remaining`;
+        debug(`response (error; ${retryMessage})`, response.status, url, responseHeaders);
         return this.retryRequest(options, retriesRemaining, responseHeaders);
       }
 
       const errText = await response.text().catch((e) => castToError(e).message);
       const errJSON = safeJSON(errText);
       const errMessage = errJSON ? undefined : errText;
+      const retryMessage = retriesRemaining ? `(error; no more retries left)` : `(error; not retryable)`;
 
-      debug('response', response.status, url, responseHeaders, errMessage);
+      debug(`response (error; ${retryMessage})`, response.status, url, responseHeaders, errMessage);
 
       const err = this.makeStatusError(response.status, errJSON, errMessage, responseHeaders);
       throw err;
@@ -529,11 +532,21 @@
     retriesRemaining: number,
     responseHeaders?: Headers | undefined,
   ): Promise<APIResponseProps> {
-    // About the Retry-After header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
     let timeoutMillis: number | undefined;
+
+    // Note the `retry-after-ms` header may not be standard, but is a good idea and we'd like proactive support for it.
+    const retryAfterMillisHeader = responseHeaders?.['retry-after-ms'];
+    if (retryAfterMillisHeader) {
+      const timeoutMs = parseFloat(retryAfterMillisHeader);
+      if (!Number.isNaN(timeoutMs)) {
+        timeoutMillis = timeoutMs;
+      }
+    }
+
+    // About the Retry-After header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
     const retryAfterHeader = responseHeaders?.['retry-after'];
-    if (retryAfterHeader) {
-      const timeoutSeconds = parseInt(retryAfterHeader);
+    if (retryAfterHeader && !timeoutMillis) {
+      const timeoutSeconds = parseFloat(retryAfterHeader);
       if (!Number.isNaN(timeoutSeconds)) {
         timeoutMillis = timeoutSeconds * 1000;
       } else {
@@ -543,12 +556,7 @@
 
     // If the API asks us to wait a certain amount of time (and it's a reasonable amount),
     // just do what it says, but otherwise calculate a default
-    if (
-      !timeoutMillis ||
-      !Number.isInteger(timeoutMillis) ||
-      timeoutMillis <= 0 ||
-      timeoutMillis > 60 * 1000
-    ) {
+    if (!(timeoutMillis && 0 <= timeoutMillis && timeoutMillis < 60 * 1000)) {
      const maxRetries = options.maxRetries ?? this.maxRetries;
      timeoutMillis = this.calculateDefaultRetryTimeoutMillis(retriesRemaining, maxRetries);
    }
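
Taken together, the retry hunks above implement a small precedence rule: prefer the millisecond-precision `retry-after-ms` header, fall back to the standard `Retry-After`, and ignore unreasonable values. A standalone sketch of that logic follows; the function name is ours, and the date-parsing fallback fills in the `else` branch that the hunk truncates, so treat that line as an assumption:

```ts
// A sketch of the server-hinted retry delay resolution, not SDK code.
function resolveRetryDelayMillis(headers: Record<string, string | undefined>): number | undefined {
  let timeoutMillis: number | undefined;

  // `retry-after-ms` is non-standard but unambiguous: already in milliseconds.
  const retryAfterMillisHeader = headers['retry-after-ms'];
  if (retryAfterMillisHeader) {
    const timeoutMs = parseFloat(retryAfterMillisHeader);
    if (!Number.isNaN(timeoutMs)) timeoutMillis = timeoutMs;
  }

  // Standard `Retry-After` carries seconds (or an HTTP date), and only
  // applies when the millisecond header did not already produce a value.
  const retryAfterHeader = headers['retry-after'];
  if (retryAfterHeader && !timeoutMillis) {
    const timeoutSeconds = parseFloat(retryAfterHeader);
    timeoutMillis = !Number.isNaN(timeoutSeconds)
      ? timeoutSeconds * 1000
      : Date.parse(retryAfterHeader) - Date.now(); // assumed fallback for the truncated else branch
  }

  // Honor the server's hint only when it is a sane, sub-minute value;
  // the caller falls back to its default backoff otherwise.
  return timeoutMillis && 0 <= timeoutMillis && timeoutMillis < 60 * 1000 ? timeoutMillis : undefined;
}
```

The same file also loosens the `RequestOptions` body type: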
@@ -717,7 +725,7 @@ export type RequestOptions<Req = unknown | Record<string, unknown> | Readable> =
   method?: HTTPMethod;
   path?: string;
   query?: Req | undefined;
-  body?: Req | undefined;
+  body?: Req | null | undefined;
   headers?: Headers | undefined;
 
   maxRetries?: number;
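
This hunk is the type-level half of fix #637. A minimal sketch of what it permits, assuming the `openai/core` import path exposed by the v4 package:

```ts
import type { RequestOptions } from 'openai/core';

// An explicit `null` body now type-checks; previously `body?: Req | undefined`
// rejected it. `/example` is a placeholder path, not a real endpoint.
const opts: RequestOptions = { method: 'post', path: '/example', body: null };
```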
src/index.ts (8 changes: 4 additions & 4 deletions)

@@ -11,12 +11,12 @@ export interface ClientOptions {
   /**
    * Defaults to process.env['OPENAI_API_KEY'].
    */
-  apiKey?: string;
+  apiKey?: string | undefined;
 
   /**
    * Defaults to process.env['OPENAI_ORG_ID'].
    */
-  organization?: string | null;
+  organization?: string | null | undefined;
 
   /**
    * Override the default base URL for the API, e.g., "https://api.example.com/v2/"
@@ -91,8 +91,8 @@ export class OpenAI extends Core.APIClient {
   /**
    * API Client for interfacing with the OpenAI API.
    *
-   * @param {string} [opts.apiKey=process.env['OPENAI_API_KEY'] ?? undefined]
-   * @param {string | null} [opts.organization=process.env['OPENAI_ORG_ID'] ?? null]
+   * @param {string | undefined} [opts.apiKey=process.env['OPENAI_API_KEY'] ?? undefined]
+   * @param {string | null | undefined} [opts.organization=process.env['OPENAI_ORG_ID'] ?? null]
    * @param {string} [opts.baseURL=process.env['OPENAI_BASE_URL'] ?? https://api.openai.com/v1] - Override the default base URL for the API.
    * @param {number} [opts.timeout=10 minutes] - The maximum amount of time (in milliseconds) the client will wait for a response before timing out.
    * @param {number} [opts.httpAgent] - An HTTP agent used to manage HTTP(s) connections.
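
A sketch of the case fix #635 unblocks. Under TypeScript's `exactOptionalPropertyTypes`, assigning a possibly-undefined value to an optional property only compiles when the property's type includes `| undefined`, which is exactly what these hunks add:

```ts
import OpenAI from 'openai';

// `process.env` lookups are typed `string | undefined`, so with
// `exactOptionalPropertyTypes: true` in tsconfig this construction
// requires ClientOptions to accept explicit `undefined`.
const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'],
  organization: process.env['OPENAI_ORG_ID'],
});
```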
src/lib/ChatCompletionStream.ts (132 changes: 81 additions & 51 deletions)

@@ -156,24 +156,28 @@ export class ChatCompletionStream
     for (const { delta, finish_reason, index, logprobs = null, ...other } of chunk.choices) {
       let choice = snapshot.choices[index];
       if (!choice) {
-        snapshot.choices[index] = { finish_reason, index, message: delta, logprobs, ...other };
-        continue;
+        choice = snapshot.choices[index] = { finish_reason, index, message: {}, logprobs, ...other };
       }
 
       if (logprobs) {
         if (!choice.logprobs) {
-          choice.logprobs = logprobs;
-        } else if (logprobs.content) {
-          choice.logprobs.content ??= [];
-          choice.logprobs.content.push(...logprobs.content);
+          choice.logprobs = Object.assign({}, logprobs);
+        } else {
+          const { content, ...rest } = logprobs;
+          Object.assign(choice.logprobs, rest);
+          if (content) {
+            choice.logprobs.content ??= [];
+            choice.logprobs.content.push(...content);
+          }
         }
       }
 
       if (finish_reason) choice.finish_reason = finish_reason;
       Object.assign(choice, other);
 
       if (!delta) continue; // Shouldn't happen; just in case.
-      const { content, function_call, role, tool_calls } = delta;
+      const { content, function_call, role, tool_calls, ...rest } = delta;
+      Object.assign(choice.message, rest);
 
       if (content) choice.message.content = (choice.message.content || '') + content;
       if (role) choice.message.role = role;
@@ -190,8 +194,9 @@
       }
       if (tool_calls) {
         if (!choice.message.tool_calls) choice.message.tool_calls = [];
-        for (const { index, id, type, function: fn } of tool_calls) {
+        for (const { index, id, type, function: fn, ...rest } of tool_calls) {
           const tool_call = (choice.message.tool_calls[index] ??= {});
+          Object.assign(tool_call, rest);
           if (id) tool_call.id = id;
           if (type) tool_call.type = type;
           if (fn) tool_call.function ??= { arguments: '' };
@@ -248,59 +253,72 @@
 }
 
 function finalizeChatCompletion(snapshot: ChatCompletionSnapshot): ChatCompletion {
-  const { id, choices, created, model } = snapshot;
+  const { id, choices, created, model, system_fingerprint, ...rest } = snapshot;
   return {
+    ...rest,
     id,
-    choices: choices.map(({ message, finish_reason, index, logprobs }): ChatCompletion.Choice => {
-      if (!finish_reason) throw new OpenAIError(`missing finish_reason for choice ${index}`);
-      const { content = null, function_call, tool_calls } = message;
-      const role = message.role as 'assistant'; // this is what we expect; in theory it could be different which would make our types a slight lie but would be fine.
-      if (!role) throw new OpenAIError(`missing role for choice ${index}`);
-      if (function_call) {
-        const { arguments: args, name } = function_call;
-        if (args == null) throw new OpenAIError(`missing function_call.arguments for choice ${index}`);
-        if (!name) throw new OpenAIError(`missing function_call.name for choice ${index}`);
+    choices: choices.map(
+      ({ message, finish_reason, index, logprobs, ...choiceRest }): ChatCompletion.Choice => {
+        if (!finish_reason) throw new OpenAIError(`missing finish_reason for choice ${index}`);
+        const { content = null, function_call, tool_calls, ...messageRest } = message;
+        const role = message.role as 'assistant'; // this is what we expect; in theory it could be different which would make our types a slight lie but would be fine.
+        if (!role) throw new OpenAIError(`missing role for choice ${index}`);
+        if (function_call) {
+          const { arguments: args, name } = function_call;
+          if (args == null) throw new OpenAIError(`missing function_call.arguments for choice ${index}`);
+          if (!name) throw new OpenAIError(`missing function_call.name for choice ${index}`);
+          return {
+            ...choiceRest,
+            message: { content, function_call: { arguments: args, name }, role },
+            finish_reason,
+            index,
+            logprobs,
+          };
+        }
+        if (tool_calls) {
+          return {
+            ...choiceRest,
+            index,
+            finish_reason,
+            logprobs,
+            message: {
+              ...messageRest,
+              role,
+              content,
+              tool_calls: tool_calls.map((tool_call, i) => {
+                const { function: fn, type, id, ...toolRest } = tool_call;
+                const { arguments: args, name, ...fnRest } = fn || {};
+                if (id == null)
+                  throw new OpenAIError(`missing choices[${index}].tool_calls[${i}].id\n${str(snapshot)}`);
+                if (type == null)
+                  throw new OpenAIError(`missing choices[${index}].tool_calls[${i}].type\n${str(snapshot)}`);
+                if (name == null)
+                  throw new OpenAIError(
+                    `missing choices[${index}].tool_calls[${i}].function.name\n${str(snapshot)}`,
+                  );
+                if (args == null)
+                  throw new OpenAIError(
                    `missing choices[${index}].tool_calls[${i}].function.arguments\n${str(snapshot)}`,
+                  );
+
+                return { ...toolRest, id, type, function: { ...fnRest, name, arguments: args } };
+              }),
+            },
+          };
+        }
         return {
-          message: { content, function_call: { arguments: args, name }, role },
+          ...choiceRest,
+          message: { ...messageRest, content, role },
           finish_reason,
           index,
           logprobs,
         };
-      }
-      if (tool_calls) {
-        return {
-          index,
-          finish_reason,
-          logprobs,
-          message: {
-            role,
-            content,
-            tool_calls: tool_calls.map((tool_call, i) => {
-              const { function: fn, type, id } = tool_call;
-              const { arguments: args, name } = fn || {};
-              if (id == null)
-                throw new OpenAIError(`missing choices[${index}].tool_calls[${i}].id\n${str(snapshot)}`);
-              if (type == null)
-                throw new OpenAIError(`missing choices[${index}].tool_calls[${i}].type\n${str(snapshot)}`);
-              if (name == null)
-                throw new OpenAIError(
-                  `missing choices[${index}].tool_calls[${i}].function.name\n${str(snapshot)}`,
-                );
-              if (args == null)
-                throw new OpenAIError(
-                  `missing choices[${index}].tool_calls[${i}].function.arguments\n${str(snapshot)}`,
-                );
-
-              return { id, type, function: { name, arguments: args } };
-            }),
-          },
-        };
-      }
-      return { message: { content: content, role }, finish_reason, index, logprobs };
-    }),
+      },
+    ),
     created,
     model,
     object: 'chat.completion',
+    ...(system_fingerprint ? { system_fingerprint } : {}),
   };
 }
 
@@ -333,6 +351,18 @@ export interface ChatCompletionSnapshot {
    * The model to generate the completion.
    */
   model: string;
+
+  // Note we do not include an "object" type on the snapshot,
+  // because the object is not a valid "chat.completion" until finalized.
+  // object: 'chat.completion';
+
+  /**
+   * This fingerprint represents the backend configuration that the model runs with.
+   *
+   * Can be used in conjunction with the `seed` request parameter to understand when
+   * backend changes have been made that might impact determinism.
+   */
+  system_fingerprint?: string;
 }
 
 export namespace ChatCompletionSnapshot {
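
A sketch of what fix #636 enables end to end: the snapshot now accumulates `system_fingerprint` from streamed chunks, and `finalizeChatCompletion` copies it onto the finished completion. The model name and prompt below are placeholders:

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const stream = client.beta.chat.completions.stream({
    model: 'gpt-4-1106-preview', // placeholder; any chat model that reports a fingerprint
    messages: [{ role: 'user', content: 'Say hello' }],
  });

  const completion = await stream.finalChatCompletion();
  // Present when the API returns it; useful alongside the `seed` request
  // parameter to detect backend configuration changes.
  console.log(completion.system_fingerprint);
}

main();
```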
src/resources/beta/threads/runs/runs.ts (27 changes: 27 additions & 0 deletions)

@@ -264,6 +264,12 @@ export interface Run {
    * this run.
    */
   tools: Array<Run.AssistantToolsCode | Run.AssistantToolsRetrieval | Run.AssistantToolsFunction>;
+
+  /**
+   * Usage statistics related to the run. This value will be `null` if the run is not
+   * in a terminal state (i.e. `in_progress`, `queued`, etc.).
+   */
+  usage: Run.Usage | null;
 }
 
 export namespace Run {
@@ -332,6 +338,27 @@
    */
    type: 'function';
  }
+
+  /**
+   * Usage statistics related to the run. This value will be `null` if the run is not
+   * in a terminal state (i.e. `in_progress`, `queued`, etc.).
+   */
+  export interface Usage {
+    /**
+     * Number of completion tokens used over the course of the run.
+     */
+    completion_tokens: number;
+
+    /**
+     * Number of prompt tokens used over the course of the run.
+     */
+    prompt_tokens: number;
+
+    /**
+     * Total number of tokens used (prompt + completion).
+     */
+    total_tokens: number;
+  }
 }
 
 export interface RunCreateParams {
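
A sketch of reading the new `usage` field from feature #640. The thread and run IDs are placeholders, and `usage` stays `null` until the run reaches a terminal state:

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  // IDs are illustrative; use a real thread/run from your account.
  const run = await client.beta.threads.runs.retrieve('thread_abc123', 'run_abc123');

  if (run.usage) {
    const { prompt_tokens, completion_tokens, total_tokens } = run.usage;
    console.log(`prompt=${prompt_tokens} completion=${completion_tokens} total=${total_tokens}`);
  } else {
    console.log('Run not yet in a terminal state; usage is null.');
  }
}

main();
```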