- Allow `<Image>` fallback content to be customized.
- Add support for passing `<Image>` to OpenAI models.
- Improve rendering performance:
  - Change `<ShrinkConversation>` to cache token costs when remeasuring the same elements
  - Reduce performance impact of debug logging
- Fix incorrect unwrapping of JSX array `children`.
- Fix double Anthropic requests
- Add new OpenAI models
- Added support for Claude 3 and the Messages API in `AnthropicChatModel`
- Relaxed function calling check for `OpenAIChatModel`
- Changed the `FunctionDefinition` and `Tool` types to explicit JSON Schema. Zod types must now be explicitly converted to JSON Schema, and the `required` semantics now match JSON Schema.
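  For illustration, a minimal migration sketch: it assumes `parameters` was previously a Zod schema and uses the `zod-to-json-schema` package for the conversion; the exact `Tool` shape shown (`description`/`parameters`/`func`) is illustrative, so check your version's type definitions.

  ```tsx
  import { z } from 'zod';
  import { zodToJsonSchema } from 'zod-to-json-schema';

  // Previously, a Zod schema could be passed as `parameters` directly.
  const weatherParams = z.object({
    city: z.string().describe('The city to look up'),
  });

  // Now the schema must be converted to JSON Schema explicitly, and
  // `required` follows JSON Schema semantics (an array of property names).
  const tools = {
    lookupWeather: {
      description: 'Look up the current weather for a city',
      parameters: zodToJsonSchema(weatherParams),
      func: async ({ city }: { city: string }) => `72°F and sunny in ${city}`,
    },
  };
  ```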
- Fix bug where partially streamed unicode characters (e.g. Chinese) would cause an error in OpenAI function calls.
- Add `openai.finish_reason` span attribute for `OpenAIChatModel`
- Improved completion/prompt logging to include explicit message text
- Fix bug where memoized components could duplicate content
- Refactor `<Converse>` to allow rounds to progress in parallel when content allows
- Add new `batchFrames` render option to coalesce ready frames
- Fix `js-tiktoken` import that fails on 1.0.8.
- In the `Sidekick` component:
  - Remove the MDX repair attempt.
  - Reduce standard system prompt size.
  - `Sidekick` can now interject with filler content (e.g. "Let me check on that.") when the model requests a function call.
- Update OpenAI client to 4.16.0
- Add support for OpenAI parallel function calls
- Update the model enums in `ai-jsx/lib/openai`
- In the OpenTelemetry logger, ensure that SeverityNumber is set.
- In the `Sidekick` component:
  - Put the user system messages before the built-in system messages.
  - Make the MDX formatting logic conditional on using MDX
  - Accept `children` as the conversation to act on (defaults to `<ConversationHistory>`)
- Fix a bug in `LimitToValidMdx` where a whitespace character was initially yielded.
- `Sidekick` now accepts a `useCitationCard` prop, which controls whether it will emit `<Citation />` MDX components.
- `Sidekick` is no longer locked to GPT-4-32k. Now, it'll run with whatever model is set by the AI.JSX context.
  - If you pass tools, make sure that the model supports native function calling, or you'll get an error.
- Fix bug in Anthropic's `ChatCompletion` where it was too aggressive in checking that `tools` don't exist.
- Remove `finalSystemMessageBeforeResponse` from the `Sidekick` component. The `systemMessage` is now always given to the model as the last part of the context window.
- Remove other cruft from the built-in Sidekick system message.
- Remove `Card` component from the Sidekick's possible output MDX components.
- Remove `Prompt` component.
- Remove `role` prop from the `Sidekick` component.
- Fix issue with how the SDK handles request errors.
- Enable Sidekicks to introduce themselves at the start of a conversation.
- Fix an issue where empty strings in conversational prompts cause errors to be thrown.
- Modified `lib/openai` to preload the tokenizer to avoid a stall on first use
- Fixed an issue where `debug(component)` would throw an exception if a component had a prop that could not be JSON-serialized.
- Modified `Sidekick` to add the following options:
  - `outputFormat`: `text/mdx`, `text/markdown`, or `text/plain`
  - `includeNextStepsRecommendations`: `boolean`
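  A hedged usage sketch of the new options; the import path is an assumption, and the rest of the required Sidekick configuration (system message, tools, etc.) is elided:

  ```tsx
  import { Sidekick } from 'ai-jsx/sidekick';

  // Only the two new props are shown; other Sidekick configuration
  // is assumed to be provided elsewhere in your app.
  <Sidekick
    outputFormat="text/markdown" // one of 'text/mdx' | 'text/markdown' | 'text/plain'
    includeNextStepsRecommendations={false}
  />;
  ```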
- Added components for Automatic Speech Recognition (ASR) in `lib/asr/asr.tsx`.
- Added components for Text-to-Speech (TTS) in `lib/tts/tts.tsx`.
- ASR providers include Deepgram, Speechmatics, AssemblyAI, Rev AI, Soniox, and Gladia.
- TTS providers include Google Cloud, AWS, Azure, and ElevenLabs.
- Fixed a bug where passing an empty `functionDefinitions` prop to `<OpenAIChatModel>` would cause an error.
- Added the ability to set Anthropic/OpenAI clients without setting the default model
- Increase the default token limit for automatic API response trimming.
- API token limiting: long API responses in `Sidekick` are now automatically truncated. If this happens, the response is chunked and the LLM is given a new function, `loadBySimilarity`, to query the last function response.
- Changed `<UseTools>` to allow AI.JSX components to be tools.
- Added `FixieAPIConfiguration` context.
- Changed `FixieCorpus` to take a `FixieAPIConfiguration`.
- Added the `FixieCorpus.createTool` helper to create a tool that consults a Fixie corpus.
- Updated default URL for `<FixieCorpus>` to `api.fixie.ai`.
- Updated DocsQA battery to use the new version of the Fixie corpus REST API.
- Updated DocsQA battery to use the new Fixie corpus REST API.
- Add Sidekick component. Sidekicks are a high-level abstraction for combining tool use, docs QA, and generated UI.
- Change `MdxSystemMessage` to no longer automatically infer component names from the `usageExamples`. Instead, `usageExamples` is now a plain string, and component names are passed separately via the `componentNames` prop.
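  A sketch of the new shape; the import path and the `Citation` component name are illustrative assumptions:

  ```tsx
  import { MdxSystemMessage } from 'ai-jsx/react/jit-ui/mdx';

  // Component names are now listed explicitly instead of being
  // inferred from `usageExamples`, which is a plain string.
  <MdxSystemMessage
    componentNames={['Citation']}
    usageExamples={'To cite a source, use: <Citation title="..." href="..." />'}
  />;
  ```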
- Change the `<ConversationHistory>` component to render to a node from a `ConversationHistoryContext` provider, rather than from OpenAI message types.
- Replace usage of `openai-edge` with the `openai` v4 package.
- Updated the `<FixieCorpus>` component to use the new Fixie Corpus REST API. This is currently only available to users on `beta.fixie.ai` but will be brought to `app.fixie.ai` soon.
- Memoized streaming elements no longer replay their entire stream with every render. Instead, they start with the last rendered frame.
- Elements returned by partial rendering are automatically memoized to ensure they only render once.
- Streaming components can no longer yield promises or generators. Only `Node`s or `AI.AppendOnlyStream` values can be yielded.
- The `AI.AppendOnlyStream` value is now a function that can be called with a non-empty value to append.
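  A sketch of a streaming component under these rules, as this entry describes them; treat the exact semantics as an assumption and check the current docs:

  ```tsx
  import * as AI from 'ai-jsx';

  // May only yield Nodes or AI.AppendOnlyStream values: yielding the
  // AI.AppendOnlyStream marker opts in to append-only streaming, and
  // calling AI.AppendOnlyStream(value) appends `value` to the stream.
  async function* CountUp() {
    yield AI.AppendOnlyStream;
    for (const word of [' one', ' two', ' three']) {
      yield AI.AppendOnlyStream(word);
    }
    return AI.AppendOnlyStream;
  }
  ```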
- In the OpenTelemetry integration:
  - Add prompt/completion attributes with token counts for `<OpenAIChatModel>`. This replaces the `tokenCount` attribute added in 0.9.1.
  - By default, only emit spans for `async` components.
- Add `tokenCount` field to OpenTelemetry-emitted spans. Now, if you're emitting via OpenTelemetry (e.g. to Datadog), the spans will tell you how many tokens each component resolved to. This is helpful for answering questions like "how big is my system message?".
- Breaking: Remove prompt-engineered `UseTools`. Previously, if you called `UseTools` with a model that doesn't support native function calling (e.g. Anthropic), `UseTools` would use a polyfilled version that uses prompt engineering to simulate function calling. However, this wasn't reliable enough in practice, so we've dropped it.
- Fix issue where `gpt-4-32k` didn't accept functions.
- Fix issue where Anthropic didn't permit function calls/responses in its conversation history.
- Add Anthropic's claude-2 models as valid chat model types.
- Fix issue where Anthropic prompt formatting had extra `:`s.
- Fix issue where OpenTelemetry failures were not being properly attributed.
- Add OpenTelemetry integration for AI.JSX render tracing, which can be enabled by setting the `AIJSX_ENABLE_OPENTELEMETRY` environment variable.
- Throw validation errors when invalid elements (like bare strings) are passed to `ChatCompletion` components.
- Reduce logspam from memoization.
- Fix issue where the `description` field wasn't passed to function definitions.
- Add support for token-based conversation shrinking via `<Shrinkable>`.
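  A sketch of what this enables; the prop names (`importance`, `replacement`) and import paths are assumptions based on the docs of this era:

  ```tsx
  import { ChatCompletion, SystemMessage, UserMessage } from 'ai-jsx/core/completion';
  import { Shrinkable } from 'ai-jsx/core/conversation';

  // When the conversation exceeds the model's token budget, the
  // lowest-importance Shrinkable content is dropped (or swapped
  // for its replacement) first.
  <ChatCompletion>
    <SystemMessage>You are a helpful assistant.</SystemMessage>
    <Shrinkable importance={0} replacement={<UserMessage>(earlier messages omitted)</UserMessage>}>
      <UserMessage>...a long, less important history...</UserMessage>
    </Shrinkable>
    <UserMessage>What is the latest status?</UserMessage>
  </ChatCompletion>;
  ```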
- Move `MdxChatCompletion` to be `MdxSystemMessage`. You can now put this `SystemMessage` in any `ChatCompletion` to prompt the model to give MDX output.
- Update readme.
- Add `Converse` and `ShowConversation` components to facilitate streaming conversations.
- Change `ChatCompletion` components to render to `<AssistantMessage>` and `<FunctionCall>` elements.
- Move `memo` to `AI.RenderContext` to ensure that memoized components render once, even if placed under a different context provider.
- Add `AIJSX_LOG` environment variable to control log level and output location.
- Update `<UseTools>` to take a complete conversation as a `children` prop, rather than as a string `query` prop.
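  A before/after sketch; the `tools` record shape here is illustrative, so see the `UseTools` docs of your version for the exact `Tool` type:

  ```tsx
  import { UseTools, Tool } from 'ai-jsx/batteries/use-tools';
  import { SystemMessage, UserMessage } from 'ai-jsx/core/completion';

  const tools: Record<string, Tool> = {
    checkBalance: {
      description: 'Look up the balance of the current account',
      parameters: {}, // schema elided for brevity
      func: async () => '$42.00',
    },
  };

  // Before: <UseTools tools={tools} query="What is my balance?" />
  // Now the full conversation is passed as children:
  <UseTools tools={tools}>
    <SystemMessage>You can look up account balances.</SystemMessage>
    <UserMessage>What is my balance?</UserMessage>
  </UseTools>;
  ```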
- Update `toTextStream` to accept a `logger`, so you can now see log output when you're running AI.JSX on the server and outputting to a stream. See AI + UI and Observability.
- Add `MdxChatCompletion`, so your model calls can now output MDX using your components.
- Add Llama2 support.
- Updated `readme.md` in the `ai-jsx` package to fix bugs on the npm landing page.
- Make JIT UI stream rather than appear all at once.
- Use `openai-edge` instead of `@nick.heiner/openai-edge`
- Update logging to log the output of every component.
- Update `UseTools` to use OpenAI function calls if you're using a model that supports them.
- `ImageGen` now produces an `Image` object which will render to a URL in the command line, but returns an `<img />` tag when used in the browser (React/Next).
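  For example, a sketch of command-line usage; the import path is an assumption:

  ```tsx
  import * as AI from 'ai-jsx';
  import { ImageGen } from 'ai-jsx/core/image-gen';

  // In a terminal this renders to a URL string; in React/Next the
  // same component yields an <img /> tag instead.
  const url = await AI.createRenderContext().render(
    <ImageGen>An impressionist painting of a robot reading a changelog</ImageGen>
  );
  console.log(url);
  ```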
- Add ability to stream UI components in the "UI on the client; AI.JSX on the server" architecture pattern.
- Add ability to do append-only text streaming.
- Update `UseTools` to match OpenAI function syntax.
- Add `ConversationHistory` component.
- Improve legibility of error messages + overall error handling.
- DocsQA: add ability to use a Fixie corpus.
- Fix build system issue that caused problems for some consumers.
- Remove need for projects consuming AI.JSX to set `"moduleResolution": "esnext"` in their `tsconfig`.
- Add Weights & Biases integration
- Fix how env vars are read.
- When reading env vars, read from `VAR_NAME` and `REACT_APP_VAR_NAME`. This makes your env vars available to projects using `create-react-app`.
- Add OpenAI client proxy.
- Initial release