
Error in useChat when parsing stream causing empty messages from AI #1347

Closed
Ladekarl opened this issue Apr 14, 2024 · 11 comments

Comments

@Ladekarl

Description

I was following the docs for implementing useChat, but the responses from the stream would not show up in the messages array returned from useChat. I would only see the user's prompts in there, not the AI responses.

When I inspected the response from the chat endpoint everything seemed to work fine and it was returning the correctly streamed AI response.
By inserting a log statement in the onError callback in useChat I was able to get the following message:

error Error: Failed to parse stream string. No separator found.
    at parseStreamPart (index.mjs:173:15)
    at Array.map (<anonymous>)
    at readDataStream (index.mjs:217:52)
    at async parseComplexResponse (index.mjs:247:36)
    at async callChatApi (index.mjs:374:12)
    at async getStreamedResponse (index.mjs:541:12)
    at async processChatStream (index.mjs:392:46)
    at async eval (index.mjs:642:13)

Finally, I tried reverting to an older version, where everything worked fine. Bumping versions one at a time, I found that the error first appears in 3.0.20.
Reverting to 3.0.19 fixed the issue.
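For context, the parser that throws here expects each streamed part to carry a `:` separator between a type code and a JSON payload (e.g. `0:"Hello"`). A minimal sketch of that separator check, simplified and not the SDK's actual implementation:

```typescript
// Simplified sketch of the check behind the error above (not the SDK's
// actual code). In the >= 3.0.20 wire format, each streamed part is
// `<typeCode>:<json>`, e.g. `0:"Hello"`.
function parseStreamPart(line: string): { code: string; value: unknown } {
  const separatorIndex = line.indexOf(':');
  if (separatorIndex === -1) {
    // A raw text chunk (e.g. `Hello` from an older server) has no separator.
    throw new Error('Failed to parse stream string. No separator found.');
  }
  return {
    code: line.slice(0, separatorIndex),
    value: JSON.parse(line.slice(separatorIndex + 1)),
  };
}
```

A plain text stream trips the `No separator found` branch on every chunk, which matches the error logged above.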

Code example

API route

// Imports were not included in the original report; the module paths below
// are assumed from the LangChain/Pinecone/Cohere packages current at the time.
// Prompt templates and helpers (condenseQuestionPrompt, answerPrompt,
// combineDocumentsFn, formatVercelMessages) are defined elsewhere.
import { NextRequest, NextResponse } from 'next/server';
import { StreamingTextResponse } from 'ai';
import { ChatOpenAI } from '@langchain/openai';
import { RunnableSequence } from '@langchain/core/runnables';
import { StringOutputParser, BytesOutputParser } from '@langchain/core/output_parsers';
import { Pinecone } from '@pinecone-database/pinecone';
import { PineconeStore } from '@langchain/pinecone';
import { CohereEmbeddings } from '@langchain/cohere';

export async function POST(req: NextRequest) {
  try {
    const body = await req.json();
    const messages = body.messages ?? [];
    const previousMessages = messages.slice(0, -1);
    const currentMessageContent = messages[messages.length - 1].content;

    const model = new ChatOpenAI({
      modelName: 'gpt-3.5-turbo-1106',
      temperature: 0.2,
      apiKey: OPENAI_API_KEY,
      streaming: true
    });

    const standaloneQuestionChain = RunnableSequence.from([
      condenseQuestionPrompt,
      model,
      new StringOutputParser()
    ]);

    const pinecone = new Pinecone({ apiKey: PINECONE_API_KEY });
    const pineconeIndex = pinecone.Index(PINECONE_INDEX_NAME);

    const vectorstore = await PineconeStore.fromExistingIndex(
      new CohereEmbeddings({ model: 'multilingual-22-12' }),
      { pineconeIndex }
    );

    const retriever = vectorstore.asRetriever();

    const retrievalChain = retriever.pipe(combineDocumentsFn);

    const answerChain = RunnableSequence.from([
      {
        context: RunnableSequence.from([
          input => input.question,
          retrievalChain
        ]),
        chat_history: input => input.chat_history,
        question: input => input.question
      },
      answerPrompt,
      model
    ]);

    const conversationalRetrievalQAChain = RunnableSequence.from([
      {
        question: standaloneQuestionChain,
        chat_history: input => input.chat_history
      },
      answerChain,
      new BytesOutputParser()
    ]);

    const stream = await conversationalRetrievalQAChain.stream({
      question: currentMessageContent,
      chat_history: formatVercelMessages(previousMessages)
    });

    return new StreamingTextResponse(stream);
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}

Client component

'use client';

import { useChat } from 'ai/react';
import type { FormEvent } from 'react';

import { Box, Button, Container, TextField, Typography } from '@mui/material';

export default function Chat() {
  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    isLoading: chatEndpointIsLoading
  } = useChat();

  async function sendMessage(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    if (!messages.length) {
      await new Promise(resolve => setTimeout(resolve, 300));
    }
    if (chatEndpointIsLoading) {
      return;
    }
    handleSubmit(e);
  }

  return (
    <Container>
      {messages.length === 0 ? 'No messages' : ''}

      {messages.length > 0
        ? [...messages].reverse().map(m => {
            return (
              <Box key={m.id}>
                <Typography>{m.content}</Typography>
              </Box>
            );
          })
        : ''}
      <Container>
        <form onSubmit={sendMessage}>
          <TextField value={input} onChange={handleInputChange} />
          <Button type="submit">Send</Button>
        </form>
      </Container>
    </Container>
  );
}

Additional context

No response

@lgrammel
Collaborator

Duplicate of #1316

See #1316 (comment) for upgrade instructions.

@mauriceackel

mauriceackel commented Apr 15, 2024

I don't understand why this is resolved as a duplicate. The other issue seems to be related to parsing the StreamingTextResponse manually, while this is about using useChat.

I just installed the latest version (3.0.22) and removed my package-lock.json, node_modules folder, and .next build folder. Even after that, I got the same error as above (i.e. error Error: Failed to parse stream string. No separator found.) when using useChat together with StreamingTextResponse.

@lgrammel lgrammel reopened this Apr 15, 2024
@johnson-liang

johnson-liang commented Apr 15, 2024

Also got Failed to parse stream string. on the error callback.

It turns out that the versions of the ai package in our API (ai@3.0.13 on an Express.js server using streamToResponse) and UI (ai@3.0.21 in a React application with useChat) do not match.

It seems there is a breaking change in 3.0.20: the API and UI must both use ai >= 3.0.20, or both use ai < 3.0.20.

We really did not expect such an incompatibility between patch versions; it took us hours to debug and get here.
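Given the mismatch described above, one simple guard is to pin the same exact ai version in both the server's and the client's package.json (the version number here is illustrative):

```json
{
  "dependencies": {
    "ai": "3.0.22"
  }
}
```

Using an exact version (no ^ range) on both sides keeps a lockfile refresh in one project from silently pulling the two past the 3.0.20 boundary.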

@mauriceackel

Okay, so it seems like the default mode for streaming responses is to include stream data now. I was able to get streaming to work with 3.0.22 using the following setup:

// Imports assumed (from the 'ai' package); `pipeline` is the LangChain chain.
import { StreamData, StreamingTextResponse, createStreamDataTransformer } from 'ai';

const data = new StreamData();

const stream = await pipeline.stream(
  question,
  {
    callbacks: [{
      handleChainEnd: (_, __, parentRunId) => {
        if (parentRunId === undefined) data.close();
      },
    }],
  },
);

return new StreamingTextResponse(
  stream.pipeThrough(createStreamDataTransformer()),
  undefined,
  data,
);

@lgrammel, so we still need to manually close the StreamData object?

@lgrammel
Collaborator

lgrammel commented Apr 15, 2024

@mauriceackel in your first example, langchain returns a text stream that gets forwarded. I'm working on making useChat/useCompletion compatible with it again, see #1350

In your current example, this might be sufficient:

return new StreamingTextResponse(
  stream.pipeThrough(createStreamDataTransformer()),
);

i.e. you might not need data, the important part is the stream transformation
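Conceptually, the transform in the snippet above just wraps each raw text chunk as a `0:<json>` part so the client-side parser finds its separator again. A self-contained sketch of that idea, for illustration only (createStreamDataTransformer from the 'ai' package is the real API to use):

```typescript
// Illustrative-only sketch of what the stream data transform does: wrap
// each raw text chunk as a `0:<json>\n` part so the >= 3.0.20 client
// parser finds its `:` separator. Use createStreamDataTransformer()
// from the 'ai' package in real code.
function makeTextToDataTransformer(): TransformStream<string, string> {
  return new TransformStream<string, string>({
    transform(chunk, controller) {
      controller.enqueue(`0:${JSON.stringify(chunk)}\n`);
    },
  });
}
```

Piping a plain text stream through such a transform is exactly what makes it parseable again by the newer useChat client.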

@Ladekarl
Author

@mauriceackel in your first example, langchain returns a text stream that gets forwarded. I'm working on making useChat/useCompletion compatible with it again, see #1350

In your current example, this might be sufficient:

return new StreamingTextResponse(
  stream.pipeThrough(createStreamDataTransformer()),
);

i.e. you might not need data, the important part is the stream transformation

This worked for me.
Thanks 👍

@pedrocarnevale

Okay, so it seems like the default mode for streaming responses is to include stream data now. I was able to get streaming to work with 3.0.22 using the following setup:

const data = new StreamData();

const stream = await pipeline.stream(
  question,
  {
    callbacks: [{
      handleChainEnd: (_, __, parentRunId) => {
        if (parentRunId === undefined) data.close();
      },
    }],
  },
);

return new StreamingTextResponse(
  stream.pipeThrough(createStreamDataTransformer()),
  undefined,
  data,
);

@lgrammel, so we still need to manually close the StreamData object?

With this solution, this happens to me

[screenshot attachment]

@connorblack

stream.pipeThrough(createStreamDataTransformer()),

This works for me; no need to stub StreamData.

@lgrammel
Collaborator

@pedrocarnevale This can happen when you use an older version of the AI SDK on the client. Please make sure you use >= 3.0.20 on the client (ideally the client version should match the server version)

@pedrocarnevale

@lgrammel I have this in my package.json file: "ai": "^3.0.22". This "ai" package version refers to the AI SDK on the server, right? How do I know the version of the AI SDK on the client in my Next.js project?

@lgrammel
Collaborator

@lgrammel I have this in my package.json file: "ai": "^3.0.22". This "ai" package version refers to the AI SDK on the server, right? How do I know the version of the AI SDK on the client in my Next.js project?

It depends on your project. If you use e.g. Next.js, there is most likely only one package.json file for both server and client code, so the same version is used on both sides.


7 participants