
Potential bug in ai.jsx #420

Open
adriatic opened this issue Oct 19, 2023 · 5 comments
@adriatic
Contributor

adriatic commented Oct 19, 2023

I deployed the sample https://docs.ai-jsx.com/sidekicks/sidekicks-quickstart to the cloud and asked the question "show me what can you help with". This resulted in the following error from lookUpGitHubKnowledgeBase:

This model response had an error: "Error during generation: AI.JSX(1032): OpenAI API Error: 400 This model's maximum context length is 4097 tokens. However, your messages resulted in 6069 tokens (5970 in the messages, 99 in the functions). Please reduce the length of the messages or functions. It's unclear whether this was caused by a bug in AI.JSX, in your code, or is an expected runtime error.

I have no doubt that I exceeded the token limit; I am reporting it just to be safe, in case it is actually a bug.

Added later: I reran with a different (but similar) question, "show me what can you help with", and this time everything went fine:

I can help you with various tasks related to Git and GitHub, including:

- Providing guidance on Git and GitHub workflows.
- Assisting with creating, cloning, and initializing repositories.
- Explaining how to commit changes and manage branches in Git.
- Guiding you through the process of creating and merging pull requests on GitHub.
- Helping you resolve merge conflicts in Git.
- Assisting with configuring and using Git remotes.
- Explaining how to collaborate with others using Git and GitHub.
- Providing information on Git and GitHub APIs and how to use them.
If you have any specific questions or need assistance with a particular task, please let me know, and I'll be glad to help you!

Perhaps this LLM is too smart for me: rerunning the first question, which had previously failed with "This model's maximum context length is 4097 tokens", now responded fine.

Note: I am fascinated by the difference between the answers to the first and second questions. Debugging this seems like a nightmare 😄
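One client-side way to avoid the context-length error above is to trim the oldest conversation messages before each request so the estimated token count stays under the model's limit. A minimal sketch, assuming a rough ~4-characters-per-token heuristic rather than a real tokenizer; `estimateTokens` and `trimToBudget` are hypothetical helper names, not part of AI.JSX's actual API:

```typescript
type Message = { role: string; content: string };

// Rough heuristic (an assumption, not an exact tokenizer): ~4 chars per token.
function estimateTokens(messages: Message[]): number {
  return messages.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);
}

// Drop the oldest non-system messages until the estimate fits the budget.
function trimToBudget(messages: Message[], maxTokens: number): Message[] {
  const trimmed = [...messages];
  while (estimateTokens(trimmed) > maxTokens) {
    const idx = trimmed.findIndex((m) => m.role !== "system");
    if (idx === -1) break; // only system messages left; nothing more to drop
    trimmed.splice(idx, 1);
  }
  return trimmed;
}
```

A real implementation would use the model's actual tokenizer (e.g. tiktoken for OpenAI models) and might summarize dropped messages instead of discarding them.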

@zkoch
Contributor

zkoch commented Oct 20, 2023

Which model are you using by default?

@adriatic
Contributor Author

adriatic commented Oct 20, 2023

I followed the Quickstart verbatim, meaning that the model is GitHub. The only "personal" choice I made was to select several fields from GitHub. I do not remember the settings I configured on GitHub to allow this sample to access GitHub.

@adriatic
Contributor Author

I tried to run that same instance with the prompt "What questions can I ask".

I got back bullet points, and next I asked "how to review and approve a pull request",

which resulted in:

Got response from lookUpGitHubKnowledgeBase:

AI.JSX(2004): Fixie API call to https://api.fixie.ai/api/v1/corpora/286b5a7d-2bcd-483f-aef5-acf157c5aea5:query returned status 500: .

This is a runtime error that's expected to occur with some frequency. It may go away on retry. It may be made more likely by errors in your code, or in AI.JSX.
    
Need help? 
* Discord: https://discord.com/channels/1065011484125569147/1121125525142904862
* Docs: https://docs.ai-jsx.com/
* GH: https://github.com/fixie-ai/ai-jsx/issues

Note that I keep harping on this issue because it is possible that I found a bug 😄
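Since the error message itself says the 500 "may go away on retry", a caller can wrap the API call in a retry loop with exponential backoff. A minimal sketch, assuming a hypothetical `attemptCall` callback that returns an HTTP status code; this is not AI.JSX's or Fixie's actual retry API:

```typescript
// Retry transient 5xx responses with exponential backoff.
// attemptCall is a hypothetical callback returning an HTTP status code.
async function callWithRetry(
  attemptCall: () => Promise<number>,
  retries = 3,
  baseDelayMs = 100,
): Promise<number> {
  for (let attempt = 0; ; attempt++) {
    const status = await attemptCall();
    // Success (or a non-retryable client error), or retries exhausted: return.
    if (status < 500 || attempt >= retries) return status;
    // Transient server error: wait 100ms, 200ms, 400ms, ... then try again.
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
}
```

Production code would typically also add jitter to the delay and cap the total wait time.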

@benlower
Contributor

I think we have partially addressed some of the confusion we were creating in the Quickstart with this PR, which spells out the various types of docs collections and explains that there is a public collection for Git/GitHub.

Have you still been seeing the error WRT max tokens?

@adriatic
Contributor Author

adriatic commented Nov 1, 2023

No, I have not tried anything else; I will try more tomorrow.

When will this fix be "live"? Is it already? (I always have such questions because there is no information about "fixes in the current code and docs")
