
This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →


[Question] How to count used tokens? #442

Closed
rborosak opened this issue Apr 28, 2024 · 10 comments

Comments

@rborosak

Context / Scenario

I'm using Kernel memory as a plugin with Semantic Kernel.
Before each prompt I count the used tokens from chat history.

Question

I would like to know how I can get the count of the tokens used by the Kernel Memory plugin.

@rborosak rborosak added the question Further information is requested label Apr 28, 2024
@dluc
Collaborator

dluc commented Apr 30, 2024

The number of tokens is logged in the application logs. If you need to count upfront, you can use the provided tokenizers, e.g. we just added tiktoken counters that cover GPT2/GPT3/GPT4 models, and the repo also contains a LLama tokenizer for LLama-compatible models.
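The KM tokenizers are .NET classes, so they can't be shown directly here; as an illustration of the "count the chat history upfront, before each prompt" pattern, here is a minimal Python sketch. The function names and the 4-characters-per-token heuristic are assumptions for illustration only: real code would plug in a model-specific tokenizer (e.g. a tiktoken encoding for GPT models) to get accurate counts.

```python
# Sketch of counting chat-history tokens before each prompt.
# `count_tokens` is a stand-in: substitute a real model-specific
# tokenizer for accurate counts. The len // 4 fallback is only a
# rough English-text heuristic, not a real token count.

def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude chars-per-token approximation

def history_token_count(messages: list[dict]) -> int:
    # Sum the (approximate) token counts of every message's content.
    return sum(count_tokens(m["content"]) for m in messages)

history = [
    {"role": "user", "content": "How do I count used tokens?"},
    {"role": "assistant", "content": "Use the provided tokenizers."},
]
print(history_token_count(history))
```

Swapping `count_tokens` for a real tokenizer is the only change needed to make the budget check accurate for a given model.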

@rborosak
Author

@dluc where can I find an example?
Thank you

@dluc
Collaborator

dluc commented May 2, 2024

@rborosak for which model and/or tokenizer?

@gonzalocabo

@dluc For my implementation, I need to track the tokens used by the LLM. It's a web application: I need to record user info and tokens used so that I can bill users after a period.

Reviewing the code, the only approach is to use the Logger, which counts the tokens with the Tokenizer. Could you implement a Meter and emit a metric, or maybe raise an event?
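Until such a metric exists in the library, one way to get per-user billing data is to wrap each LLM call and accumulate token counts yourself. A minimal sketch of that accumulate-and-report pattern (the `TokenMeter` class and its method names are hypothetical, not a KM API):

```python
# Hypothetical per-user token meter for billing; not a KM API,
# just the accumulate-and-report pattern described above.
from collections import defaultdict

class TokenMeter:
    def __init__(self) -> None:
        self._usage: dict[str, int] = defaultdict(int)

    def record(self, user_id: str, tokens: int) -> None:
        # Called after each LLM request with the tokens it consumed.
        self._usage[user_id] += tokens

    def usage(self, user_id: str) -> int:
        # Total tokens consumed by a user over the billing period.
        return self._usage[user_id]

meter = TokenMeter()
meter.record("alice", 120)
meter.record("alice", 80)
print(meter.usage("alice"))  # 200
```

In a .NET web app the same idea could be expressed with `System.Diagnostics.Metrics`, tagging each measurement with a user identifier, so an observability pipeline aggregates the counts instead of application code.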

@dluc
Collaborator

dluc commented May 9, 2024

I'll see if there's an easy approach that's not too expensive to develop. Can't promise, though. We always welcome PRs, or draft PRs to kickstart the process if that might help.

@luismanez
Contributor

SemanticKernel has the concept of FunctionFilters, where you can do things like that, so I guess KM should follow the same approach... which actually makes me wonder if KernelMemory should drop the AskAsync functionality, being "just" a Memory plugin (SearchAsync) for SK + all the Indexing capabilities... (don't kill me, just thinking out loud 😄 )

@dluc
Collaborator

dluc commented May 9, 2024

KernelMemory should drop the AskAsync functionality, being "just" a Memory plugin (SearchAsync) for SK + all the Indexing capabilities... (don't kill me, just thinking out loud 😄 )

It's actually the other way around. SK memory plugins need DB connectors to talk to storage. AskAsync is built on those connectors. KM connectors are an evolution of SK connectors, out of the necessity of:

  1. supporting Security Filters
  2. making Embedding Generation optional, for storage solutions like Azure AI Search, Postgres and Chroma that can generate embeddings internally
  3. allowing Hybrid Search.

KM paved the way on the research side, so that these features can land also in SK ;-)

Anyway, if you're looking for a plugin here's KM plugin for SK: https://github.com/microsoft/kernel-memory/tree/main/clients/dotnet/SemanticKernelPlugin

@luismanez
Contributor

yeah, don't get me wrong, I love KM and we're using it in PROD (would be interesting for a case study??). That said, the AskAsync method offers fewer possibilities than if the call to the model is done by SK: calling other plugins, handlebars templates, streaming...

@dluc
Collaborator

dluc commented May 10, 2024

yeah, don't get me wrong, I love KM and we're using it in PROD (would be interesting for a case study??). That said, the AskAsync method offers fewer possibilities than if the call to the model is done by SK: calling other plugins, handlebars templates, streaming...

(feedback appreciated, in both ways, no worries :-))

I wouldn't compare KM with an agent or a planner, or similar features based on function calling, if that makes sense. For instance, when using "function" calling, there are multiple functions that the LLM can choose from, and "Ask" is one of those functions.

Looking at SK memory classes, you can see there's a "Search" function, which can be used to put relevant information into a planner/agent context. Then one needs to create another function to leverage that context to answer questions or execute some actions, which IMO is out of "memory" scope.

Looking at KM, we could (would like to) extend the Ask method to do intent detection and decide how to process the user question, e.g. whether to search for relevant records or to process an entire document without the need for "search". However, in terms of "Memory" the scope of actions should be limited to retrieving information -- at least that's the idea of the primary API, and one can leverage the underlying orchestration to do more :-)

@dluc
Collaborator

dluc commented Jun 4, 2024

Please feel free to use #532 to vote for this feature.

@microsoft microsoft locked and limited conversation to collaborators Jun 4, 2024
@dluc dluc converted this issue into discussion #538 Jun 4, 2024
@dluc dluc added discussion and removed question Further information is requested labels Jun 4, 2024

