
Python: Cardinality difference between C# and Python on GetStreamingChatMessageContentsAsync and complete_chat_stream #6203

Open
matthewbolanos opened this issue May 12, 2024 · 1 comment
Labels: bug (Something isn't working), .NET (Issue or Pull requests regarding .NET code), python (Pull requests for the Python Semantic Kernel)

Comments


matthewbolanos (Member) commented May 12, 2024

In C#, GetStreamingChatMessageContentsAsync returns an IAsyncEnumerable&lt;StreamingChatMessageContent&gt;:

// Invoke the chat completion service
IChatCompletionService chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
var results = chatCompletionService.GetStreamingChatMessageContentsAsync(
    chatHistory: chatHistory,
    executionSettings: new OpenAIPromptExecutionSettings() {
        // Allows the AI to automatically choose and invoke functions from the kernel's plugins
        ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
    },
    kernel: kernel
);

// Return the results as a stream
var completeMessage = new StringBuilder();
await foreach (var result in results)
{
    completeMessage.Append(result);

    // Send the message events to the client
    var events = assistantEventStreamUtility.CreateMessageEvent(run.Id, result);
    foreach (var messageEvent in events)
    {
        yield return messageEvent;
    }
}

Python, however, returns an AsyncGenerator[list[StreamingChatMessageContent], Any], which makes the code less obvious to write because you always have to index the first element with [0].

# Invoke the chat completion service
chatCompletion: ChatCompletionClientBase = kernel.get_service(type=ChatCompletionClientBase)
results = chatCompletion.complete_chat_stream(
    chat_history=history,
    settings=OpenAIChatPromptExecutionSettings(
        # function_call_behavior=FunctionCallBehavior()
    ),
    kernel=kernel
)

# Return the results as a stream
completeMessage = ""
async for result in results:
    completeMessage += result[0].content

    # Send the message events to the client
    events = event_stream_utility.create_message_event(run.id, result[0])
    for event in events:
        yield event
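One way to smooth over the difference today is a thin adapter that unwraps the single-response case so the Python consumer reads like the C# one. A minimal, self-contained sketch — the Semantic Kernel types are stubbed with plain strings here (a hypothetical `list_chunk_stream` stands in for `complete_chat_stream`, which needs real credentials):

```python
import asyncio
from typing import AsyncGenerator, AsyncIterable

async def list_chunk_stream() -> AsyncGenerator[list[str], None]:
    # Stand-in for complete_chat_stream: each yielded item is a one-element
    # list, mirroring the list[StreamingChatMessageContent] shape above.
    for chunk in ["Hello", ", ", "world"]:
        yield [chunk]

async def first_choice(stream: AsyncIterable[list[str]]) -> AsyncGenerator[str, None]:
    # Unwrap the single-response case so callers iterate plain chunks,
    # matching the C# await-foreach ergonomics.
    async for chunk_list in stream:
        yield chunk_list[0]

async def main() -> str:
    complete_message = ""
    async for chunk in first_choice(list_chunk_stream()):
        complete_message += chunk
    return complete_message

print(asyncio.run(main()))  # Hello, world
```

The adapter keeps the list-returning API intact for callers that do request multiple responses, while single-response callers lose the repetitive `[0]`.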
@matthewbolanos matthewbolanos added the bug Something isn't working label May 12, 2024
@markwallace-microsoft markwallace-microsoft added .NET Issue or Pull requests regarding .NET code python Pull requests for the Python Semantic Kernel triage labels May 12, 2024
@github-actions github-actions bot changed the title Cardinality difference between C# and Python on StreamingChatMessageContent .Net: Cardinality difference between C# and Python on StreamingChatMessageContent May 12, 2024
@github-actions github-actions bot changed the title .Net: Cardinality difference between C# and Python on StreamingChatMessageContent Python: Cardinality difference between C# and Python on StreamingChatMessageContent May 12, 2024
@matthewbolanos matthewbolanos changed the title Python: Cardinality difference between C# and Python on StreamingChatMessageContent Python: Cardinality difference between C# and Python on GetStreamingChatMessageContentsAsync and complete_chat_stream May 12, 2024
eavanvalkenburg (Member) commented:

@matthewbolanos how does dotnet deal with num_responses > 1 for this? That is the reason everything returns a list!
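For context on that question: the list shape exists because, when more than one response is requested, a single streamed chunk carries one partial message per choice. A self-contained sketch of how a consumer might accumulate n separate responses (mock data, not the actual Semantic Kernel API; `multi_choice_stream` is hypothetical):

```python
import asyncio
from typing import AsyncGenerator

async def multi_choice_stream(n: int) -> AsyncGenerator[list[str], None]:
    # Stand-in for a streaming call with number_of_responses=n:
    # each yielded list holds one partial message per requested choice.
    chunks = [["A1", "B1"], ["A2", "B2"]]
    for chunk in chunks:
        yield chunk[:n]

async def collect(n: int) -> list[str]:
    # Accumulate each choice separately by its position in the list.
    messages = [""] * n
    async for chunk_list in multi_choice_stream(n):
        for i, part in enumerate(chunk_list):
            messages[i] += part
    return messages

print(asyncio.run(collect(2)))  # ['A1A2', 'B1B2']
print(asyncio.run(collect(1)))  # ['A1A2']
```

With n = 1 the list is always length one, which is exactly the case where the extra `[0]` feels like noise in the Python API.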
