
Capability to debug model input/output #55

Open
weaversam8 opened this issue Nov 7, 2023 · 1 comment

Comments

@weaversam8

I haven't found an easy way within the library to log the messages sent to and received from the model. This level of logging would be very helpful when debugging prompts and verifying that the model's response serializes correctly to a Pydantic model.

@jackmpcollins
Owner

Hi @weaversam8, thanks for the issue. Here are a few things that might be useful to you.

  • Prompt-functions expose a format method that takes the same arguments as the function itself but returns the string prompt that will be sent to the model.

      from magentic import prompt

      @prompt("Say hello {n} times")
      def say_hello(n: int) -> str:
          ...

      say_hello.format(2)
      # 'Say hello 2 times'
  • When magentic fails to parse the model output, the exception traceback contains the pydantic validation error, which shows which part of the model output caused the issue. An example showing the relevant part of the traceback is Update prompt to return NULL, but GPT model won't return NULL. #61 (comment)

  • openai uses the standard Python logging module, so you can enable debug logs. These show the request that gets sent to OpenAI.

      import logging

      from magentic import prompt

      logging.basicConfig(level=logging.DEBUG)

      def plus(a: int, b: int) -> int:
          return a + b

      @prompt(
          "Say hello {n} times",
          functions=[plus],
      )
      def say_hello(n: int) -> str:
          ...

      say_hello(2)
      # ...
      # DEBUG:openai._base_client:Request options: {'method': 'post', 'url': '/chat/completions', 'files': None, 'json_data': {'messages': [{'role': 'user', 'content': 'Say hello 2 times'}], 'model': 'gpt-3.5-turbo', 'functions': [{'name': 'plus', 'parameters': {'properties': {'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}, 'required': ['a', 'b'], 'type': 'object'}}], 'max_tokens': None, 'stream': True, 'temperature': None}}
      # ...
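To illustrate the validation-error point above, here is a minimal sketch (assuming pydantic v2; the `Superhero` model is hypothetical, not part of magentic) of the kind of pydantic error that appears in the traceback when model output doesn't match the expected schema:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical output model, for illustration only
class Superhero(BaseModel):
    name: str
    age: int

try:
    # "not a number" cannot be coerced to int, so validation fails
    Superhero.model_validate({"name": "Zorro", "age": "not a number"})
except ValidationError as error:
    # The error names the offending field ("age") and the reason,
    # the same detail that surfaces in magentic's traceback.
    print(error)
```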

Please let me know if there's something you think would be useful in addition to these. It seems like it would be useful for magentic to have its own debug logging.
