fix(llmobs): remove invocations of LLMObs.enable() from supported integrations
#9226
Conversation
Datadog Report
Branch report: ✅ 0 Failed, 2264 Passed, 722 Skipped, 1h 7m 44.72s total duration (30m 48.47s time saved)
I think a more maintainable way to fix this is to remove any LLMObs.enable() mentions from the integration patch code, since the LLMObs.enable() function will handle the actual openai/langchain/bedrock patching once #9172 is merged.
@Yun-Kim Good point. I can make the change and, once those PRs are merged, check that the tests still pass.
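A minimal sketch of the direction described in this exchange, assuming only the public ddtrace.patch() and LLMObs.enable() entry points; the helper name below is illustrative and not part of the actual change:

```python
# Hypothetical sketch, not the actual dd-trace-py source: instead of each
# integration's patch code calling LLMObs.enable(), LLMObs.enable() becomes
# the single place where the supported LLM integrations get turned on once
# #9172 lands.
from ddtrace import patch
from ddtrace.llmobs import LLMObs


def _enable_llm_integrations():
    # Assumed helper name; patches the LLM integrations from one place.
    # Bedrock is traced through the botocore integration.
    patch(openai=True, langchain=True, botocore=True)


# Application code only needs to enable LLMObs; the integrations' own patch
# code no longer needs to know about LLMObs at all.
_enable_llm_integrations()
LLMObs.enable()
```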
Closing in favor of #9172, which will include this change and relevant test changes.
Changes Made
Removing all mentions of LLMObs.enable() from integration patching, as that patching logic was moved to LLMObs.enable() itself. This change is reflected as necessary in tests as well.
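To make the removal concrete, here is a hedged before/after sketch; the function and helper names (patch_before, patch_after, _wrap_openai_client) are hypothetical and used only for illustration, not the exact dd-trace-py source:

```python
# Illustrative sketch of the kind of change this PR describes; the module
# layout and helpers are assumptions, not the real integration patch code.
from ddtrace.llmobs import LLMObs


def _wrap_openai_client():
    # Stand-in for the real APM wrapping logic so the sketch is self-contained.
    pass


def patch_before():
    # Before: the integration's patch() both wrapped the client for APM
    # tracing and enabled LLMObs itself.
    _wrap_openai_client()
    LLMObs.enable()  # the kind of call this PR removes


def patch_after():
    # After: patch() only handles APM tracing; LLMObs.enable() owns the
    # LLMObs side (and, per #9172, the LLM integration patching too).
    _wrap_openai_client()
```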
Checklist
changelog/no-changelog is set
@DataDog/apm-tees
Reviewer Checklist