Open source Python SDK for agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks like CrewAI, Langchain, and Autogen

AI agents suck. We're fixing that.


๐Ÿฆ Twitter ย ย โ€ขย ย  ๐Ÿ“ข Discord ย ย โ€ขย ย  ๐Ÿ–‡๏ธ AgentOps ย ย โ€ขย ย  ๐Ÿ“™ Documentation

AgentOps 🖇️

License: MIT

AgentOps helps developers build, evaluate, and monitor AI agents, providing the tools to take an agent from prototype to production.

- 📊 Replay Analytics and Debugging: Step-by-step agent execution graphs
- 💸 LLM Cost Management: Track spend with LLM foundation model providers
- 🧪 Agent Benchmarking: Test your agents against 1,000+ evals
- 🔐 Compliance and Security: Detect common prompt injection and data exfiltration exploits
- 🤝 Framework Integrations: Easily plugs in with frameworks like CrewAI and LangChain

Quick Start ⌨️

pip install agentops

Session replays in 3 lines of code

Initialize the AgentOps client and automatically get analytics on every LLM call.

import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

...
# (optional: record specific functions)
@agentops.record_function('sample function being recorded')
def sample_function(...):
    ...

# End of program
agentops.end_session('Success')
# Woohoo, you're done 🎉

All your sessions are available on the AgentOps dashboard. Refer to our API documentation for detailed instructions.

Agent Dashboard
Session Analytics
Session Replays

Integrations 🦾

CrewAI 🛶

Build Crew agents with observability in only 2 lines of code. Simply set an AGENTOPS_API_KEY environment variable, and your crews will get automatic monitoring on the AgentOps dashboard.

AgentOps is integrated with CrewAI on a pre-release fork. Install crew with

pip install git+https://github.com/AgentOps-AI/crewAI.git@main
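The environment-variable setup above can be sketched in a few lines. This is a minimal illustration, not a full crew definition: the key value is a placeholder, and the commented-out `Agent`/`Task`/`Crew` names assume the forked CrewAI package installed above.

```python
import os

# AgentOps reads this environment variable at init time; with it set,
# instrumented CrewAI runs report to the AgentOps dashboard automatically.
# The value below is a placeholder -- substitute your real AgentOps API key.
os.environ.setdefault("AGENTOPS_API_KEY", "your-agentops-api-key")

# From here, build and run your crew as usual, e.g. (with the fork above):
#   from crewai import Agent, Task, Crew
#   crew = Crew(agents=[...], tasks=[...])
#   crew.kickoff()
```

Exporting the variable in your shell profile works just as well; nothing else in your crew code needs to change.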

Langchain 🦜🔗

AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:

Installation
pip install agentops[langchain]

Then import the handler and attach it to your LLM and agent:

import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

# `tools` is your list of Langchain tools, defined elsewhere
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler], # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)

Check out the Langchain Examples Notebook for more details including Async handlers.

Cohere ⌨️

First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, please message us on Discord!

Installation
pip install cohere
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)
co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)

print(chat)

agentops.end_session('Success')
The same works for streaming with chat_stream:

import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')

LlamaIndex 🦙

(Coming Soon)

Time travel debugging 🔮

(coming soon!)

Agent Arena 🥊

(coming soon!)

Evaluations Roadmap 🧭

| Platform | Dashboard | Evals |
| --- | --- | --- |
| ✅ Python SDK | ✅ Multi-session and cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |

Debugging Roadmap 🧭

| Performance testing | Environments | LLM Testing | Reasoning and execution testing |
| --- | --- | --- | --- |
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |

Why AgentOps? 🤔

Our mission is to bring your agent from prototype to production.

Agent developers often work with little to no visibility into agent testing performance. This means their agents never leave the lab. We're changing that.

AgentOps is the easiest way to evaluate, grade, and test agents. Is there a feature you'd like to see AgentOps cover? Just raise it in the issues tab, and we'll work on adding it to the roadmap.