Open Source Monitoring for AI Agents in Production
AgentOps is the infrastructure layer for deploying AI agents in production. Monitor, secure, and orchestrate your agents with one line of code.
You're running AI agents in production but you can't answer:
- "Is my agent drifting from expected behavior?"
- "How much are my LLM calls costing?"
- "Is someone trying to prompt inject my agent?"
- "Which agent is slowing down my system?"
import agentops
# One line to monitor your agent
agentops.init(api_key="your_key")
# Your agent code
@agentops.record_action
async def my_agent(user_input):
response = await llm.chat(user_input)
return response- 📊 Monitoring — Track latency, costs, token usage in real-time
- 🔍 Drift Detection — Alert when agent behavior changes
- 🛡️ Security — Detect prompt injection and data leaks
- 🔌 MCP Support — Native Model Context Protocol integration
- 📈 Dashboard — Web UI for monitoring all your agents
- 🏠 Self-Hosted — Run on-premise for privacy/compliance
pip install agentopsimport agentops
from openai import OpenAI
# Initialize monitoring
agentops.init(
api_key="your_api_key",
project_name="my_agent_project"
)
client = OpenAI()
# Your agent automatically monitored
@agentops.record_action(action_type="chat")
def my_agent(user_message):
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": user_message}]
)
return response.choices[0].message.content
# Run your agent
result = my_agent("Hello, how are you?")┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Your Agent │────▶│ AgentOps SDK │────▶│ AgentOps Cloud │
│ │ │ (This repo) │ │ (Optional) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ Local Dashboard│
│ (Open Source) │
└─────────────────┘
- Basic monitoring (latency, tokens, costs)
- Drift detection baseline
- Local dashboard
- Security scanning
- MCP integration
- Hosted cloud version
- Team collaboration
- Advanced analytics
- Enterprise SSO
- Multi-agent orchestration
- Auto-scaling
- Compliance reports
- Open Source — No vendor lock-in, inspect the code
- Local First — Run everything on-premise if needed
- MCP Native — Built for the Model Context Protocol standard
- Easy Integration — One decorator, zero refactoring
We welcome contributions! See CONTRIBUTING.md for details.
MIT License — see LICENSE for details.
Built with ❤️ for the AI agent community
Note: This is a beta release. APIs may change.