feat: Introduce ManagedAgent and AgentRunner implementations #110
jsonbailey wants to merge 8 commits into main from
Conversation
keelerm84
left a comment
Looks like bugbot has some good feedback on this one.
feat: Add OpenAIAgentRunner with agentic tool-calling loop
feat: Add LangChainAgentRunner with agentic tool-calling loop
feat: Add OpenAIRunnerFactory.create_agent(config, tools) -> OpenAIAgentRunner
feat: Add LangChainRunnerFactory.create_agent(config, tools) -> LangChainAgentRunner
feat: Add ManagedAgent wrapper holding AgentRunner and LDAIConfigTracker
feat: Add LDAIClient.create_agent() returning ManagedAgent
…ider helper tests
feat: add TestGetAIUsageFromResponse and TestGetToolCallsFromResponse test coverage for LangChainHelper
feat: add TestGetAIUsageFromResponse test coverage for OpenAIHelper
fix: update ManagedAgent.invoke to use track_metrics_of_async
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```python
total = getattr(usage, 'total_tokens', None) or 0
inp = getattr(usage, 'input_tokens', None) or getattr(usage, 'prompt_tokens', None) or 0
out = getattr(usage, 'output_tokens', None) or getattr(usage, 'completion_tokens', None) or 0
```
Falsy or-chain conflates zero tokens with missing attribute
Low Severity
The or-chain in get_ai_usage_from_response uses Python truthiness to fall through attribute lookups: getattr(usage, 'input_tokens', None) or getattr(usage, 'prompt_tokens', None) or 0. Because 0 is falsy, a legitimate input_tokens = 0 is treated the same as a missing attribute, causing the chain to fall through and check prompt_tokens. If a future SDK version exposes both field names on the same usage object, a zero input_tokens could incorrectly resolve to a non-zero prompt_tokens. Using is None checks instead of or would correctly distinguish "zero" from "absent."


feat: Add OpenAIAgentRunner with agentic tool-calling loop
feat: Add LangChainAgentRunner with agentic tool-calling loop
feat: Add OpenAIRunnerFactory.create_agent(config, tools) -> OpenAIAgentRunner
feat: Add LangChainRunnerFactory.create_agent(config, tools) -> LangChainAgentRunner
feat: Add ManagedAgent wrapper holding AgentRunner and LDAIConfigTracker
feat: Add LDAIClient.create_agent() returning ManagedAgent
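A rough sketch of the shape these commits describe: a `ManagedAgent`-style wrapper pairing a runner with a tracker. The stub classes below (`StubTracker`, `ManagedAgentSketch`) are illustrative stand-ins, not the SDK's actual `AgentRunner` or `LDAIConfigTracker` types:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class StubTracker:
    """Stand-in for LDAIConfigTracker: records one entry per invocation."""
    durations: List[int]

    def track_duration(self, ms: int) -> None:
        self.durations.append(ms)

@dataclass
class ManagedAgentSketch:
    """Pairs a runner callable with a tracker, mirroring the ManagedAgent idea."""
    runner: Callable[[str], Any]
    tracker: StubTracker

    def invoke(self, prompt: str) -> Any:
        # The real implementation wraps this in track_metrics_of_async.
        result = self.runner(prompt)
        self.tracker.track_duration(0)
        return result

tracker = StubTracker(durations=[])
agent = ManagedAgentSketch(runner=lambda p: p.upper(), tracker=tracker)
print(agent.invoke("hello"))  # HELLO
```

The point of the wrapper is that callers get tracking for free on every `invoke` without touching the provider-specific runner.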
Requirements
Related issues
Provide links to any issues in this repository or elsewhere relating to this pull request.
Describe the solution you've provided
Provide a clear and concise description of what you expect to happen.
Describe alternatives you've considered
Provide a clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context about the pull request here.
Note
Medium Risk
Introduces a new managed agent execution path (`LDAIClient.create_agent`) and new provider-side agent runners, which changes how tool definitions, instructions, and token usage are interpreted and tracked. Risk is mainly around compatibility with provider SDKs (langchain/openai-agents) and correct tool/usage wiring rather than data/security concerns.
Overview
Adds first-class managed agent support to the server AI SDK via `ManagedAgent` and `LDAIClient.create_agent()`, enabling tracked agent-style invocations (instructions + tool calling) distinct from model and agent-graph flows.
Implements provider-side `create_agent` support for OpenAI and LangChain by introducing `OpenAIAgentRunner` (backed by `openai-agents`) and `LangChainAgentRunner` (backed by `langchain.agents.create_agent`), including tool binding/validation and aggregating token usage from agent runs.
Refactors token usage extraction to handle more response shapes (OpenAI chat completions vs. agents `RunResult`; LangChain `usage_metadata` vs. `response_metadata`) and centralizes native OpenAI tool mapping in `openai_helper`; updates tests accordingly and exports the new runners from provider packages.
Written by Cursor Bugbot for commit 268b669.
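To illustrate the "more response shapes" refactor, here is a hedged sketch of normalizing token usage across two of the shapes the overview mentions (a chat-completions-style `usage` dict vs. a LangChain-style `usage_metadata` dict). The field names are the providers' conventional ones, but this is not the SDK's actual extraction code:

```python
def normalize_usage(payload: dict) -> dict:
    """Map differing provider usage payloads onto one {input, output, total} shape."""
    if 'usage_metadata' in payload:
        # LangChain-style message metadata
        meta = payload['usage_metadata']
        inp = meta.get('input_tokens', 0)
        out = meta.get('output_tokens', 0)
    else:
        # OpenAI chat-completions-style usage
        meta = payload.get('usage', {})
        inp = meta.get('prompt_tokens', 0)
        out = meta.get('completion_tokens', 0)
    return {'input': inp, 'output': out, 'total': inp + out}

print(normalize_usage({'usage': {'prompt_tokens': 10, 'completion_tokens': 5}}))
# {'input': 10, 'output': 5, 'total': 15}
print(normalize_usage({'usage_metadata': {'input_tokens': 3, 'output_tokens': 4}}))
# {'input': 3, 'output': 4, 'total': 7}
```

Centralizing this kind of branching in one helper (as the PR does in `openai_helper` and the LangChain helper) keeps the agent runners free of per-provider field-name logic.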