Description
Package
azure-ai-agentserver-langgraph version 1.0.0b12 (with azure-ai-agentserver-core 1.0.0b12)
Describe the bug
When a LangGraph agent hosted in Azure AI Foundry uses tools (via @tool decorated functions), the Foundry playground SSE stream never closes. The agent generates the correct response text, but the playground spinner keeps running and the user cannot send follow-up messages without clicking "Stop".
Root cause (from Application Insights traces)
The ResponseFunctionCallArgumentEventGenerator and ResponseOutputTextEventGenerator in the agentserver cannot process LangGraph's streaming AIMessageChunk objects that contain tool calls. Every chunk produces a warning and is skipped:
```
FunctionCallArgumentEventGenerator did not process message: content='' tool_calls=[{'name': 'check_hr_capacity', 'args': {}, 'id': 'call_p2WaEs1A...', 'type': 'tool_call'}]
Message can not be processed by current generator ResponseFunctionCallArgumentEventGenerator: <class 'langchain_core.messages.ai.AIMessageChunk'>
```
This happens for:
- Each tool call chunk (`tool_calls=[...]`)
- The finish chunk (`finish_reason='tool_calls'`)
- The usage metadata chunk
- The final response finish chunk (`finish_reason='stop'`)
Because the generators skip these chunks, the SSE stream never emits proper completion events and the connection stays open.
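To make the failure mode concrete, here is a minimal stdlib-only sketch (not the actual agentserver code) of a text-only event generator that skips any chunk without plain text content, the way the warnings above suggest. Chunks are modeled as plain dicts standing in for `AIMessageChunk`:

```python
# Illustrative sketch: a generator that only understands text chunks.
def text_events(chunks):
    """Yield text deltas; silently skip anything that is not plain text."""
    for chunk in chunks:
        if chunk.get("content") and not chunk.get("tool_calls"):
            yield {"type": "response.output_text.delta", "delta": chunk["content"]}
        # Tool-call chunks, finish chunks, and usage chunks fall through
        # without producing any event -- so no terminal event is emitted.

stream = [
    {"content": "", "tool_calls": [{"name": "check_hr_capacity", "args": {}}]},
    {"content": "", "finish_reason": "tool_calls"},
    {"content": "Done."},
    {"content": "", "finish_reason": "stop"},
]

events = list(text_events(stream))
# Only the single text chunk yields an event; both finish chunks are
# dropped, which is why the SSE connection is never closed.
```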
Secondary bug
There is also an async/sync mismatch when fetching conversation history:
```
File "azure/ai/agentserver/langgraph/models/response_api_default_converter.py", line 251, in _fetch_historical_items
    async for item in openai_client.conversations.items.list(conversation_id):
TypeError: 'async for' requires an object with __aiter__ method, got coroutine
```
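This error shape can be reproduced with the stdlib alone: a coroutine function that returns an async iterable cannot be fed directly to `async for`; it must be awaited first. Assuming the client call here is such a coroutine, the likely fix in `_fetch_historical_items` is the same one-line `await`:

```python
import asyncio

# `list_items` stands in for the client call: a coroutine function whose
# result (once awaited) is an async iterable.
async def list_items():
    async def gen():
        for i in range(3):
            yield i
    return gen()

async def broken():
    # Calling list_items() yields a coroutine, which has no __aiter__.
    async for item in list_items():  # TypeError, as in the traceback above
        pass

async def fixed():
    items = []
    async for item in await list_items():  # await first, then iterate
        items.append(item)
    return items

try:
    asyncio.run(broken())
except TypeError as e:
    print(e)  # 'async for' requires an object with __aiter__ method, got coroutine

print(asyncio.run(fixed()))  # [0, 1, 2]
```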
To reproduce
- Create a LangGraph agent with one or more `@tool`-decorated functions
- Wrap it with `from_langgraph(agent, credentials=DefaultAzureCredential())`
- Deploy to Azure AI Foundry as a hosted container agent
- Open the Foundry playground and send a message that triggers a tool call
- The response text appears, but the spinner never stops
Minimal agent code

agent.py:

```python
# Note: the imported create_agent is aliased so the local create_agent()
# below does not shadow it and call itself recursively.
from langchain.agents import create_agent as create_langchain_agent
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver


@tool
def get_info(query: str) -> str:
    """Get information."""
    return f"Result for: {query}"


def create_agent():
    model = init_chat_model("azure_openai:gpt-4.1", ...)
    return create_langchain_agent(model, tools=[get_info], checkpointer=MemorySaver())
```

app.py:

```python
from azure.ai.agentserver.langgraph import from_langgraph
from azure.identity import DefaultAzureCredential

from agent import create_agent

app = from_langgraph(create_agent(), credentials=DefaultAzureCredential())
```

Expected behavior
The SSE stream should properly emit function_call, function_call_output, and output_text events for LangGraph tool call chunks, and close the stream when the agent completes its response.
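For reference, a well-formed stream has to end with a terminal frame the client can act on. The sketch below shows minimal SSE framing with a closing event; the event names follow OpenAI Responses streaming conventions, and the exact set emitted by agentserver is an assumption here:

```python
import json

# Each SSE frame is "event: <type>\ndata: <json>\n\n"; the stream must end
# with a terminal event (e.g. response.completed) so the client can close.
def sse_frame(event_type, payload):
    return f"event: {event_type}\ndata: {json.dumps(payload)}\n\n"

frames = [
    sse_frame("response.output_text.delta", {"delta": "partial text"}),
    sse_frame("response.completed", {"status": "completed"}),
]
stream = "".join(frames)
# Without that final frame, the playground has no signal to stop the spinner.
```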
Actual behavior
- All tool call chunks are skipped by the event generators
- The SSE stream never closes
- The Foundry playground hangs with the spinner
Environment
- azure-ai-agentserver-langgraph==1.0.0b12
- azure-ai-agentserver-core==1.0.0b12
- langchain==1.2.10
- langgraph==1.0.9
- langchain-core==1.2.14
- Python 3.11
- Azure AI Foundry hosted container
Additional context
- Agents without tool calls (pure text responses) work correctly
- The tool functions themselves execute successfully — the issue is purely in the SSE stream conversion
- Binding `parallel_tool_calls=False` on the model does not prevent the issue (the model still returns multiple tool calls)
- This was tested with both single and multiple tool calls; the stream hangs in both cases