
azure-ai-agentserver-langgraph: SSE stream hangs when LangGraph agent uses tool calls #45282

@Bartcardi82

Description


Package

azure-ai-agentserver-langgraph version 1.0.0b12 (with azure-ai-agentserver-core 1.0.0b12)

Describe the bug

When a LangGraph agent hosted in Azure AI Foundry uses tools (via @tool-decorated functions), the Foundry playground SSE stream never closes. The agent generates the correct response text, but the playground spinner keeps running, and the user cannot send a follow-up message without clicking "Stop".

Root cause (from Application Insights traces)

The ResponseFunctionCallArgumentEventGenerator and ResponseOutputTextEventGenerator in the agentserver cannot process LangGraph's streaming AIMessageChunk objects that contain tool calls. Every chunk produces a warning and is skipped:

FunctionCallArgumentEventGenerator did not process message: content='' tool_calls=[{'name': 'check_hr_capacity', 'args': {}, 'id': 'call_p2WaEs1A...', 'type': 'tool_call'}]

Message can not be processed by current generator ResponseFunctionCallArgumentEventGenerator: <class 'langchain_core.messages.ai.AIMessageChunk'>

This happens for:

  1. Each tool call chunk (tool_calls=[...])
  2. The finish chunk (finish_reason='tool_calls')
  3. The usage metadata chunk
  4. The final response finish chunk (finish_reason='stop')

Because the generators skip these chunks, the SSE stream never emits proper completion events and the connection stays open.

Secondary bug

There is also an async/sync mismatch when fetching conversation history:

File "azure/ai/agentserver/langgraph/models/response_api_default_converter.py", line 251, in _fetch_historical_items
    async for item in openai_client.conversations.items.list(conversation_id):
TypeError: 'async for' requires an object with __aiter__ method, got coroutine
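The mismatch reproduces outside the SDK with a small sketch. The `Page` class and `list_items` coroutine below are hypothetical stand-ins, not the actual OpenAI client API: iterating the coroutine itself raises the reported `TypeError`, while awaiting it first yields an object that supports `async for`.

```python
import asyncio

class Page:
    """Stand-in for a paginated listing object that supports ``async for``."""
    def __init__(self, items):
        self._items = list(items)
    def __aiter__(self):
        self._iter = iter(self._items)
        return self
    async def __anext__(self):
        try:
            return next(self._iter)
        except StopIteration:
            raise StopAsyncIteration

async def list_items(conversation_id):
    # A coroutine that *resolves to* the iterable page; it must be
    # awaited before iteration, mirroring the bug in the converter.
    return Page(["item-1", "item-2"])

async def buggy():
    # 'async for' over the coroutine itself reproduces the reported error
    async for item in list_items("conv_123"):
        pass

async def fixed():
    # Awaiting first yields the page, which is then safely iterable
    return [item async for item in await list_items("conv_123")]

try:
    asyncio.run(buggy())
    error = None
except TypeError as exc:
    error = str(exc)

items = asyncio.run(fixed())
print(error)
print(items)
```

The fix in `_fetch_historical_items` would follow the second pattern: await the listing call (or use a client whose list method returns an async iterator directly) before entering the `async for` loop.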

To reproduce

  1. Create a LangGraph agent with one or more @tool-decorated functions
  2. Wrap it with from_langgraph(agent, credentials=DefaultAzureCredential())
  3. Deploy to Azure AI Foundry as a hosted container agent
  4. Open the Foundry playground and send a message that triggers a tool call
  5. The response text appears but the spinner never stops

Minimal agent code

from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver

@tool
def get_info(query: str) -> str:
    """Get information."""
    return f"Result for: {query}"

def build_agent():
    # Named build_agent so it does not shadow langchain's create_agent
    model = init_chat_model("azure_openai:gpt-4.1", ...)
    return create_agent(model, tools=[get_info], checkpointer=MemorySaver())

app.py

from azure.ai.agentserver.langgraph import from_langgraph
from azure.identity import DefaultAzureCredential
from agent import build_agent

app = from_langgraph(build_agent(), credentials=DefaultAzureCredential())

Expected behavior

The SSE stream should properly emit function_call, function_call_output, and output_text events for LangGraph tool call chunks, and close the stream when the agent completes its response.
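For illustration, a single tool-call turn would be expected to produce a sequence roughly like the following. The event names follow the OpenAI Responses API streaming conventions; the exact names emitted by agentserver are an assumption:

```
event: response.output_item.added              (function_call item)
event: response.function_call_arguments.delta
event: response.function_call_arguments.done
event: response.output_text.delta              (final answer text)
event: response.output_text.done
event: response.completed                      <- closes the SSE stream
```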

Actual behavior

  • All tool call chunks are skipped by the event generators
  • The SSE stream never closes
  • The Foundry playground hangs with the spinner

Environment

  • azure-ai-agentserver-langgraph==1.0.0b12
  • azure-ai-agentserver-core==1.0.0b12
  • langchain==1.2.10
  • langgraph==1.0.9
  • langchain-core==1.2.14
  • Python 3.11
  • Azure AI Foundry hosted container

Additional context

  • Agents without tool calls (pure text responses) work correctly
  • The tool functions themselves execute successfully — the issue is purely in the SSE stream conversion
  • parallel_tool_calls=False binding on the model does not prevent the issue (model still returns multiple tool calls)
  • This was tested with both single and multiple tool calls — the stream hangs in both cases

Labels

customer-reported, needs-triage, question
