Conversation
…le graph and provider issue in agent.py
…h return api call for this in agent.py
…ent.py: it cannot accept `**self.llm_kwargs` in the `genai.Client()` constructor.
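The underlying issue is that `genai.Client()` only accepts client-level settings (API key, Vertex project/location, etc.), so request-level options such as sampling parameters must not be splatted into the constructor. A minimal sketch of one way to separate the two, assuming a mixed `llm_kwargs` dict (the `CLIENT_KWARGS` set and helper name here are illustrative, not Agentflow's actual code):

```python
# Hypothetical kwarg-splitting sketch: genai.Client() takes only
# client-level arguments; everything else belongs in the per-request config.
CLIENT_KWARGS = {"api_key", "vertexai", "project", "location", "http_options"}

def split_client_kwargs(llm_kwargs):
    """Split mixed kwargs into client-constructor args and request-config args."""
    client_args = {k: v for k, v in llm_kwargs.items() if k in CLIENT_KWARGS}
    request_args = {k: v for k, v in llm_kwargs.items() if k not in CLIENT_KWARGS}
    return client_args, request_args
```

With this split, `client_args` could be passed to `genai.Client(**client_args)` while `request_args` goes into the generate-content configuration instead.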
Pull request overview
This pull request improves Google GenAI integration in the Agentflow framework, focusing on proper handling of reasoning (thought) content in LLM responses, fixing model/provider splitting logic, and updating configuration. The changes enable the framework to correctly extract and process reasoning tokens from Google's Gemini models that support thinking/reasoning capabilities.
Changes:
- Fixed critical bug in Agent model/provider splitting logic that prevented proper model name extraction
- Enhanced reasoning content extraction to handle Google GenAI's thought parts, where `part.thought=True` with content in `part.text`
- Added `ThinkingConfig` to Google GenAI requests to enable reasoning content in responses
- Updated example to use the correct `google/` provider prefix and improved dotenv handling
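The thought-part handling above can be sketched as follows. The `Part` dataclass here is a stand-in mimicking the relevant shape of Google GenAI response parts (a `thought` flag plus `text`); the helper name is illustrative, not the converter's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Part:
    """Stand-in for a GenAI response part carrying a thought flag and text."""
    text: Optional[str] = None
    thought: bool = False

def split_parts(parts):
    """Separate reasoning (thought) content from regular text content."""
    reasoning, text = [], []
    for part in parts:
        if not part.text:
            continue
        # Route thought parts to reasoning so they are not duplicated as text.
        if part.thought:
            reasoning.append(part.text)
        else:
            text.append(part.text)
    return "\n".join(reasoning), "\n".join(text)
```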
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| examples/agent-class/graph.py | Updated model prefix from gemini/ to google/ for Agent class consistency; improved environment variable loading (with redundant logic); changed example query to weather-related |
| agentflow/graph/agent.py | Fixed model name assignment after provider/model split; removed invalid llm_kwargs from Google client initialization; added ThinkingConfig to enable reasoning content extraction |
| agentflow/adapters/llm/google_genai_converter.py | Enhanced reasoning token extraction from metadata; refactored part processing order to prevent content duplication; updated reasoning part handling to correctly extract content from part.text when part.thought is True |
- Added new linting rules for specific files in `pyproject.toml`.
- Updated tests for the Google GenAI converter to handle reasoning extraction correctly.
- Enhanced OpenAI converter tests with reasoning extraction and token usage details.
- Introduced new tests for the OpenAI Responses API converter, covering various response formats.
- Added tests for the reasoning tag extraction utilities to ensure proper parsing.
- Implemented agent API routing tests to verify the fallback mechanisms between the Responses and Chat Completions APIs.
- Updated Google Gemini integration tests to use the latest model version and improved message handling.
- Modified base converter tests to include new converter types and ensure enum consistency.
Codecov Report ❌ — Patch coverage is …
Pull request overview
Copilot reviewed 17 out of 17 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (1)
agentflow/adapters/llm/openai_converter.py:144
- The order of content blocks appears inconsistent with the Google converter. In the Google converter, ReasoningBlocks are added before TextBlocks (see google_genai_converter.py lines 141-146), but here TextBlocks are added before ReasoningBlocks. For consistency with the PR's stated goal of "processing reasoning parts before text parts", consider reordering these blocks to add ReasoningBlock first, then TextBlock.
```python
blocks = []
if content:
    blocks.append(TextBlock(text=content))
if reasoning_content:
    blocks.append(ReasoningBlock(summary=reasoning_content))
```
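The suggested reordering can be sketched as below. The `ReasoningBlock`/`TextBlock` dataclasses are simplified stand-ins for Agentflow's block types, and `build_blocks` is an illustrative helper, not the converter's real function:

```python
from dataclasses import dataclass

@dataclass
class ReasoningBlock:
    """Stand-in for Agentflow's reasoning content block."""
    summary: str

@dataclass
class TextBlock:
    """Stand-in for Agentflow's text content block."""
    text: str

def build_blocks(content, reasoning_content):
    """Append ReasoningBlock before TextBlock, matching the Google converter's order."""
    blocks = []
    if reasoning_content:
        blocks.append(ReasoningBlock(summary=reasoning_content))
    if content:
        blocks.append(TextBlock(text=content))
    return blocks
```

Keeping reasoning first across both converters means downstream consumers see blocks in a single, predictable order regardless of provider.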
This pull request introduces several improvements and fixes related to Google GenAI integration, especially around handling "reasoning" (thought) content in LLM responses, configuration, and example usage. The most significant changes are the improved extraction and processing of reasoning content, updates to model/provider handling, and enhancements to example scripts for better developer experience.
Reasoning Content Extraction & Processing:
- Extracts `thoughts_token_count` from the response metadata instead of hardcoding it to zero.

Model/Provider Handling & Configuration:
- Fixed `Agent` initialization to properly assign the model name after the provider/model split.
- Added `thinking_config` with `include_thoughts=True` for text output, enabling the model to return reasoning content in its response.

Example Script Improvements:
- Updated `graph.py` to load environment variables using `dotenv` and ensure `GEMINI_API_KEY` is set correctly.
- Changed the model string from `gemini/gemini-2.5-flash` to `google/gemini-2.5-flash` for consistency with provider/model conventions.