chore(core): remove sub-goals feature from LLM planning #1975
Conversation
…al logic from planning prompt

The model no longer needs to do task decomposition (sub-goals). This removes all goal/subGoal related logic from the planning prompt and its caller, keeping only the simpler instruction-based flow.
https://claude.ai/code/session_016KXax4JyZu2MKkA2XjRX6t
…ferences)

Keep the full thought description telling the model what to consider (the user's requirement, the current state, the next action), just without the sub-goal related parts.
https://claude.ai/code/session_016KXax4JyZu2MKkA2XjRX6t
❌ Deploy Preview for midscene failed. Why did it fail? →

Latest commit: d368cc4
Status: ✅ Deploy successful!
Preview URL: https://43e101e5.midscene.pages.dev
Branch Preview URL: https://claude-refactor-llm-planning.midscene.pages.dev
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d368cc4118
```typescript
const historicalLogsText = conversationHistory.historicalLogsToText();
const historicalLogsSection = historicalLogsText
  ? `\n\n${historicalLogsText}`
  : '';
```
Keep deepThink history from growing quadratically
This change makes deepThink sessions include historicalLogsToText() on every turn, while each turn also appends a new historical log entry; since previous user messages (which already contain the full log text) are kept in conversationHistory.snapshot(), the same log lines get repeatedly duplicated across rounds and prompt size grows roughly O(n²). In longer multi-step tasks this can push requests over model context limits or crowd out relevant screenshot/history context, whereas the prior deepThink path only carried compact sub-goal state.
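The duplication pattern the review describes can be sketched in a few lines. This is a hypothetical illustration of the growth, not Midscene's actual API; the names `logs`, `history`, and `buildUserMessage` are assumptions:

```typescript
// Illustrative sketch: each turn's user message embeds the full log
// text so far, and prior user messages are kept verbatim in history.
const logs: string[] = [];
const history: string[] = []; // prior user messages, replayed each request

function buildUserMessage(): string {
  // every turn embeds the full accumulated log text
  return `Historical logs:\n${logs.join('\n')}`;
}

let totalPromptChars = 0;
for (let turn = 1; turn <= 5; turn++) {
  logs.push(`turn ${turn}: did something`);
  history.push(buildUserMessage());
  // each log line reappears in every later message, so the total
  // character count across the replayed history grows roughly O(n²)
  totalPromptChars = history.reduce((n, m) => n + m.length, 0);
}
```

After five turns, the turn-1 log line appears in all five messages; with only compact per-turn state (the prior sub-goal approach), the replayed history would stay linear in the number of turns.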
Summary
This PR removes the sub-goals planning feature from the LLM planning system. The feature was previously controlled by the `deepThink` option and allowed the AI to break down tasks into multiple sub-goals with status tracking. This change simplifies the planning logic by removing all sub-goal related functionality.

Key Changes

- Removed the `parseSubGoalsFromXML` and `parseMarkFinishedIndexes` utility functions that were used to extract sub-goal information from AI responses
- Removed parsing of the `<update-plan-content>` and `<mark-sub-goal-done>` tags from AI responses in `parseXMLPlanningResponse`
- Removed sub-goal handling from the `plan()` function, including: `conversationHistory.setSubGoals()`, `conversationHistory.markSubGoalFinished()`, `conversationHistory.appendSubGoalLog()`, and `conversationHistory.markAllSubGoalsFinished()`
- Now uses `appendHistoricalLog()` for all execution logs, regardless of the `deepThink` setting
- Removed the `<update-plan-content>` and `<mark-sub-goal-done>` tag descriptions
- Renamed the `includeSubGoals` parameter to `deepThink` in `systemPromptToTaskPlanning` for clarity

Implementation Details
The `deepThink` option is still passed through but no longer affects planning behavior; it is kept for backward compatibility and does not change the system prompt or response parsing. The planning system now operates in a single mode that uses historical execution logs for context instead of maintaining a hierarchical sub-goal structure.

https://claude.ai/code/session_016KXax4JyZu2MKkA2XjRX6t
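As a rough sketch of that single-mode flow (the `PlanningContext` shape and `buildPlanningPrompt` name here are assumptions for illustration, not Midscene's actual API):

```typescript
// Hedged sketch: one prompt shape regardless of deepThink.
interface PlanningContext {
  instruction: string;
  historicalLogsToText(): string; // joined execution-log lines, or ''
}

function buildPlanningPrompt(ctx: PlanningContext): string {
  const historicalLogsText = ctx.historicalLogsToText();
  // only append the logs section when there are logs, matching the diff
  const historicalLogsSection = historicalLogsText
    ? `\n\n${historicalLogsText}`
    : '';
  return `Task: ${ctx.instruction}${historicalLogsSection}`;
}
```

With no sub-goal branch, the same prompt builder serves every call; `deepThink` can be accepted and ignored without changing the output.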