Improve AI context quality for better LLM reasoning #164
Open
bajman wants to merge 2 commits into dev-chat:master from
Conversation
LLM responses containing standard markdown (### headers, **bold**, * bullet lists) were rendering as raw text in Slack, because Slack uses its own mrkdwn format and the existing markdownToSlackMrkdwn() converter was both incomplete and not applied on every code path. This PR adopts Slack's newer markdown block type, which translates standard markdown server-side, and applies it across all four AI text paths: /ai/text, /ai/prompt-with-history, @moonbeam, and redeployMoonbeam. The manual converter and the getChunks utility are removed. Also upgrades @slack/web-api v6 → v7.15.0, with fixes for the associated breaking changes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
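A minimal sketch of the new approach: instead of converting markdown to mrkdwn by hand, the message payload carries a `markdown` block and lets Slack render it. The payload shape is assumed from Slack's Block Kit documentation; `buildMarkdownMessage` is a hypothetical helper name, and the returned object would be passed to `chat.postMessage` on a v7 `WebClient`.

```typescript
// Hypothetical helper: build a chat.postMessage payload using Slack's
// `markdown` block type, which renders standard markdown (### headers,
// **bold**, * lists) server-side. This replaces the old manual
// markdownToSlackMrkdwn() conversion described above.
interface MarkdownBlock {
  type: "markdown";
  text: string;
}

function buildMarkdownMessage(channel: string, llmText: string) {
  return {
    channel,
    text: llmText, // plain-text fallback shown in notifications
    blocks: [{ type: "markdown", text: llmText } satisfies MarkdownBlock],
  };
}
```

With `@slack/web-api` v7, the result would be sent as `await client.chat.postMessage(buildMarkdownMessage(channelId, reply))`.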
LLM responses were degraded by poor context assembly:

- User mentions were passed as raw Slack IDs (<@u123abc>), leaving the model unable to identify who was being referenced.
- With no channel context, the model couldn't calibrate tone for #politics vs #dev.
- /prompt-with-history embedded conversation history in the system instructions rather than the user input, violating the model's expected separation between behavioral guidance and content to reason over.
- The @moonbeam path also sent the triggering message twice (once from the DB, once appended), wasting tokens that could carry useful conversation history.

This PR resolves those issues by resolving mentions to display names so the model can track conversational participants, injecting the channel name and topic into system prompts for tone calibration, restructuring prompt-with-history to place history in the user input where the model expects content, deduplicating the triggering message, adding token-aware history truncation to maximize useful context within budget, replacing the hardcoded userIdId != 39 exclusion with a dynamic lookup, and giving /ai/text basic channel and user context.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
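The mention-resolution step can be sketched as a small pure function. This is an illustrative sketch, not the PR's implementation: `lookup` stands in for whatever users.info-backed cache the bot maintains, and the regex assumes Slack's `<@USERID>` mention syntax.

```typescript
// Sketch of mention resolution: raw Slack IDs like <@U123ABC> are
// replaced with @displayname so the LLM can track participants.
// `lookup` is a hypothetical stand-in for a users.info-backed cache.
function resolveMentions(
  text: string,
  lookup: (id: string) => string | undefined,
): string {
  return text.replace(/<@([A-Z0-9]+)>/g, (match, id) => {
    const name = lookup(id);
    return name ? `@${name}` : match; // leave unknown IDs untouched
  });
}
```

Leaving unknown IDs untouched keeps the transformation lossless when the cache misses, rather than dropping the mention entirely.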
Collaborator
Will review this PR once #163 is approved + merged in. You have duplicative changes here that will lead to merge conflicts.
Summary
- <@U123ABC> in message history is now resolved to @displayname before being sent to the LLM, so the model can identify conversational participants
- Channel name and topic (e.g. #politics) are injected into system prompts, allowing the model to calibrate tone and topic relevance
- New truncateToTokenBudget() utility drops the oldest messages when history exceeds ~8000 tokens, preventing context overflow
- The hardcoded userIdId != 39 exclusion is replaced with a dynamic lookup of Moonbeam's DB ID, which would otherwise silently break if the row changed
- The /ai/text path now receives the channel name and the requesting user's name in its system prompt

Test plan

- @moonbeam responses reference users by name, not raw Slack IDs
- /prompt responses are contextually appropriate to the channel topic
- /ai/text responses acknowledge the channel context
- @moonbeam doesn't repeat the triggering message in its reasoning
- npx jest in packages/backend: all tests related to changed files pass

🤖 Generated with Claude Code
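The token-aware truncation described in the summary can be sketched as follows. This is an illustrative sketch under stated assumptions, not the PR's code: a chars/4 estimate stands in for a real tokenizer, and the ~8000 default mirrors the budget mentioned above.

```typescript
// Sketch: drop the oldest messages when history exceeds the token
// budget. `estimate` is a crude chars/4 heuristic standing in for a
// real tokenizer; ~8000 mirrors the budget described in the summary.
function truncateToTokenBudget(
  messages: string[],
  budget = 8000,
  estimate: (s: string) => number = (s) => Math.ceil(s.length / 4),
): string[] {
  const kept: string[] = [];
  let used = 0;
  // Walk newest-first so the most recent context survives truncation.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimate(messages[i]);
    if (used + cost > budget) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

Walking newest-first (rather than oldest-first) guarantees the messages closest to the triggering prompt are always the ones retained.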