feat: add MCP server for native Claude Code tool integration (#37)
darko-mijic merged 7 commits into main
Conversation
Implement a Model Context Protocol (MCP) server that wraps ProcessStateAPI,
eliminating 30-90s of subprocess overhead per session by loading the pipeline
once and keeping MasterDataset in memory.
Architecture (4 modules):
- server.ts: Lifecycle, stdio transport, stdout isolation, graceful shutdown
- tool-registry.ts: 25 tools with Zod schemas (session-aware, data query,
architecture, utility, server management)
- pipeline-session.ts: Atomic dataset swap, config auto-detection, rebuild
- file-watcher.ts: Chokidar v5 with 500ms debounce, error handling
Bug fixes applied during code review:
- Fix dp_list filter chaining (filters now narrow cumulatively, not overwrite)
- Fix dp_arch_coverage duplicating dp_unannotated (now calls analyzeCoverage)
- Add chokidar error event listener to prevent unhandled crashes
- Warn on unknown CLI arguments instead of silently ignoring
- Use static fs import instead of dynamic import('fs')
- Export parseCliArgs and isWatchedFileType for testability
Test coverage: 99 new Gherkin tests across 5 Rules covering pipeline session
lifecycle, tool registration, file type filtering, CLI argument parsing, and
output format validation. Uses MockMcpServer for transport-free testing.
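The "atomic dataset swap" in pipeline-session.ts can be sketched as follows. This is a minimal illustration, not the PR's implementation: the `PipelineSession` shape is reduced to one field, and a `build` callback stands in for the real dataset-building step.

```typescript
interface PipelineSession {
  buildTimeMs: number;
}

// Sketch: the old session stays readable while a rebuild runs; the swap is a
// single reference assignment, so readers never observe a half-built dataset.
class PipelineSessionManager {
  private session: PipelineSession;
  private rebuilding = false;

  constructor(initial: PipelineSession) {
    this.session = initial;
  }

  getSession(): PipelineSession {
    return this.session;
  }

  isRebuilding(): boolean {
    return this.rebuilding;
  }

  async rebuild(build: () => Promise<PipelineSession>): Promise<PipelineSession> {
    this.rebuilding = true;
    try {
      const next = await build(); // old session remains live during the build
      this.session = next;        // atomic swap: one reference assignment
      return next;
    } finally {
      this.rebuilding = false;
    }
  }
}
```

Callers that hold the old session reference keep a consistent (if stale) view until they fetch the new one.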
Caution: Review failed — the pull request was closed or merged during review.
Note: Reviews paused — this branch appears to be under active development, so CodeRabbit automatically paused the review to avoid overwhelming it with comments from an influx of new commits. This behavior is configurable in the CodeRabbit settings, and reviews can be managed with CodeRabbit's commands.
📝 Walkthrough
Adds a new MCP server runtime and CLI, a PipelineSessionManager with atomic rebuilds, a debounced file watcher, a 25+ tool registry, design/spec/docs, test scaffolding and end-to-end BDD tests, plus minor fuzzy-match and scope-validator adjustments. No existing public API signatures were removed.
Sequence Diagram(s)
sequenceDiagram
actor User
participant CLI as CLI\n(mcp-server.ts)
participant Server as MCP Server\n(src/mcp/server.ts)
participant SessionMgr as Pipeline Session\nManager (src/mcp/pipeline-session.ts)
participant Registry as Tool Registry\n(src/mcp/tool-registry.ts)
participant Watcher as File Watcher\n(src/mcp/file-watcher.ts)
participant Transport as StdioServer\nTransport
User->>CLI: run mcp-server [args]
CLI->>Server: startMcpServer(argv, options)
Server->>Server: parseCliArgs() & isolate stdout
Server->>SessionMgr: initialize(options)
SessionMgr->>SessionMgr: buildMasterDataset -> PipelineSession
SessionMgr-->>Server: PipelineSession
Server->>Registry: registerAllTools(server, sessionMgr)
Registry-->>Server: tools registered
alt watch enabled
Server->>Watcher: start()
Watcher->>Watcher: debounce file events
Watcher->>SessionMgr: rebuild() on debounced change
SessionMgr-->>Watcher: new PipelineSession (atomic swap)
end
Server->>Transport: attach stdio transport
Transport-->>User: JSON-RPC tool calls/responses
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
- dp_rules: return compact summary (totals + rule names + hint) when no pattern filter, preventing 889K char responses that break MCP clients
- dp_pattern: enrich response with deliverables, dependencies, and relationships from ProcessStateAPI to match tool description
- scope-validator: pass active patterns through FSM check instead of false-blocking on active→active non-transition
- fuzzy-match: weight prefix scores by query coverage ratio and add name-length tie-breaker for deterministic ranking
Actionable comments posted: 11
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@delivery-process/specs/mcp-server-integration.feature`:
- Around line 189-194: Update the scenario "Rebuild failure during watch does
not crash server" so it does not require an MCP notification unless the
implementation emits one: remove the step "And an MCP notification indicates
rebuild failure" or change it to a conditional/optional step (e.g., "If the
server emits MCP notifications, then one indicates rebuild failure"); keep the
remaining steps that assert the server continues running with the previous valid
dataset. Target the scenario text in
delivery-process/specs/mcp-server-integration.feature (scenario title "Rebuild
failure during watch does not crash server") to make the MCP-notification
expectation optional or removed.
In `@docs/MCP-SETUP.md`:
- Around line 117-126: The fenced CLI code block containing the "dp-mcp-server
[options]" snippet should include a language identifier to satisfy markdownlint
MD040 and maintain consistency; update the triple-backtick opening fence to
include "text" (i.e., ```text) so the block that lists the -i, -f, -b, -w, -h,
-v options is properly annotated.
In `@src/mcp/index.ts`:
- Around line 1-13: Update the JSDoc header for the MCPModule to include
relationship tags so discovery tools can traverse the public surface: add
`@libar-docs-uses` entries for the internal modules referenced by this barrel
(server, session, watcher, registry) in the existing header block for MCPModule
(the top-file JSDoc in src/mcp/index.ts) — e.g. add lines like "@libar-docs-uses
server", "@libar-docs-uses session", "@libar-docs-uses watcher",
"@libar-docs-uses registry" within the same comment block so tools can detect
those relationships.
In `@src/mcp/server.ts`:
- Around line 97-129: The file watcher is only stopped in cleanup() which is
only invoked on SIGINT/SIGTERM, so when the stdio client disconnects normally
the McpFileWatcher (fileWatcher) keeps running; modify the connection lifecycle
to call cleanup() when the MCP transport or server connection closes: after
creating StdioServerTransport() and/or after await server.connect(transport)
attach a close/end/disconnect handler on the transport or server (e.g.,
transport.on('close' | 'end') or server.on('disconnect' | 'close')) that invokes
cleanup() (avoiding duplicate process.exit calls when cleanup is triggered by a
normal disconnect). Ensure the handler references the existing cleanup and
fileWatcher variables so the watcher is stopped and server closed on client
disconnect.
- Around line 62-83: The public library functions (startMcpServer and
parseCliArgs) must not call process.exit or take CLI control; change
startMcpServer to return a Result type (Result.ok(sessionInfo) or
Result.error(err)) instead of exiting on pipeline initialization failure and
remove any direct process.exit calls in this module (also update the other
init/error/signal handlers that currently terminate). Update parseCliArgs to
validate value-taking flags so a flag value that begins with '-' is treated as
missing/invalid (return Result.error with a discriminated command or validation
error) rather than treating the next token as the value, and surface
help/version as Result values instead of printing/exiting; keep actual
printing/exiting for the CLI entrypoint (src/cli/mcp-server.ts) which should
consume these Result values and decide to log or exit. Ensure code references:
startMcpServer, parseCliArgs, PipelineSessionManager, and any signal/exit
handlers are updated to use the Result monad pattern.
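The Result-style `parseCliArgs` the comment asks for might look like the sketch below. The flag subset (`-i/--input`, `-w/--watch`, `-h`, `-v`) is taken from the setup docs elsewhere in this PR, but the union shape and exact behavior are assumptions, not the merged implementation.

```typescript
// Discriminated union instead of process.exit: the CLI entrypoint decides
// whether to print help/version or exit on error.
type ParseCliResult =
  | { kind: 'run'; input?: string; watch: boolean }
  | { kind: 'help' }
  | { kind: 'version' }
  | { kind: 'error'; message: string };

function parseCliArgs(argv: string[]): ParseCliResult {
  const opts: { input?: string; watch: boolean } = { watch: false };
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    switch (arg) {
      case '-h':
      case '--help':
        return { kind: 'help' };
      case '-v':
      case '--version':
        return { kind: 'version' };
      case '-w':
      case '--watch':
        opts.watch = true;
        break;
      case '-i':
      case '--input': {
        const value = argv[i + 1];
        // A "value" that starts with '-' is another flag, so the value is missing.
        if (value === undefined || value.startsWith('-')) {
          return { kind: 'error', message: `${arg} requires a value` };
        }
        opts.input = value;
        i++;
        break;
      }
      default:
        return { kind: 'error', message: `Unknown argument: ${arg}` };
    }
  }
  return { kind: 'run', ...opts };
}
```

With this shape, `--input --watch` is rejected instead of silently consuming `--watch` as a glob.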
- Around line 29-43: The module currently mutates global console.log (vars
_originalConsoleLog and console.log = console.error) in the module body which
runs after ESM imports (McpServer, StdioServerTransport, PipelineSessionManager,
registerAllTools, McpFileWatcher), so transitive import-time logs can bypass the
redirect and the mutation affects all consumers; move the console.log
redirection out of this module into the CLI entry point that launches the server
and apply it before any imports that load MCP SDK code (i.e., perform the
console swap at the very top of the CLI bootstrap before importing or
dynamically importing McpServer/StdioServerTransport/etc.), leaving this module
side-effect free and restoring _originalConsoleLog handling there if needed.
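The stdout-isolation step can be extracted into a small helper the CLI bootstrap runs before any dynamic import of MCP SDK modules. This is a hedged sketch; the returned restore function is an assumed convenience for tests, and the PR's actual shape may differ.

```typescript
// Redirect console.log to stderr so stdout stays reserved for JSON-RPC frames.
// Returns a restore function to undo the swap (useful in tests/cleanup).
function redirectConsoleLogToStderr(): () => void {
  const originalLog = console.log;
  console.log = (...args: unknown[]): void => {
    console.error(...args);
  };
  return () => {
    console.log = originalLog;
  };
}
```

The CLI entry point would call this at the very top, then `await import(...)` the server module (module path illustrative), so even import-time logs from transitive dependencies land on stderr.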
In `@src/mcp/tool-registry.ts`:
- Around line 639-677: The dp_help handler currently returns a hard-coded tools
array inside the server.registerTool callback (the static list in the dp_help
registration), which will drift as tools change; update the handler to build the
help text from the server's registry instead: query the server for its
registered tools (e.g., a registry API such as
server.getRegisteredTools()/server.listTools() or the internal registry used by
registerTool), map each registered entry to a single-line description using the
tool's id, title and description fields, sort/preserve ordering as needed, and
join them into the returned textResult; ensure dp_help itself is included or
excluded intentionally and keep fallback to the current static list if the
registry query is not available.
In `@tests/steps/mcp/mcp-server.steps.ts`:
- Around line 105-155: Capture and assert the original session and buildTime
before calling PipelineSessionManager.rebuild(): store manager.getSession() in a
variable (originalSession) and its buildTimeMs, then call rebuild and assert the
returned newSession !== originalSession, manager.getSession() !==
originalSession, and newSession.buildTimeMs !== originalSession.buildTimeMs; for
the isRebuilding lifecycle test, start the rebuild without awaiting (keep the
Promise from manager.rebuild()), immediately assert manager.isRebuilding() ===
true while the Promise is pending, then await the Promise and finally assert
manager.isRebuilding() === false and that manager.getSession() was replaced—use
the symbols PipelineSessionManager, rebuild, getSession, isRebuilding, and
buildTimeMs to locate the changes.
- Around line 326-339: The step patterns in the RuleScenarioOutline use
'{string}' placeholders which break mapping to the Examples table; change the
step text strings to use literal Cucumber outline placeholders '<args>' and
'<option>' (e.g. When('parseCliArgs is called with <args>' ...) and Then('the
parsed result has <option> set' ...)) while keeping the handler bodies that read
values from variables.args and variables.option and assign state!.parsedOptions
= parseCliArgs(argv); and asserting against state.parsedOptions as before.
- Around line 426-433: There are two step registrations using the identical
pattern 'the JSON result contains {string} key' (one Then and one And) which
causes duplicate/ambiguous step matching; remove one and replace both usages
with a single step that accepts a DataTable (e.g., register a step like
Then('the JSON result contains the following keys', (_ctx: unknown, dataTable:
DataTable) => { ... })) inside the same step file, parse the JSON exactly as
currently done (from state!.toolResult!.content[0].text) and iterate
dataTable.rows()/hashes() to assert expect(parsed).toHaveProperty(...) for each
key, and update the feature files to call that new step with a table of keys
instead of multiple repeated lines.
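A consolidated table-driven step would flatten the DataTable's rows into a key list and delegate the assertions to one helper. The helper below is hypothetical (its name and the error wording are not from the PR); the vitest-cucumber step wrapper is omitted.

```typescript
// Hypothetical helper behind a single "the JSON result contains the following
// keys" step: parse once, then assert each key from the DataTable exists.
function assertJsonHasKeys(rawJson: string, keys: string[]): void {
  const parsed = JSON.parse(rawJson) as Record<string, unknown>;
  for (const key of keys) {
    if (!(key in parsed)) {
      throw new Error(`Expected JSON result to contain "${key}" key`);
    }
  }
}
```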
In `@tests/support/helpers/mcp-state.ts`:
- Around line 117-138: MockPipelineSessionManager currently returns
PipelineSession synchronously from initialize() and rebuild(), breaking the
production async contract and preventing callers from observing rebuilding=true
across an await. Make initialize() and rebuild() async (return
Promise<PipelineSession>), set this.rebuilding = true immediately in rebuild(),
await a microtask (e.g., Promise.resolve() or setImmediate) to preserve an async
boundary so callers can observe rebuilding while awaited, then update
this.session and set this.rebuilding = false before resolving; ensure the method
signatures and any tests/types referencing MockPipelineSessionManager reflect
the Promise<PipelineSession> return.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: e31a6589-88b6-4f51-ba4a-44c410aacb31
⛔ Files ignored due to path filters (2)
package.json is excluded by none and included by none
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml, !pnpm-lock.yaml and included by none
📒 Files selected for processing (16)
delivery-process/design-reviews/mcp-server-integration.md
delivery-process/specs/mcp-server-integration.feature
delivery-process/stubs/mcp-server-integration/file-watcher.ts
delivery-process/stubs/mcp-server-integration/pipeline-session.ts
delivery-process/stubs/mcp-server-integration/server.ts
delivery-process/stubs/mcp-server-integration/tool-registry.ts
docs/MCP-SETUP.md
src/cli/mcp-server.ts
src/mcp/file-watcher.ts
src/mcp/index.ts
src/mcp/pipeline-session.ts
src/mcp/server.ts
src/mcp/tool-registry.ts
tests/features/mcp/mcp-server.feature
tests/steps/mcp/mcp-server.steps.ts
tests/support/helpers/mcp-state.ts
Actionable comments posted: 4
♻️ Duplicate comments (4)
tests/support/helpers/mcp-state.ts (1)
160-173: ⚠️ Potential issue | 🟠 Major — Keep the mock session manager async.
The real `PipelineSessionManager.initialize()`/`rebuild()` are awaited by callers, but this mock resolves synchronously. That means tests can never observe `isRebuilding() === true` across an async boundary, and code that accidentally omits `await` still passes here.
🔁 Suggested fix
```diff
-  initialize(): PipelineSession {
-    return this.session;
-  }
-
-  rebuild(): PipelineSession {
-    this.rebuilding = true;
-    // Simulate rebuild with a new build time
-    this.session = {
-      ...this.session,
-      buildTimeMs: this.session.buildTimeMs + 100,
-    };
-    this.rebuilding = false;
-    return this.session;
+  async initialize(): Promise<PipelineSession> {
+    return this.session;
+  }
+
+  async rebuild(): Promise<PipelineSession> {
+    this.rebuilding = true;
+    try {
+      await Promise.resolve();
+      this.session = {
+        ...this.session,
+        buildTimeMs: this.session.buildTimeMs + 100,
+      };
+      return this.session;
+    } finally {
+      this.rebuilding = false;
+    }
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/support/helpers/mcp-state.ts` around lines 160-173: the mock PipelineSessionManager's initialize() and rebuild() should be made asynchronous so callers can await and observe isRebuilding() across an async boundary: change initialize() and rebuild() to return Promises (e.g., async functions) and in rebuild() set this.rebuilding = true, await a microtask or small timeout before updating this.session.buildTimeMs and setting this.rebuilding = false, then resolve the updated session; ensure callers of initialize() and rebuild() in tests still receive a PipelineSession but only after the async boundary so isRebuilding() can be observed while true.
tests/steps/mcp/mcp-server.steps.ts (2)
327-339: ⚠️ Potential issue | 🟠 Major — Use `<args>`/`<option>` in the outline step text.
This `RuleScenarioOutline()` already reads `variables.args` and `variables.option`, but the step text still uses normal scenario expressions. Switch these to `When('parseCliArgs is called with <args>', ...)` and `Then('the parsed result has <option> set', ...)` so the example-row binding is explicit.
For `@amiceli/vitest-cucumber` ScenarioOutline step definitions, should the step text use literal `<args>`/`<option>` placeholders with values read from the `variables` object instead of `{string}`?
As per coding guidelines: "For `vitest-cucumber` ScenarioOutline step definitions: use `<column>` placeholders and read via the `variables` object (e.g., `variables.columnName`), NOT as function parameters like in regular Scenarios."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/steps/mcp/mcp-server.steps.ts` around lines 327 - 339, The step definitions in RuleScenarioOutline currently use literal "{string}" tokens but read values from variables.args and variables.option; update the step texts to use outline placeholders so the example row binds correctly — change When('parseCliArgs is called with {string}', ...) to When('parseCliArgs is called with <args>', ...) and Then('the parsed result has {string} set', ...) to Then('the parsed result has <option> set', ...) while keeping the handler bodies that call parseCliArgs(argv) and access variables.args / variables.option and state.parsedOptions unchanged.
118-132: ⚠️ Potential issue | 🟠 Major — These rebuild scenarios still don't prove atomic replacement or the in-flight flag.
`newSession!.buildTimeMs >= 0` never compares against the original session, and `await manager!.rebuild()` only checks `isRebuilding()` after the promise has settled. A broken implementation that reuses the old session or never flips the flag to `true` would still pass.
🔍 What to assert instead
```diff
+ let originalSession: PipelineSession | null = null;
+ let originalBuildTime = -1;
 ...
- When('rebuild is called', async () => {
-   newSession = await manager!.rebuild();
+ When('rebuild is called', async () => {
+   originalSession = manager!.getSession();
+   originalBuildTime = originalSession.buildTimeMs;
+   const rebuildPromise = manager!.rebuild();
+   expect(manager!.isRebuilding()).toBe(true);
+   newSession = await rebuildPromise;
  });
 ...
- expect(newSession!.buildTimeMs).toBeGreaterThanOrEqual(0);
- expect(manager!.getSession()).toBe(newSession);
+ expect(newSession).not.toBe(originalSession);
+ expect(newSession!.buildTimeMs).not.toBe(originalBuildTime);
+ expect(manager!.getSession()).toBe(newSession);
```
Also applies to: 150-155
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/steps/mcp/mcp-server.steps.ts` around lines 118-132: the test currently only asserts the rebuild outcome after completion and doesn't verify atomic replacement or that the in-flight flag flips during rebuild; to fix, capture the original session (e.g., const original = manager!.getSession()), start the rebuild without awaiting (const rebuildPromise = manager!.rebuild()), immediately assert manager!.isRebuilding() is true and manager!.getSession() is still original, then await rebuildPromise and assert the returned/new session is not the original (compare identity and/or buildTimeMs differs) and that manager!.getSession() equals the new session; apply the same pattern for the other scenario mentioned (lines 150-155).
tests/features/mcp/mcp-server.feature (1)
178-182: ⚠️ Potential issue | 🟠 Major — Collapse the repeated JSON-key checks into one table-driven step.
These two steps share the same pattern text, which is what forces the step file to register duplicate handlers later. A single step with a key table avoids the ambiguity and scales better as the response grows.
As per coding guidelines: "Consolidate multiple `And` steps with identical text into a single step with DataTable or docstring; multiple identical patterns fail in vitest-cucumber."
🧪 Suggested feature shape
```diff
- Then the JSON result contains "counts" key
- And the JSON result contains "distribution" key
+ Then the JSON result contains the following keys
+   | key          |
+   | counts       |
+   | distribution |
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/features/mcp/mcp-server.feature` around lines 178 - 182, Replace the two repeated steps "Then the JSON result contains \"counts\" key" and "And the JSON result contains \"distribution\" key" in the Scenario "dp_status returns JSON with counts" with a single table-driven step (e.g. "Then the JSON result contains keys:") and a DataTable listing counts and distribution; then remove the duplicate step definition handlers and implement one step definition that accepts a Cucumber DataTable (or array) and asserts each key exists on the JSON result (update the step definition that currently matches the repeated pattern to a single handler that iterates over the table entries and checks result.hasOwnProperty(key) or equivalent).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/api/fuzzy-match.ts`:
- Around line 163-167: The comparator in matches.sort (inside the fuzzy matching
routine) can still leave ties non-deterministic when score and
patternName.length are equal; update the comparator used in matches.sort so that
after comparing b.score - a.score and then a.patternName.length -
b.patternName.length it performs a final lexical comparison (e.g.
a.patternName.localeCompare(b.patternName)) to deterministically break ties;
locate the comparator where matches.sort((a, b) => { ... }) is defined and add
the final lexical comparison using a.patternName and b.patternName.
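The three-stage comparator described above can be sketched directly. The `PatternMatch` shape is reduced to the two fields the sort uses; the real match objects in src/api/fuzzy-match.ts carry more.

```typescript
interface PatternMatch {
  patternName: string;
  score: number;
}

// Comparator sketch: score descending, then shorter name, then a final
// lexical comparison so equal-score, equal-length ties are deterministic.
function compareMatches(a: PatternMatch, b: PatternMatch): number {
  if (b.score !== a.score) return b.score - a.score;
  if (a.patternName.length !== b.patternName.length) {
    return a.patternName.length - b.patternName.length;
  }
  return a.patternName.localeCompare(b.patternName);
}
```

Without the final `localeCompare`, ties would fall through to the engine's sort order, which is stable per ECMAScript but depends on input order, so rankings could differ between runs with differently ordered inputs.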
In `@src/mcp/tool-registry.ts`:
- Around line 82-83: Wrap calls that obtain a session and invoke tool handlers
so thrown exceptions are caught and returned as MCP error payloads instead of
propagating; specifically, change registerAllTools where getSession() is defined
to call sessionManager.getSession() inside a try/catch and, for each registered
handler (e.g., dp_scope_validate and the other handlers around the 181-188
region), wrap the handler invocation in try/catch and convert any thrown error
into a response object with isError: true and a useful error message/details
before sending it back via the MCP callback/transport. Ensure you preserve
normal successful return values when no exception occurs.
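A wrapper like the one requested above could look like this sketch. `ToolResult` only approximates the MCP SDK's tool-response shape, and the error-message formatting is an assumption.

```typescript
type ToolResult = {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
};

// Sketch: any exception thrown by a tool handler becomes an isError payload
// instead of propagating to the transport as a protocol-level failure.
function safeHandler<A extends unknown[]>(
  handler: (...args: A) => ToolResult | Promise<ToolResult>,
): (...args: A) => Promise<ToolResult> {
  return async (...args: A): Promise<ToolResult> => {
    try {
      return await handler(...args); // successful results pass through unchanged
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return {
        content: [{ type: 'text', text: `Error: ${message}` }],
        isError: true,
      };
    }
  };
}
```

Wrapping every handler at registration time means new tools get the behavior for free rather than each handler repeating its own try/catch.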
- Around line 587-590: The current handler replaces the configured input globs
with `${pathFilter}/**/*.ts` when path is provided, losing `.tsx` or custom
monorepo patterns; instead, when pathFilter is set build globs by taking
s.sourceGlobs.input and prefixing or joining each pattern with the pathFilter
(preserving the original patterns), e.g. map over s.sourceGlobs.input to apply
the pathFilter, then pass that resulting globs array to
analyzeCoverage(s.dataset, globs, s.baseDir, s.registry); update the async ({
path: pathFilter }) => handler to use path-aware mapping of s.sourceGlobs.input
rather than a hardcoded `${pathFilter}/**/*.ts`.
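The path-aware mapping suggested above can be sketched as a small pure function: keep each configured file pattern (including brace lists like `*.{ts,tsx}`) and only prepend the path filter. The helper name is illustrative, not from the PR.

```typescript
// Prefix each configured input glob with the path filter, preserving the
// original file pattern (last path segment) instead of hardcoding *.ts.
function applyPathFilter(inputGlobs: string[], pathFilter: string): string[] {
  return inputGlobs.map((glob) => {
    const filePattern = glob.split('/').pop() ?? glob; // e.g. "*.{ts,tsx}"
    return `${pathFilter}/**/${filePattern}`;
  });
}
```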
- Around line 427-445: The loop extracting AD-* decisions from stubs is too
narrow (it ignores DD-* entries); replace the manual regex logic inside the
for-loop that uses findStubPatterns(s.dataset) and the local regex with a call
to the shared helper extractDecisionItems() to parse both AD-* and DD-* items
and produce the same { pattern, id, description } shape; ensure the produced
decisions array matches the format consumed by checkDesignDecisionsRecorded()
(use stub.name for pattern and map extractDecisionItems() results into {
pattern, id, description }).
---
Duplicate comments:
In `@tests/features/mcp/mcp-server.feature`:
- Around line 178-182: Replace the two repeated steps "Then the JSON result
contains \"counts\" key" and "And the JSON result contains \"distribution\" key"
in the Scenario "dp_status returns JSON with counts" with a single table-driven
step (e.g. "Then the JSON result contains keys:") and a DataTable listing counts
and distribution; then remove the duplicate step definition handlers and
implement one step definition that accepts a Cucumber DataTable (or array) and
asserts each key exists on the JSON result (update the step definition that
currently matches the repeated pattern to a single handler that iterates over
the table entries and checks result.hasOwnProperty(key) or equivalent).
In `@tests/steps/mcp/mcp-server.steps.ts`:
- Around line 327-339: The step definitions in RuleScenarioOutline currently use
literal "{string}" tokens but read values from variables.args and
variables.option; update the step texts to use outline placeholders so the
example row binds correctly — change When('parseCliArgs is called with
{string}', ...) to When('parseCliArgs is called with <args>', ...) and Then('the
parsed result has {string} set', ...) to Then('the parsed result has <option>
set', ...) while keeping the handler bodies that call parseCliArgs(argv) and
access variables.args / variables.option and state.parsedOptions unchanged.
- Around line 118-132: The test currently only asserts rebuild outcome after
completion and doesn't verify atomic replacement or that the in-flight flag
flips during rebuild; to fix, capture the original session (e.g., const original
= manager!.getSession()), start the rebuild without awaiting (const
rebuildPromise = manager!.rebuild()), immediately assert manager!.isRebuilding()
is true and manager!.getSession() is still original, then await rebuildPromise
and assert the returned/new session is not the original (compare identity and/or
buildTimeMs differs) and that manager!.getSession() equals the new session;
apply the same pattern for the other scenario mentioned (lines 150-155).
In `@tests/support/helpers/mcp-state.ts`:
- Around line 160-173: The mock PipelineSessionManager's initialize() and
rebuild() should be made asynchronous so callers can await and observe
isRebuilding() across an async boundary: change initialize() and rebuild() to
return Promises (e.g., async functions) and in rebuild() set this.rebuilding =
true, await a microtask or small timeout before updating
this.session.buildTimeMs and setting this.rebuilding = false, then resolve the
updated session; ensure callers of initialize() and rebuild() in tests still
receive a PipelineSession but only after the async boundary so isRebuilding()
can be observed while true.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 68fdc75c-e5c8-4ebe-b79c-62960997e693
⛔ Files ignored due to path filters (1)
.mcp.json is excluded by none and included by none
📒 Files selected for processing (10)
src/api/fuzzy-match.ts
src/api/scope-validator.ts
src/mcp/tool-registry.ts
tests/features/api/output-shaping/fuzzy-match.feature
tests/features/api/session-support/scope-validator.feature
tests/features/mcp/mcp-server.feature
tests/steps/api/output-shaping/fuzzy-match.steps.ts
tests/steps/api/session-support/scope-validator.steps.ts
tests/steps/mcp/mcp-server.steps.ts
tests/support/helpers/mcp-state.ts
…andling, tests
Server architecture (comments 4, 5, 6):
- Move console.log→stderr redirect from server.ts module body to CLI entry
point with dynamic import, ensuring redirect is active before ESM imports
- Change parseCliArgs to return ParseCliResult discriminated union instead
of calling process.exit for help/version/error
- Add flag-value validation (--input --watch no longer silently treats
--watch as a glob)
- Remove process.exit(1) from startMcpServer; throw on init failure instead
- Add process.stdin 'end' handler to tear down file watcher on client
disconnect (prevents chokidar holding event loop open)
Tool registry (comments 13, 14, 15):
- Wrap all 25 tool handlers with safeHandler() so thrown exceptions become
MCP isError payloads instead of transport-level errors
- Fix dp_decisions regex to match both AD-N and DD-N entries
- Fix dp_arch_coverage to preserve configured file extensions (.tsx etc)
instead of hardcoding *.ts when path filter is set
Tests (comments 8, 9, 11):
- Fix ScenarioOutline step patterns: use "<args>"/"<option>" placeholders
instead of {string} per vitest-cucumber rules
- Make MockPipelineSessionManager methods async to match production contract
- Capture original session before rebuild to assert object identity change
- Test isRebuilding=true during rebuild by starting without awaiting
Other (comments 1, 2, 3, 12):
- Fix spec wording: "logged to stderr" not "MCP notification"
- Add language identifier to MCP-SETUP.md code fence
- Add @libar-docs-uses tag to MCP barrel module
- Add lexical tie-breaker to fuzzy match sort for deterministic ordering
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
delivery-process/specs/mcp-server-integration.feature (1)
98-103: ⚠️ Potential issue | 🟠 Major — Acceptance check is too weak for a "register all tools" contract.
"At least 19 tools" allows multiple regressions while still passing. For this rule, assert the exact contract (25) or assert a required tool-name set.
✅ Suggested tightening
```diff
- Then at least 19 tools are registered
+ Then exactly 25 tools are registered
```
As per coding guidelines, `delivery-process/**/*.feature`: Ensure scenarios define clear acceptance criteria.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@delivery-process/specs/mcp-server-integration.feature` around lines 98 - 103, The acceptance check in the "All CLI subcommands appear as MCP tools" Scenario is too weak: replace the loose assertion "Then at least 19 tools are registered" with a precise contract — either assert the exact expected count (change to "Then 25 tools are registered") or assert the exact required set of tool names (replace the count check with a step that verifies the registered tool-name set includes the canonical list of dp_* tools), and keep the existing "And each tool name starts with \"dp_\"" check; update the Gherkin steps in the scenario accordingly so the test fails on regressions.
♻️ Duplicate comments (3)
src/mcp/server.ts (1)
98-125: ⚠️ Potential issue | 🟠 Major — Keep process termination out of the exported server API.
`startMcpServer()` is publicly exported, but its internal signal/stdin handlers force `process.exit(...)`. That can terminate embedding hosts unexpectedly and makes lifecycle control impossible outside the CLI.
🔧 Suggested direction (idempotent shutdown, no forced exit here)
```diff
- process.on('SIGINT', () => {
-   void cleanup().then(() => process.exit(0));
- });
- process.on('SIGTERM', () => {
-   void cleanup().then(() => process.exit(0));
- });
+ let shuttingDown = false;
+ const shutdown = async (): Promise<void> => {
+   if (shuttingDown) return;
+   shuttingDown = true;
+   await cleanup();
+ };
+ process.on('SIGINT', () => {
+   void shutdown();
+ });
+ process.on('SIGTERM', () => {
+   void shutdown();
+ });
 ...
- process.stdin.on('end', () => {
-   log('Client disconnected (stdin closed)');
-   void cleanup().then(() => process.exit(0));
- });
+ process.stdin.on('end', () => {
+   log('Client disconnected (stdin closed)');
+   void shutdown();
+ });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.ts` around lines 98-125: the exported startMcpServer() should not force process termination: remove/avoid any direct process.exit(...) calls in the SIGINT/SIGTERM and stdin 'end' handlers and stop registering those global handlers unconditionally inside the module; instead ensure the cleanup() function (already defined) returns a Promise<void> and either (a) expose cleanup() from the module or (b) add an explicit option to startMcpServer (e.g., installProcessHandlers: boolean) so only the CLI entrypoint registers process.on('SIGINT'|'SIGTERM') and process.stdin.on('end') to call cleanup() (and optionally exit), while the library API only calls cleanup() without exiting so embedding hosts control lifecycle.
src/mcp/tool-registry.ts (1)
604-618: ⚠️ Potential issue | 🟠 Major — Path-filtered coverage globs can still drop configured file patterns.
At line 612, the extension extraction only matches simple `*.ext` endings. Globs like `src/**/*.{ts,tsx}` are not captured, so line 615 falls back to `ts` and omits `tsx` files from `dp_arch_coverage` when `path` is used.
♻️ Suggested fix (preserve configured file-pattern suffixes)
```diff
- const globs =
-   pathFilter !== undefined
-     ? (() => {
-         // Preserve file extensions from configured globs instead of hardcoding *.ts
-         const extensions = new Set<string>();
-         for (const g of s.sourceGlobs.input) {
-           const m = /\*\.(\w+)$/.exec(g);
-           if (m?.[1] !== undefined) extensions.add(m[1]);
-         }
-         if (extensions.size === 0) extensions.add('ts');
-         return [...extensions].map((ext) => `${pathFilter}/**/*.${ext}`);
-       })()
-     : [...s.sourceGlobs.input];
+ const globs =
+   pathFilter !== undefined
+     ? s.sourceGlobs.input.map((glob) => {
+         const filePattern = glob.split('/').pop() ?? '*.{ts,tsx}';
+         return `${pathFilter}/**/${filePattern}`;
+       })
+     : [...s.sourceGlobs.input];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tool-registry.ts` around lines 604 - 618, The current extraction in safeHandler for path-filtered globs only detects simple patterns like *.ext and misses brace lists (e.g. .{ts,tsx}), causing loss of configured extensions when path is provided; update the logic in the block that builds globs (the anonymous IIFE referenced by globs, using getSession() and s.sourceGlobs.input) to detect both simple extensions and brace lists: for each pattern in s.sourceGlobs.input, first try to match a brace group like .{ts,tsx} and split by commas to add each extension, otherwise fall back to the current simple *.ext capture, and only if nothing found add 'ts' as a default; then produce the pathFilter globs as before (e.g. `${pathFilter}/**/*.${ext}` or equivalent) so all configured extensions (tsx, ts, etc.) are preserved.

tests/steps/mcp/mcp-server.steps.ts (1)
441-450: ⚠️ Potential issue | 🟠 Major

Duplicate step pattern still present - consolidate into single step with DataTable.
Both `Then` and `And` register the identical pattern `'the JSON result contains {string} key'` within the same Rule Scenario. In vitest-cucumber, duplicate pattern registrations cause ambiguous step matching.

🔧 Suggested fix for step definitions
```diff
-  Then('the JSON result contains {string} key', (_ctx: unknown, key: string) => {
-    const parsed = JSON.parse(state!.toolResult!.content[0].text) as Record<string, unknown>;
-    expect(parsed).toHaveProperty(key);
-  });
-
-  And('the JSON result contains {string} key', (_ctx: unknown, key: string) => {
+  Then('the JSON result contains keys', (_ctx: unknown, dataTable: DataTable) => {
     const parsed = JSON.parse(state!.toolResult!.content[0].text) as Record<string, unknown>;
-    expect(parsed).toHaveProperty(key);
+    for (const row of dataTable.hashes()) {
+      expect(parsed).toHaveProperty(row.key);
+    }
   });
```

🔧 Corresponding feature file change (mcp-server.feature lines 183-184)
```diff
-    Then the JSON result contains "counts" key
-    And the JSON result contains "distribution" key
+    Then the JSON result contains keys
+      | key          |
+      | counts       |
+      | distribution |
```

As per coding guidelines: "Consolidate multiple `And` steps with identical text into a single step with DataTable or docstring; multiple identical patterns fail in vitest-cucumber."
Verify each finding against the current code and only fix it if needed. In `@tests/steps/mcp/mcp-server.steps.ts` around lines 441 - 450, There are two identical step registrations for the pattern 'the JSON result contains {string} key' (the Then and And blocks); remove the duplicate and replace with a single step (e.g., keep only the Then registration) that accepts a DataTable or array of keys and asserts each key exists on the parsed JSON (reference the step registration lines calling JSON.parse(state!.toolResult!.content[0].text) and expect(parsed).toHaveProperty). Update the feature to provide multiple keys via a DataTable or list so the single step can iterate and assert every required key.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/steps/mcp/mcp-server.steps.ts`:
- Around line 197-201: The test uses a regex to assert a simple prefix; update
the step handler for "each tool name starts with {string}" to use
String.startsWith for clarity and safety: iterate the same
state!.mockServer!.tools and assert that each extracted name (the local variable
name) startsWith(prefix) (e.g., expect(name.startsWith(prefix)).toBe(true))
instead of constructing new RegExp(`^${prefix}`).
In `@tests/support/helpers/mcp-state.ts`:
- Around line 1-7: Update the header comment block (the existing /** ...
`@libar-docs` */ block) to include relationship annotations `@libar-docs-depends-on`
and `@libar-docs-uses`; list the actual modules/types this helper imports or
interacts with (for example the MCP server test state types, any mock utilities
and test helpers you import — replace placeholders with the real symbols found
in this file such as the MCP test state type name, mock logger, and test utility
module names) so discoverability queries can link dependencies and usages.
- Around line 165-173: MockPipelineSessionManager.rebuild() currently mutates
this.session before the async yield, which differs from production
PipelineSessionManager and can hide "read-old-session-while-rebuilding" bugs;
change rebuild() (in the MockPipelineSessionManager class) to compute a
newSession object locally (e.g., copy this.session and bump buildTimeMs), set
this.rebuilding = true, await the async boundary, then atomically assign
this.session = newSession, set this.rebuilding = false, and return the session
so the session swap happens only after the async work completes.
---
Outside diff comments:
In `@delivery-process/specs/mcp-server-integration.feature`:
- Around line 98-103: The acceptance check in the "All CLI subcommands appear as
MCP tools" Scenario is too weak: replace the loose assertion "Then at least 19
tools are registered" with a precise contract — either assert the exact expected
count (change to "Then 25 tools are registered") or assert the exact required
set of tool names (replace the count check with a step that verifies the
registered tool-name set includes the canonical list of dp_* tools), and keep
the existing "And each tool name starts with \"dp_\"" check; update the Gherkin
steps in the scenario accordingly so the test fails on regressions.
---
Duplicate comments:
In `@src/mcp/server.ts`:
- Around line 98-125: The exported startMcpServer() should not force process
termination: remove/avoid any direct process.exit(...) calls in the
SIGINT/SIGTERM and stdin 'end' handlers and stop registering those global
handlers unconditionally inside the module; instead ensure the cleanup()
function (already defined) returns a Promise<void> and either (a) expose
cleanup() from the module or (b) add an explicit option to startMcpServer (e.g.,
installProcessHandlers: boolean) so only the CLI entrypoint registers
process.on('SIGINT'|'SIGTERM') and process.stdin.on('end') to call cleanup()
(and optionally exit), while the library API only calls cleanup() without
exiting so embedding hosts control lifecycle.
In `@src/mcp/tool-registry.ts`:
- Around line 604-618: The current extraction in safeHandler for path-filtered
globs only detects simple patterns like *.ext and misses brace lists (e.g.
.{ts,tsx}), causing loss of configured extensions when path is provided; update
the logic in the block that builds globs (the anonymous IIFE referenced by
globs, using getSession() and s.sourceGlobs.input) to detect both simple
extensions and brace lists: for each pattern in s.sourceGlobs.input, first try
to match a brace group like .{ts,tsx} and split by commas to add each extension,
otherwise fall back to the current simple *.ext capture, and only if nothing
found add 'ts' as a default; then produce the pathFilter globs as before (e.g.
`${pathFilter}/**/*.${ext}` or equivalent) so all configured extensions (tsx,
ts, etc.) are preserved.
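The brace-list extraction this prompt asks for can be sketched as a small helper. The helper name, the `'ts'` fallback, and the glob-rewrite shape mirror the reviewed code but are assumptions about its exact form:

```typescript
// Illustrative sketch: collect extensions from configured globs, handling
// both simple `*.ext` suffixes and brace lists like `*.{ts,tsx}`.
function extractExtensions(globs: string[]): string[] {
  const extensions = new Set<string>();
  for (const g of globs) {
    // Brace list first: src/**/*.{ts,tsx} -> ts, tsx
    const brace = /\*\.\{([^}]+)\}$/.exec(g);
    if (brace?.[1] !== undefined) {
      for (const ext of brace[1].split(',')) extensions.add(ext.trim());
      continue;
    }
    // Simple suffix: src/**/*.ts -> ts
    const simple = /\*\.(\w+)$/.exec(g);
    if (simple?.[1] !== undefined) extensions.add(simple[1]);
  }
  if (extensions.size === 0) extensions.add('ts'); // fallback mirrors the original code
  return [...extensions];
}

const globsForPath = (pathFilter: string, input: string[]): string[] =>
  extractExtensions(input).map((ext) => `${pathFilter}/**/*.${ext}`);
```

With this, `globsForPath('src/mcp', ['src/**/*.{ts,tsx}'])` yields globs for both `ts` and `tsx` instead of silently dropping `tsx`.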
In `@tests/steps/mcp/mcp-server.steps.ts`:
- Around line 441-450: There are two identical step registrations for the
pattern 'the JSON result contains {string} key' (the Then and And blocks);
remove the duplicate and replace with a single step (e.g., keep only the Then
registration) that accepts a DataTable or array of keys and asserts each key
exists on the parsed JSON (reference the step registration lines calling
JSON.parse(state!.toolResult!.content[0].text) and
expect(parsed).toHaveProperty). Update the feature to provide multiple keys via
a DataTable or list so the single step can iterate and assert every required
key.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 1f045c15-50ed-4b26-af6c-7fd679884f0e
⛔ Files ignored due to path filters (1)
`CLAUDE.md` is excluded by none and included by none
📒 Files selected for processing (10)
- delivery-process/specs/mcp-server-integration.feature
- docs/MCP-SETUP.md
- src/api/fuzzy-match.ts
- src/cli/mcp-server.ts
- src/mcp/index.ts
- src/mcp/server.ts
- src/mcp/tool-registry.ts
- tests/features/mcp/mcp-server.feature
- tests/steps/mcp/mcp-server.steps.ts
- tests/support/helpers/mcp-state.ts
…ck, rebuild coalescing, full dp_pattern

- Use real package version for --version output and MCP server metadata
- Return applied boolean from applyProjectSourceDefaults (true only when sources were filled)
- Serialize/coalesce concurrent rebuilds via shared promise + pendingRebuild loop
- Spread full pattern payload in dp_pattern (includes rules, extractedShapes, directive)
- Remove dp_rebuild short-circuit guard (coalescing handles concurrent requests)
- Check .config.js in addition to .config.ts for source fallback detection
- Add 5 new test scenarios: config fallback, previous-session reads, coalesced rebuilds, version flag, full metadata
- Update spec invariants and add 4 new acceptance-criteria scenarios
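The "shared promise + pendingRebuild loop" mentioned in the commit above can be sketched like this. `RebuildCoalescer` and `doRebuild` are illustrative names, not the repo's actual API:

```typescript
// Illustrative sketch: concurrent rebuild requests share one in-flight
// promise; requests that arrive mid-build set a pending flag, and the loop
// runs exactly one more build afterwards instead of one per request.
class RebuildCoalescer {
  private inFlight: Promise<void> | null = null;
  private pending = false;
  runs = 0;

  constructor(private doRebuild: () => Promise<void>) {}

  rebuild(): Promise<void> {
    if (this.inFlight) {
      // A rebuild is running: remember another pass is needed and share
      // the in-flight promise instead of starting a second build.
      this.pending = true;
      return this.inFlight;
    }
    this.inFlight = (async () => {
      do {
        this.pending = false;
        this.runs += 1;
        await this.doRebuild();
      } while (this.pending); // loop once more if requests arrived mid-build
      this.inFlight = null;
    })();
    return this.inFlight;
  }
}
```

Any number of concurrent `rebuild()` calls collapses to at most two builds: the one already running plus one follow-up that picks up all changes queued during it.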
@coderabbitai, are the reviews slower because this repo is configured as an OSS project? I have a paid plan and I could activate it here if that would improve the review speed.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
delivery-process/specs/mcp-server-integration.feature (1)
99-105: ⚠️ Potential issue | 🟡 Minor

Raise this lower bound to the current tool inventory.
The runtime, tests, and generated design review now describe 25 tools, but this acceptance criterion still allows 19. That weakens the spec enough to let six tools disappear without violating the documented contract.
As per coding guidelines, `**/*.feature`: Feature files are both executable specs and documentation source; feature description content appears in generated docs, so prioritize clarity and structure.

♻️ Suggested fix
```diff
-    Then at least 19 tools are registered
+    Then at least 25 tools are registered
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@delivery-process/specs/mcp-server-integration.feature` around lines 99 - 105, Update the acceptance criterion that currently reads "Then at least 19 tools are registered" to the current inventory count by changing the literal 19 to 25 in the feature step; locate the step text "Then at least 19 tools are registered" in the Scenario "All CLI subcommands appear as MCP tools" and replace the bound so the spec requires at least 25 tools (and add a brief comment if desired to remind maintainers to keep this number in sync with the runtime/tooling inventory).

src/config/config-loader.ts (1)
272-299: ⚠️ Potential issue | 🟠 Major

`applied` is still too coarse for partial config defaults.

If a config file fills only one missing source list, this returns `true` and the current callers skip fallback entirely. That leaves the other list empty, so repos that configure TypeScript globs but rely on conventional feature paths still lose their feature sources. Return per-list status, or let callers fall back for whichever array remains empty.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/config-loader.ts` around lines 272 - 299, The function applyProjectSourceDefaults currently returns a single boolean "applied" which hides partial successes; change it to return per-list status (e.g., { inputApplied: boolean, featuresApplied: boolean } or a tuple) so callers can fallback for whichever list remains empty. In applyProjectSourceDefaults, replace the single applied flag with two flags (inputApplied and featuresApplied), set them when you push into config.input or config.features (using resolved.project.sources.typescript and resolved.project.sources.features), and return both flags. Update all callers to inspect the specific flag for input vs features and only skip fallback for the list that was actually populated.
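The per-list status the prompt suggests can be sketched as follows. The config shape and the source fields are simplified assumptions, not the repo's actual `applyProjectSourceDefaults` signature:

```typescript
// Illustrative sketch: report which list was actually filled, so callers
// can fall back only for the list that stayed empty.
interface SourceConfig {
  input: string[];
  features: string[];
}

interface ProjectSources {
  typescript?: string[];
  features?: string[];
}

function applyProjectSourceDefaults(
  config: SourceConfig,
  sources: ProjectSources,
): { inputApplied: boolean; featuresApplied: boolean } {
  let inputApplied = false;
  let featuresApplied = false;
  if (config.input.length === 0 && sources.typescript?.length) {
    config.input.push(...sources.typescript);
    inputApplied = true;
  }
  if (config.features.length === 0 && sources.features?.length) {
    config.features.push(...sources.features);
    featuresApplied = true;
  }
  return { inputApplied, featuresApplied };
}
```

A caller that sees `inputApplied: true, featuresApplied: false` then knows to apply the conventional feature-path fallback for the features list only.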
♻️ Duplicate comments (4)
tests/steps/mcp/mcp-server.steps.ts (1)
606-614: ⚠️ Potential issue | 🟠 Major

Don't register the same JSON-key step twice in one scenario.
These two handlers use the identical pattern text, which makes matching ambiguous in `@amiceli/vitest-cucumber`. Keep one step definition and drive both key assertions through a table or a single reusable handler.

In `@amiceli/vitest-cucumber` 5.2.1, can a single Rule Scenario register two steps with the exact same pattern text, or should repeated assertions be consolidated into one step/DataTable?

As per coding guidelines, `**/*.feature`: Consolidate multiple `And` steps with identical text into a single step with DataTable or docstring; multiple identical patterns fail in vitest-cucumber.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/steps/mcp/mcp-server.steps.ts` around lines 606 - 614, Remove the duplicate step registration for the identical pattern "the JSON result contains {string} key" so only a single step handler remains; locate the two And/Then handlers that parse JSON via state!.toolResult!.content[0].text and keep one of them, then update feature scenarios to pass multiple keys via a DataTable or call the single step repeatedly from the scenario (or implement a single step that accepts a table of keys and asserts each with expect(parsed).toHaveProperty). Ensure only the unique step text is registered to avoid ambiguous pattern matching in `@amiceli/vitest-cucumber`.

tests/features/mcp/mcp-server.feature (1)
45-49: ⚠️ Potential issue | 🟡 Minor

Don't make `buildTimeMs` uniqueness part of this scenario.

`buildTimeMs` is elapsed duration, so two rebuilds can legitimately produce the same value. The durable contract here is that rebuild returns a new session and replaces the one returned by `getSession()`, not that the timing metadata changes.

As per coding guidelines, `tests/features/**/*.feature`: Ensure scenarios are clear and follow BDD patterns.

♻️ Suggested wording

```diff
-    And the new session has a different build time than the original
+    And the rebuilt session replaces the original session
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/features/mcp/mcp-server.feature` around lines 45 - 49, The scenario asserts that buildTimeMs changes, but buildTimeMs is elapsed time and may legitimately be equal; update the feature so it only checks that PipelineSessionManager.rebuild() returns a new session object and that PipelineSessionManager.getSession() now returns that new session (remove the buildTimeMs uniqueness assertion). Specifically, modify the Scenario to assert a new session is returned by rebuild and that getSession() returns an object different from the original session (compare identity or session ID), referencing PipelineSessionManager, rebuild, getSession and buildTimeMs only if needed for non-uniqueness clarification.

src/mcp/tool-registry.ts (1)
603-619: ⚠️ Potential issue | 🟠 Major

`dp_arch_coverage` still drops the configured glob shapes when `path` is set.

The new logic only preserves trailing extensions. Inputs like `src/**/*.{ts,tsx}`, `packages/*/src/**/*.ts`, or `delivery-process/stubs/**/*.ts` are rewritten to `<path>/**/*.ts[x]`, which changes the scan scope and can hide or invent coverage gaps. Filter `s.sourceGlobs.input` by `pathFilter` instead of synthesizing new globs from the extension suffixes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tool-registry.ts` around lines 603 - 619, The handler in safeHandler uses pathFilter to synthesize new globs by extracting extensions, which loses original glob shapes (e.g., brace expansions, intermediate directories); instead filter the existing s.sourceGlobs.input by the pathFilter: call getSession(), then when pathFilter is defined use s.sourceGlobs.input.filter(g => g.startsWith(pathFilter) || matches g against pathFilter-aware pattern) (or prepend the pathFilter to each original glob only when it already refers to the same subtree) and pass those filtered/original globs to analyzeCoverage(s.dataset, globs, s.baseDir, s.registry); update the branch that builds globs so it no longer reconstructs globs from extensions but reuses and filters s.sourceGlobs.input.

src/mcp/server.ts (1)
66-129: ⚠️ Potential issue | 🟠 Major

`startMcpServer` is still a process-owning exported API.

Line [66] exports a reusable function, but Lines [111]-[116] and Lines [125]-[127] still bind process signals/stdin and call `process.exit(...)`. That keeps CLI control flow in library code and bypasses the Result-monad contract for this layer.

Based on learnings: Applies to `src/**/*.{ts,tsx}`: Use Result Monad pattern: functions return `Result.ok(value)` or `Result.error(err)` instead of throwing exceptions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.ts` around lines 66 - 129, startMcpServer currently takes ownership of process lifecycle by registering process.on('SIGINT'|'SIGTERM') handlers and calling process.exit(...) and binding process.stdin.on('end'), which violates the Result-monad/library boundary; remove those direct process bindings and exits and instead return a Result (Result.ok or Result.error) that includes the running server resources and a cleanup function. Concretely: update startMcpServer signature to return a Result containing at least { server: McpServer, cleanup: () => Promise<void>, fileWatcher?: McpFileWatcher } (or accept an option like autoShutdown), delete the process.on(...) and process.stdin.on('end') blocks and any direct process.exit calls, and ensure cleanup() (the existing cleanup function) is returned to the caller so callers control signal wiring and process exit behavior; keep registerAllTools, server.connect, and fileWatcher logic intact and use Result.ok/Result.error to surface success/failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@delivery-process/specs/mcp-server-integration.feature`:
- Around line 73-78: The scenarios marked with the error tag use
`@libar-docs-sequence-error` incorrectly: for the "Server starts with explicit
input globs" Scenario (and the other two similar alternate-flow scenarios)
remove the `@libar-docs-sequence-error` tag and either drop the tag entirely or
replace it with a non-error alternate-path tag (e.g.,
`@libar-docs-alternate-success` or `@alternate-path`) so these non-fatal edge-case
scenarios are not rendered as error→exit(1) paths; update the three scenarios
that currently include `@libar-docs-sequence-error` accordingly.
In `@src/mcp/pipeline-session.ts`:
- Around line 64-88: Modify PipelineSession API methods (initialize, rebuild,
getSession) to return the repo Result<T, Error> contract instead of throwing;
catch errors from buildSession and any other internal throws and return
Result.error(err) while returning Result.ok(session) on success. Update
initialize to return Result.ok(session) after buildSession completes and to
convert the current thrown errors (e.g., when input.length === 0) into
Result.error(new Error(...)) and ensure rebuild and getSession are similarly
wrapped; keep buildSession internal behavior but ensure any exceptions it throws
are caught at the public method boundaries and converted to Result.error so
callers receive structured Result values rather than needing try/catch.
In `@src/mcp/server.ts`:
- Around line 37-55: Replace the plain TypeScript interfaces McpServerOptions
and ParsedOptions (and any ad-hoc CLI validation that builds ParseCliResult)
with Zod-based runtime schemas placed under src/validation-schemas/; create Zod
schemas (e.g., mcpServerOptionsSchema, parsedOptionsSchema) that match the
current shape, export types via z.infer<typeof ...> to replace McpServerOptions
and ParsedOptions, and update the CLI parsing/validation code that produces
ParseCliResult to validate input through those schemas and return
error/help/version/options accordingly; ensure the new schemas are used wherever
runtime validation is currently done (the ad-hoc checks referenced around the
ParseCliResult construction) so all CLI input is validated at runtime.
- Around line 102-128: cleanup can be invoked concurrently from multiple
signal/EOF handlers (process.on('SIGINT'|'SIGTERM') and
process.stdin.on('end')), so make cleanup idempotent by introducing a shared
guard (e.g. a boolean or a Promise like cleanupPromise) and early-return if a
cleanup is already in progress; inside cleanup, set the guard, run await
fileWatcher.stop() and await server.close() (each wrapped in try/catch to
suppress double-close errors), and clear/resolve the guard only once when
finished so subsequent invocations immediately return the same completed Promise
instead of re-closing resources.
In `@tests/steps/mcp/mcp-server.steps.ts`:
- Around line 1-9: Add the missing `@libar-docs` relationship tags to the top
comment block of this step file: include `@libar-docs-uses` entries that reference
the runtime pieces exercised by these tests (e.g., mcp-server,
mcp-tool-registration, pipeline-session, file-watcher, cli-parser,
output-formatters) and a `@libar-docs-depends-on` pointing at the TypeScript
source glob **/*.{ts,tsx}; update the header comment in this file (the MCP
Server Step Definitions block) to include those tags so discoverability queries
can locate related runtime artifacts.
In `@tests/support/helpers/mcp-state.ts`:
- Around line 6-8: Remove the `@libar-docs-depends-on` tag from the TypeScript
header comment in the mcp-state helper; keep the `@libar-docs-uses` line as-is so
only runtime dependencies are declared—locate the header block containing the
tags (look for "@libar-docs-depends-on" and "@libar-docs-uses" in the top
comment) and delete the depends-on line.
---
Outside diff comments:
In `@delivery-process/specs/mcp-server-integration.feature`:
- Around line 99-105: Update the acceptance criterion that currently reads "Then
at least 19 tools are registered" to the current inventory count by changing the
literal 19 to 25 in the feature step; locate the step text "Then at least 19
tools are registered" in the Scenario "All CLI subcommands appear as MCP tools"
and replace the bound so the spec requires at least 25 tools (and add a brief
comment if desired to remind maintainers to keep this number in sync with the
runtime/tooling inventory).
In `@src/config/config-loader.ts`:
- Around line 272-299: The function applyProjectSourceDefaults currently returns
a single boolean "applied" which hides partial successes; change it to return
per-list status (e.g., { inputApplied: boolean, featuresApplied: boolean } or a
tuple) so callers can fallback for whichever list remains empty. In
applyProjectSourceDefaults, replace the single applied flag with two flags
(inputApplied and featuresApplied), set them when you push into config.input or
config.features (using resolved.project.sources.typescript and
resolved.project.sources.features), and return both flags. Update all callers to
inspect the specific flag for input vs features and only skip fallback for the
list that was actually populated.
---
Duplicate comments:
In `@src/mcp/server.ts`:
- Around line 66-129: startMcpServer currently takes ownership of process
lifecycle by registering process.on('SIGINT'|'SIGTERM') handlers and calling
process.exit(...) and binding process.stdin.on('end'), which violates the
Result-monad/library boundary; remove those direct process bindings and exits
and instead return a Result (Result.ok or Result.error) that includes the
running server resources and a cleanup function. Concretely: update
startMcpServer signature to return a Result containing at least { server:
McpServer, cleanup: () => Promise<void>, fileWatcher?: McpFileWatcher } (or
accept an option like autoShutdown), delete the process.on(...) and
process.stdin.on('end') blocks and any direct process.exit calls, and ensure
cleanup() (the existing cleanup function) is returned to the caller so callers
control signal wiring and process exit behavior; keep registerAllTools,
server.connect, and fileWatcher logic intact and use Result.ok/Result.error to
surface success/failure.
In `@src/mcp/tool-registry.ts`:
- Around line 603-619: The handler in safeHandler uses pathFilter to synthesize
new globs by extracting extensions, which loses original glob shapes (e.g.,
brace expansions, intermediate directories); instead filter the existing
s.sourceGlobs.input by the pathFilter: call getSession(), then when pathFilter
is defined use s.sourceGlobs.input.filter(g => g.startsWith(pathFilter) ||
matches g against pathFilter-aware pattern) (or prepend the pathFilter to each
original glob only when it already refers to the same subtree) and pass those
filtered/original globs to analyzeCoverage(s.dataset, globs, s.baseDir,
s.registry); update the branch that builds globs so it no longer reconstructs
globs from extensions but reuses and filters s.sourceGlobs.input.
In `@tests/features/mcp/mcp-server.feature`:
- Around line 45-49: The scenario asserts that buildTimeMs changes, but
buildTimeMs is elapsed time and may legitimately be equal; update the feature so
it only checks that PipelineSessionManager.rebuild() returns a new session
object and that PipelineSessionManager.getSession() now returns that new session
(remove the buildTimeMs uniqueness assertion). Specifically, modify the Scenario
to assert a new session is returned by rebuild and that getSession() returns an
object different from the original session (compare identity or session ID),
referencing PipelineSessionManager, rebuild, getSession and buildTimeMs only if
needed for non-uniqueness clarification.
In `@tests/steps/mcp/mcp-server.steps.ts`:
- Around line 606-614: Remove the duplicate step registration for the identical
pattern "the JSON result contains {string} key" so only a single step handler
remains; locate the two And/Then handlers that parse JSON via
state!.toolResult!.content[0].text and keep one of them, then update feature
scenarios to pass multiple keys via a DataTable or call the single step
repeatedly from the scenario (or implement a single step that accepts a table of
keys and asserts each with expect(parsed).toHaveProperty). Ensure only the
unique step text is registered to avoid ambiguous pattern matching in
`@amiceli/vitest-cucumber`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 38091105-e66b-4a75-ae8b-213d8f2c75f8
⛔ Files ignored due to path filters (2)
- `_claude-md/api/mcp-server.md` is excluded by none and included by none
- `_claude-md/metadata.json` is excluded by none and included by none
📒 Files selected for processing (9)
- delivery-process/design-reviews/mcp-server-integration.md
- delivery-process/specs/mcp-server-integration.feature
- src/config/config-loader.ts
- src/mcp/pipeline-session.ts
- src/mcp/server.ts
- src/mcp/tool-registry.ts
- tests/features/mcp/mcp-server.feature
- tests/steps/mcp/mcp-server.steps.ts
- tests/support/helpers/mcp-state.ts
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🛠 Debug Information Gathered
In-trial and open-source plans have lower rate limits than the paid plans. More aggressive rate limits are set for open-source users compared to trial and paid users. This directly affects how quickly reviews can be processed when you hit those limits, especially on larger PRs like this one. Activating your paid plan for this repository would give you the higher rate limits and should improve review throughput. To do so:
Once linked to the paid plan, the |
…otent cleanup, annotations

- Remove @libar-docs-sequence-error from 2 non-error edge-case scenarios (explicit input globs, concurrent reads during rebuild)
- Make shutdown cleanup idempotent with cleanupPromise guard and process.once
- Add @libar-docs-uses tags to MCP test step definitions
- Remove @libar-docs-depends-on from TypeScript helper (Gherkin-only tag)
Summary
Add a Model Context Protocol (MCP) server that wraps ProcessStateAPI as native Claude Code tools, eliminating subprocess overhead during AI-assisted development sessions.
- `--watch` mode auto-rebuilds the dataset on `.ts`/`.feature` file changes with 500ms debounce
- Auto-detects `delivery-process.config.ts`, supports explicit `--input`/`--features` globs for monorepo setups

Motivation
Each `pnpm process:query` invocation runs the full 8-step pipeline (config, scan, extract, merge, hierarchy, workflow, transform, validate), taking 2-5 seconds. During a typical session with 10-20 queries, this adds 30-90 seconds of pure pipeline overhead. The subprocess model also prevents stateful interaction — no way to keep MasterDataset in memory between queries.

The MCP server solves this by loading the pipeline once and exposing all ProcessStateAPI methods as native tools with sub-millisecond dispatch.
Architecture
Four modules with clear separation of concerns:
- `src/mcp/server.ts` — lifecycle, stdio transport, graceful shutdown
- `src/mcp/tool-registry.ts` — tool definitions with Zod schemas
- `src/mcp/pipeline-session.ts` — dataset build, config auto-detection, atomic swap on rebuild
- `src/mcp/file-watcher.ts` — debounced watching with `.ts`/`.feature` file filtering

Key design decisions:
- `dp_` tool name prefix to avoid collisions in multi-server setups

Bug fixes from code review
- `dp_list` — filters now narrow results sequentially instead of each filter overwriting the previous one
- `dp_arch_coverage` no longer duplicates `dp_unannotated` — coverage tool now calls `analyzeCoverage()` (returns percentage, counts, unused taxonomy) instead of duplicating the unannotated file list
- `fs` import — replace dynamic `import('fs')` with static `import * as fs from 'node:fs'`
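The `dp_list` fix above can be illustrated with a minimal sketch of cumulative filter chaining; the `Item` shape and filter names are assumptions, not the actual `dp_list` schema:

```typescript
// Illustrative sketch: each filter narrows the running `result`, not the
// original `items`, so filters compose instead of the last one winning.
interface Item {
  area: string;
  status: string;
}

function applyFilters(items: Item[], filters: { area?: string; status?: string }): Item[] {
  let result = items;
  if (filters.area !== undefined) {
    result = result.filter((i) => i.area === filters.area); // narrows result
  }
  if (filters.status !== undefined) {
    result = result.filter((i) => i.status === filters.status); // chains on the prior filter
  }
  return result;
}
```

With the original bug, each branch filtered `items` directly, so passing both `area` and `status` returned everything matching only `status`.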
After exercising all 25 tools via Claude Code, 4 issues were identified and addressed:
MCP tool refinements (new code in this PR):
- `dp_rules` output cap — unfiltered rules query returned 889K chars; now returns compact summary (totals + rule names + per-area counts + hint) when no pattern filter is provided
- `dp_pattern` enriched response — was returning only 6 summary fields despite description promising "full metadata"; now includes deliverables, dependencies, and relationships via existing ProcessStateAPI methods

Pre-existing fixes (in `src/api/`, surfaced by MCP integration):

- `scope-validator` active pattern fix — `checkFsmAllowsTransition()` falsely blocked implement sessions on already-active patterns by checking the `active→active` transition; now returns PASS with "already active" detail
- `fuzzy-match` scoring improvement — all prefix matches scored identical 0.9; now uses coverage-weighted scoring (0.9 + queryLen/nameLen * 0.09) with name-length tie-breaker for deterministic ranking

Test coverage
111 Gherkin tests across 6 Rules, following the project's Gherkin-only testing policy:
Additional test updates in existing suites:
- `scope-validator.feature`: +1 scenario (active pattern passes FSM check)
- `fuzzy-match.feature`: +2 scenarios (prefix ranking, coverage scoring), 1 updated (prefix score range)

Tests use `MockMcpServer` (records `registerTool()` calls) and `MockPipelineSessionManager` (wraps test factories) for transport-free testing.

Quick start
```json
{
  "mcpServers": {
    "delivery-process": {
      "command": "npx",
      "args": ["dp-mcp-server", "--watch"]
    }
  }
}
```
New source files (6):
- `src/mcp/server.ts`, `tool-registry.ts`, `pipeline-session.ts`, `file-watcher.ts`, `index.ts`
- `src/cli/mcp-server.ts` (bin entry)

New test files (3):

- `tests/features/mcp/mcp-server.feature`
- `tests/steps/mcp/mcp-server.steps.ts`
- `tests/support/helpers/mcp-state.ts`

New documentation (1):

- `docs/MCP-SETUP.md` — quick start, configuration examples, tool reference, troubleshooting

Design artifacts (5):

- `delivery-process/design-reviews/mcp-server-integration.md` — sequence + component diagrams
- `delivery-process/stubs/mcp-server-integration/` — 4 design stubs with API documentation

Modified (6):

- `delivery-process/specs/mcp-server-integration.feature` — status roadmap→active, sequence annotations, deliverables complete
- `package.json` — `@modelcontextprotocol/sdk`, `chokidar` dependencies, bin entry, mcp export
- `pnpm-lock.yaml`
- `src/api/scope-validator.ts` — active pattern early return in FSM check
- `src/api/fuzzy-match.ts` — coverage-weighted prefix scoring + sort tie-breaker
- `.mcp.json` — MCP server configuration for Claude Code

Test plan
- `pnpm typecheck` passes
- `pnpm lint` passes (0 errors, 0 warnings)
- `pnpm test tests/steps/mcp/mcp-server.steps.ts` — 111 tests pass
- `pnpm test` — full suite 8740 tests pass (141 files, 0 regressions)
- `pnpm build && npx dp-mcp-server --help`

Summary by CodeRabbit
New Features
Documentation
Tests
Bug Fixes / Behavior