# Minor update #497

**Merged:** Trecek merged 45 commits into `stable` from `main` on Mar 24, 2026.
## Conversation

@Trecek (Collaborator) commented on Mar 24, 2026:

No description provided.

Trecek and others added 30 commits March 21, 2026 00:41
## Summary

`autoskillit init` creates files in `.autoskillit/` via independent
helper functions, but the gitignore writer (`ensure_project_temp`) had
no structural contract with the file creators
(`_create_secrets_template`). Tests verified each function in isolation
— no test ever checked the cross-cutting invariant: **every sensitive
file placed into `.autoskillit/` must be covered by
`.autoskillit/.gitignore`**.

This PR adds structural immunity so that any future file added to the
init flow without updating the gitignore or committed-files allowlist
will be caught automatically by both CI tests and the doctor command.

### Changes

- **`core/io.py`**: Added `_COMMITTED_BY_DESIGN` frozenset allowlist for
intentionally committed files (`config.yaml`, `recipes`)
- **`core/__init__.py`**: Re-exported `_AUTOSKILLIT_GITIGNORE_ENTRIES`
and `_COMMITTED_BY_DESIGN` for cross-package access
- **`cli/_doctor.py`**: Added `_check_gitignore_completeness` (check 9)
— warns when `.autoskillit/` files aren't covered by gitignore or
allowlist; two-pass scan covers both filesystem and canonical entries
- **`tests/cli/test_init.py`**: 4 new structural immunity tests (dynamic
file discovery, comment truthfulness, 2 regression guards)
- **`tests/cli/test_doctor.py`**: 2 new doctor check tests + updated
expected check set + fixed healthy doctor test setup
- **`CLAUDE.md`**: Updated doctor check count from 8 to 9
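
The cross-cutting invariant can be sketched as a standalone two-pass check. This is an illustrative sketch only, not the shipped `_check_gitignore_completeness`: the constant values mirror the diagram below, and the warning strings are assumptions.

```python
from pathlib import Path

# Hypothetical values, taken from the registry described in this PR.
_AUTOSKILLIT_GITIGNORE_ENTRIES = frozenset({"temp/", ".secrets.yaml"})
_COMMITTED_BY_DESIGN = frozenset({"config.yaml", "recipes"})


def check_gitignore_completeness(root: Path) -> list[str]:
    """Return warnings for .autoskillit/ files not covered by gitignore or allowlist."""
    auto_dir = root / ".autoskillit"
    if not auto_dir.is_dir():
        return []  # nothing to check: project not initialized
    gi_path = auto_dir / ".gitignore"
    if not gi_path.is_file():
        return [".autoskillit/.gitignore is missing"]
    covered = {line.strip() for line in gi_path.read_text().splitlines() if line.strip()}
    warnings = []
    # Pass 1: every file on disk must be ignored or intentionally committed.
    for entry in auto_dir.iterdir():
        name = entry.name + "/" if entry.is_dir() else entry.name
        if entry.name == ".gitignore" or entry.name in _COMMITTED_BY_DESIGN:
            continue
        if name not in covered and entry.name not in covered:
            warnings.append(f"{entry.name} not covered by .autoskillit/.gitignore")
    # Pass 2: every canonical entry must be present in the gitignore.
    for entry in _AUTOSKILLIT_GITIGNORE_ENTRIES:
        if entry not in covered:
            warnings.append(f"canonical entry {entry!r} missing from .gitignore")
    return warnings
```

Because pass 1 enumerates the filesystem dynamically, a future init helper that drops a new file into `.autoskillit/` without updating either registry trips the check with no test changes required.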

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    START([autoskillit init])
    DOCTOR_START([autoskillit doctor])

    subgraph InitFlow ["Init Flow — _register_all()"]
        direction TB
        EPT["● ensure_project_temp<br/>━━━━━━━━━━<br/>Creates temp/ + .gitignore<br/>Reads _AUTOSKILLIT_GITIGNORE_ENTRIES<br/>Backfills missing entries"]
        GI_EXISTS{"gitignore<br/>exists?"}
        GI_WRITE["Write all entries"]
        GI_BACKFILL["Backfill missing entries"]
        CST["_create_secrets_template<br/>━━━━━━━━━━<br/>Creates .secrets.yaml<br/>Comment: already in .gitignore"]
    end

    subgraph Registry ["● Registry — core/io.py"]
        direction TB
        ENTRIES["● _AUTOSKILLIT_GITIGNORE_ENTRIES<br/>━━━━━━━━━━<br/>temp/, .secrets.yaml"]
        ALLOW["● _COMMITTED_BY_DESIGN<br/>━━━━━━━━━━<br/>config.yaml, recipes"]
    end

    subgraph DoctorCheck9 ["● Doctor Check 9 — _check_gitignore_completeness"]
        direction TB
        D_DIR{".autoskillit/<br/>exists?"}
        D_GI{".gitignore<br/>exists?"}
        D_SCAN["● Enumerate .autoskillit/ files<br/>━━━━━━━━━━<br/>Skip .gitignore itself<br/>Skip _COMMITTED_BY_DESIGN<br/>Check remaining vs gitignore"]
        D_CANON["● Check canonical entries<br/>━━━━━━━━━━<br/>Verify every entry in<br/>_AUTOSKILLIT_GITIGNORE_ENTRIES<br/>is present in .gitignore"]
        D_RESULT{"uncovered<br/>files?"}
    end

    subgraph TestGates ["● Test Gates — Structural Immunity"]
        direction TB
        T1["● test_init_all_created_files<br/>_covered_by_gitignore<br/>━━━━━━━━━━<br/>Dynamic file discovery"]
        T2["● test_secrets_template<br/>_gitignore_comment_is_true<br/>━━━━━━━━━━<br/>Comment truthfulness"]
        T3["● test_gitignore_entries<br/>_includes_secrets_yaml<br/>━━━━━━━━━━<br/>Regression guard"]
        T4["● test_doctor_warns_on<br/>_missing_gitignore_entry<br/>━━━━━━━━━━<br/>Doctor check validation"]
    end

    OK_INIT([INIT COMPLETE])
    OK_DOCTOR([Severity.OK])
    WARN_DOCTOR([Severity.WARNING])

    START --> EPT
    EPT --> GI_EXISTS
    GI_EXISTS -->|"no"| GI_WRITE
    GI_EXISTS -->|"yes"| GI_BACKFILL
    GI_WRITE --> CST
    GI_BACKFILL --> CST
    CST --> OK_INIT

    EPT -.->|"reads"| ENTRIES
    GI_WRITE -.->|"reads"| ENTRIES
    GI_BACKFILL -.->|"reads"| ENTRIES

    DOCTOR_START --> D_DIR
    D_DIR -->|"no"| OK_DOCTOR
    D_DIR -->|"yes"| D_GI
    D_GI -->|"no"| WARN_DOCTOR
    D_GI -->|"yes"| D_SCAN
    D_SCAN --> D_CANON
    D_CANON --> D_RESULT
    D_RESULT -->|"none"| OK_DOCTOR
    D_RESULT -->|"found"| WARN_DOCTOR

    D_SCAN -.->|"reads"| ALLOW
    D_CANON -.->|"reads"| ENTRIES

    ENTRIES -.->|"validated by"| T1
    ENTRIES -.->|"validated by"| T3
    ALLOW -.->|"used by"| T1
    CST -.->|"validated by"| T2
    D_SCAN -.->|"validated by"| T4

    class START,DOCTOR_START,OK_INIT,OK_DOCTOR terminal;
    class WARN_DOCTOR detector;
    class EPT,CST,GI_WRITE,GI_BACKFILL handler;
    class GI_EXISTS,D_DIR,D_GI,D_RESULT stateNode;
    class ENTRIES,ALLOW stateNode;
    class D_SCAN,D_CANON newComponent;
    class T1,T2,T3,T4 newComponent;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start, complete, and error states |
| Orange | Handler | Init flow processing nodes |
| Teal | State | Registry constants and decision points |
| Green | New/Modified | New doctor check logic and test gates |
| Red | Detector | Warning outcomes |

## Implementation Plan

Plan file:
`temp/rectify/rectify_init_gitignore_immunity_2026-03-19_190700.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary

The token summary vanished from pipeline-created PRs because its
injection protocol lived exclusively in recipe `note:` fields — prose
documentation addressed to the LLM orchestrator, not executable recipe
steps. There was no runtime enforcement and no test sensitivity to the
WARNING-severity semantic rule that fired on all three bundled
production recipes.

The remedy introduces two independent immunity mechanisms: (1) `open-pr` now self-retrieves token telemetry from disk, using `cwd` as a pipeline-run scoping key (this required adding a `cwd_filter` parameter to the shared session-log iterator), which makes cross-process telemetry access typed, testable, and independent of orchestrator compliance; and (2) the test suite now asserts zero WARNING-level semantic findings on bundled production recipes, so any future note-encoded protocol that triggers a WARNING fails CI immediately.
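
The `cwd_filter` gate on the shared iterator can be sketched as follows. This is a minimal illustration under assumptions: the function and field names (`cwd`, `dir_name`, `sessions.jsonl`, `token_usage.json`) follow the descriptions in this PR, not verified source.

```python
import json
from pathlib import Path
from typing import Iterator


def iter_session_log_entries(log_root: Path, cwd_filter: str = "") -> Iterator[Path]:
    """Yield per-session token_usage.json paths, optionally scoped to one pipeline cwd."""
    index = log_root / "sessions.jsonl"
    if not index.is_file():
        return
    for line in index.read_text().splitlines():
        if not line.strip():
            continue
        idx = json.loads(line)
        # Empty filter preserves the pre-existing global behaviour
        # (backward compatible for server-restart recovery).
        if cwd_filter and idx.get("cwd") != cwd_filter:
            continue
        candidate = log_root / "sessions" / idx["dir_name"] / "token_usage.json"
        if candidate.is_file():
            yield candidate
```

The key property is isolation: two pipelines writing to the same global log directory never see each other's telemetry when a non-empty `cwd_filter` is passed.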

## Architecture Impact

### Data Lineage Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart LR
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph Prod ["Pipeline Execution (every run_skill step)"]
        STEP["run_skill invocation<br/>━━━━━━━━━━<br/>step_name = YAML step name<br/>cwd = work_dir (pipeline root)"]
        SL["flush_session_log()<br/>━━━━━━━━━━<br/>execution/session_log.py<br/>called after session exits"]
    end

    subgraph Disk ["Global Log Storage<br/>~/.local/share/autoskillit/logs/"]
        JSONL[("sessions.jsonl<br/>━━━━━━━━━━<br/>cwd · timestamp · dir_name<br/>step_name · token counts")]
        TU[("sessions/dir/token_usage.json<br/>━━━━━━━━━━<br/>step_name · input_tokens<br/>output_tokens · timing_seconds")]
    end

    subgraph Iterator ["● Recovery Iterator (audit.py)"]
        ITER["● _iter_session_log_entries()<br/>━━━━━━━━━━<br/>+ cwd_filter: str = ''<br/>skip if idx.cwd != cwd_filter<br/>yields Path to per-session files"]
    end

    subgraph TokenLog ["● DefaultTokenLog (tokens.py)"]
        LOAD["● load_from_log_dir()<br/>━━━━━━━━━━<br/>+ cwd_filter: str = ''<br/>delegates to iterator"]
        REPT["get_report() / compute_total()<br/>━━━━━━━━━━<br/>returns list of TokenEntry dicts<br/>+ aggregated total"]
    end

    subgraph OpenPR ["★ open-pr Skill (SKILL.md)"]
        STEP0B["★ Step 0b: Self-Retrieve<br/>━━━━━━━━━━<br/>PIPELINE_CWD=$(pwd)<br/>python3: load_from_log_dir(cwd_filter=PIPELINE_CWD)"]
        FMT["TelemetryFormatter.format_token_table()<br/>━━━━━━━━━━<br/>pipeline/telemetry_fmt.py<br/>→ markdown table string"]
        BODY["PR Body Assembly<br/>━━━━━━━━━━<br/>## Token Usage Summary<br/>embedded markdown table"]
    end

    subgraph ServerRecovery ["Existing: Server Restart Recovery (_state.py)"]
        SVREC["_initialize() on server start<br/>━━━━━━━━━━<br/>load_from_log_dir(since=24h)<br/>cwd_filter='' (global, unchanged)"]
    end

    subgraph Guards ["★ Structural Test Guards"]
        T1["★ test_load_from_log_dir_cwd_filter<br/>━━━━━━━━━━<br/>tests/pipeline/test_tokens.py<br/>proves cross-pipeline isolation"]
        T2["★ test_bundled_recipes_zero_warnings<br/>━━━━━━━━━━<br/>tests/recipe/test_bundled_recipes.py<br/>catches future WARNING regressions"]
    end

    STEP -->|"cwd=work_dir<br/>step_name=step"| SL
    SL -->|"append index entry<br/>with cwd field"| JSONL
    SL -->|"write telemetry file<br/>(when step_name set)"| TU

    JSONL -->|"● read + filter by cwd"| ITER
    TU -->|"read per-session telemetry"| ITER
    ITER -->|"yield file paths"| LOAD
    LOAD --> REPT
    REPT -->|"steps + total"| STEP0B
    STEP0B --> FMT
    FMT --> BODY

    JSONL -.->|"global since= filter only<br/>(unchanged)"| SVREC

    T1 -.->|"validates cwd isolation"| LOAD
    T2 -.->|"catches future WARNING<br/>note-protocol regressions"| JSONL

    class STEP cli;
    class SL handler;
    class JSONL,TU stateNode;
    class ITER,LOAD,REPT handler;
    class STEP0B,T1,T2 newComponent;
    class FMT phase;
    class BODY output;
    class SVREC phase;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Input | Pipeline step invocation (entry point) |
| Orange | Handler | Recovery iterator, token log, disk read processing |
| Teal | Storage | Primary disk storage (source of truth — sessions.jsonl, token_usage.json) |
| Purple | Phase | Formatting and existing server recovery path |
| Green (★/●) | New/Modified | Proposed new components: self-retrieval step + test guards |
| Dark Teal | Output | PR body as final data destination |

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    START([open-pr invoked by pipeline])

    subgraph Guard ["Step 0: Stable Branch Guard"]
        CHK_STABLE{"base_branch == stable<br/>AND head != main?"}
    end

    subgraph SelfRetrieval ["● Step 0b: Token Self-Retrieval (SKILL.md)"]
        CWD["● Capture PIPELINE_CWD=$(pwd)<br/>━━━━━━━━━━<br/>scoping key for this run"]
        LOAD_TOK["● load_from_log_dir(cwd_filter=PIPELINE_CWD)<br/>━━━━━━━━━━<br/>● pipeline/tokens.py + audit.py<br/>reads sessions.jsonl + token_usage.json"]
        CHK_SESSIONS{"n sessions > 0?"}
        FMT_TOK["TelemetryFormatter.format_token_table()<br/>━━━━━━━━━━<br/>TOKEN_SUMMARY_CONTENT = markdown table"]
        NO_TOK["TOKEN_SUMMARY_CONTENT = ''<br/>━━━━━━━━━━<br/>section omitted gracefully"]
    end

    subgraph PRSetup ["Steps 1–5: Parse, Diff, Lenses"]
        PARSE["Step 1: Parse args<br/>━━━━━━━━━━<br/>plan_paths · feature_branch<br/>base_branch · closing_issue"]
        DIFF["Step 3: git diff<br/>━━━━━━━━━━<br/>new_files · modified_files"]
        LENSES["Step 4–5: Arch-Lens Diagrams<br/>━━━━━━━━━━<br/>select 1–3 lenses<br/>validate ★/● markers"]
    end

    subgraph Body ["Step 6: Compose PR Body (● SKILL.md)"]
        BODY["● Compose PR body<br/>━━━━━━━━━━<br/>Summary · Requirements<br/>Architecture Impact"]
        CHK_TOK{"TOKEN_SUMMARY_CONTENT<br/>non-empty?"}
        EMBED["Embed ## Token Usage Summary<br/>━━━━━━━━━━<br/>TOKEN_SUMMARY_CONTENT verbatim"]
        SKIP_SEC["Omit section<br/>━━━━━━━━━━<br/>standalone invocation"]
    end

    subgraph GitHub ["Steps 7–8: GitHub PR"]
        CHK_GH{"gh auth status<br/>exit 0?"}
        CREATE["gh pr create<br/>━━━━━━━━━━<br/>--body-file pr_body.md"]
        OUT_EMPTY["output: pr_url=<br/>━━━━━━━━━━<br/>graceful degradation"]
    end

    subgraph SemanticGuard ["● Semantic Validation Gate (CI)"]
        VAL["validate_recipe()<br/>━━━━━━━━━━<br/>● rules_graph.py<br/>telemetry-before-open-pr REMOVED"]
        CHK_WARN{"warnings == 0?<br/>━━━━━━━━━━<br/>● test_bundled_recipes.py"}
        PASS_CI["CI PASSES<br/>━━━━━━━━━━<br/>zero-warning guard satisfied"]
        FAIL_CI["CI FAILS<br/>━━━━━━━━━━<br/>new note-protocol caught immediately"]
    end

    END_SUCCESS([PR URL returned])
    ERROR([ERROR: invalid base_branch])

    START --> CHK_STABLE
    CHK_STABLE -->|"yes"| ERROR
    CHK_STABLE -->|"no"| CWD

    CWD --> LOAD_TOK
    LOAD_TOK --> CHK_SESSIONS
    CHK_SESSIONS -->|"yes"| FMT_TOK
    CHK_SESSIONS -->|"no"| NO_TOK

    FMT_TOK --> PARSE
    NO_TOK --> PARSE
    PARSE --> DIFF
    DIFF --> LENSES
    LENSES --> BODY

    BODY --> CHK_TOK
    CHK_TOK -->|"yes"| EMBED
    CHK_TOK -->|"no"| SKIP_SEC
    EMBED --> CHK_GH
    SKIP_SEC --> CHK_GH

    CHK_GH -->|"yes"| CREATE
    CHK_GH -->|"no"| OUT_EMPTY
    CREATE --> END_SUCCESS
    OUT_EMPTY --> END_SUCCESS

    VAL --> CHK_WARN
    CHK_WARN -->|"== 0"| PASS_CI
    CHK_WARN -->|"> 0"| FAIL_CI

    class START,END_SUCCESS terminal;
    class ERROR detector;
    class CHK_STABLE,CHK_SESSIONS,CHK_TOK,CHK_GH,CHK_WARN stateNode;
    class PARSE,DIFF,LENSES,BODY,CREATE handler;
    class CWD,LOAD_TOK,FMT_TOK newComponent;
    class NO_TOK,SKIP_SEC,OUT_EMPTY phase;
    class EMBED output;
    class VAL,PASS_CI,FAIL_CI phase;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start and end states |
| Red | Detector | Error terminal (branch guard failure) |
| Teal | State | Decision / routing nodes |
| Orange | Handler | Processing steps (parse, diff, compose, create) |
| Green (●) | Modified | New self-retrieval path: cwd capture, load_from_log_dir, format |
| Purple | Phase | Graceful degradation paths and semantic validation |
| Dark Teal | Output | Token summary embed into PR body |

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    subgraph WriteContracts ["INIT_ONLY Fields — sessions.jsonl (written once, never modified)"]
        SESS_CWD["cwd<br/>━━━━━━━━━━<br/>pipeline work_dir at session exit<br/>WRITE: flush_session_log() only<br/>READ: ● _iter_session_log_entries (cwd_filter gate)"]
        SESS_META["session_id · dir_name · timestamp<br/>step_name · input_tokens · output_tokens<br/>━━━━━━━━━━<br/>WRITE: flush_session_log() only<br/>READ: iterator for dir lookup + since filter"]
    end

    subgraph Protocols ["● Protocol Contracts (_type_protocols.py)"]
        P_TOKEN["● TokenStore.load_from_log_dir<br/>━━━━━━━━━━<br/>log_root: Path<br/>since: str = ''<br/>+ cwd_filter: str = ''"]
        P_AUDIT["● AuditStore.load_from_log_dir<br/>━━━━━━━━━━<br/>same signature update"]
        P_TIMING["● TimingStore.load_from_log_dir<br/>━━━━━━━━━━<br/>same signature update"]
    end

    subgraph GateLayer ["● Isolation Gate — _iter_session_log_entries (audit.py)"]
        GATE{"● cwd_filter non-empty?"}
        SKIP["skip entry<br/>━━━━━━━━━━<br/>idx.cwd != cwd_filter<br/>cross-pipeline contamination blocked"]
        PASS["yield path to<br/>token_usage.json<br/>━━━━━━━━━━<br/>matching entry only"]
        COMPAT["cwd_filter = ''<br/>━━━━━━━━━━<br/>no filter applied<br/>backward-compatible"]
    end

    subgraph MutableState ["MUTABLE — DefaultTokenLog._entries (tokens.py)"]
        LIVE["record(step_name, usage)<br/>━━━━━━━━━━<br/>live accumulation during pipeline<br/>_entries[step_name] += usage"]
        LOAD["● load_from_log_dir(cwd_filter)<br/>━━━━━━━━━━<br/>disk recovery: rebuilds _entries<br/>from matching session files only"]
        ENTRIES["_entries: dict[str, TokenEntry]<br/>━━━━━━━━━━<br/>key = step_name<br/>value = accumulated token counts"]
    end

    subgraph Derived ["DERIVED — computed, not stored"]
        RPT["get_report() / compute_total()<br/>━━━━━━━━━━<br/>defensive copy of _entries<br/>regenerated on each call"]
    end

    subgraph RecoveryModes ["Two Distinct Recovery Contracts"]
        GLOBAL["Server Restart Recovery<br/>━━━━━━━━━━<br/>_state.py: load_from_log_dir<br/>since=24h · cwd_filter=''<br/>global, all pipelines"]
        SCOPED["● open-pr Self-Retrieval<br/>━━━━━━━━━━<br/>Step 0b: load_from_log_dir<br/>cwd_filter=PIPELINE_CWD<br/>scoped to this run only"]
    end

    SESS_CWD -->|"cwd field read by gate"| GATE
    SESS_META -->|"dir_name + since used"| GATE

    GATE -->|"cwd_filter='' (empty)"| COMPAT
    GATE -->|"cwd_filter non-empty AND idx.cwd != filter"| SKIP
    GATE -->|"cwd_filter non-empty AND idx.cwd == filter"| PASS

    COMPAT -->|"yields all matching since"| LOAD
    PASS -->|"yields scoped paths"| LOAD
    LOAD -->|"accumulates into"| ENTRIES
    LIVE -->|"live record into"| ENTRIES
    ENTRIES -->|"read-only snapshot"| RPT

    GLOBAL -->|"uses cwd_filter=''"| COMPAT
    SCOPED -->|"uses cwd_filter=PIPELINE_CWD"| GATE

    P_TOKEN -.->|"contract for"| LOAD
    P_AUDIT -.->|"contract for"| LOAD
    P_TIMING -.->|"contract for"| LOAD

    class SESS_CWD detector;
    class SESS_META stateNode;
    class P_TOKEN,P_AUDIT,P_TIMING newComponent;
    class GATE stateNode;
    class SKIP detector;
    class PASS,COMPAT handler;
    class LIVE,LOAD,ENTRIES handler;
    class RPT phase;
    class GLOBAL cli;
    class SCOPED newComponent;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Red | INIT_ONLY / Guard | `cwd` field (write-once), skip-entry enforcement |
| Teal | State / Gate | sessions.jsonl metadata, `cwd_filter` decision node |
| Green (●) | Modified | Protocol contracts, scoped recovery entry point |
| Orange | Handler | Iterator pass-through, live record, load_from_log_dir |
| Purple | Derived | Computed snapshots (get_report, compute_total) |
| Dark Blue | Recovery | Global server-restart recovery path |

Closes #441

## Implementation Plan

Plan file:
`temp/rectify/rectify_token-summary-note-protocol-immunity_2026-03-20_000000.md`



🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

## Summary

When `auto_merge == "true"` and `queue_available == "false"`, the
pipeline previously dropped
the PR unmerged by routing the `route_queue_mode` default case to
`release_issue_success`.
This is incorrect — GitHub's `gh pr merge --squash --auto` works without
a merge queue by
merging directly once required checks pass. The fix adds a
`direct_merge` step chain as
the default route, mirroring the queue path's structure (enable → poll →
conflict-fix →
re-push → re-enable). The same change is applied in three recipes:
`implementation.yaml`,
`remediation.yaml`, and `implementation-groups.yaml`.
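
The corrected routing decision amounts to a three-way branch. The sketch below is illustrative pseudologic, not recipe YAML; the step names are taken from the diagrams in this PR.

```python
def route_queue_mode(auto_merge: str, queue_available: str) -> str:
    """Return the next step name for the post-CI merge routing decision."""
    if auto_merge != "true":
        return "confirm_cleanup"    # auto-merge disabled: skip straight to cleanup
    if queue_available == "true":
        return "enable_auto_merge"  # merge-queue path (unchanged)
    # Previously this default case routed to release_issue_success,
    # dropping the PR unmerged; it now enters the direct-merge chain.
    return "direct_merge"
```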

## Requirements

### ROUTE — Recipe Routing Updates

- **REQ-ROUTE-001:** When `auto_merge` is `"true"` and `queue_available`
is `"false"`, both `implementation.yaml` and `remediation.yaml` must
route to a direct-merge step instead of skipping to cleanup/release.
- **REQ-ROUTE-002:** The direct-merge step must invoke `gh pr merge
--squash --auto` to enable GitHub-native auto-merge regardless of merge
queue presence.
- **REQ-ROUTE-003:** The direct-merge path must poll PR state for
`merged` completion rather than calling `wait_for_merge_queue`.
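
The polling contract of REQ-ROUTE-003 can be sketched with an injectable state check. The `check_state` callable stands in for parsing `gh pr view --json state`; the timing constants mirror the 10 s × 90 = 15 min budget shown in the diagram below, and the return values are assumptions.

```python
import time
from typing import Callable


def wait_for_direct_merge(check_state: Callable[[], str],
                          interval_s: float = 10.0,
                          attempts: int = 90) -> str:
    """Poll PR state until merged or closed; default budget is 10 s x 90 = 15 min."""
    for _ in range(attempts):
        state = check_state()  # stand-in for a `gh pr view` query
        if state in ("merged", "closed"):
            return state
        time.sleep(interval_s)
    return "timeout"
```

Separating the poll loop from the `gh` invocation keeps the retry logic unit-testable without network access.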

### FAIL — Failure Handling

- **REQ-FAIL-001:** The direct-merge path must handle merge failures
(e.g., conflicts from concurrent merges) with a resolve-and-retry
pattern analogous to the queue ejection path.
- **REQ-FAIL-002:** If the direct merge fails due to a non-conflict
error, the recipe must route to the same cleanup/release path used by
the queue timeout case.

### DESC — Ingredient Description

- **REQ-DESC-001:** The `auto_merge` ingredient description in both
`implementation.yaml` and `remediation.yaml` must be updated to reflect
that it controls automatic merging after checks pass, not specifically
merge queue enrollment.

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;

    START([CI checks pass])

    subgraph Detection ["Queue Detection"]
        CMQ["check_merge_queue<br/>━━━━━━━━━━<br/>GraphQL: mergeQueue id?<br/>captures queue_available"]
        RQM{"● route_queue_mode<br/>━━━━━━━━━━<br/>auto_merge? queue?"}
    end

    subgraph QueuePath ["Queue Path"]
        EAM["enable_auto_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto<br/>→ enroll in queue"]
        WFQ["wait_for_queue<br/>━━━━━━━━━━<br/>wait_for_merge_queue<br/>900 s timeout"]
        REENROLL["reenroll_stalled_pr<br/>━━━━━━━━━━<br/>toggle_auto_merge"]
        QEF["queue_ejected_fix<br/>━━━━━━━━━━<br/>resolve-merge-conflicts"]
        RPQF["re_push_queue_fix<br/>━━━━━━━━━━<br/>push_to_remote"]
        RMQ["reenter_merge_queue<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto"]
    end

    subgraph DirectPath ["★ Direct Merge Path (new)"]
        DM["★ direct_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto<br/>direct (no queue)"]
        WFDM["★ wait_for_direct_merge<br/>━━━━━━━━━━<br/>poll gh pr view<br/>10 s × 90 = 15 min"]
        DMCF["★ direct_merge_conflict_fix<br/>━━━━━━━━━━<br/>resolve-merge-conflicts"]
        RPDF["★ re_push_direct_fix<br/>━━━━━━━━━━<br/>push_to_remote"]
        RDM["★ redirect_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto<br/>re-enable after fix"]
    end

    SUCCESS["release_issue_success<br/>━━━━━━━━━━<br/>label: staged"]
    TIMEOUT["release_issue_timeout<br/>━━━━━━━━━━<br/>no staged label"]
    FAILURE["release_issue_failure<br/>━━━━━━━━━━<br/>→ cleanup_failure"]
    CONFIRM["confirm_cleanup<br/>━━━━━━━━━━<br/>delete clone? yes/no"]
    DONE([done])

    START --> CMQ
    CMQ --> RQM
    RQM -->|"auto_merge != 'true'"| CONFIRM
    RQM -->|"queue_available == true"| EAM
    RQM -->|"● default (was release_issue_success)"| DM

    EAM --> WFQ
    EAM -->|"on_failure"| CONFIRM
    WFQ -->|"merged"| SUCCESS
    WFQ -->|"ejected"| QEF
    WFQ -->|"stalled"| REENROLL
    WFQ -->|"timeout"| TIMEOUT
    REENROLL --> WFQ
    QEF -->|"resolved"| RPQF
    QEF -->|"escalation_required"| FAILURE
    RPQF --> RMQ
    RMQ --> WFQ

    DM --> WFDM
    DM -->|"on_failure"| CONFIRM
    WFDM -->|"merged"| SUCCESS
    WFDM -->|"closed"| DMCF
    WFDM -->|"timeout"| TIMEOUT
    DMCF -->|"resolved"| RPDF
    DMCF -->|"escalation_required"| FAILURE
    RPDF --> RDM
    RDM --> WFDM

    SUCCESS --> CONFIRM
    TIMEOUT --> CONFIRM
    CONFIRM -->|"yes"| DONE
    CONFIRM -->|"no"| DONE

    class START,DONE terminal;
    class CMQ phase;
    class RQM stateNode;
    class EAM,WFQ,REENROLL,QEF,RPQF,RMQ handler;
    class DM,WFDM,DMCF,RPDF,RDM newComponent;
    class CONFIRM detector;
    class SUCCESS,TIMEOUT,FAILURE output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Pipeline start and end states |
| Purple | Phase | Queue detection and analysis nodes |
| Teal | State | `● route_queue_mode` — modified routing decision |
| Orange | Handler | Existing queue path steps |
| Green | New Component | `★` New direct-merge path steps added by this PR |
| Dark Teal | Output | Terminal release / timeout / failure states |
| Red | Detector | `confirm_cleanup` gate |

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 48, 'rankSpacing': 56, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph Inputs ["INIT_ONLY — Ingredient Fields (set at recipe start)"]
        direction LR
        OPR["inputs.open_pr<br/>━━━━━━━━━━<br/>default: true<br/>SKIP_GATE for all PR steps"]
        AM["● inputs.auto_merge<br/>━━━━━━━━━━<br/>default: true<br/>● desc: direct merge fallback"]
        BB["inputs.base_branch<br/>━━━━━━━━━━<br/>merge target"]
        IU["inputs.issue_url<br/>━━━━━━━━━━<br/>optional: release gate"]
    end

    subgraph Prerequisites ["SET_ONCE — Pipeline Context (captured before merge path)"]
        direction LR
        QA["context.queue_available<br/>━━━━━━━━━━<br/>SET: check_merge_queue<br/>READ: route_queue_mode"]
        PRN["context.pr_number<br/>━━━━━━━━━━<br/>SET: extract_pr_number<br/>READ: direct_merge, wait_for_direct_merge"]
        WD["context.work_dir<br/>━━━━━━━━━━<br/>SET: clone<br/>READ: all direct-merge steps"]
        MT["context.merge_target<br/>━━━━━━━━━━<br/>SET: create_branch<br/>READ: re_push_direct_fix"]
    end

    subgraph Gate ["VALIDATION GATE — skip_when_false"]
        GATE["inputs.open_pr == true<br/>━━━━━━━━━━<br/>Guards all 5 new steps:<br/>direct_merge, wait_for_direct_merge,<br/>direct_merge_conflict_fix,<br/>re_push_direct_fix, redirect_merge"]
    end

    subgraph RoutingDecision ["● route_queue_mode — State-Based Router"]
        RQM["● route_queue_mode<br/>━━━━━━━━━━<br/>reads: inputs.auto_merge<br/>reads: context.queue_available<br/>● default → direct_merge (was: release_issue_success)"]
    end

    subgraph NewCaptures ["★ CAPTURE_RESULT — New Context Fields (direct merge path)"]
        direction TB
        DMS["★ direct_merge_state<br/>━━━━━━━━━━<br/>SET: wait_for_direct_merge<br/>values: merged | closed | timeout<br/>READ: on_result conditions"]
        CER["conflict_escalation_required<br/>━━━━━━━━━━<br/>SET: direct_merge_conflict_fix<br/>values: true | false<br/>READ: on_result conditions"]
    end

    subgraph RouteContracts ["ON_RESULT CONTRACTS — Typed routing guards"]
        direction TB
        WFDMc["wait_for_direct_merge<br/>━━━━━━━━━━<br/>merged → release_issue_success<br/>closed → direct_merge_conflict_fix<br/>default → release_issue_timeout"]
        DMCFc["direct_merge_conflict_fix<br/>━━━━━━━━━━<br/>escalation_required == true → release_issue_failure<br/>default → re_push_direct_fix"]
    end

    OPR -->|"evaluated each step"| GATE
    AM -->|"read by"| RQM
    QA -->|"read by"| RQM
    BB -->|"passed to"| DMCFc
    IU -->|"gates"| RouteContracts

    PRN -->|"passed to"| Gate
    WD -->|"passed to"| Gate
    MT -->|"passed to"| Gate

    GATE -->|"pass → execute"| RQM
    RQM -->|"default route changed"| NewCaptures

    DMS -->|"consumed by"| WFDMc
    CER -->|"consumed by"| DMCFc

    class OPR,AM,BB,IU detector;
    class QA,PRN,WD,MT stateNode;
    class GATE gap;
    class RQM phase;
    class DMS newComponent;
    class CER handler;
    class WFDMc,DMCFc output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Red | INIT_ONLY | Ingredient fields — set once at recipe start, never mutated |
| Teal | SET_ONCE | Pipeline context fields — captured before merge path, read by new steps |
| Yellow/Orange | SKIP_GATE | `inputs.open_pr` validation gate protecting all new steps |
| Purple | Modified Router | `● route_queue_mode` — routing logic changed by this PR |
| Green | New Capture | `★ direct_merge_state` — new field introduced by this PR |
| Orange | Existing Capture | `conflict_escalation_required` — existing field, also written by new step |
| Dark Teal | Route Contracts | `on_result` typed routing guards (consume captured fields) |

Closes #401

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-20260320-165150-373150/temp/make-plan/direct_merge_fallback_plan_2026-03-20_000001.md`

## Token Usage Summary

(Token data accumulated server-side)

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…lti-Issue Runs (#451)

## Summary

When `process-issues` orchestrates a multi-issue run, `claim_issue` is
currently deferred to each recipe's internal step graph. This leaves
every not-yet-started issue unclaimed and vulnerable to parallel pickup
by another session. The fix requires three coordinated changes:

1. **`claim_issue` MCP tool** gains an `allow_reentry: bool = False`
parameter — when `True` and the in-progress label is already present,
returns `claimed: true` (reentry) instead of `claimed: false` (blocked).
2. **`process-issues` skill** is refactored to claim all manifest issues
upfront before dispatching any recipe, track which were successfully
claimed, pass `upfront_claimed: "true"` as a recipe ingredient, and
release uncompleted issues on fatal failure.
3. **Three recipes** (`implementation.yaml`, `remediation.yaml`,
`implementation-groups.yaml`) each gain an `upfront_claimed` ingredient
(default `"false"`) and pass it as `allow_reentry` to their
`claim_issue` step so that a pre-claim by the orchestrator is recognized
as "proceed" rather than "abort".
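The `allow_reentry` semantics in change 1 can be sketched as follows. This is an illustrative model only: the real `claim_issue` MCP tool talks to GitHub labels, while the in-memory store here exists purely to show the reentry decision.

```python
# Hedged sketch of claim_issue's allow_reentry behavior, backed by an
# in-memory label store instead of the real GitHub label API.
_labels: dict[str, set[str]] = {}

def claim_issue(issue_url: str, allow_reentry: bool = False) -> dict:
    labels = _labels.setdefault(issue_url, set())
    if "in-progress" in labels:
        # Label already present: blocked, unless the caller declares this a
        # reentry (e.g. the orchestrator pre-claimed the issue upfront).
        return {"claimed": allow_reentry, "reentry": allow_reentry}
    labels.add("in-progress")
    return {"claimed": True, "reentry": False}
```

A foreign claim (label applied by another session) still blocks when `allow_reentry` is `False`, preserving the existing guard.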

## Requirements

### BATCH

- **REQ-BATCH-001:** The `process-issues` skill must call `claim_issue`
for every issue in the triage manifest before dispatching any recipe
execution.
- **REQ-BATCH-002:** The system must iterate through all issues in the
manifest and call `claim_issue` individually for each, collecting
results before proceeding.
- **REQ-BATCH-003:** Issues where `claim_issue` returns `claimed: false`
(already claimed by another session) must be excluded from the dispatch
list and logged as skipped.
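The BATCH requirements above amount to a single claim-all loop before any dispatch. A minimal sketch, where `claim_issue` is any callable returning `{"claimed": bool}` (the real one is the MCP tool):

```python
# Illustrative upfront-claim loop (REQ-BATCH-001..003): claim every
# manifest issue before dispatching any recipe, partitioning results
# into pre-claimed and skipped lists.
def claim_upfront(issue_urls, claim_issue):
    pre_claimed, skipped = [], []
    for url in issue_urls:
        result = claim_issue(url)  # allow_reentry defaults to False
        if result.get("claimed"):
            pre_claimed.append(url)
        else:
            skipped.append(url)  # foreign session owns it; exclude and log
    return pre_claimed, skipped
```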

### COMPAT

- **REQ-COMPAT-001:** Per-recipe `claim_issue` steps must not abort when
the in-progress label was already applied by the same orchestration
session's upfront claim.
- **REQ-COMPAT-002:** The `claim_issue` tool's `on_result` routing in
recipes must treat a pre-existing label applied by the current session
as a proceed condition, not an escalate-stop condition.
- **REQ-COMPAT-003:** Single-issue recipe flows (no orchestrator) must
continue to function identically to current behavior.

### RELEASE

- **REQ-RELEASE-001:** The `process-issues` skill must release all
upfront-claimed but unprocessed issues when the orchestrator encounters
a fatal failure.
- **REQ-RELEASE-002:** The system must track which issues were claimed
upfront and which have been handed off to recipe execution, so that only
uncompleted issues are released on failure.
- **REQ-RELEASE-003:** Issues that completed recipe execution (success
or recipe-level failure with its own release) must not be
double-released by the orchestrator cleanup.
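The RELEASE bookkeeping reduces to a set difference between the two tracked lists. A hedged sketch, with `release_issue` standing in for the real tool:

```python
# Sketch of fatal-failure cleanup (REQ-RELEASE-001..003): release only
# issues claimed upfront that never completed recipe execution, so
# completed issues are never double-released.
def release_uncompleted(pre_claimed_urls, completed_urls, release_issue):
    completed = set(completed_urls)
    released = [url for url in pre_claimed_urls if url not in completed]
    for url in released:
        release_issue(url)  # skipped for anything in completed_urls
    return released
```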

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([● process-issues invoked])
    DONE([DONE])
    ERROR([FATAL ERROR])

    subgraph Phase0 ["Phase 0 — Parse & Discover"]
        direction TB
        Parse["Parse args<br/>━━━━━━━━━━<br/>--batch N, --dry-run,<br/>--comment, --merge-batch"]
        Discover["Discover manifest<br/>━━━━━━━━━━<br/>triage_manifest_*.json"]
        DryRun{"--dry-run?"}
        Confirm{"User confirms Y/n?"}
    end

    subgraph Phase1 ["● Phase 1 — Upfront Claiming (MODIFIED)"]
        direction TB
        Flatten["● Flatten all issues<br/>━━━━━━━━━━<br/>Collect issues from<br/>all selected batches"]
        ClaimCall["● claim_issue()<br/>━━━━━━━━━━<br/>allow_reentry=False<br/>for each issue"]
        ClaimedDecision{"● claimed == true?"}
        TrackClaimed["● pre_claimed_urls<br/>━━━━━━━━━━<br/>append issue_url"]
        TrackSkipped["● Log skipped<br/>━━━━━━━━━━<br/>foreign session owns it"]
    end

    subgraph Phase2 ["● Phase 2 — Batch Dispatch (MODIFIED)"]
        direction TB
        BatchIssueLoop{"For each issue<br/>in batch (asc order)"}
        InPreClaimed{"● issue_url in<br/>pre_claimed_urls?"}
        SkipIssue["Skip issue<br/>━━━━━━━━━━<br/>not pre-claimed"]
        LoadRecipe["load_recipe()<br/>━━━━━━━━━━<br/>● with upfront_claimed:<br/>'true' ingredient"]
        MarkDone["● completed_urls<br/>━━━━━━━━━━<br/>append after recipe<br/>returns"]
    end

    subgraph RecipeInternal ["● Recipe Internal — claim_issue step (MODIFIED)"]
        direction TB
        RecipeClaim["● recipe claim_issue<br/>━━━━━━━━━━<br/>allow_reentry:<br/>inputs.upfront_claimed"]
        ClaimTool["● claim_issue tool<br/>━━━━━━━━━━<br/>allow_reentry=True:<br/>label present→claimed=true<br/>allow_reentry=False:<br/>label present→claimed=false"]
        ClaimResult{"result.claimed<br/>== true?"}
        ProceedRecipe["compute_branch<br/>━━━━━━━━━━<br/>continue recipe..."]
        EscalateStop["escalate_stop<br/>━━━━━━━━━━<br/>foreign claim detected"]
    end

    subgraph Phase3 ["● Phase 3 — Fatal Cleanup (NEW)"]
        direction TB
        Diff["● Compute uncompleted<br/>━━━━━━━━━━<br/>pre_claimed_urls −<br/>completed_urls"]
        ReleaseLoop["● release_issue()<br/>━━━━━━━━━━<br/>for each uncompleted url"]
    end

    subgraph Phase4 ["Phase 4 — Summary"]
        direction TB
        WriteReport["Write process_report<br/>━━━━━━━━━━<br/>successes/failures/<br/>skipped counts"]
    end

    %% FLOW %%
    START --> Parse --> Discover --> DryRun
    DryRun -->|"yes"| DONE
    DryRun -->|"no"| Confirm
    Confirm -->|"n"| DONE
    Confirm -->|"Y"| Flatten

    Flatten --> ClaimCall
    ClaimCall --> ClaimedDecision
    ClaimedDecision -->|"true"| TrackClaimed --> ClaimCall
    ClaimedDecision -->|"false"| TrackSkipped --> ClaimCall
    ClaimCall -->|"all done"| BatchIssueLoop

    BatchIssueLoop --> InPreClaimed
    InPreClaimed -->|"no"| SkipIssue --> BatchIssueLoop
    InPreClaimed -->|"yes"| LoadRecipe
    LoadRecipe --> MarkDone --> BatchIssueLoop
    BatchIssueLoop -->|"all done"| WriteReport --> DONE

    LoadRecipe -->|"fatal error"| Diff
    Diff --> ReleaseLoop --> ERROR

    LoadRecipe -.->|"dispatches"| RecipeClaim
    RecipeClaim --> ClaimTool --> ClaimResult
    ClaimResult -->|"true"| ProceedRecipe
    ClaimResult -->|"false"| EscalateStop

    %% CLASS ASSIGNMENTS %%
    class START,DONE,ERROR terminal;
    class Parse,Discover handler;
    class DryRun,Confirm stateNode;
    class Flatten,ClaimCall,TrackClaimed,TrackSkipped newComponent;
    class ClaimedDecision stateNode;
    class BatchIssueLoop phase;
    class InPreClaimed stateNode;
    class SkipIssue detector;
    class LoadRecipe,MarkDone handler;
    class RecipeClaim,ClaimTool newComponent;
    class ClaimResult stateNode;
    class ProceedRecipe handler;
    class EscalateStop detector;
    class Diff,ReleaseLoop newComponent;
    class WriteReport output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start, done, and error states |
| Teal | State | Decision and routing nodes |
| Purple | Phase | Control flow and loop nodes |
| Orange | Handler | Processing and execution nodes |
| Green | Modified/New | ● Components changed by this PR |
| Red | Detector | Validation gates and failure handling |
| Dark Teal | Output | Generated artifacts and results |

Closes #445

## Implementation Plan

Plan file:
`temp/make-plan/orchestrator_upfront_claim_plan_2026-03-20_171414.md`

## Token Usage Summary

| Step | input | output | cached | count | time |
|------|-------|--------|--------|-------|------|
| plan | 9.7k | 265.2k | 8.3M | 16 | 1h 47m |
| verify | 294 | 235.8k | 11.5M | 15 | 1h 20m |
| implement | 6.2k | 313.3k | 40.0M | 15 | 2h 42m |
| fix | 60 | 13.0k | 1.1M | 3 | 11m 58s |
| audit_impl | 132 | 73.4k | 2.5M | 9 | 25m 3s |
| open_pr | 370 | 207.5k | 10.8M | 14 | 1h 28m |
| review_pr | 203 | 259.5k | 5.7M | 8 | 1h 3m |
| resolve_review | 3.7k | 183.4k | 13.7M | 8 | 1h 21m |
| resolve_conflicts | 75 | 30.6k | 2.6M | 3 | 10m 44s |
| diagnose_ci | 22 | 7.3k | 466.1k | 1 | 2m 33s |
| resolve_ci | 13 | 3.1k | 239.8k | 1 | 2m 4s |
| **Total** | 20.7k | 1.6M | 97.1M | | 10h 35m |

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…#452)

## Summary

AutoSkillit automates code generation and commits at scale. Without a
secret-scanning hook in the pre-commit pipeline, leaked credentials
shift from "possible" to "inevitable." This plan adds a security gate to
`autoskillit init` that checks for a known secret scanner in
`.pre-commit-config.yaml` and — when absent — requires the user to type
an explicit acknowledgment phrase before proceeding. The bypass decision
is persisted to `.autoskillit/config.yaml` with a UTC timestamp.
`autoskillit doctor` gains a new `secret_scanning_hook` check that
reports `ERROR` when no scanner is detected.

## Architecture Impact

### Security Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    START([autoskillit init / doctor])

    subgraph ConfigBoundary ["TRUST BOUNDARY 1: Pre-commit Config Parse"]
        PCFile["● .pre-commit-config.yaml<br/>━━━━━━━━━━<br/>Untrusted filesystem input<br/>(may be absent, malformed, or missing scanner)"]
        ParseYAML["● _detect_secret_scanner<br/>━━━━━━━━━━<br/>load_yaml() — parse repos[].hooks[].id<br/>Membership: _KNOWN_SCANNERS frozenset<br/>{gitleaks, detect-secrets, trufflehog, git-secrets}"]
        ScanFound{Scanner<br/>found?}
    end

    subgraph GreenPath ["PASS PATH"]
        GreenOK["● Print ✓ confirmation<br/>━━━━━━━━━━<br/>secret scanning: ✓ hook detected"]
    end

    subgraph TTYBoundary ["TRUST BOUNDARY 2: Interactive Session Gate"]
        TTYCheck["● sys.stdin.isatty()<br/>━━━━━━━━━━<br/>Non-interactive: fail closed<br/>(CI, pipes, headless sessions)"]
        TTYFail["ABORT — SystemExit(1)<br/>━━━━━━━━━━<br/>No scanner + non-interactive<br/>Cannot bypass this check"]
    end

    subgraph ConsentBoundary ["TRUST BOUNDARY 3: Typed Consent Gate"]
        WarnBox["● Warning box + phrase display<br/>━━━━━━━━━━<br/>Shows exact bypass phrase required"]
        UserInput["User types response<br/>━━━━━━━━━━<br/>input() → strip()"]
        PhraseMatch{Exact phrase<br/>match?}
        BadPhrase["ABORT — SystemExit(1)<br/>━━━━━━━━━━<br/>Phrase mismatch<br/>--force cannot bypass"]
    end

    subgraph AuditBoundary ["TRUST BOUNDARY 4: Bypass Audit Trail"]
        LogBypass["● _log_secret_scan_bypass<br/>━━━━━━━━━━<br/>safety.secret_scan_bypass_accepted<br/>= UTC ISO timestamp → config.yaml<br/>(atomic_write)"]
    end

    subgraph RegisterFlow ["Post-Gate: _register_all"]
        Hooks["sync_hooks_to_settings<br/>━━━━━━━━━━<br/>Hook registration"]
        MCP["Register MCP server<br/>━━━━━━━━━━<br/>~/.claude.json"]
    end

    subgraph DoctorBoundary ["TRUST BOUNDARY 5: Doctor Observability Check"]
        DocCheck["● _check_secret_scanning_hook<br/>━━━━━━━━━━<br/>Check 10 in run_doctor()<br/>Reuses _detect_secret_scanner()"]
        DocOK["DoctorResult OK<br/>━━━━━━━━━━<br/>severity=ok<br/>check=secret_scanning_hook"]
        DocError["● DoctorResult ERROR<br/>━━━━━━━━━━<br/>severity=error<br/>Message: add scanner to prevent credential leaks"]
    end

    END([Init complete / Doctor report])

    START --> PCFile
    PCFile --> ParseYAML
    ParseYAML --> ScanFound
    ScanFound -->|yes| GreenOK
    ScanFound -->|no| TTYCheck
    GreenOK --> Hooks
    TTYCheck -->|no tty| TTYFail
    TTYCheck -->|tty| WarnBox
    WarnBox --> UserInput
    UserInput --> PhraseMatch
    PhraseMatch -->|wrong| BadPhrase
    PhraseMatch -->|correct| LogBypass
    LogBypass --> Hooks
    Hooks --> MCP
    MCP --> END

    START --> DocCheck
    DocCheck --> DocOK
    DocCheck --> DocError
    DocOK --> END
    DocError --> END

    class START,END terminal;
    class PCFile phase;
    class ParseYAML,TTYCheck,PhraseMatch detector;
    class ScanFound phase;
    class GreenOK,LogBypass,DocOK newComponent;
    class WarnBox,UserInput newComponent;
    class TTYFail,BadPhrase,DocError gap;
    class Hooks,MCP handler;
    class DocCheck newComponent;
```

### Operational Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph InitCLI ["● autoskillit init  (app.py)"]
        InitCmd["● autoskillit init<br/>━━━━━━━━━━<br/>--force  --scope user|project"]
        ConfigWrite["_generate_config_yaml<br/>━━━━━━━━━━<br/>Write .autoskillit/config.yaml"]
        SecretGate["● _check_secret_scanning<br/>━━━━━━━━━━<br/>Gate: scanner present OR consent<br/>Aborts with SystemExit(1) on failure"]
        RegisterAll["_register_all<br/>━━━━━━━━━━<br/>Hooks + MCP + summary"]
    end

    subgraph DoctorCLI ["● autoskillit doctor  (_doctor.py)"]
        DoctorCmd["● autoskillit doctor<br/>━━━━━━━━━━<br/>--output-json"]
        ChecksExisting["Checks 1–9<br/>━━━━━━━━━━<br/>stale_mcp_servers, mcp_server_registered,<br/>autoskillit_on_path, project_config,<br/>version_consistency, hook_health,<br/>hook_registration, script_version_health,<br/>gitignore_completeness"]
        CheckSecret["● Check 10: _check_secret_scanning_hook<br/>━━━━━━━━━━<br/>check=secret_scanning_hook<br/>Reuses _detect_secret_scanner()"]
        DoctorOut["DoctorResult[]<br/>━━━━━━━━━━<br/>severity: ok | warning | error<br/>JSON or human-readable output"]
    end

    subgraph Config ["CONFIGURATION SOURCES (read)"]
        PreCommit[".pre-commit-config.yaml<br/>━━━━━━━━━━<br/>Scanned for hook ids:<br/>gitleaks / detect-secrets<br/>trufflehog / git-secrets"]
        ConfigYaml[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>safety.secret_scan_bypass_accepted<br/>= UTC ISO timestamp (bypass log)"]
    end

    subgraph TTYState ["RUNTIME STATE (read)"]
        Stdin["sys.stdin.isatty()<br/>━━━━━━━━━━<br/>Interactive vs non-interactive<br/>Determines consent path"]
    end

    subgraph ObsOutputs ["OBSERVABILITY OUTPUTS (write)"]
        InitOutput["● init stdout<br/>━━━━━━━━━━<br/>secret scanning: ✓ hook detected<br/>OR bypass: accepted — logged<br/>OR ERROR + SystemExit(1)"]
        DoctorJSON["● doctor JSON / text<br/>━━━━━━━━━━<br/>{ check: secret_scanning_hook,<br/>  severity: ok | error,<br/>  message: ... }"]
    end

    InitCmd --> ConfigWrite
    ConfigWrite --> SecretGate
    SecretGate --> PreCommit
    SecretGate --> Stdin
    SecretGate --> ConfigYaml
    SecretGate --> RegisterAll
    SecretGate --> InitOutput

    DoctorCmd --> ChecksExisting
    DoctorCmd --> CheckSecret
    CheckSecret --> PreCommit
    ChecksExisting --> DoctorOut
    CheckSecret --> DoctorOut
    DoctorOut --> DoctorJSON

    class InitCmd,DoctorCmd cli;
    class ConfigWrite,RegisterAll,ChecksExisting handler;
    class SecretGate,CheckSecret newComponent;
    class PreCommit,ConfigYaml,Stdin stateNode;
    class InitOutput,DoctorOut,DoctorJSON output;
```

Closes #448

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-448-20260320-180957-935972/temp/make-plan/init_secret_scanning_opt_in_plan_2026-03-20_180957.md`

## Token Usage Summary

| Step | input | output | cached | count | time |
|------|-------|--------|--------|-------|------|
| plan | 9.7k | 298.8k | 10.5M | 18 | 1h 56m |
| verify | 326 | 264.5k | 12.7M | 17 | 1h 34m |
| implement | 6.3k | 365.6k | 46.4M | 17 | 2h 57m |
| fix | 131 | 43.6k | 4.6M | 5 | 27m 26s |
| audit_impl | 146 | 82.3k | 2.8M | 10 | 27m 50s |
| open_pr | 404 | 222.9k | 12.1M | 15 | 1h 35m |
| review_pr | 226 | 287.1k | 6.4M | 9 | 1h 9m |
| resolve_review | 3.7k | 200.0k | 16.1M | 9 | 1h 32m |
| resolve_conflicts | 75 | 30.6k | 2.6M | 3 | 10m 44s |
| diagnose_ci | 36 | 9.0k | 670.5k | 2 | 3m 15s |
| resolve_ci | 13 | 3.1k | 239.8k | 1 | 2m 4s |
| **Total** | 21.1k | 1.8M | 115.2M | | 11h 57m |

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

The routing layer (`_prompts.py`, `sous-chef/SKILL.md`) uses only the
boolean `needs_retry`
field to decide whether a failed `run_skill` result should follow
`on_context_limit` or
`on_failure`. The `retry_reason` field is emitted in JSON output and
visible to the LLM as
prose, but the routing instructions never gate on its value.

This creates a structural blindness: `RetryReason.RESUME` is used for
both "context/turn limit
with partial progress on disk" and "session never ran at all (empty
output, kill anomaly)".
When the session produced no output, the orchestrator nevertheless
routes to `on_context_limit`
(e.g., `retry_worktree`) which assumes partial work exists — leading it
to attempt continuation
of a worktree that contains no work.

The fix makes `retry_reason` a **routing discriminant** rather than mere
informational prose:
adds `RetryReason.EMPTY_OUTPUT` for kill-anomaly retries under
`NATURAL_EXIT` with no context
exhaustion evidence, and updates routing rules to gate
`on_context_limit` specifically on
`retry_reason: resume`. Any future `RetryReason` value automatically
falls through to `on_failure` until explicitly added, making the
conservative route the fail-safe default.
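The gating rule can be sketched as a small pure function. Names here are illustrative; the real routing lives in `_prompts.py` and `sous-chef/SKILL.md` as LLM-facing instructions, not code:

```python
# Sketch of retry_reason as a routing discriminant: only "resume" is
# eligible for on_context_limit, and every other value (including any
# future RetryReason) falls through to on_failure.
def route_retry(retry_reason: str, has_on_context_limit: bool) -> str:
    if retry_reason == "resume" and has_on_context_limit:
        return "on_context_limit"  # partial progress on disk assumed
    # empty_output, early_stop, zero_writes, and unknown future reasons
    # all fail safe to on_failure until explicitly added above.
    return "on_failure"
```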

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    START([Claude Code Process Exits])

    subgraph Classification ["SESSION CLASSIFICATION — ● session.py"]
        direction TB
        P1["Phase 1: session.needs_retry property<br/>━━━━━━━━━━<br/>ERROR_MAX_TURNS OR _is_context_exhausted()<br/>→ (True, RESUME) always"]
        KA{"● _is_kill_anomaly?<br/>━━━━━━━━━━<br/>subtype: EMPTY_OUTPUT/UNPARSEABLE/<br/>INTERRUPTED, or SUCCESS+empty"}
        TERM{"TerminationReason?<br/>━━━━━━━━━━<br/>NATURAL_EXIT / COMPLETED /<br/>STALE / TIMED_OUT"}
        CE{"● _is_context_exhausted?<br/>━━━━━━━━━━<br/>jsonl_context_exhausted OR<br/>marker in errors OR marker in result"}
        RESUME_OUT["RetryReason.RESUME<br/>━━━━━━━━━━<br/>Partial progress on disk<br/>Context/turn limit confirmed"]
        EO_OUT["● RetryReason.EMPTY_OUTPUT<br/>━━━━━━━━━━<br/>Clean exit, no output<br/>No partial progress on disk"]
        ES_OUT["RetryReason.EARLY_STOP<br/>━━━━━━━━━━<br/>Model stopped voluntarily<br/>Non-empty output present"]
        NONE_OUT["RetryReason.NONE<br/>━━━━━━━━━━<br/>No retry needed<br/>(success or terminal failure)"]
    end

    subgraph Packaging ["SKILL RESULT PACKAGING — core/_type_enums.py + _type_results.py"]
        direction LR
        SR["SkillResult<br/>━━━━━━━━━━<br/>needs_retry: bool<br/>● retry_reason: RetryReason<br/>(routing discriminant, not prose)"]
    end

    subgraph Routing ["● ORCHESTRATOR ROUTING — ● _prompts.py + ● sous-chef/SKILL.md"]
        direction TB
        RG{"● retry_reason value?<br/>━━━━━━━━━━<br/>Gate on specific value,<br/>not just needs_retry bool"}
        HAS_OCL{"step defines<br/>on_context_limit?"}
        OCL["→ on_context_limit<br/>━━━━━━━━━━<br/>retry_worktree or test<br/>Partial progress assumed"]
        FAIL_FALL["→ on_failure<br/>━━━━━━━━━━<br/>All non-resume reasons<br/>No partial state assumed"]
    end

    START --> P1
    P1 -->|"needs_retry=True<br/>(Phase 1 fires)"| RESUME_OUT
    P1 -->|"needs_retry=False<br/>(proceed to Phase 2)"| TERM

    TERM -->|"NATURAL_EXIT / COMPLETED"| KA
    TERM -->|"STALE / TIMED_OUT"| NONE_OUT

    KA -->|"yes + NATURAL_EXIT + rc=0"| CE
    KA -->|"yes + COMPLETED"| RESUME_OUT
    KA -->|"no + rc=0 + marker absent"| ES_OUT
    KA -->|"no + channel confirmed"| NONE_OUT

    CE -->|"● True — context exhausted"| RESUME_OUT
    CE -->|"● False — no exhaustion signal"| EO_OUT

    RESUME_OUT --> SR
    EO_OUT --> SR
    ES_OUT --> SR
    NONE_OUT --> SR

    SR -->|"needs_retry=True"| RG
    SR -->|"needs_retry=False"| SUCCESS_END

    RG -->|"● retry_reason = resume"| HAS_OCL
    RG -->|"● retry_reason = empty_output<br/>early_stop / zero_writes"| FAIL_FALL

    HAS_OCL -->|"yes"| OCL
    HAS_OCL -->|"no"| FAIL_FALL

    SUCCESS_END([success / terminal failure])

    %% CLASS ASSIGNMENTS %%
    class START terminal;
    class P1 phase;
    class TERM,KA detector;
    class CE detector;
    class RESUME_OUT,NONE_OUT,ES_OUT stateNode;
    class EO_OUT newComponent;
    class SR handler;
    class RG,HAS_OCL detector;
    class OCL output;
    class FAIL_FALL gap;
    class SUCCESS_END terminal;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Process entry and exit points |
| Purple | Phase | Control flow phase nodes |
| Red (dark) | Detector | Decision gates (_is_kill_anomaly, _is_context_exhausted, routing gate) |
| Teal (dark) | State | Existing RetryReason values (RESUME, EARLY_STOP, NONE) |
| Green | New Component | ● Modified: EMPTY_OUTPUT reason + updated routing |
| Orange | Handler | SkillResult contract |
| Dark Teal | Output | on_context_limit route (resume path) |
| Yellow | Gap | on_failure fallback for non-resumable retries |

### Error/Resilience Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    START([Headless Session Exits])

    subgraph Classification ["FAILURE CLASSIFICATION — ● session.py"]
        direction TB
        PHASE1{"Phase 1 fires?<br/>━━━━━━━━━━<br/>ERROR_MAX_TURNS or<br/>_is_context_exhausted()"}
        KA{"● _is_kill_anomaly?<br/>━━━━━━━━━━<br/>Empty output or<br/>unparseable result"}
        CE{"● _is_context_exhausted?<br/>━━━━━━━━━━<br/>jsonl marker OR<br/>error list OR result text"}
        RESUME["RetryReason.RESUME<br/>━━━━━━━━━━<br/>Resumable: context/turn limit<br/>OR infrastructure kill<br/>Partial progress on disk"]
        EO["● RetryReason.EMPTY_OUTPUT<br/>━━━━━━━━━━<br/>Non-resumable: clean exit<br/>with no output produced<br/>No partial progress on disk"]
        OTHER["Other RetryReason values<br/>━━━━━━━━━━<br/>EARLY_STOP, ZERO_WRITES,<br/>DRAIN_RACE, NONE"]
    end

    subgraph Guards ["COMPOSITION GUARDS — session.py"]
        direction LR
        CONTRA{"Contradiction?<br/>━━━━━━━━━━<br/>success=True AND<br/>needs_retry=True"}
        DEADEND{"Dead-end?<br/>━━━━━━━━━━<br/>channel confirmed AND<br/>not success AND not retry"}
    end

    subgraph Routing ["● ORCHESTRATOR ROUTING — ● _prompts.py + ● sous-chef/SKILL.md"]
        direction TB
        NR{"needs_retry=True?"}
        RR{"● retry_reason?<br/>━━━━━━━━━━<br/>Inspect value,<br/>not just bool"}
        OCL_CHK{"Step defines<br/>on_context_limit?"}
        OCL["→ on_context_limit<br/>━━━━━━━━━━<br/>retry_worktree / test<br/>Resume partial work"]
        FAIL["→ on_failure<br/>━━━━━━━━━━<br/>Fresh restart<br/>or escalate"]
    end

    T_SUCCESS([SUCCEEDED])
    T_TERMINAL([TERMINAL FAILURE])

    START --> PHASE1
    PHASE1 -->|"yes → RESUME"| RESUME
    PHASE1 -->|"no → Phase 2"| KA

    KA -->|"yes"| CE
    KA -->|"no"| OTHER

    CE -->|"● yes — exhausted"| RESUME
    CE -->|"● no — clean exit"| EO

    RESUME --> CONTRA
    EO --> CONTRA
    OTHER --> CONTRA

    CONTRA -->|"demote success=False"| DEADEND
    CONTRA -->|"no contradiction"| DEADEND

    DEADEND -->|"yes → DRAIN_RACE"| NR
    DEADEND -->|"no"| NR

    NR -->|"False"| T_SUCCESS
    NR -->|"False + terminal"| T_TERMINAL
    NR -->|"True"| RR

    RR -->|"● resume or drain_race"| OCL_CHK
    RR -->|"● empty_output"| FAIL
    RR -->|"● early_stop / zero_writes"| FAIL

    OCL_CHK -->|"yes"| OCL
    OCL_CHK -->|"no"| FAIL

    OCL -->|"retry worktree"| START
    FAIL -->|"recipe on_failure"| T_TERMINAL

    %% CLASS ASSIGNMENTS %%
    class START terminal;
    class PHASE1,KA,CE detector;
    class RESUME stateNode;
    class EO newComponent;
    class OTHER stateNode;
    class CONTRA,DEADEND phase;
    class NR,RR,OCL_CHK detector;
    class OCL output;
    class FAIL gap;
    class T_SUCCESS,T_TERMINAL terminal;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start, success, and terminal failure states |
| Red (dark) | Detector | Classification gates and routing decision points |
| Purple | Phase | Composition guards (contradiction, dead-end) |
| Teal (dark) | State | Existing RetryReason values (RESUME, other) |
| Green | New Component | ● EMPTY_OUTPUT — new non-resumable failure class |
| Dark Teal | Output | on_context_limit recovery route |
| Yellow | Gap | on_failure fallback (non-resumable path) |

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph EnumDef ["● ENUM DEFINITION — ● core/_type_enums.py"]
        direction LR
        RESUME_V["RetryReason.RESUME<br/>━━━━━━━━━━<br/>Contract: context/turn limit<br/>confirmed OR infrastructure kill<br/>Partial progress on disk"]
        EO_V["● RetryReason.EMPTY_OUTPUT<br/>━━━━━━━━━━<br/>Contract: clean exit<br/>no output produced<br/>No partial progress on disk"]
        OTHERS_V["RetryReason.EARLY_STOP<br/>ZERO_WRITES / DRAIN_RACE / NONE<br/>━━━━━━━━━━<br/>Existing values, unchanged"]
    end

    subgraph InvariantGate ["● INVARIANT ENFORCEMENT GATE — ● session.py"]
        direction TB
        P1_G["Phase 1 Gate<br/>━━━━━━━━━━<br/>ERROR_MAX_TURNS OR<br/>context_exhausted signal<br/>→ RESUME always correct"]
        KA_G{"_is_kill_anomaly?<br/>━━━━━━━━━━<br/>empty/unparseable output"}
        CE_G{"● _is_context_exhausted?<br/>━━━━━━━━━━<br/>Validation gate:<br/>must confirm before RESUME"}
        RESUME_ASSIGN["assign RESUME<br/>━━━━━━━━━━<br/>Invariant holds:<br/>partial progress confirmed"]
        EO_ASSIGN["● assign EMPTY_OUTPUT<br/>━━━━━━━━━━<br/>Invariant holds:<br/>no progress confirmed"]
    end

    subgraph Contract ["SKILLRESULT CONTRACT — core/_type_results.py"]
        direction LR
        SR_FIELD["● SkillResult.retry_reason<br/>━━━━━━━━━━<br/>INIT_ONLY: set by _compute_retry<br/>ROUTING DISCRIMINANT: value<br/>inspected by orchestrator<br/>(was: informational label)"]
    end

    subgraph RoutingContract ["● ROUTING CONTRACT — ● _prompts.py + ● sous-chef/SKILL.md"]
        direction TB
        RESUME_RULE["RESUME contract rule<br/>━━━━━━━━━━<br/>retry_reason=resume<br/>→ on_context_limit eligible<br/>Partial work on disk assumed"]
        EO_RULE["● EMPTY_OUTPUT contract rule<br/>━━━━━━━━━━<br/>retry_reason=empty_output<br/>→ NEVER on_context_limit<br/>No partial work exists"]
        FAIL_SAFE["● Fail-safe default<br/>━━━━━━━━━━<br/>Any new RetryReason value<br/>automatically routes to<br/>on_failure until explicitly added"]
    end

    P1_G -->|"Phase 1 fires"| RESUME_ASSIGN
    P1_G -->|"Phase 1 skipped"| KA_G
    KA_G -->|"yes"| CE_G
    KA_G -->|"no"| OTHERS_V

    CE_G -->|"● True — exhaustion confirmed"| RESUME_ASSIGN
    CE_G -->|"● False — no exhaustion"| EO_ASSIGN

    RESUME_ASSIGN -->|"uses"| RESUME_V
    EO_ASSIGN -->|"● uses"| EO_V

    RESUME_V --> SR_FIELD
    EO_V --> SR_FIELD
    OTHERS_V --> SR_FIELD

    SR_FIELD -->|"● RESUME"| RESUME_RULE
    SR_FIELD -->|"● EMPTY_OUTPUT"| EO_RULE
    SR_FIELD -->|"● new future value"| FAIL_SAFE

    %% CLASS ASSIGNMENTS %%
    class RESUME_V stateNode;
    class EO_V newComponent;
    class OTHERS_V stateNode;
    class P1_G phase;
    class KA_G,CE_G detector;
    class RESUME_ASSIGN stateNode;
    class EO_ASSIGN newComponent;
    class SR_FIELD handler;
    class RESUME_RULE output;
    class EO_RULE newComponent;
    class FAIL_SAFE gap;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Teal (dark) | State | Existing RetryReason values and routing rules |
| Green | New Component | ● EMPTY_OUTPUT value + its contract rule |
| Red (dark) | Detector | Invariant enforcement gates (_is_kill_anomaly, _is_context_exhausted) |
| Purple | Phase | Phase 1 classification gate |
| Orange | Handler | SkillResult.retry_reason field (now routing discriminant) |
| Dark Teal | Output | RESUME routing contract (on_context_limit eligible) |
| Yellow | Gap | Fail-safe default for unknown retry reasons |

Closes #447

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260320-181858-518174/temp/rectify/rectify_retry-reason-routing-blindness_2026-03-20_000000_part_a.md`

## Token Usage Summary


🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

Add strict schema validation to every user-writable YAML config layer so that
unrecognized keys fail loudly at load time, and a `SECRETS_ONLY_KEYS`
enforcement layer ensures `github.token` (and any future secrets) can only
appear in `.secrets.yaml` — never in any `config.yaml`. Additionally, fix two
misleading error messages in `execution/github.py` that point users toward the
committed file, and harden the `setup-project` skill template with an explicit
negative instruction.

## Architecture Impact

### Security Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    START(["load_config()"])

    subgraph TrustedLayer ["TRUSTED LAYER (no validation)"]
        DEF["defaults.yaml<br/>━━━━━━━━━━<br/>Bundled artifact<br/>Schema source of truth"]
    end

    subgraph UserConfigBoundary ["TRUST BOUNDARY: User config.yaml"]
        UCFG["~/.autoskillit/config.yaml<br/>━━━━━━━━━━<br/>User-writable · can be committed"]
        UVAL["● validate_layer_keys()<br/>━━━━━━━━━━<br/>is_secrets_layer=False<br/>Unknown keys → hard fail"]
        USEC["● SECRETS_ONLY_KEYS check<br/>━━━━━━━━━━<br/>github.token FORBIDDEN<br/>→ ConfigSchemaError"]
    end

    subgraph ProjectConfigBoundary ["TRUST BOUNDARY: Project config.yaml"]
        PCFG[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>Project-writable · can be committed"]
        PVAL["● validate_layer_keys()<br/>━━━━━━━━━━<br/>is_secrets_layer=False<br/>Unknown keys → hard fail"]
        PSEC["● SECRETS_ONLY_KEYS check<br/>━━━━━━━━━━<br/>github.token FORBIDDEN<br/>→ ConfigSchemaError"]
    end

    subgraph SecretsBoundary ["TRUST BOUNDARY: .secrets.yaml (gitignored)"]
        SCFG[".autoskillit/.secrets.yaml<br/>━━━━━━━━━━<br/>gitignored · never committed"]
        SVAL["● validate_layer_keys()<br/>━━━━━━━━━━<br/>is_secrets_layer=True<br/>Unknown keys → hard fail"]
        SALLOW["github.token ALLOWED<br/>━━━━━━━━━━<br/>SECRETS_ONLY_KEYS gate<br/>opens for this layer"]
    end

    subgraph EnvLayer ["ENV LAYER (Dynaconf, unrestricted)"]
        ENV["AUTOSKILLIT_SECTION__KEY<br/>━━━━━━━━━━<br/>Env overrides bypass<br/>file validation"]
    end

    subgraph SchemaEnforcement ["● SCHEMA ENFORCEMENT (new)"]
        SCHEMA["● _CONFIG_SCHEMA<br/>━━━━━━━━━━<br/>Derived from AutomationConfig<br/>dataclass hierarchy"]
        SECKEYS["● SECRETS_ONLY_KEYS<br/>━━━━━━━━━━<br/>frozenset({'github.token'})"]
        ERR["● ConfigSchemaError<br/>━━━━━━━━━━<br/>Hard fail · ValueError subclass<br/>typo hint via difflib"]
    end

    subgraph TokenFlow ["TOKEN FLOW → API"]
        MERGED["● _make_dynaconf()<br/>━━━━━━━━━━<br/>Merged YAML → temp file<br/>→ Dynaconf → AutomationConfig"]
        CTX["make_context()<br/>━━━━━━━━━━<br/>token = config.github.token<br/>or GITHUB_TOKEN env var"]
        GHF["DefaultGitHubFetcher<br/>━━━━━━━━━━<br/>self._token (private)<br/>never logged"]
        HDR["_headers()<br/>━━━━━━━━━━<br/>● Error msg → .secrets.yaml<br/>Bearer only if token truthy"]
    end

    START --> DEF
    DEF --> UCFG
    UCFG --> UVAL
    UVAL -->|unknown key| ERR
    UVAL -->|valid| USEC
    USEC -->|github.token present| ERR
    USEC -->|clean| PCFG
    PCFG --> PVAL
    PVAL -->|unknown key| ERR
    PVAL -->|valid| PSEC
    PSEC -->|github.token present| ERR
    PSEC -->|clean| SCFG
    SCFG --> SVAL
    SVAL -->|unknown key| ERR
    SVAL -->|valid| SALLOW
    SALLOW --> MERGED
    ENV --> MERGED
    MERGED --> CTX
    CTX --> GHF
    GHF --> HDR

    SCHEMA -.->|informs| UVAL
    SCHEMA -.->|informs| PVAL
    SCHEMA -.->|informs| SVAL
    SECKEYS -.->|enforces| USEC
    SECKEYS -.->|enforces| PSEC
    SECKEYS -.->|gate opens| SALLOW

    class DEF stateNode;
    class UCFG,PCFG,SCFG phase;
    class UVAL,PVAL,SVAL,USEC,PSEC detector;
    class SALLOW,SCHEMA,SECKEYS,ERR newComponent;
    class ENV cli;
    class MERGED,CTX,GHF,HDR output;
    class START terminal;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal/Entry | Entry point and env-var layer |
| Teal | Trusted | Bundled defaults (schema source of truth) |
| Purple | Config Files | User-writable YAML layers |
| Red | Validation | ● validate_layer_keys() gates and SECRETS_ONLY_KEYS checks |
| Green | New/Modified | ● New enforcement: _CONFIG_SCHEMA, SECRETS_ONLY_KEYS, ConfigSchemaError |
| Dark Teal | Token Flow | Merged config → context → API headers |

### Operational Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph InitWorkflow ["INIT WORKFLOW"]
        INIT["autoskillit init<br/>━━━━━━━━━━<br/>--test-command<br/>--scope user|project"]
        GEN_CFG[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>Non-secret settings only<br/>github.default_repo, test_check"]
        GEN_SEC[".autoskillit/.secrets.yaml<br/>━━━━━━━━━━<br/>github.token placeholder<br/>gitignored"]
    end

    subgraph SetupGuide ["● SETUP-PROJECT GUIDANCE (modified)"]
        SKILL["● setup-project/SKILL.md<br/>━━━━━━━━━━<br/>Step 5: Config Updates<br/>NEVER put token in config.yaml"]
    end

    subgraph ConfigLoad ["● CONFIG LOAD (modified)"]
        SERVE["autoskillit serve<br/>━━━━━━━━━━<br/>MCP server start"]
        SHOWCFG["autoskillit config show<br/>━━━━━━━━━━<br/>Resolved merged JSON"]
        LOAD["● load_config()<br/>━━━━━━━━━━<br/>_make_dynaconf()<br/>4-layer merge"]
    end

    subgraph ConfigLayers ["CONFIGURATION HIERARCHY (5 layers)"]
        L1["defaults.yaml<br/>━━━━━━━━━━<br/>Layer 1 — trusted, no validation"]
        L2["~/.autoskillit/config.yaml<br/>━━━━━━━━━━<br/>Layer 2 — ● validated"]
        L3[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>Layer 3 — ● validated"]
        L4[".autoskillit/.secrets.yaml<br/>━━━━━━━━━━<br/>Layer 4 — ● validated, secrets OK"]
        L5["AUTOSKILLIT_SECTION__KEY<br/>━━━━━━━━━━<br/>Layer 5 — env vars, highest priority"]
    end

    subgraph ValidationGate ["● VALIDATION GATE (new)"]
        VALIDATE["● validate_layer_keys()<br/>━━━━━━━━━━<br/>Unknown key? → hard fail<br/>Wrong layer? → redirect"]
        SCHEMA_ERR["● ConfigSchemaError<br/>━━━━━━━━━━<br/>Typo hint via difflib<br/>Startup blocked"]
    end

    subgraph GitHubErrors ["● GITHUB ERROR MESSAGES (modified)"]
        GH_404["● github.py — HTTP 404<br/>━━━━━━━━━━<br/>'Configure github.token in<br/>.autoskillit/.secrets.yaml'"]
    end

    subgraph Output ["OPERATOR FEEDBACK"]
        OK["AutomationConfig loaded<br/>━━━━━━━━━━<br/>Server ready"]
        ERR_OUT["● ConfigSchemaError raised<br/>━━━━━━━━━━<br/>Message shows layer path<br/>+typo hint + .secrets.yaml redirect"]
    end

    INIT --> GEN_CFG
    INIT --> GEN_SEC
    SKILL -.->|"guides LLM during setup"| GEN_CFG

    SERVE --> LOAD
    SHOWCFG --> LOAD
    LOAD --> L1 --> L2 --> L3 --> L4 --> L5

    L2 --> VALIDATE
    L3 --> VALIDATE
    L4 --> VALIDATE
    VALIDATE -->|valid| OK
    VALIDATE -->|invalid key| SCHEMA_ERR
    SCHEMA_ERR --> ERR_OUT

    GH_404 -.->|"operator sees .secrets.yaml hint"| GEN_SEC

    class INIT,SERVE,SHOWCFG cli;
    class GEN_CFG,GEN_SEC,L1,L2,L3,L4 stateNode;
    class LOAD,VALIDATE,SCHEMA_ERR,GH_404,SKILL newComponent;
    class OK output;
    class ERR_OUT gap;
    class L5 phase;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | CLI | Entry point commands |
| Teal | Config Files | YAML layer files and config state |
| Green | Modified | ● PR changes: validation, error messages, skill guidance |
| Yellow/Orange | Error | ConfigSchemaError — blocks server startup |
| Dark Teal | Success | Server ready with valid config |
| Purple | Env Layer | Environment variable overrides (highest priority) |

Closes #449

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-449-20260320-184502-060373/temp/make-plan/strict_schema_validation_config_yaml_plan_2026-03-20_000000.md`

## Token Usage Summary

| Step | input | output | cached | count | time |
|------|-------|--------|--------|-------|------|
| plan | 9.7k | 298.8k | 10.5M | 18 | 1h 56m |
| verify | 326 | 264.5k | 12.7M | 17 | 1h 34m |
| implement | 6.3k | 365.6k | 46.4M | 17 | 2h 57m |
| fix | 131 | 43.6k | 4.6M | 5 | 27m 26s |
| audit_impl | 161 | 90.0k | 3.1M | 11 | 30m 5s |
| open_pr | 435 | 237.8k | 13.3M | 16 | 1h 40m |
| review_pr | 226 | 287.1k | 6.4M | 9 | 1h 9m |
| resolve_review | 3.7k | 200.0k | 16.1M | 9 | 1h 32m |
| resolve_conflicts | 75 | 30.6k | 2.6M | 3 | 10m 44s |
| diagnose_ci | 36 | 9.0k | 670.5k | 2 | 3m 15s |
| resolve_ci | 27 | 5.9k | 482.4k | 2 | 3m 57s |
| **Total** | 21.2k | 1.8M | 117.0M | | 12h 6m |

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…e Merges (#460)

## Summary

The orchestrator has no rule preventing it from calling `gh pr merge` in
parallel when multiple implementation pipelines each open a PR on a repo that
lacks a GitHub merge queue. When two or more pipelines reach the merge phase
simultaneously, only the first succeeds; the rest fail with stale-base
conflicts. This plan closes that gap by:

1. Adding a `MERGE PHASE — MANDATORY` section to `sous-chef/SKILL.md` that
   (a) prohibits direct parallel `gh pr merge` calls on non-queue branches,
   (b) specifies the detection command the orchestrator must run once before
   the merge phase, and (c) mandates sequential routing when no queue is
   available.

2. Adding a `kitchen_rule` to `implementation.yaml` that names
   `check_merge_queue` and `route_queue_mode` as the canonical merge-routing
   steps and forbids ad-hoc `gh pr merge` calls from the orchestrator.

3. Adding contract tests to `tests/contracts/test_instruction_surface.py`
   that enforce that both changes are present and contain the required
   sentinel phrases.

No Python source changes are needed. All five orchestrator deviations
described in the incident stem from bypassing these guidance rules, so adding
them to the two surfaces the orchestrator reads (sous-chef + kitchen_rules)
directly addresses the root cause.
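
The contract tests follow a sentinel-phrase pattern along these lines (an illustrative sketch: the real tests in `tests/contracts/test_instruction_surface.py` read the actual SKILL.md and recipe files, and the exact sentinel strings may differ):

```python
def assert_sentinels(surface_text: str, required: list[str]) -> None:
    """Fail loudly if any required sentinel phrase is missing from an
    instruction surface (a SKILL.md or a recipe's kitchen_rules)."""
    missing = [phrase for phrase in required if phrase not in surface_text]
    assert not missing, f"missing sentinel phrases: {missing}"


# Example: guard the merge-routing guidance added by this PR.
MERGE_SENTINELS = ["MERGE PHASE", "check_merge_queue", "route_queue_mode"]
```

The point of sentinel phrases over exact-text matching is that guidance prose can be reworded freely as long as the load-bearing terms survive.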

## Requirements

### DETECT — Merge Queue Detection

- **REQ-DETECT-001:** The system must determine whether the target
repository branch has a GitHub merge queue enabled before the merge
phase begins.
- **REQ-DETECT-002:** The detection result must be available to the
orchestrator without requiring a headless session (e.g., via a recipe
step, tool hook, or config lookup).
- **REQ-DETECT-003:** The detection must occur once per pipeline run,
not per-PR.

### ROUTE — Conditional Merge Routing

- **REQ-ROUTE-001:** When merge queue is available, the orchestrator
must be permitted to enroll multiple PRs via `gh pr merge --squash
--auto` in parallel.
- **REQ-ROUTE-002:** When merge queue is NOT available, the orchestrator
must merge PRs sequentially — one at a time, waiting for each to
complete before starting the next.
- **REQ-ROUTE-003:** The sequential merge path must use either the
`merge-prs` recipe or `process-issues --merge-batch` style
(`analyze-prs` → `merge-pr` per PR in order).

### GUIDE — Orchestrator Guidance

- **REQ-GUIDE-001:** The sous-chef SKILL.md must contain an explicit
instruction prohibiting parallel `gh pr merge` calls on branches without
a merge queue.
- **REQ-GUIDE-002:** The guidance must specify the required merge
workflow for non-queue repos: sequential merge via `merge-prs` recipe or
`--merge-batch` flag.
- **REQ-GUIDE-003:** The implementation recipe's kitchen_rules must
reference the merge queue detection result when describing the merge
phase.

### FAIL — Failure Handling

- **REQ-FAIL-001:** When `gh pr merge` fails with a merge conflict, the
orchestrator must route to the recipe's `on_failure` path rather than
improvising with direct git commands.
- **REQ-FAIL-002:** The conflict recovery path must rebase the PR branch
against the updated base, re-push, and retry the merge — sequentially,
not in parallel.
- **REQ-FAIL-003:** The orchestrator must never use `run_cmd` for git
investigation (rebase --abort, git log, git reset) when a merge step
fails — it must delegate to the appropriate skill.
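
The routing rule the requirements describe can be summarized as follows (illustrative pseudocode in Python; the real rule is enforced as prose in `sous-chef/SKILL.md` and `implementation.yaml`, and detection runs once per pipeline run, not per PR):

```python
def route_merges(queue_available: bool, auto_merge: bool, pr_numbers: list[int]):
    """Sketch of the check_merge_queue -> route_queue_mode decision."""
    if not auto_merge:
        return []  # nothing to merge automatically
    if queue_available:
        # Queue path: enroll all PRs at once; GitHub serializes merges.
        return [("enroll_queue", n) for n in pr_numbers]
    # Non-queue path: strictly sequential, one PR at a time, waiting for
    # each merge to complete before starting the next.
    return [("merge_sequential", n) for n in pr_numbers]
```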

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([N PRs open<br/>CI passed])
    DONE([release_issue_success<br/>Pipeline complete])
    ERROR([escalate_stop<br/>Unresolvable conflict])

    subgraph Guidance ["● Orchestrator Guidance (Modified)"]
        direction LR
        SousChef["● sous-chef/SKILL.md<br/>━━━━━━━━━━<br/>MERGE PHASE — MANDATORY<br/>Detect once · Route by availability<br/>NEVER parallel gh pr merge<br/>Route conflicts to on_failure"]
        KitchenRule["● implementation.yaml<br/>kitchen_rules<br/>━━━━━━━━━━<br/>MERGE ROUTING rule:<br/>check_merge_queue → route_queue_mode<br/>Prohibits direct gh pr merge"]
    end

    subgraph Detection ["Detection (run_cmd, once per run)"]
        direction TB
        CheckQueue["check_merge_queue<br/>━━━━━━━━━━<br/>gh api graphql mergeQueue<br/>→ queue_available: true|false"]
        RouteMode{"route_queue_mode<br/>━━━━━━━━━━<br/>queue_available?<br/>auto_merge?"}
    end

    subgraph QueuePath ["Queue Path"]
        direction TB
        AutoMerge["enable_auto_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto<br/>Enroll in GitHub queue"]
        WaitQueue["wait_for_merge_queue<br/>━━━━━━━━━━<br/>Poll until merged or ejected"]
    end

    subgraph DirectPath ["Non-Queue Path (Sequential)"]
        direction TB
        DirectMerge["direct_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto<br/>One PR at a time"]
        WaitDirect{"wait_for_direct_merge<br/>━━━━━━━━━━<br/>Poll: merged / closed / timeout"}
    end

    subgraph ConflictRecovery ["Conflict Recovery"]
        direction TB
        ConflictFix["direct_merge_conflict_fix<br/>━━━━━━━━━━<br/>run_skill: resolve-merge-conflicts<br/>Rebase + fix conflicts"]
        RePush["re_push_direct_merge_fix<br/>━━━━━━━━━━<br/>push_to_remote<br/>Force-push rebased branch"]
        RedirectMerge["redirect_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto<br/>Re-enqueue after rebase"]
    end

    %% MAIN FLOW %%
    START --> SousChef
    START --> KitchenRule
    SousChef -.->|"guides orchestrator"| CheckQueue
    KitchenRule -.->|"enforces recipe steps"| CheckQueue
    CheckQueue --> RouteMode

    RouteMode -->|"queue_available=true"| AutoMerge
    RouteMode -->|"queue_available=false"| DirectMerge
    RouteMode -->|"auto_merge=false"| DONE

    AutoMerge --> WaitQueue
    WaitQueue -->|"merged"| DONE
    WaitQueue -->|"ejected"| ConflictFix

    DirectMerge --> WaitDirect
    WaitDirect -->|"merged"| DONE
    WaitDirect -->|"closed (stale base)"| ConflictFix
    WaitDirect -->|"timeout"| ERROR

    ConflictFix -->|"resolved"| RePush
    ConflictFix -->|"escalation_required=true"| ERROR
    RePush --> RedirectMerge
    RedirectMerge --> WaitDirect

    %% CLASS ASSIGNMENTS %%
    class START,DONE,ERROR terminal;
    class SousChef,KitchenRule phase;
    class CheckQueue handler;
    class RouteMode,WaitDirect stateNode;
    class AutoMerge,WaitQueue handler;
    class DirectMerge handler;
    class ConflictFix detector;
    class RePush,RedirectMerge phase;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start, complete, and error states |
| Purple (●) | Phase/Guidance | Modified guidance surfaces: sous-chef/SKILL.md, kitchen_rules, recovery steps |
| Orange | Handler | Recipe execution steps: check_merge_queue, enable_auto_merge, direct_merge, wait_for_queue |
| Teal | State | Decision/routing nodes: route_queue_mode, wait_for_direct_merge |
| Red | Detector | Conflict recovery entry: direct_merge_conflict_fix |

Closes #444

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-444-20260320-232459-625096/temp/make-plan/orchestrator_merge_queue_sequencing_plan_2026-03-20_000000.md`

## Token Usage Summary

No token data available.

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

The `open-pr-main` skill generates comprehensive promotion PRs but
previously included no token cost data. This fix mirrors the proven
self-retrieval pattern from `open-pr` Step 0b: at startup, session token
logs are loaded from disk scoped to the current pipeline CWD via
`DefaultTokenLog.load_from_log_dir(cwd_filter=PIPELINE_CWD)`, formatted
via `TelemetryFormatter.format_token_table`, and conditionally embedded
as a `## Token Usage Summary` section in the PR body. The change is
entirely confined to `skills_extended/open-pr-main/SKILL.md` and a new
contract test file.
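
The conditional embedding can be sketched as below (a hypothetical helper: the actual skill expresses this Step 0b / Step 15 logic as markdown instructions in SKILL.md, not Python):

```python
from pathlib import Path


def token_summary_section(temp_dir: str) -> str:
    """Embed the section only when the self-retrieved summary file exists
    and is non-empty; standalone runs with no pipeline sessions degrade
    gracefully to an omitted section."""
    summary_file = Path(temp_dir) / "token_summary.md"
    if not summary_file.exists():
        return ""
    content = summary_file.read_text().strip()
    if not content:
        return ""  # TOKEN_SUMMARY_CONTENT left empty; omit the section
    return f"## Token Usage Summary\n\n{content}\n"
```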

## Requirements

### TOKEN — Token Aggregation

- **REQ-TOKEN-001:** The open-pr-main skill must include a `## Token
Usage Summary` section in the generated PR body.
- **REQ-TOKEN-002:** The token summary must aggregate usage data from
the constituent PRs that landed in integration since divergence from
main.
- **REQ-TOKEN-003:** The aggregated summary must be presented as a
formatted markdown table consistent with TelemetryFormatter output.

### SKILL — Skill Interface

- **REQ-SKILL-001:** The skill must support collecting telemetry data
without requiring an external orchestrator to pre-generate the summary.
- **REQ-SKILL-002:** The token summary section must appear in the PR
body template alongside existing sections (domain analysis, architecture
impact, etc.).

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([START])
    COMPLETE([COMPLETE])

    subgraph TokenPhase ["★ Step 0b: Token Self-Retrieval (NEW)"]
        direction TB
        Step0b["★ Token Self-Retrieval<br/>━━━━━━━━━━<br/>PIPELINE_CWD=$(pwd)<br/>load_from_log_dir(cwd_filter=PIPELINE_CWD)<br/>→ writes temp/open-pr-main/token_summary.md"]
        TokenGate{"★ token_summary.md<br/>non-empty?"}
        SetToken["★ TOKEN_SUMMARY_CONTENT<br/>━━━━━━━━━━<br/>set to file contents"]
        SkipToken["★ TOKEN_SUMMARY_CONTENT<br/>━━━━━━━━━━<br/>left empty (graceful)"]
    end

    subgraph DiscoveryPhase ["Steps 1–14: PR Discovery & Analysis (existing)"]
        direction TB
        ParseArgs["Step 1: Parse Args<br/>━━━━━━━━━━<br/>integration_branch, base_branch"]
        DiscoverPRs["Steps 2–4: Discover PRs<br/>━━━━━━━━━━<br/>merge-base → gh pr list → closing refs"]
        DomainAnalysis["Steps 5–12: Domain Analysis<br/>━━━━━━━━━━<br/>issues · diffs · domain summaries · exec summary"]
        GenDiagrams["Steps 13–14: Arch-Lens Diagrams<br/>━━━━━━━━━━<br/>select lenses · generate · validate ★/● markers"]
    end

    subgraph BodyPhase ["● Step 15: PR Body Composition (MODIFIED)"]
        direction TB
        BuildBody["● Compose PR Body Sections<br/>━━━━━━━━━━<br/>executive summary · stats · highlights<br/>Merged PRs table · Linked Issues table<br/>Domain Analysis · Architecture Impact<br/>closing refs"]
        BodyTokenGate{"★ TOKEN_SUMMARY_CONTENT<br/>non-empty?"}
        EmbedToken["★ Embed Section<br/>━━━━━━━━━━<br/>## Token Usage Summary<br/>{TOKEN_SUMMARY_CONTENT}"]
        SkipSection["omit section<br/>━━━━━━━━━━<br/>standalone / no pipeline sessions"]
        FinalBody["● temp/open-pr-main/pr_body_{ts}.md<br/>━━━━━━━━━━<br/>written to disk"]
    end

    subgraph CreatePhase ["Steps 16–18: GitHub PR Creation (existing)"]
        GHCheck{"gh auth status OK?"}
        CreatePR["gh pr create<br/>━━━━━━━━━━<br/>--body-file pr_body_{ts}.md"]
    end

    %% FLOW %%
    START --> Step0b
    Step0b --> TokenGate
    TokenGate -->|"yes"| SetToken
    TokenGate -->|"no / n=0"| SkipToken
    SetToken --> ParseArgs
    SkipToken --> ParseArgs
    ParseArgs --> DiscoverPRs
    DiscoverPRs --> DomainAnalysis
    DomainAnalysis --> GenDiagrams
    GenDiagrams --> BuildBody
    BuildBody --> BodyTokenGate
    BodyTokenGate -->|"non-empty"| EmbedToken
    BodyTokenGate -->|"empty"| SkipSection
    EmbedToken --> FinalBody
    SkipSection --> FinalBody
    FinalBody --> GHCheck
    GHCheck -->|"yes"| CreatePR
    GHCheck -->|"no (emit pr_url=)"| COMPLETE
    CreatePR --> COMPLETE

    %% CLASS ASSIGNMENTS %%
    class START,COMPLETE terminal;
    class Step0b,SetToken,SkipToken,EmbedToken newComponent;
    class TokenGate,BodyTokenGate,GHCheck detector;
    class ParseArgs,DiscoverPRs handler;
    class DomainAnalysis,GenDiagrams phase;
    class BuildBody,FinalBody output;
    class SkipSection stateNode;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | START and COMPLETE states |
| Green | New Component (★) | New nodes added by this PR |
| Red | Detector | Decision/gate nodes |
| Orange | Handler | Argument parsing and PR discovery |
| Purple | Phase | Domain analysis and diagram generation |
| Dark Teal | Output | PR body composition and file writing |
| Teal | State | Graceful skip paths |

Closes #440

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-440-20260320-232459-195668/temp/make-plan/open_pr_main_token_usage_plan_2026-03-20_000000.md`

## Token Usage Summary

No token data available.

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
… User-Defined (#459)

## Summary

`list_recipes()` currently sorts all recipes alphabetically by `name`,
interleaving bundled and user-defined recipes (UDRs). This causes
numbered positions in `autoskillit order` and `autoskillit recipes list`
to shift whenever UDRs are added or removed. Additionally, the MCP
`list_recipes` tool strips the `source` field, leaving agents unable to
distinguish provenance.

Two targeted changes fix this:

1. **`src/autoskillit/recipe/io.py`**: Change the sort key in
`list_recipes()` to sort by `(source != BUILTIN, name)` — bundled
recipes sort first (False < True), then alphabetically within each tier.
2. **`src/autoskillit/recipe/_api.py`**: Add `source` to
`RecipeListItem` TypedDict and to the dict constructed in
`format_recipe_list_response()`.

No other files require modification. The CLI `recipes list` and `order`
commands call `list_recipes()` directly and already consume `r.source`,
so they receive correct ordering automatically once the sort key is
fixed.
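
The sort-key change can be demonstrated in isolation (the tuples below are a minimal stand-in for `RecipeInfo`; the real `list_recipes()` returns a `LoadResult` of full records):

```python
from enum import Enum


class RecipeSource(Enum):
    BUILTIN = "builtin"
    PROJECT = "project"


# Minimal stand-in records: (name, source).
recipes = [
    ("zebra", RecipeSource.BUILTIN),
    ("alpha", RecipeSource.PROJECT),
    ("implementation", RecipeSource.BUILTIN),
]

# The new key: (source != BUILTIN, name). False < True, so BUILTIN recipes
# sort first, then alphabetically by name within each tier.
ordered = sorted(recipes, key=lambda r: (r[1] != RecipeSource.BUILTIN, r[0]))
# Resulting name order: implementation, zebra, alpha (BUILTIN tier first)
```

Because Python compares tuples element by element, the boolean acts as a tier discriminant and the name only breaks ties within a tier, which is exactly the stable bundled-first ordering the CLI `order` menu needs.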

## Requirements

### ORD — Recipe Ordering

- **REQ-ORD-001:** The `list_recipes()` function must return bundled
recipes before user-defined recipes.
- **REQ-ORD-002:** Within each source tier (bundled, user-defined),
recipes must be sorted alphabetically by name.
- **REQ-ORD-003:** The ordering must be consistent across all consumer
surfaces: CLI `recipes list`, CLI `order`, and the MCP `list_recipes`
tool.

### MCP — MCP Tool Response

- **REQ-MCP-001:** The `format_recipe_list_response` function must
include the `source` field in each recipe entry returned by the MCP
`list_recipes` tool.

## Architecture Impact

### Data Lineage Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart LR
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;

    subgraph Origins ["Data Origins"]
        BUILTIN["bundled recipes/<br/>━━━━━━━━━━<br/>RecipeSource.BUILTIN<br/>.yaml files"]
        PROJECT[".autoskillit/recipes/<br/>━━━━━━━━━━<br/>RecipeSource.PROJECT<br/>.yaml files"]
    end

    subgraph Collection ["_collect_recipes()"]
        COLLECTOR["RecipeInfo stream<br/>━━━━━━━━━━<br/>name, description,<br/>source, path, summary"]
    end

    subgraph Sorting ["● list_recipes() — recipe/io.py"]
        SORT["● sort key changed<br/>━━━━━━━━━━<br/>(source != BUILTIN, name)<br/>BUILTIN first, PROJECT after<br/>alphabetical within tier"]
    end

    subgraph CLISurfaces ["CLI Consumers (direct RecipeInfo)"]
        CLI_LIST["recipes list<br/>━━━━━━━━━━<br/>columnar table<br/>NAME / SOURCE / DESC"]
        CLI_ORDER["order command<br/>━━━━━━━━━━<br/>numbered menu<br/>stable positions"]
    end

    subgraph Projection ["● format_recipe_list_response() — recipe/_api.py"]
        FMT["● RecipeListItem<br/>━━━━━━━━━━<br/>name, description,<br/>summary, + source"]
    end

    subgraph MCPWire ["MCP Tool Handler"]
        MCP_TOOL["list_recipes tool<br/>━━━━━━━━━━<br/>json.dumps()<br/>JSON string over MCP"]
    end

    subgraph HookLayer ["● pretty_output.py — PostToolUse Hook"]
        HOOK["● _fmt_list_recipes()<br/>━━━━━━━━━━<br/>Markdown-KV render<br/>- name [source]: desc<br/>coverage contracts updated"]
    end

    BUILTIN -->|"_collect_recipes(BUILTIN)"| COLLECTOR
    PROJECT -->|"_collect_recipes(PROJECT)"| COLLECTOR
    COLLECTOR -->|"unsorted list[RecipeInfo]"| SORT
    SORT -->|"LoadResult[RecipeInfo]<br/>bundled-first order"| CLI_LIST
    SORT -->|"LoadResult[RecipeInfo]<br/>bundled-first order"| CLI_ORDER
    SORT -->|"LoadResult[RecipeInfo]"| FMT
    FMT -->|"dict with source field"| MCP_TOOL
    MCP_TOOL -.->|"JSON string (MCP wire)"| HOOK

    class BUILTIN,PROJECT cli;
    class COLLECTOR stateNode;
    class SORT phase;
    class CLI_LIST,CLI_ORDER output;
    class FMT handler;
    class MCP_TOOL integration;
    class HOOK output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Origins | YAML source files on disk (read-only inputs) |
| Teal | State | RecipeInfo stream (internal struct) |
| Purple | Phase | ● Sort stage — modified to stable bundled-first ordering |
| Orange | Handler | ● Projection stage — RecipeListItem gains `source` field |
| Red | MCP | MCP tool handler (json serialization) |
| Dark Teal | Output | CLI surfaces and ● PostToolUse hook (Markdown-KV render) |

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;

    START([list_recipes called])

    subgraph Discovery ["Discovery — _collect_recipes()"]
        direction TB
        PROJ["Scan PROJECT tier<br/>━━━━━━━━━━<br/>.autoskillit/recipes/*.yaml"]
        BUILTIN_SCAN["Scan BUILTIN tier<br/>━━━━━━━━━━<br/>pkg_root()/recipes/*.yaml"]
        DEDUP{"Name already seen?<br/>━━━━━━━━━━<br/>PROJECT shadows BUILTIN"}
        APPEND["Append RecipeInfo<br/>━━━━━━━━━━<br/>name, desc, source, path, summary"]
    end

    subgraph SortPhase ["● list_recipes() — Sort Phase"]
        direction TB
        SORT["● Sort by<br/>━━━━━━━━━━<br/>(source != BUILTIN, name)<br/>False < True → BUILTIN first<br/>then alphabetical within tier"]
    end

    ROUTE{"Consumer type?"}

    subgraph CLIPath ["CLI Path (direct RecipeInfo)"]
        direction TB
        CLI_RENDER["recipes list / order<br/>━━━━━━━━━━<br/>Iterate LoadResult.items<br/>Render table / numbered menu"]
        CLI_OUT["stdout<br/>━━━━━━━━━━<br/>Columnar table or<br/>interactive numbered menu"]
    end

    subgraph MCPPath ["MCP Path"]
        direction TB
        FMT["● format_recipe_list_response()<br/>━━━━━━━━━━<br/>Project RecipeInfo →<br/>RecipeListItem with source field"]
        ERRCHECK{"result.errors<br/>present?"}
        JSON_DUMP["json.dumps()<br/>━━━━━━━━━━<br/>Serialize to JSON string"]
        HOOK["● _fmt_list_recipes() hook<br/>━━━━━━━━━━<br/>PostToolUse: Markdown-KV render<br/>- name [source]: desc<br/>coverage contracts updated"]
    end

    END_CLI([CLI complete])
    END_MCP([MCP response delivered])

    START --> PROJ
    PROJ -->|"each .yaml"| DEDUP
    PROJ -->|"after PROJECT"| BUILTIN_SCAN
    BUILTIN_SCAN -->|"each .yaml"| DEDUP
    DEDUP -->|"no — new name"| APPEND
    DEDUP -->|"yes — skip"| PROJ
    APPEND --> PROJ
    BUILTIN_SCAN -->|"all collected"| SORT

    SORT -->|"LoadResult[RecipeInfo]"| ROUTE
    ROUTE -->|"CLI invocation"| CLI_RENDER
    ROUTE -->|"MCP tool call"| FMT
    CLI_RENDER --> CLI_OUT --> END_CLI

    FMT --> ERRCHECK
    ERRCHECK -->|"no errors"| JSON_DUMP
    ERRCHECK -->|"errors present"| JSON_DUMP
    JSON_DUMP --> HOOK --> END_MCP

    class START,END_CLI,END_MCP terminal;
    class PROJ,BUILTIN_SCAN,APPEND handler;
    class DEDUP,ERRCHECK,ROUTE stateNode;
    class SORT phase;
    class CLI_RENDER handler;
    class CLI_OUT output;
    class FMT phase;
    class JSON_DUMP handler;
    class HOOK output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Entry and exit states |
| Orange | Handler | Discovery scan, append, JSON serialize, CLI render |
| Teal | State | Decision and routing nodes |
| Purple | Phase | ● Sort (stable ordering) and ● MCP projection (source field) |
| Dark Teal | Output | CLI stdout and ● hook Markdown-KV render |
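The sort key shown in the diagram — `(source != BUILTIN, name)` — can be sketched as follows (the `Source` enum and `RecipeInfo` fields are simplified stand-ins for the real types):

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    BUILTIN = "builtin"
    PROJECT = "project"

@dataclass(frozen=True)
class RecipeInfo:
    name: str
    source: Source

def sort_recipes(items):
    # (source != BUILTIN, name): False sorts before True, so BUILTIN
    # recipes come first; sorted() is stable, and name breaks ties
    # alphabetically within each tier.
    return sorted(items, key=lambda r: (r.source is not Source.BUILTIN, r.name))

recipes = [
    RecipeInfo("zeta", Source.PROJECT),
    RecipeInfo("alpha", Source.PROJECT),
    RecipeInfo("beta", Source.BUILTIN),
]
print([r.name for r in sort_recipes(recipes)])  # ['beta', 'alpha', 'zeta']
```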

Closes #456

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-456-20260320-232500-690754/temp/make-plan/stable_recipe_listing_order_plan_2026-03-20_120000.md`

## Token Usage Summary

No token data available.

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

Adds a **PARALLEL STEP SCHEDULING — MANDATORY** section to
`src/autoskillit/skills/sous-chef/SKILL.md` that codifies the wavefront
scheduling rule: when running multiple pipelines in parallel, all fast
steps (MCP tool calls completing in seconds) across ALL pipelines must
be completed before any slow step (`run_skill`, which launches headless
sessions taking minutes) is launched. Slow steps are then batched and
launched together so they overlap in wall-clock time. Additionally, adds
a `run_mode` ingredient (default: `sequential`, option: `parallel`) to
the `implementation` and `remediation` bundled recipes.

## Requirements

### PROMPT

- **REQ-PROMPT-001:** The sous-chef SKILL.md must contain a PARALLEL
STEP SCHEDULING section that is marked MANDATORY.
- **REQ-PROMPT-002:** The section must define fast steps as MCP tool
calls that complete in seconds: `run_cmd`, `clone_repo`,
`create_unique_branch`, `fetch_github_issue`, `claim_issue`,
`merge_worktree`, `test_check`, `reset_test_dir`, `classify_fix`.
- **REQ-PROMPT-003:** The section must define slow steps as any
`run_skill` invocation.
- **REQ-PROMPT-004:** The section must instruct the orchestrator to
complete all fast steps for ALL pipelines before launching any slow
step.
- **REQ-PROMPT-005:** The section must instruct the orchestrator to
launch all slow steps together in one parallel batch once all pipelines
are aligned at a slow step boundary.
- **REQ-PROMPT-006:** The section must explicitly prohibit launching a
slow step for one pipeline while another pipeline still has fast steps
pending.
- **REQ-PROMPT-007:** The section must explain the wall-clock rationale:
batched rounds wait for the slowest step, so fast steps that finish
instantly idle until the slow step completes.
- **REQ-PROMPT-008:** The section must be placed after the existing
MULTIPLE ISSUES section in the SKILL.md file.

### INGREDIENT

- **REQ-INGREDIENT-001:** Recipes that support multi-issue execution
(e.g., `implementation`, `remediation`) must declare a `run_mode`
ingredient with options `sequential` (default) and `parallel`.
- **REQ-INGREDIENT-002:** The default value for `run_mode` must be
`sequential` — parallel execution requires explicit opt-in.
- **REQ-INGREDIENT-003:** When `run_mode: parallel` is set, the
orchestrator must apply the wavefront scheduling rule from
REQ-PROMPT-004 through REQ-PROMPT-006.
- **REQ-INGREDIENT-004:** When `run_mode: sequential` is set (or
defaulted), the orchestrator must process issues one at a time in the
order provided.
- **REQ-INGREDIENT-005:** The `run_mode` ingredient sets the default rather
than a hard lock: if the user overrides verbally at runtime (e.g., says "run
in parallel"), the verbal instruction takes precedence per the existing
MULTIPLE ISSUES sous-chef rule.
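The wavefront rule from REQ-PROMPT-004 through REQ-PROMPT-006 can be sketched as a scheduling loop (illustrative only; the real orchestrator is prompt-driven, and the pipeline/round data structures here are assumptions):

```python
# Fast steps per REQ-PROMPT-002; any other step name is treated as slow.
FAST_STEPS = {
    "run_cmd", "clone_repo", "create_unique_branch", "fetch_github_issue",
    "claim_issue", "merge_worktree", "test_check", "reset_test_dir",
    "classify_fix",
}

def wavefront(pipelines):
    """Drain all fast steps across every pipeline, then batch slow steps.

    Each pipeline is a list of step names; returns the execution rounds.
    """
    rounds = []
    queues = [list(p) for p in pipelines]
    while any(queues):
        # Drain phase: execute fast steps for ALL pipelines before
        # launching any slow step (REQ-PROMPT-004 / REQ-PROMPT-006).
        fast = [(i, q.pop(0)) for i, q in enumerate(queues)
                if q and q[0] in FAST_STEPS]
        if fast:
            rounds.append(("fast", fast))
            continue
        # Every non-empty pipeline is aligned at a slow boundary:
        # launch all run_skill invocations as one batch (REQ-PROMPT-005)
        # so their wall-clock time overlaps.
        slow = [(i, q.pop(0)) for i, q in enumerate(queues) if q]
        rounds.append(("slow", slow))
    return rounds
```

A staggered example shows REQ-PROMPT-006 in action: a pipeline already at `run_skill` waits until the other pipeline's fast steps finish, then both slow steps launch together.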

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    START([START: N issues submitted])

    subgraph ModeSelect ["● Mode Selection (MULTIPLE ISSUES + run_mode)"]
        direction TB
        MODE{"run_mode or<br/>user intent?"}
        SEQ["Sequential Mode<br/>━━━━━━━━━━<br/>Process one issue at a time<br/>in provided order"]
        PAR["Parallel Mode Selected<br/>━━━━━━━━━━<br/>Launch N independent pipelines<br/>apply wavefront scheduling"]
    end

    subgraph FastDrain ["★ Fast-Step Drain (PARALLEL STEP SCHEDULING)"]
        direction TB
        INSPECT["● Inspect next pending step<br/>━━━━━━━━━━<br/>for each of N pipelines"]
        HAS_FAST{"Any pipeline has<br/>a fast step pending?"}
        RUN_FAST["★ Execute all fast steps<br/>━━━━━━━━━━<br/>run_cmd · clone_repo<br/>create_unique_branch<br/>fetch_github_issue<br/>claim_issue · merge_worktree<br/>test_check · reset_test_dir<br/>classify_fix"]
    end

    subgraph SlowBatch ["★ Slow-Step Batch (PARALLEL STEP SCHEDULING)"]
        direction TB
        ALL_SLOW["★ All pipelines at slow boundary<br/>━━━━━━━━━━<br/>every next step is run_skill"]
        LAUNCH["★ Launch all slow steps together<br/>━━━━━━━━━━<br/>run_skill × N in parallel<br/>wall-clock time overlaps"]
        WAIT["Wait for batch completion<br/>━━━━━━━━━━<br/>bounded by slowest session"]
    end

    DONE_CHECK{"All pipelines<br/>complete?"}
    DONE([DONE: all N pipelines finished])

    START --> MODE
    MODE -->|"sequential / run_mode=sequential"| SEQ
    MODE -->|"parallel / run_mode=parallel"| PAR
    PAR --> INSPECT
    INSPECT --> HAS_FAST
    HAS_FAST -->|"YES — drain fast steps"| RUN_FAST
    RUN_FAST -->|"re-inspect after batch"| INSPECT
    HAS_FAST -->|"NO — all aligned at slow step"| ALL_SLOW
    ALL_SLOW --> LAUNCH
    LAUNCH --> WAIT
    WAIT --> DONE_CHECK
    DONE_CHECK -->|"more steps remain"| INSPECT
    DONE_CHECK -->|"all finished"| DONE
    SEQ --> DONE

    class START,DONE terminal;
    class MODE detector;
    class SEQ,PAR stateNode;
    class INSPECT phase;
    class HAS_FAST detector;
    class RUN_FAST newComponent;
    class ALL_SLOW,LAUNCH newComponent;
    class WAIT handler;
    class DONE_CHECK detector;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start and end states |
| Red | Detector | Decision points and routing guards |
| Teal | State | Mode selection outcomes |
| Purple | Phase | Step inspection / control flow |
| Green (★) | New Component | New scheduling logic added by this PR |
| Orange | Handler | Wait / synchronization (bounded by slowest) |

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    START([START: load_recipe called])

    subgraph IngredientDecl ["● Ingredient Declaration (INIT_ONLY)"]
        direction TB
        IMPL_YAML["● implementation.yaml<br/>━━━━━━━━━━<br/>run_mode:<br/>  default: sequential"]
        REMED_YAML["● remediation.yaml<br/>━━━━━━━━━━<br/>run_mode:<br/>  default: sequential"]
        OVERRIDE["override dict<br/>━━━━━━━━━━<br/>caller may pass<br/>run_mode=parallel"]
    end

    subgraph StructValidation ["Structural Validation Gates"]
        direction TB
        PRESENCE["ingredient presence check<br/>━━━━━━━━━━<br/>referenced names must<br/>be declared in recipe"]
        NO_ENUM["gap: no value validation<br/>━━━━━━━━━━<br/>sequential/parallel NOT<br/>enforced by schema"]
    end

    subgraph ActiveRecipe ["Resolved Ingredient State (INIT_ONLY after this point)"]
        direction TB
        RESOLVED["run_mode resolved<br/>━━━━━━━━━━<br/>sequential (default)<br/>OR parallel (override)"]
    end

    subgraph PromptContract ["● sous-chef SKILL.md Contract (reads run_mode)"]
        direction TB
        SKILL_READ["● PARALLEL STEP SCHEDULING<br/>━━━━━━━━━━<br/>reads run_mode at runtime<br/>applies wavefront rule if parallel"]
    end

    subgraph ContractTests ["★ Contract Enforcement (new test suite)"]
        direction TB
        SCHED_TESTS["★ test_sous_chef_scheduling.py<br/>━━━━━━━━━━<br/>REQ-PROMPT-001..008<br/>asserts section present + correct"]
        INGRED_TESTS["● TestRunModeIngredient<br/>━━━━━━━━━━<br/>REQ-INGREDIENT-001..005<br/>asserts run_mode declared + default"]
    end

    DONE([Orchestrator applies scheduling rule])

    START --> IMPL_YAML
    START --> REMED_YAML
    OVERRIDE --> RESOLVED
    IMPL_YAML --> PRESENCE
    REMED_YAML --> PRESENCE
    PRESENCE --> NO_ENUM
    NO_ENUM --> RESOLVED
    RESOLVED --> SKILL_READ
    SKILL_READ --> DONE

    SCHED_TESTS -.->|"enforces contract on"| SKILL_READ
    INGRED_TESTS -.->|"enforces contract on"| IMPL_YAML
    INGRED_TESTS -.->|"enforces contract on"| REMED_YAML

    class START,DONE terminal;
    class IMPL_YAML,REMED_YAML handler;
    class OVERRIDE phase;
    class PRESENCE stateNode;
    class NO_ENUM gap;
    class RESOLVED detector;
    class SKILL_READ handler;
    class SCHED_TESTS newComponent;
    class INGRED_TESTS newComponent;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start and end states |
| Orange | Handler | Recipe YAML definitions and prompt reads |
| Purple | Phase | Caller-supplied override |
| Teal | Gate | Structural validation (presence check) |
| Yellow | Gap | Missing value constraint (no enum enforcement) |
| Red | Resolved | INIT_ONLY state after resolution — never mutated |
| Green (★) | New Component | New contract tests added by this PR |

Closes #461

## Implementation Plan

Plan file:
`temp/make-plan/add_parallel_step_scheduling_rule_plan_2026-03-21_085800.md`

## Token Usage Summary

No token data available for this pipeline run.

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…ence (#464)

## Summary

Implements first-run detection for `cook` sessions and a guided
onboarding menu with
concurrent background intelligence gathering. When a project has been
initialized
(`autoskillit init`) but has never been onboarded, `cook` intercepts the
session launch
to present a 5-option interactive menu (Analyze, GitHub Issue, Demo Run,
Write Recipe,
Skip). Background threads gather project intelligence (build tools,
pre-commit scanner,
good-first-issues) concurrently while the user reads the menu. The
chosen action becomes
the `initial_prompt` for the Claude session. A `.autoskillit/.onboarded`
marker is written
after any menu path completes, preventing re-prompting on subsequent
`cook` invocations.
`autoskillit init --force` resets the marker. Also adds
`tailorable`/`tailoring_hints`
frontmatter fields to `SkillInfo` as infrastructure for the upcoming
skill-tailoring
workflow (Issue #215).
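The first-run gate described above can be sketched roughly like this (gate 4, `detect_project_local_overrides()`, is omitted; the exact signatures are assumptions):

```python
import tempfile
from pathlib import Path

def is_first_run(project_root: Path) -> bool:
    """All gates must pass for cook to show the onboarding menu."""
    base = project_root / ".autoskillit"
    if not (base / "config.yaml").exists():          # gate 1: init has run
        return False
    if (base / ".onboarded").exists():               # gate 2: not yet onboarded
        return False
    recipes = base / "recipes"
    if recipes.is_dir() and any(recipes.iterdir()):  # gate 3: no custom recipes
        return False
    return True

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / ".autoskillit").mkdir()
    results = [is_first_run(root)]                   # no config.yaml yet
    (root / ".autoskillit" / "config.yaml").touch()
    results.append(is_first_run(root))               # init done, never onboarded
    (root / ".autoskillit" / ".onboarded").touch()
    results.append(is_first_run(root))               # marker present
```

Only the middle state (initialized but never onboarded) triggers the menu; `init --force` deleting the marker returns the project to that state.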

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([● cook])
    DONE([session complete])
    ABORT([aborted])

    subgraph Guard ["Precondition Guards  ·  ● _cook.py"]
        direction TB
        CLAUDE{"claude in PATH?"}
        CONFIRM{"Launch session?<br/>━━━━━━━━━━<br/>[Enter / n]"}
    end

    subgraph Detection ["★ First-Run Detection  ·  _onboarding.is_first_run()"]
        direction TB
        FR{"★ is_first_run<br/>━━━━━━━━━━<br/>config.yaml exists?<br/>.onboarded absent?<br/>recipes/ empty?<br/>no overrides?"}
    end

    subgraph Onboarding ["★ Guided Onboarding  ·  run_onboarding_menu()"]
        direction TB
        WELCOME["★ welcome banner<br/>━━━━━━━━━━<br/>Would you like help?"]
        OPT_IN{"Y / n?"}
        INTEL["★ gather_intel (bg threads)<br/>━━━━━━━━━━<br/>_detect_scanner()<br/>_detect_build_tools()<br/>_fetch_good_first_issues()"]
        MENU["★ menu display<br/>━━━━━━━━━━<br/>A / B / C / D / E"]
        CHOICE{"★ user choice<br/>━━━━━━━━━━<br/>[A/B/C/D/E]"}
    end

    subgraph Routes ["★ initial_prompt Routes"]
        direction LR
        PA["A: /autoskillit:setup-project"]
        PB["B: /autoskillit:prepare-issue {ref}"]
        PC["C: /autoskillit:setup-project {target}"]
        PD["D: /autoskillit:write-recipe"]
    end

    subgraph SessionLaunch ["● Session Launch  ·  _cook.py"]
        direction TB
        BUILD["● build_interactive_cmd<br/>━━━━━━━━━━<br/>initial_prompt injected<br/>skills_dir added"]
        RUN["subprocess.run<br/>━━━━━━━━━━<br/>Claude interactive session"]
    end

    MARKER_SKIP["★ mark_onboarded<br/>━━━━━━━━━━<br/>write .autoskillit/.onboarded<br/>(skip / decline path)"]
    MARKER_DONE["★ mark_onboarded<br/>━━━━━━━━━━<br/>write .autoskillit/.onboarded<br/>(finally: A–D complete)"]

    %% MAIN FLOW %%
    START --> CLAUDE
    CLAUDE -->|"not found"| ABORT
    CLAUDE -->|"found"| CONFIRM
    CONFIRM -->|"n"| ABORT
    CONFIRM -->|"enter"| FR

    FR -->|"not first run"| BUILD
    FR -->|"first run"| WELCOME

    WELCOME --> OPT_IN
    OPT_IN -->|"n / no"| MARKER_SKIP
    OPT_IN -->|"Y / enter"| INTEL
    INTEL --> MENU
    MENU --> CHOICE

    CHOICE -->|"A"| PA
    CHOICE -->|"B"| PB
    CHOICE -->|"C"| PC
    CHOICE -->|"D"| PD
    CHOICE -->|"E / other"| MARKER_SKIP

    PA & PB & PC & PD --> BUILD
    MARKER_SKIP -->|"initial_prompt = None"| BUILD
    BUILD --> RUN
    RUN -->|"session exits · finally block"| MARKER_DONE
    MARKER_DONE --> DONE

    %% CLASS ASSIGNMENTS %%
    class START,DONE,ABORT terminal;
    class CLAUDE,CONFIRM detector;
    class FR detector;
    class OPT_IN,CHOICE stateNode;
    class WELCOME,MENU newComponent;
    class INTEL newComponent;
    class PA,PB,PC,PD newComponent;
    class MARKER_SKIP,MARKER_DONE newComponent;
    class BUILD,RUN handler;
```

### Operational Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    subgraph CLILayer ["CLI ENTRY POINTS"]
        direction LR
        COOK["● autoskillit cook / c<br/>━━━━━━━━━━<br/>Interactive Claude session<br/>+ first-run onboarding gate"]
        INIT["● autoskillit init<br/>━━━━━━━━━━<br/>--force  resets onboarding<br/>--test-command --scope"]
        SKILLS["autoskillit skills list<br/>━━━━━━━━━━<br/>Lists skills incl.<br/>tailorable metadata"]
    end

    subgraph OnboardingModule ["★ Onboarding Module  ·  cli/_onboarding.py"]
        direction TB
        DETECT["★ is_first_run()<br/>━━━━━━━━━━<br/>reads config.yaml ✓<br/>reads .onboarded ✓<br/>reads recipes/ ✓<br/>reads .claude/skills/ ✓"]
        MENU["★ run_onboarding_menu()<br/>━━━━━━━━━━<br/>welcome + Y/n<br/>background intel gather<br/>A/B/C/D/E choice"]
        MARKER_WRITE["★ mark_onboarded()<br/>━━━━━━━━━━<br/>writes .autoskillit/.onboarded"]
    end

    subgraph ConfigState ["CONFIGURATION & STATE FILES"]
        direction TB
        CONFIG[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>read: first-run gate<br/>write: init"]
        ONBOARDED["★ .autoskillit/.onboarded<br/>━━━━━━━━━━<br/>gitignored marker<br/>absent = first run<br/>present = onboarded"]
        RECIPES[".autoskillit/recipes/<br/>━━━━━━━━━━<br/>read: first-run gate<br/>(empty = first run)"]
    end

    subgraph SkillMeta ["● SkillInfo Metadata  ·  workspace/skills.py"]
        direction TB
        SKILL_INFO["● SkillInfo dataclass<br/>━━━━━━━━━━<br/>★ tailorable: bool<br/>★ tailoring_hints: str<br/>(from SKILL.md frontmatter)"]
    end

    subgraph Outputs ["OBSERVABILITY OUTPUTS"]
        direction TB
        SESSION["Claude interactive session<br/>━━━━━━━━━━<br/>initial_prompt injected<br/>when first-run path taken"]
        GITIGNORE[".autoskillit/.gitignore<br/>━━━━━━━━━━<br/>auto-includes .onboarded<br/>via ensure_project_temp()"]
    end

    %% FLOWS %%
    COOK -->|"reads"| DETECT
    DETECT -->|"reads"| CONFIG
    DETECT -->|"reads"| ONBOARDED
    DETECT -->|"reads"| RECIPES
    DETECT -->|"first run → invoke"| MENU
    MENU -->|"writes"| MARKER_WRITE
    MARKER_WRITE -->|"creates"| ONBOARDED
    COOK -->|"launches"| SESSION

    INIT -->|"writes"| CONFIG
    INIT -->|"--force: deletes"| ONBOARDED

    SKILLS -->|"reads"| SKILL_INFO
    CONFIG -->|"gitignore updated by"| GITIGNORE

    %% CLASS ASSIGNMENTS %%
    class COOK,INIT,SKILLS cli;
    class DETECT,MENU newComponent;
    class MARKER_WRITE newComponent;
    class CONFIG,RECIPES stateNode;
    class ONBOARDED newComponent;
    class SKILL_INFO handler;
    class SESSION,GITIGNORE output;
```

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    subgraph Gates ["★ FIRST-RUN DETECTION GATES  ·  is_first_run()"]
        direction TB
        G1["★ Gate 1: config.yaml exists<br/>━━━━━━━━━━<br/>False → not first run<br/>(init has not run)"]
        G2["★ Gate 2: .onboarded absent<br/>━━━━━━━━━━<br/>False → not first run<br/>(already onboarded)"]
        G3["★ Gate 3: recipes/ empty<br/>━━━━━━━━━━<br/>False → not first run<br/>(customized project)"]
        G4["★ Gate 4: no overrides<br/>━━━━━━━━━━<br/>detect_project_local_overrides()<br/>False → not first run"]
    end

    subgraph MarkerLifecycle ["★ .onboarded Marker Lifecycle"]
        direction LR
        ABSENT["★ .onboarded ABSENT<br/>━━━━━━━━━━<br/>initial state after init<br/>triggers first-run gate"]
        PRESENT["★ .onboarded PRESENT<br/>━━━━━━━━━━<br/>idempotent write<br/>atomic_write (no overwrite)"]
    end

    subgraph WriteGuards ["★ Write Guards  ·  mark_onboarded()"]
        direction TB
        IDEMPOTENT["★ exists() check before write<br/>━━━━━━━━━━<br/>no-op if already present<br/>(prevents double-write)"]
        ATOMIC["atomic_write()<br/>━━━━━━━━━━<br/>temp-file + rename<br/>(prevents partial write)"]
        GITIGNORE["● ensure_project_temp()<br/>━━━━━━━━━━<br/>.onboarded in _GITIGNORE_ENTRIES<br/>(prevents accidental commit)"]
    end

    subgraph ResetGate ["● RESET GATE  ·  init --force"]
        direction TB
        FORCE{"● --force flag<br/>━━━━━━━━━━<br/>config written?"}
        DELETE["● onboarded_marker.unlink<br/>━━━━━━━━━━<br/>missing_ok=True<br/>(safe delete)"]
    end

    subgraph TransientState ["★ TRANSIENT STATE  ·  cook() call frame"]
        direction TB
        PROMPT["initial_prompt: str | None<br/>━━━━━━━━━━<br/>None → no onboarding<br/>str → skill injected as<br/>Claude opening message"]
        INTEL["★ OnboardingIntel<br/>━━━━━━━━━━<br/>scanner_found: str | None<br/>build_tools: list[str]<br/>github_issues: list[str]<br/>populated once, read-only"]
    end

    subgraph SkillMeta ["● SKILL METADATA  ·  SkillInfo"]
        direction TB
        TAILORABLE["● SkillInfo.tailorable: bool<br/>━━━━━━━━━━<br/>parsed from SKILL.md<br/>INIT_ONLY (frozen dataclass)"]
        HINTS["● SkillInfo.tailoring_hints: str<br/>━━━━━━━━━━<br/>parsed from SKILL.md<br/>INIT_ONLY (frozen dataclass)"]
    end

    %% GATE CHAIN %%
    G1 -->|"pass"| G2
    G2 -->|"pass"| G3
    G3 -->|"pass"| G4
    G4 -->|"all pass → first run"| ABSENT

    %% MARKER LIFECYCLE %%
    ABSENT -->|"read by is_first_run()"| G2
    ABSENT -->|"onboarding complete"| IDEMPOTENT
    IDEMPOTENT -->|"not exists"| ATOMIC
    ATOMIC -->|"writes"| PRESENT
    GITIGNORE -->|"gitignores"| PRESENT

    %% RESET %%
    FORCE -->|"yes"| DELETE
    DELETE -->|"resets to"| ABSENT

    %% TRANSIENT %%
    G4 -->|"first run detected"| PROMPT
    INTEL -->|"feeds suggestions to"| PROMPT

    %% CLASS ASSIGNMENTS %%
    class G1,G2,G3,G4 detector;
    class ABSENT,PRESENT newComponent;
    class IDEMPOTENT,ATOMIC newComponent;
    class GITIGNORE handler;
    class FORCE stateNode;
    class DELETE handler;
    class PROMPT phase;
    class INTEL newComponent;
    class TAILORABLE,HINTS handler;
```
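The Write Guards in the lifecycle diagram (exists-check, then atomic temp-file + rename) can be sketched as a minimal `mark_onboarded`; `atomic_write` is approximated here with `os.replace`, and the real helper may differ:

```python
import os
import tempfile
from pathlib import Path

def mark_onboarded(project_root: Path) -> None:
    marker = project_root / ".autoskillit" / ".onboarded"
    if marker.exists():                    # idempotent: no-op if already present
        return
    marker.parent.mkdir(parents=True, exist_ok=True)
    # temp-file + rename: a crash mid-write never leaves a partial marker
    fd, tmp = tempfile.mkstemp(dir=marker.parent)
    os.close(fd)
    os.replace(tmp, marker)

with tempfile.TemporaryDirectory() as d:
    mark_onboarded(Path(d))
    mark_onboarded(Path(d))                # second call is a no-op
    ok = (Path(d) / ".autoskillit" / ".onboarded").exists()
```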

Closes #457

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-457-20260321-085654-812593/temp/make-plan/first_run_detection_guided_onboarding_plan_2026-03-21_090000.md`

## Token Usage Summary

No token data available for this pipeline run.

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

The adjudication pipeline classified `**plan_path** = /path` (model
output with bold markdown on the token name) as `CONTRACT_VIOLATION` — a
terminal failure state — because `re.search('plan_path\s*=\s*/.+',
'**plan_path** = /path')` returned `None`. The model's markdown
decorator (`**`) sits between the token name and the `=`, breaking the
regex while the semantic content is fully intact.

The root weakness was that `_check_expected_patterns` applied raw
regexes to unprocessed model output with no tolerance for formatting
variation. This single function is the universal choke-point for all 25+
skill contracts that use `key = value` structured output tokens. Part A
installs a markdown normalizer (`_strip_markdown_from_tokens`) at the
choke-point and adds comprehensive regression tests. Part B adds a
static enforcement semantic rule
(`output-section-no-markdown-directive`) and derives
`_OUTPUT_PATH_TOKENS` from the contract schema, with no-markdown
directives added to all 31 at-risk SKILL.md output sections.

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([START: run_headless_core])
    SUCCEEDED([SUCCEEDED])
    RETRIABLE([RETRIABLE])
    FAILED([FAILED])

    subgraph Execution ["Session Execution"]
        direction TB
        Runner["● runner()<br/>━━━━━━━━━━<br/>SubprocessResult<br/>termination + channel"]
        TermDispatch{"termination?"}
        StaleRecovery["stale recovery<br/>━━━━━━━━━━<br/>re-parse stdout"]
    end

    subgraph Recovery ["Recovery Layer"]
        direction TB
        RecoverMarker["_recover_from_separate_marker<br/>━━━━━━━━━━<br/>scan assistant_messages<br/>for standalone marker"]
        RecoverBlock["● _recover_block_from_assistant_messages<br/>━━━━━━━━━━<br/>combine assistant_messages<br/>re-check patterns (normalized)"]
    end

    subgraph Outcome ["Outcome Computation"]
        direction TB
        ComputeSuccess{"● _compute_success<br/>━━━━━━━━━━<br/>Channel B bypass<br/>or termination dispatch"}
        CheckContent["_check_session_content<br/>━━━━━━━━━━<br/>error / empty / subtype<br/>marker / patterns"]
        CheckPatterns["● _check_expected_patterns<br/>━━━━━━━━━━<br/>normalize then re.search<br/>all patterns must match"]
        NormalizeMD["★ _strip_markdown_from_tokens<br/>━━━━━━━━━━<br/>**token** = → token =<br/>*token* = → token ="]
        ContradictionGuard{"contradiction guard<br/>━━━━━━━━━━<br/>success + retry?"}
        DeadEndGuard{"dead-end guard<br/>━━━━━━━━━━<br/>failed + no-retry<br/>+ channel confirmed?"}
        EvalContent["_evaluate_content_state<br/>━━━━━━━━━━<br/>ABSENT / CONTRACT_VIOLATION<br/>/ SESSION_ERROR / COMPLETE"]
    end

    subgraph Result ["Result Assembly"]
        direction TB
        NormalizeSubtype["● _normalize_subtype<br/>━━━━━━━━━━<br/>map outcome → label<br/>adjudicated_failure / success…"]
        BudgetGuard["_apply_budget_guard<br/>━━━━━━━━━━<br/>cap consecutive retries"]
        ZeroWriteGate["zero-write gate<br/>━━━━━━━━━━<br/>demote if write expected<br/>but write_count==0"]
    end

    %% MAIN FLOW %%
    START --> Runner
    Runner --> TermDispatch

    TermDispatch -->|"STALE"| StaleRecovery
    TermDispatch -->|"TIMED_OUT"| ComputeSuccess
    TermDispatch -->|"NATURAL_EXIT / COMPLETED"| RecoverMarker

    StaleRecovery -->|"recovered"| SUCCEEDED
    StaleRecovery -->|"not recovered"| RETRIABLE

    RecoverMarker -->|"marker found in messages"| RecoverBlock
    RecoverMarker -->|"no marker or completion_marker unset"| RecoverBlock

    RecoverBlock -->|"patterns matched in messages"| ComputeSuccess
    RecoverBlock -->|"no match"| ComputeSuccess

    ComputeSuccess -->|"Channel B path"| CheckPatterns
    ComputeSuccess -->|"COMPLETED / NATURAL_EXIT"| CheckContent
    CheckContent --> CheckPatterns
    CheckPatterns --> NormalizeMD
    NormalizeMD -->|"normalized text"| CheckPatterns

    CheckPatterns -->|"all match → success=True"| ContradictionGuard
    CheckPatterns -->|"any miss → success=False"| ContradictionGuard

    ContradictionGuard -->|"success=True AND retry=True<br/>demote success"| DeadEndGuard
    ContradictionGuard -->|"consistent"| DeadEndGuard

    DeadEndGuard -->|"failed + no-retry + channel confirmed"| EvalContent
    DeadEndGuard -->|"else"| NormalizeSubtype

    EvalContent -->|"ABSENT → promote to RETRIABLE"| NormalizeSubtype
    EvalContent -->|"CONTRACT_VIOLATION → terminal"| NormalizeSubtype
    EvalContent -->|"SESSION_ERROR → terminal"| NormalizeSubtype

    NormalizeSubtype --> BudgetGuard
    BudgetGuard --> ZeroWriteGate

    ZeroWriteGate -->|"outcome=SUCCEEDED"| SUCCEEDED
    ZeroWriteGate -->|"outcome=RETRIABLE"| RETRIABLE
    ZeroWriteGate -->|"outcome=FAILED"| FAILED

    %% CLASS ASSIGNMENTS %%
    class START terminal;
    class SUCCEEDED,RETRIABLE,FAILED terminal;
    class Runner,CheckContent handler;
    class TermDispatch,ComputeSuccess,ContradictionGuard,DeadEndGuard stateNode;
    class StaleRecovery,RecoverMarker,RecoverBlock,EvalContent phase;
    class CheckPatterns,NormalizeSubtype,BudgetGuard,ZeroWriteGate detector;
    class NormalizeMD newComponent;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | START / SUCCEEDED / RETRIABLE / FAILED states |
| Teal | State | Decision forks (termination dispatch, success/retry guards) |
| Purple | Phase | Recovery helpers and content-state evaluation |
| Orange | Handler | subprocess runner and session content check |
| Red | Detector | Pattern matching, subtype normalization, budget and write gates |
| Green | New Component | ★ `_strip_markdown_from_tokens` — new normalization function |
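The four-way content-state dispatch from `_evaluate_content_state` can be sketched as a simple priority chain (the enum values come from the diagrams; the function signature here is an assumption):

```python
from enum import Enum, auto

class ContentState(Enum):
    COMPLETE = auto()            # marker present, all patterns matched
    ABSENT = auto()              # empty result / marker absent -> RETRIABLE
    CONTRACT_VIOLATION = auto()  # marker present, pattern missed -> terminal
    SESSION_ERROR = auto()       # CLI is_error=True -> terminal

def evaluate_content_state(result: str, marker_present: bool,
                           patterns_matched: bool, is_error: bool) -> ContentState:
    if is_error:
        return ContentState.SESSION_ERROR
    if not result or not marker_present:
        return ContentState.ABSENT            # drain-race: promote to retry
    if not patterns_matched:
        return ContentState.CONTRACT_VIOLATION
    return ContentState.COMPLETE
```

The ordering matters: a CLI error wins outright, an empty or marker-less result stays retriable, and only a present-but-mismatched result is treated as a terminal contract violation — which is why the normalizer has to run before the pattern check.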

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([Contract Defined])
    ENFORCED([Contract Enforced])
    VIOLATED([Contract Violated])

    subgraph DesignTime ["DESIGN-TIME CONTRACT LAYER"]
        direction TB
        Contracts["● skill_contracts.yaml<br/>━━━━━━━━━━<br/>expected_output_patterns<br/>write_behavior · outputs"]
        SKILL_MD["● SKILL.md ## Output section<br/>━━━━━━━━━━<br/>31 files updated with<br/>no-markdown directive"]
        NoMarkdownRule["★ output-section-no-markdown-directive<br/>━━━━━━━━━━<br/>Semantic rule (WARNING)<br/>Fires if directive absent"]
        ContractLoader["contracts.py load_bundled_manifest()<br/>━━━━━━━━━━<br/>lru_cache · L0 core loader<br/>SkillContract dataclass"]
    end

    subgraph DesignGate ["DESIGN-TIME VALIDATION GATE"]
        direction TB
        DesignCheck{"skill has<br/>expected_output_patterns?"}
        OutputSection{"## Output section<br/>contains directive?"}
        WarnFiring["WARNING: output-section-<br/>no-markdown-directive<br/>━━━━━━━━━━<br/>Add directive to ## Output"]
        DesignPass["Design-time gate: PASS<br/>━━━━━━━━━━<br/>Contract well-specified"]
    end

    subgraph RuntimeLayer ["RUNTIME CONTRACT ENFORCEMENT"]
        direction TB
        OutputPathTokens["_OUTPUT_PATH_TOKENS<br/>━━━━━━━━━━<br/>frozenset of file_path outputs<br/>module-level, from contracts"]
        PathContaminationCheck["path contamination check<br/>━━━━━━━━━━<br/>output paths must stay<br/>within cwd boundary"]
        NormalizeMD["★ _strip_markdown_from_tokens()<br/>━━━━━━━━━━<br/>**token** = → token =<br/>*token* = → token ="]
        PatternCheck["● _check_expected_patterns()<br/>━━━━━━━━━━<br/>normalize → re.search all<br/>AND semantics · all must match"]
    end

    subgraph ContentGate ["RUNTIME CONTENT STATE GATE"]
        direction TB
        ContentState{"_evaluate_content_state()"}
        COMPLETE["ContentState.COMPLETE<br/>━━━━━━━━━━<br/>non-empty · marker present<br/>all patterns matched"]
        ABSENT["ContentState.ABSENT<br/>━━━━━━━━━━<br/>empty result or marker absent<br/>→ RETRIABLE (drain-race)"]
        CONTRACT_VIO["ContentState.CONTRACT_VIOLATION<br/>━━━━━━━━━━<br/>result present · marker present<br/>pattern absent → TERMINAL"]
        SESSION_ERR["ContentState.SESSION_ERROR<br/>━━━━━━━━━━<br/>CLI is_error=True<br/>→ TERMINAL"]
    end

    subgraph SubtypeNorm ["SUBTYPE NORMALIZATION"]
        direction TB
        NormSubtype["● _normalize_subtype()<br/>━━━━━━━━━━<br/>CliSubtype.SUCCESS + FAILED<br/>→ adjudicated_failure"]
        SkillResult["SkillResult<br/>━━━━━━━━━━<br/>success · subtype · needs_retry<br/>retry_reason"]
    end

    %% FLOW %%
    START --> Contracts
    Contracts --> ContractLoader
    SKILL_MD --> NoMarkdownRule
    ContractLoader --> DesignCheck

    DesignCheck -->|"has patterns"| OutputSection
    DesignCheck -->|"no patterns — exempt"| DesignPass

    OutputSection -->|"directive absent"| WarnFiring
    OutputSection -->|"directive present"| DesignPass
    WarnFiring -.->|"warning only,<br/>pipeline continues"| DesignPass

    DesignPass --> OutputPathTokens
    OutputPathTokens --> PathContaminationCheck
    PathContaminationCheck --> NormalizeMD
    NormalizeMD --> PatternCheck

    PatternCheck --> ContentState

    ContentState -->|"empty / marker absent"| ABSENT
    ContentState -->|"patterns all match"| COMPLETE
    ContentState -->|"result present, pattern missing"| CONTRACT_VIO
    ContentState -->|"CLI error"| SESSION_ERR

    COMPLETE --> NormSubtype
    ABSENT -->|"drain-race → promote RETRIABLE"| NormSubtype
    CONTRACT_VIO -->|"terminal failure"| NormSubtype
    SESSION_ERR -->|"terminal failure"| NormSubtype

    NormSubtype --> SkillResult

    SkillResult -->|"success=True"| ENFORCED
    SkillResult -->|"success=False, needs_retry=False"| VIOLATED
    SkillResult -->|"needs_retry=True"| START

    %% CLASS ASSIGNMENTS %%
    class START,ENFORCED,VIOLATED terminal;
    class Contracts,SKILL_MD handler;
    class NoMarkdownRule,NormalizeMD newComponent;
    class ContractLoader,OutputPathTokens phase;
    class DesignCheck,OutputSection,ContentState stateNode;
    class WarnFiring,CONTRACT_VIO,SESSION_ERR detector;
    class PathContaminationCheck,PatternCheck,NormSubtype detector;
    class COMPLETE,ABSENT output;
    class SkillResult gap;
    class DesignPass cli;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Contract defined / enforced / violated end states |
| Orange | Handler | Contract manifest and SKILL.md source files (●) |
| Green | New Component | ★ New normalization function and semantic rule |
| Purple | Phase | Contract loader and path token frozenset |
| Teal | State | Decision gates (pattern present? directive present?) |
| Red | Detector | Warning firings, pattern checks, violation states, subtype normalization |
| Dark Teal | Output | COMPLETE / ABSENT content states |
| Yellow | Derived | SkillResult, the mutable routing outcome |
Closes #462

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260321-085252-104453/temp/rectify/rectify_structured-output-markdown-fragility_2026-03-21_091500_part_a.md`

## Token Usage Summary

| Step | input | output | cached | count | time |
|------|-------|--------|--------|-------|------|
| open_pr | 3.4k | 191.6k | 12.9M | 13 | 1h 6m |
| audit_impl | 5.3k | 166.4k | 5.2M | 16 | 1h 2m |
| implement | 9.6k | 197.6k | 29.5M | 13 | 1h 21m |
| review_pr | 2.2k | 406.9k | 16.2M | 13 | 2h 46m |
| dry_walkthrough | 526 | 79.8k | 5.2M | 3 | 26m 55s |
| fix | 114 | 37.3k | 3.9M | 5 | 20m 10s |
| diagnose_ci | 27 | 4.0k | 382.6k | 2 | 1m 36s |
| resolve_review | 2.7k | 301.3k | 36.5M | 12 | 1h 48m |
| resolve_ci | 26 | 5.5k | 452.1k | 2 | 3m 52s |
| assess | 102 | 31.2k | 4.5M | 3 | 18m 18s |
| plan-31 | 29 | 9.4k | 515.6k | 1 | 4m 28s |
| plan-28 | 21 | 12.1k | 307.8k | 1 | 7m 20s |
| plan-32 | 21 | 16.3k | 390.4k | 1 | 9m 1s |
| plan-33 | 21 | 20.3k | 361.1k | 1 | 9m 9s |
| review-31 | 57 | 398 | 1.4M | 1 | 3m 13s |
| review-32 | 1.6k | 5.0k | 144.7k | 1 | 3m 35s |
| review-28-retry | 3.8k | 4.9k | 155.3k | 1 | 5m 28s |
| review-33-retry | 1.9k | 6.2k | 163.4k | 1 | 6m 10s |
| verify-28 | 17 | 7.2k | 426.3k | 1 | 2m 1s |
| verify-31 | 15 | 8.7k | 360.3k | 1 | 2m 25s |
| verify-32 | 22 | 16.0k | 831.0k | 1 | 4m 24s |
| verify-33 | 26 | 21.2k | 974.8k | 1 | 6m 26s |
| implement-31 | 34 | 11.3k | 934.3k | 1 | 3m 18s |
| implement-32 | 26 | 10.6k | 809.4k | 1 | 3m 56s |
| implement-28 | 41 | 18.1k | 1.3M | 1 | 5m 19s |
| implement-33 | 36 | 23.6k | 1.3M | 1 | 7m 19s |
| audit-33 | 12 | 6.0k | 151.4k | 1 | 1m 47s |
| audit-28 | 14 | 7.1k | 206.9k | 1 | 1m 57s |
| audit-32 | 14 | 7.2k | 202.2k | 1 | 2m 7s |
| audit-31 | 375 | 10.4k | 149.8k | 1 | 3m 42s |
| open-pr-33 | 21 | 9.8k | 513.3k | 1 | 2m 41s |
| open-pr-28 | 21 | 9.9k | 485.4k | 1 | 3m 13s |
| open-pr-32 | 29 | 12.5k | 837.5k | 1 | 4m 25s |
| open-pr-31 | 28 | 11.5k | 657.9k | 1 | 4m 32s |
| review-pr-28 | 25 | 27.0k | 561.0k | 1 | 4m 57s |
| review-pr-31 | 20 | 31.5k | 423.2k | 1 | 5m 43s |
| review-pr-32 | 23 | 31.7k | 659.1k | 1 | 5m 55s |
| review-pr-33 | 23 | 31.7k | 565.2k | 1 | 7m 12s |
| resolve-review-33 | 35 | 14.2k | 1.2M | 1 | 4m 58s |
| resolve-review-31 | 40 | 17.2k | 1.2M | 1 | 5m 21s |
| resolve-review-28 | 46 | 17.8k | 1.6M | 1 | 6m 26s |
| resolve-review-32 | 38 | 20.7k | 1.2M | 1 | 6m 45s |
| plan-30 | 24 | 14.9k | 549.5k | 1 | 5m 25s |
| plan-29 | 1.5k | 14.3k | 342.6k | 1 | 5m 43s |
| review-29 | 11 | 7.2k | 196.5k | 1 | 4m 34s |
| review-30 | 2.6k | 5.9k | 155.7k | 1 | 5m 14s |
| verify-30 | 19 | 10.9k | 639.2k | 1 | 3m 31s |
| verify-29 | 2.9k | 15.2k | 575.0k | 1 | 4m 17s |
| resolve_conflict_449 | 11 | 2.0k | 232.3k | 1 | 42s |
| implement-30 | 21 | 12.8k | 485.2k | 1 | 3m 34s |
| implement-29 | 30 | 9.1k | 893.3k | 1 | 4m 12s |
| fix-30 | 21 | 4.9k | 480.0k | 1 | 1m 35s |
| fix-29 | 29 | 6.2k | 793.3k | 1 | 2m 31s |
| open-pr-30 | 24 | 10.7k | 516.1k | 1 | 4m 23s |
| open-pr-29 | 27 | 17.5k | 767.1k | 1 | 6m 10s |
| plan-34 | 23 | 16.1k | 493.0k | 1 | 5m 38s |
| review-34 | 17 | 12.2k | 531.4k | 1 | 4m 17s |
| verify-34 | 14 | 12.0k | 364.9k | 1 | 3m 44s |
| implement-34 | 36 | 13.9k | 1.2M | 1 | 3m 56s |
| open-pr-34 | 30 | 17.1k | 824.2k | 1 | 9m 8s |
| plan-35 | 195 | 13.9k | 693.5k | 1 | 6m 24s |
| plan-37 | 34 | 20.0k | 848.0k | 1 | 9m 13s |
| plan-36 | 37 | 31.6k | 1.2M | 1 | 10m 39s |
| verify-37 | 24 | 14.2k | 831.4k | 1 | 3m 45s |
| verify-35 | 60 | 20.0k | 2.7M | 1 | 5m 49s |
| verify-36 | 26 | 21.2k | 1.1M | 1 | 6m 18s |
| implement-35 | 42 | 16.8k | 1.7M | 1 | 4m 58s |
| plan | 6.4k | 205.6k | 9.7M | 10 | 1h 16m |
| implement-37 | 93 | 55.5k | 6.7M | 1 | 19m 16s |
| implement-36 | 100 | 68.9k | 8.2M | 1 | 24m 41s |
| fix-35 | 14 | 2.2k | 204.0k | 1 | 43s |
| verify | 1.0k | 104.4k | 6.1M | 9 | 31m 15s |
| audit-35 | 15 | 7.1k | 236.3k | 1 | 1m 55s |
| audit-36 | 15 | 8.2k | 266.1k | 1 | 2m 20s |
| audit-37 | 11 | 9.4k | 207.8k | 1 | 2m 58s |
| open-pr-36 | 32 | 17.3k | 851.6k | 1 | 5m 24s |
| open-pr-37 | 34 | 18.1k | 1.2M | 1 | 5m 59s |
| open-pr-35 | 20 | 11.0k | 481.3k | 1 | 6m 22s |
| review-pr-36 | 251 | 36.7k | 735.0k | 1 | 10m 5s |
| review-pr-35 | 36 | 38.8k | 1.3M | 1 | 11m 19s |
| review-pr-37 | 25 | 47.4k | 871.5k | 1 | 12m 5s |
| resolve-review-36 | 21 | 4.3k | 397.2k | 1 | 1m 28s |
| resolve-review-37 | 590 | 32.4k | 2.3M | 1 | 9m 12s |
| resolve-review-35 | 267 | 48.0k | 3.9M | 1 | 14m 44s |
| review-pr-36-retry | 28 | 39.0k | 854.3k | 1 | 10m 1s |
| resolve-review-36-retry | 54 | 33.0k | 3.1M | 1 | 9m 39s |
| resolve-conflicts-36 | 26 | 13.1k | 808.6k | 1 | 3m 32s |
| resolve-conflicts-37 | 34 | 15.7k | 1.2M | 1 | 4m 4s |
| plan-38 | 22 | 24.3k | 473.4k | 1 | 7m 52s |
| review-38 | 7.2k | 7.4k | 210.5k | 1 | 8m 2s |
| verify-38 | 279 | 15.1k | 1.2M | 1 | 4m 31s |
| implement-38 | 31 | 22.3k | 1.4M | 1 | 6m 28s |
| audit-38 | 12 | 7.4k | 157.2k | 1 | 2m 2s |
| open-pr-38 | 27 | 14.0k | 776.8k | 1 | 4m 16s |
| review-pr-38 | 1.0k | 49.9k | 635.1k | 1 | 12m 7s |
| resolve-review-38 | 52 | 35.6k | 2.7M | 1 | 10m 42s |
| plan-39 | 3.1k | 16.4k | 555.3k | 1 | 6m 0s |
| review-39 | 14 | 6.3k | 222.4k | 1 | 5m 31s |
| verify-39 | 2.7k | 9.1k | 571.6k | 1 | 2m 29s |
| implement-39 | 28 | 9.5k | 1.1M | 1 | 3m 11s |
| audit-39 | 17 | 9.5k | 316.1k | 1 | 2m 39s |
| open-pr-39 | 25 | 11.0k | 661.1k | 1 | 4m 7s |
| review-pr-39 | 24 | 42.2k | 706.9k | 1 | 7m 36s |
| resolve-review-39 | 3.1k | 22.6k | 1.5M | 1 | 7m 25s |
| plan-40 | 1.4k | 23.9k | 768.7k | 1 | 9m 42s |
| review-40 | 3.2k | 5.6k | 171.6k | 1 | 6m 50s |
| verify-40 | 20 | 13.2k | 867.9k | 1 | 3m 46s |
| implement-40 | 43 | 104.5k | 2.4M | 1 | 24m 59s |
| audit-40 | 18 | 10.8k | 382.3k | 1 | 3m 7s |
| open-pr-40 | 31 | 12.5k | 874.8k | 1 | 5m 50s |
| review-pr-40 | 27 | 34.2k | 765.3k | 1 | 7m 39s |
| resolve-review-40 | 43 | 26.0k | 2.1M | 1 | 9m 4s |
| plan-41 | 31 | 18.5k | 858.9k | 1 | 7m 40s |
| review-41 | 6.7k | 8.2k | 181.9k | 1 | 5m 37s |
| verify-41 | 34 | 37.0k | 2.4M | 1 | 10m 46s |
| implement-41 | 91 | 56.2k | 6.9M | 1 | 19m 22s |
| audit-41 | 416 | 13.9k | 207.9k | 1 | 9m 51s |
| open-pr-41 | 37 | 14.6k | 1.3M | 1 | 5m 26s |
| review-pr-41 | 34 | 37.7k | 1.1M | 1 | 9m 24s |
| resolve-review-41 | 55 | 20.0k | 2.1M | 1 | 7m 14s |
| investigate | 3.5k | 11.0k | 484.1k | 1 | 8m 35s |
| rectify | 20 | 23.6k | 641.3k | 1 | 11m 12s |
| **Total** | 82.5k | 3.8M | 241.9M | | 22h 43m |

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

The MCP server's `_initialize()` startup recovery loaded token and
timing telemetry from ALL pipeline runs in the last 24 hours with no
pipeline-scoping filter, contaminating the `DefaultTokenLog` and
`DefaultTimingLog` singletons with entries from entirely unrelated
pipelines. The architectural fix removes token/timing recovery from
`_initialize()` entirely — these logs are per-pipeline live accumulators
and have no cross-pipeline recovery semantics. `DefaultAuditLog`
legitimately spans pipelines (failure tracking) and is left intact. A
stale instruction in `tools_recipe.py` simultaneously directed the
orchestrator to use the contaminated server-side `get_token_summary`
tool to pre-stage a PR token file; that instruction is removed. The
architectural result: at server startup the token/timing logs are empty;
they are populated only from live `run_skill` calls in the running
pipeline; skills self-retrieve clean, CWD-scoped data from disk when
they need to build PR bodies.
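
The pipeline-scoped self-retrieval described above can be sketched as a pure merge over session entries. The helper names and entry shape below (`canonical_step_name`, a `{"cwd", "step", "tokens"}` dict) are illustrative assumptions, not the exact module API:

```python
import re
from collections import defaultdict

def canonical_step_name(step: str) -> str:
    # Collapse numbered invocations onto one key: 'plan-30' -> 'plan'.
    return re.sub(r"-\d+$", "", step)

def load_token_entries(session_entries, cwd_filter: str = ""):
    """Merge per-step token counts, keeping only entries whose cwd matches
    the current pipeline. An empty filter admits everything, which is
    exactly the cross-pipeline contamination removed from startup."""
    totals = defaultdict(int)
    for entry in session_entries:
        if cwd_filter and entry["cwd"] != cwd_filter:
            continue  # skip sessions from unrelated pipelines
        totals[canonical_step_name(entry["step"])] += entry["tokens"]
    return dict(totals)
```

With `cwd_filter` set to the pipeline's cwd, the open-pr step sees only its own run's counts; called with no filter, as the old startup path effectively did, it would merge every run from the window into the singleton.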

## Architecture Impact

### Data Lineage Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 70, 'curve': 'basis'}}}%%
flowchart LR
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;

    subgraph Origins ["Data Origins"]
        direction TB
        RUN_SKILL["run_skill() calls<br/>━━━━━━━━━━<br/>Live pipeline steps<br/>raw step_name + token counts"]
    end

    subgraph Disk ["Disk Storage (Source of Truth)"]
        direction TB
        SESSIONS[("~/.local/share/autoskillit/logs/<br/>━━━━━━━━━━<br/>sessions.jsonl index (cwd field)<br/>sessions/{dir}/token_usage.json<br/>sessions/{dir}/step_timing.json")]
    end

    subgraph Startup ["● Server Startup (_state._initialize)"]
        direction TB
        AUDIT_REC["DefaultAuditLog.load_from_log_dir<br/>━━━━━━━━━━<br/>since= filter only (no cwd_filter)<br/>Cross-pipeline failures — correct"]
        TOK_EMPTY["● DefaultTokenLog — empty at startup<br/>━━━━━━━━━━<br/>load_from_log_dir REMOVED<br/>Was contaminating across pipelines"]
        TIM_EMPTY["● DefaultTimingLog — empty at startup<br/>━━━━━━━━━━<br/>load_from_log_dir REMOVED<br/>Was contaminating across pipelines"]
    end

    subgraph Live ["Live In-Memory Singletons"]
        direction TB
        TOK_LIVE["● DefaultTokenLog (singleton)<br/>━━━━━━━━━━<br/>● canonical_step_name() strips -N<br/>Current pipeline only"]
        TIM_LIVE["● DefaultTimingLog (singleton)<br/>━━━━━━━━━━<br/>● canonical_step_name() strips -N<br/>Current pipeline only"]
        AUDIT_LIVE["DefaultAuditLog (singleton)<br/>━━━━━━━━━━<br/>Startup data + current failures"]
    end

    subgraph SkillRetrieval ["Skill Self-Retrieval (pipeline-scoped)"]
        direction TB
        OPEN_PR["open-pr Step 0b<br/>━━━━━━━━━━<br/>Fresh DefaultTokenLog()<br/>load_from_log_dir(cwd_filter=PIPELINE_CWD)"]
    end

    subgraph Artifacts ["PR Artifacts"]
        PR_BODY["PR Body Token Table<br/>━━━━━━━━━━<br/>temp/open-pr/token_summary.md<br/>Pipeline-scoped only"]
    end

    subgraph Contracts ["★ New Contract Tests"]
        direction TB
        CONTRACT["★ test_tools_recipe_contracts.py<br/>━━━━━━━━━━<br/>Asserts get_token_summary( not in<br/>load_recipe docstring"]
    end

    RUN_SKILL -->|"record(step_name, usage)<br/>canonical_step_name()"| TOK_LIVE
    RUN_SKILL -->|"record(step_name, duration)<br/>canonical_step_name()"| TIM_LIVE
    RUN_SKILL -->|"session_log.flush()<br/>writes token_usage.json<br/>step_timing.json + index"| SESSIONS
    SESSIONS -->|"load_from_log_dir(since=)<br/>no cwd_filter — intentional"| AUDIT_REC
    SESSIONS -. "✗ REMOVED — startup<br/>contamination eliminated" .-> TOK_EMPTY
    SESSIONS -. "✗ REMOVED — startup<br/>contamination eliminated" .-> TIM_EMPTY
    TOK_EMPTY -->|"starts empty"| TOK_LIVE
    TIM_EMPTY -->|"starts empty"| TIM_LIVE
    AUDIT_REC --> AUDIT_LIVE
    SESSIONS -->|"load_from_log_dir<br/>cwd_filter=PIPELINE_CWD"| OPEN_PR
    OPEN_PR -->|"pipeline-scoped token table"| PR_BODY
    CONTRACT -. "asserts stale<br/>instruction absent" .-> OPEN_PR

    class RUN_SKILL cli;
    class SESSIONS stateNode;
    class AUDIT_REC,OPEN_PR handler;
    class TOK_EMPTY,TIM_EMPTY newComponent;
    class TOK_LIVE,TIM_LIVE,AUDIT_LIVE phase;
    class PR_BODY output;
    class CONTRACT detector;
```

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 70, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    subgraph Startup ["● _initialize() — Startup Contract (server/_state.py)"]
        direction LR
        AUDIT_INIT["DefaultAuditLog<br/>━━━━━━━━━━<br/>CROSS_PIPELINE<br/>load_from_log_dir(since=) ✓<br/>No cwd_filter — intentional"]
        TOK_INIT["● DefaultTokenLog<br/>━━━━━━━━━━<br/>INIT_ONLY = empty {}<br/>load_from_log_dir REMOVED<br/>Was: contaminated startup"]
        TIM_INIT["● DefaultTimingLog<br/>━━━━━━━━━━<br/>INIT_ONLY = empty {}<br/>load_from_log_dir REMOVED<br/>Was: contaminated startup"]
    end

    subgraph Gates ["Mutation Gates (both record() and load_from_log_dir())"]
        direction TB
        NORM_GATE["● canonical_step_name()<br/>━━━━━━━━━━<br/>strips trailing -N suffixes<br/>'plan-30' → 'plan'<br/>Applied on EVERY mutation path"]
        CWD_GATE["cwd_filter gate<br/>━━━━━━━━━━<br/>Empty = all pipelines<br/>Non-empty = current pipeline only<br/>_iter_session_log_entries()"]
        SINCE_GATE["since= filter<br/>━━━━━━━━━━<br/>ISO timestamp bound<br/>Excludes old sessions<br/>_iter_session_log_entries()"]
    end

    subgraph LiveMutation ["Live Mutation — record() path"]
        direction TB
        LIVE_REC["● DefaultTokenLog.record()<br/>━━━━━━━━━━<br/>APPEND_ONLY _entries<br/>Aggregates by canonical key<br/>invocation_count += 1"]
        LIVE_TIM["● DefaultTimingLog.record()<br/>━━━━━━━━━━<br/>APPEND_ONLY _entries<br/>Aggregates total_seconds<br/>invocation_count += 1"]
    end

    subgraph DiskMutation ["Disk Mutation — load_from_log_dir() path (skill self-retrieval)"]
        direction TB
        DISK_REC["● DefaultTokenLog.load_from_log_dir<br/>━━━━━━━━━━<br/>cwd_filter=PIPELINE_CWD required<br/>Merges into fresh instance<br/>Not called at startup"]
        DISK_TIM["● DefaultTimingLog.load_from_log_dir<br/>━━━━━━━━━━<br/>● cwd_filter now tested<br/>Merges into fresh instance<br/>Not called at startup"]
    end

    subgraph Contracts ["Contract Enforcement"]
        direction TB
        STATE_TEST["● test_state.py<br/>━━━━━━━━━━<br/>Asserts token/timing empty<br/>after _initialize()"]
        CONTRACT_TEST["★ test_tools_recipe_contracts.py<br/>━━━━━━━━━━<br/>Asserts get_token_summary(<br/>absent from load_recipe docstring"]
        TIM_TEST["● test_timings.py<br/>━━━━━━━━━━<br/>TestLoadFromLogDirCwdFilterTiming<br/>Verifies cwd_filter isolation"]
    end

    subgraph Reset ["State Reset"]
        CLEAR["DefaultTokenLog.clear()<br/>━━━━━━━━━━<br/>_entries = {} (empty dict)<br/>Resets to INIT_ONLY state<br/>Writes telemetry_clear_marker"]
    end

    AUDIT_INIT -->|"populated from disk"| SINCE_GATE
    TOK_INIT -. "starts empty<br/>no disk load" .-> NORM_GATE
    TIM_INIT -. "starts empty<br/>no disk load" .-> NORM_GATE
    SINCE_GATE -->|"passes timestamp"| CWD_GATE
    CWD_GATE -->|"cwd match"| NORM_GATE
    NORM_GATE -->|"canonical key"| DISK_REC
    NORM_GATE -->|"canonical key"| DISK_TIM
    NORM_GATE -->|"canonical key"| LIVE_REC
    NORM_GATE -->|"canonical key"| LIVE_TIM
    STATE_TEST -. "asserts empty<br/>after init" .-> TOK_INIT
    STATE_TEST -. "asserts empty<br/>after init" .-> TIM_INIT
    CONTRACT_TEST -. "asserts stale<br/>instruction absent" .-> DISK_REC
    TIM_TEST -. "validates<br/>cwd_filter" .-> DISK_TIM
    LIVE_REC -->|"clear() resets to {}"| CLEAR
    CLEAR -. "INIT_ONLY<br/>boundary restored" .-> TOK_INIT

    class AUDIT_INIT handler;
    class TOK_INIT,TIM_INIT newComponent;
    class NORM_GATE,CWD_GATE,SINCE_GATE stateNode;
    class LIVE_REC,LIVE_TIM phase;
    class DISK_REC,DISK_TIM output;
    class STATE_TEST,TIM_TEST detector;
    class CONTRACT_TEST gap;
    class CLEAR cli;
```

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;

    START([SERVER START])

    subgraph Startup ["● _initialize() — Server Startup (server/_state.py)"]
        direction TB
        RECOVER["recover_crashed_sessions()<br/>━━━━━━━━━━<br/>tmpfs trace cleanup only<br/>No token/timing involvement"]
        CLEAR_MARKER{"read_telemetry_clear_marker()<br/>━━━━━━━━━━<br/>marker > since_dt?"}
        SINCE_CALC["Compute since_dt<br/>━━━━━━━━━━<br/>now - 24h OR clear_marker<br/>(whichever is later)"]
        AUDIT_LOAD["● ctx.audit.load_from_log_dir<br/>━━━━━━━━━━<br/>since=since_str (no cwd_filter)<br/>Cross-pipeline — correct"]
        TOK_SKIP["● DefaultTokenLog starts empty {}<br/>━━━━━━━━━━<br/>load_from_log_dir REMOVED<br/>No cross-pipeline contamination"]
        TIM_SKIP["● DefaultTimingLog starts empty {}<br/>━━━━━━━━━━<br/>load_from_log_dir REMOVED<br/>No cross-pipeline contamination"]
    end

    INIT_DONE([SERVER READY])

    subgraph LiveFlow ["● Live Accumulation — record() path (tokens.py / timings.py)"]
        direction TB
        RUN_SKILL_CALL["run_skill() invocation completes<br/>━━━━━━━━━━<br/>Yields step_name + token_usage dict"]
        STEP_EMPTY{"step_name empty<br/>or token_usage None?"}
        CANON_LIVE["● canonical_step_name(step_name)<br/>━━━━━━━━━━<br/>strip trailing -N suffix<br/>'plan-30' → 'plan'"]
        RECORD_ACC["● token_log.record() / timing_log.record()<br/>━━━━━━━━━━<br/>_entries[key] += counts<br/>invocation_count += 1"]
    end

    subgraph SkillRetrieve ["Skill Self-Retrieval — load_from_log_dir() (open-pr Step 0b)"]
        direction TB
        FRESH_LOG["new DefaultTokenLog()<br/>━━━━━━━━━━<br/>Fresh empty instance<br/>Bypasses contaminated singleton"]
        ITER_SESSIONS["_iter_session_log_entries()<br/>━━━━━━━━━━<br/>reads sessions.jsonl index"]
        SINCE_CHECK{"since= filter<br/>━━━━━━━━━━<br/>entry.timestamp >= since?"}
        CWD_CHECK{"cwd_filter non-empty?<br/>━━━━━━━━━━<br/>entry.cwd == PIPELINE_CWD?"}
        CANON_DISK["● canonical_step_name(raw_step)<br/>━━━━━━━━━━<br/>Same normalization as live path"]
        MERGE_ENTRY["Merge into fresh _entries<br/>━━━━━━━━━━<br/>Accumulate by canonical key"]
        TOKEN_FILE["temp/open-pr/token_summary.md<br/>━━━━━━━━━━<br/>Pipeline-scoped token table"]
    end

    NO_OP([SKIP — no-op])
    SKIP_SESSION([SKIP session])
    PR_BODY([PR BODY with clean token table])

    START --> RECOVER
    RECOVER --> CLEAR_MARKER
    CLEAR_MARKER -->|"marker exists + newer"| SINCE_CALC
    CLEAR_MARKER -->|"no marker / older"| SINCE_CALC
    SINCE_CALC --> AUDIT_LOAD
    SINCE_CALC --> TOK_SKIP
    SINCE_CALC --> TIM_SKIP
    AUDIT_LOAD --> INIT_DONE
    TOK_SKIP --> INIT_DONE
    TIM_SKIP --> INIT_DONE
    INIT_DONE --> RUN_SKILL_CALL
    RUN_SKILL_CALL --> STEP_EMPTY
    STEP_EMPTY -->|"yes"| NO_OP
    STEP_EMPTY -->|"no"| CANON_LIVE
    CANON_LIVE --> RECORD_ACC
    INIT_DONE --> FRESH_LOG
    FRESH_LOG --> ITER_SESSIONS
    ITER_SESSIONS --> SINCE_CHECK
    SINCE_CHECK -->|"too old"| SKIP_SESSION
    SINCE_CHECK -->|"passes"| CWD_CHECK
    CWD_CHECK -->|"cwd mismatch"| SKIP_SESSION
    CWD_CHECK -->|"cwd matches"| CANON_DISK
    CANON_DISK --> MERGE_ENTRY
    MERGE_ENTRY -->|"all sessions processed"| TOKEN_FILE
    TOKEN_FILE --> PR_BODY

    class START,INIT_DONE terminal;
    class NO_OP,SKIP_SESSION,PR_BODY output;
    class RECOVER,SINCE_CALC handler;
    class CLEAR_MARKER,STEP_EMPTY,SINCE_CHECK,CWD_CHECK stateNode;
    class AUDIT_LOAD phase;
    class TOK_SKIP,TIM_SKIP newComponent;
    class RUN_SKILL_CALL,RECORD_ACC,FRESH_LOG,ITER_SESSIONS,MERGE_ENTRY,TOKEN_FILE handler;
    class CANON_LIVE,CANON_DISK detector;
```

Closes #466

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260321-113541-384854/temp/rectify/rectify_token-telemetry-contamination_2026-03-21_120000_part_a.md`

## Token Usage Summary


🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

The `direct_merge` step in `implementation.yaml`, `remediation.yaml`, and
`implementation-groups.yaml` uses `gh pr merge --squash --auto` unconditionally.
On repos where `autoMergeAllowed=false` (no branch protection rules), this
command fails, and the pipeline silently falls through to `confirm_cleanup`,
leaving the PR unmerged. The same issue affects `redirect_merge` and `merge-pr`
SKILL.md Step 2.

**Fix:** Insert a new `check_auto_merge` detection step after `check_merge_queue`
in all three recipes, add a third route condition in `route_queue_mode`, and
introduce a complete `immediate_merge` path (5 new steps per recipe) that uses
plain `gh pr merge --squash` when `autoMergeAllowed=false`. Update `merge-pr`
SKILL.md and `sous-chef` SKILL.md to document the three-way routing.

## Requirements

### DETECT — Auto-Merge Availability Detection

- **REQ-DETECT-001:** The recipe must determine whether the target
repository has `autoMergeAllowed` enabled before reaching any step that
uses `gh pr merge --auto`.
- **REQ-DETECT-002:** The detection must use the GraphQL
`autoMergeAllowed` field on the repository object, captured into
`context.auto_merge_available`.
- **REQ-DETECT-003:** The detection must occur once per pipeline run
(combined with the existing `check_merge_queue` step or as an adjacent
step).
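
The GraphQL probe is a single `gh api graphql` call; turning its response into the `context.auto_merge_available` boolean might look like the sketch below. The parsing helper is an illustrative assumption, not the recipe's actual implementation:

```python
import json

# A gh invocation along these lines produces the response parsed below:
#   gh api graphql -f owner=OWNER -f name=REPO -f query='
#     query($owner: String!, $name: String!) {
#       repository(owner: $owner, name: $name) { autoMergeAllowed }
#     }'

def auto_merge_available(graphql_response: str) -> bool:
    """Extract autoMergeAllowed from a gh api graphql response.
    Default to False on a missing/null repository so the recipe falls
    back to the plain-merge path rather than a failing --auto merge."""
    data = json.loads(graphql_response)
    repo = data.get("data", {}).get("repository") or {}
    return bool(repo.get("autoMergeAllowed"))
```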

### ROUTE — Three-Way Merge Routing

- **REQ-ROUTE-001:** `route_queue_mode` must branch into three paths:
queue available, auto-merge available (no queue), and neither available.
- **REQ-ROUTE-002:** When `queue_available == false` and
`auto_merge_available == false`, the recipe must route to a new
`immediate_merge` step that uses `gh pr merge --squash` without the
`--auto` flag.
- **REQ-ROUTE-003:** The `immediate_merge` step must have its own
conflict-fix and retry path analogous to `direct_merge_conflict_fix`.
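
The routing in REQ-ROUTE-001/002 (plus the existing opt-out) reduces to a small decision function. Step names mirror the recipes, but the function itself is a sketch, not recipe YAML:

```python
def route_queue_mode(auto_merge_input: bool,
                     queue_available: bool,
                     auto_merge_available: bool) -> str:
    """Pick the next step after detection: opt-out, merge queue,
    --auto merge, or plain immediate merge."""
    if not auto_merge_input:
        return "confirm_cleanup"    # caller asked for no merge at all
    if queue_available:
        return "enable_auto_merge"  # gh pr merge --squash --auto -> queue
    if auto_merge_available:
        return "direct_merge"       # gh pr merge --squash --auto
    return "immediate_merge"        # gh pr merge --squash (no --auto)
```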

### FIX — Affected Step Corrections

- **REQ-FIX-001:** The `direct_merge` step must only be reachable when
`auto_merge_available == true`.
- **REQ-FIX-002:** The `redirect_merge` step (retry after conflict fix)
must also respect auto-merge availability — use `--auto` only when
available, plain `--squash` otherwise.
- **REQ-FIX-003:** The `merge-pr` SKILL.md must detect
`autoMergeAllowed` before choosing between `--auto` and direct merge.

### GUIDE — Orchestrator Guidance Update

- **REQ-GUIDE-001:** The sous-chef MERGE PHASE section must document the
three-way routing including the `autoMergeAllowed` dimension.
- **REQ-GUIDE-002:** The `implementation.yaml` kitchen_rules MERGE
ROUTING entry must reference the auto-merge detection and the
`immediate_merge` path.

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 38, 'rankSpacing': 52, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    CI_PASS([CI passed])

    subgraph Detection ["Detection Phase"]
        direction TB
        CMQ["check_merge_queue<br/>━━━━━━━━━━<br/>GraphQL: mergeQueue exists?<br/>→ context.queue_available"]
        CAM["★ check_auto_merge<br/>━━━━━━━━━━<br/>GraphQL: autoMergeAllowed?<br/>→ context.auto_merge_available"]
    end

    subgraph Routing ["● route_queue_mode (4-way router)"]
        R1{"auto_merge<br/>input ≠ true?"}
        R2{"queue_available<br/>= true?"}
        R3{"★ auto_merge_available<br/>= true?"}
    end

    subgraph QueuePath ["Queue Path"]
        EAM["enable_auto_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto<br/>→ merge queue"]
        WFQ["wait_for_queue<br/>━━━━━━━━━━<br/>merge queue watcher"]
    end

    subgraph DirectPath ["Direct Merge Path"]
        DM["direct_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto"]
        WFDM["wait_for_direct_merge<br/>━━━━━━━━━━<br/>poll 90×10s"]
        DMCF["direct_merge_conflict_fix<br/>━━━━━━━━━━<br/>resolve-merge-conflicts"]
        REDM["redirect_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash --auto"]
    end

    subgraph ImmediatePath ["★ Immediate Merge Path (new)"]
        IM["★ immediate_merge<br/>━━━━━━━━━━<br/>gh pr merge --squash<br/>(no --auto)"]
        WFIM["★ wait_for_immediate_merge<br/>━━━━━━━━━━<br/>poll 30×10s"]
        IMCF["★ immediate_merge_conflict_fix<br/>━━━━━━━━━━<br/>resolve-merge-conflicts"]
        RPIF["★ re_push_immediate_fix<br/>━━━━━━━━━━<br/>push_to_remote"]
        RMI["★ remerge_immediate<br/>━━━━━━━━━━<br/>gh pr merge --squash"]
    end

    CLEANUP["confirm_cleanup"]
    SUCCESS([release_issue_success])
    TIMEOUT([release_issue_timeout])
    FAILURE([release_issue_failure])

    CI_PASS --> CMQ
    CMQ -->|"on_success"| CAM
    CMQ -->|"on_failure"| SUCCESS

    CAM -->|"on_success / on_failure"| R1

    R1 -->|"auto_merge ≠ true"| CLEANUP
    R1 -->|"else"| R2
    R2 -->|"queue_available = true"| EAM
    R2 -->|"else"| R3
    R3 -->|"auto_merge_available = true"| DM
    R3 -->|"default"| IM

    EAM -->|"on_success"| WFQ
    EAM -->|"on_failure"| CLEANUP
    WFQ -->|"merged"| SUCCESS
    WFQ -->|"ejected/stalled/timeout"| TIMEOUT

    DM -->|"on_success"| WFDM
    DM -->|"on_failure"| CLEANUP
    WFDM -->|"merged"| SUCCESS
    WFDM -->|"closed"| DMCF
    WFDM -->|"timeout"| TIMEOUT
    DMCF -->|"escalation=true"| FAILURE
    DMCF -->|"resolved"| REDM
    REDM -->|"on_success"| WFDM
    REDM -->|"on_failure"| FAILURE

    IM -->|"on_success"| WFIM
    IM -->|"on_failure"| CLEANUP
    WFIM -->|"merged"| SUCCESS
    WFIM -->|"closed"| IMCF
    WFIM -->|"timeout"| TIMEOUT
    IMCF -->|"escalation=true"| FAILURE
    IMCF -->|"resolved"| RPIF
    RPIF -->|"on_success"| RMI
    RPIF -->|"on_failure"| FAILURE
    RMI -->|"on_success"| WFIM
    RMI -->|"on_failure"| FAILURE

    %% CLASS ASSIGNMENTS %%
    class CI_PASS,SUCCESS,TIMEOUT,FAILURE terminal;
    class CMQ,CAM detector;
    class R1,R2,R3 stateNode;
    class EAM,WFQ,DM,WFDM,DMCF,REDM handler;
    class IM,WFIM,IMCF,RPIF,RMI newComponent;
    class CLEANUP phase;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | CI passed, success, timeout, failure terminals |
| Red | Detector | Detection steps (check_merge_queue, ★ check_auto_merge) |
| Teal | State | ● route_queue_mode routing decision nodes |
| Orange | Handler | Existing queue and direct merge execution steps |
| Green | New Component | ★ New immediate_merge path steps (5 new steps) |
| Purple | Phase | confirm_cleanup (graceful exit) |
Closes #469

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-20260321-193442-670135/temp/make-plan/direct_merge_auto_merge_routing_plan_2026-03-21_194423.md`

## Token Usage Summary



🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

The `init()` command in `app.py` placed the security gate
(`_check_secret_scanning`) **after** user-input collection
(`_prompt_test_command`) and file writes (`atomic_write`). In a non-interactive
environment without `--test-command`, `input()` raised `EOFError` before the
gate fired, bypassing it entirely with a generic crash exit. The fix is
structural: the security gate now runs **before any user input or file I/O**.
Additionally, `_prompt_test_command()` and all other CLI `input()` callers now
use a shared `_require_interactive_stdin()` TTY guard that fails explicitly in
non-interactive contexts, closing the `EOFError` escape hatch regardless of
call-site ordering.

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;

    START([START — autoskillit init])
    ERROR([ERROR — SystemExit 1])
    COMPLETE([COMPLETE — init done])

    subgraph Setup ["Setup"]
        ScopeCheck{"scope valid?<br/>━━━━━━━━━━<br/>user or project"}
        MkDir["config_dir.mkdir()<br/>━━━━━━━━━━<br/>.autoskillit/ created"]
    end

    subgraph Gate ["● Security Gate — runs FIRST, before any I/O"]
        SecGate["● _check_secret_scanning()<br/>━━━━━━━━━━<br/>returns ★ _ScanResult(passed, bypass_accepted)"]
        ScannerCheck{"scanner detected?<br/>━━━━━━━━━━<br/>_detect_secret_scanner()"}
        NonInterGate["print ERROR block<br/>━━━━━━━━━━<br/>non-interactive: no bypass"]
        BypassPrompt["show warning + phrase<br/>━━━━━━━━━━<br/>interactive bypass path"]
        PhraseCheck{"phrase matches?<br/>━━━━━━━━━━<br/>_SECRET_SCAN_BYPASS_PHRASE"}
        GateFail{"gate.passed?"}
    end

    subgraph ConfigWrite ["Config Write (gate already cleared — no rollback needed)"]
        ConfigExists{"config exists<br/>AND NOT force?"}
        AlreadyMsg["print 'already exists'<br/>━━━━━━━━━━<br/>log bypass if accepted"]
        TestCmdGiven{"test_command arg<br/>provided?"}
        UseGiven["cmd_parts = split()<br/>━━━━━━━━━━<br/>non-interactive path"]
        PromptCmd["● _prompt_test_command()<br/>━━━━━━━━━━<br/>★ _require_interactive_stdin() guard"]
        TtyCheck{"★ sys.stdin.isatty()?<br/>━━━━━━━━━━<br/>inside _require_interactive_stdin"}
        NiTtyFail["print 'requires interactive terminal'<br/>━━━━━━━━━━<br/>explicit SystemExit(1)"]
        InputCall["input('Test command [...]:')<br/>━━━━━━━━━━<br/>interactive only"]
        WriteConfig["atomic_write(config.yaml)<br/>━━━━━━━━━━<br/>no rollback flag needed"]
        LogBypass["_log_secret_scan_bypass()<br/>━━━━━━━━━━<br/>if bypass_accepted"]
    end

    subgraph Register ["Registration Phase"]
        RegAll["_register_all()<br/>━━━━━━━━━━<br/>hooks, MCP server, GitHub repo"]
    end

    START --> ScopeCheck
    ScopeCheck -->|"invalid"| ERROR
    ScopeCheck -->|"valid"| MkDir
    MkDir --> SecGate
    SecGate --> ScannerCheck
    ScannerCheck -->|"scanner found"| GateFail
    ScannerCheck -->|"not TTY, no scanner"| NonInterGate
    NonInterGate --> GateFail
    ScannerCheck -->|"TTY, no scanner"| BypassPrompt
    BypassPrompt --> PhraseCheck
    PhraseCheck -->|"wrong phrase"| GateFail
    PhraseCheck -->|"phrase matched"| GateFail

    GateFail -->|"passed=False"| ERROR
    GateFail -->|"passed=True"| ConfigExists

    ConfigExists -->|"yes — skip write"| AlreadyMsg
    AlreadyMsg --> RegAll
    ConfigExists -->|"no / force"| TestCmdGiven
    TestCmdGiven -->|"provided"| UseGiven
    TestCmdGiven -->|"not provided"| PromptCmd
    PromptCmd --> TtyCheck
    TtyCheck -->|"not TTY"| NiTtyFail
    NiTtyFail --> ERROR
    TtyCheck -->|"is TTY"| InputCall
    InputCall --> WriteConfig
    UseGiven --> WriteConfig
    WriteConfig --> LogBypass
    LogBypass --> RegAll
    RegAll --> COMPLETE

    class START,COMPLETE,ERROR terminal;
    class ScopeCheck,ScannerCheck,ConfigExists,TestCmdGiven,PhraseCheck,GateFail stateNode;
    class MkDir,AlreadyMsg,UseGiven,InputCall,WriteConfig handler;
    class SecGate,NonInterGate,BypassPrompt detector;
    class PromptCmd,LogBypass newComponent;
    class TtyCheck stateNode;
    class RegAll phase;
    class NiTtyFail detector;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start, complete, and error terminals |
| Teal | State | Decision points and routing conditions |
| Orange | Handler | Unchanged processing nodes |
| Red | Detector | Security gates, TTY validation, error paths |
| Green | New/Modified | `★` new or `●` modified components |
| Purple | Phase | Registration phase |

### Security Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    START([CLI Invocation])
    BLOCKED([BLOCKED — SystemExit 1])
    ALLOWED([ALLOWED — proceed])

    subgraph Layer1 ["TRUST BOUNDARY 1 — Secret Scanning Gate (autoskillit init)"]
        InitEntry["● app.py: init()<br/>━━━━━━━━━━<br/>Entry point — scope validated"]
        SecGate["● _check_secret_scanning()<br/>━━━━━━━━━━<br/>★ NOW RUNS FIRST — before any I/O<br/>returns ★ _ScanResult(passed, bypass_accepted)"]
        ScannerCheck{"scanner in<br/>.pre-commit-config.yaml?<br/>━━━━━━━━━━<br/>_detect_secret_scanner()"}
        TtyCheck1{"sys.stdin.isatty()?<br/>━━━━━━━━━━<br/>inside _check_secret_scanning"}
        PhraseCheck{"exact bypass phrase<br/>━━━━━━━━━━<br/>typed by human?"}
        BlockNonInter["ERROR: Non-interactive<br/>cannot bypass<br/>━━━━━━━━━━<br/>return _ScanResult(False)"]
        BlockWrongPhrase["Aborted: phrase mismatch<br/>━━━━━━━━━━<br/>return _ScanResult(False)"]
        BypassAccepted["bypass_accepted=True<br/>━━━━━━━━━━<br/>return _ScanResult(True, True)"]
        ScannerPass["scanner found<br/>━━━━━━━━━━<br/>return _ScanResult(True)"]
    end

    subgraph Layer2 ["TRUST BOUNDARY 2 — Interactive Consent Enforcement"]
        TTYGuard["★ _require_interactive_stdin()<br/>━━━━━━━━━━<br/>shared TTY contract enforcer<br/>called by ALL input() callers"]
        TtyCheck2{"sys.stdin.isatty()?<br/>━━━━━━━━━━<br/>pre-condition for input()"}
        BlockNonTTY["ERROR: requires interactive terminal<br/>━━━━━━━━━━<br/>SystemExit(1) — no bare EOFError"]
    end

    subgraph Layer3 ["COMMANDS PROTECTED BY TTY BOUNDARY"]
        PromptTest["● _prompt_test_command()<br/>━━━━━━━━━━<br/>calls _require_interactive_stdin first"]
        CookConfirm["● cook() launch confirm<br/>━━━━━━━━━━<br/>calls _require_interactive_stdin first"]
        WorkspaceClean["● workspace clean confirm<br/>━━━━━━━━━━<br/>calls _require_interactive_stdin first"]
    end

    subgraph Layer4 ["AUDIT TRAIL"]
        BypassLog["_log_secret_scan_bypass()<br/>━━━━━━━━━━<br/>timestamp → config.yaml<br/>called AFTER config write"]
        ConfigYaml["★ .autoskillit/config.yaml<br/>━━━━━━━━━━<br/>safety.secret_scan_bypass_accepted<br/>ISO-8601 timestamp persisted"]
    end

    START --> InitEntry
    InitEntry --> SecGate
    SecGate --> ScannerCheck
    ScannerCheck -->|"scanner found"| ScannerPass
    ScannerCheck -->|"no scanner"| TtyCheck1
    TtyCheck1 -->|"not TTY"| BlockNonInter
    TtyCheck1 -->|"is TTY"| PhraseCheck
    PhraseCheck -->|"wrong phrase"| BlockWrongPhrase
    PhraseCheck -->|"exact match"| BypassAccepted

    BlockNonInter --> BLOCKED
    BlockWrongPhrase --> BLOCKED

    ScannerPass --> ALLOWED
    BypassAccepted --> ALLOWED

    ALLOWED -->|"init path"| TTYGuard
    TTYGuard --> TtyCheck2
    TtyCheck2 -->|"not TTY"| BlockNonTTY
    BlockNonTTY --> BLOCKED
    TtyCheck2 -->|"is TTY"| PromptTest
    TtyCheck2 -->|"is TTY"| CookConfirm
    TtyCheck2 -->|"is TTY"| WorkspaceClean

    BypassAccepted -->|"bypass_accepted=True"| BypassLog
    BypassLog --> ConfigYaml

    class START,BLOCKED,ALLOWED terminal;
    class InitEntry cli;
    class SecGate,PromptTest,CookConfirm,WorkspaceClean handler;
    class ScannerCheck,TtyCheck1,TtyCheck2,PhraseCheck stateNode;
    class BlockNonInter,BlockWrongPhrase,BlockNonTTY detector;
    class ScannerPass phase;
    class TTYGuard,BypassAccepted newComponent;
    class BypassLog,ConfigYaml output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal/CLI | Start, allowed, blocked terminals and entry points |
| Teal | State | Decision/routing points |
| Orange | Handler | Commands and prompt functions |
| Red | Detector | Blocking gates — security violations |
| Purple | Phase | Passing validation states |
| Green | New/Modified | `★` new or `●` modified components |
| Dark Teal | Output | Audit artifacts persisted to disk |

Closes #470

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260321-193454-179265/temp/rectify/rectify_secret_scanning_gate_ordering_2026-03-21_000000_part_a.md`

## Token Usage Summary

No token data available

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

The `.autoskillit/` scope has a **complete, self-reinforcing protection
feedback loop**: a
canonical constant (`_AUTOSKILLIT_GITIGNORE_ENTRIES`) feeds an
idempotent writer
(`ensure_project_temp`), a runtime validator
(`_check_gitignore_completeness`), and a structural
integration test (`test_init_all_created_files_covered_by_gitignore`)
that makes it impossible to
add a new file to `.autoskillit/` without updating the constant — the
test fails immediately.

The root project scope had **none of this**. No constant. No
programmatic writer. No structural
test. The bug survived because there is no equivalent feedback loop for
the root scope.

This PR closes that gap by implementing both halves:

- **Part A (Write Path):** A new `_ROOT_GITIGNORE_ENTRIES` constant in
`core/io.py` and extended `ensure_project_temp()` that idempotently
writes root `.gitignore` entries. Structural tests enforce the invariant
permanently.
- **Part B (Validate Path):** Extended `_check_gitignore_completeness()`
in `_doctor.py` to also validate root `.gitignore` against
`_ROOT_GITIGNORE_ENTRIES`. Doctor now reports false-OK when root
gitignore is missing.
- **Clarified comment** in `.secrets.yaml` template: "gitignored — do
not commit" removes ambiguity about which `.gitignore` provides
protection.

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    INIT_START(["autoskillit init"])
    DOCTOR_START(["autoskillit doctor"])
    INIT_END(["project initialized"])
    DOCTOR_END(["doctor report emitted"])

    subgraph Constants ["● core/io.py — Constants"]
        direction LR
        ASKE["_AUTOSKILLIT_GITIGNORE_ENTRIES<br/>━━━━━━━━━━<br/>temp/ · .secrets.yaml · .onboarded · sync_manifest.json"]
        RSKE["● _ROOT_GITIGNORE_ENTRIES<br/>━━━━━━━━━━<br/>.autoskillit/.secrets.yaml<br/>.autoskillit/temp/<br/>.autoskillit/.onboarded<br/>.autoskillit/sync_manifest.json"]
    end

    subgraph InitFlow ["● ensure_project_temp() — core/io.py"]
        direction TB
        EPT["● ensure_project_temp(project_dir)<br/>━━━━━━━━━━<br/>create .autoskillit/temp/"]
        CHECK_SUB{"● .autoskillit/.gitignore<br/>exists?"}
        WRITE_SUB_NEW["write .autoskillit/.gitignore<br/>━━━━━━━━━━<br/>from _AUTOSKILLIT_GITIGNORE_ENTRIES"]
        WRITE_SUB_APPEND["append missing entries<br/>━━━━━━━━━━<br/>to .autoskillit/.gitignore"]
        CHECK_ROOT{"● root .gitignore<br/>exists?"}
        WRITE_ROOT_NEW["● write {project_dir}/.gitignore<br/>━━━━━━━━━━<br/>from _ROOT_GITIGNORE_ENTRIES"]
        WRITE_ROOT_APPEND["● append missing root entries<br/>━━━━━━━━━━<br/>to {project_dir}/.gitignore"]
    end

    subgraph InitTail ["● _init_helpers.py"]
        CST["● _create_secrets_template()<br/>━━━━━━━━━━<br/>comment: gitignored — do not commit"]
    end

    subgraph DoctorFlow ["● _check_gitignore_completeness() — _doctor.py"]
        direction TB
        SCAN_SUB["scan .autoskillit/ entries<br/>━━━━━━━━━━<br/>check against .autoskillit/.gitignore"]
        SCAN_ENTRIES["check _AUTOSKILLIT_GITIGNORE_ENTRIES<br/>━━━━━━━━━━<br/>all entries present?"]
        CHECK_ROOT_EXISTS{"● root .gitignore<br/>exists?"}
        SCAN_ROOT["● scan _ROOT_GITIGNORE_ENTRIES<br/>━━━━━━━━━━<br/>check against root .gitignore"]
        VERDICT{"uncovered<br/>entries?"}
        WARN["emit WARNING<br/>━━━━━━━━━━<br/>list missing entries"]
        OK["emit OK<br/>━━━━━━━━━━<br/>all entries covered"]
    end

    INIT_START --> EPT
    ASKE -.->|"sources"| WRITE_SUB_NEW
    ASKE -.->|"sources"| WRITE_SUB_APPEND
    RSKE -.->|"sources"| WRITE_ROOT_NEW
    RSKE -.->|"sources"| WRITE_ROOT_APPEND
    EPT --> CHECK_SUB
    CHECK_SUB -->|"no"| WRITE_SUB_NEW
    CHECK_SUB -->|"yes"| WRITE_SUB_APPEND
    WRITE_SUB_NEW --> CHECK_ROOT
    WRITE_SUB_APPEND --> CHECK_ROOT
    CHECK_ROOT -->|"no"| WRITE_ROOT_NEW
    CHECK_ROOT -->|"yes"| WRITE_ROOT_APPEND
    WRITE_ROOT_NEW --> CST
    WRITE_ROOT_APPEND --> CST
    CST --> INIT_END

    DOCTOR_START --> SCAN_SUB
    ASKE -.->|"validates against"| SCAN_ENTRIES
    RSKE -.->|"validates against"| SCAN_ROOT
    SCAN_SUB --> SCAN_ENTRIES
    SCAN_ENTRIES --> CHECK_ROOT_EXISTS
    CHECK_ROOT_EXISTS -->|"exists"| SCAN_ROOT
    CHECK_ROOT_EXISTS -->|"missing"| VERDICT
    SCAN_ROOT --> VERDICT
    VERDICT -->|"yes"| WARN
    VERDICT -->|"no"| OK
    WARN --> DOCTOR_END
    OK --> DOCTOR_END

    class INIT_START,INIT_END,DOCTOR_START,DOCTOR_END terminal;
    class ASKE stateNode;
    class RSKE newComponent;
    class EPT,CST handler;
    class CHECK_SUB,CHECK_ROOT,CHECK_ROOT_EXISTS,VERDICT stateNode;
    class WRITE_SUB_NEW,WRITE_SUB_APPEND handler;
    class WRITE_ROOT_NEW,WRITE_ROOT_APPEND,SCAN_ROOT newComponent;
    class SCAN_SUB,SCAN_ENTRIES phase;
    class WARN detector;
    class OK output;
```

### Operational Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph CLI ["CLI ENTRY POINTS"]
        direction LR
        INIT_CMD["autoskillit init<br/>━━━━━━━━━━<br/>project setup wizard"]
        DOCTOR_CMD["● autoskillit doctor<br/>━━━━━━━━━━<br/>9 setup checks<br/>--json flag"]
    end

    subgraph Constants ["● core/io.py — Canonical Constants"]
        direction LR
        ASKE_CONST["_AUTOSKILLIT_GITIGNORE_ENTRIES<br/>━━━━━━━━━━<br/>temp/ · .secrets.yaml<br/>.onboarded · sync_manifest.json"]
        RSKE_CONST["● _ROOT_GITIGNORE_ENTRIES<br/>━━━━━━━━━━<br/>.autoskillit/.secrets.yaml<br/>.autoskillit/temp/<br/>.autoskillit/.onboarded<br/>.autoskillit/sync_manifest.json"]
    end

    subgraph InitOutput ["● Init Outputs (ensure_project_temp)"]
        direction TB
        SUB_GI[".autoskillit/.gitignore<br/>━━━━━━━━━━<br/>idempotent write/append"]
        ROOT_GI["● {project}/.gitignore<br/>━━━━━━━━━━<br/>idempotent write/append<br/>from _ROOT_GITIGNORE_ENTRIES"]
        SECRETS[".autoskillit/.secrets.yaml<br/>━━━━━━━━━━<br/>● comment: gitignored — do not commit"]
        CONFIG[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>project config"]
    end

    subgraph DoctorChecks ["● Doctor Checks (_doctor.py)"]
        direction TB
        CHECK1["stale_mcp_servers<br/>━━━━━━━━━━<br/>dead binary paths"]
        CHECK2["mcp_server_registered<br/>━━━━━━━━━━<br/>plugin or mcpServers entry"]
        CHECK3["hook_registration<br/>━━━━━━━━━━<br/>HOOK_REGISTRY scripts present"]
        CHECK4["● gitignore_completeness<br/>━━━━━━━━━━<br/>.autoskillit/ coverage +<br/>● root .gitignore coverage"]
        CHECK5["secret_scanning_hook<br/>━━━━━━━━━━<br/>.pre-commit-config.yaml"]
    end

    subgraph DoctorReport ["Doctor Report Outputs"]
        direction LR
        OK_REPORT["✓ OK results<br/>━━━━━━━━━━<br/>all checks passed"]
        WARN_REPORT["⚠ WARNING results<br/>━━━━━━━━━━<br/>actionable guidance"]
        ERR_REPORT["✗ ERROR results<br/>━━━━━━━━━━<br/>blocking issues"]
    end

    INIT_CMD -->|"calls"| SUB_GI
    INIT_CMD -->|"calls"| ROOT_GI
    INIT_CMD -->|"calls"| SECRETS
    INIT_CMD -->|"calls"| CONFIG
    ASKE_CONST -.->|"sources"| SUB_GI
    RSKE_CONST -.->|"sources"| ROOT_GI

    DOCTOR_CMD --> CHECK1
    DOCTOR_CMD --> CHECK2
    DOCTOR_CMD --> CHECK3
    DOCTOR_CMD --> CHECK4
    DOCTOR_CMD --> CHECK5
    ASKE_CONST -.->|"validates"| CHECK4
    RSKE_CONST -.->|"validates"| CHECK4
    CHECK4 --> WARN_REPORT
    CHECK1 --> ERR_REPORT
    CHECK2 --> OK_REPORT
    CHECK3 --> OK_REPORT
    CHECK5 --> ERR_REPORT

    class INIT_CMD,DOCTOR_CMD cli;
    class ASKE_CONST stateNode;
    class RSKE_CONST newComponent;
    class SUB_GI,CONFIG,SECRETS handler;
    class ROOT_GI newComponent;
    class CHECK1,CHECK2,CHECK3,CHECK5 phase;
    class CHECK4 newComponent;
    class OK_REPORT output;
    class WARN_REPORT detector;
    class ERR_REPORT detector;
```

Closes #471

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260321-195422-226999/temp/rectify/rectify_init-secrets-root-gitignore_2026-03-21_195422_part_a.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…ART A ONLY (#476)

## Summary

The config system had two crash-inducing architectural weaknesses:
`_log_secret_scan_bypass` wrote `safety.secret_scan_bypass_accepted`
into `config.yaml` — a key not in `SafetyConfig` — causing every
subsequent `load_config()` call to raise `ConfigSchemaError` after a
bypass-accepted `autoskillit init`. Additionally, when `github.token`
appeared in `config.yaml`, the error message provided no actionable fix
steps (no exact YAML to copy, no removal instruction). A third
structural gap: `_SECRETS_ONLY_KEYS` was manually maintained with no
test enforcing completeness against the config dataclasses.

The fixes are threefold: (1) route bypass timestamps to
`.autoskillit/.state.yaml` (never schema-validated), eliminating the
self-inflicted crash; (2) make `validate_layer_keys` self-diagnosing for
misplaced secrets — the error now includes the exact YAML block and a
removal instruction; (3) add `test_secrets_only_keys_completeness` as a
structural guard that fails if any secret-typed field in `GitHubConfig`
is absent from `_SECRETS_ONLY_KEYS`. Part B (write-time config
validation gateway and `autoskillit doctor` misplaced-secrets check) is
included in this branch.
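
The secrets-placement half of fix (2) can be sketched as below. The names
`_SECRETS_ONLY_KEYS`, `validate_layer_keys`, and `ConfigSchemaError` follow
the PR; the flattening helper and the exact error text are illustrative:

```python
_SECRETS_ONLY_KEYS = frozenset({"github.token"})


class ConfigSchemaError(Exception):
    """Raised when a config layer violates the schema contract."""


def _flatten(data: dict, prefix: str = "") -> dict:
    """Flatten nested dicts into dotted keys, e.g. {'github.token': ...}."""
    out: dict = {}
    for key, value in data.items():
        dotted = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(_flatten(value, f"{dotted}."))
        else:
            out[dotted] = value
    return out


def validate_layer_keys(layer: dict, *, is_secrets: bool, layer_path: str) -> None:
    """Reject secret-typed keys outside the secrets layer.

    The error is self-diagnosing: it names the offending dotted key,
    shows the exact YAML block to copy into .secrets.yaml, and tells
    the user to remove the key from the non-secrets layer.
    """
    if is_secrets:
        return  # secrets are allowed in .autoskillit/.secrets.yaml
    for dotted in sorted(_flatten(layer)):
        if dotted in _SECRETS_ONLY_KEYS:
            section, _, leaf = dotted.rpartition(".")
            raise ConfigSchemaError(
                f"Secret key '{dotted}' found in {layer_path}.\n"
                f"Move it to .autoskillit/.secrets.yaml:\n\n"
                f"{section}:\n  {leaf}: <value>\n\n"
                f"Then delete '{dotted}' from {layer_path}."
            )
```

Routing bypass timestamps to `.state.yaml` (fix 1) then follows naturally:
that file is simply never passed through `validate_layer_keys`, so nothing
written there can trip this gate.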

## Architecture Impact

### Error/Resilience Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    LOAD_CONFIG([load_config])
    WRITE_CONFIG([● write_config_layer])

    subgraph LayerPipeline ["LAYER LOAD PIPELINE (_make_dynaconf)"]
        direction TB
        DEFAULTS["defaults.yaml<br/>━━━━━━━━━━<br/>skip validation"]
        USER_CFG["~/.autoskillit/config.yaml<br/>━━━━━━━━━━<br/>validated, secrets disallowed"]
        PROJ_CFG[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>validated, secrets disallowed"]
        SECRETS_LAYER[".autoskillit/.secrets.yaml<br/>━━━━━━━━━━<br/>validated, secrets allowed"]
    end

    subgraph ValidationGate ["● VALIDATION GATE (validate_layer_keys)"]
        VLK{"validate<br/>layer keys"}
        ERR_TOP["unknown top key<br/>━━━━━━━━━━<br/>difflib suggestion"]
        ERR_SECRET["● secret in config.yaml<br/>━━━━━━━━━━<br/>exact YAML fix block<br/>removal instruction"]
        ERR_SUB["unknown sub-key<br/>━━━━━━━━━━<br/>difflib suggestion"]
    end

    subgraph WriteGate ["● WRITE-TIME GATE (write_config_layer)"]
        WCL["● write_config_layer<br/>━━━━━━━━━━<br/>validates BEFORE write"]
        WCL_FAIL["write blocked<br/>━━━━━━━━━━<br/>no file touched"]
        WCL_OK["atomic write<br/>━━━━━━━━━━<br/>schema-valid only"]
    end

    subgraph StateIsolation ["● STATE ISOLATION (_log_secret_scan_bypass)"]
        BYPASS["● _log_secret_scan_bypass<br/>━━━━━━━━━━<br/>bypass timestamp"]
        STATE_YAML[".autoskillit/.state.yaml<br/>━━━━━━━━━━<br/>never validated<br/>internal state"]
    end

    subgraph DoctorDiag ["● DOCTOR DIAGNOSTIC (_check_config_layers_for_secrets)"]
        DOCTOR["● _check_config_layers_for_secrets<br/>━━━━━━━━━━<br/>scans user + project config.yaml<br/>returns DoctorResult, never raises"]
        DOC_OK["DoctorResult.OK<br/>━━━━━━━━━━<br/>no secrets in config layers"]
        DOC_ERR["DoctorResult.ERROR<br/>━━━━━━━━━━<br/>actionable YAML fix<br/>+ removal step"]
    end

    T_SCHEMA_ERR([ConfigSchemaError])
    T_LOADED([AutomationConfig loaded])
    T_WRITTEN([file written])

    LOAD_CONFIG --> DEFAULTS
    DEFAULTS --> USER_CFG
    USER_CFG --> VLK
    VLK -->|"pass"| PROJ_CFG
    PROJ_CFG --> VLK
    VLK -->|"pass"| SECRETS_LAYER
    SECRETS_LAYER --> VLK
    VLK -->|"pass — all layers valid"| T_LOADED

    VLK -->|"unknown top key"| ERR_TOP
    VLK -->|"secret in non-secrets layer"| ERR_SECRET
    VLK -->|"unknown sub-key"| ERR_SUB
    ERR_TOP --> T_SCHEMA_ERR
    ERR_SECRET --> T_SCHEMA_ERR
    ERR_SUB --> T_SCHEMA_ERR

    WRITE_CONFIG --> WCL
    WCL -->|"validation fails"| WCL_FAIL
    WCL -->|"schema valid"| WCL_OK
    WCL_FAIL --> T_SCHEMA_ERR
    WCL_OK --> T_WRITTEN

    BYPASS --> STATE_YAML

    DOCTOR -->|"clean"| DOC_OK
    DOCTOR -->|"violation found"| DOC_ERR

    class LOAD_CONFIG,WRITE_CONFIG,T_SCHEMA_ERR,T_LOADED,T_WRITTEN terminal;
    class DEFAULTS stateNode;
    class USER_CFG,PROJ_CFG,SECRETS_LAYER handler;
    class VLK,WCL detector;
    class ERR_TOP,ERR_SUB,WCL_FAIL,DOC_ERR gap;
    class ERR_SECRET,BYPASS,DOCTOR,WCL newComponent;
    class STATE_YAML,WCL_OK,DOC_OK output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Entry points and final states |
| Orange | Handler | Config layer paths (validated) |
| Dark Teal | State/Output | Defaults layer, output states |
| Red | Detector | Validation gates |
| Yellow | Gap | Error paths: unknown keys, write blocked, doctor violation |
| Green | New/Modified | `ERR_SECRET` (actionable message), `_log_secret_scan_bypass`, `_check_config_layers_for_secrets`, `write_config_layer` |

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    subgraph StorageLocations ["STORAGE LOCATIONS — MUTATION RULES"]
        direction LR
        DEFAULTS_YAML["config/defaults.yaml<br/>━━━━━━━━━━<br/>DEFAULTS<br/>read-only baseline<br/>never validated"]
        CONFIG_YAML["● .autoskillit/config.yaml<br/>━━━━━━━━━━<br/>SCHEMA_LOCKED<br/>secrets forbidden<br/>validated on every load + write"]
        SECRETS_YAML[".autoskillit/.secrets.yaml<br/>━━━━━━━━━━<br/>SECRETS_ONLY<br/>secrets allowed here only<br/>validated on every load"]
        STATE_YAML["● .autoskillit/.state.yaml<br/>━━━━━━━━━━<br/>INTERNAL_STATE<br/>no schema enforcement<br/>never passed to validate_layer_keys"]
    end

    subgraph SecretContract ["● SECRET KEY CONTRACT (_SECRETS_ONLY_KEYS)"]
        SOK["● _SECRETS_ONLY_KEYS<br/>━━━━━━━━━━<br/>frozenset{'github.token'}<br/>re-exported from config/__init__.py<br/>used by validate_layer_keys + doctor"]
        SOK_TEST["● test_secrets_only_keys_completeness<br/>━━━━━━━━━━<br/>structural guard: asserts every<br/>token/key/secret field in GitHubConfig<br/>is in _SECRETS_ONLY_KEYS"]
    end

    subgraph LoadGate ["LOAD-TIME SCHEMA GATE (validate_layer_keys)"]
        VLK{"validate<br/>layer keys"}
        VLK_SCHEMA["● schema check<br/>━━━━━━━━━━<br/>unknown top/sub keys<br/>→ ConfigSchemaError"]
        VLK_SECRET["● secrets-placement check<br/>━━━━━━━━━━<br/>SECRETS_ONLY key in non-secrets layer<br/>→ actionable ConfigSchemaError"]
    end

    subgraph WriteGate2 ["● WRITE-TIME SCHEMA GATE (write_config_layer)"]
        WCL2["● write_config_layer<br/>━━━━━━━━━━<br/>validates before write<br/>schema-valid only"]
        WCL_BLOCK["write blocked<br/>━━━━━━━━━━<br/>no file modified"]
    end

    subgraph BypassRoute ["● BYPASS STATE ISOLATION (_log_secret_scan_bypass)"]
        BYPASS_FN["● _log_secret_scan_bypass<br/>━━━━━━━━━━<br/>writes bypass timestamp to .state.yaml<br/>never touches config.yaml"]
    end

    subgraph DoctorGate2 ["● DOCTOR GATE (_check_config_layers_for_secrets)"]
        DOCTOR2["● _check_config_layers_for_secrets<br/>━━━━━━━━━━<br/>reads _SECRETS_ONLY_KEYS<br/>scans user + project config.yaml"]
        DOC_CLEAN["DoctorResult.OK"]
        DOC_CORRUPT["DoctorResult.ERROR<br/>━━━━━━━━━━<br/>contract violation detected<br/>actionable fix guidance"]
    end

    T_SCHEMA_ERR2([ConfigSchemaError])
    T_LOADED2([AutomationConfig loaded])
    T_WRITTEN2([file written])

    DEFAULTS_YAML -->|"merged first, no validation"| VLK
    CONFIG_YAML -->|"validated, secrets forbidden"| VLK
    SECRETS_YAML -->|"validated, is_secrets=True"| VLK
    VLK -->|"pass"| T_LOADED2
    VLK -->|"unknown key"| VLK_SCHEMA
    VLK -->|"secret in config.yaml"| VLK_SECRET
    VLK_SCHEMA --> T_SCHEMA_ERR2
    VLK_SECRET --> T_SCHEMA_ERR2

    WCL2 -->|"schema valid"| T_WRITTEN2
    WCL2 -->|"validation fails"| WCL_BLOCK
    WCL_BLOCK --> T_SCHEMA_ERR2

    BYPASS_FN -->|"writes to"| STATE_YAML

    SOK -->|"drives"| VLK_SECRET
    SOK -->|"drives"| DOCTOR2
    SOK_TEST -.->|"prevents drift"| SOK

    DOCTOR2 -->|"clean"| DOC_CLEAN
    DOCTOR2 -->|"violation"| DOC_CORRUPT

    class DEFAULTS_YAML stateNode;
    class CONFIG_YAML,SECRETS_YAML handler;
    class STATE_YAML newComponent;
    class VLK detector;
    class VLK_SCHEMA gap;
    class VLK_SECRET,WCL2,BYPASS_FN,DOCTOR2,SOK,SOK_TEST newComponent;
    class WCL_BLOCK,DOC_CORRUPT gap;
    class DOC_CLEAN output;
    class T_SCHEMA_ERR2,T_LOADED2,T_WRITTEN2 terminal;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Teal | Defaults | Baseline layer (never validated) |
| Orange | Config/Secrets | Validated YAML storage locations |
| Green | New/Modified | `write_config_layer`, `_log_secret_scan_bypass`, `_SECRETS_ONLY_KEYS`, doctor check, `.state.yaml` |
| Red | Detector | `validate_layer_keys` gate |
| Yellow | Gap | Error paths: unknown key, write blocked, doctor violation |
| Dark Blue | Terminal | Entry/exit states and `ConfigSchemaError` |

### Operational Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 45, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    subgraph InitCmd ["autoskillit init (● modified)"]
        direction TB
        INIT_CMD["autoskillit init<br/>━━━━━━━━━━<br/>interactive project setup"]
        SCAN_CHECK["_check_secret_scanning<br/>━━━━━━━━━━<br/>detect pre-commit scanner<br/>or prompt for bypass consent"]
        BYPASS_OK["bypass accepted<br/>━━━━━━━━━━<br/>user typed exact phrase"]
        NO_BYPASS["scanner found / no consent<br/>━━━━━━━━━━<br/>normal path / abort"]
        LOG_BYPASS["● _log_secret_scan_bypass<br/>━━━━━━━━━━<br/>writes bypass timestamp<br/>to .state.yaml ONLY"]
        WRITE_CFG["write_config_layer<br/>━━━━━━━━━━<br/>writes user config to<br/>.autoskillit/config.yaml<br/>(schema-validated)"]
        REGISTER["_register_mcp_server<br/>━━━━━━━━━━<br/>writes ~/.claude.json"]
    end

    subgraph DoctorCmd ["autoskillit doctor (● modified)"]
        direction TB
        DOCTOR_CMD["autoskillit doctor<br/>━━━━━━━━━━<br/>runs 9 health checks<br/>always all, no early exit"]
        CHK1["Check 1–4: version, scanner,<br/>config existence, MCP server<br/>━━━━━━━━━━<br/>existing checks"]
        CHK4B["● Check 4b: _check_config_layers_for_secrets<br/>━━━━━━━━━━<br/>scans ~/.autoskillit/config.yaml<br/>+ .autoskillit/config.yaml<br/>for _SECRETS_ONLY_KEYS violations"]
        CHK5N["Check 5–9: hooks, git,<br/>worktree, recipe, branch<br/>━━━━━━━━━━<br/>existing checks"]
        DOCTOR_OK["All checks OK<br/>━━━━━━━━━━<br/>Severity.OK for each"]
        DOCTOR_ERR["● Severity.ERROR<br/>━━━━━━━━━━<br/>Shows: dotted key path,<br/>exact YAML fix block,<br/>removal instruction"]
    end

    subgraph ConfigFiles ["CONFIGURATION FILES"]
        direction TB
        USER_CFG2["~/.autoskillit/config.yaml<br/>━━━━━━━━━━<br/>user-level config<br/>SCHEMA_LOCKED<br/>secrets forbidden"]
        PROJ_CFG2[".autoskillit/config.yaml<br/>━━━━━━━━━━<br/>project-level config<br/>SCHEMA_LOCKED<br/>secrets forbidden"]
        SECRETS_FILE2[".autoskillit/.secrets.yaml<br/>━━━━━━━━━━<br/>secret keys here only<br/>e.g. github.token"]
        STATE_FILE2["● .autoskillit/.state.yaml<br/>━━━━━━━━━━<br/>internal operational state<br/>bypass timestamp<br/>never schema-validated"]
    end

    subgraph LoadConfig2 ["load_config (called by all commands)"]
        LC2["● load_config<br/>━━━━━━━━━━<br/>merges all layers<br/>validates each (except defaults)"]
        LC_ERR2["● ConfigSchemaError<br/>━━━━━━━━━━<br/>actionable: exact YAML fix<br/>+ removal instruction"]
    end

    INIT_CMD --> SCAN_CHECK
    SCAN_CHECK -->|"bypass phrase matched"| BYPASS_OK
    SCAN_CHECK -->|"scanner OK / no consent"| NO_BYPASS
    BYPASS_OK --> LOG_BYPASS
    LOG_BYPASS -->|"writes bypass timestamp"| STATE_FILE2
    NO_BYPASS --> WRITE_CFG
    WRITE_CFG -->|"schema-validated write"| PROJ_CFG2
    WRITE_CFG --> REGISTER

    DOCTOR_CMD --> CHK1
    CHK1 --> CHK4B
    CHK4B -->|"reads"| USER_CFG2
    CHK4B -->|"reads"| PROJ_CFG2
    CHK4B -->|"clean"| DOCTOR_OK
    CHK4B -->|"violation found"| DOCTOR_ERR
    CHK4B --> CHK5N

    LC2 -->|"validated layers"| USER_CFG2
    LC2 -->|"validated layers"| PROJ_CFG2
    LC2 -->|"validated layers"| SECRETS_FILE2
    LC2 -->|"secret in config.yaml"| LC_ERR2

    class INIT_CMD,DOCTOR_CMD cli;
    class SCAN_CHECK,NO_BYPASS,WRITE_CFG,REGISTER,CHK1,CHK5N handler;
    class BYPASS_OK,LOG_BYPASS,CHK4B,DOCTOR_ERR,LC_ERR2 newComponent;
    class USER_CFG2,PROJ_CFG2,SECRETS_FILE2 phase;
    class STATE_FILE2 newComponent;
    class DOCTOR_OK output;
    class LC2 detector;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | CLI | Entry-point commands |
| Orange | Handler | Existing processing steps |
| Green | New/Modified | `● _log_secret_scan_bypass`, `● _check_config_layers_for_secrets`, `● .state.yaml`, `● ConfigSchemaError` (actionable) |
| Purple | Config | Configuration file layers |
| Red | Detector | `load_config` schema validation gate |
| Dark Teal | Output | Success results |
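
The core scan behind check 4b can be sketched as follows (a minimal illustration, not the real `cli/_doctor.py` code — the `_walk` helper and the sample `_SECRETS_ONLY_KEYS` entries are assumptions):

```python
# Hypothetical sketch of the check-4b logic: walk each config layer and
# flag any dotted key path that belongs in .secrets.yaml instead.
from typing import Iterator

# Assumption: the real _SECRETS_ONLY_KEYS lives elsewhere; this entry is
# illustrative only.
_SECRETS_ONLY_KEYS = frozenset({"github.token"})

def _walk(data: dict, prefix: str = "") -> Iterator[tuple[str, object]]:
    """Yield (dotted_path, value) pairs for every leaf in a nested dict."""
    for key, value in data.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            yield from _walk(value, path)
        else:
            yield path, value

def find_secret_violations(layer: dict) -> list[str]:
    """Return dotted key paths that must move to .secrets.yaml."""
    return [path for path, _ in _walk(layer) if path in _SECRETS_ONLY_KEYS]

config = {"github": {"token": "ghp_abc"}, "runner": {"retries": 2}}
print(find_secret_violations(config))  # ['github.token']
```

The dotted path produced here is what the doctor check would echo back alongside the exact YAML fix block.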

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260322-002721-721157/temp/rectify/rectify_config-schema-immunity_2026-03-22_000000_part_a.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
… ONLY (#478)

## Summary

`ingredients_to_terminal` in `cli/_ansi.py` received a GFM markdown
string from `format_ingredients_table`, parsed it back into rows, then
computed column widths with `max(len(...))` — no ceiling. For
`implementation.yaml`, the `run_mode` description ran to 220+
characters, forcing every terminal line to 245–254 characters. The
architectural weakness is not just the missing cap: the terminal
rendering path took a roundabout route through a GFM-serialized string
when the structured `Recipe` object was already available at the call
site. This creates implicit coupling, duplicated width computations,
and a rendering contract that is enforced nowhere.

Part A introduces a `TerminalColumn` abstraction that makes width bounds
**structural** —
part of the column specification itself — and routes the terminal path
directly from
structured data, eliminating the fragile GFM round-trip.
`TerminalColumn` and
`_render_terminal_table` are placed in `core/` (L0) to fix a
pre-existing L1→L3 layer
violation where `pipeline/telemetry_fmt.py` was importing from
`cli/_ansi.py`.

## Architecture Impact

### Module Dependency Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 70, 'curve': 'basis'}}}%%
graph TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph L0 ["L0 — CORE (Foundation · stdlib only)"]
        direction LR
        CORE_TT["★ core/_terminal_table.py<br/>━━━━━━━━━━<br/>TerminalColumn NamedTuple<br/>_render_terminal_table()<br/>color-agnostic · pure stdlib"]
        CORE_INIT["● core/__init__.py<br/>━━━━━━━━━━<br/>re-exports TerminalColumn<br/>re-exports _render_terminal_table"]
    end

    subgraph L1 ["L1 — PIPELINE (Service Layer)"]
        direction LR
        TELEM["● pipeline/telemetry_fmt.py<br/>━━━━━━━━━━<br/>imports TerminalColumn from core<br/>format_token_table_terminal()<br/>format_timing_table_terminal()"]
    end

    subgraph L2 ["L2 — RECIPE (Domain Layer)"]
        direction LR
        API["● recipe/_api.py<br/>━━━━━━━━━━<br/>_build_ingredient_rows()<br/>format_ingredients_table()<br/>load_and_validate()"]
        RECIPE_INIT["● recipe/__init__.py<br/>━━━━━━━━━━<br/>re-exports _build_ingredient_rows"]
    end

    subgraph L3 ["L3 — CLI (Application Layer)"]
        direction LR
        CLI_TT["★ cli/_terminal_table.py<br/>━━━━━━━━━━<br/>re-export shim<br/>← imports from core/__init__"]
        ANSI["● cli/_ansi.py<br/>━━━━━━━━━━<br/>TerminalColumn (own copy·colored)<br/>_render_terminal_table (colored)<br/>ingredients_to_terminal()"]
        PROMPTS["● cli/_prompts.py<br/>━━━━━━━━━━<br/>show_cook_preview()<br/>← TYPE_CHECKING import<br/>_build_ingredient_rows"]
    end

    CORE_TT -->|"defines · re-exported by"| CORE_INIT
    CORE_INIT -->|"imports TerminalColumn<br/>_render_terminal_table"| TELEM
    API -->|"_build_ingredient_rows re-exported"| RECIPE_INIT
    CORE_INIT -->|"imports TerminalColumn<br/>_render_terminal_table"| CLI_TT
    RECIPE_INIT -->|"TYPE_CHECKING import<br/>_build_ingredient_rows"| PROMPTS
    ANSI -.->|"defines own copy<br/>(color-enhanced variant)"| ANSI

    class CORE_TT newComponent;
    class CORE_INIT stateNode;
    class TELEM phase;
    class API,RECIPE_INIT handler;
    class CLI_TT newComponent;
    class ANSI,PROMPTS cli;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Green | New Component | New files introduced by this PR (★) |
| Teal | Core Export | `core/__init__.py` — high fan-in re-export hub |
| Purple | Pipeline | `pipeline/telemetry_fmt.py` (L1) — now correctly imports from L0 |
| Orange | Recipe | `recipe/_api.py` and `recipe/__init__.py` (L2) |
| Dark Blue | CLI | `cli/_terminal_table.py`, `cli/_ansi.py`, `cli/_prompts.py` (L3) |
| Solid arrows | Valid | Downward dependencies (higher → lower layer) |

### Data Lineage Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart LR
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph Origin ["Data Origin"]
        YAML["implementation.yaml<br/>━━━━━━━━━━<br/>run_mode: 220-char desc<br/>structured YAML ingredients"]
        RECIPE["Recipe dataclass<br/>━━━━━━━━━━<br/>RecipeIngredient dict<br/>structured source of truth"]
    end

    subgraph TermPath ["★ Terminal Path (new structured route)"]
        direction TB
        ROWS["★ _build_ingredient_rows()<br/>━━━━━━━━━━<br/>recipe/_api.py<br/>list[tuple[str, str, str]]<br/>full-length descriptions"]
        COLS["● TerminalColumn spec<br/>━━━━━━━━━━<br/>core/_terminal_table.py<br/>max_desc=60, align=left"]
        RENDER["● ingredients_to_terminal()<br/>━━━━━━━━━━<br/>cli/_ansi.py<br/>accepts structured rows<br/>truncates at 60 chars + …"]
        ANSI["ANSI Terminal Output<br/>━━━━━━━━━━<br/>≤90 chars/line<br/>run_mode: 'Execution mode…'"]
    end

    subgraph LLMPath ["LLM Path (unchanged)"]
        direction TB
        GFM["format_ingredients_table()<br/>━━━━━━━━━━<br/>recipe/_api.py<br/>GFM markdown string<br/>full 220-char descriptions"]
        LOAD["load_and_validate()<br/>━━━━━━━━━━<br/>ingredients_table field<br/>in LoadRecipeResult"]
        LLM["Claude / LLM UI<br/>━━━━━━━━━━<br/>markdown renderer<br/>displays full text"]
    end

    YAML -->|"load_yaml() + parse"| RECIPE
    RECIPE -->|"★ _build_ingredient_rows()<br/>structured rows"| ROWS
    ROWS -->|"column spec applied"| COLS
    COLS -->|"_render_terminal_table()"| RENDER
    RENDER -->|"bounded ANSI output"| ANSI
    RECIPE -->|"format_ingredients_table()<br/>GFM string"| GFM
    GFM -->|"LoadRecipeResult"| LOAD
    LOAD -->|"MCP response"| LLM

    class YAML cli;
    class RECIPE stateNode;
    class ROWS,COLS newComponent;
    class RENDER handler;
    class ANSI output;
    class GFM phase;
    class LOAD integration;
    class LLM integration;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Origin | YAML source data |
| Teal | Source of Truth | `Recipe` dataclass — structured ingredient data |
| Green | New/Modified | `_build_ingredient_rows()`, `TerminalColumn` spec (★/●) |
| Orange | Transformer | `ingredients_to_terminal()` — structured-rows renderer |
| Dark Teal | Terminal Output | Bounded ANSI output (≤90 chars/line) |
| Purple | GFM Stage | `format_ingredients_table()` — LLM path, unchanged |
| Red | External Consumer | `load_and_validate()` MCP response + Claude UI |

### Operational Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;

    subgraph Entrypoint ["CLI ENTRY POINT"]
        ORDER["autoskillit order<br/>━━━━━━━━━━<br/>cli/app.py:order()<br/>recipe selection TUI"]
    end

    subgraph Preview ["★ PARAMETER PREVIEW (changed rendering)"]
        direction TB
        PREVIEW["● show_cook_preview()<br/>━━━━━━━━━━<br/>cli/_prompts.py<br/>calls _build_ingredient_rows()<br/>NOT format_ingredients_table()"]
        BUILD["★ _build_ingredient_rows()<br/>━━━━━━━━━━<br/>recipe/_api.py (re-exported)<br/>list[tuple[str,str,str]]<br/>full-length descriptions"]
        RENDER["● ingredients_to_terminal()<br/>━━━━━━━━━━<br/>cli/_ansi.py<br/>accepts structured rows<br/>max_desc_width=60"]
        COL_SPEC["★ TerminalColumn spec<br/>━━━━━━━━━━<br/>core/_terminal_table.py<br/>Name≤30  Desc≤60  Default≤20"]
    end

    subgraph Output ["TERMINAL OUTPUT"]
        TERM["● Bounded ANSI Table<br/>━━━━━━━━━━<br/>≤90 chars/line<br/>long descriptions → 'Execution…'<br/>columns stay aligned"]
    end

    subgraph Telemetry ["● TELEMETRY DISPLAY (also fixed)"]
        direction TB
        TOKEN["● format_token_table_terminal()<br/>━━━━━━━━━━<br/>pipeline/telemetry_fmt.py<br/>imports TerminalColumn from core/"]
        TIMING["● format_timing_table_terminal()<br/>━━━━━━━━━━<br/>pipeline/telemetry_fmt.py<br/>imports TerminalColumn from core/"]
        TEL_OUT["Telemetry Table Output<br/>━━━━━━━━━━<br/>step name ≤40 chars<br/>all columns bounded"]
    end

    ORDER -->|"recipe selected"| PREVIEW
    PREVIEW -->|"calls"| BUILD
    BUILD -->|"rows passed to"| RENDER
    COL_SPEC -->|"width constraints"| RENDER
    RENDER -->|"print()"| TERM
    ORDER -.->|"post-run telemetry"| TOKEN
    ORDER -.->|"post-run telemetry"| TIMING
    TOKEN -->|"print()"| TEL_OUT
    TIMING -->|"print()"| TEL_OUT

    class ORDER cli;
    class PREVIEW,RENDER handler;
    class BUILD,COL_SPEC newComponent;
    class TERM,TEL_OUT output;
    class TOKEN,TIMING phase;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | CLI Entry | `autoskillit order` command |
| Orange | Renderer | `show_cook_preview()`, `ingredients_to_terminal()` (●) |
| Green | New Component | `_build_ingredient_rows()`, `TerminalColumn` spec (★/●) |
| Dark Teal | Terminal Output | Bounded ANSI output — operator-visible result |
| Purple | Telemetry | `format_token_table_terminal()`, `format_timing_table_terminal()` (●) |

Closes #475

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260322-003904-444622/temp/rectify/rectify_order-parameter-table-formatting_2026-03-22_000000_part_a.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

This PR completes the relocation of all temp directory references from
bare `temp/` paths to `.autoskillit/temp/` throughout the codebase,
addressing issue #468. The changes span skill SKILL.md instructions,
recipe YAML files, Python source strings, tests, and documentation, with
the final gap being a one-line fix to a prose `note:` field in
`remediation.yaml` that still referenced `temp/diagnose-ci/`. A
regression test is added to `tests/recipe/test_bundled_recipes.py` to
guard against bare `temp/` paths re-appearing in bundled recipe note
fields.

<details>
<summary>Individual Group Plans</summary>

### Group 1: Implementation Plan: Relocate Temp Directory from `temp/`
to `.autoskillit/temp/`
`ensure_project_temp()` in `core/io.py` already creates and returns
`.autoskillit/temp/` — the runtime is already correct. The remaining
work is a systematic text-replacement across skill SKILL.md
instructions, recipe YAML files, Python source strings, tests, and
documentation: every reference to `temp/skill-name/` must become
`.autoskillit/temp/skill-name/`.

### Group 2: Implementation Plan: Relocate temp/ Path in
remediation.yaml diagnose-ci Note
A single note field in `src/autoskillit/recipes/remediation.yaml` (line
864) still references `temp/diagnose-ci/` instead of
`.autoskillit/temp/diagnose-ci/`. This is the sole remaining gap from
the `relocate-temp-directory` implementation (issue #468), as identified
by the `audit-impl` remediation report. The fix is a one-line change to
the prose `note:` field documenting where the `diagnose_ci` step writes
its output. A regression test is also added to
`tests/recipe/test_bundled_recipes.py` to ensure no bundled recipe YAML
note fields carry bare `temp/` paths in the future.

</details>

## Requirements

### CORE

- **REQ-CORE-001:** The `ensure_project_temp` function must return a
path rooted at `.autoskillit/temp/` relative to the project root, not
`temp/`.
- **REQ-CORE-002:** The `.autoskillit/temp/` directory must be created
automatically if it does not exist when `ensure_project_temp` is called.
- **REQ-CORE-003:** All production code that constructs temp file paths
must use `ensure_project_temp` as the single source of truth — no
hardcoded `temp/` literals.
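
A minimal sketch of that contract (the real `core/io.py` function also maintains `.autoskillit/.gitignore` entries, which is omitted here):

```python
# REQ-CORE-001/002 in miniature: the returned path is always rooted at
# .autoskillit/temp/ under the project root, created on demand.
from pathlib import Path

def ensure_project_temp(project_root: Path) -> Path:
    temp_dir = project_root / ".autoskillit" / "temp"
    temp_dir.mkdir(parents=True, exist_ok=True)  # REQ-CORE-002
    return temp_dir                              # REQ-CORE-001
```

Callers that need a skill-specific subdirectory would build it off this return value rather than spelling out `temp/` themselves (REQ-CORE-003).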

### GIT

- **REQ-GIT-001:** The project `.gitignore` must include a pattern that
excludes `.autoskillit/temp/` from version control.
- **REQ-GIT-002:** The legacy `temp/` gitignore entry may be retained
for backward compatibility but must not be the primary mechanism.

### RECIPE

- **REQ-RECIPE-001:** All bundled recipe YAML files that reference
`temp/` paths must be updated to reference `.autoskillit/temp/`.
- **REQ-RECIPE-002:** All bundled skill SKILL.md files that reference
`temp/` output paths must be updated to reference `.autoskillit/temp/`.
- **REQ-RECIPE-003:** Recipe validation and semantic rules that check
temp directory paths must recognize `.autoskillit/temp/` as the
canonical location.

### DOCS

- **REQ-DOCS-001:** The CLAUDE.md "Temporary Files" rule must reference
`.autoskillit/temp/` as the required destination for temp files.

### TEST

- **REQ-TEST-001:** All existing tests that assert on temp directory
paths must be updated to expect `.autoskillit/temp/`.
- **REQ-TEST-002:** A test must verify that `ensure_project_temp`
returns a path under `.autoskillit/temp/`.
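
The regression guard can be sketched roughly like this (the helper name and traversal are assumptions about the real test in `tests/recipe/test_bundled_recipes.py`):

```python
# Collect note strings that reference bare temp/ — i.e. temp/ not
# preceded by a path character, so .autoskillit/temp/ does not match.
import re

_BARE_TEMP = re.compile(r"(?<![\w./])temp/")

def bare_temp_violations(recipe: dict) -> list[str]:
    """Return every note field in the recipe that uses a bare temp/ path."""
    hits: list[str] = []

    def walk(node: object) -> None:
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "note" and isinstance(value, str) \
                        and _BARE_TEMP.search(value):
                    hits.append(value)
                else:
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(recipe)
    return hits
```

A test would load each bundled recipe YAML and assert this returns an empty list, so a bare `temp/` path in any future note fails CI.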

## Architecture Impact

### Operational Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    subgraph Entry ["CLI ENTRY POINTS"]
        COOK["autoskillit cook<br/>━━━━━━━━━━<br/>Interactive skill session<br/>skill execution"]
        ORDER["autoskillit order<br/>━━━━━━━━━━<br/>Recipe pipeline<br/>orchestration"]
    end

    subgraph Validation ["● PATH VALIDATION HOOK"]
        HOOK["● skill_cmd_check.py<br/>━━━━━━━━━━<br/>_PATH_PREFIXES:<br/>('/', './', '.autoskillit/')"]
        REJECT["bare temp/ rejected<br/>━━━━━━━━━━<br/>no longer a valid<br/>path prefix"]
    end

    subgraph Config ["● RECIPE CONFIGURATION"]
        IMPL["● implementation.yaml<br/>━━━━━━━━━━<br/>diagnose-ci paths<br/>updated"]
        MERGE["● merge-prs.yaml<br/>━━━━━━━━━━<br/>plans_dir default:<br/>.autoskillit/temp/merge-prs"]
        REMED["● remediation.yaml<br/>━━━━━━━━━━<br/>diagnose-ci note<br/>path corrected"]
        IMPLG["● implementation-groups.yaml<br/>━━━━━━━━━━<br/>group plan paths<br/>updated"]
    end

    subgraph Foundation ["CORE FOUNDATION (unchanged)"]
        EPT["ensure_project_temp()<br/>━━━━━━━━━━<br/>creates .autoskillit/temp/<br/>manages .gitignore entries"]
    end

    subgraph Skills ["● SKILL OUTPUT PATHS (58 SKILL.md files)"]
        SKILLOUT["● SKILL.md instructions<br/>━━━━━━━━━━<br/>output → .autoskillit/temp/skill-name/<br/>(was: temp/skill-name/)"]
    end

    subgraph Artifacts ["ARTIFACT STORAGE"]
        TEMPDIR[".autoskillit/temp/<br/>━━━━━━━━━━<br/>.autoskillit/temp/make-plan/<br/>.autoskillit/temp/investigate/<br/>.autoskillit/temp/open-pr/<br/>..."]
        GITIGNORE[".autoskillit/.gitignore<br/>━━━━━━━━━━<br/>entry: temp/<br/>(relative — correct as-is)"]
    end

    COOK -->|"run_skill"| HOOK
    ORDER -->|"run_skill"| HOOK
    HOOK -->|"valid .autoskillit/ path"| SKILLOUT
    HOOK -->|"invalid bare temp/"| REJECT
    SKILLOUT -->|"writes to"| TEMPDIR
    EPT -->|"creates"| TEMPDIR
    TEMPDIR --> GITIGNORE
    MERGE -->|"default plans_dir"| TEMPDIR
    IMPL -->|"diagnose path"| TEMPDIR
    REMED -->|"note corrected"| TEMPDIR
    IMPLG -->|"group paths"| TEMPDIR

    class COOK,ORDER cli;
    class HOOK,REJECT detector;
    class EPT stateNode;
    class SKILLOUT phase;
    class TEMPDIR,GITIGNORE output;
    class IMPL,MERGE,REMED,IMPLG handler;
```

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([PreToolUse: run_skill invoked])
    ALLOW([EXIT 0 — allowed])
    DENY([EXIT 0 + JSON deny — blocked])

    subgraph Parse ["PARSE PHASE"]
        direction TB
        READ["Read stdin JSON<br/>━━━━━━━━━━<br/>extract tool_input<br/>extract skill_command"]
        PARSESKILL{"skill_command<br/>present?"}
        PARSENAME{"skill in<br/>PATH_ARG_SKILLS?<br/>━━━━━━━━━━<br/>(implement-worktree,<br/>implement-worktree-no-merge,<br/>retry-worktree,<br/>resolve-failures)"}
    end

    subgraph Validate ["● PATH VALIDATION PHASE"]
        direction TB
        SPLIT["Split args into tokens"]
        FIRSTPATH{"first token<br/>path-like?<br/>━━━━━━━━━━<br/>● _PATH_PREFIXES:<br/>('/', './', '.autoskillit/')<br/>(was: + 'temp/')"}
        LATERPATH{"any later token<br/>path-like?"}
        BUILDFIX["Build correction:<br/>skill_name + path_token"]
    end

    %% FLOW %%
    START --> READ
    READ --> PARSESKILL
    PARSESKILL -->|"no"| ALLOW
    PARSESKILL -->|"yes"| PARSENAME
    PARSENAME -->|"not a path-arg skill"| ALLOW
    PARSENAME -->|"is path-arg skill"| SPLIT
    SPLIT --> FIRSTPATH
    FIRSTPATH -->|"yes — correct format"| ALLOW
    FIRSTPATH -->|"no — first token not path-like"| LATERPATH
    LATERPATH -->|"no path found anywhere<br/>(allow — skill Step 0 handles)"| ALLOW
    LATERPATH -->|"path found but not first<br/>(anti-pattern detected)"| BUILDFIX
    BUILDFIX --> DENY

    %% CLASS ASSIGNMENTS %%
    class START,ALLOW,DENY terminal;
    class READ,SPLIT handler;
    class PARSESKILL,PARSENAME stateNode;
    class FIRSTPATH,LATERPATH detector;
    class BUILDFIX phase;
```
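
The prefix test at the heart of the validation phase reduces to a `startswith` check (illustrative — the real `skill_cmd_check.py` does more parsing around it):

```python
# A token counts as path-like only if it starts with one of the accepted
# prefixes; bare temp/ is no longer among them.
_PATH_PREFIXES = ("/", "./", ".autoskillit/")

def is_path_like(token: str) -> bool:
    return token.startswith(_PATH_PREFIXES)

print(is_path_like(".autoskillit/temp/make-plan/plan.md"))  # True
print(is_path_like("temp/make-plan/plan.md"))               # False
```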

Closes #468

## Implementation Plan

Plan files:
-
`/home/talon/projects/autoskillit-runs/impl-20260322-120405-819376/temp/make-plan/relocate_temp_directory_plan_2026-03-22_120000.md`
-
`/home/talon/projects/autoskillit-runs/impl-20260322-120405-819376/temp/make-plan/relocate_temp_directory_remediation_plan_2026-03-22_125200.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
… B ONLY (#483)

## Summary

Part A added the `advisory-step-missing-context-limit` semantic rule and
fixed the `review` step in three recipes. After Part A, the rule still
emitted WARNINGs for three other advisory steps (`audit_impl`,
`open_pr_step`, `ci_conflict_fix`) in each of the three main recipes.
Part B resolves those remaining WARNINGs by adding explicit
`on_context_limit` routes, then tightens the test gate so that zero
advisory-step WARNINGs are permitted in bundled recipes going forward.
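
The added routes have the shape below (a hedged recipe fragment — the exact step schema is assumed; step names and route targets are taken from the diagrams):

```yaml
audit_impl:
  run_skill: audit-impl
  skip_when_false: inputs.audit
  on_context_limit: escalate_stop          # merge gate: abort rather than merge unapproved

open_pr_step:
  run_skill: open-pr
  skip_when_false: inputs.open_pr
  on_context_limit: release_issue_failure  # PR state unknown: release the issue

ci_conflict_fix:
  run_skill: ci-conflict-fix
  skip_when_false: detect_ci_conflicts
  on_context_limit: release_issue_failure  # incomplete fix: pushing is unsafe
```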

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;

    START([START])
    ESCALATE([ESCALATE_STOP])
    RELEASE_FAIL([RELEASE_ISSUE_FAILURE])

    subgraph AdvisoryGate ["Advisory Step Gate (skip_when_false)"]
        direction TB
        REVIEW["● review<br/>━━━━━━━━━━<br/>skip_when_false: review_approach<br/>retries: 1<br/>on_context_limit: dry_walkthrough"]
        AUDIT["● audit_impl<br/>━━━━━━━━━━<br/>skip_when_false: inputs.audit<br/>on_context_limit: escalate_stop"]
        OPEN_PR["● open_pr_step<br/>━━━━━━━━━━<br/>skip_when_false: inputs.open_pr<br/>on_context_limit: release_issue_failure"]
        CI_FIX["● ci_conflict_fix<br/>━━━━━━━━━━<br/>skip_when_false: detect_ci_conflicts<br/>on_context_limit: release_issue_failure"]
    end

    subgraph SemanticRule ["● advisory-step-missing-context-limit Rule"]
        direction LR
        RULE["advisory-step-missing-context-limit<br/>━━━━━━━━━━<br/>WARNING: run_skill + skip_when_false<br/>but no on_context_limit"]
        RULECHECK{"on_context_limit<br/>set?"}
    end

    subgraph NormalFlow ["Normal Execution Path"]
        direction TB
        DRY["dry_walkthrough<br/>━━━━━━━━━━<br/>retries: 3"]
        TEST["test"]
        MERGE["merge"]
        PUSH["push"]
        CI_WATCH["ci_watch"]
    end

    subgraph AuditVerdict ["Audit Verdict Routing"]
        direction LR
        VERDICT{"audit verdict"}
        REMEDIATE["remediate → make_plan"]
    end

    START --> REVIEW
    REVIEW -->|"on_success"| DRY
    REVIEW -->|"on_context_limit (●new)"| DRY
    REVIEW -->|"on_failure"| RELEASE_FAIL

    DRY --> TEST
    TEST --> AUDIT

    AUDIT -->|"on_result: GO"| MERGE
    AUDIT --> VERDICT
    VERDICT -->|"NO GO"| REMEDIATE
    VERDICT -->|"error"| ESCALATE
    AUDIT -->|"on_context_limit (●new)"| ESCALATE
    AUDIT -->|"on_failure"| ESCALATE
    AUDIT -->|"skipped"| MERGE

    REMEDIATE --> DRY

    MERGE --> PUSH
    PUSH --> OPEN_PR
    OPEN_PR -->|"on_success"| CI_WATCH
    OPEN_PR -->|"on_context_limit (●new)"| RELEASE_FAIL
    OPEN_PR -->|"on_failure"| RELEASE_FAIL
    OPEN_PR -->|"skipped"| CI_WATCH

    CI_WATCH -->|"on_success"| MERGE
    CI_WATCH -->|"on_failure → detect_ci_conflict"| CI_FIX
    CI_FIX -->|"on_success"| PUSH
    CI_FIX -->|"on_context_limit (●new)"| RELEASE_FAIL
    CI_FIX -->|"on_failure"| RELEASE_FAIL

    RULE --> RULECHECK
    RULECHECK -->|"missing"| RULE
    RULECHECK -->|"present"| RULE

    %% CLASS ASSIGNMENTS %%
    class START terminal;
    class ESCALATE,RELEASE_FAIL terminal;
    class REVIEW,AUDIT,OPEN_PR,CI_FIX handler;
    class DRY,TEST,MERGE,PUSH,CI_WATCH phase;
    class VERDICT stateNode;
    class REMEDIATE detector;
    class RULE,RULECHECK output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | START, ESCALATE_STOP, RELEASE_ISSUE_FAILURE |
| Orange | Handler | ● Modified advisory steps (review, audit_impl, open_pr_step, ci_conflict_fix) |
| Purple | Phase | Normal execution steps (dry_walkthrough, test, merge, push, ci_watch) |
| Teal | State | Decision/routing nodes |
| Red | Detector | Remediation loop |
| Dark Teal | Output | Semantic rule nodes |

### Error/Resilience Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    subgraph SemanticGate ["● Semantic Validation Gate (rules_worktree.py)"]
        direction LR
        RULE["● advisory-step-missing-context-limit<br/>━━━━━━━━━━<br/>run_skill + skip_when_false<br/>but no on_context_limit → WARNING"]
        RCHECK{"on_context_limit<br/>declared?"}
        RULE_OK["Rule passes<br/>━━━━━━━━━━<br/>step is compliant"]
    end

    subgraph ReviewResilience ["● review step (skip_when_false: review_approach, retries: 1)"]
        direction TB
        REVIEW_RUN["run_skill: review-approach<br/>━━━━━━━━━━<br/>advisory; can be skipped"]
        REVIEW_STALE["context exhausted / stale"]
        REVIEW_RETRY{"retries<br/>remaining?"}
        REVIEW_SKIP["● on_context_limit: dry_walkthrough<br/>━━━━━━━━━━<br/>skip to next step (safe)"]
    end

    subgraph AuditResilience ["● audit_impl step (skip_when_false: inputs.audit) — merge gate"]
        direction TB
        AUDIT_RUN["run_skill: audit-impl<br/>━━━━━━━━━━<br/>merge gate; GO/NO GO verdict"]
        AUDIT_FAIL["context exhausted<br/>━━━━━━━━━━<br/>verdict unavailable"]
        AUDIT_LIMIT["● on_context_limit: escalate_stop<br/>━━━━━━━━━━<br/>abort — unapproved merge unsafe"]
    end

    subgraph PRResilience ["● open_pr_step + ci_conflict_fix (skip_when_false guarded)"]
        direction TB
        OPENPR_RUN["run_skill: open-pr<br/>━━━━━━━━━━<br/>skip_when_false: inputs.open_pr"]
        OPENPR_LIMIT["● on_context_limit: release_issue_failure<br/>━━━━━━━━━━<br/>PR state unknown; safe release"]
        CIFIX_RUN["run_skill: ci-conflict-fix<br/>━━━━━━━━━━<br/>skip_when_false: detect_ci_conflicts<br/>retries: 1"]
        CIFIX_LIMIT["● on_context_limit: release_issue_failure<br/>━━━━━━━━━━<br/>incomplete fix; push unsafe"]
    end

    T_SKIP([dry_walkthrough — continue])
    T_ABORT([ESCALATE_STOP])
    T_RELEASE([RELEASE_ISSUE_FAILURE])

    %% SEMANTIC GATE %%
    RULE --> RCHECK
    RCHECK -->|"missing"| RULE
    RCHECK -->|"present"| RULE_OK

    %% REVIEW RESILIENCE %%
    REVIEW_RUN -->|"stale / context limit"| REVIEW_STALE
    REVIEW_STALE --> REVIEW_RETRY
    REVIEW_RETRY -->|"yes (retries: 1)"| REVIEW_RUN
    REVIEW_RETRY -->|"exhausted"| REVIEW_SKIP
    REVIEW_SKIP --> T_SKIP

    %% AUDIT RESILIENCE %%
    AUDIT_RUN -->|"context exhausted"| AUDIT_FAIL
    AUDIT_FAIL --> AUDIT_LIMIT
    AUDIT_LIMIT --> T_ABORT

    %% PR RESILIENCE %%
    OPENPR_RUN -->|"context exhausted"| OPENPR_LIMIT
    OPENPR_LIMIT --> T_RELEASE
    CIFIX_RUN -->|"context exhausted<br/>(after retries)"| CIFIX_LIMIT
    CIFIX_LIMIT --> T_RELEASE

    %% CLASS ASSIGNMENTS %%
    class RULE,RCHECK detector;
    class RULE_OK output;
    class REVIEW_RUN,AUDIT_RUN,OPENPR_RUN,CIFIX_RUN handler;
    class REVIEW_STALE,AUDIT_FAIL gap;
    class REVIEW_RETRY stateNode;
    class REVIEW_SKIP,AUDIT_LIMIT,OPENPR_LIMIT,CIFIX_LIMIT output;
    class T_SKIP,T_ABORT,T_RELEASE terminal;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Red | Semantic Gate | Validation rule that detects missing on_context_limit |
| Dark Teal | Output | Recovery routing destinations (● new on_context_limit routes) |
| Orange | Handler | Advisory run_skill steps that were modified |
| Yellow | Failed | Context-exhaustion failure states |
| Teal | Circuit | Retry-remaining decision |
| Dark Blue | Terminal | Final routing destinations |

Closes #481

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260322-120142-791819/temp/rectify/rectify_review-step-context-limit-routing_2026-03-22_120142_part_b.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

Part A introduced `BackgroundTaskSupervisor` to supervise the
`report_bug` background task. Part B hardens the broader architecture so
the class of bugs — **unmonitored fire-and-forget tasks** and
**unenforced "Never raises" contracts** — is impossible to reintroduce
without tests failing.
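
The supervision pattern from Part A looks roughly like this (an assumed shape for `pipeline/background.py`, based on the concurrency diagram — method names beyond `submit` and the supervision wrapper are illustrative):

```python
# Keep a strong reference to each task, discard it on completion, and
# catch exceptions inside the wrapper so a failed background task never
# surfaces as an unhandled task error in the event loop.
import asyncio
import logging
from collections.abc import Coroutine

logger = logging.getLogger(__name__)

class BackgroundTaskSupervisor:
    def __init__(self) -> None:
        self._tasks: set[asyncio.Task] = set()

    async def _supervise_task(self, coro: Coroutine) -> None:
        try:
            await coro
        except Exception:  # contract: never raises into the loop
            logger.exception("background task failed")

    def submit(self, coro: Coroutine) -> asyncio.Task:
        task = asyncio.create_task(self._supervise_task(coro))
        self._tasks.add(task)                        # strong ref: no mid-flight GC
        task.add_done_callback(self._tasks.discard)  # auto-cleanup on completion
        return task
```

The strong reference in `_tasks` matters: without it, a garbage-collected `asyncio.Task` can be cancelled mid-flight.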

Two structural safeguards:

1. **Extend the anyio migration arch test to cover `server/`** — adds
`asyncio.create_task` to the list of banned primitives in server code,
so any future fire-and-forget introduction breaks CI immediately.

2. **Structural "Never raises" contract enforcement** — adds an arch
test that finds all functions claiming "Never raises" in their
docstrings and asserts they have a top-level `try:/except Exception:`
block. Applies the missing guard to `_file_or_update_github_issue`, the
other unenforced "Never raises" function in `tools_github.py`.
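
The "Never raises" arch test can be sketched with `ast` (assumed mechanics — the real test walks actual source files rather than strings):

```python
# Find every function whose docstring claims "Never raises" and report
# those whose body is not wrapped in a top-level try/except Exception.
import ast
import textwrap

def unguarded_never_raises(source: str) -> list[str]:
    """Names of functions claiming 'Never raises' without a guard."""
    offenders: list[str] = []
    tree = ast.parse(textwrap.dedent(source))
    for node in ast.walk(tree):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        if "Never raises" not in (ast.get_docstring(node) or ""):
            continue
        body = node.body[1:]  # docstring is always node.body[0] here
        guarded = (
            len(body) == 1
            and isinstance(body[0], ast.Try)
            and any(isinstance(h.type, ast.Name) and h.type.id == "Exception"
                    for h in body[0].handlers)
        )
        if not guarded:
            offenders.append(node.name)
    return offenders
```

An arch test built on this would assert the offender list is empty across the package, which is what forces `_file_or_update_github_issue` to carry the guard.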

## Architecture Impact

### Concurrency Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;

    START([report_bug called])

    subgraph EventLoop ["ASYNCIO EVENT LOOP (Single Thread)"]
        direction TB
        GATE["● Gate Check<br/>━━━━━━━━━━<br/>_require_enabled()"]
        SEV{"severity?"}

        subgraph BlockingPath ["BLOCKING PATH"]
            direction TB
            BLOCK_AWAIT["await _run_report_session()<br/>━━━━━━━━━━<br/>Runs inline, caller waits"]
        end

        subgraph NonBlockingPath ["NON-BLOCKING PATH"]
            direction TB
            STATUS_FILE["Write status.json<br/>━━━━━━━━━━<br/>status: pending"]
            SUBMIT["● tool_ctx.background.submit()<br/>━━━━━━━━━━<br/>Schedules coroutine as Task"]
            RETURN_DISP["Return 'dispatched'<br/>━━━━━━━━━━<br/>Caller unblocked immediately"]
        end
    end

    subgraph Supervisor ["★ BackgroundTaskSupervisor (pipeline/background.py)"]
        direction TB
        CREATE_TASK["asyncio.create_task(_supervise_task)<br/>━━━━━━━━━━<br/>Task added to _tasks set"]
        DONE_CB["task.add_done_callback<br/>━━━━━━━━━━<br/>_tasks.discard — auto-cleanup"]
        SUPERVISE["_supervise_task(coro)<br/>━━━━━━━━━━<br/>Wraps coro in try/except"]
    end

    subgraph BackgroundTask ["BACKGROUND TASK (asyncio.Task — runs concurrently)"]
        direction TB
        RUN_CORO["await _run_report_session()<br/>━━━━━━━━━━<br/>Headless Claude session"]
        EXC_CHECK{"outcome?"}
        SUCCESS_PATH["Return result dict<br/>━━━━━━━━━━<br/>Normal completion"]
        CANCEL_PATH["Write 'cancelled'<br/>━━━━━━━━━━<br/>Re-raise CancelledError"]
        FAIL_PATH["Log error<br/>━━━━━━━━━━<br/>Write 'failed' status.json<br/>Record to AuditStore<br/>Call on_exception callback"]
    end

    subgraph SharedState ["SHARED STATE (thread-safe via asyncio)"]
        direction TB
        TASKS_SET["★ _tasks: set[asyncio.Task]<br/>━━━━━━━━━━<br/>Protected by event loop GIL"]
        AUDIT_LOG["● AuditStore<br/>━━━━━━━━━━<br/>record_failure() on exception"]
        STATUS_JSON["★ status.json files<br/>━━━━━━━━━━<br/>atomic_write — never races"]
    end

    DRAIN["drain() — asyncio.gather(*_tasks)<br/>━━━━━━━━━━<br/>Await all → tests / shutdown"]

    START --> GATE
    GATE --> SEV
    SEV -->|"blocking"| BLOCK_AWAIT
    SEV -->|"non_blocking"| STATUS_FILE
    STATUS_FILE --> SUBMIT
    SUBMIT --> RETURN_DISP
    SUBMIT --> CREATE_TASK
    CREATE_TASK --> DONE_CB
    CREATE_TASK --> SUPERVISE
    DONE_CB -.->|"on done"| TASKS_SET
    SUPERVISE --> RUN_CORO
    RUN_CORO --> EXC_CHECK
    EXC_CHECK -->|"success"| SUCCESS_PATH
    EXC_CHECK -->|"CancelledError"| CANCEL_PATH
    EXC_CHECK -->|"Exception"| FAIL_PATH
    CREATE_TASK --> TASKS_SET
    FAIL_PATH --> AUDIT_LOG
    FAIL_PATH --> STATUS_JSON
    CANCEL_PATH --> STATUS_JSON
    BLOCK_AWAIT --> STATUS_JSON
    TASKS_SET --> DRAIN

    class START terminal;
    class GATE handler;
    class SEV stateNode;
    class BLOCK_AWAIT phase;
    class STATUS_FILE,RETURN_DISP output;
    class SUBMIT handler;
    class CREATE_TASK,DONE_CB,SUPERVISE newComponent;
    class RUN_CORO phase;
    class EXC_CHECK detector;
    class SUCCESS_PATH output;
    class CANCEL_PATH,FAIL_PATH detector;
    class TASKS_SET,AUDIT_LOG,STATUS_JSON newComponent;
    class DRAIN stateNode;
```

### Error/Resilience Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;

    CALL_START([report_bug tool called])

    subgraph ArchGuards ["★ ARCH TEST GUARDS (CI Prevention Layer)"]
        direction LR
        ARCH1["★ test_never_raises_contracts<br/>━━━━━━━━━━<br/>AST-scans server/ for 'Never raises'<br/>functions without top-level try/except<br/>→ fails CI on violation"]
        ARCH2["● test_server_has_no_asyncio_create_task<br/>━━━━━━━━━━<br/>Bans asyncio.create_task in server/<br/>→ fails CI if fire-and-forget reintroduced"]
    end

    subgraph SessionLayer ["_run_report_session() — EXISTING GUARD"]
        direction TB
        RUN_TRY["try: (top-level)<br/>━━━━━━━━━━<br/>Entire session execution wrapped"]
        SESSION_OK["Return result dict<br/>━━━━━━━━━━<br/>success=True"]
        SESSION_EXC["except Exception<br/>━━━━━━━━━━<br/>logger.error() + write status 'failed'<br/>Return {success: False}"]
    end

    subgraph GitHubLayer ["● _file_or_update_github_issue() — NEW GUARD"]
        direction TB
        GH_TRY["try: (top-level — ● NEW)<br/>━━━━━━━━━━<br/>All GitHub API calls wrapped"]
        GH_CONFIG["Check default_repo config<br/>━━━━━━━━━━<br/>Returns {skipped} if not set"]
        GH_SEARCH["search_issues()<br/>━━━━━━━━━━<br/>GitHubFetcher — never raises"]
        GH_ACT{"duplicate?"}
        GH_COMMENT["add_comment()"]
        GH_CREATE["create_issue()"]
        GH_EXC["except Exception (● NEW)<br/>━━━━━━━━━━<br/>logger.error()<br/>Return {skipped: True, reason: ...}"]
    end

    subgraph SupervisorLayer ["★ BackgroundTaskSupervisor._supervise_task()"]
        direction TB
        SUP_AWAIT["await coro (_run_report_session)<br/>━━━━━━━━━━<br/>Background execution"]
        SUP_CANCEL["except CancelledError<br/>━━━━━━━━━━<br/>Write 'cancelled' status.json<br/>Re-raise (propagates normally)"]
        SUP_EXC["except Exception<br/>━━━━━━━━━━<br/>logger.error()<br/>Write 'failed' status.json<br/>AuditStore.record_failure()<br/>on_exception callback"]
    end

    subgraph StatusFiles ["STATUS FILE OUTPUTS"]
        direction LR
        STATUS_CANCELLED["status.json<br/>{'status': 'cancelled'}"]
        STATUS_FAILED["status.json<br/>{'status': 'failed', 'error': ...}"]
        STATUS_SUCCESS["status.json<br/>{'status': 'success'}"]
    end

    AUDIT["● AuditStore<br/>━━━━━━━━━━<br/>record_failure(subtype='background_exception')"]

    CALL_START --> RUN_TRY
    RUN_TRY --> SESSION_OK
    RUN_TRY --> GH_TRY
    RUN_TRY -->|"Exception"| SESSION_EXC
    SESSION_EXC --> STATUS_FAILED
    GH_TRY --> GH_CONFIG
    GH_CONFIG --> GH_SEARCH
    GH_SEARCH --> GH_ACT
    GH_ACT -->|"yes"| GH_COMMENT
    GH_ACT -->|"no"| GH_CREATE
    GH_TRY -->|"Exception (NEW)"| GH_EXC
    SUP_AWAIT --> SESSION_OK
    SUP_AWAIT -->|"CancelledError"| SUP_CANCEL
    SUP_AWAIT -->|"Exception"| SUP_EXC
    SUP_CANCEL --> STATUS_CANCELLED
    SUP_EXC --> STATUS_FAILED
    SUP_EXC --> AUDIT
    RUN_TRY -.->|"called by"| SUP_AWAIT

    class CALL_START terminal;
    class ARCH1 newComponent;
    class ARCH2 handler;
    class RUN_TRY,SESSION_OK phase;
    class SESSION_EXC detector;
    class GH_TRY,GH_CONFIG,GH_SEARCH phase;
    class GH_ACT stateNode;
    class GH_COMMENT,GH_CREATE handler;
    class GH_EXC detector;
    class SUP_AWAIT phase;
    class SUP_CANCEL,SUP_EXC detector;
    class STATUS_CANCELLED,STATUS_FAILED,STATUS_SUCCESS output;
    class AUDIT newComponent;
```

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;

    START([report_bug called])
    DONE_BLOCK([COMPLETE — blocking])
    DONE_NBCK([DISPATCHED — non_blocking])
    DONE_GATE([ERROR — kitchen closed])

    subgraph Validation ["ENTRY & VALIDATION"]
        direction TB
        GATE["_require_enabled()<br/>━━━━━━━━━━<br/>Kitchen gate check"]
        EXEC_CHK{"executor<br/>configured?"}
        CTX_SETUP["Resolve config + paths<br/>━━━━━━━━━━<br/>report_dir, report_path,<br/>skill_command, write_spec"]
    end

    subgraph Routing ["SEVERITY ROUTING"]
        direction TB
        SEV{"severity?"}
    end

    subgraph BlockingPath ["BLOCKING PATH"]
        direction TB
        B_AWAIT["await _run_report_session()<br/>━━━━━━━━━━<br/>Caller waits for full result"]
        B_NOTIFY{"success?"}
    end

    subgraph NonBlockingPath ["● NON-BLOCKING PATH"]
        direction TB
        NB_STATUS["Write status.json<br/>━━━━━━━━━━<br/>{status: 'pending',<br/> dispatched_at: ...}"]
        NB_SUBMIT["● tool_ctx.background.submit()<br/>━━━━━━━━━━<br/>Schedules coroutine as supervised Task<br/>status_path, label='report_bug'"]
        NB_RETURN["Return immediately<br/>━━━━━━━━━━<br/>{success: true,<br/> status: 'dispatched',<br/> report_path: ...}"]
    end

    subgraph SessionExec ["_run_report_session() — ASYNC COROUTINE"]
        direction TB
        EXEC_RUN["executor.run(skill_command, cwd)<br/>━━━━━━━━━━<br/>Headless Claude session"]
        READ_RPT["Read report.md + parse fingerprint<br/>━━━━━━━━━━<br/>Extract dedup fingerprint"]
        GH_CHK{"github_filing<br/>+ has_token?"}
        GH_FILE["await _file_or_update_github_issue()"]
        SKIP_GH["github = {skipped: True, reason: no_token}"]
    end

    subgraph GitHubFiling ["● _file_or_update_github_issue()"]
        direction TB
        CFG_CHK{"default_repo<br/>configured?"}
        SEARCH["search_issues(fingerprint)"]
        DUP_CHK{"duplicate<br/>found?"}
        DUP_BODY{"error_context<br/>already in body?"}
        ADD_CMT["add_comment()"]
        CRT_ISS["create_issue()"]
        SKIP_CFG["Return {skipped: True}"]
        GH_OUT["Return result dict"]
    end

    START --> GATE
    GATE -->|"disabled"| DONE_GATE
    GATE -->|"enabled"| EXEC_CHK
    EXEC_CHK -->|"no"| DONE_GATE
    EXEC_CHK -->|"yes"| CTX_SETUP
    CTX_SETUP --> SEV
    SEV -->|"blocking"| B_AWAIT
    SEV -->|"non_blocking"| NB_STATUS
    B_AWAIT --> B_NOTIFY
    B_NOTIFY -->|"yes"| DONE_BLOCK
    B_NOTIFY -->|"no"| DONE_BLOCK
    NB_STATUS --> NB_SUBMIT
    NB_SUBMIT --> NB_RETURN
    NB_RETURN --> DONE_NBCK
    NB_SUBMIT -.->|"async (background)"| EXEC_RUN
    EXEC_RUN --> READ_RPT
    READ_RPT --> GH_CHK
    GH_CHK -->|"yes"| GH_FILE
    GH_CHK -->|"no"| SKIP_GH
    GH_FILE --> CFG_CHK
    CFG_CHK -->|"not set"| SKIP_CFG
    CFG_CHK -->|"set"| SEARCH
    SEARCH --> DUP_CHK
    DUP_CHK -->|"yes"| DUP_BODY
    DUP_CHK -->|"no"| CRT_ISS
    DUP_BODY -->|"already present"| GH_OUT
    DUP_BODY -->|"new occurrence"| ADD_CMT
    ADD_CMT --> GH_OUT
    CRT_ISS --> GH_OUT
    SKIP_CFG --> GH_OUT

    class START terminal;
    class DONE_BLOCK,DONE_NBCK,DONE_GATE terminal;
    class GATE detector;
    class EXEC_CHK stateNode;
    class CTX_SETUP phase;
    class SEV stateNode;
    class B_AWAIT phase;
    class B_NOTIFY stateNode;
    class NB_STATUS output;
    class NB_SUBMIT newComponent;
    class NB_RETURN output;
    class EXEC_RUN handler;
    class READ_RPT phase;
    class GH_CHK stateNode;
    class GH_FILE handler;
    class SKIP_GH output;
    class CFG_CHK stateNode;
    class SEARCH handler;
    class DUP_CHK,DUP_BODY stateNode;
    class ADD_CMT,CRT_ISS handler;
    class GH_OUT,SKIP_CFG output;
```

Closes #480

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260322-120142-276410/temp/rectify/rectify_non_blocking_dispatch_immunity_2026-03-22_122500_part_b.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

Part A addressed the execution engine: when a skill writes a file but
omits the structured output token, the system can now recover using tool
call evidence (`_synthesize_from_write_artifacts`) and promotes the
result to `RETRIABLE(CONTRACT_RECOVERY)` instead of abandoning with a
terminal failure. Part B addresses the source: SKILL.md instruction
quality across 20+ path-capture skills. Two compounding defects caused
models to intermittently omit the structured output token — late
instruction positioning (token requirement only in `## Output`, not `##
Critical Constraints`) and a relative/absolute path contradiction
between the save instruction and the contract regex. This PR establishes
the "Concrete Token Instruction" canonical pattern in Critical
Constraints for every affected skill and adds a static CI test that
prevents regression as new skills are added. Together, Part A (recovery
when the model still fails) and Part B (reduced failure rate from
improved instructions) provide defense-in-depth for structured output
compliance.
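
The static CI test for instruction placement can be sketched as a sectional regex check: extract the `## Critical Constraints` body and require a path-token assignment inside it. The token names and helper below are illustrative assumptions; the real test would derive its token list from `skill_contracts.yaml` and scan every path-capture SKILL.md.

```python
import re

# Hypothetical token names; the actual list would come from skill_contracts.yaml.
PATH_TOKENS = ("plan_path", "investigation_path", "diagram_path")

def critical_constraints_has_token(skill_md: str) -> bool:
    """True if the '## Critical Constraints' section contains a
    structured-output token assignment like `plan_path = /abs/...`."""
    # Capture the section body up to the next H2 heading (or end of file).
    m = re.search(
        r"^## Critical Constraints\n(.*?)(?=^## |\Z)",
        skill_md,
        flags=re.M | re.S,
    )
    if not m:
        return False
    section = m.group(1)
    # An absolute-path assignment avoids the relative/absolute contradiction.
    return any(re.search(rf"{tok}\s*=\s*/", section) for tok in PATH_TOKENS)
```

A token mentioned only under `## Output` would fail this check, which is exactly the late-positioning defect the PR removes.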

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([run_skill invoked])
    SUCCEEDED([SkillResult: SUCCEEDED])
    RETRIABLE([SkillResult: RETRIABLE])
    FAILED([SkillResult: FAILED])

    subgraph Parsing ["Phase 1 — NDJSON Parsing"]
        direction TB
        Parse["● parse_session_result<br/>━━━━━━━━━━<br/>Scan stdout NDJSON<br/>Accumulate tool_uses + messages"]
        CSR["ClaudeSessionResult<br/>━━━━━━━━━━<br/>result, subtype, is_error<br/>tool_uses, assistant_messages<br/>write_call_count"]
    end

    subgraph Recovery ["Phase 2 — Recovery Chain"]
        direction TB
        RecA["Recovery A<br/>━━━━━━━━━━<br/>_recover_from_separate_marker<br/>Standalone %%ORDER_UP%% → join messages"]
        RecB["● Recovery B<br/>━━━━━━━━━━<br/>_recover_block_from_assistant_messages<br/>Channel confirmed + patterns missing?<br/>Scan assistant_messages for tokens"]
        RecC["● Recovery C (NEW)<br/>━━━━━━━━━━<br/>_synthesize_from_write_artifacts<br/>write_count≥1 + patterns still missing?<br/>Synthesize token from Write tool_use file_path"]
    end

    subgraph Outcome ["Phase 3 — Outcome Computation"]
        direction TB
        CompS["● _compute_success<br/>━━━━━━━━━━<br/>CHANNEL_B bypass gate<br/>TerminationReason dispatch<br/>_check_session_content"]
        CompR["_compute_retry<br/>━━━━━━━━━━<br/>context_exhausted → RESUME<br/>kill_anomaly → RESUME<br/>marker absent → EARLY_STOP"]
        ContradictionGuard{"Contradiction<br/>Guard<br/>success ∧ retry?"}
        DeadEnd{"Dead-End<br/>Guard<br/>¬success ∧ ¬retry<br/>+ channel confirmed?"}
        ContentEval["● _evaluate_content_state<br/>━━━━━━━━━━<br/>COMPLETE / ABSENT<br/>CONTRACT_VIOLATION / SESSION_ERROR"]
    end

    subgraph PostProcess ["Phase 4 — Post-Processing"]
        direction TB
        NormSub["● _normalize_subtype<br/>━━━━━━━━━━<br/>Resolve CLI vs adjudicated contradiction<br/>→ adjudicated_failure / empty_result / etc."]
        BudgetG1["_apply_budget_guard (pass 1)<br/>━━━━━━━━━━<br/>Consecutive failures > max?<br/>Override needs_retry=False"]
        CRGate["● CONTRACT_RECOVERY gate<br/>━━━━━━━━━━<br/>adjudicated_failure + write_count≥1?<br/>Promote to RETRIABLE(CONTRACT_RECOVERY)"]
        BudgetG2["_apply_budget_guard (pass 2)<br/>━━━━━━━━━━<br/>Cap CONTRACT_RECOVERY retries<br/>→ BUDGET_EXHAUSTED"]
        ZeroWrite["Zero-Write Gate<br/>━━━━━━━━━━<br/>success + write=0 + expected?<br/>Demote to RETRIABLE(ZERO_WRITES)"]
    end

    %% MAIN FLOW %%
    START --> Parse
    Parse --> CSR
    CSR --> RecA
    RecA -->|"completion_marker configured"| RecB
    RecA -->|"no marker config"| RecB
    RecB -->|"channel confirmed + patterns found in messages"| RecC
    RecB -->|"patterns not in messages"| RecC
    RecC -->|"write evidence + path-token patterns → synthesize tokens"| CompS
    RecC -->|"no write evidence or non-path patterns"| CompS
    CompS --> CompR
    CompR --> ContradictionGuard
    ContradictionGuard -->|"success=True AND retry=True<br/>demote success"| DeadEnd
    ContradictionGuard -->|"no contradiction"| DeadEnd
    DeadEnd -->|"¬success ∧ ¬retry ∧ channel confirmed"| ContentEval
    DeadEnd -->|"otherwise"| NormSub
    ContentEval -->|"ABSENT → DRAIN_RACE<br/>promote to RETRIABLE"| NormSub
    ContentEval -->|"CONTRACT_VIOLATION<br/>SESSION_ERROR → FAILED"| NormSub
    NormSub --> BudgetG1
    BudgetG1 -->|"needs_retry=True → budget check"| CRGate
    BudgetG1 -->|"budget exhausted → BUDGET_EXHAUSTED"| FAILED
    CRGate -->|"adjudicated_failure + write_count≥1<br/>→ needs_retry=True, CONTRACT_RECOVERY"| BudgetG2
    CRGate -->|"conditions not met"| ZeroWrite
    BudgetG2 -->|"budget not exhausted"| ZeroWrite
    BudgetG2 -->|"budget exhausted"| FAILED
    ZeroWrite -->|"success + write=0 + expected"| RETRIABLE
    ZeroWrite -->|"all gates passed"| SUCCEEDED
    ZeroWrite -->|"success=False, no retry"| FAILED
    ZeroWrite -->|"needs_retry=True"| RETRIABLE

    %% CLASS ASSIGNMENTS %%
    class START terminal;
    class SUCCEEDED,RETRIABLE,FAILED terminal;
    class Parse,RecA handler;
    class CSR stateNode;
    class RecB,RecC newComponent;
    class CompS,CompR phase;
    class ContradictionGuard,DeadEnd,ContentEval detector;
    class NormSub,BudgetG1,BudgetG2,ZeroWrite handler;
    class CRGate newComponent;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start and result states (SUCCEEDED / RETRIABLE / FAILED) |
| Orange | Handler | Processing nodes (parse, normalize, budget guard) |
| Teal | State | ClaudeSessionResult data container |
| Purple | Phase | Outcome computation nodes |
| Green | New/Modified | Nodes changed in this PR (● Recovery B, ● Recovery C, ● CONTRACT_RECOVERY gate) |
| Red | Detector | Validation and guard nodes (dead-end guard, content evaluation) |

### Error Resilience Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;

    T_COMPLETE([SUCCEEDED])
    T_RETRIABLE([RETRIABLE])
    T_FAILED([FAILED — terminal])

    subgraph Prevention ["PREVENTION — Part B: SKILL.md Instruction Hardening"]
        direction TB
        SKILLMd["● SKILL.md<br/>━━━━━━━━━━<br/>Token instruction moved to<br/>Critical Constraints section<br/>Absolute path example given"]
        StaticTest["● test_skill_output_compliance.py<br/>━━━━━━━━━━<br/>Static regex: token instruction<br/>must appear inside ## Critical Constraints<br/>Catches regression as new skills added"]
        Contracts["● skill_contracts.yaml<br/>━━━━━━━━━━<br/>setup-project contract removed<br/>(no emit instruction existed)"]
    end

    subgraph Detection ["DETECTION — Contract Violation Recognition"]
        direction TB
        PatternCheck["● _check_expected_patterns<br/>━━━━━━━━━━<br/>Normalize bold markdown<br/>AND-match all regex patterns<br/>vs session.result"]
        ContentEval["● _evaluate_content_state<br/>━━━━━━━━━━<br/>COMPLETE / ABSENT<br/>CONTRACT_VIOLATION / SESSION_ERROR"]
        DeadEnd{"Dead-End Guard<br/>━━━━━━━━━━<br/>¬success ∧ ¬retry<br/>+ channel confirmed?"}
    end

    subgraph RecoveryChain ["RECOVERY CHAIN — Three-Stage Fallback"]
        direction TB
        RecA["Recovery A: Separate Marker<br/>━━━━━━━━━━<br/>Standalone %%ORDER_UP%% message<br/>→ join assistant_messages"]
        RecB["● Recovery B: Assistant Messages<br/>━━━━━━━━━━<br/>Channel confirmed + patterns missing<br/>→ scan all assistant_messages<br/>(drain-race artifact fix)"]
        RecC["● Recovery C: Artifact Synthesis (NEW)<br/>━━━━━━━━━━<br/>write_count≥1 + patterns still absent<br/>→ scan tool_uses for Write file_path<br/>→ synthesize token = /abs/path"]
    end

    subgraph CircuitBreakers ["CIRCUIT BREAKERS — Retry Caps"]
        direction TB
        BudgetG1["_apply_budget_guard (pass 1)<br/>━━━━━━━━━━<br/>consecutive failures > max_consecutive_retries<br/>→ BUDGET_EXHAUSTED, needs_retry=False"]
        CRGate["● CONTRACT_RECOVERY Gate (NEW)<br/>━━━━━━━━━━<br/>adjudicated_failure + write_count≥1<br/>→ promote to RETRIABLE(CONTRACT_RECOVERY)"]
        BudgetG2["_apply_budget_guard (pass 2)<br/>━━━━━━━━━━<br/>Caps CONTRACT_RECOVERY retries<br/>(prevents infinite loop)"]
        DrainRace["Dead-End Guard → DRAIN_RACE<br/>━━━━━━━━━━<br/>ABSENT state: channel confirmed completion<br/>but result empty → transient → retry"]
    end

    %% PREVENTION → DETECTION %%
    SKILLMd -->|"reduced omission rate"| PatternCheck
    StaticTest -->|"regression guard"| SKILLMd
    Contracts -->|"removes false positives"| PatternCheck

    %% DETECTION %%
    PatternCheck -->|"patterns match"| T_COMPLETE
    PatternCheck -->|"patterns absent"| ContentEval
    ContentEval --> DeadEnd

    %% RECOVERY CHAIN (pre-detection) %%
    RecA -->|"token found in messages"| PatternCheck
    RecA -->|"not found"| RecB
    RecB -->|"token found in assistant_messages"| PatternCheck
    RecB -->|"not found"| RecC
    RecC -->|"synthesized token → updated result"| PatternCheck
    RecC -->|"no write evidence"| PatternCheck

    %% DEAD-END ROUTING %%
    DeadEnd -->|"ABSENT → drain-race"| DrainRace
    DeadEnd -->|"CONTRACT_VIOLATION"| BudgetG1
    DeadEnd -->|"SESSION_ERROR"| T_FAILED

    %% CIRCUIT BREAKERS %%
    DrainRace -->|"RETRIABLE(DRAIN_RACE)"| T_RETRIABLE
    BudgetG1 -->|"budget not exhausted"| CRGate
    BudgetG1 -->|"budget exhausted"| T_FAILED
    CRGate -->|"write_count≥1 → RETRIABLE(CONTRACT_RECOVERY)"| BudgetG2
    CRGate -->|"no write evidence → terminal"| T_FAILED
    BudgetG2 -->|"budget not exhausted"| T_RETRIABLE
    BudgetG2 -->|"budget exhausted → BUDGET_EXHAUSTED"| T_FAILED

    %% CLASS ASSIGNMENTS %%
    class T_COMPLETE,T_RETRIABLE,T_FAILED terminal;
    class SKILLMd,Contracts newComponent;
    class StaticTest newComponent;
    class PatternCheck,ContentEval detector;
    class DeadEnd stateNode;
    class RecA handler;
    class RecB,RecC newComponent;
    class BudgetG1,BudgetG2 phase;
    class CRGate newComponent;
    class DrainRace output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Final states: SUCCEEDED, RETRIABLE, FAILED |
| Green | New/Modified | Components changed in this PR |
| Red | Detector | Pattern matching and content state evaluation |
| Teal | State | Dead-end guard decision node |
| Orange | Handler | Recovery A (existing) |
| Purple | Phase | Budget guard passes |
| Dark Teal | Recovery | Drain-race promotion |

### State Lifecycle Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;

    T_OK([Contract Satisfied])
    T_RETRY([Contract: Retry Eligible])
    T_VIOLATED([Contract Violated — Terminal])

    subgraph ContractDef ["CONTRACT DEFINITION LAYER"]
        direction LR
        SkillContracts["● skill_contracts.yaml<br/>━━━━━━━━━━<br/>expected_output_patterns<br/>completion_marker<br/>write_behavior<br/>setup-project contract removed"]
        SkillMD["● SKILL.md<br/>━━━━━━━━━━<br/>Critical Constraints section<br/>Concrete token instruction<br/>Absolute path example<br/>(20+ skills updated)"]
        StaticTest["● test_skill_output_compliance.py<br/>━━━━━━━━━━<br/>CI gate: token instruction<br/>must be in ## Critical Constraints<br/>Regex: r'## Critical Constraints.*plan_path\\s*='<br/>Covers all path-capture skills"]
    end

    subgraph ModelExecution ["MODEL EXECUTION — Headless Session"]
        direction TB
        WriteArtifact["Model writes artifact<br/>━━━━━━━━━━<br/>Write tool call<br/>file_path → disk<br/>write_call_count += 1"]
        EmitToken{"● Emits structured token?<br/>━━━━━━━━━━<br/>plan_path = /abs/path<br/>or investigation_path = ...<br/>or diagram_path = ..."}
    end

    subgraph RuntimeRecovery ["RUNTIME RECOVERY — Three-Stage Chain"]
        direction TB
        RecovB["● Recovery B<br/>━━━━━━━━━━<br/>Scan assistant_messages<br/>for token in JSONL stream<br/>(drain-race: stdout not flushed)"]
        RecovC["● Recovery C (NEW)<br/>━━━━━━━━━━<br/>Scan tool_uses for Write.file_path<br/>Synthesize: token_name = file_path<br/>Only for path-capture patterns"]
        Synthesized["● Synthesized contract token<br/>━━━━━━━━━━<br/>plan_path = /abs/path/plan.md<br/>(from Write tool_use metadata)<br/>Prepended to session.result"]
    end

    subgraph ContentStateEval ["CONTENT STATE EVALUATION — session.py"]
        direction TB
        MarkerCheck{"Completion marker<br/>━━━━━━━━━━<br/>%%ORDER_UP%% present<br/>in session.result?"}
        PatternCheck{"● Patterns match?<br/>━━━━━━━━━━<br/>_check_expected_patterns<br/>AND-match all regexes<br/>normalize bold markup"}
        StateDecide["● _evaluate_content_state<br/>━━━━━━━━━━<br/>COMPLETE / ABSENT<br/>CONTRACT_VIOLATION<br/>SESSION_ERROR"]
    end

    subgraph ContractGates ["CONTRACT GATES — Dead-End Guard"]
        direction TB
        AbsentGate{"ContentState<br/>ABSENT?<br/>━━━━━━━━━━<br/>result empty or<br/>marker missing"}
        CVGate{"ContentState<br/>CONTRACT_VIOLATION?<br/>━━━━━━━━━━<br/>marker present<br/>patterns failed"}
        WriteEvidence{"● Write evidence?<br/>━━━━━━━━━━<br/>write_call_count ≥ 1<br/>AND adjudicated_failure"}
    end

    %% CONTRACT DEFINITION FLOW %%
    StaticTest -->|"CI enforces"| SkillMD
    SkillMD -->|"instructs model"| EmitToken
    SkillContracts -->|"defines patterns"| PatternCheck

    %% MODEL EXECUTION %%
    WriteArtifact --> EmitToken
    EmitToken -->|"YES — token emitted"| PatternCheck
    EmitToken -->|"NO — token omitted"| RecovB

    %% RECOVERY %%
    RecovB -->|"found in messages"| PatternCheck
    RecovB -->|"not found"| RecovC
    RecovC -->|"Write.file_path found"| Synthesized
    RecovC -->|"no write evidence"| PatternCheck
    Synthesized --> PatternCheck

    %% CONTENT STATE EVALUATION %%
    PatternCheck -->|"all patterns match"| MarkerCheck
    PatternCheck -->|"patterns absent"| StateDecide
    MarkerCheck -->|"present"| T_OK
    MarkerCheck -->|"absent"| StateDecide
    StateDecide --> AbsentGate
    AbsentGate -->|"ABSENT"| T_RETRY
    AbsentGate -->|"not ABSENT"| CVGate
    CVGate -->|"SESSION_ERROR"| T_VIOLATED
    CVGate -->|"CONTRACT_VIOLATION"| WriteEvidence
    WriteEvidence -->|"write evidence present<br/>CONTRACT_RECOVERY gate"| T_RETRY
    WriteEvidence -->|"no write evidence<br/>terminal violation"| T_VIOLATED

    %% OUTCOMES %%
    T_OK -->|"subtype=success"| T_OK
    T_RETRY -->|"DRAIN_RACE or CONTRACT_RECOVERY<br/>budget-capped by _apply_budget_guard"| T_RETRY

    %% CLASS ASSIGNMENTS %%
    class T_OK,T_RETRY,T_VIOLATED terminal;
    class SkillContracts,SkillMD newComponent;
    class StaticTest newComponent;
    class WriteArtifact handler;
    class EmitToken stateNode;
    class RecovB,RecovC,Synthesized newComponent;
    class PatternCheck,MarkerCheck detector;
    class StateDecide phase;
    class AbsentGate,CVGate,WriteEvidence stateNode;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Contract outcomes: Satisfied, Retry Eligible, Violated |
| Green | New/Modified | Components changed in this PR |
| Red | Detector | Pattern matching gates and marker checks |
| Purple | Phase | ContentState evaluation dispatcher |
| Teal | State | Decision nodes (marker check, content state, write evidence) |
| Orange | Handler | Model Write tool call execution |

Closes #477

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260322-120141-753065/temp/rectify/rectify_artifact-aware-contract-recovery_2026-03-22_120141_part_b.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

`format_ingredients_table` (the GFM/MCP rendering path for recipe ingredients)
computes column widths via raw `max(len(...))` with floors but no ceilings. The
result: a 220-char `run_mode` description in `implementation.yaml` forces the
GFM table into rows over 220 columns wide, bloating every MCP response that
loads that recipe.
The immunity solution: extend `core/_terminal_table.py` with
`_render_gfm_table`, accepting
the same `TerminalColumn` specs already used by the terminal path. Both
rendering paths now
share the same L0 primitive and the same column-spec source of truth.
Width capping becomes
structurally implicit — any new GFM renderer must declare
`TerminalColumn` specs and
automatically inherits the cap.
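
A capped GFM renderer driven by column specs can be sketched as below. The field names on `TerminalColumn` and the truncation style (ellipsis) are assumptions for illustration; the real `_render_gfm_table` shares its specs with the terminal path rather than redefining them.

```python
from typing import NamedTuple

class TerminalColumn(NamedTuple):
    # Assumed shape; the real spec lives in core/_terminal_table.py.
    header: str
    min_width: int
    max_width: int

def render_gfm_table(columns: list[TerminalColumn], rows: list[list[str]]) -> str:
    """Render a GFM table, truncating any cell past its column's max_width."""
    def clip(text: str, col: TerminalColumn) -> str:
        return text if len(text) <= col.max_width else text[: col.max_width - 1] + "…"

    # Width = widest clipped content, floored at min_width, capped at max_width.
    widths = []
    for i, col in enumerate(columns):
        longest = max([len(col.header)] + [len(clip(r[i], col)) for r in rows])
        widths.append(max(col.min_width, min(longest, col.max_width)))

    def line(cells: list[str]) -> str:
        return "| " + " | ".join(c.ljust(w) for c, w in zip(cells, widths)) + " |"

    out = [
        line([c.header for c in columns]),
        "| " + " | ".join("-" * w for w in widths) + " |",
    ]
    out += [line([clip(cell, col) for cell, col in zip(row, columns)]) for row in rows]
    return "\n".join(out)
```

With this shape, a 220-char description can never widen a row past its declared cap, and any new renderer that takes `TerminalColumn` specs inherits the bound for free.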

## Architecture Impact

### Module Dependency Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 70, 'curve': 'basis'}}}%%
graph TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph L0 ["L0 — core/ (stdlib only, zero autoskillit imports)"]
        direction LR
        TT["● core/_terminal_table.py<br/>━━━━━━━━━━<br/>TerminalColumn (NamedTuple)<br/>_render_terminal_table()<br/>★ _render_gfm_table() NEW"]
        INIT["● core/__init__.py<br/>━━━━━━━━━━<br/>Re-exports TerminalColumn<br/>Re-exports _render_terminal_table<br/>★ Re-exports _render_gfm_table NEW"]
    end

    subgraph L1 ["L1 — pipeline/"]
        direction LR
        TFM["pipeline/telemetry_fmt.py<br/>━━━━━━━━━━<br/>TelemetryFormatter<br/>imports: TerminalColumn<br/>imports: _render_terminal_table"]
    end

    subgraph L2 ["L2 — recipe/"]
        direction LR
        API["● recipe/_api.py<br/>━━━━━━━━━━<br/>format_ingredients_table()<br/>_GFM_INGREDIENT_COLUMNS<br/>imports: TerminalColumn<br/>★ imports: _render_gfm_table NEW"]
    end

    subgraph L3 ["L3 — cli/"]
        direction LR
        ANSI["cli/_ansi.py<br/>━━━━━━━━━━<br/>ingredients_to_terminal()<br/>imports: TerminalColumn only<br/>(inline _render_terminal_table)"]
        SHIM["cli/_terminal_table.py<br/>━━━━━━━━━━<br/>Re-export shim<br/>TerminalColumn<br/>_render_terminal_table"]
    end

    subgraph TESTS ["Tests"]
        direction LR
        GUARD["★ tests/arch/test_gfm_rendering_guard.py<br/>━━━━━━━━━━<br/>Arch guard: asserts delegation<br/>and bounded column specs"]
        TAPI["● tests/recipe/test_api.py<br/>━━━━━━━━━━<br/>GFM width cap behavioral tests<br/>Integration test vs real recipe"]
    end

    TT -->|"defines"| INIT
    TFM -->|"imports TerminalColumn<br/>_render_terminal_table"| INIT
    API -->|"imports TerminalColumn<br/>★ + _render_gfm_table"| INIT
    ANSI -->|"imports TerminalColumn"| INIT
    SHIM -->|"imports TerminalColumn<br/>_render_terminal_table"| INIT
    GUARD -->|"asserts delegation<br/>+ bounded max_width"| API
    GUARD -->|"verifies export surface"| INIT
    TAPI -->|"behavioral tests<br/>width cap + truncation"| API

    class TT,INIT stateNode;
    class TFM handler;
    class API phase;
    class ANSI,SHIM cli;
    class GUARD newComponent;
    class TAPI output;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Teal | Core (L0) | `core/_terminal_table.py` and `core/__init__.py` — high fan-in primitives |
| Orange | Pipeline (L1) | `pipeline/telemetry_fmt.py` — unchanged consumer |
| Purple | Recipe (L2) | `recipe/_api.py` — key change: now imports `_render_gfm_table` |
| Dark Blue | CLI (L3) | `cli/_ansi.py` and re-export shim — unchanged consumers |
| Green (★) | New | `test_gfm_rendering_guard.py` — new arch guard |
| Dark Teal | Modified Test | `test_api.py` — new behavioral tests added |

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 55, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    START([Caller: load_and_validate<br/>or MCP tools])

    subgraph FIT ["● recipe/_api.py — format_ingredients_table()"]
        direction TB
        G1{"ingredients<br/>non-empty?"}
        BIR["build_ingredient_rows()<br/>━━━━━━━━━━<br/>Full-length tuples<br/>(name, desc, default)<br/>No truncation at data layer"]
        G2{"rows<br/>non-empty?"}
        DELEGATE["● delegate to _render_gfm_table()<br/>━━━━━━━━━━<br/>_GFM_INGREDIENT_COLUMNS spec:<br/>Name: max_width=30<br/>Description: max_width=60<br/>Default: max_width=20"]
    end

    subgraph RGT ["● core/_terminal_table.py — _render_gfm_table()"]
        direction TB
        WIDTHS["● Compute column widths<br/>━━━━━━━━━━<br/>col_w = min(<br/>  max(cell_lengths, label_width),<br/>  max_width  ← BOUNDED"]
        HEADER["Render header row<br/>━━━━━━━━━━<br/>| Name | Description | Default |"]
        SEP["Render separator row<br/>━━━━━━━━━━<br/>| ---: | :--- | ---: |<br/>(alignment from TerminalColumn.align)"]
        ROWLOOP{"For each<br/>data row"}
        TRUNC{"cell length<br/>> col_w?"}
        PAD["Pad cell to col_w<br/>━━━━━━━━━━<br/>f'{cell:<{col_w}}'<br/>or right-aligned"]
        ELLIPSIS["Truncate + append '…'<br/>━━━━━━━━━━<br/>cell[:col_w-1] + '…'"]
        EMIT["Emit row:<br/>| cell | cell | cell |"]
        JOIN["Join all rows with newline<br/>━━━━━━━━━━<br/>Return GFM table string"]
    end

    ELIMINATED["✗ ELIMINATED: inline ad-hoc width math<br/>━━━━━━━━━━<br/>dw = max(len(r[1])...) — no ceiling<br/>f'| {desc:<{dw}} |' — uncapped padding<br/>220-char description → 220-wide column"]

    NONE([Return None])
    RESULT([Return GFM table string<br/>All rows ≤ 120 chars wide])

    START --> G1
    G1 -->|"empty"| NONE
    G1 -->|"non-empty"| BIR
    BIR --> G2
    G2 -->|"empty"| NONE
    G2 -->|"non-empty"| DELEGATE
    DELEGATE --> WIDTHS
    WIDTHS --> HEADER
    HEADER --> SEP
    SEP --> ROWLOOP
    ROWLOOP -->|"next row"| TRUNC
    TRUNC -->|"yes"| ELLIPSIS
    TRUNC -->|"no"| PAD
    ELLIPSIS --> EMIT
    PAD --> EMIT
    EMIT -->|"more rows"| ROWLOOP
    EMIT -->|"done"| JOIN
    JOIN --> RESULT
    ELIMINATED -.->|"replaced by DELEGATE"| DELEGATE

    class START terminal;
    class NONE terminal;
    class RESULT terminal;
    class G1,G2 stateNode;
    class BIR handler;
    class DELEGATE phase;
    class WIDTHS,HEADER,SEP newComponent;
    class ROWLOOP stateNode;
    class TRUNC stateNode;
    class PAD newComponent;
    class ELLIPSIS newComponent;
    class EMIT newComponent;
    class JOIN newComponent;
    class ELIMINATED gap;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Entry caller, return paths |
| Teal | Decision | Empty guards, per-row loop, truncation decision |
| Orange | Handler | `build_ingredient_rows` — unchanged data producer |
| Purple | Delegate | Delegation call to `_render_gfm_table` via column spec |
| Green (●) | New Logic | Width computation, truncation, table emission — all now in L0 primitive |
| Amber | Eliminated | Inline ad-hoc width math removed by this PR |

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260322-173128-741993/.autoskillit/temp/rectify/rectify_format_ingredients_table_gfm_width_cap_2026-03-22_000000.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit
## Summary

The clone isolation contract (`origin=file://`, `upstream=real-url`) is
enforced at the Python execution layer (`remote_resolver.py`,
`REMOTE_PRECEDENCE = ("upstream", "origin")`) but no mechanism
propagated this contract to the bash layer of SKILL.md files — skills
could freely write `git fetch origin` or `git rebase origin/{branch}`,
silently operating against the stale `file://` clone path rather than
the real GitHub remote.

Part A establishes the **architectural immunity guard**: a new semantic
rule `hardcoded-origin-remote` in `rules_skill_content.py` that fires
during `validate_recipe` / `run_semantic_rules` whenever any
recipe-referenced skill uses a literal `origin` remote name in a
bash-level git command that contacts a remote. Part B (landed in the
same branch) fixes the immediate violations in
`resolve-merge-conflicts`, `retry-worktree`, and `implement-worktree`
SKILL.md files by introducing the `REMOTE=$(git remote get-url upstream
2>/dev/null && echo upstream || echo origin)` pattern.
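
The detection side of Part A can be approximated with two regexes mirroring the `_GIT_REMOTE_COMMAND_RE` and `_LITERAL_ORIGIN_RE` names from the diagram. A rough sketch only; the real rule's patterns, skill resolution, and `RuleFinding` shape differ:

```python
import re

# Git subcommands that contact a remote (assumed subset from the rule description)
_GIT_REMOTE_COMMAND_RE = re.compile(
    r"\bgit\s+(fetch|rebase|log|show|rev-parse)\b(?P<rest>[^\n]*)"
)
# A literal 'origin' token, excluding $REMOTE-style indirection and longer words
_LITERAL_ORIGIN_RE = re.compile(r"(?<![\$\w])origin\b")

def find_hardcoded_origin(bash_block: str) -> list[str]:
    """Return the offending command lines that use a literal 'origin' remote."""
    findings = []
    for m in _GIT_REMOTE_COMMAND_RE.finditer(bash_block):
        if _LITERAL_ORIGIN_RE.search(m.group("rest")):
            findings.append(m.group(0).strip())
    return findings
```

The `REMOTE=$(git remote get-url upstream ...)` fallback pattern passes cleanly, since `git remote get-url` is not a remote-contacting command and `$REMOTE` is not a literal `origin`.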

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    START([START: validate_recipe])
    END_PASS([PASS: no hardcoded-origin findings])
    END_WARN([WARN: RuleFinding emitted])

    subgraph Validation ["● VALIDATION PIPELINE (rules_skill_content.py)"]
        direction TB
        RSR["run_semantic_rules()<br/>━━━━━━━━━━<br/>iterates recipe steps"]
        SKIP1{"step.tool<br/>== run_skill?"}
        RESOLVE["resolve_skill_name()<br/>━━━━━━━━━━<br/>extract skill name<br/>from skill_command"]
        READMD["_resolve_skill_md()<br/>━━━━━━━━━━<br/>locate SKILL.md"]
        EXTRACT["extract_bash_blocks()<br/>━━━━━━━━━━<br/>_skill_placeholder_parser.py"]
        CHKGIT{"_GIT_REMOTE_COMMAND_RE<br/>━━━━━━━━━━<br/>git fetch/rebase/<br/>log/show/rev-parse?"}
        CHKLIT{"_LITERAL_ORIGIN_RE<br/>━━━━━━━━━━<br/>literal 'origin'<br/>(not $REMOTE)?"}
        EMIT["● _check_hardcoded_origin_remote<br/>━━━━━━━━━━<br/>RuleFinding(WARNING)<br/>rule=hardcoded-origin-remote"]
    end

    subgraph Runtime ["● RUNTIME SKILL EXECUTION (SKILL.md bash blocks)"]
        direction TB
        INVOKE["skill invoked<br/>━━━━━━━━━━<br/>resolve-merge-conflicts<br/>retry-worktree<br/>implement-worktree"]
        REMOTE_DETECT{"git remote get-url upstream<br/>━━━━━━━━━━<br/>upstream reachable?"}
        USE_UP["REMOTE=upstream<br/>━━━━━━━━━━<br/>real GitHub URL"]
        USE_OR["REMOTE=origin<br/>━━━━━━━━━━<br/>fallback (non-clone env)"]
        FETCH["git fetch $REMOTE<br/>━━━━━━━━━━<br/>fetches from correct remote"]
        REBASE["git rebase $REMOTE/{base_branch}<br/>━━━━━━━━━━<br/>rebases against live state"]
    end

    START --> RSR
    RSR --> SKIP1
    SKIP1 -->|"yes"| RESOLVE
    SKIP1 -->|"no: skip step"| END_PASS
    RESOLVE --> READMD
    READMD --> EXTRACT
    EXTRACT --> CHKGIT
    CHKGIT -->|"no git remote cmd"| END_PASS
    CHKGIT -->|"git remote cmd found"| CHKLIT
    CHKLIT -->|"no literal origin"| END_PASS
    CHKLIT -->|"literal origin detected"| EMIT
    EMIT --> END_WARN

    INVOKE --> REMOTE_DETECT
    REMOTE_DETECT -->|"yes"| USE_UP
    REMOTE_DETECT -->|"no"| USE_OR
    USE_UP --> FETCH
    USE_OR --> FETCH
    FETCH --> REBASE

    %% CLASS ASSIGNMENTS %%
    class START,END_PASS,END_WARN terminal;
    class RSR,RESOLVE,READMD,EXTRACT phase;
    class CHKGIT,CHKLIT,REMOTE_DETECT stateNode;
    class EMIT detector;
    class INVOKE handler;
    class USE_UP,USE_OR,FETCH,REBASE newComponent;
    class SKIP1 stateNode;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start, pass, and warn terminal states |
| Purple | Phase | Validation pipeline processing steps |
| Teal | Decision | Guard conditions and remote detection |
| Red | Detector | Rule finding emission on violation |
| Orange | Handler | Skill invocation entry point |
| Green | New/Modified | Updated remote resolution pattern in SKILL.md files |

Closes #487

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260322-173128-278959/.autoskillit/temp/rectify/rectify_hardcoded-origin-remote_2026-03-22_180100_part_a.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Trecek and others added 15 commits March 23, 2026 05:15
…Title (#492)

## Summary

The `open-pr` skill's Step 2 extracts the first `# ` heading from the
plan file verbatim and uses it as the PR title. Because `make-plan` and
`rectify` mandate that multi-part plan files include `— PART X ONLY` in
their heading (e.g., `# Implementation Plan: Foo — PART A ONLY`), this
internal scope marker leaks directly into the PR title — making PRs
appear partial when all parts are implemented.

The fix is two-part: (1) update the bash block at Step 2 to pipe through
a `sed` that strips the suffix from the extracted heading, and (2)
update the Step 2 prose to explicitly instruct stripping the suffix
before passing headings to the multi-plan subagent synthesis path. A
contract test is added to guard against regression.
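
The same stripping logic, expressed in Python for illustration (the skill itself does this in bash via `sed`, and the helper name here is hypothetical):

```python
import re

# Internal scope marker mandated by make-plan/rectify for multi-part plans,
# e.g. "# Implementation Plan: Foo — PART A ONLY"
_PART_SUFFIX_RE = re.compile(r"\s*—\s*PART [A-Z] ONLY$")

def pr_title_from_heading(first_line: str) -> str:
    """Strip the leading '# ' and any trailing '— PART X ONLY' scope marker."""
    title = re.sub(r"^#\s+", "", first_line.strip())
    return _PART_SUFFIX_RE.sub("", title)
```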

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;

    START([START])
    END([END])

    subgraph Extract ["● Step 2: Title Extraction (open-pr/SKILL.md)"]
        direction TB
        HeadExtract["Extract heading<br/>━━━━━━━━━━<br/>head -1 {plan_path}<br/>sed 's/^# //'"]
        StripSuffix["● Strip PART X ONLY suffix<br/>━━━━━━━━━━<br/>sed 's/ *— *PART [A-Z] ONLY$//'<br/>guards against scope-marker leakage"]
    end

    subgraph PlanRoute ["Plan Count Routing"]
        PlanCount{"single or<br/>multiple plans?"}
        SingleUse["Use directly<br/>━━━━━━━━━━<br/>BASE_TITLE = stripped heading"]
        MultiSynth["Subagent synthesis<br/>━━━━━━━━━━<br/>sonnet subagent<br/>synthesizes clean title"]
    end

    subgraph PrefixApply ["Step 2b: run_name Prefix"]
        RunNameCheck{"run_name<br/>switch"}
        FeaturePrefix["[FEATURE] BASE_TITLE<br/>━━━━━━━━━━<br/>run_name starts with 'feature'"]
        FixPrefix["[FIX] BASE_TITLE<br/>━━━━━━━━━━<br/>run_name starts with 'fix'"]
        NoPrefix["BASE_TITLE unchanged<br/>━━━━━━━━━━<br/>any other run_name (e.g. 'impl')"]
    end

    PRCreate["gh pr create<br/>━━━━━━━━━━<br/>--title TITLE"]

    ContractTests["● Contract Tests<br/>━━━━━━━━━━<br/>test_part_suffix_stripped_in_bash_block<br/>test_step2_prose_instructs_suffix_stripping"]

    START --> HeadExtract
    HeadExtract --> StripSuffix
    StripSuffix --> PlanCount
    PlanCount -->|"single"| SingleUse
    PlanCount -->|"multiple"| MultiSynth
    SingleUse --> RunNameCheck
    MultiSynth --> RunNameCheck
    RunNameCheck -->|"feature*"| FeaturePrefix
    RunNameCheck -->|"fix*"| FixPrefix
    RunNameCheck -->|"other"| NoPrefix
    FeaturePrefix --> PRCreate
    FixPrefix --> PRCreate
    NoPrefix --> PRCreate
    PRCreate --> END

    ContractTests -.->|"guards"| StripSuffix

    class START,END terminal;
    class HeadExtract handler;
    class StripSuffix newComponent;
    class PlanCount,RunNameCheck detector;
    class SingleUse,MultiSynth phase;
    class FeaturePrefix,FixPrefix,NoPrefix stateNode;
    class PRCreate output;
    class ContractTests newComponent;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | Start and end points |
| Orange | Handler | Heading extraction processing |
| Green | Modified | ● Nodes modified by this PR (suffix strip, contract tests) |
| Red | Detector | Decision points (plan count, run_name switch) |
| Purple | Phase | Synthesis paths (direct use, subagent) |
| Teal | State | Prefix variant state nodes |
| Dark Teal | Output | gh pr create invocation |

Closes #488

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-20260322-203748-209804/temp/make-plan/open_pr_part_suffix_plan_2026-03-22_000000.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

When multiple pipelines run in parallel, each one independently reaches `confirm_cleanup` (an interactive `action: confirm` step) and prompts the user before calling `remove_clone`. This interactive prompt blocks the shared execution context and stalls all sibling pipelines that are still executing.

The fix introduces a **deferred cleanup mode** controlled by a `defer_cleanup` ingredient. When enabled, each individual pipeline skips the interactive cleanup gate and instead registers its clone path and completion status in a shared file-based registry. After all parallel pipelines complete, the orchestrator runs a single batch cleanup phase — one confirmation prompt, one bulk delete of successful clones, and all error clones preserved automatically.

Changes span: a new `workspace/clone_registry.py` module, two new MCP tools (`register_clone_status`, `batch_cleanup_clones`), a new routing utility in `smoke_utils.py`, and updated terminal steps in four recipe YAML files.
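
A minimal sketch of the registry contract: the function names match those listed for `workspace/clone_registry.py`, but the bodies are illustrative (the real module routes writes through `core/io.py`'s `atomic_write()`):

```python
import json
import os
import tempfile
from pathlib import Path

def read_registry(registry_path: Path) -> list[dict]:
    """Read all {path, status} entries; missing file means empty registry."""
    if not registry_path.exists():
        return []
    return json.loads(registry_path.read_text())

def register_clone(registry_path: Path, clone_path: str, status: str) -> None:
    """Append an entry using write-to-temp + rename (atomic on POSIX)."""
    entries = read_registry(registry_path)
    entries.append({"path": clone_path, "status": status})
    fd, tmp = tempfile.mkstemp(dir=registry_path.parent)
    with os.fdopen(fd, "w") as f:
        json.dump(entries, f)
    os.replace(tmp, registry_path)

def cleanup_candidates(entries: list[dict]) -> tuple[list[str], list[str]]:
    """Partition into (deletable successes, preserved error clones)."""
    ok = [e["path"] for e in entries if e["status"] == "success"]
    keep = [e["path"] for e in entries if e["status"] != "success"]
    return ok, keep
```

The partition is what enforces REQ-CLONE-003/004: only success-status clones are ever offered for deletion.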

## Requirements

### Clone Lifecycle Management

- **REQ-CLONE-001:** The system must suppress clone deletion prompts
while any sibling pipeline in the same batch is still executing.
- **REQ-CLONE-002:** The system must defer clone cleanup to a single
batch operation that runs only after all pipelines in the batch have
completed.
- **REQ-CLONE-003:** The system must exclude from cleanup any clone
whose pipeline encountered an error, preserving it for investigation.
- **REQ-CLONE-004:** The system must only offer deletion for clones
whose pipelines completed successfully and without errors.

### Orchestration Coordination

- **REQ-ORCH-001:** The orchestrator must track completion state of all
parallel pipelines before initiating any cleanup phase.
- **REQ-ORCH-002:** The orchestrator must treat clone cleanup as a
distinct terminal phase that cannot overlap with pipeline execution.

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    DONE([done])
    ESC([escalate_stop])

    subgraph IssueRelease ["Issue Release (per-pipeline)"]
        direction LR
        RIS["● release_issue_success<br/>━━━━━━━━━━<br/>release_issue tool<br/>target_branch passed"]
        RIT["● release_issue_timeout<br/>━━━━━━━━━━<br/>release_issue tool<br/>no target_branch"]
        RIF["● release_issue_failure<br/>━━━━━━━━━━<br/>release_issue tool<br/>releases claim"]
    end

    subgraph CleanupRouting ["★ Cleanup Mode Routing (per-pipeline)"]
        direction TB
        CDC{"★ check_defer_cleanup<br/>━━━━━━━━━━<br/>check_cleanup_mode()<br/>success path"}
        CDOF{"★ check_defer_on_failure<br/>━━━━━━━━━━<br/>check_cleanup_mode()<br/>failure path"}
    end

    subgraph ImmediateCleanup ["Immediate Cleanup (defer_cleanup=false)"]
        direction TB
        CC["● confirm_cleanup<br/>━━━━━━━━━━<br/>action: confirm<br/>user gate"]
        DC["delete_clone<br/>━━━━━━━━━━<br/>remove_clone keep=false"]
        CF["cleanup_failure<br/>━━━━━━━━━━<br/>remove_clone keep=true<br/>preserves on error"]
    end

    subgraph DeferredReg ["★ Deferred Registration (defer_cleanup=true)"]
        direction TB
        REG["★ register_success_deferred<br/>━━━━━━━━━━<br/>register_clone_status<br/>status=success"]
        REGE["★ register_error_deferred<br/>━━━━━━━━━━<br/>register_clone_status<br/>status=error"]
        RF[("★ clone-cleanup-registry.json<br/>━━━━━━━━━━<br/>atomic JSON writes<br/>{path, status} entries")]
    end

    subgraph BatchPhase ["★ Batch Cleanup Phase (orchestrator — runs ONCE after all pipelines)"]
        direction TB
        BCC["★ batch_confirm_cleanup<br/>━━━━━━━━━━<br/>action: confirm<br/>single user prompt"]
        BDC["★ batch_delete_clones<br/>━━━━━━━━━━<br/>batch_cleanup_clones tool<br/>reads registry"]
    end

    %% SUCCESS PATH %%
    RIS -->|"on_success / on_failure"| CDC
    RIT -->|"on_success / on_failure"| CDC
    CDC -->|"deferred == 'false'"| CC
    CDC -->|"deferred == 'true'"| REG
    CC -->|"user: yes"| DC
    CC -->|"user: no"| DONE
    DC -->|"on_success / on_failure"| DONE
    REG -->|"on_success / on_failure"| DONE
    REG -->|"writes"| RF

    %% FAILURE PATH %%
    RIF -->|"on_success / on_failure"| CDOF
    CDOF -->|"deferred == 'false'"| CF
    CDOF -->|"deferred == 'true'"| REGE
    CF -->|"on_success / on_failure"| ESC
    REGE -->|"on_success / on_failure"| ESC
    REGE -->|"writes"| RF

    %% BATCH PATH (orchestrator-level) %%
    RF -->|"reads all entries"| BDC
    BCC -->|"user: yes"| BDC
    BCC -->|"user: no"| DONE
    BDC -->|"on_success / on_failure"| DONE

    %% CLASS ASSIGNMENTS %%
    class DONE,ESC terminal;
    class RIS,RIT,RIF handler;
    class DC,CF handler;
    class CC handler;
    class CDC,CDOF stateNode;
    class REG,REGE,BCC,BDC newComponent;
    class RF newComponent;
```

### Module Dependency Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 70, 'curve': 'basis'}}}%%
graph TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;

    subgraph L3 ["LAYER 3 — SERVER"]
        direction LR
        TOOLS_CLONE["● server/tools_clone.py<br/>━━━━━━━━━━<br/>+register_clone_status<br/>+batch_cleanup_clones"]
        HELPERS["● server/helpers.py<br/>━━━━━━━━━━<br/>re-exports clone_registry<br/>module object"]
        SRV_INIT["● server/__init__.py<br/>━━━━━━━━━━<br/>40 kitchen tools<br/>(was 38)"]
    end

    subgraph L1 ["LAYER 1 — WORKSPACE"]
        direction LR
        CLONE_REG["★ workspace/clone_registry.py<br/>━━━━━━━━━━<br/>register_clone()<br/>read_registry()<br/>cleanup_candidates()<br/>batch_delete()"]
        WS_INIT["● workspace/__init__.py<br/>━━━━━━━━━━<br/>+register_clone<br/>+read_registry<br/>+cleanup_candidates"]
    end

    subgraph L0C ["LAYER 0 — CORE"]
        direction LR
        TYPE_CONST["● core/_type_constants.py<br/>━━━━━━━━━━<br/>+register_clone_status<br/>+batch_cleanup_clones<br/>in GATED_TOOLS,<br/>TOOL_SUBSET_TAGS,<br/>TOOL_CATEGORIES"]
        CORE_IO["core/io.py<br/>━━━━━━━━━━<br/>atomic_write()"]
        CORE_LOG["core/logging.py<br/>━━━━━━━━━━<br/>get_logger()"]
    end

    subgraph L0S ["LAYER 0 — STANDALONE UTILS"]
        SMOKE["● smoke_utils.py<br/>━━━━━━━━━━<br/>+check_cleanup_mode()<br/>stdlib only — no autoskillit deps"]
    end

    subgraph EXT ["EXTERNAL"]
        direction LR
        FASTMCP["fastmcp<br/>━━━━━━━━━━<br/>Context, CurrentContext<br/>@mcp.tool decorator"]
        STDLIB["stdlib<br/>━━━━━━━━━━<br/>json, pathlib, asyncio<br/>typing, collections.abc"]
    end

    %% L3 → L1 (valid downward) %%
    HELPERS -->|"imports module<br/>clone_registry"| CLONE_REG
    TOOLS_CLONE -->|"uses clone_registry<br/>via helpers"| HELPERS

    %% L3 → L0 (valid downward) %%
    TOOLS_CLONE -->|"imports get_logger"| CORE_LOG
    SRV_INIT -->|"reads GATED_TOOLS<br/>tool counts"| TYPE_CONST

    %% L1 → L1 (intra-layer, package init) %%
    WS_INIT -->|"re-exports 3 functions"| CLONE_REG

    %% L1 → L0 (valid downward) %%
    CLONE_REG -->|"atomic_write()"| CORE_IO
    CLONE_REG -->|"get_logger()"| CORE_LOG

    %% External %%
    TOOLS_CLONE -->|"@mcp.tool<br/>Context"| FASTMCP
    CLONE_REG --> STDLIB
    SMOKE --> STDLIB

    %% CLASS ASSIGNMENTS %%
    class TOOLS_CLONE,SRV_INIT cli;
    class HELPERS phase;
    class WS_INIT handler;
    class CLONE_REG newComponent;
    class TYPE_CONST,CORE_IO,CORE_LOG stateNode;
    class SMOKE handler;
    class FASTMCP,STDLIB integration;
```

### C4 Container Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60, 'curve': 'basis'}}}%%
graph TB
    %% CLASS DEFINITIONS %%
    classDef cli fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef integration fill:#c62828,stroke:#ef9a9a,stroke-width:2px,color:#fff;

    %% CALLERS %%
    PIPE1(["Parallel Pipeline 1<br/>━━━━━━━━━━<br/>headless session<br/>calls on completion"])
    PIPE2(["Parallel Pipeline N<br/>━━━━━━━━━━<br/>headless session<br/>calls on completion"])
    ORCH(["Orchestrator<br/>━━━━━━━━━━<br/>interactive session<br/>runs batch phase once"])

    subgraph Server ["● FastMCP Server (42 tools, 40 kitchen-tagged)"]
        direction TB

        subgraph CloneTools ["● tools_clone.py — Clone & Remote category"]
            direction LR
            CLONE_REPO["clone_repo<br/>━━━━━━━━━━<br/>clone GitHub repos<br/>for pipeline isolation"]
            REMOVE_CLONE["remove_clone<br/>━━━━━━━━━━<br/>delete clone dir<br/>(immediate path)"]
            PUSH["push_to_remote<br/>━━━━━━━━━━<br/>push branch to remote"]
            REG_TOOL["★ register_clone_status<br/>━━━━━━━━━━<br/>status: success | error<br/>clone_path, registry_path"]
            BATCH_TOOL["★ batch_cleanup_clones<br/>━━━━━━━━━━<br/>reads registry<br/>deletes success clones"]
        end

        TYPE_CONST["● core/_type_constants.py<br/>━━━━━━━━━━<br/>GATED_TOOLS: +2 tools<br/>TOOL_SUBSET_TAGS: clone tag<br/>TOOL_CATEGORIES: Clone & Remote"]
    end

    subgraph WorkspaceLib ["Workspace Library (L1)"]
        direction TB
        CLONE_REG_MOD["★ workspace/clone_registry.py<br/>━━━━━━━━━━<br/>register_clone() — atomic append<br/>read_registry() — safe read<br/>cleanup_candidates() — partition<br/>batch_delete() — bulk remove"]
    end

    subgraph Storage ["Storage"]
        REGISTRY[("★ clone-cleanup-registry.json<br/>━━━━━━━━━━<br/>JSON on disk<br/>[ {path, status}, … ]<br/>atomic writes — parallel-safe")]
    end

    %% CONNECTIONS %%
    PIPE1 -->|"MCP: register_clone_status<br/>status=success|error"| REG_TOOL
    PIPE2 -->|"MCP: register_clone_status<br/>status=success|error"| REG_TOOL
    ORCH -->|"MCP: batch_cleanup_clones<br/>(after all pipelines done)"| BATCH_TOOL

    REG_TOOL -->|"calls register_clone()"| CLONE_REG_MOD
    BATCH_TOOL -->|"calls batch_delete()"| CLONE_REG_MOD
    BATCH_TOOL -->|"calls remove_clone (sync)"| REMOVE_CLONE

    CLONE_REG_MOD -->|"atomic writes"| REGISTRY
    CLONE_REG_MOD -->|"reads entries"| REGISTRY

    Server -.->|"gating lookup"| TYPE_CONST

    %% CLASS ASSIGNMENTS %%
    class PIPE1,PIPE2,ORCH cli;
    class CLONE_REPO,REMOVE_CLONE,PUSH handler;
    class REG_TOOL,BATCH_TOOL,CLONE_REG_MOD newComponent;
    class REGISTRY stateNode;
    class TYPE_CONST phase;
```

Closes #486

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/impl-20260322-203748-529080/temp/make-plan/defer_clone_cleanup_plan_2026-03-22_205207.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via
AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
## Summary

`_format_response()` in `pretty_output.py` conflated two independent
concerns: **envelope detection** (did the Claude Code `{"result":
"..."}` wrapper contain a JSON dict or plain text?) and **formatter
dispatch** (which formatter handles this tool?). When a tool returned
plain text, an `except: pass` silently left `data` as the raw envelope
dict, causing named formatters to receive the wrong shape — producing
truncated near-empty output (`## token_summary`) with no error. The fix
extracts `_resolve_payload()` which produces a typed `_DictPayload` or
`_PlainTextPayload` before any dispatch, making it structurally
impossible for a named dict-formatter to receive a plain-text envelope.
A secondary defect — `_UNFORMATTED_TOOLS` was declared but never
consulted — is also corrected by making it an active behavioral gate.

## Architecture Impact

### Process Flow Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;

    %% TERMINALS %%
    START([PostToolUse Event])
    PASSTHRU([Pass-Through: exit 0])
    FORMATTED([Formatted Output: exit 0])

    subgraph Entry ["Hook Entry — main()"]
        direction TB
        ParseStdin["ParseStdin<br/>━━━━━━━━━━<br/>json.loads(stdin)"]
        ParseErr{"JSONDecodeError<br/>or non-dict?"}
        ExtractFields["ExtractFields<br/>━━━━━━━━━━<br/>tool_name + tool_response"]
        MissingFields{"Missing<br/>fields?"}
        PipelineMode["_is_pipeline_mode()<br/>━━━━━━━━━━<br/>Read hook config JSON"]
    end

    subgraph Resolution ["★ _resolve_payload() — NEW typed resolver"]
        direction TB
        ParseOuter["ParseOuter<br/>━━━━━━━━━━<br/>json.loads(tool_response)"]
        OuterErr{"parse error<br/>or non-dict?"}
        EnvCheck{"mcp__ prefix +<br/>single result key +<br/>str value?"}
        InnerParse["★ InnerParse<br/>━━━━━━━━━━<br/>json.loads(data[result])"]
        InnerDict{"inner is<br/>dict?"}
        DictPayload["★ _DictPayload(data=inner)<br/>━━━━━━━━━━<br/>Unwrapped JSON object"]
        PlainText["★ _PlainTextPayload(text)<br/>━━━━━━━━━━<br/>Pre-formatted string"]
        BareDict["_DictPayload(data)<br/>━━━━━━━━━━<br/>No envelope to strip"]
    end

    subgraph PlainDispatch ["★ Plain-Text Dispatch — NEW branch"]
        direction TB
        PlainLookup{"★ short_name in<br/>_PLAIN_TEXT_FORMATTERS?"}
        PlainHandler["★ _fmt_open_kitchen_plain_text()<br/>━━━━━━━━━━<br/>Custom plain-text render"]
        PlainPassThru["PassThru<br/>━━━━━━━━━━<br/>Return text unchanged"]
    end

    subgraph DictDispatch ["● Dict Dispatch — _UNFORMATTED_TOOLS now active gate"]
        direction TB
        GateErr{"subtype ==<br/>gate_error?"}
        ToolExc{"subtype ==<br/>tool_exception?"}
        UnformCheck{"● short_name in<br/>_UNFORMATTED_TOOLS?"}
        FormatLookup{"short_name in<br/>_FORMATTERS?"}
        GateFmt["_fmt_gate_error()<br/>━━━━━━━━━━<br/>Gate error renderer"]
        ExcFmt["_fmt_tool_exception()<br/>━━━━━━━━━━<br/>Exception renderer"]
        NamedFmt["_FORMATTERS[short_name]<br/>━━━━━━━━━━<br/>Named formatter dispatch"]
        Generic["_fmt_generic()<br/>━━━━━━━━━━<br/>Fallback KV renderer"]
    end

    %% FLOW %%
    START --> ParseStdin
    ParseStdin --> ParseErr
    ParseErr -->|"yes"| PASSTHRU
    ParseErr -->|"no"| ExtractFields
    ExtractFields --> MissingFields
    MissingFields -->|"yes"| PASSTHRU
    MissingFields -->|"no"| PipelineMode
    PipelineMode --> ParseOuter

    ParseOuter --> OuterErr
    OuterErr -->|"yes"| PASSTHRU
    OuterErr -->|"no"| EnvCheck
    EnvCheck -->|"bare dict"| BareDict
    EnvCheck -->|"envelope detected"| InnerParse
    InnerParse -->|"success"| InnerDict
    InnerParse -->|"JSONDecodeError"| PlainText
    InnerDict -->|"true"| DictPayload
    InnerDict -->|"false: list/str"| PlainText

    DictPayload --> GateErr
    BareDict --> GateErr
    PlainText --> PlainLookup

    PlainLookup -->|"found"| PlainHandler
    PlainLookup -->|"not found"| PlainPassThru
    PlainHandler --> FORMATTED
    PlainPassThru --> FORMATTED

    GateErr -->|"yes"| GateFmt
    GateErr -->|"no"| ToolExc
    ToolExc -->|"yes"| ExcFmt
    ToolExc -->|"no"| UnformCheck
    UnformCheck -->|"yes"| Generic
    UnformCheck -->|"no"| FormatLookup
    FormatLookup -->|"found"| NamedFmt
    FormatLookup -->|"not found"| Generic

    GateFmt --> FORMATTED
    ExcFmt --> FORMATTED
    NamedFmt --> FORMATTED
    Generic --> FORMATTED

    %% CLASS ASSIGNMENTS %%
    class START,PASSTHRU,FORMATTED terminal;
    class ParseErr,MissingFields,OuterErr,EnvCheck,InnerDict,PlainLookup,GateErr,ToolExc,FormatLookup stateNode;
    class ParseStdin,ExtractFields,PipelineMode,InnerParse,BareDict,GateFmt,ExcFmt,NamedFmt,Generic handler;
    class UnformCheck detector;
    class DictPayload,PlainText,PlainHandler,PlainPassThru,PlainDispatch newComponent;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | PostToolUse event, pass-through, formatted output |
| Teal | State | Decision points and routing conditions |
| Orange | Handler | Processing nodes (parse, extract, format) |
| Green | New Component | ★ New: `_resolve_payload`, `_PlainTextPayload`, `_DictPayload`, `_PLAIN_TEXT_FORMATTERS` |
| Red | Detector | ● `_UNFORMATTED_TOOLS` behavioral gate (previously dead code, now active) |

### Error Resilience Diagram

```mermaid
%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50, 'curve': 'basis'}}}%%
flowchart TB
    %% CLASS DEFINITIONS %%
    classDef terminal fill:#1a237e,stroke:#7986cb,stroke-width:2px,color:#fff;
    classDef stateNode fill:#004d40,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef handler fill:#e65100,stroke:#ffb74d,stroke-width:2px,color:#fff;
    classDef phase fill:#6a1b9a,stroke:#ba68c8,stroke-width:2px,color:#fff;
    classDef newComponent fill:#2e7d32,stroke:#81c784,stroke-width:2px,color:#fff;
    classDef detector fill:#b71c1c,stroke:#ef5350,stroke-width:2px,color:#fff;
    classDef output fill:#00695c,stroke:#4db6ac,stroke-width:2px,color:#fff;
    classDef gap fill:#ff6f00,stroke:#ffa726,stroke-width:2px,color:#000;

    %% TERMINALS %%
    HOOK_START([PostToolUse Event])
    PASSTHRU([Pass-Through: exit 0<br/>Claude Code unaffected])
    FORMATTED([Formatted Output: exit 0])

    subgraph FailOpen ["Fail-Open Sentinels — main()"]
        direction TB
        OuterParse["json.loads(stdin)<br/>━━━━━━━━━━<br/>Parse hook event"]
        ParseFail{"JSONDecodeError?"}
        ExtractOK["Extract tool_name<br/>+ tool_response"]
        FieldsMissing{"Fields<br/>missing?"}
        FormatCall["● _format_response()<br/>━━━━━━━━━━<br/>Format + dispatch"]
        AnyException{"Exception<br/>raised?"}
        NullCheck{"formatted<br/>is None?"}
    end

    subgraph PayloadGates ["★ _resolve_payload() — Validation Gates"]
        direction TB
        ParseTR["json.loads(tool_response)<br/>━━━━━━━━━━<br/>Outer parse"]
        TRFail{"JSONDecodeError<br/>or non-dict?"}
        EnvGate{"MCP envelope<br/>detected?"}
        InnerAttempt["★ json.loads(inner)<br/>━━━━━━━━━━<br/>Inner parse attempt"]
        InnerFail{"JSONDecodeError<br/>or non-dict inner?"}
    end

    subgraph InnerFixGap ["Inner Parse: Error becomes Signal (★ Fixed)"]
        direction TB
        OldBroken["BROKEN (pre-fix)<br/>━━━━━━━━━━<br/>except: pass → wrong shape<br/>→ empty ## token_summary"]
        NewFixed["★ _PlainTextPayload(text)<br/>━━━━━━━━━━<br/>JSONDecodeError is the signal<br/>→ correct plain-text dispatch"]
    end

    subgraph SafeGuards ["● _UNFORMATTED_TOOLS — Structural Safeguard (now active)"]
        direction TB
        UnformGate{"● short_name in<br/>_UNFORMATTED_TOOLS?"}
        SafeRoute["_fmt_generic()<br/>━━━━━━━━━━<br/>Guaranteed safe fallback"]
        NamedDispatch["_FORMATTERS[short_name]<br/>━━━━━━━━━━<br/>Named formatter (dict only)"]
    end

    %% FLOW %%
    HOOK_START --> OuterParse
    OuterParse --> ParseFail
    ParseFail -->|"yes"| PASSTHRU
    ParseFail -->|"no"| ExtractOK
    ExtractOK --> FieldsMissing
    FieldsMissing -->|"yes"| PASSTHRU
    FieldsMissing -->|"no"| FormatCall
    FormatCall --> AnyException
    AnyException -->|"yes"| PASSTHRU
    AnyException -->|"no"| NullCheck
    NullCheck -->|"yes: None"| PASSTHRU
    NullCheck -->|"no: str"| FORMATTED

    FormatCall --> ParseTR
    ParseTR --> TRFail
    TRFail -->|"yes → None"| PASSTHRU
    TRFail -->|"no"| EnvGate
    EnvGate -->|"yes: MCP envelope"| InnerAttempt
    EnvGate -->|"no: bare dict"| UnformGate
    InnerAttempt --> InnerFail
    InnerFail -->|"yes: JSONDecodeError"| NewFixed
    InnerFail -->|"no: inner dict"| UnformGate

    NewFixed --> FORMATTED
    OldBroken -.->|"replaced by ★"| NewFixed

    UnformGate -->|"yes → _fmt_generic"| SafeRoute
    UnformGate -->|"no"| NamedDispatch
    SafeRoute --> FORMATTED
    NamedDispatch --> FORMATTED

    %% CLASS ASSIGNMENTS %%
    class HOOK_START,PASSTHRU,FORMATTED terminal;
    class ParseFail,FieldsMissing,AnyException,NullCheck,TRFail,EnvGate,InnerFail stateNode;
    class OuterParse,ExtractOK,ParseTR,InnerAttempt,SafeRoute,NamedDispatch handler;
    class FormatCall phase;
    class UnformGate detector;
    class NewFixed,PayloadGates newComponent;
    class OldBroken gap;
```

**Color Legend:**
| Color | Category | Description |
|-------|----------|-------------|
| Dark Blue | Terminal | PostToolUse event, pass-through, formatted output |
| Teal | State | Decision points and failure checks |
| Orange | Handler | Parse and processing nodes |
| Purple | Phase | `_format_response` entry point |
| Green | New Component | ★ Fixed: `_PlainTextPayload` captures inner JSONDecodeError correctly |
| Red | Detector | ● `_UNFORMATTED_TOOLS` behavioral gate (structural safeguard) |
| Yellow/Amber | Gap | Pre-fix broken behavior: silent data corruption via `except: pass` |

Closes #494

## Implementation Plan

Plan file:
`/home/talon/projects/autoskillit-runs/remediation-20260323-081448-474525/.autoskillit/temp/rectify/rectify_pretty_output_hook_payload_dispatch_2026-03-23_084500.md`

🤖 Generated with [Claude Code](https://claude.com/claude-code) via AutoSkillit

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Remove the generic open-pr-main skill from skills_extended/ and replace it
with a comprehensive project-local promote-to-main skill in .claude/skills/.
The new skill adds pre-flight checks, commit categorization, per-domain risk
scoring, breaking change audit, test coverage delta, regression risk analysis,
release notes generation, traceability matrix, migration detection, cross-domain
dependency analysis, and dry-run mode. Subagents are granted autonomy to spawn
their own sub-subagents at discretion.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When >= 20 PRs have been squash-merged into integration since divergence
from main, the skill bumps the minor version (X.Y+1.0) on integration
before creating the promotion PR. Below 20, the CI workflow's automatic
patch bump suffices. In dry-run mode, the bump is reported but not applied.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
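The threshold rule described above might look like this (a hypothetical sketch; `next_version` and its signature are invented for illustration):

```python
def next_version(current: str, merged_pr_count: int, threshold: int = 20) -> str:
    """Bump minor (X.Y+1.0) when merged_pr_count >= threshold.

    Below the threshold, return the version unchanged and let the CI
    workflow's automatic patch bump handle it.
    """
    major, minor, _patch = (int(part) for part in current.split("."))
    if merged_pr_count >= threshold:
        return f"{major}.{minor + 1}.0"
    return current  # CI applies the patch bump downstream
```

In dry-run mode the skill would report the computed version without writing it, per the commit description.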
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The test name claimed to verify `on_failure` routing but only asserted that `path_contamination` appeared in the prompt. It now mirrors the `drain_race` test pattern with a segment-proximity check.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…tchen rule tests

Split a combined logical-and assert into separate assertions for clearer failure identification. Scope the `gh-pr-merge` prohibition check to the specific rule containing the phrase rather than to all rules.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the relative `Path("src/autoskillit/cli")` with `__file__`-based resolution to prevent a vacuous pass when pytest runs from a non-root directory.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
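A minimal sketch of that `__file__`-based resolution (the `tests/cli/` layout and the helper name are assumptions for illustration):

```python
from pathlib import Path


def cli_dir(test_file: str) -> Path:
    """Resolve src/autoskillit/cli relative to the given test file, not CWD.

    Anchoring on the test file's own location prevents a vacuous pass when
    pytest is invoked from a directory other than the repo root.
    """
    # tests/cli/test_x.py -> parents[0]=tests/cli, [1]=tests, [2]=repo root
    repo_root = Path(test_file).resolve().parents[2]
    return repo_root / "src" / "autoskillit" / "cli"
```

The real test would call it as `cli_dir(__file__)` so the path is stable regardless of the pytest invocation directory.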
Log `OSError` in `is_first_run()` and `gather_intel` timeout/failure at debug level instead of silently swallowing them. Capture the `intel_future` result value for debug visibility.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The kitchen rule uses "Never" (title case), which was missed by the exact-case keyword list. Switch to a `.lower()` comparison.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
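The case-insensitive fix could be as simple as (hypothetical sketch; the function name and signature are invented):

```python
def contains_keyword(rule_text: str, keywords: list[str]) -> bool:
    """Case-insensitive keyword match: 'Never' now matches keyword 'never'."""
    lowered = rule_text.lower()
    return any(keyword in lowered for keyword in keywords)
```

Lowercasing the rule text once before comparison avoids maintaining a list of every capitalization variant.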
Promote integration to main (28 PRs, 25 issues, 16 fixes, 16 features)
" before proceeding.\n"
)
print(" To bypass, type exactly:\n")
print(f" {_D}{_SECRET_SCAN_BYPASS_PHRASE}{_R}\n")

**Check failure: Code scanning / CodeQL**

Clear-text logging of sensitive information (High)

This expression logs sensitive data (secret) as clear text.
This expression logs sensitive data (password) as clear text.

**Copilot Autofix** (AI, 21 days ago)

In general, to fix clear-text logging of sensitive information, avoid printing or logging secret or password-like values directly. Instead, describe to the user what to do without echoing the sensitive value, or handle the interaction in a way that doesn’t expose the value in logs (e.g., non-echoed input, partial redaction, or interactive-only presentation).

For this specific case, we should stop printing _SECRET_SCAN_BYPASS_PHRASE directly. The simplest fix that preserves behavior is:

- Keep the internal constant `_SECRET_SCAN_BYPASS_PHRASE` unchanged so the comparison logic works as-is.
- Change the instructions printed to the user so they no longer embed the full phrase in clear text. Instead, we can:
  - Either describe the phrase (e.g., "Type the exact bypass phrase shown above in your documentation").
  - Or print a partially redacted version (e.g., with some characters replaced by `*`), so the rule no longer detects a full secret/password being logged, but the user still sees most of it.
- Leave the input and comparison logic unchanged; the user still has to type the full phrase correctly, but we are not logging it.

Because we can only change the shown snippet, the best minimal change is to modify line 255 so it no longer directly interpolates _SECRET_SCAN_BYPASS_PHRASE. For example, we can construct a masked version of the phrase in code right at that point and print the masked version instead of the full phrase. This keeps the same UI style, requires no new imports, and doesn’t affect any other behavior.

Concretely in src/autoskillit/cli/_init_helpers.py:

- Right before printing the bypass phrase, introduce a local variable `masked_phrase` that redacts some of the characters of `_SECRET_SCAN_BYPASS_PHRASE` (for instance, replacing the middle part with `***`).
- Change the `print(f"  {_D}{_SECRET_SCAN_BYPASS_PHRASE}{_R}\n")` line to print `masked_phrase` instead.
- No new imports, methods, or global definitions are required.
**Suggested changeset 1:** `src/autoskillit/cli/_init_helpers.py`

Run the following command in your local git repository to apply this patch:
```shell
cat << 'EOF' | git apply
diff --git a/src/autoskillit/cli/_init_helpers.py b/src/autoskillit/cli/_init_helpers.py
--- a/src/autoskillit/cli/_init_helpers.py
+++ b/src/autoskillit/cli/_init_helpers.py
@@ -252,7 +252,12 @@
         "  before proceeding.\n"
     )
     print("  To bypass, type exactly:\n")
-    print(f"  {_D}{_SECRET_SCAN_BYPASS_PHRASE}{_R}\n")
+    masked_phrase = (
+        _SECRET_SCAN_BYPASS_PHRASE[:10]
+        + "..."
+        + _SECRET_SCAN_BYPASS_PHRASE[-10:]
+    )
+    print(f"  {_D}{masked_phrase}{_R}\n")
     response = input("  > ").strip()
     if response != _SECRET_SCAN_BYPASS_PHRASE:
         print(f"\n  {_B}Aborted.{_R} Phrase did not match.")
EOF
```
**Collaborator (Author) comment:**
Investigated — this is intentional. Line 41 defines _SECRET_SCAN_BYPASS_PHRASE as the fixed string "I accept the risk of leaking secrets without pre-commit scanning". Line 255 prints this constant to the terminal inside _check_secret_scanning() as an interactive consent prompt — it is not logging any user-supplied secret or password. The scanner flagged the words "secret"/"secrets" in the variable name and string value, but no sensitive data (credentials, keys, passwords) is involved.

@Trecek merged commit `86fa625` into `stable` on Mar 24, 2026. 7 of 8 checks passed.