31 changes: 4 additions & 27 deletions skills/agent-md-refactor/SKILL.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
---
name: agent-md-refactor
description: Refactor bloated AGENTS.md, CLAUDE.md, or similar agent instruction files to follow progressive disclosure principles. Splits monolithic files into organized, linked documentation.
description: Refactor bloated AGENTS.md, CLAUDE.md, COPILOT.md, or similar agent instruction files into organized, linked documentation following progressive disclosure principles. Use when files are too long, hard to maintain, or the user wants to split, reorganize, break up, or simplify their agent configuration files.
license: MIT
---

@@ -10,18 +10,6 @@ Refactor bloated agent instruction files (AGENTS.md, CLAUDE.md, COPILOT.md, etc.

---

## Triggers

Use this skill when:
- "refactor my AGENTS.md" / "refactor my CLAUDE.md"
- "split my agent instructions"
- "organize my CLAUDE.md file"
- "my AGENTS.md is too long"
- "progressive disclosure for my instructions"
- "clean up my agent config"

---

## Quick Reference

| Phase | Action | Output |
@@ -101,9 +89,10 @@ Organize remaining instructions into logical categories.

**Grouping rules:**
1. Each file should be self-contained for its topic
2. Aim for 3-8 files (not too granular, not too broad)
2. Aim for 3-8 files (not too granular, not too broad) — too many categories cause fragmentation
3. Name files clearly: `{topic}.md`
4. Include only actionable instructions
4. Include only actionable instructions — vague guidance wastes tokens
5. Use a flat structure with links — avoid deep nesting

---

@@ -217,18 +206,6 @@ Identify instructions that should be removed entirely.

---

## Anti-Patterns

| Avoid | Why | Instead |
|-------|-----|---------|
| Keeping everything in root | Bloated, hard to maintain | Split into linked files |
| Too many categories | Fragmentation | Consolidate related topics |
| Vague instructions | Wastes tokens, no value | Be specific or delete |
| Duplicating defaults | Agent already knows | Only override when needed |
| Deep nesting | Hard to navigate | Flat structure with links |

---

## Examples

### Before (Bloated Root)
7 changes: 2 additions & 5 deletions skills/backend-to-frontend-handoff-docs/SKILL.md
@@ -1,6 +1,6 @@
---
name: backend-to-frontend-handoff-docs
description: Create API handoff documentation for frontend developers. Use when backend work is complete and needs to be documented for frontend integration, or user says 'create handoff', 'document API', 'frontend handoff', or 'API documentation'.
description: Generate API handoff documentation including endpoint specifications, request/response schemas, authentication details, validation rules, and example payloads for frontend developers. Use when backend work is complete and needs to be documented for frontend integration, or user says 'create handoff', 'document API', 'frontend handoff', or 'API documentation'.
---

# API Handoff Mode
@@ -9,8 +9,6 @@ description: Create API handoff documentation for frontend developers. Use when

You are a backend developer completing API work. Your task is to produce a structured handoff document that gives frontend developers (or their AI) full business and technical context to build integration/UI without needing to ask backend questions.

> **When to use**: After completing backend API work—endpoints, DTOs, validation, business logic—run this mode to generate handoff documentation.

> **Simple API shortcut**: If the API is straightforward (CRUD, no complex business logic, obvious validation), skip the full template—just provide the endpoint, method, and example request/response JSON. Frontend can infer the rest.

## Goal
@@ -109,7 +107,6 @@ interface ExampleDto {
---

## Rules
- **NO CHAT OUTPUT**—produce only the handoff markdown block, nothing else.
- Be precise: types, constraints, examples—not vague prose.
- Include real example payloads where helpful.
- Surface non-obvious behaviors—don't assume frontend will "just know."
@@ -119,4 +116,4 @@ interface ExampleDto {
- If something is incomplete or TBD, say so explicitly.

## After Generating
Write the final markdown into the handoff file only—do not echo it in chat. (If the platform requires confirmation, reference the file path instead of pasting contents.)
Write the final markdown into the handoff file only—reference the file path instead of pasting contents in chat.
23 changes: 8 additions & 15 deletions skills/c4-architecture/SKILL.md
@@ -12,7 +12,8 @@ Generate software architecture documentation using C4 model diagrams in Mermaid
1. **Understand scope** - Determine which C4 level(s) are needed based on audience
2. **Analyze codebase** - Explore the system to identify components, containers, and relationships
3. **Generate diagrams** - Create Mermaid C4 diagrams at appropriate abstraction levels
4. **Document** - Write diagrams to markdown files with explanatory context
4. **Validate** - Verify all relationships have technology labels, every element has a description, and diagrams render correctly in Mermaid preview
5. **Document** - Write diagrams to markdown files with explanatory context

## C4 Diagram Levels

@@ -203,22 +204,14 @@ Use `$offsetX` and `$offsetY` to fix overlapping relationship labels.
4. **Include technology labels** - "JSON/HTTPS", "JDBC", "gRPC"
5. **Stay under 20 elements per diagram** - Split complex systems into multiple diagrams

### Clarity Guidelines

1. **Start at Level 1** - Context diagrams help frame the system scope
2. **One diagram per file** - Keep diagrams focused on a single abstraction level
3. **Meaningful aliases** - Use descriptive aliases (e.g., `orderService` not `s1`)
4. **Concise descriptions** - Keep descriptions under 50 characters when possible
5. **Always include a title** - "System Context diagram for [System Name]"

### What to Avoid
### C4-Specific Gotchas

See [references/common-mistakes.md](references/common-mistakes.md) for detailed anti-patterns:
- Confusing containers (deployable) vs components (non-deployable)
- Modeling shared libraries as containers
- Showing message brokers as single containers instead of individual topics
- Adding undefined abstraction levels like "subcomponents"
- Removing type labels to "simplify" diagrams
- Confusing containers (deployable units) vs components (non-deployable modules within a container)
- Modeling shared libraries as containers — they are not independently deployable
- Showing message brokers as single containers instead of individual topics/queues
- Adding undefined abstraction levels like "subcomponents" — C4 has exactly four levels
- Removing type labels to "simplify" diagrams — types are what make C4 diagrams self-describing

## Microservices Guidelines

84 changes: 48 additions & 36 deletions skills/codex/SKILL.md
@@ -1,66 +1,78 @@
---
name: codex
description: Use when the user asks to run Codex CLI (codex exec, codex resume) or references OpenAI Codex for code analysis, refactoring, or automated editing. Uses GPT-5.2 by default for state-of-the-art software engineering.
description: Executes OpenAI Codex CLI for AI-assisted code generation, multi-file editing, and automated refactoring. Use when the user asks to run Codex CLI commands (codex exec, codex resume), generate code patches, apply multi-file edits, execute AI-assisted shell commands, or references OpenAI Codex for code analysis or automated editing.
---

# Codex Skill Guide

## Running a Task
1. Default to `gpt-5.2` model. Ask the user (via `AskUserQuestion`) which reasoning effort to use (`xhigh`,`high`, `medium`, or `low`). User can override model if needed (see Model Options below).

1. Default to `gpt-5.2` model. Ask the user (via `AskUserQuestion`) which reasoning effort to use (`xhigh`, `high`, `medium`, or `low`). User can override model if needed (see Model Options below).
2. Select the sandbox mode required for the task; default to `--sandbox read-only` unless edits or network access are necessary.
3. Assemble the command with the appropriate options:
- `-m, --model <MODEL>`
- `--config model_reasoning_effort="<high|medium|low>"`
- `--sandbox <read-only|workspace-write|danger-full-access>`
- `--full-auto`
- `-C, --cd <DIR>`
- `--skip-git-repo-check`
3. Always use --skip-git-repo-check.
4. When continuing a previous session, use `codex exec --skip-git-repo-check resume --last` via stdin. When resuming, don't use any configuration flags unless explicitly requested by the user, e.g. if they specify the model or the reasoning effort when requesting to resume a session. Resume syntax: `echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null`. All flags have to be inserted between exec and resume.
5. **IMPORTANT**: By default, append `2>/dev/null` to all `codex exec` commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
6. Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
7. **After Codex completes**, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
- `--skip-git-repo-check` (always include this flag)
4. **IMPORTANT**: Append `2>/dev/null` to all `codex exec` commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests it or debugging is needed.
5. Run the command, capture output, and summarize the outcome for the user.
6. **Validate**: If the exit code is non-zero, report the error and ask the user how to proceed before retrying.
7. **After Codex completes**, inform the user: "You can resume this Codex session at any time by saying 'codex resume'."
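Assembled per steps 1-4 above, a fresh run might look like the following sketch. The model, reasoning effort, sandbox mode, and prompt are illustrative assumptions, and actually executing it requires the Codex CLI on PATH:

```shell
# Build the command from steps 1-4 as an argument array.
# Model, effort, sandbox, and prompt are illustrative placeholders.
cmd=(codex exec --skip-git-repo-check
     -m gpt-5.2
     --config model_reasoning_effort="high"
     --sandbox read-only
     "Review the error handling in src/ and summarize findings")

# Print the assembled command for inspection; uncomment the last line
# to actually run it with thinking tokens (stderr) suppressed.
printf '%s ' "${cmd[@]}"; echo
# "${cmd[@]}" 2>/dev/null
```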

### Resuming a Session

When continuing a previous session, pipe the new prompt via stdin. Do not add configuration flags unless the user explicitly requests a different model or reasoning effort:

```bash
echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null
```

All flags must be inserted between `exec` and `resume`. The resumed session inherits model, reasoning effort, and sandbox mode from the original.

### Quick Reference
| Use case | Sandbox mode | Key flags |

| Use case | Sandbox mode | Command pattern |
| --- | --- | --- |
| Read-only review or analysis | `read-only` | `--sandbox read-only 2>/dev/null` |
| Apply local edits | `workspace-write` | `--sandbox workspace-write --full-auto 2>/dev/null` |
| Permit network or broad access | `danger-full-access` | `--sandbox danger-full-access --full-auto 2>/dev/null` |
| Resume recent session | Inherited from original | `echo "prompt" \| codex exec --skip-git-repo-check resume --last 2>/dev/null` (no flags allowed) |
| Run from another directory | Match task needs | `-C <DIR>` plus other flags `2>/dev/null` |
| Read-only review or analysis | `read-only` | `codex exec --skip-git-repo-check -m gpt-5.2 --sandbox read-only "prompt" 2>/dev/null` |
| Apply local edits | `workspace-write` | `codex exec --skip-git-repo-check --sandbox workspace-write --full-auto "prompt" 2>/dev/null` |
| Network or broad access | `danger-full-access` | `codex exec --skip-git-repo-check --sandbox danger-full-access --full-auto "prompt" 2>/dev/null` |
| Resume recent session | Inherited | `echo "prompt" \| codex exec --skip-git-repo-check resume --last 2>/dev/null` |
| Run from another directory | Match task needs | `codex exec --skip-git-repo-check -C <DIR> "prompt" 2>/dev/null` |

## Model Options

| Model | Best for | Context window | Key features |
| --- | --- | --- | --- |
| `gpt-5.2-max` | **Max model**: Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| `gpt-5.2` ⭐ | **Flagship model**: Software engineering, agentic coding workflows | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| `gpt-5.2-mini` | Cost-efficient coding (4x more usage allowance) | 400K input / 128K output | Near SOTA performance, $0.25/$2.00 |
| `gpt-5.1-thinking` | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | Adaptive thinking depth, runs 2x slower on hardest tasks |
| Model | Best for |
| --- | --- |
| `gpt-5.2` (default) | Software engineering, agentic coding workflows |
| `gpt-5.2-max` | Ultra-complex reasoning, deep problem analysis |
| `gpt-5.2-mini` | Cost-efficient coding (4x more usage allowance) |
| `gpt-5.1-thinking` | Adaptive thinking depth for hardest tasks |

**GPT-5.2 Advantages**: 76.3% SWE-bench (vs 72.8% GPT-5), 30% faster on average tasks, better tool handling, reduced hallucinations, improved code quality. Knowledge cutoff: September 30, 2024.
All models support 400K input / 128K output context windows.

**Reasoning Effort Levels**:
- `xhigh` - Ultra-complex tasks (deep problem analysis, complex reasoning, deep understanding of the problem)
- `high` - Complex tasks (refactoring, architecture, security analysis, performance optimization)
- `medium` - Standard tasks (refactoring, code organization, feature additions, bug fixes)
- `low` - Simple tasks (quick fixes, simple changes, code formatting, documentation)
### Reasoning Effort Levels

**Cached Input Discount**: 90% off ($0.125/M tokens) for repeated context, cache lasts up to 24 hours.
| Level | When to use | Examples |
| --- | --- | --- |
| `xhigh` | Ultra-complex tasks | Deep problem analysis, complex multi-system reasoning |
| `high` | Complex tasks | Architecture refactoring, security analysis, performance optimization |
| `medium` | Standard tasks | Code organization, feature additions, bug fixes |
| `low` | Simple tasks | Quick fixes, formatting, documentation updates |

## Following Up
- After every `codex` command, immediately use `AskUserQuestion` to confirm next steps, collect clarifications, or decide whether to resume with `codex exec resume --last`.
- When resuming, pipe the new prompt via stdin: `echo "new prompt" | codex exec resume --last 2>/dev/null`. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
- Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.

1. After every `codex` command, use `AskUserQuestion` to confirm next steps or decide whether to resume.
2. Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.

## Error Handling
- Stop and report failures whenever `codex --version` or a `codex exec` command exits non-zero; request direction before retrying.
- Before you use high-impact flags (`--full-auto`, `--sandbox danger-full-access`, `--skip-git-repo-check`) ask the user for permission using AskUserQuestion unless it was already given.
- When output includes warnings or partial results, summarize them and ask how to adjust using `AskUserQuestion`.

## CLI Version
1. **Non-zero exit**: Stop, report the failure, and request direction via `AskUserQuestion` before retrying.
2. **High-impact flags**: Before using `--full-auto`, `--sandbox danger-full-access`, or `--skip-git-repo-check`, ask permission via `AskUserQuestion` unless already granted.
3. **Warnings or partial results**: Summarize issues and ask how to adjust via `AskUserQuestion`.
4. **Validation checkpoint**: After each `codex exec` run, check the exit code. If non-zero, inspect stderr (`codex exec ... 2>&1`), adjust the prompt or flags, and retry with user approval.
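The exit-code checkpoint in step 4 can be sketched as a small wrapper. This is a hedged illustration, not part of the Codex CLI — the helper name is an assumption, and the commented example at the end requires `codex` to be installed:

```shell
# Hypothetical helper for the validation checkpoint: run a command,
# suppress its stderr, and surface a non-zero exit code for follow-up.
run_checked() {
  local output status
  output=$("$@" 2>/dev/null)
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "command failed with exit code $status; rerun with 2>&1 to inspect stderr" >&2
    return "$status"
  fi
  printf '%s\n' "$output"
}

# Example (requires the Codex CLI):
# run_checked codex exec --skip-git-repo-check --sandbox read-only "review src/"
```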

Requires Codex CLI v0.57.0 or later for GPT-5.2 model support. The CLI defaults to `gpt-5.2` on macOS/Linux and `gpt-5.2` on Windows. Check version: `codex --version`
## CLI Version

Use `/model` slash command within a Codex session to switch models, or configure default in `~/.codex/config.toml`.
Requires Codex CLI v0.57.0+. Check version: `codex --version`. Use `/model` within a session to switch models, or set defaults in `~/.codex/config.toml`.
2 changes: 1 addition & 1 deletion skills/crafting-effective-readmes/SKILL.md
@@ -1,6 +1,6 @@
---
name: crafting-effective-readmes
description: Use when writing or improving README files. Not all READMEs are the same — provides templates and guidance matched to your audience and project type.
description: Generate and improve README.md files — structure sections, write installation instructions, create usage examples, format code blocks, and add badges. Use when writing or improving README files, project documentation, repo descriptions, or markdown docs. Provides templates and guidance matched to your audience and project type.
---

# Crafting Effective READMEs