Feedback on your impact-analyzer skill #119

@RichardHightower

I took a look at your impact-analyzer skill and wanted to share some thoughts.

The TL;DR

You're at 75/100, which is solid C-grade territory. This is based on Anthropic's skill best practices rubric. Your strongest area is Spec Compliance (13/15) — the metadata and YAML are dialed in. Weakest area is Utility (14/20) — the skill doesn't quite deliver enough concrete value or guidance for users trying to actually use it.

What's Working Well

  • Tight spec compliance. Your YAML frontmatter is valid, naming conventions are correct (hyphen-case), and the description includes good trigger phrases like "what will break if I change X" — that's solid discoverability. (A minimal frontmatter sketch follows this list.)
  • Clean, concise structure. At 44 lines, there's zero fluff. Each section earns its place, which is the opposite of most skills I review.
  • Good flexibility in scope. Offering impact analysis across files, APIs, components, and models gives users real degrees of freedom instead of a one-size-fits-all tool.
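
For anyone reading along who hasn't seen the format: an Anthropic skill's SKILL.md opens with YAML frontmatter carrying `name` and `description`. Here's a minimal sketch of what well-triggered frontmatter looks like; the description text is illustrative, not quoted from the actual skill:

```yaml
---
name: impact-analyzer
description: >
  Analyze the blast radius of a proposed change to a file, API, component,
  or model. Use when the user asks "what will break if I change X" or
  requests an impact or dependency analysis before a refactor.
---
```

The trigger phrase sits directly in the description, which is what makes the skill discoverable when a matching request comes in.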

The Big One: Missing Sample Output

This is holding you back the most. You list output types (impact summary, risk level assessment, breaking changes, etc.) but never show what that actually looks like. A user reading your skill has no concrete picture of what they're getting.

Fix: Add a "Sample Output" section showing a real, formatted example — something like:

## Sample Output

**Target:** UserAuthService class

**Impact Summary:**
- 3 direct dependents
- 12 transitive dependents
- Affects login flow, password reset, session management

**Risk Level:** 🔴 HIGH
- Breaking changes: `authenticate()` signature change
- Migration effort: 2-3 hours per dependent

This alone could bump you +2-3 points in Utility.

Other Things Worth Fixing

  1. Add a references directory. Complex scenarios like interpreting risk levels or migration patterns deserve their own docs. Create references/risk-levels.md explaining red/yellow/green criteria and references/migration-patterns.md with common strategies; a starter sketch for the first follows this list. (+3 points potential)

  2. Error handling guidance is missing. What happens if the target isn't found? What if the analysis command fails? Add a step 4: "If target not found, suggest similar names or ask user to clarify scope." A fuller version is sketched after this list. (+2 points)

  3. Redundant trigger documentation. Your triggers are listed in metadata, "When to Use" section, AND "Example Triggers" section. Keep the strongest two placements and consolidate the rest; that saves tokens and avoids confusion. (+1 point)
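
For item 1, here's a sketch of what references/risk-levels.md could contain. The thresholds are invented for illustration; swap in whatever criteria the skill actually applies:

```markdown
# Risk Levels

- 🔴 HIGH: breaking public API changes, many transitive dependents, or
  anything touching auth/session code. Recommend a migration plan first.
- 🟡 MEDIUM: backward-compatible changes with a handful of dependents.
  Flag affected call sites for review.
- 🟢 LOW: additive or internal-only changes. Safe to proceed with
  normal review.
```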
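
And for item 2, one way the error-handling step could read (the wording is a suggestion; match the style of the skill's existing numbered steps):

```markdown
4. If the target cannot be found:
   - Search for similarly named files, classes, or endpoints and offer
     the closest matches.
   - If nothing close exists, ask the user to clarify the scope
     (file path, class name, or API route).
```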

Quick Wins

  • Add concrete sample output (biggest bang for buck)
  • Create references/ directory for edge cases and strategies
  • Add error handling step to instructions
  • Consolidate trigger documentation to eliminate overlap

Check out your skill here: [SkillzWave.ai](https://skillzwave.ai) | [SpillWave](https://spillwave.com). We have an agentic skill installer that installs skills across 14+ coding agent platforms. Check out this guide on how to improve your agentic skills.
