I took a look at your history-analyzer skill and wanted to share some thoughts.
## The TL;DR
You're at 75/100, which lands you in solid C territory — adequate with some clear gaps to close. This is based on Anthropic's best practices for skill design. Your strongest area is Spec Compliance (13/15) — the metadata and YAML structure are clean. The weaker spots are Utility (14/20) and PDA (22/30) — mostly because you're not showing users what the outputs actually look like or how to interpret them.
## What's Working Well
- Clean metadata: Your frontmatter is valid YAML with all required fields, and the description includes specific trigger terms like 'hotspots', 'bus factor', and 'knowledge silos', which is good for discoverability (a sketch follows this list).
- Focused scope: No fluff here. Each section delivers value without padding. The skill does one thing well: analyzing git history.
- Good trigger coverage: You've got trigger phrases in the description and a solid "When to Use" section that helps Claude activate this at the right moments.
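For reference, frontmatter in that shape looks roughly like this. It's reconstructed from the trigger terms quoted above, not copied from your actual file:

```markdown
---
name: history-analyzer
description: Analyze git history for hotspots, bus factor, coupling, and knowledge silos. Use when the request involves commit patterns, code churn, or contributor distribution.
---
```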
## The Big One: Missing Output Examples
This is what's holding you back the most. You describe what the user gets — "Hotspots: Files with most changes (complexity indicators)" — but you never show what that actually looks like. When someone runs /sourceatlas:history, what does the output format look like? Are we talking JSON? A table? A list?
Add a "Sample Output" section showing real results. Something like:
```markdown
## Sample Output

When analyzing a codebase, you'll get results structured like:

**Hotspots:**
- src/components/Dashboard.tsx (47 commits, 12 contributors)
- src/api/handlers.ts (38 commits, 8 contributors)

**Knowledge Silos:**
- Feature X: Only contributor is alice@company.com
```
This alone could add +2 points and make the skill way more practical.
## Other Things Worth Fixing
- No interpretation guidance: You say the skill "returns hotspots, coupling analysis, and contributor distribution" but don't explain what to do with it. Add a section like "Interpreting Results" with thresholds, e.g. "If a file has 50+ commits by one person, that's a knowledge silo worth addressing." A sketch follows this list. (+1-2 points)
- Missing prerequisites: You jump straight to "Run `/sourceatlas:history`" but don't mention that the project needs to be a git repository, or that 3+ months of commit history makes the analysis meaningful. State these upfront; also sketched below. (+1 point)
- Second-person phrasing: Sections say "Trigger this skill when the user:" instead of direct imperative language. Reframe as "Activate when the request involves:" to remove the redundant "user" references. (+1 point)
- No validation steps: There's no guidance on checking whether results are reasonable or what high/low values mean. Add a quick "How to Validate Results" checklist, sketched below as well.
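To make those concrete: for the interpretation section, something along these lines would work. The thresholds are illustrative, not calibrated against real repositories:

```markdown
## Interpreting Results

- Hotspot with 50+ commits: likely churn or complexity; consider refactoring.
- File with 50+ commits from a single author: knowledge silo; schedule a walkthrough or pairing session.
- Bus factor of 1 on a critical path: document it or spread ownership before it bites.
```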
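The prerequisites block can be short; the history threshold here is just the 3-month figure suggested above:

```markdown
## Prerequisites

- The target directory must be a git repository with commit history.
- Results are most meaningful with 3+ months of history and multiple contributors.
```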
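And the validation checklist, sketched with generic git sanity checks; adjust them to whatever the analysis actually reports:

```markdown
## How to Validate Results

- [ ] Hotspot counts roughly match `git log --oneline -- <file> | wc -l` for a spot-checked file.
- [ ] Contributor counts exclude bot accounts and duplicate author emails.
- [ ] Flagged silos are confirmed against `git shortlog -sn -- <path>`.
```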
## Quick Wins
- Add sample output block → +2 points
- Write "Interpreting Results" section → +2 points
- Specify git prerequisites → +1 point
- Fix second-person language → +1 point
That's +6 points of focused work, taking you to 81/100 (B territory) with some straightforward additions.
Check out your skill here: [SkillzWave.ai](https://skillzwave.ai) | [SpillWave](https://spillwave.com). We have an agentic skill installer that installs skills across 14+ coding agent platforms. See this guide on how to improve your agentic skills.