Feedback on your code-flow-tracer skill #123

@RichardHightower

I took a look at your code-flow-tracer skill and wanted to share some thoughts.

The TL;DR

You're at 70/100, solidly in C territory. This skill has good bones—your writing style is clean (9/10), the spec is solid (13/15), and the trigger phrases are spot-on. But it's leaning too heavily on an external plugin without documenting what you're actually offering. The real gap is between what you promise (11 analysis modes, boundary detection, flow visualization) and what you explain.

What's Working Well

  • Writing style is sharp. Your imperative voice ("Identify", "Run", "Returns") stays consistent throughout. No fluff, no marketing speak.
  • Trigger phrases are excellent. You nailed the discoverability with patterns like "trace code execution", "understand data flow", "analyze dependencies"—developers will actually find this.
  • Spec compliance is tight. Valid YAML frontmatter, correct naming convention, clear third-person description. The basics are locked down.

The Big One: Missing Reference Documentation

Here's what's holding you back most: you mention "11 analysis modes available" but never document a single one. Not even one example.

This is a PDA killer. When someone reads "11 analysis modes," they want to know what they are. Are they query types? Output formats? Analysis strategies? Right now it's marketing fluff with no substance behind it.

The fix: Create references/analysis-modes.md that documents all 11 modes with examples. Even a simple table works:

| Mode | Purpose | Example Query |
|------|---------|---------------|
| call_graph | Show function calls | /sourceatlas:flow --mode call_graph src/api |
| data_flow | Trace data movement | /sourceatlas:flow --mode data_flow src/models |
...
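
Beyond the table, each mode could get a short subsection with a sample invocation. A sketch of what one entry in references/analysis-modes.md might look like (the description text here is illustrative, not taken from the plugin):

```markdown
## call_graph

Builds a caller/callee tree for the given path.

Example:

    /sourceatlas:flow --mode call_graph src/api

Use when you need to see which functions invoke which.
```

Repeat the pattern for the other ten modes; even two sentences per mode beats a bare count.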

This single addition could net you +5 points, pushing you to 75+.

Other Things Worth Fixing

  1. Zero output examples. You list features like "Call graph visualization (ASCII tree)" and "Boundary detection" but never show what they actually look like. Add a screenshot or sample output block—people need to see the value.

  2. No error handling guidance. What happens if the query returns nothing? What if the codebase isn't indexed? Step 4 should be: "If no results, try broader query terms or verify SourceAtlas has indexed your codebase."

  3. Plugin dependency is fuzzy. Your skill requires /sourceatlas:flow to exist, but you don't mention this is a prerequisite. Add "Requires: SourceAtlas plugin installed and indexed" somewhere visible.
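
Items 2 and 3 could both be handled with a few lines in SKILL.md. A hedged sketch (section names and wording are illustrative; adapt them to your skill's existing structure):

```markdown
## Prerequisites

Requires: SourceAtlas plugin installed, with the target codebase indexed.

## Step 4: Handle empty results

If the query returns nothing:
1. Retry with broader query terms.
2. Verify SourceAtlas has indexed your codebase.
```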

Quick Wins

  • Document those 11 analysis modes (+5 points)
  • Show actual output examples (+3 points)
  • Add error handling step (+2 points)
  • Clarify plugin dependency (+1 point)

That's 11 points of easy improvements sitting on the table.


Check out your skill here: [SkillzWave.ai](https://skillzwave.ai) | [SpillWave](https://spillwave.com) We have an agentic skill installer that installs skills on 14+ coding agent platforms. Check out this guide on how to improve your agentic skills.
