
Agentic Skills


A curated collection of skills - reusable instruction sets that teach AI agents how to follow specific workflows, conventions, and standards. Designed to work with any agent that supports the skills ecosystem: GitHub Copilot, Claude Code, Cursor, Codex, OpenCode, and many more.

What are skills?

Skills are Markdown files that an AI agent reads before responding. When a skill is active, the agent follows the rules it contains - consistently, across any tool or model that supports them. They're a lightweight way to encode your team's conventions once and apply them everywhere.
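
As an illustration, a minimal skill file might look like the following hypothetical SKILL.md. The name/description frontmatter pair shown here is the common convention for agent skills; real skills in this repository carry additional metadata (such as eval fixture declarations), so treat this as a sketch, not the exact schema:

```markdown
---
name: git-visual-commits
description: AI-driven git commit workflow with gitmoji-first subjects.
---

# git-visual-commits

When the user asks to commit, group changes by semantic intent,
pick a gitmoji category, and include a commit body explaining why.
```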

One repo-wide convention matters especially for scaffolding skills: prefer dynamic defaults over hardcoded values whenever a reliable source exists. Derive time-sensitive or environment-sensitive values from git metadata, repo state, or official machine-readable feeds so skills age gracefully instead of drifting.

Another repo rule is intentionally strict: every repo-managed skill ships with its own evals/evals.json, and those evals are run per skill from a temp workspace instead of from inside this repository.

Another part of that workflow is now mandatory too: when a repo-managed skill is created or modified, the author must run both with_skill and without_skill comparison executions from a temp workspace, aggregate the results into benchmark.json, and open eval-viewer/generate_review.py from the installed Anthropic skill-creator copy (typically under ~/.agents/skills/skill-creator/ or ~/.claude/skills/skill-creator/) so a human can review both the Outputs and Benchmark views before sign-off. For new skills the baseline is without_skill; for existing skills it can be without_skill or the previous/original skill version, matching the skill-creator benchmark flow.
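
The directory shape that workflow produces can be sketched as follows. This is only an illustration with placeholder eval names and grading content; the real aggregation into benchmark.json is performed by the installed skill-creator tooling, not by this snippet:

```shell
# Sketch of the expected benchmark layout: one grading.json per run,
# laid out as iteration-N/eval-name/{config}/run-N/. All names below
# are placeholders; a temp directory stands in for the temp workspace.
root="$(mktemp -d)"
for config in with_skill without_skill; do
  for run in run-1 run-2; do
    dir="$root/iteration-1/sample-eval/$config/$run"
    mkdir -p "$dir"
    printf '{"summary": {"pass": true}}\n' > "$dir/grading.json"
  done
done
# Four graded runs total: 2 configs x 2 runs each.
find "$root" -name grading.json | wc -l
```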

One more consistency rule matters for form-driven skills: native input fields are treated as a host feature, not something a model can rely on. Skills in this repo must stay usable with or without UI widgets, and must fall back to the same deterministic one-field-at-a-time flow when the host only supports plain chat.

Validation follows the same philosophy: run scripts/validate-skill-templates.ps1 locally for the fast feedback loop, and let GitHub Actions rerun that same script on pull requests as the safety net. That validator also checks skill frontmatter metadata such as per-skill evals/evals.json files, optional eval fixture paths declared through files, and the 1024-character YAML description limit; it does not replace the paired benchmark review workflow.
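
For reference, a pull-request safety net along those lines could be wired like this hypothetical workflow fragment. The script path matches the one above, but the trigger, runner image, and action versions are assumptions, not necessarily what this repository actually uses:

```yaml
name: validate-skill-templates
on: [pull_request]
jobs:
  validate:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the same validator used locally
        run: pwsh -File scripts/validate-skill-templates.ps1
```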

Install a skill

Install any skill directly from this repository with a single command:

npx skills add https://github.com/codebeltnet/agentic --skill <skill-name>

For example:

npx skills add https://github.com/codebeltnet/agentic --skill git-visual-commits

Then activate it in your agent. For example, in GitHub Copilot CLI:

Use the skill tool to invoke the "<skill-name>" skill.

Always-on skills

Depending on the agent runtime, skills installed via npx skills add may live in ~/.claude/skills/ and/or ~/.agents/skills/. Treat both as personal global skill folders: if you use both toolchains, keep repo-authored skills mirrored between them so each agent sees the same version. Either way, installed skills are automatically loaded in every session - no manual invocation needed. The agent reads the skill's description and activates it when relevant (e.g. you say "commit this" and the git-visual-commits skill kicks in).
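
One way to keep the two folders in sync is a simple copy loop along these lines. This is a sketch, not part of the skills CLI; it is demonstrated against a throwaway directory instead of your real home folder:

```shell
# Sketch: mirror repo-authored skills from ~/.claude/skills/ to
# ~/.agents/skills/ so both toolchains load the same version.
# A temp directory stands in for $HOME so nothing real is touched.
home="$(mktemp -d)"
src="$home/.claude/skills"
dst="$home/.agents/skills"
mkdir -p "$src/git-visual-commits" "$dst"
printf '# demo skill\n' > "$src/git-visual-commits/SKILL.md"
# Replace each destination copy with the current source copy.
for skill in "$src"/*/; do
  name="$(basename "$skill")"
  rm -rf "$dst/$name"
  cp -R "$src/$name" "$dst/$name"
done
ls "$dst"
```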

If you want a bundle of skills always available, just install them all:

npx skills add https://github.com/codebeltnet/agentic --skill git-visual-commits
npx skills add https://github.com/codebeltnet/agentic --skill git-keep-a-changelog
npx skills add https://github.com/codebeltnet/agentic --skill git-nuget-release-notes
npx skills add https://github.com/codebeltnet/agentic --skill git-nuget-readme
npx skills add https://github.com/codebeltnet/agentic --skill git-visual-squash-summary
npx skills add https://github.com/codebeltnet/agentic --skill skill-creator-agnostic
npx skills add https://github.com/codebeltnet/agentic --skill markdown-illustrator
npx skills add https://github.com/codebeltnet/agentic --skill trunk-first-repo
npx skills add https://github.com/codebeltnet/agentic --skill dotnet-strong-name-signing
# npx skills add https://github.com/codebeltnet/agentic --skill another-skill

Scoping options

| Location | Scope | When to use |
| --- | --- | --- |
| ~/.agents/skills/ | All sessions, all projects | Global skills for agents that read the shared ~/.agents install |
| ~/.claude/skills/ | All sessions, all projects | Your personal defaults - always on everywhere |
| .claude/skills/ (in a repo) | Project-scoped | Shared team conventions for a specific codebase |
| .github/skills/ (in a repo) | GitHub Copilot / VS Code | When your team uses Copilot agent mode in the IDE |

Tip: You can mix scopes. Install your personal favorites globally, and add project-specific skills to the repo so your whole team gets them. If you use both ~/.claude/skills/ and ~/.agents/skills/, mirror repo-authored skills to both so sessions stay consistent.

Available Skills

| Skill | Description |
| --- | --- |
| git-visual-commits | AI-driven git commit workflow with emoji (gitmoji-first), conventional prefixes, and three identity modes: bot-attributed (git bot commit), human-attributed (git commit), and collaborative (git our commit - agent analyzes authorship, human picks attribution). Includes commit body by default (opt out with no-body), semantic intent splitting, and auto-approval mode (yolo / auto). The agent does all the work either way. Stack-agnostic. |
| git-keep-a-changelog | Git-aware Keep a Changelog companion that creates or updates CHANGELOG.md from the current branch by default. Reads full commit subjects and bodies plus the net diff, infers a release heading from a branch version hint like v0.3.0/... when available, creates a compliant changelog if the file does not exist yet, writes a required SemVer-aware release highlight, preserves natural prose wrapping, and curates Added / Changed / Fixed style sections instead of dumping raw commit logs. |
| git-nuget-release-notes | Git-aware NuGet release-notes companion for .NET repos that keep cumulative .nuget/{ProjectName}/PackageReleaseNotes.txt files. Discovers packable src/ projects, resolves concrete package version and availability, creates missing files when needed, and writes per-package ALM / Breaking Changes / New Features / Improvements / Bug Fixes style notes from full commit context plus the net diff instead of dumping commit subjects. |
| git-nuget-readme | Git-aware NuGet README companion for .NET repos that advertise a package from src/. Resolves the real packable project the README should sell, combines git history with actual package metadata, source capabilities, and relevant tests when feasible, preserves honest badge/docs/contributing sections, and writes a forthcoming, adoption-friendly README.md with repo-derived branding, clear value, install, framework-support, and quick-start guidance. |
| git-visual-squash-summary | Non-mutating grouped-summary companion to git-visual-commits. Turns noisy commit stacks into a curated set of compact summary lines for PR or squash contexts, preserving technical identifiers, merging overlap, dropping low-signal noise, highlighting distinct meaningful efforts, and avoiding changelog-style wording or unsupported claims. |
| skill-creator-agnostic | Runner-agnostic overlay for Anthropic skill-creator. Adds repo and environment guardrails for skill authoring and benchmarking: temp-workspace isolation, iteration-N/eval-name/{config}/run-N/ benchmark layout, valid grading.json summaries, generated benchmark.json, honest MEASURED vs SIMULATED labeling, and sync/README discipline for repo-managed skills. |
| markdown-illustrator | Reads a markdown file and answers directly in chat with one document-wide Visual Brief plus one compiled prompt. Infers a compact visual strategy by default, keeps follow-up questions near zero, and only branches when the user explicitly asks for added specificity. |
| dotnet-new-lib-slnx | Scaffold a new .NET NuGet library solution following codebeltnet engineering conventions. Dynamic defaults for TFM/repository metadata, latest-stable NuGet package resolution, tuning projects plus a tooling-based benchmark runner, TFM-aware test environments, strong-name signing, NuGet packaging, DocFX documentation, CI/CD pipeline, and code quality tooling. |
| dotnet-new-app-slnx | Scaffold a new .NET standalone application solution following codebeltnet engineering conventions. Supports Console, Web, and Worker host families with Startup or Minimal hosting patterns; Web expands into Empty Web, Web API, MVC, or Web App / Razor, plus functional tests and a simplified CI pipeline. |
| trunk-first-repo | Initialize a git repository following scaled trunk-based development. Seeds an empty main branch and creates a versioned feature branch (v0.1.0/init), enforcing a PR-first workflow where content only reaches main through peer-reviewed pull requests. |
| dotnet-strong-name-signing | Generate a strong name key (.snk) file for signing .NET assemblies using pure .NET cryptography - no Visual Studio Developer PowerShell or sn.exe required. Works in any terminal. Defaults to 1024-bit RSA (matching sn.exe), with 2048 and 4096 available as options. |

Copyable Install Commands

If your Markdown viewer supports code-block copy buttons, each command below should be directly copyable.

git-visual-commits

npx skills add https://github.com/codebeltnet/agentic --skill git-visual-commits

git-keep-a-changelog

npx skills add https://github.com/codebeltnet/agentic --skill git-keep-a-changelog

git-nuget-release-notes

npx skills add https://github.com/codebeltnet/agentic --skill git-nuget-release-notes

git-nuget-readme

npx skills add https://github.com/codebeltnet/agentic --skill git-nuget-readme

git-visual-squash-summary

npx skills add https://github.com/codebeltnet/agentic --skill git-visual-squash-summary

skill-creator-agnostic

npx skills add https://github.com/codebeltnet/agentic --skill skill-creator-agnostic

markdown-illustrator

npx skills add https://github.com/codebeltnet/agentic --skill markdown-illustrator

dotnet-new-lib-slnx

npx skills add https://github.com/codebeltnet/agentic --skill dotnet-new-lib-slnx

dotnet-new-app-slnx

npx skills add https://github.com/codebeltnet/agentic --skill dotnet-new-app-slnx

trunk-first-repo

npx skills add https://github.com/codebeltnet/agentic --skill trunk-first-repo

dotnet-strong-name-signing

npx skills add https://github.com/codebeltnet/agentic --skill dotnet-strong-name-signing

Why git-visual-commits?

Commit messages are the most-read documentation in any codebase - yet they're usually an afterthought. "fix stuff", "wip", "address PR feedback" tell you nothing six months later. Writing good commits takes discipline, and when you're in flow, it's the first thing that slips.

git-visual-commits handles the entire commit workflow - staging, diffing, crafting the message, choosing the right emoji - so every commit is consistent and meaningful without breaking your flow. Whether the agent authors the commit (git bot commit), you do (git commit), or you worked on it together (git our commit), the quality is the same.

  • Gitmoji-first - visual commit categories that are scannable at a glance
  • Conventional prefixes - init, content, style, fix, refactor, and docs as fallback when gitmoji isn't available
  • Three identity modes - bot, human, or collaborative - the agent does the work either way; you choose who gets credit
  • Identity lock stays honest - git bot commit means bot attribution, not just "AI did the work", and the flow now verifies the resulting author after commit
  • Auto-approval - say "yolo" or "auto" to skip the review gate when you trust the agent's judgment
  • Yolo skips confirmation, not discipline - auto-approval still requires semantic grouping, mixed-scope checks, and a visible commit plan summary before committing
  • Full worktree by default - plain git bot commit yolo means "commit everything currently in git status and group it correctly", not "guess a narrower slice"
  • Commit body by default - every commit explains why, not just what - opt out with "tmi" or "no-body"
  • Commit bodies are verified after write - the workflow now checks the stored commit body so literal escape sequences like \n do not leak into history
  • Short bodies stay readable - the workflow no longer hard-wraps short commit bodies at 72 characters, treats mid-sentence wrapping as a verification failure, and repairs the commit instead of leaving noisy prose in history
  • Repo capability additions stay explicit - adding a brand-new skill is grouped separately from refactoring an existing skill to support it
  • Shared wording rules stay in lockstep - the duplicated commit-language.md reference is kept byte-for-byte identical across both git-visual skills and checked locally plus in CI
  • Semantic intent splitting - groups commits by rationale, not just file type - config and test logic are always separate
  • Umbrella commits are rejected - mixed diffs spanning skill instructions, templates, validators, and repo docs must be split into separate commits instead of bundled into one blob
  • Stack-agnostic - works with any language, framework, or project type
  • Squash-and-merge friendly - structured commits make PR squash summaries read like a changelog
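
As a concrete illustration of the resulting shape - a gitmoji-first subject with one of the conventional prefixes and a why-focused body - the following sketch builds a throwaway repo and commits in that style. The file, subject, and body are invented for the example, and the prefix choice is an assumption about how the skill would categorize this change:

```shell
# Hypothetical example of a commit in the style the skill produces:
# gitmoji-first subject, conventional prefix, body explaining why.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
printf 'hello\n' > greeting.txt
git add greeting.txt
git commit -q \
  -m "✨ content: add greeting file" \
  -m "Seed the repo with a first artifact so later refactors have a baseline."
git log -1 --format=%s
```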

Why git-visual-squash-summary?

Sometimes the history is already written and the only thing you need is the final grouped summary. A long branch with fixups, rename follow-ups, review nits, and repeated attempts often contains a few real change themes buried inside a messy chronological story. That is where git-visual-squash-summary fits: it reads the real history and diff, then compresses them into a small set of truthful grouped lines.

  • Same visual language - reuses the same prefix and emoji rules as git-visual-commits
  • Grouped-lines only - returns compact grouped lines only, not a title or body
  • Non-mutating by design - drafts the wording only and does not touch git state
  • Distinct efforts stay distinct - preserves meaningful change groups instead of forcing one umbrella line
  • Intent over chronology - collapses noisy commit stacks into the retained grouped effort
  • Low-signal noise gets dropped - typo-only and trivial fixup churn do not deserve their own lines
  • Identifier-safe wording - preserves technical names, paths, flags, and types where possible
  • Readable in GitHub and terminals - optimized for compact PR and squash-summary views
  • Strict 72-char lines - every summary line stays compact and scannable
  • Not a changelog - avoids release-note phrasing and commit-subject dumps
  • No unsupported claims - summarizes only what the inspected diff can justify

Why git-keep-a-changelog?

Writing CHANGELOG.md well is harder than it looks. Raw commit subjects are too noisy, PR titles often miss migration context, and release notes get much better when the writer actually reads the commit bodies and understands the net diff. That is where git-keep-a-changelog fits: it turns the current branch into a curated Keep a Changelog entry and creates or updates the file directly for review.

  • Keep a Changelog first - writes Added, Changed, Deprecated, Removed, Fixed, and Security sections in the expected style
  • Full-commit context - reads complete commit messages and the net diff before writing
  • Version-aware by branch - uses a branch prefix like v0.3.0/... as the release heading hint when present
  • SemVer-aware highlight - always writes a short release TL;DR that explicitly says major, minor, or patch
  • Creates the file when needed - seeds a compliant CHANGELOG.md if the repo does not have one yet
  • Natural prose - preserves human-readable line breaks without any fixed-width wrapping target
  • Predictable bullet punctuation - bullets end with a comma, and the last bullet in each section ends with a period
  • Direct file edit - creates or updates CHANGELOG.md directly, then stops for human review
  • Compare-link aware - can update bottom-of-file compare links when a concrete release heading is added
  • Not a commit dump - curates the release story instead of copying git log output into Markdown
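
Put together, an entry in that style might look like this hypothetical fragment. The version, date, and bullets are invented for illustration; note the comma-terminated bullets with a period on the last one:

```markdown
## [0.3.0] - 2026-01-15

Minor release: adds changelog tooling without breaking existing workflows.

### Added

- Curated CHANGELOG.md generation from the current branch,
- Compare-link maintenance for concrete release headings.
```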

Why git-nuget-release-notes?

Repo-wide changelogs are useful, but NuGet packages often need package-scoped release notes that match the package actually being published. In codebelt-style repos, that means cumulative .nuget/{ProjectName}/PackageReleaseNotes.txt files with a very specific shape: concrete version and availability lines, # ALM first, and only the sections that the package really earned.

git-nuget-release-notes reads the actual git history and net diff per packable src/ project, resolves the package version and target framework availability, then updates the package-note files directly for review.

  • Per-package, not repo-wide - writes one truthful release block per publishable assembly/package
  • Concrete package metadata - resolves Version: and Availability: from the branch/project instead of inventing placeholders
  • Current codebelt format - follows the established ALM, Breaking Changes, New Features, Improvements, Bug Fixes, and optional References blueprint
  • Missing-file aware - can create .nuget/{ProjectName}/PackageReleaseNotes.txt when a packable project should be represented
  • History-aware - preserves cumulative newest-first package history instead of overwriting older entries
  • Not a commit dump - uses full commit bodies plus the net diff and avoids line-by-line subject replay

Why git-nuget-readme?

Choosing a NuGet package often happens fast: a developer lands on the README, scans the first screen, checks whether the package fits the problem, and looks for install guidance, supported frameworks, docs, and a quick example. If those signals are vague or buried, the package loses the moment even when the code is good.

git-nuget-readme uses the actual git history, project metadata, and source-level capabilities of the advertised package to refresh the README into something that is both truthful and easier to adopt.

  • Package-first README focus - centers the README on the real packable project the repo is advertising
  • Devex-led structure - pulls value proposition, installation, framework support, docs, and quick-start guidance closer to the top
  • Grounded sales copy - improves the package pitch without inventing features, benchmarks, badges, or docs URLs
  • Source-backed examples - prefers real namespaces, package IDs, capability areas, and test-backed usage hints from the codebase
  • Repo-derived identity - uses the current repo's own naming and branding conventions instead of importing a by <brand> pattern from another package family
  • Preserve the good parts - keeps accurate badges, docs links, contributing guidance, and license sections when they are already working
  • Not a changelog in disguise - uses git history for context but writes adoption-oriented README copy instead of replaying commit subjects

Why skill-creator-agnostic?

Anthropic's skill-creator is an excellent base workflow, but the day-to-day friction usually comes from the environment around it: different runners, Windows/PowerShell encoding traps, benchmark layout mistakes, and the temptation to present a synthetic pipeline check as if it were a measured model benchmark.

skill-creator-agnostic keeps the upstream workflow intact and adds the parts teams actually trip over when they want the same skill to hold up across Codex, GitHub Copilot, Opus, and similar agents.

  • Overlay, not fork - treats Anthropic skill-creator as the base and layers repo/runtime guardrails on top
  • Runner-agnostic by design - chooses from available execution capability instead of assuming one vendor CLI
  • Benchmark-contract aware - enforces iteration-N/eval-name/{config}/run-N/, valid grading.json summaries, and generated benchmark.json
  • Tool-path explicit - points authors to the installed Anthropic skill-creator copy that provides scripts/aggregate_benchmark.py and eval-viewer/generate_review.py
  • Honest benchmark modes - keeps MEASURED and SIMULATED runs clearly separated so pipeline validation never masquerades as model quality
  • PowerShell-safe - calls out UTF-8 no BOM, stable counting, provider-path normalization, and other Windows-specific pitfalls
  • Repo-managed discipline - keeps per-skill evals, local-install sync, and README updates in scope for first-party skills

Why markdown-illustrator?

Markdown-heavy documents often need one image that sells the whole idea fast: a conference opener, article cover, pitch-slide hero, or visual hook that makes the audience want to keep reading. The problem with many prompt workflows is that they branch immediately into model menus, theme toggles, and style comparisons before the document has even been understood.

markdown-illustrator keeps the job focused. It reads the markdown, distills the whole document into a visualization-first Visual Brief, silently infers a compact visual strategy from the request, and turns that shared brief into one compiled prompt returned directly in chat. If you explicitly ask for a named model or a narrower aesthetic, it honors that request without dragging you through a selection workflow.

  • Visual-Brief first - distills the document into subject, narrative, visual opportunity, mood, and must-show elements before prompting
  • One shared Visual Brief, one committed result - optimized for covers, keynote slides, and "capture the essence" illustration requests where decisiveness matters more than variants
  • Prompt-compiler behavior - translates abstract meaning into concrete visual structure, readable composition, physical medium cues, and explicit failure-mode control
  • Infer, don't interrogate - defaults to a strong non-interactive strategy instead of turning intent, treatment, abstraction, and label density into follow-up questions
  • Hero-first defaults - when the request is underspecified, the skill defaults toward hero + cinematic editorial + concept-led + minimal labels + 16:9 (or 3:2 when it composes better) rather than a dry explainer graphic
  • Cross-diffuser by design - prefers strong natural-language prompting over vendor-specific branching unless the user asks
  • Text-safe prompting - steers away from dense embedded copy, fake words, and fragile readable text unless very short labels are truly necessary
  • Anti-repetition by default - avoids repeated labels, bullets, steps, callouts, mirrored panels, and echoed document fragments so the image reads like one authoritative artifact rather than many near-duplicates
  • No selection detours - skips file creation, model-family, style, theme, and scope menus so the workflow stays fast and focused
  • User steerable when needed - the skill stays minimal, but users can still explicitly steer toward directions like whiteboard, blackboard, isometric, or blueprint

Inferred Defaults For markdown-illustrator

The skill should not ask the user to configure these unless the request is genuinely ambiguous in a way that affects correctness. It infers a compact strategy and proceeds.

  • Intent - infer hero, digest, diagram, or cover from the user's phrasing; if there is no stronger signal, default to hero
  • Visual treatment - preserve explicit styles such as whiteboard, blackboard, scientific, hand-drawn, isometric, or minimal; otherwise default to cinematic editorial
  • Abstraction level - use concept-led for spectacle and interest-raising requests, balanced for explanatory or onboarding requests, and literal only when the user explicitly asks for strict fidelity
  • Label density - default to minimal, move toward none for hero or infographic-first requests, and use academic only for scientific or textbook-style requests
  • Aspect ratio - honor explicit ratios, otherwise default to a wide frame: prefer 16:9, use 3:2 when the composition is more editorial or object-centered, and avoid square by default

Good Trigger Examples For markdown-illustrator

These phrasings reliably signal the skill's intent: a markdown file goes in, and one document-wide visual direction comes back.

  • Use markdown-illustrator on SKILL.md and return the Visual Brief plus one final prompt.
  • Read roadmap.md and create one strong visual direction that captures the whole document.
  • Create a visual digest for onboarding-notes.md.
  • Turn launch-plan.md into a keynote opener image prompt.
  • Use markdown-illustrator on systems.md and keep it blackboard style.
  • Turn product-brief.md into a single Flux-ready hero-image prompt.

Common Visual Directions For markdown-illustrator

These are reference directions for users, not built-in branches in the skill. If you want one of them, ask for it explicitly in the prompt.

whiteboard

  • Pros: approachable, collaborative, strong for brainstorming, product planning, workshops, and messy human energy
  • Cons: can feel too casual or cluttered for polished keynote or editorial uses
  • Guidance: ask for this when the document is about ideation, strategy sessions, or product thinking

blackboard

  • Pros: dramatic, intellectual, layered, great for systems thinking and technical storytelling
  • Cons: can become visually noisy if the source material is already dense
  • Guidance: ask for this when the document is about architecture, strategy, layered concepts, or technical explanation

isometric

  • Pros: excellent for platforms, ecosystems, infrastructure, and layered technical worlds
  • Cons: weaker for abstract or emotional narratives that need symbolism more than structure
  • Guidance: ask for this when the document describes systems, services, stacks, networks, or architectural relationships

blueprint

  • Pros: precise, engineered, authoritative, strong for protocols, design intent, and technical rigor
  • Cons: can feel cold or overly schematic for marketing or human-centered subjects
  • Guidance: ask for this when the document should feel exact, technical, and intentionally designed

editorial illustration

  • Pros: expressive, conceptual, and strong for article covers, essays, and symbolic storytelling
  • Cons: less literal, so it may underperform when the image must explain concrete architecture
  • Guidance: ask for this when the document needs metaphor, mood, or a polished publication-style visual

cinematic

  • Pros: emotional, aspirational, high-impact, strong for keynote heroes and launch moments
  • Cons: can become too grand if the source material really needs clarity over spectacle
  • Guidance: ask for this when the image should feel premium, dramatic, and audience-grabbing

minimal poster

  • Pros: high signal-to-noise, memorable, clean, and strong for one dominant idea
  • Cons: can oversimplify documents with important operational or technical nuance
  • Guidance: ask for this when the document has one central idea that can be reduced to a powerful symbol

Why dotnet-new-lib-slnx and dotnet-new-app-slnx?

Starting a new .NET solution "from scratch" usually means copying from your last project, deleting half of it, and spending an hour wiring up CI, MSBuild props, versioning, and code quality tooling. Every new repo drifts slightly from the last one. Six months later, no two solutions look the same.

dotnet-new-lib-slnx and dotnet-new-app-slnx encode the full codebeltnet convention into repeatable scaffolds - from Directory.Build.props to CI pipelines to DocFX. Each skill is focused on its domain: libraries get multi-target frameworks, signing, and NuGet packaging; apps get host family selection, a conditional web-variant choice when needed, hosting patterns, and functional tests.

Note

These scaffolds are not speculative starter kits. They capture conventions already exercised across Codebelt repositories and turn them into a repeatable methodology for new solutions.

  • Convention over configuration - opinionated defaults that match real production setups
  • Focused skills - library and app concerns are fully separated, no variant confusion
  • Lower cognitive load - the library scaffold defaults the main project name from the solution name, pre-fills the repository URL from the repo root folder name, and lets the package website reuse that value unless you override it
  • Default-friendly prompts - when a scaffold form already shows a recommended value such as root_namespace = solution_name, leaving the field blank should accept that default instead of sending the agent into a follow-up loop
  • Structured-input fallback stays consistent - when a host does not render native form widgets, the scaffold skills now fall back to a deterministic one-field-at-a-time plain-text format instead of improvising the UX
  • Explicit host prompts stay on rails - if you already asked for Console, Worker, Web API, MVC, or Razor, the scaffold flow should preselect that host choice and move straight to the remaining fields instead of asking you to restate it
  • Modern TFM choices - the .NET scaffold skills compute active target framework quick-picks from the official .NET releases index, offering every supported non-preview LTS and STS channel plus an expanded multi-target preset where applicable
  • Latest stable dependencies - Directory.Packages.props is generated from NuGet.org package metadata at scaffold time instead of carrying stale hardcoded NuGet package versions
  • Central package management stays authoritative - app scaffolds keep NuGet versions in Directory.Packages.props and do not "repair" restore issues by inlining versions into generated project files
  • Deterministic package resolution beats memory - the app scaffold now ships a NuGet resolver script so agents can fetch current per-package versions instead of guessing from stale remembered examples
  • Resolver script is non-interactive by default - the app package resolver now defaults to the skill's own Directory.Packages.props, so agents do not have to remember an extra template path argument during normal scaffolds
  • Library-only package set - the library scaffold no longer carries leftover app/bootstrapper package placeholders that do not belong in class library templates
  • Structured benchmarking - the scaffold now keeps actual benchmark projects under tuning/, generates a solution-level tooling/benchmark-runner host with BenchmarkDotNet jobs derived from the selected TFMs, targets the runner itself at the highest selected supported runtime, and writes output to reports/
  • Hidden shared assets preserved - recursive scaffold copy includes dot-folders such as .bot/, so the generated repo gets the real .bot/README.md template instead of an improvised placeholder
  • UTF-8 by default - the scaffold explicitly tells generating agents to preserve UTF-8 when copying and writing text templates, matching the generated .editorconfig
  • Explicit encoding guidance - rewritten templates now call for byte-preserving copy when possible, explicit UTF-8 APIs when not, and a quick mojibake sanity check before scaffolding is considered done
  • TFM-aware test runners - generated testenvironments.json Docker entries now follow the selected target frameworks instead of using a hardcoded runner tag
  • Shared test environments are required - testenvironments.json is part of the app scaffold contract and should never be silently skipped
  • Source-backed runner tags - Docker runner tags can be validated against the codebeltnet/ubuntu-testrunner Docker Hub tags feed instead of being assumed
  • Root-aware Dependabot - the generated repo watches / for NuGet updates so central package management keeps moving after day one
  • App scaffolds resolve package versions per dependency - generated Directory.Packages.props files use package-specific placeholders that are resolved from NuGet.org instead of leaking a generic {LATEST} token
  • ASP.NET package versions stay TFM-aligned - net9.0 app scaffolds resolve framework-aligned ASP.NET packages to the latest stable 9.x line instead of accidentally pulling incompatible 10.x packages
  • Web-family scaffolds stay explicit - generic Web requests expand into Empty Web, Web API, MVC, or Web App / Razor, with variant-specific project suffixes like .Web, .Api, .Mvc, and .WebApp
  • Current-folder scaffolding - both .NET scaffold skills generate directly into the folder you are already in unless you explicitly ask for a nested solution folder
  • PascalCase solution filenames - generated .slnx files keep the user-facing solution/product name instead of silently lowercasing it
  • Required artifacts stay required - the app scaffold treats .slnx, testenvironments.json, Directory.Packages.props, and the per-host src/ + test/ projects as non-optional outputs, even for single-host scaffolds
  • Shared scaffold assets are copied as a complete set - app scaffolds preserve the full assets/shared/ inventory, including dotfiles, .github/, and .bot/, instead of cherry-picking only the files that seem important
  • Target framework stays centralized - generated app and test projects inherit TargetFramework from the root Directory.Build.props instead of patching individual .csproj files
  • MinVer stays wired in - .NET scaffolds preserve MinVer-based semantic versioning from git tags as a repo-level invariant
  • MinVer bootstrap warnings are expected - in non-git or untagged folders, an initial 0.0.0-alpha.0 style version is expected until the repo is initialized and tagged
  • Worker scaffolds build immediately - Worker apps now include a starter Worker.cs template so the generated project compiles before custom logic is added
  • Bootstrapper imports are explicit — app templates now include the required Codebelt.Bootstrapper.* namespace imports instead of assuming implicit usings will supply them
  • Clear NuGet metadata mapping — prompts and placeholders line up with package metadata such as PackageProjectUrl
  • Solo-friendly defaults — company/publisher metadata can default straight from the author name for individual maintainers
  • Complete from the start — CI pipeline, code quality, test infrastructure, and governance docs on day one
  • Template-driven — real files with placeholders in assets/, not generated strings, so you can inspect and evolve them
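Several of the bullets above turn on the same step: picking the latest stable, TFM-aligned version for a package from NuGet.org rather than guessing. A minimal sketch of that filtering logic, assuming the version list has already been fetched from the flat-container index (the function name and sample data are illustrative, not the skill's actual resolver script):

```python
def latest_stable_for_major(versions, major):
    """Pick the highest stable version on a given major line, e.g. 9.x for net9.0.

    `versions` is the list served by
    https://api.nuget.org/v3-flatcontainer/<package-id>/index.json;
    pre-release versions (containing '-') are skipped.
    """
    def parse(v):
        # numeric compare, so "9.0.10" sorts above "9.0.9"
        return tuple(int(p) for p in v.split("."))

    candidates = [
        v for v in versions
        if "-" not in v and v.split(".")[0] == str(major)
    ]
    return max(candidates, key=parse, default=None)


tags = ["8.0.11", "9.0.0", "9.0.4", "9.0.4-preview.1", "10.0.0-rc.1"]
print(latest_stable_for_major(tags, 9))  # → 9.0.4
```

Pinning the major line is what keeps a net9.0 scaffold on the 9.x packages even after a 10.x pre-release appears in the feed.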

Why dotnet-strong-name-signing?

Generating a .snk file traditionally requires sn.exe, which is only available in the Visual Studio Developer PowerShell — a common pain point for developers using VS Code, Rider, or plain terminals. This skill uses RSACryptoServiceProvider from the .NET runtime itself, so it works in any PowerShell or terminal without special tooling.

  • No sn.exe dependency — uses pure .NET crypto available in any PowerShell session
  • Matches sn.exe defaults — 1024-bit RSA by default, with 2048 and 4096 as options
  • Cross-platform — works on Windows, macOS, and Linux with PowerShell 7+ or .NET runtime
  • Identity, not security — Microsoft's guidance is clear: strong names are about assembly identity, not cryptographic security

Why trunk-first?

Most repositories start with git init followed by committing everything directly to main. This works — until someone force-pushes to main, or a half-finished feature lands without review. By the time you add branch protection, the history is already messy.

trunk-first-repo flips this: main starts empty and stays clean from the very first commit. Every piece of content enters through a pull request. This gives you:

  • Review from day one — no "we'll add branch protection later" that never happens
  • Clean, meaningful history — main tells the story of reviewed, approved changes
  • Version-aware branches — v0.0.1/spike-auth vs v1.0.0/release-prep signals project maturity at a glance
  • Zero-friction setup — one skill invocation, not a 10-step checklist
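The first moves the skill automates can be sketched by hand with plain git; the branch name, commit messages, and identity below are illustrative, not the skill's exact steps:

```shell
set -eu

repo="$(mktemp -d)"
cd "$repo"

# main starts with a single empty commit, so the trunk is clean from commit one
git -c init.defaultBranch=main init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "chore: initialize trunk"

# real content enters on a version-aware branch, destined for a pull request
git checkout -q -b v0.0.1/spike-auth
echo "spike" > notes.md
git add notes.md
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "feat: first spike"

git log --oneline main   # main still shows only the initialization commit
```

From here, branch protection on main plus a pull-request-only policy keeps every later change reviewed before it lands.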

Repository structure

skills/
  <skill-name>/
    SKILL.md          # Required — the skill definition (loaded by the AI)
    FORMS.md          # Optional — structured form fields for parameter collection
    assets/           # Optional — file templates, fonts, icons used in output
    scripts/          # Optional — executable code (Python, Bash, etc.)
    references/       # Optional — detailed reference docs
    evals/            # Required for repo-managed skills — per-skill evals/evals.json
      files/          # Optional — eval fixture inputs referenced by evals/evals.json files[]
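The layout above is easy to lint mechanically. A minimal sketch of such a check (a hypothetical helper, not a script shipped by this repo) that verifies the required pieces for a repo-managed skill:

```python
import tempfile
from pathlib import Path

# per the structure above: SKILL.md always, evals/evals.json for repo-managed skills
REQUIRED = ["SKILL.md", "evals/evals.json"]

def missing_skill_files(skill_dir):
    """Return the required relative paths absent under skills/<skill-name>/."""
    root = Path(skill_dir)
    return [rel for rel in REQUIRED if not (root / rel).is_file()]


d = Path(tempfile.mkdtemp())
(d / "SKILL.md").write_text("# demo\n", encoding="utf-8")
print(missing_skill_files(d))  # → ['evals/evals.json']
```

A check like this slots naturally into CI so a new skill cannot merge without its evals.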

Contributing

See CONTRIBUTING.md for how to add a new skill or improve an existing one.

License

MIT

About

🦾 cross-agent skills for proven Codebelt workflows: git-visual-commits, trunk-first repos, .NET app/library scaffolding, and strong-name signing for Copilot, Claude, Cursor, Codex, and more.
