From 08e84414b2002ad905d653122f378cbcbb63a6b1 Mon Sep 17 00:00:00 2001 From: pftom Date: Fri, 1 May 2026 11:19:41 +0800 Subject: [PATCH] docs: add Open Design guide alongside Claude Design MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add a parallel handoff path for users of [Open Design](https://github.com/nexu-io/open-design), the Apache-2.0, local-first, BYOK alternative to Claude Design that drives whichever coding-agent CLI the user already has on their PATH (Claude Code, Codex, Cursor, Gemini, OpenCode, Qwen, Copilot, Hermes, Kimi, Pi). Mirrors the existing Claude Design integration: - README: a paragraph next to the Claude Design one, pointing at the new guide and explaining the drop-into-skills/SKILL.md install path - docs/guides/open-design.mdx: Mintlify page parallel to claude-design.mdx, with Steps, comparison table, prompts, limitations, handoff - docs/guides/open-design-hyperframes.md: SKILL.md-shaped instruction file users drop into skills/hyperframes-handoff/SKILL.md (Open Design auto-discovers it on next request) or attach to chat as a one-shot - docs/docs.json: nav entry for the new page The instruction file deliberately defers to claude-design-hyperframes.md as the canonical reference for skeleton catalogs, shader patterns, HDR, and audio-reactive animation — it stays focused on what Open Design's prompt stack needs at emission time (active-DESIGN.md binding, 5-dim self-critique gate, structural rules) so the two guides don't drift. Open Design already ships a motion-frames skill that says "hand-off ready for HyperFrames" — this PR closes the loop on the HyperFrames side so the route is discoverable from the HyperFrames docs. 
--- README.md | 2 + docs/docs.json | 1 + docs/guides/open-design-hyperframes.md | 421 +++++++++++++++++++++++++ docs/guides/open-design.mdx | 175 ++++++++++ 4 files changed, 599 insertions(+) create mode 100644 docs/guides/open-design-hyperframes.md create mode 100644 docs/guides/open-design.mdx diff --git a/README.md b/README.md index 9cf27eaf5..bba57caa7 100644 --- a/README.md +++ b/README.md @@ -36,6 +36,8 @@ This teaches your agent (Claude Code, Cursor, Gemini CLI, Codex) how to write co For Claude Design, open [`docs/guides/claude-design-hyperframes.md`](https://github.com/heygen-com/hyperframes/blob/main/docs/guides/claude-design-hyperframes.md) on GitHub and click the download button (↓) to save it, then attach the file to your Claude Design chat. It produces a valid first draft; refine in any AI coding agent. See the [Claude Design guide](https://hyperframes.heygen.com/guides/claude-design). +For [Open Design](https://github.com/nexu-io/open-design) — the open-source, BYOK alternative to Claude Design that runs locally and drives whichever coding-agent CLI you already have on your `PATH` — open [`docs/guides/open-design-hyperframes.md`](https://github.com/heygen-com/hyperframes/blob/main/docs/guides/open-design-hyperframes.md) and either drop it into your project's `skills/hyperframes-handoff/SKILL.md` or attach it to your Open Design chat. It teaches Open Design to emit a HyperFrames-valid composition; refine in any AI coding agent. See the [Open Design guide](https://hyperframes.heygen.com/guides/open-design). 
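A minimal sketch of the skills install path (assumes you run it from the root of your Open Design checkout and have already downloaded the guide via the ↓ button):

```shell
# Create the skill slot that Open Design scans for:
mkdir -p skills/hyperframes-handoff
# Then move the downloaded guide into place (path assumes it landed in cwd):
# mv open-design-hyperframes.md skills/hyperframes-handoff/SKILL.md
```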
+ For Codex specifically, the same skills are also exposed as an [OpenAI Codex plugin](./.codex-plugin/plugin.json) — sparse-install just the plugin surface: ```bash diff --git a/docs/docs.json b/docs/docs.json index 50d08880b..d086f4f2a 100644 --- a/docs/docs.json +++ b/docs/docs.json @@ -71,6 +71,7 @@ "pages": [ "guides/website-to-video", "guides/claude-design", + "guides/open-design", "guides/prompting", "guides/hyperframes-vs-remotion", "guides/gsap-animation", diff --git a/docs/guides/open-design-hyperframes.md b/docs/guides/open-design-hyperframes.md new file mode 100644 index 000000000..a3899a6fb --- /dev/null +++ b/docs/guides/open-design-hyperframes.md @@ -0,0 +1,421 @@ +--- +name: hyperframes-handoff +description: | + Produce a HyperFrames-valid HTML composition — paused GSAP timeline, data + attributes, scene structure — that any AI coding agent can immediately + refine with `npx hyperframes lint` and `npx hyperframes preview`. Use when + the brief mentions "video", "reel", "motion graphic", "title card", + "animated explainer", or pairs Open Design with HyperFrames for export. +triggers: + - "hyperframes" + - "video" + - "reel" + - "motion graphic" + - "animated explainer" + - "title card" + - "kinetic typography" + - "动效视频" + - "视频海报" +od: + mode: prototype + platform: desktop + scenario: marketing + preview: + type: html + entry: index.html + design_system: + requires: true + sections: [color, typography, layout, motion] + example_prompt: "Design a 15-second Instagram reel announcing dark mode for Taskflow (#6C5CE7). Output as a HyperFrames composition I can render locally." +--- + +# HyperFrames Handoff — for Open Design + +> **Drop this file at `skills/hyperframes-handoff/SKILL.md` inside your local +> [Open Design](https://github.com/nexu-io/open-design) checkout, restart the +> daemon, and the skill appears in the picker. 
Or attach it to a fresh chat +> as a one-shot.** + +This skill teaches Open Design to emit a **valid first draft** of a +[HyperFrames](https://github.com/heygen-com/hyperframes) composition — plain +HTML + CSS + a paused GSAP timeline. The CLI (`npx hyperframes render +index.html`) turns the HTML into an MP4. You author the HTML; the user runs +the render locally. + +**HyperFrames replaces the default video-artifact workflow.** Do NOT emit a +React/Babel composition, do NOT call other prototype skills, do NOT use the +sandboxed iframe's wall-clock playback for timing decisions. Plain HTML + +GSAP only. Treat the [`claude-design-hyperframes.md`](https://github.com/heygen-com/hyperframes/blob/main/docs/guides/claude-design-hyperframes.md) +companion document as the **upstream spec for HyperFrames structural rules** — +the rules below condense it to what Open Design needs at emission time, but +that file is the source of truth for shader catalogs, skeleton variants, and +edge cases. + +--- + +## Your role + +**You produce a valid first draft — not a final render.** Open Design's +strengths are visual identity (driven by the active `DESIGN.md`), layout, and +brand-accurate content decisions. The user (or their coding agent) handles +animation polish, timing micro-adjustments, and production QA after handoff. + +The user's workflow: + +1. **Open Design** (you) — pick palette + typography from the active + `DESIGN.md`, fill scene content, lay down first-pass GSAP entrances and + mid-scene activity, pick shader transitions for 2–3 key moments +2. **Save to disk** — Open Design writes the project into + `.od/projects//` (real `cwd`, agent-ready) +3. **Any AI coding agent** (Claude Code, Codex, Cursor, …) — `npx hyperframes + lint`, `npx hyperframes preview`, then iterate timing, eases, shader + choices, pacing + +Your output must be a **valid starting point a coding agent can open and +refine immediately** — no structural fixes needed. 
+ +### What you optimize for + +- The active `DESIGN.md` palette + typography bound onto `:root` (never + freestyle a palette when one is active) +- Strong visual layout per scene (hierarchy, spacing, readability at video + size — 60px+ headlines, 20px+ body) +- Scene content that tells the story (headlines, stats, copy, imagery) +- Structural validity (passes `npx hyperframes lint` with zero errors) +- Appropriate shader choices for the mood (use the catalog at + [hyperframes.heygen.com/catalog](https://hyperframes.heygen.com/catalog)) +- Reasonable scene count and durations for the video type + +### What the coding agent polishes after you + +You ship every scene with entrance tweens, breathing motion, and shader +transitions. The video plays with full motion from your first draft. The +agent does the **edit-bay refinement**: ease curve tweaks, stagger timing, +scene-duration micro-adjustments, richer mid-scene activity, shader swaps, +production QA. + +--- + +## Hard rules (must-pass before emitting ``) + +These are HyperFrames-structural and non-negotiable. Open Design's +five-dimensional self-critique gate must verify all of them before emission. + +1. **Single HTML file.** `` through ``, all CSS inline, + GSAP loaded from CDN. No build step. +2. **Root composition element.** A single `
<div id="stage">` with:
+   - `data-composition-id="<slug>"`
+   - `data-start="0"`
+   - `data-width` / `data-height` (e.g. `1080` × `1920` for 9:16, `1920` ×
+     `1080` for 16:9, `1080` × `1080` for square)
+   - `data-duration="<total-seconds>"` matching the sum of scene durations
+3. **Scenes are children of `#stage`.** Each scene is `<div class="scene">` with:
+   - `data-start="<seconds>"`
+   - `data-duration="<seconds>"`
+   - `data-track-index="0"` (HyperFrames uses tracks for layering; visual
+     scenes share track 0 unless you intentionally overlap)
+   - A `.scene-content` wrapper inside it that holds the readable content
+     (headlines, stats, imagery). Decoratives (glows, grain, vignette) live
+     directly inside `.scene` but **outside** `.scene-content`.
+4. **GSAP timeline registered paused.** A single timeline created with
+   `gsap.timeline({ paused: true })` and registered on
+   `window.__timelines = window.__timelines || {}; window.__timelines["<slug>"] = tl;`.
+   This is what makes the composition deterministically seekable — the
+   HyperFrames engine drives the playhead.
+5. **`tl.from()` for entrances.** Animate FROM offscreen/invisible TO the
+   resting CSS position. Offset the first tween 0.1–0.3s into each scene to
+   avoid jump-cuts.
+6. **Mid-scene activity on every scene.** Every visible element keeps moving
+   after its entrance. A still element on a still background is a JPEG with
+   a progress bar. Use at least 2 patterns per scene from the table below.
+7. **Shader transitions ONLY at scene boundaries**, and at most 2–3 in the
+   whole video. Use HyperFrames' built-in shader blocks
+   (`flash-through-white`, `whip-pan`, `cinematic-zoom`, `glitch`,
+   `ripple-waves`, `light-leak`, `cross-warp-morph`, `chromatic-radial-split`,
+   `swirl-vortex`, `gravitational-lens`, `domain-warp-dissolve`, `ridged-burn`,
+   `sdf-iris`, `thermal-distortion`). Hard cuts everywhere else.
+8. **No external assets the user didn't provide.** Use solid colors, CSS
+   gradients, inline SVG, `data:` images. Reference the user's uploaded
+   images by their saved filenames; don't invent stock URLs.
+9. **`preview.html` token forwarding** — emit a sibling `preview.html` that
+   loads `index.html` in an iframe and forwards URL hash tokens (`?frame=…`
+   for scrubbing). Skeleton is in §6.
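Rules 2–3 reduce to simple tiling arithmetic. A hypothetical helper (not part of the HyperFrames CLI) shows the invariant a lint pass checks: scene `data-start` values tile end-to-end, and their sum equals the stage's `data-duration`.

```javascript
// Hypothetical helper, not HyperFrames API. Given scene durations in
// seconds, compute each scene's data-start and the total data-duration
// for #stage (scenes tiled end-to-end on track 0).
function tileScenes(durations) {
  let start = 0;
  const scenes = durations.map((duration) => {
    const scene = { start: +start.toFixed(2), duration };
    start += duration;
    return scene;
  });
  return { scenes, total: +start.toFixed(2) };
}

const { scenes, total } = tileScenes([3, 4, 2.5, 3.5]);
console.log(scenes.map((s) => s.start)); // → [0, 3, 7, 9.5]
console.log(total);                      // → 13, must equal #stage data-duration
```

Rerun this check whenever Step 3d changes a scene's duration.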
+ +--- + +## Step 1 — Understand the brief + +**Gate:** You can name the subject, duration, aspect ratio, and at least one +source of visual direction. + +Open Design's `RULE 1` already covers this — turn 1 is a `` +when the brief is sparse. **Do not skip it for video briefs**; pacing +decisions hinge on locking duration and aspect ratio early. + +Inputs in order of reliability: + +1. **Active `DESIGN.md`** (strongest) — Open Design always has one bound when + this skill runs. Read its palette, typography, and motion sections; bind + verbatim onto `:root`. +2. **Attachments** — screenshots, PDFs, brand guides; mine for any signal the + active DS doesn't already cover. +3. **Pasted content** — hex codes, copy, scripts, exact durations. +4. **Web research** (`WebFetch` + grep for hex) — only if the user names a + brand and the active DS isn't theirs. + +--- + +## Step 2 — Pick a skeleton, fill identity + +**Gate:** A working `index.html` exists with the active DS's palette and +typography on `:root`. The preview renders even if scenes are empty. + +| Type | Aspect | Duration | Scenes | +| ------------------------- | ------ | --------- | ------ | +| Social reel | 9:16 | 10–15s | 5–7 | +| Launch teaser | 16:9 | 15–25s | 7–10 | +| Product explainer | 16:9 | 30–60s | 10–18 | +| Cinematic title | 16:9 | 45–90s | 7–12 | + +Bind `:root` from the active `DESIGN.md`: + +```css +:root { + /* From active DESIGN.md — never invented */ + --bg: var(--ds-canvas); + --ink: var(--ds-foreground); + --accent: var(--ds-accent); + --muted: var(--ds-muted); + --font-display: var(--ds-display); + --font-body: var(--ds-body); +} +``` + +If the active DS uses different token names, alias them — but **always +source the values from the DS file**, never hard-code a hex from memory. + +--- + +## Step 3 — Fill scenes (content + animation) + +**Gate:** Every scene has visible content, at least 2 animation patterns from +the table, and mid-scene activity. No scene is a static slide. + +### 3a. 
Content goes inside `.scene-content` + +```html +
+<div class="scene" id="s3" data-start="10" data-duration="4" data-track-index="0">
+  <div class="scene-content">
+    <div class="stat">
+      <div class="stat-number" id="s3-stat">$1.9 Trillion</div>
+      <div class="stat-label" id="s3-label">processed annually</div>
+    </div>
+  </div>
+  <!-- decoratives live outside .scene-content -->
+  <div class="glow"></div>
+</div>
+``` + +### 3b. Entrance tweens (offset 0.1–0.3s into each scene) + +```js +// === SCENE 3 (data-start=10.0) === +tl.from("#s3-title", { y: 40, autoAlpha: 0, duration: 0.6, ease: "power3.out" }, 10.3); +tl.from("#s3-sub", { y: 20, autoAlpha: 0, duration: 0.5, ease: "power2.out" }, 10.7); +tl.from("#s3-bar-chart", { scaleY: 0, transformOrigin: "bottom", duration: 0.8, ease: "expo.out" }, 11.0); +``` + +### 3c. Mid-scene activity (this is what separates video from slides) + +| Element | Mid-scene motion | Pattern | +| ------------------ | ---------------------------------------- | ----------------------------------------------------------------------- | +| Stat / number | Counter from 0 → target | `tl.to({n:0}, { n: target, duration, onUpdate: …, ease: "power2.out" })` | +| SVG line / path | Draws itself in real time | `strokeDashoffset` from `pathLength → 0` | +| Title / wordmark | Characters enter one by one | `tl.from(chars, { autoAlpha: 0, y: 8, stagger: 0.04 })` | +| Logo / lockup | Subtle vertical drift | `tl.to(el, { y: -6, duration: sceneLength, ease: "sine.inOut" })` | +| Chart / bars | Bars fill sequentially | `tl.from(bars, { scaleY: 0, transformOrigin: "bottom", stagger: 0.08 })` | +| Image / screenshot | Slow zoom: `scale: 1 → 1.03` | Ken Burns — `tl.to(img, { scale: 1.03, duration: sceneLength, ease: "none" })` | +| Background glow | Opacity pulse | `tl.to(".glow", { opacity: 0.6, duration: 1.5, ease: "sine.inOut", yoyo: true, repeat: 1 })` | + +**Minimum per scene:** entrance tweens + at least one continuous motion +(float, counter, zoom, or glow). + +### 3d. Adjust scene duration by reading time + +| Display text | Min duration | +| --------------------------- | ------------ | +| No text (hero, icon) | 1.5–2s | +| 1–3 words | 2–3s | +| 4–10 words | 3–4s | +| 11–20 words | 4–6s | +| 21–35 words | 6–8s | +| 35+ words | Split scenes | + +**Hard ceiling: 5s per scene** unless you name a specific reason (hero hold, +cinematic push, long counter animation). 
+ +When you change a scene's duration, update `data-start` on every subsequent +scene to keep them tiled end-to-end, and update `#stage`'s `data-duration` to +match the total. + +### 3e. Vary eases + +Use at least 3 different eases across the timeline. Don't default to +`power2.out` on everything. Good defaults: `power3.out` (heavy entrances), +`expo.out` (snappy stat reveals), `sine.inOut` (breathing loops), +`elastic.out(1, 0.5)` (playful overshoot — sparingly). + +--- + +## Step 4 — Shader transitions (2–3 max) + +Use HyperFrames' built-in shader blocks at scene boundaries. Pick by mood: + +| Shader | Mood | +| -------------------------- | ------------------------------------- | +| `flash-through-white` | Energetic, optimistic, pop | +| `whip-pan` | High-energy, sports/news cut | +| `cinematic-zoom` | Reveal, magnification, "let me show you" | +| `glitch` | Tech, edgy, glitch-pop | +| `ripple-waves` | Soft, organic, lifestyle | +| `light-leak` | Warm, nostalgic, film-like | +| `cross-warp-morph` | Smooth scene-to-scene continuity | +| `chromatic-radial-split` | Retro tech, VHS aesthetic | +| `swirl-vortex` | Disorienting, dream sequence | + +Hard cuts everywhere else. A good rule: shader at the beginning, shader at +the climax, shader at the end. Anything more is over-decorated. + +--- + +## Step 5 — Self-critique (Open Design's 5-dim gate) + +Before emitting ``, score yourself 1–5 across: + +- **Philosophy** — Is the visual stance coherent with the brief and the + active DS, or is it generic? +- **Hierarchy** — Does each scene have a single dominant element? Is + reading order obvious? +- **Detail** — Do shader/eases/durations match the mood, or are they + defaulted? +- **Function** — Does the timeline play smoothly when the engine seeks? + Are all scene `data-start`s tiled? Does total `data-duration` match? +- **Innovation** — Is there at least one moment that wouldn't appear in a + generic AI render? + +Anything under 3/5 is a regression — fix and rescore. 
Two passes is normal.
+
+---
+
+## Step 6 — Output contract
+
+Emit exactly two files:
+
+### `index.html` — the composition
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+<head>
+  <meta charset="utf-8">
+  <title>…</title> <!-- from brief -->
+  <style>
+    /* :root tokens from the active DESIGN.md + all scene CSS, inline */
+  </style>
+</head>
+<body>
+  <div id="stage" data-composition-id="<slug>" data-start="0"
+       data-width="1080" data-height="1920" data-duration="<total-seconds>">
+    <div class="scene" id="s1" data-start="0" data-duration="3" data-track-index="0">
+      <div class="scene-content"><!-- scene 1 content --></div>
+    </div>
+    <!-- …more scenes, tiled end-to-end… -->
+  </div>
+  <script src="https://cdn.jsdelivr.net/npm/gsap@3/dist/gsap.min.js"></script>
+  <script>
+    const tl = gsap.timeline({ paused: true });
+    // entrance tweens + mid-scene activity per scene…
+    window.__timelines = window.__timelines || {};
+    window.__timelines["<slug>"] = tl;
+  </script>
+</body>
+</html>
+```
+
+### `preview.html` — the local-preview shim
+
+```html
+<!DOCTYPE html>
+<html>
+<head><meta charset="utf-8"><title>Preview</title></head>
+<body style="margin:0">
+  <iframe id="frame" style="border:0;width:100vw;height:100vh"></iframe>
+  <script>
+    // Forward query + hash tokens (e.g. ?frame=… for scrubbing) to the composition
+    document.getElementById("frame").src =
+      "index.html" + location.search + location.hash;
+  </script>
+</body>
+</html>
+```
+
+Save both files into the project's `cwd` (Open Design has already set this
+to `.od/projects/<slug>/`). The agent can immediately run:
+
+```bash
+npx hyperframes lint      # should pass with zero errors
+npx hyperframes preview   # opens the studio
+npx hyperframes render    # writes MP4
+```
+
+---
+
+## Anti-AI-slop blacklist (HyperFrames-specific)
+
+- **No purple gradients on dark backgrounds** unless the brief explicitly
+  names that aesthetic.
+- **No generic emoji icons** — use inline SVG or DS-provided iconography.
+- **No "10× faster" / "AI-powered" filler copy** — write the user's actual
+  words or use honest placeholders (`—` or labelled grey blocks).
+- **No invented brand colors** — read from the active DS or the user's
+  attachment, never from memory.
+- **No identical card grids** for every scene — at least 3 distinct layout
+  postures across the video.
+- **No wall-clock JS animations** — `setTimeout`, `setInterval`,
+  `requestAnimationFrame`-driven animation breaks deterministic seeking. GSAP
+  timeline only. (Library-clock animations like Anime.js, Motion One, and
+  Lottie are supported via [HyperFrames' Frame Adapter](https://hyperframes.heygen.com/concepts/frame-adapters)
+  pattern, but stick to GSAP for first-draft handoffs unless the brief
+  requires another runtime.)
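The wall-clock ban is about determinism. A hypothetical sketch (not HyperFrames or GSAP API) of what timeline-driven animation buys you: the displayed value at time `t` is a pure function of `t`, so seeking to any frame reproduces the same pixels.

```javascript
// Hypothetical sketch of the idea behind timeline-driven counters.
// No stored state, no wall clock: the value is derived from the playhead.
function counterAt(t, { start, duration, target }) {
  const p = Math.min(Math.max((t - start) / duration, 0), 1); // clamp to 0..1
  const eased = 1 - (1 - p) * (1 - p);                        // power2.out shape
  return Math.round(target * eased);
}

console.log(counterAt(10.0, { start: 10, duration: 1.2, target: 4200 })); // → 0
console.log(counterAt(10.6, { start: 10, duration: 1.2, target: 4200 })); // → 3150
console.log(counterAt(11.2, { start: 10, duration: 1.2, target: 4200 })); // → 4200
```

A `setInterval` counter, by contrast, depends on when playback started, so the render engine cannot reproduce it frame by frame.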
+ +--- + +## When to defer to the Claude Design instructions + +For these advanced areas, treat +[`claude-design-hyperframes.md`](https://github.com/heygen-com/hyperframes/blob/main/docs/guides/claude-design-hyperframes.md) +as the canonical reference and follow its patterns verbatim: + +- The full skeleton catalog (Skeletons A–D) +- Complete shader-block insertion patterns +- HDR / wide-gamut color handling +- Audio-reactive animation (`hf-seek` + `window.__hfAudio`) +- Captions / TTS integration +- The `hyperframes add` registry (50+ blocks and components) + +This skill stays focused on what Open Design needs at emission time — the +structural rules, the active-`DESIGN.md` binding, and the 5-dim self-critique +that's specific to OD's prompt stack. diff --git a/docs/guides/open-design.mdx b/docs/guides/open-design.mdx new file mode 100644 index 000000000..d79acf514 --- /dev/null +++ b/docs/guides/open-design.mdx @@ -0,0 +1,175 @@ +--- +title: Open Design +description: "Create HyperFrames video drafts in Open Design — the open-source, BYOK Claude-Design alternative — then refine in any AI coding agent." +--- + +[Open Design](https://github.com/nexu-io/open-design) is an open-source, local-first alternative to Claude Design. It runs on your laptop with `pnpm tools-dev`, deploys the web layer to Vercel, and delegates to whichever coding-agent CLI you already have on your `PATH` (Claude Code, Codex, Cursor Agent, Gemini CLI, OpenCode, Qwen, Copilot, Hermes, Kimi, Pi) — or to any OpenAI-compatible BYOK endpoint. + +Open Design produces a **valid first draft** of a HyperFrames composition — palette, scene content, GSAP entrance tweens, mid-scene activity, and shader transitions. You then download the project and refine in any AI coding agent with linting and live preview. 
+ +## Get started + + + + Open [`open-design-hyperframes.md`](https://github.com/heygen-com/hyperframes/blob/main/docs/guides/open-design-hyperframes.md) on GitHub and click the download button (↓) to save it. + + + ```bash + git clone https://github.com/nexu-io/open-design.git + cd open-design + pnpm install + pnpm tools-dev run web + # open the web URL printed by tools-dev + ``` + + + **Recommended:** copy `open-design-hyperframes.md` to `skills/hyperframes-handoff/SKILL.md` inside the Open Design repo. The daemon auto-discovers it on the next request and exposes it as a skill in the picker. **Or:** start a new chat and attach the file directly — Open Design reads attachments natively. + + + Pick the `hyperframes-handoff` skill (or your active prototype skill), pick a design system or visual direction, and type the brief. Include screenshots, brand assets, or a palette if you have them. + + + Open Design writes `index.html`, `preview.html`, `README.md`, and a `DESIGN.md` snapshot into `.od/projects//`. Click **Save to disk** or download as a project ZIP. + + + The Open Design project folder is already a real on-disk working directory. Hand it off to Claude Code, Cursor, Codex, or any agent with terminal access: + ```bash + cd .od/projects/ + npx skills add heygen-com/hyperframes # install skills (one-time) + npx hyperframes lint # should pass with zero errors + npx hyperframes preview # open the studio + ``` + + + + + **Drop into `skills/`, don't paste into chat.** Open Design's daemon reads `SKILL.md` files at request time and injects the side files (templates, references) automatically. A pasted URL or chat attachment works, but the skill path gives you the full pre-flight pipeline (template injection + 5-dimensional self-critique gate). 
+ + +## Which setup to use + +| Surface | Recommended setup | +| -------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Open Design (open-source) | Drop [`open-design-hyperframes.md`](https://github.com/heygen-com/hyperframes/blob/main/docs/guides/open-design-hyperframes.md) into `skills/hyperframes-handoff/SKILL.md` | +| Claude Code | `npx skills add heygen-com/hyperframes`, then use `/hyperframes` | +| Cursor / Codex / Gemini CLI | `npx skills add heygen-com/hyperframes` | +| Claude Design (closed-source) | Attach [`claude-design-hyperframes.md`](https://github.com/heygen-com/hyperframes/blob/main/docs/guides/claude-design-hyperframes.md) to your chat | + +## How it works + +The instruction file gives Open Design **pre-valid HTML skeletons** — the structural rules (data attributes, timeline registration, scene visibility, preview token forwarding) are already embedded. Open Design fills in the creative work: + +1. **Palette + typography** — driven by the active `DESIGN.md` (Open Design ships 72 systems out of the box, plus 5 deterministic visual directions when no brand is named) bound onto `:root` +2. **Scene content** — text, images, layout inside `.scene-content` wrappers +3. **Animations** — GSAP entrance tweens and mid-scene activity +4. **Transitions** — hard cuts for most scenes, shader transitions at 2-3 key moments + +Open Design's [5-dimensional self-critique](https://github.com/nexu-io/open-design/blob/main/apps/web/src/prompts/discovery.ts) runs before emission, so the artifact arrives lint-clean and your coding agent can start refining immediately without structural fixes. + +## Example prompts + + + + ```text + Use the hyperframes-handoff skill. I just shipped dark mode for my app. + Make me a 15-second Instagram reel announcing it. 
+ + - App name: Taskflow + - Primary color: #6C5CE7 + - The vibe is clean, minimal, dark + - Key stat: "47% of users requested this" + ``` + + + ```text + Use the hyperframes-handoff skill on Codex CLI. 25-second LinkedIn video. + + Problem: Sales teams waste 3 hours/day on manual CRM updates. + Solution: AutoCRM — AI that logs every call, email, and meeting. + Traction: 200+ teams, $1.2M ARR, 18% MoM growth. + CTA: autocrmhq.com + + Use the Linear design system. Professional but not corporate. + ``` + + + ```text + Use the hyperframes-handoff skill. Design system: Stripe. 10-second reel. + One big number: "$4.2 billion processed in Q1 2026" + + Dark background, the number animates up from zero. Subtle, confident. + End with logo placeholder and "stripe.com" + ``` + + + ```text + Use the hyperframes-handoff skill. 30-second launch video for Orbit. + ``` + + Open Design's `RULE 1` always opens with a `` before emitting code, so a sparse brief turns into one short question form (surface · audience · tone · brand · scale) instead of an AI-freestyle render. + + + +## What to include in your prompt + +Open Design reads inputs in this order of reliability: **active DESIGN.md > attachments > pasted content > web research > URLs**. + +| Input type | What it gives Open Design | +| --- | --- | +| Active design system (72 shipped, switchable from picker) | Full 9-section spec (color, typography, spacing, layout, components, motion, voice, brand, anti-patterns) — strongest source | +| Screenshots / PDFs / brand guides | Palette, typography, UI patterns, tone — read by the agent natively | +| Pasted hex codes, typefaces, copy | Authoritative for what they cover | +| Brand name (well-known) | Open Design can `WebFetch` blogs, press, Wikipedia | +| SPA URL (React/Vue homepage) | Returns near-empty shell — pivot to blog/press instead | + +The more specific your prompt, the better the output. Pick a design system or visual direction up front, then describe content. 
+ +## Why use Open Design vs. Claude Design + +| | Claude Design | **Open Design** | +| --- | --- | --- | +| License | Closed | Apache-2.0 | +| Cost | Pro / Max / Team | BYOK (free with your own CLI / API key) | +| Form factor | Web (claude.ai) only | Web app + local daemon (or deploy to Vercel) | +| Agent runtime | Anthropic only (Opus 4.7) | 10 CLI adapters + OpenAI-compatible BYOK proxy | +| Skills | Proprietary | 31 file-based `SKILL.md` bundles, droppable | +| Design systems | Proprietary | 72 shipped `DESIGN.md` systems | +| Filesystem-grade workspace | ❌ | ✅ Real `cwd`, real `Read` / `Write` / `Bash` / `WebFetch` | + +If you want the Claude Design loop without lock-in, Open Design is the same artifact-first mental model — open, local, BYOK. + +## Known limitations + +- **Render still happens locally** — Open Design produces the HTML; `npx hyperframes render` and HDR encoding still need FFmpeg + Node 22+ on your machine. +- **In-pane preview is sandboxed iframe** — full browser playback is reliable; for frame-accurate scrubbing use `npx hyperframes preview` after handoff. +- **Shader passthrough requires WebGL** — same as the Claude Design path; Open Design's iframe sandbox supports it. +- **Skill pre-flight is daemon-side** — if you bypass the skill picker and paste raw HTML into chat, you lose the side-file injection and 5-dim critique gate. Use the skill. + +## The handoff to your coding agent + +Open Design's project folder is the agent's `cwd`. 
There's no "export then re-import" step — open Claude Code, Cursor, Codex, or any AI coding agent against the same directory:
+
+```bash
+cd .od/projects/<slug>
+npx skills add heygen-com/hyperframes   # one-time setup
+npx hyperframes lint                    # verify structure
+npx hyperframes preview                 # open the studio
+```
+
+Then iterate the same way as the Claude Design path:
+
+- "Make scene 3's entrance snappier"
+- "Add a counter animation to the stat in scene 5"
+- "Tighten the pacing — scenes 4 and 6 feel too long"
+- "Change the shader on transition 2 to glitch"
+
+## Next steps
+
+<CardGroup cols={2}>
+  <Card title="Claude Design guide" href="/guides/claude-design">
+    The closed-source flavor of the same workflow — useful when you don't have a CLI on your laptop.
+  </Card>
+  <Card title="Prompting guide" href="/guides/prompting">
+    More prompt patterns for HyperFrames across Claude Code, Claude Design, Open Design, and other agents.
+  </Card>
+</CardGroup>