feat(skill): add OpenSandbox agent skill for AI coding assistants#528
feat(skill): add OpenSandbox agent skill for AI coding assistants#528futureproperty wants to merge 1 commit intoalibaba:mainfrom
Conversation
Provide a SKILL.md that teaches AI agents (Claude Code, Cursor, OpenCode) how to use OpenSandbox without the MCP tool-schema overhead (~58% token reduction). Includes connection config resolution with persistent ~/.opensandbox.json caching, Python SDK and REST API examples covering the full sandbox lifecycle, and eval prompts for skill validation.
|
|
There was a problem hiding this comment.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0ca1fae967
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
| operations — the SDK wraps this for you. If you need direct access: | ||
|
|
||
| | Method | Endpoint (execd :44772) | Purpose | | ||
| |--------|------------------------|---------| | ||
| | `POST` | `/command` | Run command (SSE stream) | |
There was a problem hiding this comment.
Document execd endpoint resolution before calling
/command
For REST-only workflows, this section skips the step that actually makes execd reachable. POST /v1/sandboxes only gives back a sandbox id; the repo docs require a follow-up GET /v1/sandboxes/{id}/endpoints/44772 (or ?use_server_proxy=true) to obtain a usable execd URL (specs/sandbox-lifecycle.yml:345-375, server/README.md:138-147). Since /command itself is served by execd rather than the lifecycle API host (specs/execd-api.yaml:285-338), an agent following the skill will otherwise try to post /command to $DOMAIN and fail on any deployment where execd is not directly exposed.
Useful? React with 👍 / 👎.
| metrics = await sandbox.get_metrics() | ||
| print(f"CPU: {metrics.cpu_used_in_percent}%, Memory: {metrics.memory_used_in_mib}MB") |
There was a problem hiding this comment.
Use
cpu_used_percentage in the metrics example
This snippet references metrics.cpu_used_in_percent, but the Python SDK exposes SandboxMetrics.cpu_used_percentage (sdks/sandbox/python/src/opensandbox/models/sandboxes.py:449-466). Any assistant that copies this example will raise AttributeError as soon as it tries to inspect CPU usage, so the skill should use the real field name here.
Useful? React with 👍 / 👎.
| | `DELETE` | `/command?id={execId}` | Interrupt command | | ||
| | `POST` | `/files/upload` | Upload file (multipart) | | ||
| | `GET` | `/files/download?path={path}` | Download file | | ||
| | `GET` | `/files/info?path[]={path}` | File metadata | |
There was a problem hiding this comment.
Replace
path[] with repeated path query params
The execd API defines file-info lookups as repeated path parameters, not path[] (specs/execd-api.yaml:465-474), and the smoke test exercises it with params={"path": [file_path]} (components/execd/tests/smoke_api.py:165-167). If an agent copies ?path[]=... from the skill into curl code, FastAPI will not bind it to the documented parameter shape and the request can fail with a validation error.
Useful? React with 👍 / 👎.
|
Great Job! But could you use |
Summary
SKILL.md) that teaches AI coding assistants (Claude Code, Cursor, OpenCode) how to use OpenSandbox — as a lightweight alternative to the MCP server~/.opensandbox.json, full Python SDK + REST API coverage, and eval promptsMotivation
The MCP server registers 17 tools (~7,750 tokens in system prompt). This skill achieves the same coverage in ~4,500 tokens (~58% reduction) by leveraging the agent's existing bash/python tool capabilities. For capable agents that already have bash/python tools, the structured MCP tool schemas are unnecessary overhead.
Changes
opensandbox-skill/SKILL.md— Complete skill covering:~/.opensandbox.json→ env vars → asks user, then persists for future sessionsopensandbox-skill/evals/evals.json— 3 eval prompts for skill validation (SDK usage, agent CLI setup, REST API)Testing
sdks/sandbox/python/), OpenAPI specs (specs/), and all example implementations (examples/)