feat: entity resume + domain trust + DRL provisioning #147
chitcommit wants to merge 7 commits into main.
Conversation
…service tokens

Add 14 top-level secrets_store_secrets bindings to ChittyConnect:
- 7 Mercury API tokens (one per org)
- 1 Ch1tty MCP token
- 6 CF Access service token credentials (client_id + secret for chittycommand, chittyagent, chittyfinance)

Secrets are stored in the default_secrets_store (e914522471964c3c8cf1e601770edcc3) and accessible via async env.BINDING_NAME.get() at runtime. This enables ChittyConnect to serve as the credential broker for Mercury API tokens and CF Access service-to-service auth across the ecosystem.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
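The runtime access pattern described above can be sketched as follows. This is a minimal mock, not the Workers runtime: the `env` object and `loadMercuryToken` helper are illustrative stand-ins, and the only assumption taken from the commit message is that Secrets Store bindings expose an async `get()` rather than a plain string value.

```javascript
// Mock of the Workers env: a Secrets Store binding exposes async get().
const env = {
  MERCURY_TOKEN_ARIBIA_LLC: { get: async () => "mercury-token-placeholder" },
};

// Illustrative helper: fail loudly if the secret was never provisioned.
async function loadMercuryToken(env) {
  const token = await env.MERCURY_TOKEN_ARIBIA_LLC.get();
  if (!token) throw new Error("MERCURY_TOKEN_ARIBIA_LLC not provisioned");
  return token;
}

loadMercuryToken(env).then((token) => console.log(token));
// → mercury-token-placeholder
```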
Adds 3 more secrets_store_secrets bindings for Mercury OIDC integration:
- MERCURY_OIDC_CLIENT_ID — CF Access SaaS app ID
- MERCURY_OIDC_CLIENT_SECRET — OIDC client secret
- MERCURY_OIDC_ISSUER — OIDC issuer URL for token exchange

Enables programmatic OAuth token exchange for Mercury write operations (transfers, payments) via CF Access zero-trust auth layer.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds service bindings across all environments for direct worker-to-worker calls: SVC_TASKS, SVC_LEDGER, SVC_FINANCE, SVC_CONTEXTUAL, SVC_ID, SVC_EVIDENCE, SVC_CHRONICLE, SVC_DISPUTES, SVC_SCORE.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds PUT /api/credentials/:vault/:item/:field to store credentials back to 1Password via the Connect server API. ChittyConnect becomes a bidirectional credential broker — read AND write.

Also includes:
- OnePasswordConnectClient.prototype.put() — upsert items/fields
- SVC_DISPUTES binding fix (chittydisputes → chittydispute)

Blocked: Connect JWT needs write permissions (bits 2+4) before the put() method will succeed. Token rotation required.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add SVC_STORAGE service binding to chittystorage worker
- chitty_evidence_search → ChittyStorage /api/docs (Neon metadata + entity tags)
- chitty_evidence_retrieve → ChittyStorage /api/docs (file_url for R2 serving)
- Keeps chitty_evidence_ingest/verify on legacy SVC_EVIDENCE path
- ChittyStorage has 1,533 content-addressed docs with entity relationships

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
chittyagent-tasks CHITTY_AUTH_SERVICE_TOKEN was provisioned via set-worker-secret.yml from CHITTY_API_GATEWAY_SERVICE_TOKEN (item 6pnxym6ke46wote7qwexaakni4, ChittyGateway API Token). The deploy script previously pointed at chittyconnect-prod (sozaaemylfw3krabpyueqwmytq), whose credential field is empty, causing every deploy to leave CHITTY_TASK_TOKEN unset and chitty_task_create to return 'No service token available for ChittyTask'.

The secret is now provisioned directly to CF. The deploy script has been updated to reference the correct item and documents the provisioning gap until the vault item is populated.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…lution

ContextResolver additions:
- buildEntityResume(): work history from ChittyLedger (sessions, tools, projects, lineage)
- computeDomainTrust(): 4 core domains (People/Legal/State/Chitty) + niche emergence from canon.trust_domains
- buildProvisioningRecommendation(): baseline services + identity class tiers from canon tables, DRL reckoning
- Domain trust reads trust taxonomy from ChittyCanon (not hardcoded)
- Identity class resolved from DRL trust scores at provisioning time

Context resolution response now includes:
- Entity resume (work history, domain trust, competencies)
- Proposed provisioning (TY-VY-RY planes, baseline + class services, auth requirements)
- DRL reckoning (fresh TY/VY/RY scores, not cached)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
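The shape of computeDomainTrust() can be sketched roughly as below. The four core domains come from the commit message above; the scoring itself (a success ratio over ledger events) is an illustrative assumption, not the actual DRL formula, and the event shape `{ domain, outcome }` is hypothetical.

```javascript
// Hedged sketch: per-domain trust from ledger events, with the four
// core domains always present and niche domains emerging as seen.
function computeDomainTrust(events) {
  const core = ["People", "Legal", "State", "Chitty"];
  const tally = Object.fromEntries(core.map((d) => [d, { ok: 0, total: 0 }]));
  for (const e of events) {
    // Niche emergence: domains outside the core four get tracked too.
    if (!tally[e.domain]) tally[e.domain] = { ok: 0, total: 0 };
    tally[e.domain].total += 1;
    if (e.outcome === "success") tally[e.domain].ok += 1;
  }
  // Score each domain as its success ratio (0 when no events).
  return Object.fromEntries(
    Object.entries(tally).map(([d, t]) => [d, t.total ? t.ok / t.total : 0]),
  );
}

const trust = computeDomainTrust([
  { domain: "Legal", outcome: "success" },
  { domain: "Legal", outcome: "failure" },
  { domain: "Filings", outcome: "success" }, // niche emergence
]);
console.log(trust.Legal); // 0.5
console.log(trust.Filings); // 1
```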
Deploying with
| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ❌ Deployment failed | chittyconnect | ac9f876 | Apr 06 2026, 05:18 AM |
📝 Walkthrough

This PR enhances credential management, entity context resolution, and service integration. It introduces OnePassword credential storage via a new PUT endpoint, adds entity resume and provisioning recommendation capabilities to the context resolver with database queries, and replaces Cloudflare AI Search with ChittyStorage for evidence operations. Additionally, it updates secret deployment configuration and adds service bindings to wrangler.jsonc.

Changes
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Client
    participant API as PUT /credentials Route
    participant OPCC as OnePasswordConnectClient
    participant 1Password as 1Password Connect
    participant AuditDB as credential_provisions
    Client->>API: PUT /api/credentials/:vault/:item/:field
    API->>API: Validate vault, value, resolve service
    API->>OPCC: put(credentialPath, value, notes)
    OPCC->>1Password: GET /v1/vaults/{vaultId}/items
    alt Item Exists
        OPCC->>1Password: GET /v1/vaults/{vaultId}/items/{itemId}
        OPCC->>1Password: PUT /v1/vaults/{vaultId}/items/{itemId}
        OPCC-->>API: {stored: true, action: "updated"}
    else Item Not Found
        OPCC->>1Password: POST /v1/vaults/{vaultId}/items
        OPCC-->>API: {stored: true, action: "created"}
    end
    API->>AuditDB: Insert type=1password_store record
    API-->>Client: 201/200 with metadata
```
```mermaid
sequenceDiagram
    participant Client
    participant API as /resolve Route
    participant Resolver as ContextResolver
    participant D1 as D1 Database
    participant Neon as Neon/HYPERDRIVE
    Client->>API: POST /resolve
    alt chitty_id Present
        API->>Resolver: buildEntityResume(chitty_id, context)
        Resolver->>D1: Query entity context fields
        Resolver->>Resolver: computeDomainTrust(chitty_id)
        Resolver->>Neon: Load trust_domains taxonomy
        Resolver->>Neon: Query event_ledger (90 days)
        Resolver->>Neon: Compute domain scores
        Resolver->>D1: Query event_ledger aggregates
        Resolver-->>API: resume object
        API->>Resolver: buildProvisioningRecommendation(chitty_id, context)
        Resolver->>Neon: Fetch trust_scores from DRL
        Resolver->>D1: Load identity_classes and service access
        Resolver->>D1: Query event_ledger (30 days) for provisioning status
        Resolver-->>API: proposedProvisioning object
    end
    API-->>Client: {data: {resolution, resume, proposedProvisioning}}
```
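The response assembly at the end of the diagram can be sketched as below. This is a minimal illustration, not the actual route code: `buildResolveResponse` is a hypothetical helper, and the only behavior taken from the diagram is that resume and proposedProvisioning are attached only when a chitty_id was resolved.

```javascript
// Sketch: attach resume + proposedProvisioning only for resolved entities.
function buildResolveResponse(resolution, resume, proposedProvisioning) {
  const data = { resolution };
  if (resolution?.context?.chitty_id) {
    data.resume = resume;
    data.proposedProvisioning = proposedProvisioning;
  }
  return { data };
}

const out = buildResolveResponse(
  { context: { chitty_id: "CID-001" } },
  { workHistory: [] },
  { identityClass: "context" },
);
console.log(Object.keys(out.data)); // [ 'resolution', 'resume', 'proposedProvisioning' ]
```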
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 inconclusive)
```js
// Proposed provisioning: what this entity would have access to
// Based on its identity class, trust level, and existing connections
const trustLevel = Number(resolution.context.trust_level || 0);
const identityClass =
```

```js
  resolution.context.chitty_id,
  resolution.context,
  hints,
)
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ac9f876b56
```js
/**

/**
```
Remove extra docblock opener before PUT credentials route
Adding a second /** here consumes the previous comment opener and leaves the later health-doc lines as raw * ... tokens in code, which makes the file fail to parse (e.g., node --check errors at * GET /api/credentials/health). Because this module is loaded at startup, the syntax error can prevent the worker from booting at all.
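The parse failure described above is easy to demonstrate. The snippet below is an illustrative reconstruction, not the actual module: once a stray `/**` swallows the real comment opener, the first `*/` closes the comment early and the later doc lines become bare `*` tokens in code.

```javascript
// Reconstruct the bug: two openers, so the doc lines after the first
// `*/` are parsed as code and the source fails to compile.
const broken = [
  "/**",                                          // stray opener (the bug)
  "/**",                                          // intended opener, now inside the comment
  " * PUT /api/credentials/:vault/:item/:field",
  " */",                                          // closes the comment early
  " * GET /api/credentials/health",               // now a bare `*` expression → SyntaxError
  " */",
].join("\n");

let parses = true;
try {
  new Function(broken); // compiles the string as a function body
} catch {
  parses = false;
}
console.log(parses); // false
```

This is the same class of failure `node --check` reports; because the module loads at startup, the worker never boots.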
```js
try {
  await c.env.DB.prepare(
    `INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) VALUES (1password_store, ?, ?, ?, datetime(now))`
```
Quote SQL literals in credential audit INSERT
This INSERT uses VALUES (1password_store, ?, ?, ?, datetime(now)), but in SQLite/D1 both 1password_store and now are parsed as identifiers instead of string literals, so the statement throws at runtime. Since this path executes for every successful PUT credential request, audit logging for this new endpoint will consistently fail and drop provisioning records.
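The corrected statement can be sketched as below. The `sqlStringLiteral` helper is illustrative (in practice the fixed string can simply be written inline, as the later suggestion does); the point is that `'1password_store'` must be a quoted string literal and `datetime('now')` needs quotes too, matching the GET route's working INSERT.

```javascript
// Illustrative helper: quote a value as a SQLite string literal,
// doubling embedded single quotes.
function sqlStringLiteral(value) {
  return "'" + String(value).replace(/'/g, "''") + "'";
}

const stmt =
  "INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) " +
  `VALUES (${sqlStringLiteral("1password_store")}, ?, ?, ?, datetime('now'))`;

console.log(stmt.includes("'1password_store'")); // true
console.log(stmt.includes("datetime('now')")); // true
```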
Pull request overview
Adds richer context resolution output (entity resume, domain-scoped trust, and provisioning recommendations) and shifts evidence search/retrieve to a delegated internal storage service, alongside new credential write support and Worker/service-binding config updates.
Changes:
- Introduce entity "resume" + domain trust + provisioning recommendation generation during context resolution (backed by ledger/canon data when available).
- Delegate MCP evidence search/retrieve to SVC_STORAGE instead of Cloudflare AI Search.
- Add a PUT /api/credentials/:vault/:item/:field endpoint to store/update 1Password Connect items, plus related deployment/config updates.
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 13 comments.
Show a summary per file
| File | Description |
|---|---|
| wrangler.jsonc | Adds Secrets Store bindings and multiple service bindings per env |
| src/services/1password-connect-client.js | Adds put() to create/update 1Password items via Connect API |
| src/mcp/tool-dispatcher.js | Replaces evidence search/retrieve implementation to call SVC_STORAGE |
| src/intelligence/context-resolver.js | Adds provisioning recommendation, domain trust computation, and entity resume building |
| src/api/routes/credentials.js | Adds credentials PUT endpoint and audit logging |
| src/api/routes/context-resolution.js | Returns resume + proposed provisioning in resolve response |
| scripts/deploy-secrets-connect.sh | Updates CHITTY_TASK_TOKEN mapping and documents reprovision steps |
```jsonc
// CLOUDFLARE SECRETS STORE — shared secrets across workers
// Store: default_secrets_store (e914522471964c3c8cf1e601770edcc3)
// Top-level binding — inherited by all environments
// ──────────────────────────────────────────────────────────────────
"secrets_store_secrets": [
  // Mercury API tokens (7 orgs)
  { "binding": "MERCURY_TOKEN_ARIBIA_LLC", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_ARIBIA_LLC" },
  { "binding": "MERCURY_TOKEN_ARIBIA_LLC_CITY_STUDIO", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_ARIBIA_LLC_CITY_STUDIO" },
  { "binding": "MERCURY_TOKEN_ARIBIA_LLC_APT_ARLENE", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_ARIBIA_LLC_APT_ARLENE" },
  { "binding": "MERCURY_TOKEN_CHICAGO_FURNISHED_CONDOS", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_CHICAGO_FURNISHED_CONDOS" },
  { "binding": "MERCURY_TOKEN_IT_CAN_BE_LLC", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_IT_CAN_BE_LLC" },
  { "binding": "MERCURY_TOKEN_CHITTY_SERVICES", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_CHITTY_SERVICES" },
  { "binding": "MERCURY_TOKEN_JEAN_ARLENE_VENTURING", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_JEAN_ARLENE_VENTURING" },
  // MCP token
  { "binding": "CH1TTY_MCP_TOKEN", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "ch1tty_mcp_token" },
  // CF Access service tokens (client_id + secret for each consumer)
  { "binding": "CF_ACCESS_CLIENT_ID_CHITTYCOMMAND", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_ID_CHITTYCOMMAND" },
  { "binding": "CF_ACCESS_CLIENT_SECRET_CHITTYCOMMAND", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_SECRET_CHITTYCOMMAND" },
  { "binding": "CF_ACCESS_CLIENT_ID_CHITTYAGENT", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_ID_CHITTYAGENT" },
  { "binding": "CF_ACCESS_CLIENT_SECRET_CHITTYAGENT", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_SECRET_CHITTYAGENT" },
  { "binding": "CF_ACCESS_CLIENT_ID_CHITTYFINANCE", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_ID_CHITTYFINANCE" },
  { "binding": "CF_ACCESS_CLIENT_SECRET_CHITTYFINANCE", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_SECRET_CHITTYFINANCE" },
  // Mercury OIDC (CF Access SaaS app — programmatic token exchange for write operations)
  { "binding": "MERCURY_OIDC_CLIENT_ID", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_OIDC_CLIENT_ID" },
  { "binding": "MERCURY_OIDC_CLIENT_SECRET", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_OIDC_CLIENT_SECRET" },
  { "binding": "MERCURY_OIDC_ISSUER", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_OIDC_ISSUER" }
],
```
secrets_store_secrets is declared at the top level and commented as “inherited by all environments”, but this repo’s Wrangler pattern is explicitly self-contained per env (wrangler.jsonc header notes no binding inheritance; runbook also states env blocks do not inherit top-level bindings). If Wrangler doesn’t apply this binding to env.dev/staging/production, these secrets won’t be available at runtime. Move/duplicate secrets_store_secrets into each env.* block (or update the config to match the documented inheritance rules).
```diff
-// CLOUDFLARE SECRETS STORE — shared secrets across workers
-// Store: default_secrets_store (e914522471964c3c8cf1e601770edcc3)
-// Top-level binding — inherited by all environments
-// ──────────────────────────────────────────────────────────────────
-"secrets_store_secrets": [
-  // Mercury API tokens (7 orgs)
-  { "binding": "MERCURY_TOKEN_ARIBIA_LLC", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_ARIBIA_LLC" },
-  { "binding": "MERCURY_TOKEN_ARIBIA_LLC_CITY_STUDIO", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_ARIBIA_LLC_CITY_STUDIO" },
-  { "binding": "MERCURY_TOKEN_ARIBIA_LLC_APT_ARLENE", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_ARIBIA_LLC_APT_ARLENE" },
-  { "binding": "MERCURY_TOKEN_CHICAGO_FURNISHED_CONDOS", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_CHICAGO_FURNISHED_CONDOS" },
-  { "binding": "MERCURY_TOKEN_IT_CAN_BE_LLC", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_IT_CAN_BE_LLC" },
-  { "binding": "MERCURY_TOKEN_CHITTY_SERVICES", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_CHITTY_SERVICES" },
-  { "binding": "MERCURY_TOKEN_JEAN_ARLENE_VENTURING", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_TOKEN_JEAN_ARLENE_VENTURING" },
-  // MCP token
-  { "binding": "CH1TTY_MCP_TOKEN", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "ch1tty_mcp_token" },
-  // CF Access service tokens (client_id + secret for each consumer)
-  { "binding": "CF_ACCESS_CLIENT_ID_CHITTYCOMMAND", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_ID_CHITTYCOMMAND" },
-  { "binding": "CF_ACCESS_CLIENT_SECRET_CHITTYCOMMAND", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_SECRET_CHITTYCOMMAND" },
-  { "binding": "CF_ACCESS_CLIENT_ID_CHITTYAGENT", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_ID_CHITTYAGENT" },
-  { "binding": "CF_ACCESS_CLIENT_SECRET_CHITTYAGENT", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_SECRET_CHITTYAGENT" },
-  { "binding": "CF_ACCESS_CLIENT_ID_CHITTYFINANCE", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_ID_CHITTYFINANCE" },
-  { "binding": "CF_ACCESS_CLIENT_SECRET_CHITTYFINANCE", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "CF_ACCESS_CLIENT_SECRET_CHITTYFINANCE" },
-  // Mercury OIDC (CF Access SaaS app — programmatic token exchange for write operations)
-  { "binding": "MERCURY_OIDC_CLIENT_ID", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_OIDC_CLIENT_ID" },
-  { "binding": "MERCURY_OIDC_CLIENT_SECRET", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_OIDC_CLIENT_SECRET" },
-  { "binding": "MERCURY_OIDC_ISSUER", "store_id": "e914522471964c3c8cf1e601770edcc3", "secret_name": "MERCURY_OIDC_ISSUER" }
-],
+// CLOUDFLARE SECRETS STORE — shared secret definitions
+// Store: default_secrets_store (e914522471964c3c8cf1e601770edcc3)
+// NOTE: Per this repo's Wrangler pattern, env blocks are self-contained.
+// Declare `secrets_store_secrets` inside each `env.*` block; do not rely
+// on top-level inheritance for runtime secret bindings.
+// ──────────────────────────────────────────────────────────────────
```
```jsonc
{ "binding": "SVC_TASKS", "service": "chittyagent-tasks", "environment": "production" },
{ "binding": "SVC_LEDGER", "service": "chittyledger", "environment": "production" },
{ "binding": "SVC_FINANCE", "service": "chittyfinance", "environment": "production" },
{ "binding": "SVC_CONTEXTUAL", "service": "chittycontextual", "environment": "production" },
{ "binding": "SVC_ID", "service": "chittyid", "environment": "production" },
{ "binding": "SVC_EVIDENCE", "service": "chittyevidence", "environment": "production" },
{ "binding": "SVC_CHRONICLE", "service": "chittychronicle", "environment": "production" },
{ "binding": "SVC_DISPUTES", "service": "chittydispute" },
{ "binding": "SVC_SCORE", "service": "chittyscore", "environment": "production" },
```
In env.dev, most service bindings explicitly target the production environment. This makes local/dev traffic hit production services (data mutation / audit noise / cost) and is easy to overlook. If the intent is environment parity, bind to the corresponding downstream env (or omit environment to target the same env), and only point at prod when explicitly required.
```diff
-{ "binding": "SVC_TASKS", "service": "chittyagent-tasks", "environment": "production" },
-{ "binding": "SVC_LEDGER", "service": "chittyledger", "environment": "production" },
-{ "binding": "SVC_FINANCE", "service": "chittyfinance", "environment": "production" },
-{ "binding": "SVC_CONTEXTUAL", "service": "chittycontextual", "environment": "production" },
-{ "binding": "SVC_ID", "service": "chittyid", "environment": "production" },
-{ "binding": "SVC_EVIDENCE", "service": "chittyevidence", "environment": "production" },
-{ "binding": "SVC_CHRONICLE", "service": "chittychronicle", "environment": "production" },
-{ "binding": "SVC_DISPUTES", "service": "chittydispute" },
-{ "binding": "SVC_SCORE", "service": "chittyscore", "environment": "production" },
+{ "binding": "SVC_TASKS", "service": "chittyagent-tasks" },
+{ "binding": "SVC_LEDGER", "service": "chittyledger" },
+{ "binding": "SVC_FINANCE", "service": "chittyfinance" },
+{ "binding": "SVC_CONTEXTUAL", "service": "chittycontextual" },
+{ "binding": "SVC_ID", "service": "chittyid" },
+{ "binding": "SVC_EVIDENCE", "service": "chittyevidence" },
+{ "binding": "SVC_CHRONICLE", "service": "chittychronicle" },
+{ "binding": "SVC_DISPUTES", "service": "chittydispute" },
+{ "binding": "SVC_SCORE", "service": "chittyscore" },
```
```jsonc
{ "binding": "SVC_TASKS", "service": "chittyagent-tasks", "environment": "production" },
{ "binding": "SVC_LEDGER", "service": "chittyledger", "environment": "production" },
{ "binding": "SVC_FINANCE", "service": "chittyfinance", "environment": "production" },
{ "binding": "SVC_CONTEXTUAL", "service": "chittycontextual", "environment": "production" },
{ "binding": "SVC_ID", "service": "chittyid", "environment": "production" },
{ "binding": "SVC_EVIDENCE", "service": "chittyevidence", "environment": "production" },
{ "binding": "SVC_CHRONICLE", "service": "chittychronicle", "environment": "production" },
{ "binding": "SVC_DISPUTES", "service": "chittydispute" },
{ "binding": "SVC_SCORE", "service": "chittyscore", "environment": "production" },
```
In env.staging, most service bindings explicitly target the production environment. This risks staging exercising production dependencies and impacting prod data/costs. If unintended, switch these to the staging equivalents (or same-env defaults) so staging remains isolated.
```diff
-{ "binding": "SVC_TASKS", "service": "chittyagent-tasks", "environment": "production" },
-{ "binding": "SVC_LEDGER", "service": "chittyledger", "environment": "production" },
-{ "binding": "SVC_FINANCE", "service": "chittyfinance", "environment": "production" },
-{ "binding": "SVC_CONTEXTUAL", "service": "chittycontextual", "environment": "production" },
-{ "binding": "SVC_ID", "service": "chittyid", "environment": "production" },
-{ "binding": "SVC_EVIDENCE", "service": "chittyevidence", "environment": "production" },
-{ "binding": "SVC_CHRONICLE", "service": "chittychronicle", "environment": "production" },
-{ "binding": "SVC_DISPUTES", "service": "chittydispute" },
-{ "binding": "SVC_SCORE", "service": "chittyscore", "environment": "production" },
+{ "binding": "SVC_TASKS", "service": "chittyagent-tasks", "environment": "staging" },
+{ "binding": "SVC_LEDGER", "service": "chittyledger", "environment": "staging" },
+{ "binding": "SVC_FINANCE", "service": "chittyfinance", "environment": "staging" },
+{ "binding": "SVC_CONTEXTUAL", "service": "chittycontextual", "environment": "staging" },
+{ "binding": "SVC_ID", "service": "chittyid", "environment": "staging" },
+{ "binding": "SVC_EVIDENCE", "service": "chittyevidence", "environment": "staging" },
+{ "binding": "SVC_CHRONICLE", "service": "chittychronicle", "environment": "staging" },
+{ "binding": "SVC_DISPUTES", "service": "chittydispute" },
+{ "binding": "SVC_SCORE", "service": "chittyscore", "environment": "staging" },
```
```js
for (const f of itemDetails.fields || []) {
  if (f.label?.toLowerCase() === parsed.field.toLowerCase()) { f.value = value; fieldFound = true; break; }
```
put() updates an existing field by matching only f.label. Elsewhere (fetchFromConnect) you match by label or id; using only label here can miss an existing field and create a duplicate field instead of updating. Align the matching logic to check both label and id (case-insensitive).
```diff
-for (const f of itemDetails.fields || []) {
-  if (f.label?.toLowerCase() === parsed.field.toLowerCase()) { f.value = value; fieldFound = true; break; }
+const normalizedField = parsed.field.toLowerCase();
+for (const f of itemDetails.fields || []) {
+  const matchesLabel = f.label?.toLowerCase() === normalizedField;
+  const matchesId = f.id?.toLowerCase() === normalizedField;
+  if (matchesLabel || matchesId) { f.value = value; fieldFound = true; break; }
```
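A quick runnable check of the label-or-id matching suggested above. The mock item shape (`{ id, label, value }`) follows the 1Password Connect field structure; the concrete values are illustrative.

```javascript
// A field that only matches by id — the label-only version would miss it
// and create a duplicate instead of updating in place.
const itemDetails = {
  fields: [
    { id: "abc123", label: "credential", value: "old" },
    { id: "username", label: "username", value: "svc" },
  ],
};
const parsed = { field: "ABC123" }; // case-insensitive id match

const normalizedField = parsed.field.toLowerCase();
let fieldFound = false;
for (const f of itemDetails.fields || []) {
  const matchesLabel = f.label?.toLowerCase() === normalizedField;
  const matchesId = f.id?.toLowerCase() === normalizedField;
  if (matchesLabel || matchesId) { f.value = "new"; fieldFound = true; break; }
}
console.log(fieldFound); // true
console.log(itemDetails.fields[0].value); // "new"
```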
```js
if (options.notes) {
  const nf = itemDetails.fields.find(f => f.purpose === "NOTES");
  if (nf) nf.value = options.notes;
}
```
put() uses itemDetails.fields.find(...) when updating notes, but itemDetails.fields can be undefined (you guarded it earlier with || []). This will throw for items without a fields array. Use (itemDetails.fields || []).find(...) and/or initialize itemDetails.fields before calling .find.
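A guarded version might look like the sketch below. The `NOTES` purpose value matches the snippet above; appending a new notes field when none exists is an assumption about the desired behavior, not confirmed by the PR.

```javascript
// Sketch: tolerate items without a fields array instead of throwing
// on .find(); initialize fields before touching it.
function setNotes(itemDetails, notes) {
  const fields = (itemDetails.fields ||= []);
  const nf = fields.find((f) => f.purpose === "NOTES");
  if (nf) nf.value = notes;
  else fields.push({ purpose: "NOTES", value: notes }); // assumed upsert behavior
}

const item = {}; // no fields array at all — the crashing case
setNotes(item, "rotated by ChittyConnect");
console.log(item.fields[0].value); // "rotated by ChittyConnect"
```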
```js
// Derive service needs from entity's ledger history
const usedServices = new Set();
if (this.env.HYPERDRIVE) {
  try {
    const { default: postgres } = await import("postgres");
    const sql = postgres(this.env.HYPERDRIVE.connectionString);
```
The resume/provisioning/domain-trust features are gated on this.env.HYPERDRIVE, but this PR doesn’t add a Hyperdrive binding in wrangler.jsonc. As written, these will silently return empty data in all envs. Add the Hyperdrive binding per env (per repo’s self-contained env pattern) or use an existing configured DB access path.
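One way to make the missing binding loud rather than silent is sketched below. The function name and return shape are illustrative; the idea is that "no HYPERDRIVE binding" should be distinguishable from "entity has no history".

```javascript
// Sketch: return an explicit degraded flag when the binding is absent,
// so callers (and logs) can tell config gaps apart from empty data.
async function loadUsedServices(env, chittyId) {
  if (!env.HYPERDRIVE) {
    return { services: new Set(), degraded: true, reason: "HYPERDRIVE binding missing" };
  }
  // ...query the ledger via postgres(env.HYPERDRIVE.connectionString)...
  return { services: new Set(), degraded: false };
}

loadUsedServices({}, "CID-001").then((r) => console.log(r.degraded)); // true
```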
```js
try {
  await c.env.DB.prepare(
    `INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) VALUES (1password_store, ?, ?, ?, datetime(now))`
```
The audit INSERT statement is invalid SQLite/D1 SQL: 1password_store is unquoted (treated as an identifier) and datetime(now) should be datetime('now'). This will fail and skip audit logging. Quote the type string and use the correct datetime function (matching the GET route’s working INSERT).
```diff
-`INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) VALUES (1password_store, ?, ?, ?, datetime(now))`
+`INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) VALUES ('1password_store', ?, ?, ?, datetime('now'))`
```
```js
const validVaults = ["infrastructure", "services", "integrations", "emergency"];
if (!validVaults.includes(vault)) {
  return c.json({ success: false, error: { code: "INVALID_VAULT", message: `Invalid vault: ${vault}` } }, 400);
}
```
PUT allows vault emergency, but the GET route in this same file validates only ["infrastructure","services","integrations"]. This makes emergency credentials writable but not retrievable via the API (and is inconsistent behavior for clients). Either add emergency to the GET allowlist or explicitly document/deny reads for that vault in both routes.
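One resolution is a single shared allowlist, sketched below. Whether "emergency" should be readable is a policy call the PR doesn't settle; the point is that both routes consult the same source of truth so they can't drift. Names here mirror the snippet above but the shared module is hypothetical.

```javascript
// Sketch: one allowlist for both GET and PUT (would live in a shared module).
const VALID_VAULTS = ["infrastructure", "services", "integrations", "emergency"];

function validateVault(vault) {
  if (!VALID_VAULTS.includes(vault)) {
    return { ok: false, error: { code: "INVALID_VAULT", message: `Invalid vault: ${vault}` } };
  }
  return { ok: true };
}

console.log(validateVault("emergency").ok); // true
console.log(validateVault("bogus").error.code); // "INVALID_VAULT"
```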
```js
const trustLevel = Number(resolution.context.trust_level || 0);
const identityClass =
  trustLevel >= 4 ? "agent" :
  trustLevel >= 3 ? "coordinator" :
  trustLevel >= 1 ? "context" : "advocate";
```
identityClass is computed from trustLevel but never used (the code uses proposedProvisioning.identityClass instead). This is a dead variable and will trigger no-unused-vars warnings; remove it or use it consistently.
```diff
-const trustLevel = Number(resolution.context.trust_level || 0);
-const identityClass =
-  trustLevel >= 4 ? "agent" :
-  trustLevel >= 3 ? "coordinator" :
-  trustLevel >= 1 ? "context" : "advocate";
```
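If the tier mapping is meant to be kept rather than deleted, factoring it into a function makes it either used or clearly removable. The thresholds below are taken verbatim from the snippet above; the function name is illustrative.

```javascript
// The tier mapping from the diff, as a testable function.
function classifyIdentity(trustLevel) {
  const level = Number(trustLevel || 0);
  if (level >= 4) return "agent";
  if (level >= 3) return "coordinator";
  if (level >= 1) return "context";
  return "advocate";
}

console.log(classifyIdentity(4)); // "agent"
console.log(classifyIdentity(3)); // "coordinator"
console.log(classifyIdentity(1)); // "context"
console.log(classifyIdentity(0)); // "advocate"
```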
```sh
# Until vault item 6pnxym6ke46wote7qwexaakni4 is populated, this line will fail. The secret
# is currently set directly in Cloudflare. To re-provision:
#   op read "op://ChittyOS/ChittyGateway API Token/credential" | \
#     wrangler secret put CHITTY_TASK_TOKEN --env production
deploy_secret "CHITTY_TASK_TOKEN" "$VAULT_SERVICES" "6pnxym6ke46wote7qwexaakni4" "credential"
```
This change documents that deploying CHITTY_TASK_TOKEN will fail until a specific 1Password item is populated, but the script still treats “NOT IN VAULT” as a hard failure and exits non-zero when any secret fails. That makes the deploy script reliably fail in its current stated condition. Consider temporarily skipping this secret (warn + continue) until the item is populated, or gate it behind an env flag so deployments aren’t blocked.
```diff
-# Until vault item 6pnxym6ke46wote7qwexaakni4 is populated, this line will fail. The secret
-# is currently set directly in Cloudflare. To re-provision:
-#   op read "op://ChittyOS/ChittyGateway API Token/credential" | \
-#     wrangler secret put CHITTY_TASK_TOKEN --env production
-deploy_secret "CHITTY_TASK_TOKEN" "$VAULT_SERVICES" "6pnxym6ke46wote7qwexaakni4" "credential"
+# Until vault item 6pnxym6ke46wote7qwexaakni4 is populated, deploying this secret from
+# Connect is expected to fail. By default this script skips it to avoid blocking unrelated
+# deployments. To enable deployment once the item is populated, set:
+#   DEPLOY_CHITTY_TASK_TOKEN=true ./scripts/deploy-secrets-connect.sh --env production
+# To re-provision manually:
+#   op read "op://ChittyOS/ChittyGateway API Token/credential" | \
+#     wrangler secret put CHITTY_TASK_TOKEN --env production
+if [[ "${DEPLOY_CHITTY_TASK_TOKEN:-false}" == "true" ]]; then
+  deploy_secret "CHITTY_TASK_TOKEN" "$VAULT_SERVICES" "6pnxym6ke46wote7qwexaakni4" "credential"
+else
+  echo -e "  ${YELLOW}WARN${NC} CHITTY_TASK_TOKEN skipped; set DEPLOY_CHITTY_TASK_TOKEN=true after vault item 6pnxym6ke46wote7qwexaakni4 is populated"
+  NOT_IN_VAULT=$((NOT_IN_VAULT + 1))
+fi
```
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/api/routes/context-resolution.js (1)
163-180: ⚠️ Potential issue | 🟠 Major — Expose the coordination-need requirement in the resolve payload.
The resolver can return coordinationNeedRequired: true, and createContext() hard-fails without that justification. This response still drops that signal, so a client can complete /resolve and then hit BIND_FAILED on /bind without knowing it needed to collect pendingContext.coordinationNeed.

Suggested fix
```diff
   requiresConfirmation: resolution.action === "create_new" || resolution.confidence < 0.9,
+  coordinationNeedRequired:
+    Boolean(resolution.coordinationNeedRequired),
   // Entity resume — work history, competencies, outcomes
   resume,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/api/routes/context-resolution.js` around lines 163 - 180, The resolve response currently drops the resolver's coordination-need signal, causing clients to miss that pendingContext.coordinationNeed must be provided before bind; update the payload assembly in the return/apiResponse block so it includes a coordinationNeedRequired flag sourced from resolution (e.g., coordinationNeedRequired: resolution.coordinationNeedRequired || false) and, if resolution indicates a coordination need and pendingContext exists, ensure pendingContext.coordinationNeed is preserved in the returned pendingContext; touch the code that builds the response object (where resolution, pendingContext and resume/proposedProvisioning are assembled) so the client can see the requirement before calling createContext()/bind.
🧹 Nitpick comments (1)
wrangler.jsonc (1)
143-154: Double-check the non-prod → production service bindings.

Both dev and staging are wired to production ledger/finance/contextual/etc. Any testing through those environments will hit live backends and can skew production trust/provisioning state or write real data. If this is intentional, consider read-only facades or an env guard around mutating calls.

Also applies to: 233-244
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wrangler.jsonc` around lines 143 - 154, The services block in wrangler.jsonc currently points dev/staging to production backends (e.g., bindings SVC_LEDGER, SVC_FINANCE, SVC_CONTEXTUAL, SVC_ID, SVC_EVIDENCE, SVC_CHRONICLE, SVC_SCORE, SVC_STORAGE), so update the non-prod deployments to use non-production service bindings or environment values (or dedicate test service names) instead of "production"; alternatively implement read-only facades or an environment guard around mutating operations in the services that perform writes (ledger/finance/contextual/etc.) so dev/staging cannot perform production mutations—apply the same change for the other block noted (lines referenced in review).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/deploy-secrets-connect.sh`:
- Around line 161-170: The CHITTY_TASK_TOKEN deploy_secret call currently treats
an empty Vault item as a fatal failure; change the script so this specific
secret is optional: either add an optional flag to deploy_secret (e.g.,
deploy_secret(..., optional=true)) or wrap the CHITTY_TASK_TOKEN invocation with
a pre-check that calls the vault reader (same mechanism as deploy_secret uses)
and skips the deploy if the Vault item "6pnxym6ke46wote7qwexaakni4" has no
credential, logging a warning instead of incrementing the global failure count;
update the call site (the deploy_secret "CHITTY_TASK_TOKEN" line) to use the new
optional behavior so the overall script does not exit 1 when that vault item is
empty.
In `@src/api/routes/credentials.js`:
- Around line 682-705: The route handler registered in credentialsRoutes.put
currently allows any authenticated API key to call client.put and overwrite any
vault/item/field; update the handler to enforce explicit write authorization by
checking the apiKey metadata before calling OnePasswordConnectClient.put:
require a write-scoped permission (e.g., apiKeyMeta.scopes includes
"credentials:write") or verify that requestingService (derived from
apiKeyMeta.service/name) is authorized to modify the target resource (e.g.,
service-specific ownership check or a privileged allowlist for sensitive vaults
like "emergency"); reject requests with a 403 and an appropriate error code when
authorization fails; keep the OnePasswordConnectClient usage (new
OnePasswordConnectClient(c.env) and client.put) but only invoke it after the
authorization check.
- Around line 672-681: Remove the stray orphan "/**" that precedes the "PUT
/api/credentials/:vault/:item/:field" docblock so the JSDoc comment is properly
formed; in practice delete the extra "/**" before the PUT route comment (the one
causing the docblock to be left open) so the subsequent comment blocks (e.g.,
the "GET /api/credentials/health" docblock) are inside valid comment syntax and
the file parses without the SyntaxError.
- Around line 707-710: The INSERT into credential_provisions in the try block is
failing because the literal 1password_store is unquoted and datetime(now) is
invalid; update the c.env.DB.prepare call (the SQL string passed to
c.env.DB.prepare / the INSERT INTO credential_provisions statement) to either
use a parameter placeholder for the type or quote the literal
('1password_store') and change datetime(now) to datetime('now'), and ensure the
catch does not silently swallow the error — at minimum log the caught error (or
rethrow) so failed audit inserts are visible instead of returning success.
In `@src/intelligence/context-resolver.js`:
- Around line 1240-1264: The resume object in context-resolver.js is missing the
required behavioral trait scores; update the resume literal (the const resume)
to include keys volatile, compliant, creative, methodical, resilient, and
trustAligned, sourcing each from the incoming context (e.g., context
volatile/compliant/etc. if present) and defaulting to 0.0; ensure values are
numeric floats normalized to the 0.0–1.0 range (cast/round as needed) so
downstream consumers of resume receive the six trait scores on a 0.0–1.0 scale.
- Around line 990-1000: The code currently reads from the trust_scores cache for
chittyId (SELECT ... FROM trust_scores WHERE identity_id = (SELECT id FROM
identities WHERE chitty_id = ${chittyId})), which returns stale TY/VY/RY used to
compute identityClass and recommended services; instead, before using that row
call the DRL reckoning routine (e.g., invoke the existing reckoning
function/service for this identity) or validate the row age and trigger a
refresh/rejection when older than the allowed TTL, then re-query trust_scores so
identityClass and composite_score are derived from a fresh reckoning for
chittyId.
In `@src/mcp/tool-dispatcher.js`:
- Around line 884-885: Update the MCP tool registry schemas so they match the
dispatcher usage: add the new parameters entity_slug and max_num_results to
search actions and add content_hash and evidence_id to retrieve actions in the
tool-spec/parameter definitions (the objects used around the registry's
action/parameter declarations—see the code that defines the action schemas used
by registerTool/getToolSpec). Also make the retrieve action not require query
(make query optional) so clients can call retrieve using
content_hash/evidence_id; ensure the parameter names and types match exactly
(entity_slug, max_num_results, content_hash, evidence_id) so clients following
the registry contract can use the new dispatcher paths.
- Around line 881-889: The direct calls to env.SVC_STORAGE.fetch(...) should be
replaced by the repo's authenticated JSON helper flow: call
requireServiceAuth(...) to obtain the auth headers/token and then use
checkAndParseJson(...) (or the established helper that performs the fetch and
JSON parsing with proper error handling) instead of calling response.json()
directly; update both places where env.SVC_STORAGE.fetch is used (the block that
builds params and the similar block around lines 899-909) so errors propagate
correctly and inter-service auth rules are honored, keeping the same query
params (params / args.entity_slug / limit) when passing the request to the
helper.
In `@src/services/1password-connect-client.js`:
- Around line 574-580: The loop that updates fields only matches on f.label, so
if the existing field is stored under a different label but the same id
(parsed.field) it will create a duplicate; update the matching logic in the
itemDetails.fields iteration (the for...of that checks f.label) to also consider
f.id (e.g., treat a match when f.id === parsed.field or when f.id.toLowerCase()
=== parsed.field.toLowerCase() to mirror the label case-insensitive check), then
set f.value = value and fieldFound = true to update the existing field instead
of appending a new one; ensure this change aligns with how get() resolves fields
by label or id.
---
Outside diff comments:
In `@src/api/routes/context-resolution.js`:
- Around line 163-180: The resolve response currently drops the resolver's
coordination-need signal, causing clients to miss that
pendingContext.coordinationNeed must be provided before bind; update the payload
assembly in the return/apiResponse block so it includes a
coordinationNeedRequired flag sourced from resolution (e.g.,
coordinationNeedRequired: resolution.coordinationNeedRequired || false) and, if
resolution indicates a coordination need and pendingContext exists, ensure
pendingContext.coordinationNeed is preserved in the returned pendingContext;
touch the code that builds the response object (where resolution, pendingContext
and resume/proposedProvisioning are assembled) so the client can see the
requirement before calling createContext()/bind.
---
Nitpick comments:
In `@wrangler.jsonc`:
- Around line 143-154: The services block in wrangler.jsonc currently points
dev/staging to production backends (e.g., bindings SVC_LEDGER, SVC_FINANCE,
SVC_CONTEXTUAL, SVC_ID, SVC_EVIDENCE, SVC_CHRONICLE, SVC_SCORE, SVC_STORAGE), so
update the non-prod deployments to use non-production service bindings or
environment values (or dedicate test service names) instead of "production";
alternatively implement read-only facades or an environment guard around
mutating operations in the services that perform writes
(ledger/finance/contextual/etc.) so dev/staging cannot perform production
mutations—apply the same change for the other block noted (lines referenced in
review).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 62af02b0-0d90-499d-a921-c4db54eec0e5
📒 Files selected for processing (7)
- scripts/deploy-secrets-connect.sh
- src/api/routes/context-resolution.js
- src/api/routes/credentials.js
- src/intelligence/context-resolver.js
- src/mcp/tool-dispatcher.js
- src/services/1password-connect-client.js
- wrangler.jsonc
```sh
# CHITTY_TASK_TOKEN — auth token chittyconnect sends to tasks.chitty.cc (chittyagent-tasks)
# Source: CHITTY_API_GATEWAY_SERVICE_TOKEN (ChittyGateway API Token, item 6pnxym6ke46wote7qwexaakni4)
# NOTE: the chittyconnect-prod item (sozaaemylfw3krabpyueqwmytq) credential field is empty.
# chittyagent-tasks validates against CHITTY_AUTH_SERVICE_TOKEN which was provisioned via
# set-worker-secret.yml using the GitHub repo secret CHITTY_API_GATEWAY_SERVICE_TOKEN.
# Until vault item 6pnxym6ke46wote7qwexaakni4 is populated, this line will fail. The secret
# is currently set directly in Cloudflare. To re-provision:
#   op read "op://ChittyOS/ChittyGateway API Token/credential" | \
#     wrangler secret put CHITTY_TASK_TOKEN --env production
deploy_secret "CHITTY_TASK_TOKEN" "$VAULT_SERVICES" "6pnxym6ke46wote7qwexaakni4" "credential"
```
Don’t make CHITTY_TASK_TOKEN a guaranteed deployment failure.
The comment already says this source item is empty, but deploy_secret still counts that as FAILED and the script exits 1 when any failures are recorded. That makes every run stay red until someone manually populates the vault. Skip or gate this secret until the source exists so the rest of the sync can still succeed.
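A minimal sketch of the optional-secret behavior suggested here. Everything below is a stand-in for the real script: `read_vault_field` fakes the vault reader, the item names are invented, and the message text and flag position are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: an "optional" third argument lets deploy_secret skip an
# empty Vault item with a warning instead of counting a global failure.
read_vault_field() {
  eval "printf '%s' \"\${VAULT_${1}:-}\""   # fake vault: reads $VAULT_<item>
}

FAILURES=0

deploy_secret() {
  name="$1"; item="$2"; optional="${3:-false}"
  value="$(read_vault_field "$item")"
  if [ -z "$value" ]; then
    if [ "$optional" = "true" ]; then
      echo "WARN: $name skipped: vault item $item is empty"
      return 0                               # optional: not a failure
    fi
    echo "FAIL: $name: vault item $item is empty"
    FAILURES=$((FAILURES + 1))
    return 1
  fi
  echo "OK: $name deployed"                  # real script would call wrangler here
}

VAULT_populated_item="s3cret"
deploy_secret "CHITTY_MCP_TOKEN" "populated_item"
deploy_secret "CHITTY_TASK_TOKEN" "empty_item" "true"
echo "failures=$FAILURES"
```

With this shape, the run ends with `failures=0` even though CHITTY_TASK_TOKEN's source item is empty, so the rest of the sync still exits cleanly.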
```js
/**

/**
 * PUT /api/credentials/:vault/:item/:field
 *
 * Store or update a credential in 1Password via ChittyConnect.
 * Source of truth: value → 1Password → cached in KV.
 *
 * Body: { "value": "secret", "notes": "optional context" }
 */
```
🧩 Analysis chain
🏁 Script executed (repository: chittyos/chittyconnect):
```sh
head -n 730 src/api/routes/credentials.js | tail -n 70 | cat -n
```
🏁 Script executed:
```sh
node --check src/api/routes/credentials.js
```
Remove the orphan `/**` on line 672; it breaks the comment block and causes a syntax error.
The extra `/**` before the PUT route docblock leaves the comment markers unbalanced, so subsequent comment blocks (e.g., the `GET /api/credentials/health` docblock) fall outside valid comment syntax. This results in `SyntaxError: Unexpected token '*'` at line 721 and prevents the file from parsing.
Fix
```diff
-/**
-
 /**
  * PUT /api/credentials/:vault/:item/:field
  *
  * Store or update a credential in 1Password via ChittyConnect.
  * Source of truth: value → 1Password → cached in KV.
  *
  * Body: { "value": "secret", "notes": "optional context" }
  */
 credentialsRoutes.put("/:vault/:item/:field", async (c) => {
   try {
     const vault = c.req.param("vault");
     const item = c.req.param("item");
     const field = c.req.param("field");
     const validVaults = ["infrastructure", "services", "integrations", "emergency"];
     if (!validVaults.includes(vault)) {
       return c.json({ success: false, error: { code: "INVALID_VAULT", message: `Invalid vault: ${vault}` } }, 400);
     }
     const body = await c.req.json();
     if (!body.value) {
       return c.json({ success: false, error: { code: "MISSING_VALUE", message: "Request body must include value" } }, 400);
     }
     const apiKeyMeta = c.get("apiKey") || {};
     const requestingService = apiKeyMeta.service || apiKeyMeta.name || "unknown";
     console.log(`[Credentials] Storing ${vault}/${item}/${field} (by ${requestingService})`);
     const { OnePasswordConnectClient } = await import("../../services/1password-connect-client.js");
     const client = new OnePasswordConnectClient(c.env);
     const result = await client.put(`${vault}/${item}/${field}`, body.value, { notes: body.notes });
     try {
       await c.env.DB.prepare(
         `INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) VALUES (1password_store, ?, ?, ?, datetime(now))`
       ).bind(item, field, requestingService).run();
     } catch (dbErr) {
       console.warn("[Credentials] Audit log failed:", dbErr.message);
     }
     return c.json({ success: true, ...result, metadata: { vault, item, field, timestamp: new Date().toISOString() } }, result.action === "created" ? 201 : 200);
   } catch (error) {
     console.error("[Credentials] Store error:", error);
     return c.json({ success: false, error: { code: "STORE_FAILED", message: error.message } }, 500);
   }
 });
+
+/**
+ * GET /api/credentials/health
```
```js
credentialsRoutes.put("/:vault/:item/:field", async (c) => {
  try {
    const vault = c.req.param("vault");
    const item = c.req.param("item");
    const field = c.req.param("field");

    const validVaults = ["infrastructure", "services", "integrations", "emergency"];
    if (!validVaults.includes(vault)) {
      return c.json({ success: false, error: { code: "INVALID_VAULT", message: `Invalid vault: ${vault}` } }, 400);
    }

    const body = await c.req.json();
    if (!body.value) {
      return c.json({ success: false, error: { code: "MISSING_VALUE", message: "Request body must include value" } }, 400);
    }

    const apiKeyMeta = c.get("apiKey") || {};
    const requestingService = apiKeyMeta.service || apiKeyMeta.name || "unknown";

    console.log(`[Credentials] Storing ${vault}/${item}/${field} (by ${requestingService})`);

    const { OnePasswordConnectClient } = await import("../../services/1password-connect-client.js");
    const client = new OnePasswordConnectClient(c.env);
    const result = await client.put(`${vault}/${item}/${field}`, body.value, { notes: body.notes });
```
Add explicit authorization before allowing secret writes.
This route is authenticated, but not authorized: any valid API key can call client.put(...) for any vault/item/field, including emergency. That lets one service overwrite another service's credentials unless you gate this behind a write-specific scope or a privileged-service allowlist.
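One way to add that gate is a small authorization helper called before `client.put(...)`. This is a hedged sketch: the scope name `credentials:write`, the privileged-service allowlist, and the function name are all assumptions, not existing repo conventions.

```javascript
// Sketch: explicit write authorization for the credentials PUT route.
function authorizeCredentialWrite(apiKeyMeta, vault) {
  const scopes = apiKeyMeta.scopes || [];
  if (!scopes.includes("credentials:write")) {
    return { ok: false, code: "WRITE_SCOPE_REQUIRED" };
  }
  // Hypothetical allowlist for sensitive vaults like "emergency".
  const privileged = ["chittyconnect", "chittyops"];
  const service = apiKeyMeta.service || apiKeyMeta.name || "unknown";
  if (vault === "emergency" && !privileged.includes(service)) {
    return { ok: false, code: "VAULT_FORBIDDEN" };
  }
  return { ok: true };
}
```

In the handler, the check would run before constructing the client: `const auth = authorizeCredentialWrite(apiKeyMeta, vault); if (!auth.ok) return c.json({ success: false, error: { code: auth.code } }, 403);`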
```js
    try {
      await c.env.DB.prepare(
        `INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) VALUES (1password_store, ?, ?, ?, datetime(now))`
      ).bind(item, field, requestingService).run();
```
🧩 Analysis chain
🏁 Script executed:
```sh
# First, let's examine the file around lines 707-710
sed -n '700,720p' src/api/routes/credentials.js
# Let's also check the broader context to understand error handling
sed -n '700,750p' src/api/routes/credentials.js
```
🌐 Web query: SQLite datetime() function syntax requirements
💡 Result:
In SQLite, `datetime()` follows the same pattern as the other built-in date/time functions:
- Syntax: `datetime(time-value, modifier, modifier, ...)`. `time-value` is optional (if omitted, SQLite uses the current time) and is followed by zero or more `modifier` arguments, applied left-to-right.
- Allowed `time-value` formats (common ones): ISO-8601 text such as `YYYY-MM-DD`, `YYYY-MM-DD HH:MM[:SS[.SSS]]`, `YYYY-MM-DDTHH:MM[:SS[.SSS]]`, time-only `HH:MM[:SS[.SSS]]` (assumes date 2000-01-01), the literal `'now'`, or a numeric Julian day (and Unix timestamps when used with modifiers like `'unixepoch'`/`'auto'`).
- Return format: text `YYYY-MM-DD HH:MM:SS` (or `...SS.SSS` with the `subsec`/`subsecond` modifier).
- Modifiers (examples of what's accepted): `NNN days|hours|minutes|seconds|months|years`, `start of month|year|day`, `weekday N`, `localtime`, `utc`, `unixepoch`, `julianday`, `auto`, `subsec`.

Examples:
```sql
SELECT datetime('now');                        -- current UTC timestamp
SELECT datetime('now','localtime');            -- current local timestamp
SELECT datetime('2026-04-06 10:00','+2 days'); -- add 2 days
SELECT datetime(1712400000,'unixepoch');       -- interpret as Unix seconds
```
Source: https://www.sqlite.org/lang_datefunc.html
The credential-write audit insert never succeeds.
`1password_store` is unquoted and `datetime(now)` is invalid SQLite syntax (the function requires a quoted string argument: `datetime('now')`). The exception is swallowed by the try-catch block, so the endpoint returns success even though the audit-log insert fails every time.
Suggested fix
```diff
-      `INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at) VALUES (1password_store, ?, ?, ?, datetime(now))`
+      `INSERT INTO credential_provisions (type, service, purpose, requesting_service, created_at)
+       VALUES ('1password_store', ?, ?, ?, datetime('now'))`
```
```js
// Fresh reckoning from trust_scores cache (updated by DRL service)
const [scores] = await sql`
  SELECT ty_score, vy_score, ry_score, signal_count, composite_score,
         trust_level, confidence, reckoned_at
  FROM trust_scores
  WHERE identity_id = (
    SELECT id FROM identities WHERE chitty_id = ${chittyId} LIMIT 1
  )
  ORDER BY reckoned_at DESC
  LIMIT 1
`;
```
This is still using cached trust, not a fresh DRL reckoning.
The PR contract says provisioning should compute TY/VY/RY at request time, but this path just selects the newest row from trust_scores. If that table is stale, identityClass and the recommended services are stale too. Either invoke the reckoning step here or reject/refresh aged rows before using them.
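The "reject/refresh aged rows" option could look like the freshness gate below. This is a sketch under assumptions: the 15-minute TTL, the `triggerReckoning` hook, and the function names are hypothetical; the real DRL reckoning call would replace the hook.

```javascript
// Sketch: only use cached trust rows younger than a TTL; otherwise trigger a
// fresh DRL reckoning and re-query before deriving identityClass.
const TRUST_TTL_MS = 15 * 60 * 1000; // assumed TTL

function isFreshReckoning(row, now = Date.now()) {
  if (!row || !row.reckoned_at) return false;
  return now - new Date(row.reckoned_at).getTime() <= TRUST_TTL_MS;
}

async function getTrustScores(chittyId, queryLatest, triggerReckoning) {
  let row = await queryLatest(chittyId);
  if (!isFreshReckoning(row)) {
    await triggerReckoning(chittyId); // recompute TY/VY/RY at request time
    row = await queryLatest(chittyId); // re-query so scores reflect the fresh reckoning
  }
  return row;
}
```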
```js
const resume = {
  chittyId,
  displayName: context?.display_name || null,
  entityType: "P", // Default; should come from ledger
  lifecycleState: context?.status || "unknown",
  trustLevel: context?.trust_level || 0,

  // From D1 context_dna
  competencies: context?.competencies || [],
  expertiseDomains: context?.expertise_domains || [],
  successRate: context?.success_rate ? Math.round(context.success_rate * 100) + "%" : null,

  // Domain-scoped trust — WHAT to trust this entity with
  domainTrust: {},

  // From ChittyLedger (Neon) — populated below
  totalSessions: 0,
  totalToolCalls: 0,
  recentProjects: [],
  recentActivity: [],
  lineage: null,
  createdAt: null,
  lastSeen: null,
  archetype: null,
};
```
The resume payload is still missing the required behavioral trait scores.
The new entity resume returns competencies/domainTrust/archetype, but it still does not include volatile, compliant, creative, methodical, resilient, and trustAligned on a 0.0-1.0 scale. Downstream context-intelligence consumers still cannot satisfy the module contract. As per coding guidelines, "Context intelligence modules must implement behavioral trait scoring (volatile, compliant, creative, methodical, resilient, trustAligned) on a 0.0-1.0 scale".
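Extracting and normalizing the six trait scores could be done with a small helper like the one below, then spread into the resume literal. The field names on `context` are assumptions; only the trait names and the 0.0-1.0 contract come from the guideline quoted above.

```javascript
// Sketch: pull the six behavioral trait scores from the incoming context,
// clamped to the required 0.0-1.0 range, defaulting to 0.0 when absent.
const TRAITS = ["volatile", "compliant", "creative", "methodical", "resilient", "trustAligned"];

function clamp01(value) {
  const n = Number(value);
  if (!Number.isFinite(n)) return 0.0; // missing or non-numeric -> default
  return Math.min(1, Math.max(0, n));
}

function traitScores(context) {
  const out = {};
  for (const trait of TRAITS) out[trait] = clamp01(context?.[trait]);
  return out;
}
```

Usage would be something like `const resume = { ...baseResume, ...traitScores(context) };` so every consumer sees all six keys as floats in range.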
```js
if (!env.SVC_STORAGE) {
  return { content: [{ type: "text", text: "ChittyStorage not configured (SVC_STORAGE binding missing)" }], isError: true };
}
if (!data || !data.success) {
  return {
    content: [
      {
        type: "text",
        text: `AI Search error (${response.status}): ${(text || "").slice(0, 300)}`,
      },
    ],
    isError: true,
  };
const params = new URLSearchParams({ q: args.query || "", limit: String(args.max_num_results || 10) });
if (args.entity_slug) params.set("entity", args.entity_slug);
const response = await env.SVC_STORAGE.fetch(`https://internal/api/docs?${params}`);
const data = await response.json();
if (!data.docs || !data.docs.length) {
  return { content: [{ type: "text", text: "No matching documents found." }] };
```
Route the new ChittyStorage calls through the existing authenticated JSON helper path.
Both branches call env.SVC_STORAGE.fetch() directly, so they skip requireServiceAuth(...) and checkAndParseJson(...). That means storage failures can be reported as "No matching documents found" / "Document not found" instead of the real upstream error, and the calls also violate the repo’s inter-service auth rule. As per coding guidelines, "Service tokens are required for inter-service calls".
Also applies to: 899-909
```js
const params = new URLSearchParams({ q: args.query || "", limit: String(args.max_num_results || 10) });
if (args.entity_slug) params.set("entity", args.entity_slug);
```
Update the MCP schemas with the new storage arguments.
The dispatcher now relies on entity_slug / max_num_results for search and content_hash / evidence_id for retrieve, but src/mcp/tool-registry.js:237-270 does not declare those fields and still requires query for retrieve. Clients that follow the registry contract cannot reliably use the new code path. As per coding guidelines, "Maintain MCP protocol compatibility for Claude integration in server implementations".
Also applies to: 902-904
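The registry updates asked for above could be sketched as JSON-Schema-style parameter definitions. The exact shape of the registry objects in `src/mcp/tool-registry.js` is not shown in this review, so treat the structure below as illustrative; only the parameter names come from the finding.

```javascript
// Sketch: search declares entity_slug/max_num_results; retrieve declares
// content_hash/evidence_id and no longer requires query.
const searchSchema = {
  type: "object",
  properties: {
    query: { type: "string", description: "Full-text query" },
    entity_slug: { type: "string", description: "Filter results to one entity" },
    max_num_results: { type: "number", description: "Result cap (default 10)" },
  },
  required: ["query"],
};

const retrieveSchema = {
  type: "object",
  properties: {
    query: { type: "string" }, // now optional
    content_hash: { type: "string", description: "Content-addressed lookup" },
    evidence_id: { type: "string" },
  },
  required: [], // retrieve may be called with content_hash/evidence_id alone
};
```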
```js
let fieldFound = false;
for (const f of itemDetails.fields || []) {
  if (f.label?.toLowerCase() === parsed.field.toLowerCase()) { f.value = value; fieldFound = true; break; }
}
if (!fieldFound) {
  itemDetails.fields = itemDetails.fields || [];
  itemDetails.fields.push({ id: parsed.field, type: "CONCEALED", label: parsed.field, value });
```
🛠️ Refactor suggestion | 🟠 Major
Match existing 1Password fields by id as well as label.
get() reads a field by label or id, but the update path only checks label. If an existing item stores the target field under a different label, this appends a duplicate concealed field instead of updating the original one, and later reads can still return the stale value.
Suggested fix
```diff
-  let fieldFound = false;
-  for (const f of itemDetails.fields || []) {
-    if (f.label?.toLowerCase() === parsed.field.toLowerCase()) { f.value = value; fieldFound = true; break; }
-  }
+  let fieldFound = false;
+  const fieldKey = parsed.field.toLowerCase();
+  for (const f of itemDetails.fields || []) {
+    if (
+      f.label?.toLowerCase() === fieldKey ||
+      f.id?.toLowerCase() === fieldKey
+    ) {
+      f.value = value;
+      fieldFound = true;
+      break;
+    }
+  }
```
Summary
Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Chores