
feat(examples): RSC concurrency safety PoC (rsc-poc-postgres + rsc-poc-mongo)#370

Open
aqrln wants to merge 8 commits into main from
tml-2164-rsc-concurrency-safety-poc

Conversation


@aqrln aqrln commented Apr 22, 2026

Closes TML-2164. Ready for review.

TL;DR

Two Next.js 16 App Router apps (examples/rsc-poc-postgres and
examples/rsc-poc-mongo) exercise parallel React Server Components, with each
app sharing a single Prisma Next runtime across its components. Full findings
write-up at docs/reference/rsc-concurrency-findings.md.

Verdict: no correctness bugs on either family. One performance bug (H2 —
redundant cold-start marker reads under onFirstUse/startup) with a
shared-promise-dedupe fix sketched out in the findings doc. Pool-sizing
guidance is deferred.

Where to start reviewing

  1. docs/reference/rsc-concurrency-findings.md
    — the permanent write-up. Covers per-hypothesis results, per-page operation
    accounting, and the recommended H2 fix. Best overview of the project.
  2. examples/rsc-poc-postgres/README.md — end-user view of the Postgres
    app, including observed numbers from manual k6 runs.
  3. examples/rsc-poc-mongo/README.md — same for the Mongo app, with an
    explicit "what's different and why" section up front.

Per-hypothesis results

  • H1 (ORM Collection cache race) — non-issue. Proxy get trap is
    synchronous; can't interleave on Node's event loop.
  • H2 (redundant marker reads under onFirstUse) — real bug,
    non-critical, fix recommended.
    Cold-start page shows K marker reads for K
    concurrent components. Fix is ~10 lines in
    RuntimeCoreImpl.verifyPlanIfNeeded().
  • H3 (always-mode race) — non-issue. The predicted "always mode
    skips verification under concurrency" race doesn't survive a source re-read;
    the reset and the check are synchronous neighbors. Pinned by integration
    test as markerReads === queryCount.
  • H4 (pool pressure) — sizing/liveness concern, not a safety bug. With
    PoC's fast queries, both families sustained 100 VUs × 50s on 5-slot pools
    without failures.
  • H5 (Mongo has no H2/H3 analogue) — confirmed by construction and
    measurement. Mongo runtime has no verification state; ~60% higher throughput
    than Postgres for the same page shape.
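The H2 fix described above (shared-promise dedupe inside verifyPlanIfNeeded) can be sketched in miniature. This is a hypothetical model, not the real RuntimeCoreImpl API: the names verifyPlanIfNeeded, readMarker, and markerReads are illustrative stand-ins, and the timer stands in for the actual marker query.

```typescript
// Hypothetical sketch of the H2 fix shape: concurrent cold-start callers
// share one in-flight verification promise instead of each issuing its own
// marker read. Names are illustrative, not the real runtime API.
let verified = false;
let inFlight: Promise<void> | null = null;
let markerReads = 0;

async function readMarker(): Promise<void> {
  markerReads++; // stands in for the real cold-start marker query
  await new Promise((r) => setTimeout(r, 10));
}

async function verifyPlanIfNeeded(): Promise<void> {
  if (verified) return;
  // Dedupe: the first caller starts the read; concurrent peers await the
  // same promise instead of issuing redundant marker reads.
  inFlight ??= readMarker().then(() => {
    verified = true;
    inFlight = null;
  });
  await inFlight;
}

export async function demo(k: number): Promise<number> {
  await Promise.all(Array.from({ length: k }, () => verifyPlanIfNeeded()));
  return markerReads; // 1, regardless of k
}
```

With the dedupe in place, K concurrent cold components produce exactly 1 marker read rather than up to K.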

Commit sequence

  1. docs(rsc-poc): add shaping plan… — shaping artifact (later superseded by
    the findings doc and deleted in commit 8).
  2. examples(rsc-poc-postgres): scaffold Next.js 16 PoC app — minimal
    harness: globalThis-scoped runtime singleton, InstrumentedPool subclass
    of pg.Pool, diag counters, one smoke query.
  3. examples(rsc-poc-postgres): 5 parallel Server Components + action + /diag
    — the observation surface. Five async Server Components covering ORM,
    include, SQL DSL, aggregate, and pgvector paths. One smoke-level Server
    Action.
  4. examples(rsc-poc-postgres): /stress routes + k6 scripts + revise H3 —
    adds /stress/always, /stress/pool-pressure, and a k6 script supporting
    three scenarios. H3 revised based on a source re-read.
  5. examples(rsc-poc-postgres): integration test pinning H2/H3 invariants —
    process-level vitest suite; requires DATABASE_URL to run (cleanly skipped
    in CI).
  6. examples(rsc-poc-mongo): full Mongo-side PoC — 5 RSC + action + k6 +
    tests. Scaffold, components, stress routes, and tests combined because
    the Mongo surface is smaller (no verifyMode dimension).
  7. docs(rsc-poc): findings write-up — consolidates everything into
    projects/rsc-concurrency-safety/notes.md (later migrated to
    docs/reference/ in commit 8).
  8. docs(rsc-poc): close out the RSC concurrency PoC project — migrates
    findings to docs/reference/rsc-concurrency-findings.md, updates external
    references, marks the "RSC concurrency safety" row in
    framework-integration-analysis.md's blocking-gaps table as resolved, and
    deletes the transient projects/rsc-concurrency-safety/ directory.

Notable findings worth flagging for reviewers

  • H3 correction happened mid-flight. My original H3 claim (the
    always-mode race) didn't survive a source re-read: the verified = false
    reset and the if (verified) return check are synchronous neighbors with no
    await between them, so the predicted race window doesn't exist. The
    /stress/always route and its integration test became invariant
    confirmations (markerReads === queryCount) rather than race reproducers.
    Reasoning is captured in the H3 section of the findings doc.

  • Two bugs in the PoC's own instrumentation (not in Prisma Next, not in
    pg-pool) — both surfaced only by running the code under load:

    • Wrapping client.release didn't survive pg-pool's per-checkout
      reassignment of the method. Fixed by listening on the pool's 'release'
      event.
    • Counting acquires before super.connect() resolved inflated the
      counter under connect-timeout rejections. Fixed by counting only on
      success.

    Both are documented inline in examples/rsc-poc-postgres/src/lib/pool.ts.
  • Per-page operation counts aren't 1:1 with components on Postgres. A
    single page render issues 6 pool acquires and 7 queries (not 5:5) because
    PostsWithAuthors's include goes through
    dispatchWithMultiQueryIncludes (2 queries on 1 acquired scope) and
    SimilarPostsSample does 2 separate ORM chains. Mongo is 5:5 because
    include() uses $lookup in a single aggregate command. See the tables in
    the findings doc under "Per-page operation accounting".
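The first instrumentation bug above can be demonstrated without pg at all. A minimal simulation, assuming only that the pool reassigns `release` on every checkout (as pg-pool does) and emits a pool-level 'release' event; FakePool and countReleases are hypothetical names for this sketch:

```typescript
import { EventEmitter } from "node:events";

// Simulation of why wrapping client.release is fragile: the pool reassigns
// `release` on every checkout (clobbering any wrapper installed earlier),
// while the pool-level 'release' event fires unconditionally.
class FakePool extends EventEmitter {
  private client = { release: () => {} };
  connect() {
    // Per-checkout reassignment, mirroring pg-pool's behavior.
    this.client.release = () => void this.emit("release");
    return this.client;
  }
}

export function countReleases(checkouts: number): { wrapped: number; event: number } {
  const pool = new FakePool();
  let wrapped = 0;
  let event = 0;
  pool.on("release", () => event++); // the robust observation point
  for (let i = 0; i < checkouts; i++) {
    const client = pool.connect();
    if (i === 0) {
      // Naive wrapper installed once — survives only until the next
      // checkout reassigns client.release out from under it.
      const orig = client.release;
      client.release = () => {
        wrapped++;
        orig();
      };
    }
    client.release();
  }
  return { wrapped, event };
}
```

After three checkouts the wrapper has counted only the first release while the event listener counted all three — the "acquires keep growing, releases stuck" anomaly described in the commit message.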

What's next (not in this PR)

  • Implement the H2 fix in a follow-up PR targeting
    packages/1-framework/4-runtime/runtime-executor/src/runtime-core.ts. The
    findings doc has the proposed shape; expected diff is ~10 lines.
  • Pool sizing guidance, edge runtime validation, and transaction semantics
    across Server Components are out of scope for this PoC (deferred).

Testing this PR

# Both apps typecheck and build:
pnpm --filter rsc-poc-postgres typecheck
pnpm --filter rsc-poc-postgres build
pnpm --filter rsc-poc-mongo typecheck
pnpm --filter rsc-poc-mongo build

# Mongo tests run standalone (uses mongodb-memory-server):
pnpm --filter rsc-poc-mongo test  # 9 passed

# Postgres tests require a real pgvector Postgres (skipped in CI without
# DATABASE_URL; @prisma/dev rejects concurrent connections, which the tests need):
docker run --rm -d --name rsc-poc-pg -p 5434:5432 \
  -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=rsc_poc \
  pgvector/pgvector:pg17
DATABASE_URL=postgresql://postgres:postgres@localhost:5434/rsc_poc \
  pnpm --filter rsc-poc-postgres test  # 6 passed

Refs: TML-2164, project WS3: Runtime pipeline, milestone VP3: RSC concurrency safety.

aqrln added 6 commits April 22, 2026 13:21
Project plan for TML-2164 (VP3 of WS3 Runtime pipeline).

Documents hypotheses H1-H5 about Prisma Next's behavior under Next.js
App Router concurrent rendering, deliverables (two Next.js 16 apps,
k6 stress scripts, one integration test for the predicted always-mode
race, findings doc + conditional ADR), and work breakdown.
Scaffold-only PR for step 2 of the RSC concurrency safety PoC plan.

This adds a minimal Next.js 16 App Router app that boots, reads one user
via the ORM client in a Server Component, and renders the
<DiagPanel /> at page bottom. No stress scenarios, no Server Action,
no parallel components yet — those land in step 3.

Harness highlights:

- Process-scoped runtime singleton via `globalThis` (`src/lib/db.ts`),
  keyed by (verifyMode, poolMax) so / and /stress/always don't share
  state. Survives Next.js HMR.

- `InstrumentedPool` (`src/lib/pool.ts`) — subclass of `pg.Pool` that
  counts connection acquires/releases and marker reads (detected by
  SQL text containing `prisma_contract.marker`). Subclassing (not
  composition) is load-bearing: `@prisma-next/postgres` uses
  `instanceof PgPool` to route bindings.

- `diag.ts` — `globalThis`-backed counter registry keyed by
  verifyMode, so the panel and tests can observe marker-read /
  connection-acquire behavior without perturbing it.

Schema + seed are reused verbatim from `prisma-next-demo` per the
plan's copy-paste-over-abstraction guidance.

Refs: TML-2164, `projects/rsc-concurrency-safety/plan.md`
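The globalThis-scoped singleton keyed by (verifyMode, poolMax) can be sketched as follows. This is a hypothetical shape, not the actual src/lib/db.ts: getDb's signature, the Symbol.for key, and the create callback are assumptions made so the sketch is self-contained.

```typescript
// Hypothetical shape of the globalThis-scoped singleton registry: one
// runtime per (verifyMode, poolMax) key. globalThis survives Next.js HMR,
// whereas module-level state is reset on hot reload.
type VerifyMode = "onFirstUse" | "always";

interface DbOptions {
  verifyMode: VerifyMode;
  // Explicit `| undefined` to satisfy exactOptionalPropertyTypes without
  // conditional-spread boilerplate at call sites.
  poolMax: number | undefined;
}

const REGISTRY_KEY = Symbol.for("rsc-poc.db-registry");
type Registry = Map<string, unknown>;

function registry(): Registry {
  const g = globalThis as Record<symbol, Registry | undefined>;
  return (g[REGISTRY_KEY] ??= new Map());
}

export function getDb(opts: DbOptions, create: (o: DbOptions) => unknown): unknown {
  const key = `${opts.verifyMode}:${opts.poolMax ?? "default"}`;
  const reg = registry();
  if (!reg.has(key)) reg.set(key, create(opts)); // build runtime once per key
  return reg.get(key);
}
```

Because the key includes both dimensions, / (onFirstUse, default pool) and /stress/always (always mode) never share a runtime or a pool.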
…diag

Adds the core observation surface: five parallel async Server Components
rendering on /, each wrapped in its own Suspense boundary so React kicks
them off concurrently. Plus one smoke-level Server Action and a
machine-readable /diag endpoint for tests and k6.

Components (one per ORM / DSL path worth probing):
- TopUsers: ORM orderBy().take().all() - baseline
- PostsWithAuthors: ORM include() - multi-query dispatch
- RecentPostsRaw: SQL DSL + runtime.execute() - goes straight through
  verifyPlanIfNeeded() without acquireRuntimeScope()
- UserKindBreakdown: ORM groupBy().having().aggregate()
- SimilarPostsSample: pgvector similarity via ORM

CreatePostForm (client) + createPostAction (server action) - smoke-level
mutation. revalidatePath('/') refreshes the reads after insert.

/diag JSON route - reads snapshotAll() from the diag registry. Read after
any page render completes so it's always current, unlike <DiagPanel />
which races sibling Suspense boundaries.

Two notable corrections from live testing:

1. Dropped lints()/budgets() from the middleware chain. Both flag
   ordinary queries the PoC issues (unbounded aggregates on small seed
   data) as errors, which distracts from what we're measuring.

2. Switched connection-release counting from wrapping client.release to
   listening on the pool's 'release' event. pg-pool reassigns
   client.release inside _acquireClient on *every* checkout (see
   _releaseOnce at pg-pool:306), so any wrapper gets clobbered on the
   second acquire of a pooled client - producing a classic 'acquires
   keep growing, releases stuck' anomaly. The 'release' event is
   emitted unconditionally and is the supported observation point.

Manual end-to-end run confirms H2: cold-start page load shows 5 marker
reads for 5 parallel components, then stable. Acquires/releases balance
under 10 parallel requests, no errors.

Refs: TML-2164, projects/rsc-concurrency-safety/plan.md
Adds the observation surface for H3 and H4: two stress routes
(/stress/always pinned to verify=always, /stress/pool-pressure pinned
to poolMax=5) and a single k6 script supporting three scenarios via
SCENARIO=baseline|spike|pool_pressure.
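The one-file / three-scenario pattern can be sketched as below. This is not the actual stress.k6.js: in k6 the variable arrives via __ENV rather than process.env, and the scenario shapes (VU counts, durations) here are illustrative numbers, not the PoC's real configuration.

```javascript
// Sketch of env-var scenario switching for a single k6 script. Plain
// process.env is used here so the selection logic runs standalone; a real
// k6 script would read __ENV.SCENARIO instead.
const SCENARIOS = {
  baseline: { executor: "constant-vus", vus: 10, duration: "30s" },
  spike: { executor: "ramping-vus", startVUs: 0, stages: [{ target: 50, duration: "10s" }] },
  pool_pressure: { executor: "constant-vus", vus: 100, duration: "50s" },
};

function selectScenario(name) {
  const key = name || "baseline";
  if (!SCENARIOS[key]) {
    throw new Error(`Unknown SCENARIO "${key}". Expected: ${Object.keys(SCENARIOS).join(", ")}`);
  }
  // Exported as k6 `options`: only the selected scenario is active per run,
  // so package.json entries stay clean (one script, three invocations).
  return { scenarios: { [key]: SCENARIOS[key] } };
}

// Example invocation: SCENARIO=pool_pressure k6 run stress.k6.js
const options = selectScenario(process.env.SCENARIO);
```
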

Both routes delegate to a shared <ParallelReadsPage /> body that
renders the same five components + form + diag panel. Routes differ
only by (verifyMode, poolMax) passed to getDb(), which keys the db
singleton registry, so no two routes share a runtime or a pool.

H3 revised in the project plan. The original claim (verify=always
skips verification under concurrency because one query flips verified
between a peer's reset and its own check) doesn't survive re-reading
verifyPlanIfNeeded(): lines (reset verified=false) and
(if verified return) are synchronous neighbors with no await between
them, so in always mode the early-return is unreachable regardless of
peer behavior. The /stress/always route and its forthcoming integration
test become invariant confirmations ('markerReads === queryCount')
rather than race reproducers. Plan updated with the reasoning.

k6 script design: one file, three scenarios switched by env var, so
package.json entries stay clean. setup()/teardown() capture /diag
snapshots before and after each run and log deltas. No in-script
asserts — invariants live in the vitest integration test (step 5).

Running the k6 scripts surfaced one instrumentation bug: InstrumentedPool
was counting acquires *before* super.connect() resolved. Under spike
load the pg connectionTimeoutMillis rejected ~1,100 connects, inflating
the acquire counter and permanently desynchronizing acquires vs
releases. Fixed by counting only on successful resolve. The bug was in
the PoC's instrumentation, not in Prisma Next or pg-pool — but it's the
kind of thing only live load exposes.

Propagated poolMax through the five Server Components as a pass-through
prop. Needed explicit 'number | undefined' on DbOptions (and downstream)
to satisfy exactOptionalPropertyTypes without conditional-spread
boilerplate at every call site.

README updated with the numbers from an initial run of each scenario.

Refs: TML-2164, projects/rsc-concurrency-safety/plan.md
6 tests in test/always-mode-invariant.test.ts, covering:

H2 (onFirstUse cold-start):
- Single cold query -> exactly 1 marker read (sanity baseline).
- K=5 concurrent cold queries -> 1 to K marker reads on first burst,
  0 on subsequent bursts. Upper bound is K, lower bound is 1 - the
  exact number depends on how quickly the first caller flips
  verified=true, which is timing-dependent.

H3 (always-mode invariant):
- For K in {1, 5, 50}: markerReads === K. The K=50 case exercises
  the above-poolMax case where requests queue.
- Repeated bursts (K*BURSTS): cumulative markerReads === K*BURSTS.

Balance invariant (all tests):
- connectionAcquires === connectionReleases.
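The 1-to-K bound on cold-start marker reads can be modeled directly. This is a runnable stand-in, not the real runtime: query() and the single awaited tick are illustrative, chosen only to reproduce the race window that onFirstUse opens between the verified check and the flip to true.

```typescript
// Model of the onFirstUse invariant the tests pin: a cold burst of K
// concurrent queries performs between 1 and K marker reads (timing-
// dependent), and every subsequent burst performs 0.
function makeRuntime() {
  let verified = false;
  let markerReads = 0;
  return {
    async query(): Promise<void> {
      if (!verified) {
        markerReads++; // peers may also observe verified=false here
        await Promise.resolve(); // the await that opens the race window
        verified = true; // first caller to resume flips the flag
      }
    },
    reads: () => markerReads,
  };
}

export async function bursts(k: number, rounds: number): Promise<number[]> {
  const rt = makeRuntime();
  const perRound: number[] = [];
  let prev = 0;
  for (let i = 0; i < rounds; i++) {
    await Promise.all(Array.from({ length: k }, () => rt.query()));
    perRound.push(rt.reads() - prev);
    prev = rt.reads();
  }
  return perRound;
}
```

In this model all K callers reach the increment before any resumes, so the cold round sits at the upper bound; in the real runtime the count lands anywhere in [1, K] depending on how quickly the first caller flips verified=true.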

## Design decisions

Test level is process, not HTTP. The invariant's mechanism lives in
RuntimeCoreImpl.verifyPlanIfNeeded(); RSC is incidental. Firing K
concurrent awaited queries against a shared runtime is identical
on-event-loop to what RSC produces during concurrent rendering, with
much less orchestration overhead. HTTP-level coverage already exists
via k6 + /diag deltas.

The test requires a real Postgres (pgvector-enabled) via DATABASE_URL.
@prisma/dev (used by withDevDatabase) is PGlite-backed and rejects
concurrent TCP connections with 'Connection terminated unexpectedly',
making it unusable for concurrency tests. I tried the scratch-schema
isolation approach first; it fights the contract/adapter's hardcoded
public. schema references, so the final design drops and recreates
public before each test and configures vitest with maxWorkers=1 to
serialize naturally.

Without DATABASE_URL the whole describe is skipped, so CI's
test:examples step passes without needing a Postgres service change.
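A minimal vitest.config.ts sketch matching the serialization choice described above; this assumes current Vitest option names (maxWorkers/minWorkers) and is not the PoC's actual config file.

```typescript
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Serialize test files: each test drops and recreates the public
    // schema, so parallel workers would corrupt each other's state.
    maxWorkers: 1,
    minWorkers: 1,
  },
});
```
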

## Plan updates

Revised §3.3 to document the process-over-HTTP choice and the
DATABASE_URL requirement.

Refs: TML-2164, projects/rsc-concurrency-safety/plan.md
…tests

Scaffold, components, stress routes, k6 scripts, and concurrency
invariant tests for the Mongo family side of the PoC. Steps 6 and 7
of the project plan combined because the Mongo surface is smaller
than the Postgres one (no verifyMode dimension means no /stress/always
route and no corresponding k6 scenario).

## Structure

Mirrors examples/rsc-poc-postgres but with Mongo-shaped differences:

- src/lib/db.ts — globalThis-scoped singleton keyed by poolMax (no
  verifyMode, since MongoRuntimeImpl has no verification state).
- src/lib/diag.ts — counter registry for APM/CMAP events instead of
  pg-pool events. No markerReads counter — it would always be zero
  and muddle the contrast with the Postgres side.
- No InstrumentedPool subclass. MongoClient isn't designed to be
  subclassed; we attach listeners for commandStarted/Succeeded/Failed
  and connectionCheckedOut/In/Created/Closed before client.connect(),
  wired through to the diag module. Matches the shape of what the
  Postgres app's instrumentation produces, with different observables.
- Five Server Components covering the same conceptual ground as the
  Postgres five: baseline ORM, ORM with include, query builder path,
  aggregate pipeline, polymorphism variant. Plus a smoke-level
  Server Action (create SearchEvent).
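The listener-over-subclassing approach above can be sketched as follows. A bare EventEmitter stands in for MongoClient so the sketch runs without the mongodb driver; the event names are the driver's real APM/CMAP events (emitted when the client is constructed with monitorCommands: true), but instrument() and DiagCounters are hypothetical names for this sketch.

```typescript
import { EventEmitter } from "node:events";

// Instead of subclassing MongoClient (which isn't designed for it), attach
// APM/CMAP listeners before connect() and feed a counter registry.
interface DiagCounters {
  commandsStarted: number;
  commandsSucceeded: number;
  commandsFailed: number;
  connectionsCheckedOut: number;
  connectionsCheckedIn: number;
}

export function instrument(client: EventEmitter): DiagCounters {
  const c: DiagCounters = {
    commandsStarted: 0,
    commandsSucceeded: 0,
    commandsFailed: 0,
    connectionsCheckedOut: 0,
    connectionsCheckedIn: 0,
  };
  // APM command-monitoring events.
  client.on("commandStarted", () => c.commandsStarted++);
  client.on("commandSucceeded", () => c.commandsSucceeded++);
  client.on("commandFailed", () => c.commandsFailed++);
  // CMAP connection-pool events.
  client.on("connectionCheckedOut", () => c.connectionsCheckedOut++);
  client.on("connectionCheckedIn", () => c.connectionsCheckedIn++);
  return c;
}
```

The resulting counters mirror the Postgres app's acquire/release balance invariant: checkedOut must equal checkedIn after every burst.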

## Routes

/ — default pool (maxPoolSize: 100)
/stress/pool-pressure — pinned to maxPoolSize: 5 for H4 observation
/diag — JSON snapshot

No /stress/always analogue because the Mongo runtime has no verify
mode to toggle. This asymmetry is the whole point of running this app
alongside the Postgres one.

## Tests

9 invariant tests in test/concurrency-invariants.test.ts, all passing
in ~3s against mongodb-memory-server. Unlike the Postgres tests
(which must skip when DATABASE_URL is unset because @prisma/dev
rejects concurrent connections), these tests run standalone and don't
require any external service — CI picks them up as-is.

Tests pin:
- H5: K concurrent queries issue exactly K commands (no verification
  multiplier). Covered for K in {1, 5, 50} plus cold-start burst.
- Balance: connectionsCheckedOut === connectionsCheckedIn across
  all scenarios including K=50 on maxPoolSize=5 (contention).
- Cumulative: repeated bursts keep per-command accounting linear.

## Observations from manual runs

Cold start / baseline (maxPoolSize: 100):
  iterations 14,003 in 30s (~467 req/s, ~60% faster than Postgres)
  commands 70,015 = exactly 5 per request (no marker read overhead)
  checkOuts 70,015 = checkIns 70,015 (balanced)
  tcpCreated 0 (connections reused)

Pool pressure (maxPoolSize: 5, 100 VUs):
  iterations 17,503 in 50s
  commands 87,515 = exactly 5 per request
  0 failed commands (Mongo multiplexes, doesn't queue like pg)
  checkOuts and checkIns balanced

Contrast to Postgres side:
- PG baseline: ~295 req/s, 6 acquires per request (1 marker + 5 data)
- Mongo baseline: ~467 req/s, 5 commands per request (no marker)
- PG spike (verify=always): 1000s of connection timeouts at 50 VUs
- Mongo under similar pressure: zero failures

H5 confirmed: the Mongo runtime has no H2/H3 analogue by construction.

Refs: TML-2164, projects/rsc-concurrency-safety/plan.md
@coderabbitai

coderabbitai Bot commented Apr 22, 2026

📝 Walkthrough

This pull request introduces comprehensive proof-of-concept (PoC) applications and detailed findings documentation for React Server Components (RSC) concurrency safety testing. Two complete Next.js 16 example applications are added—one for MongoDB and one for PostgreSQL—each with full integration setup, server components, diagnostic instrumentation, stress testing capabilities, and integration test suites. Documentation updates mark the RSC concurrency investigation as complete and reference the detailed findings.

Changes

Cohort / File(s) / Summary:

• Documentation & Findings — docs/reference/framework-integration-analysis.md, docs/reference/rsc-concurrency-findings.md
  Updated framework analysis to mark the RSC concurrency PoC as resolved; added a comprehensive new findings document detailing hypotheses, test results, the identified performance issue (H2 cold-start redundancy), and the proposed deduplication fix.

• MongoDB PoC App Structure — examples/rsc-poc-mongo/.env.example, .gitignore, biome.jsonc, next.config.js, turbo.json, tsconfig.json, vitest.config.ts
  Configuration and tooling setup for the Mongo PoC, including environment, build, linting, and testing configuration.

• MongoDB PoC Documentation & Manifest — examples/rsc-poc-mongo/README.md, package.json, prisma-next.config.ts
  Comprehensive Mongo PoC documentation detailing purpose, hypotheses, observable behaviors, setup/running instructions, and dependency/script declarations.

• MongoDB PoC Schema — examples/rsc-poc-mongo/prisma/contract.prisma, src/prisma/contract.json, src/prisma/contract.d.ts
  Generated and source Prisma schema definitions for Mongo collections (Products, Users, Orders, Events with polymorphic variants) with indexes, validation, and TypeScript types.

• MongoDB PoC Runtime & Diagnostics — examples/rsc-poc-mongo/src/lib/db.ts, src/lib/diag.ts
  Database runtime singleton management with process-scoped caching, MongoDB client instrumentation with CMAP/APM event listeners, and in-process counter collection for concurrent command/connection tracking.

• MongoDB PoC UI Components — examples/rsc-poc-mongo/app/globals.css, app/layout.tsx, app/page.tsx, app/stress/pool-pressure/page.tsx
  Global styles (dark theme, utilities, forms), root layout, home page with force-dynamic rendering, and stress test route with pinned pool size (5).

• MongoDB PoC Server Components — examples/rsc-poc-mongo/src/server-components/parallel-reads-page.tsx, product-list.tsx, products-by-search.tsx, orders-with-user.tsx, event-type-stats.tsx, search-events.tsx
  Five concurrent read-only Server Components (demonstrating ORM queries, query builder, polymorphic queries, and aggregations) wrapped in Suspense, plus a shared page layout component.

• MongoDB PoC Client & Actions — examples/rsc-poc-mongo/src/components/create-event-form.tsx, diag-panel.tsx, app/actions.ts, app/diag/route.ts
  Client form component for mutation via server action, diagnostic panel display component, server action for creating search events, and /diag endpoint for counter snapshots.

• MongoDB PoC Scripts & Tests — examples/rsc-poc-mongo/scripts/seed.ts, scripts/stress.k6.js, test/concurrency-invariants.test.ts
  Database seeding script with fixed product/order/event data, k6 stress test scenarios (baseline/pool-pressure), and Vitest integration tests pinning concurrency invariants (command/connection balance, no H2/H3 anomalies on Mongo).

• PostgreSQL PoC App Structure — examples/rsc-poc-postgres/.env.example, .gitignore, biome.jsonc, next.config.js, turbo.json, tsconfig.json, vitest.config.ts
  Configuration and tooling setup for the Postgres PoC, mirroring the Mongo structure with pgvector support.

• PostgreSQL PoC Documentation & Manifest — examples/rsc-poc-postgres/README.md, package.json, prisma-next.config.ts
  Comprehensive Postgres PoC documentation covering the H2–H4 hypotheses, routes, observed behaviors, setup/run instructions, and test invariants; includes findings on the marker-read cold-start bug and fix approach.

• PostgreSQL PoC Schema — examples/rsc-poc-postgres/prisma/schema.prisma, src/prisma/contract.json, src/prisma/contract.d.ts
  Prisma schema with the pgvector Embedding1536 type, User/Post/Task models (Task with Bug/Feature variants via discriminator), and full TypeScript contract definitions.

• PostgreSQL PoC Runtime, Instrumentation & Diagnostics — examples/rsc-poc-postgres/src/lib/db.ts, src/lib/pool.ts, src/lib/diag.ts
  Database singleton management with per-(verifyMode, poolMax) caching, a custom InstrumentedPool subclass wrapping query/connection events to detect marker reads and count acquires/releases, and an in-process counter snapshot system.

• PostgreSQL PoC UI Components — examples/rsc-poc-postgres/app/globals.css, app/layout.tsx, app/page.tsx, app/stress/always/page.tsx, app/stress/pool-pressure/page.tsx
  Global dark-themed styles, root layout, default page with onFirstUse mode, and stress routes with always mode and pinned pool (5).

• PostgreSQL PoC Server Components — examples/rsc-poc-postgres/src/server-components/parallel-reads-page.tsx, top-users.tsx, posts-with-authors.tsx, recent-posts-raw.tsx, user-kind-breakdown.tsx, similar-posts-sample.tsx
  Five concurrent read-only Server Components (ORM queries with relations, SQL DSL, aggregation/grouping, pgvector cosine search) wrapped in Suspense, plus a shared page layout.

• PostgreSQL PoC Client & Actions — examples/rsc-poc-postgres/src/components/create-post-form.tsx, diag-panel.tsx, app/actions.ts, app/diag/route.ts
  Client form component for post creation, diagnostic panel showing marker reads/connection counts, server action for mutation, and /diag endpoint returning per-mode snapshots with pool state.

• PostgreSQL PoC Scripts & Tests — examples/rsc-poc-postgres/scripts/drop-db.ts, scripts/seed.ts, scripts/stress.k6.js, test/always-mode-invariant.test.ts, test/utils/control-client.ts
  Database drop/seed scripts, k6 stress scenarios (baseline/spike/pool-pressure), Vitest integration tests pinning H2/H3 marker-read and connection balance invariants, and a control client test utility for schema initialization.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Suggested reviewers

  • igalklebanov

Poem

🐰 Two proof-of-concept tales unfold,
Mongo and Postgres, brave and bold,
Concurrent renders dance with glee,
While diagnostics count each decree,
Cold-start redundancy met its fate—
RSC concurrency, now validated!


@pkg-pr-new

pkg-pr-new Bot commented Apr 22, 2026

Open in StackBlitz

@prisma-next/mongo-runtime

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-runtime@370

@prisma-next/family-mongo

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/family-mongo@370

@prisma-next/sql-runtime

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-runtime@370

@prisma-next/family-sql

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/family-sql@370

@prisma-next/middleware-telemetry

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/middleware-telemetry@370

@prisma-next/mongo

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo@370

@prisma-next/extension-paradedb

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/extension-paradedb@370

@prisma-next/extension-pgvector

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/extension-pgvector@370

@prisma-next/postgres

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/postgres@370

@prisma-next/sql-orm-client

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-orm-client@370

@prisma-next/sqlite

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sqlite@370

@prisma-next/target-mongo

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/target-mongo@370

@prisma-next/adapter-mongo

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/adapter-mongo@370

@prisma-next/driver-mongo

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/driver-mongo@370

@prisma-next/contract

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/contract@370

@prisma-next/utils

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/utils@370

@prisma-next/config

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/config@370

@prisma-next/errors

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/errors@370

@prisma-next/framework-components

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/framework-components@370

@prisma-next/operations

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/operations@370

@prisma-next/ts-render

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/ts-render@370

@prisma-next/contract-authoring

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/contract-authoring@370

@prisma-next/ids

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/ids@370

@prisma-next/psl-parser

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/psl-parser@370

@prisma-next/psl-printer

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/psl-printer@370

@prisma-next/cli

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/cli@370

@prisma-next/emitter

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/emitter@370

@prisma-next/migration-tools

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/migration-tools@370

prisma-next

npm i https://pkg.pr.new/prisma/prisma-next@370

@prisma-next/vite-plugin-contract-emit

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/vite-plugin-contract-emit@370

@prisma-next/runtime-executor

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/runtime-executor@370

@prisma-next/mongo-codec

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-codec@370

@prisma-next/mongo-contract

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-contract@370

@prisma-next/mongo-value

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-value@370

@prisma-next/mongo-contract-psl

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-contract-psl@370

@prisma-next/mongo-contract-ts

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-contract-ts@370

@prisma-next/mongo-emitter

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-emitter@370

@prisma-next/mongo-schema-ir

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-schema-ir@370

@prisma-next/mongo-query-ast

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-query-ast@370

@prisma-next/mongo-orm

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-orm@370

@prisma-next/mongo-query-builder

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-query-builder@370

@prisma-next/mongo-lowering

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-lowering@370

@prisma-next/mongo-wire

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/mongo-wire@370

@prisma-next/sql-contract

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-contract@370

@prisma-next/sql-errors

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-errors@370

@prisma-next/sql-operations

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-operations@370

@prisma-next/sql-schema-ir

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-schema-ir@370

@prisma-next/sql-contract-psl

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-contract-psl@370

@prisma-next/sql-contract-ts

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-contract-ts@370

@prisma-next/sql-contract-emitter

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-contract-emitter@370

@prisma-next/sql-lane-query-builder

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-lane-query-builder@370

@prisma-next/sql-relational-core

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-relational-core@370

@prisma-next/sql-builder

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/sql-builder@370

@prisma-next/target-postgres

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/target-postgres@370

@prisma-next/target-sqlite

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/target-sqlite@370

@prisma-next/adapter-postgres

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/adapter-postgres@370

@prisma-next/adapter-sqlite

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/adapter-sqlite@370

@prisma-next/driver-postgres

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/driver-postgres@370

@prisma-next/driver-sqlite

npm i https://pkg.pr.new/prisma/prisma-next/@prisma-next/driver-sqlite@370

commit: 3fbebf2

aqrln added 2 commits April 22, 2026 15:43
Consolidates observations from all six implementation commits into a
single doc per hypothesis H1-H5, with recommended fix for H2 and
operation-accounting tables showing why Postgres and Mongo have
different command counts per page.

Verdict: Prisma Next runs correctly under RSC concurrency on both
families. One performance bug (H2 - redundant cold-start marker reads)
with a shared-promise-dedupe fix sketched out. One sizing observation
(H4) documented; fix deferred to May per plan.

Clarification on per-page operation counts that wasn't obvious from
commit messages:

Postgres / page = 6 acquires, 7 queries.
  - PostsWithAuthors's include uses dispatchWithMultiQueryIncludes
    which acquires one runtime scope and runs parent + include on it
    (1 acquire, 2 queries).
  - SimilarPostsSample uses two separate ORM chains (2 acquires, 2
    queries) because seed + similarity are separate awaited calls.
  - The other 3 components are 1:1.
  - On onFirstUse cold start only 5 of the 7 queries race through
    verifyPlanIfNeeded because the 2nd queries in include-using and
    multi-chain components are sequential within their component, so
    only the first query of each component is in flight at first-touch.

Mongo / page = 5 commands, 1:1 with components.
  - include() uses $lookup in a single aggregate command, not a second
    find. That's why the Mongo ratio is 5:5 while Postgres is 6:7.

This file lives at projects/rsc-concurrency-safety/notes.md during
execution and migrates to docs/reference/rsc-concurrency-findings.md
at close-out per the project workflow.

Refs: TML-2164, projects/rsc-concurrency-safety/plan.md

Migrates the findings doc to its permanent home and removes the
transient project directory per the drive-project-workflow.

Changes:

1. Renamed projects/rsc-concurrency-safety/notes.md to
   docs/reference/rsc-concurrency-findings.md with light edits:
   - Header rewritten for permanent-doc framing; dropped the 'this
     is the findings write-up' meta line.
   - Dropped 'Plan §2 updated with...' phrasing where the plan is no
     longer reachable; kept the same reasoning inline.
   - Dropped 'per the plan' / 'as explicitly planned' references to
     the deleted plan.md.
   - Stripped the 'Two mid-flight corrections worth remembering'
     section (project history, not permanent-doc material).
   - Stripped the close-out checklist (this commit is the close-out).
   - Dropped the draft PR link from the Artifacts section.

2. Deleted projects/rsc-concurrency-safety/plan.md (the findings
   doc captures all the hypotheses and reasoning that mattered).

3. Updated external references:
   - examples/rsc-poc-postgres/README.md: 3 references to
     projects/rsc-concurrency-safety/plan.md redirected to the
     findings doc. Also cleaned up a stale '(planned)' note on the
     Mongo companion link.
   - examples/rsc-poc-mongo/README.md: 2 references redirected.
   - docs/reference/framework-integration-analysis.md:
     - Added 'Status — PoC complete' pointer in the 'What needs
       validation' section of Hard problem 2.
     - Marked the 'RSC concurrency safety is untested' row in the
       Blocking gaps table as resolved, with a link to the findings.

Decision on ADR: not filing one. The H2 fix is a behavior change
inside a single method (verifyPlanIfNeeded), backward-compatible
(observable only as fewer marker reads), and doesn't introduce new
abstractions. A PR with a good commit message is the right shape.
Documented this reasoning inline in the findings doc (Recommended
fix for H2 section).

Stops the project per the drive-project-workflow: the transient
projects/rsc-concurrency-safety/ directory is gone, long-lived docs
live in docs/, and the two example apps remain as the PoC artifacts.

Refs: TML-2164
@aqrln aqrln marked this pull request as ready for review April 22, 2026 13:52

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 20

🧹 Nitpick comments (8)
examples/rsc-poc-mongo/app/page.tsx (1)

5-17: Consider shrinking this long block comment and linking the findings doc.
A short route-purpose comment plus doc link would keep the page file easier to scan.

As per coding guidelines: "Do not add comments if avoidable; prefer code which expresses its intent" and "**/*.{md,ts,tsx}: Keep docs current ... and prefer links to canonical docs over long comments".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/app/page.tsx` around lines 5 - 17, Replace the long
explanatory block at the top of page.tsx with a concise one-line route-purpose
comment (e.g., "Home route — Mongo baseline for parallel-reads demo") and add a
short inline link to the canonical findings/doc that explains the H2/H3
behavior; keep a brief reference to the shared body component
(parallel-reads-page.tsx) and the relevant runtime concept name
(MongoRuntimeImpl) so readers can navigate to implementation/details, but remove
the extended rationale text from this file to keep it easy to scan.
examples/rsc-poc-postgres/app/page.tsx (1)

5-16: Trim the long route narrative comment and link to the findings doc instead.
Consider replacing this block with a short one-liner plus a doc link to keep route code lean.

♻️ Suggested edit
-/**
- * Home route — the default, `onFirstUse` verify mode, default pool size.
- *
- * This is the route that demonstrates hypothesis H2: on cold start, five
- * parallel Server Components race through `verifyPlanIfNeeded()` and each
- * one issues its own marker read before any of them flips `verified` to
- * true. Reload the page or read `/diag` after it settles to observe the
- * 5 redundant marker reads.
- *
- * See `src/components/parallel-reads-page.tsx` for the shared body used
- * by `/`, `/stress/always`, and `/stress/pool-pressure`.
- */
+/** H2 demo route (`onFirstUse`). See docs/reference/rsc-concurrency-findings.md. */

As per coding guidelines: "Do not add comments if avoidable; prefer code which expresses its intent" and "**/*.{md,ts,tsx}: Keep docs current ... and prefer links to canonical docs over long comments".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/app/page.tsx` around lines 5 - 16, Replace the long
narrative block comment at the top of app/page.tsx with a single-line summary
and a link to the canonical findings document; remove the multi-paragraph
explanation that describes hypothesis H2 and the reload/diag instructions, and
instead keep a terse comment like "Home route — demonstrates parallel
marker-read behavior; see [canonical findings]" pointing to the findings doc,
and keep the existing reference to src/components/parallel-reads-page.tsx if
helpful; this keeps the route lean while preserving discoverability.
examples/rsc-poc-mongo/package.json (1)

33-35: Consider exact pins for framework versions in this PoC.

Lines 33–35 use caret ranges, which can introduce version drift and undermine stress-test reproducibility over time.

🔧 Suggested change
-    "next": "^16.1.7",
-    "react": "^19.2.4",
-    "react-dom": "^19.2.4"
+    "next": "16.1.7",
+    "react": "19.2.4",
+    "react-dom": "19.2.4"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/package.json` around lines 33 - 35, The package.json
dependencies for this PoC use caret ranges ("next", "react", "react-dom") which
allows version drift; change those three dependency entries in package.json to
exact pinned versions (remove the ^ and set the specific version numbers you
want) so stress-test runs are reproducible — update the "next", "react", and
"react-dom" entries accordingly and run npm install / pnpm install to lock the
chosen versions.
docs/reference/rsc-concurrency-findings.md (1)

16-18: Use stable wording instead of VP3 / WS3.

Those labels will age badly in a permanent reference page. Replace them with a durable description of the investigation or a stable doc link.

Based on learnings: do not reference transient project artifacts from durable documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/rsc-concurrency-findings.md` around lines 16 - 18, Replace
transient labels "VP3" and "WS3" in the document with a durable description or
stable doc link: search for every occurrence of the tokens VP3 and WS3 (e.g., in
the Context paragraph and headings) and substitute them with a clear, lasting
phrase such as "proof-of-concept investigation into runtime pipeline
integrations" or a permalink to the related framework-integration-analysis
section; ensure the replacement preserves meaning and update any inline
references like "Runtime pipeline" to point to the stable document link instead
of the transient project codes.
examples/rsc-poc-postgres/src/server-components/similar-posts-sample.tsx (1)

65-65: Avoid masking the ID type mismatch with as Char<36>.

If select('id', ...) is widening id, this cast hides that drift right where the predicate should stay type-checked. It would be better to narrow at the query/helper boundary so .neq() doesn't depend on a blind assertion.

As per coding guidelines, "Avoid blind casts like as unknown as X in production TypeScript code. Use type predicates (value is X) or type guards instead to let the compiler narrow types safely."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/src/server-components/similar-posts-sample.tsx` at
line 65, The predicate is hiding a type mismatch by force-casting queryPost.id
with as Char<36> inside the where call; instead, narrow the id before building
the predicate so post.id.neq remains properly type-checked. Add a type guard or
validator (e.g., isChar36 or ensureQueryPostHasId) that returns value is
Char<36> and use it to assert/narrow queryPost.id prior to calling
db.orm.Post.where (or adjust the helper that runs select('id', ...) so it
preserves the Char<36> type), then pass the already-narrowed queryPost.id into
post.id.neq rather than using a blind cast.
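The pattern the prompt describes — a branded type plus a type predicate — can be sketched in isolation like this. `Char36`, `isChar36`, and `requireChar36` are illustrative stand-ins, not the real `Char<36>` type or any Prisma Next API.

```typescript
// Illustrative only: a branded string type plus a predicate, so callers
// narrow via a checked guard instead of a blind `as` cast.
type Char36 = string & { readonly __brand: "Char<36>" };

function isChar36(value: string): value is Char36 {
  // A UUID-shaped id is exactly 36 characters; adjust the check to
  // whatever invariant the real Char<36> brand actually guarantees.
  return value.length === 36;
}

function requireChar36(value: string): Char36 {
  if (!isChar36(value)) {
    throw new Error(`expected a 36-character id, got length ${value.length}`);
  }
  return value; // narrowed by the type predicate, no blind cast needed
}
```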
examples/rsc-poc-postgres/src/lib/diag.ts (1)

42-49: Replace the double cast on globalThis.

globalThis as unknown as GlobalWithDiag is broader than needed here. A typed intersection keeps the symbol-keyed registry access without suppressing the compiler as aggressively.

♻️ Proposed fix
 function getRegistry(): DiagRegistry {
-  const g = globalThis as unknown as GlobalWithDiag;
+  const g = globalThis as typeof globalThis & GlobalWithDiag;
   let registry = g[REGISTRY_KEY];
   if (!registry) {
     registry = new Map();

As per coding guidelines, "Avoid blind casts like as unknown as X in production TypeScript code. Use type predicates (value is X) or type guards instead to let the compiler narrow types safely."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/src/lib/diag.ts` around lines 42 - 49, The double
cast in getRegistry (const g = globalThis as unknown as GlobalWithDiag) is too
broad; replace it with a safe typed intersection or a type guard so the compiler
narrows globalThis without using unknown. Update getRegistry to use a
declaration like treating globalThis as (typeof globalThis & GlobalWithDiag) or
implement a small type-predicate/isGlobalWithDiag function that checks for
REGISTRY_KEY and narrows the type before accessing the symbol-keyed registry;
keep the logic around REGISTRY_KEY and return type DiagRegistry unchanged.
examples/rsc-poc-mongo/scripts/stress.k6.js (2)

78-78: Minor: Description doesn't fully match stage configuration.

The description says "ramping 1 → 100 VUs" but with startVUs: 1 (line 101) and stages starting at target: 10 (line 72), the actual ramp is 1 → 10 → 30 → 60 → 100 → 100. Consider updating the description to reflect the actual ramp pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/scripts/stress.k6.js` at line 78, Update the
description string so it accurately reflects the configured VU ramp in this
script: replace the current "ramping 1 → 100 VUs" text inside the description
field with a phrase matching the actual pattern driven by startVUs and stages
(e.g., "ramping 1 → 10 → 30 → 60 → 100 → 100 VUs"), ensuring you reference the
same description property and keep it consistent with the startVUs and stages
target values used elsewhere in this file.

158-159: Redundant sleep(0) call.

The comment says "No sleep(); generate continuous pressure" but then immediately calls sleep(0). Since sleep(0) is effectively a no-op in k6, this line can be removed entirely to match the stated intent.

🧹 Remove redundant sleep call
   // No sleep(); generate continuous pressure.
-  sleep(0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/scripts/stress.k6.js` around lines 158 - 159, The code
contains a redundant call to sleep(0) that contradicts the comment "No sleep();
generate continuous pressure" — remove the statement sleep(0); (and its
explanatory comment if present) from examples/rsc-poc-mongo/scripts/stress.k6.js
so the script truly generates continuous pressure; locate the occurrence of the
literal sleep(0) and delete it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/reference/rsc-concurrency-findings.md`:
- Around line 108-123: The markdown code fences containing metric snapshots
(e.g., the fence showing "markerReads: 5, connectionAcquires: 11,
connectionReleases: 11" and the block with "commandsStarted: 5,
commandsSucceeded: 5, commandsFailed: 0") should be labeled with a language to
satisfy markdownlint MD040; update each unlabeled triple-backtick fence
(including the other mentioned dumps at the other sections) to use ```text so
the metric snapshots are treated as plain text.

In `@examples/rsc-poc-mongo/app/actions.ts`:
- Around line 54-66: The call to getDb() is currently outside the try block so
connection/init errors escape instead of returning a CreateEventState; wrap the
getDb() call in the same try that calls db.orm.events.variant(...), or expand
the try to start before getDb(), so any errors from getDb() are caught and
handled by the existing catch which returns { status: 'error', message }; update
the code around getDb() and the try/catch in actions.ts (the block that creates
the SearchEvent) accordingly.

In `@examples/rsc-poc-mongo/app/globals.css`:
- Around line 11-13: The CSS custom properties --mono and --sans are triggering
stylelint's value-keyword-case; do not change the CSS to add quotes—update the
stylelint configuration instead: either set the value-keyword-case rule to
ignore font properties (add "ignoreProperties": ["font-family","font"]) or add
the specific mixed-case font identifiers used in --mono and --sans to the rule's
"ignoreKeywords" list so those names are exempt; reference the --mono and --sans
definitions when applying the config change.

In `@examples/rsc-poc-mongo/biome.jsonc`:
- Around line 3-5: The comment above the "extends": "//" entry is contradictory
— it claims the example uses a standalone/non-inheriting config but the file
actually inherits the base config via "extends": "//"; update the comment in
examples/rsc-poc-mongo/biome.jsonc to state that the example intentionally
inherits baseline/global configuration using "extends": "//" so the comment
accurately reflects the behavior of the "extends" key.

In `@examples/rsc-poc-mongo/README.md`:
- Around line 127-131: The fenced code blocks containing plain CLI-like output
(for example the block starting with "commandsStarted: 5, commandsSucceeded: 5,
commandsFailed: 0" and the other similar blocks showing connections/commands
stats) should include a language tag to satisfy markdownlint; update each
triple-backtick fence to use ```text (also add ```text to the similar output
blocks later in the file) so the README remains lint-clean and renders
consistently.
- Around line 3-12: Remove the transient Linear milestone reference by deleting
the inline "Linear: [TML-2164][t]" text and the corresponding reference label
"[t]:
https://linear.app/prisma-company/issue/TML-2164/rsc-concurrency-safety-poc"
from the README, leaving the durable description and the link to
docs/reference/rsc-concurrency-findings.md intact so the README only contains
stable, long-lived references.

In `@examples/rsc-poc-mongo/src/components/diag-panel.tsx`:
- Around line 37-40: The snapshot and client lookup must use the same effective
pool size key: replace the call to getClient({ poolMax }) with getClient({
poolMax: effectivePoolMax }) (or pass effectivePoolMax directly) so both
snapshot(effectivePoolMax) and getClient use the identical effectivePoolMax
value; update any variable names accordingly to ensure snapshot, getClient, and
the client/status reporting all reference effectivePoolMax.
- Around line 69-71: The className ternary for the TCP line always returns
'value' so imbalance styling never applies; update the JSX where tcpBalanced is
used (the span with className on the tcp line in diag-panel.tsx) to return a
different class when false (e.g., tcpBalanced ? 'value' : 'value imbalanced' or
'value warning') and ensure the corresponding CSS selector (e.g.,
.value.imbalanced or .value.warning) exists to highlight the imbalance; keep the
conditional on the tcpBalanced variable and only change the false branch to the
intended warning class.

In `@examples/rsc-poc-mongo/src/server-components/products-by-search.tsx`:
- Around line 61-64: The loop is using an unnecessary type cast that hides the
actual types: remove the cast "row as Product" in the for-await loop over
db.runtime.execute(plan) and push the row directly into results (which is
declared as Product[]); update the block that iterates over
db.runtime.execute(plan) so it uses the preserved return type from
db.runtime.execute and simply does results.push(row) with the existing results,
Product, and plan identifiers.

In `@examples/rsc-poc-mongo/test/concurrency-invariants.test.ts`:
- Around line 134-147: The test seeds data using orm.products.create inside
createTestRuntime()/withFreshRuntime(), which means the runtime is warmed before
the cold-start assertions; change the setup to seed via the raw Mongo client (or
a separate setup path) before constructing the runtime under test so the first
observed operation truly happens on a fresh runtime—move the seeding logic out
of createTestRuntime()/withFreshRuntime() and use the low-level client (not
orm.products.create) or a dedicated seedBeforeRuntime function to insert the
Test Shirt document prior to calling createTestRuntime()/withFreshRuntime().

In `@examples/rsc-poc-postgres/app/actions.ts`:
- Around line 75-77: Current code returns raw internal error text via the
variable "message" in the catch block (return { status: 'error', message }),
which leaks internals; instead, log the full error (e.g., console.error or your
app logger) including the "error" object and a generated correlation id
(crypto.randomUUID() or similar), then return a generic client-facing response
like { status: 'error', message: 'An unexpected error occurred', errorId } where
"message" is no longer the internal message and "errorId" lets you correlate
logs to clients; update the catch block to perform logging + id generation and
replace the returned "message" with the generic text.
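A minimal sketch of the suggested catch-side shape — the `toErrorState` helper and the `CreateEventState`-like union here are illustrative, not the app's actual types:

```typescript
import { randomUUID } from "node:crypto";

// Illustrative state union; the real action's return type may differ.
type ActionState =
  | { status: "ok" }
  | { status: "error"; message: string; errorId: string };

function toErrorState(error: unknown): ActionState {
  const errorId = randomUUID();
  // Full details stay in server logs, keyed by the correlation id.
  console.error(`[action ${errorId}]`, error);
  // The client sees only a generic message plus the id for support lookups.
  return { status: "error", message: "An unexpected error occurred", errorId };
}
```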

In `@examples/rsc-poc-postgres/biome.jsonc`:
- Around line 3-5: The comment above the "extends": "//" entry is misleading;
update it to reflect that this example biome intentionally inherits the
root/global Biome configuration via the "extends": "//" setting (instead of
saying it is standalone), e.g., change the comment to state that example apps
use a biome that extends the root/global config to keep example-specific
settings minimal while inheriting base settings; ensure the comment references
the "extends": "//" line so behavior matches the code.

In `@examples/rsc-poc-postgres/README.md`:
- Around line 243-256: Update the README notes to reflect the current
InstrumentedPool implementation: state that InstrumentedPool (and its connect()
override) uses the pool's fixed hooks to track acquires and releases and
instruments clients via the pool-level hook handlers rather than by wrapping
client.release(), and clarify that marker-read detection is implemented by
intercepting queries through the pool/client hook mechanism (not by replacing
client.query/release directly); update the text that currently describes
wrapping client.release() to describe the hook-based approach used in
InstrumentedPool and connect().
- Around line 21-42: Update the README to reflect the final conclusion that only
H2 (redundant marker reads on cold start) remains: remove or rewrite claims that
H3 is a reproduced correctness bug and delete references to the `/stress/always`
route and the pending H3 integration test; instead state H3 was ruled out after
re-reading the source and link to the project plan or final findings; also
remove/adjust any mentions of H4 if they overstate unresolved issues and ensure
the comparison to the companion `rsc-poc-mongo` app remains as the baseline;
apply these changes to the introductory bullets and the Status section (also
update the other referenced ranges noted in the comment).
- Around line 115-117: The fenced code blocks containing metric dumps and layout
trees (e.g., the block showing "markerReads: 5, connectionAcquires: 11,
connectionReleases: 11" and the other metric/layout-tree dumps) lack language
tags; update each triple-backtick fence to use a plain text tag by changing ```
to ```text for those metric and layout-tree blocks (also update the similar
blocks referenced in the comment: the metric dumps around the layout-tree and
other metric sections) so Markdownlint stops flagging them.
- Around line 3-11: The README contains a transient Linear milestone reference
and link ([t] / TML-2164) that should be removed; edit the
examples/rsc-poc-postgres/README.md to delete the inline Linear reference and
footnote link (the "[t]: https://linear.app/..." line and any mention of "VP3 of
the WS3 runtime-pipeline milestone (Linear: [TML-2164][t])") and replace it with
a durable, generic statement such as "part of internal runtime-pipeline work" or
simply remove the milestone clause while keeping the link to the longer findings
doc (`docs/reference/rsc-concurrency-findings.md`) intact.

In `@examples/rsc-poc-postgres/scripts/stress.k6.js`:
- Around line 136-142: In setup() and teardown() ensure the script treats
non-200 responses from the /diag endpoint as fatal: after calling http.get(...)
and computing diagRes, if diagRes.status !== 200 throw an Error (including
status and body/json) instead of returning null so the k6 run fails fast; update
the blocks that set "before" and "after" (the diagRes/diagBefore and
diagRes/diagAfter logic) to perform this check and throw when not 200 to prevent
silent degradation of results.
- Around line 197-211: The current delta calc collapses snapshots using
beforeByMode keyed only by verifyMode (see beforeByMode, beforeSnaps,
afterSnaps, and verifyMode), which merges distinct pools with different poolMax
and corrupts deltas; change the keying to use the full registry identity (e.g.,
combine verifyMode and poolMax or use a tuple/key string) when building the
"before" lookup and when matching each afterSnap so you compare snapshots for
the same (verifyMode, poolMax) pair, then compute markerReadsDelta,
acquiresDelta, releasesDelta and poolFinal against that specific prior snapshot.
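The composite-key matching described above could look like this in isolation — the `Snap` shape and `markerReadDeltas` helper are illustrative, not the stress script's actual code:

```typescript
// Illustrative snapshot shape; the real registry entries carry more fields.
interface Snap {
  verifyMode: string;
  poolMax: number;
  markerReads: number;
}

// Key on the full pool identity so distinct (verifyMode, poolMax) pools
// are never merged when computing before/after deltas.
const keyOf = (s: Snap) => `${s.verifyMode}:${s.poolMax}`;

function markerReadDeltas(before: Snap[], after: Snap[]): Map<string, number> {
  const beforeByKey = new Map<string, Snap>();
  for (const s of before) beforeByKey.set(keyOf(s), s);

  const deltas = new Map<string, number>();
  for (const snap of after) {
    const prior = beforeByKey.get(keyOf(snap));
    deltas.set(keyOf(snap), snap.markerReads - (prior?.markerReads ?? 0));
  }
  return deltas;
}
```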

In `@examples/rsc-poc-postgres/src/lib/db.ts`:
- Around line 55-67: Remove the interface and blind cast; instead declare an
intersection type combining typeof globalThis with the optional registry
property (referencing REGISTRY_KEY and DbRegistry) and use that as the declared
type for the local variable in getRegistry (the variable currently named g) so
you assign globalThis to a properly typed variable without using "as unknown
as"; keep the rest of getRegistry and usage of REGISTRY_KEY/DbRegistry
unchanged.

In `@examples/rsc-poc-postgres/src/server-components/similar-posts-sample.tsx`:
- Around line 40-48: The seed selection currently chooses the oldest post
regardless of embedding, causing queryPost/queryEmbedding to be null if that row
has no embedding; update the db.orm.Post.select call (the query that assigns
seed) to filter only posts with a non-null embedding (e.g., add a where clause
that checks embedding is not null) before ordering and taking 1 so queryPost and
queryEmbedding will reliably reference a post that has an embedding.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: f40bd84f-40cd-420b-bc72-829364bfe9c7

📥 Commits

Reviewing files that changed from the base of the PR and between f0470b5 and 3fbebf2.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (70)
  • docs/reference/framework-integration-analysis.md
  • docs/reference/rsc-concurrency-findings.md
  • examples/rsc-poc-mongo/.env.example
  • examples/rsc-poc-mongo/.gitignore
  • examples/rsc-poc-mongo/README.md
  • examples/rsc-poc-mongo/app/actions.ts
  • examples/rsc-poc-mongo/app/diag/route.ts
  • examples/rsc-poc-mongo/app/globals.css
  • examples/rsc-poc-mongo/app/layout.tsx
  • examples/rsc-poc-mongo/app/page.tsx
  • examples/rsc-poc-mongo/app/stress/pool-pressure/page.tsx
  • examples/rsc-poc-mongo/biome.jsonc
  • examples/rsc-poc-mongo/next.config.js
  • examples/rsc-poc-mongo/package.json
  • examples/rsc-poc-mongo/prisma-next.config.ts
  • examples/rsc-poc-mongo/prisma/contract.prisma
  • examples/rsc-poc-mongo/scripts/seed.ts
  • examples/rsc-poc-mongo/scripts/stress.k6.js
  • examples/rsc-poc-mongo/src/components/create-event-form.tsx
  • examples/rsc-poc-mongo/src/components/diag-panel.tsx
  • examples/rsc-poc-mongo/src/components/parallel-reads-page.tsx
  • examples/rsc-poc-mongo/src/lib/db.ts
  • examples/rsc-poc-mongo/src/lib/diag.ts
  • examples/rsc-poc-mongo/src/prisma/contract.d.ts
  • examples/rsc-poc-mongo/src/prisma/contract.json
  • examples/rsc-poc-mongo/src/server-components/event-type-stats.tsx
  • examples/rsc-poc-mongo/src/server-components/orders-with-user.tsx
  • examples/rsc-poc-mongo/src/server-components/product-list.tsx
  • examples/rsc-poc-mongo/src/server-components/products-by-search.tsx
  • examples/rsc-poc-mongo/src/server-components/search-events.tsx
  • examples/rsc-poc-mongo/test/concurrency-invariants.test.ts
  • examples/rsc-poc-mongo/tsconfig.json
  • examples/rsc-poc-mongo/turbo.json
  • examples/rsc-poc-mongo/vitest.config.ts
  • examples/rsc-poc-postgres/.env.example
  • examples/rsc-poc-postgres/.gitignore
  • examples/rsc-poc-postgres/README.md
  • examples/rsc-poc-postgres/app/actions.ts
  • examples/rsc-poc-postgres/app/diag/route.ts
  • examples/rsc-poc-postgres/app/globals.css
  • examples/rsc-poc-postgres/app/layout.tsx
  • examples/rsc-poc-postgres/app/page.tsx
  • examples/rsc-poc-postgres/app/stress/always/page.tsx
  • examples/rsc-poc-postgres/app/stress/pool-pressure/page.tsx
  • examples/rsc-poc-postgres/biome.jsonc
  • examples/rsc-poc-postgres/next.config.js
  • examples/rsc-poc-postgres/package.json
  • examples/rsc-poc-postgres/prisma-next.config.ts
  • examples/rsc-poc-postgres/prisma/schema.prisma
  • examples/rsc-poc-postgres/scripts/drop-db.ts
  • examples/rsc-poc-postgres/scripts/seed.ts
  • examples/rsc-poc-postgres/scripts/stress.k6.js
  • examples/rsc-poc-postgres/src/components/create-post-form.tsx
  • examples/rsc-poc-postgres/src/components/diag-panel.tsx
  • examples/rsc-poc-postgres/src/components/parallel-reads-page.tsx
  • examples/rsc-poc-postgres/src/lib/db.ts
  • examples/rsc-poc-postgres/src/lib/diag.ts
  • examples/rsc-poc-postgres/src/lib/pool.ts
  • examples/rsc-poc-postgres/src/prisma/contract.d.ts
  • examples/rsc-poc-postgres/src/prisma/contract.json
  • examples/rsc-poc-postgres/src/server-components/posts-with-authors.tsx
  • examples/rsc-poc-postgres/src/server-components/recent-posts-raw.tsx
  • examples/rsc-poc-postgres/src/server-components/similar-posts-sample.tsx
  • examples/rsc-poc-postgres/src/server-components/top-users.tsx
  • examples/rsc-poc-postgres/src/server-components/user-kind-breakdown.tsx
  • examples/rsc-poc-postgres/test/always-mode-invariant.test.ts
  • examples/rsc-poc-postgres/test/utils/control-client.ts
  • examples/rsc-poc-postgres/tsconfig.json
  • examples/rsc-poc-postgres/turbo.json
  • examples/rsc-poc-postgres/vitest.config.ts

Comment on lines +108 to +123
```
markerReads: 5, connectionAcquires: 11, connectionReleases: 11
```

Exactly 5 marker reads for 5 concurrent first-touch queries.
Subsequent page loads show `markerReads: 5` remaining constant — no
further verification. The pinning invariant test
(`always-mode-invariant.test.ts > H2 cold-start`) asserts
`markerReads ∈ [1, K]` and `markerReads` stays at its post-cold value
on warm bursts.

**Observed (rsc-poc-mongo, `/` on cold start):**

```
commandsStarted: 5, commandsSucceeded: 5, commandsFailed: 0
```

⚠️ Potential issue | 🟡 Minor

Add languages to the metric snapshot fences.

These unlabeled code fences already trip markdownlint (MD040); `text` is enough for the counter dumps.

Also applies to: 162-165, 184-199, 235-241

🧰 Tools
🪛 markdownlint-cli2 (0.22.0)

[warning] 108-108: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


[warning] 121-121: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/reference/rsc-concurrency-findings.md` around lines 108 - 123, The
markdown code fences containing metric snapshots (e.g., the fence showing
"markerReads: 5, connectionAcquires: 11, connectionReleases: 11" and the block
with "commandsStarted: 5, commandsSucceeded: 5, commandsFailed: 0") should be
labeled with a language to satisfy markdownlint MD040; update each unlabeled
triple-backtick fence (including the other mentioned dumps at the other
sections) to use ```text so the metric snapshots are treated as plain text.
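
As a concrete sketch of the suggested fix, the first snapshot fence above would become:

````markdown
```text
markerReads: 5, connectionAcquires: 11, connectionReleases: 11
```
````

The same one-word label applies to the other unlabeled counter dumps the comment mentions.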

Comment on lines +54 to +66
```ts
const db = await getDb();

try {
  await db.orm.events.variant('SearchEvent').create({
    userId: `rsc-poc-${randomUUID().slice(0, 8)}`,
    sessionId: `rsc-poc-session-${randomUUID().slice(0, 8)}`,
    timestamp: new Date(),
    query: query.trim(),
  });
} catch (error) {
  const message = error instanceof Error ? error.message : String(error);
  return { status: 'error', message };
}
```

⚠️ Potential issue | 🟠 Major

Catch connection/init failures too.

`getDb()` sits outside the `try`, so a failed runtime init or connect rejects the action instead of returning a `CreateEventState` to the form.

Suggested fix

```diff
-  const db = await getDb();
-
   try {
+    const db = await getDb();
     await db.orm.events.variant('SearchEvent').create({
       userId: `rsc-poc-${randomUUID().slice(0, 8)}`,
       sessionId: `rsc-poc-session-${randomUUID().slice(0, 8)}`,
       timestamp: new Date(),
       query: query.trim(),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/app/actions.ts` around lines 54 - 66, The call to
getDb() is currently outside the try block so connection/init errors escape
instead of returning a CreateEventState; wrap the getDb() call in the same try
that calls db.orm.events.variant(...), or expand the try to start before
getDb(), so any errors from getDb() are caught and handled by the existing catch
which returns { status: 'error', message }; update the code around getDb() and
the try/catch in actions.ts (the block that creates the SearchEvent)
accordingly.

Comment on lines +11 to +13
```css
--mono: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", monospace;
--sans:
  -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
cat -n examples/rsc-poc-mongo/app/globals.css | head -20
```

Repository: prisma/prisma-next

Length of output: 628


🌐 Web query:

stylelint value-keyword-case font family identifiers

💡 Result:

The Stylelint rule `value-keyword-case` enforces lowercase or uppercase casing for CSS keyword values (e.g., `block`, `flex`) in property values. By default it treats font family names (e.g., `BlinkMacSystemFont`, `Arial`, `Helvetica`) as keywords and flags non-conforming case, such as uppercase in "lower" mode. Font families are treated as custom identifiers, not ignored automatically for `font-family`/`font` properties. To handle this:

- Use `"ignoreProperties": ["font-family", "font"]` to skip case checking entirely for these properties.
- Use `"ignoreKeywords": ["/[A-Z]/", "specific-font-name"]` or a regex to ignore camelCase/PascalCase font names.
- Examples from issues: for SCSS vars like `$main_font_family: -apple-system, BlinkMacSystemFont, sans-serif;` or a direct `font-family: BlinkMacSystemFont`, add `ignoreKeywords: ["BlinkMacSystemFont"]` or `ignoreProperties: ["/^\$/"]` for variables.

Official docs confirm the rule applies to font values unless explicitly ignored via options. A separate rule, `font-family-name-quotes`, handles quoting, not casing.



Fix stylelint value-keyword-case with proper configuration instead of quoting.

Lines 11 and 13 contain unquoted mixed-case font identifiers that violate the `value-keyword-case` rule. However, the suggested fix of quoting these names does not resolve the issue. The `value-keyword-case` rule applies regardless of quoting and instead requires either:

  1. Configuring stylelint to ignore font-family/font properties: `"ignoreProperties": ["font-family", "font"]`
  2. Adding the specific font names to the `ignoreKeywords` option in the stylelint config

Quoting font names addresses a different rule (`font-family-name-quotes`) and should not be used to work around `value-keyword-case` violations.

🧰 Tools
🪛 Stylelint (17.7.0)

[error] 11-11: Expected "SFMono-Regular" to be "sfmono-regular" (value-keyword-case)

(value-keyword-case)


[error] 11-11: Expected "Menlo" to be "menlo" (value-keyword-case)

(value-keyword-case)


[error] 11-11: Expected "Monaco" to be "monaco" (value-keyword-case)

(value-keyword-case)


[error] 11-11: Expected "Consolas" to be "consolas" (value-keyword-case)

(value-keyword-case)


[error] 13-13: Expected "BlinkMacSystemFont" to be "blinkmacsystemfont" (value-keyword-case)

(value-keyword-case)


[error] 13-13: Expected "Roboto" to be "roboto" (value-keyword-case)

(value-keyword-case)


[error] 13-13: Expected "Arial" to be "arial" (value-keyword-case)

(value-keyword-case)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/app/globals.css` around lines 11 - 13, The CSS custom
properties --mono and --sans are triggering stylelint's value-keyword-case; do
not change the CSS to add quotes—update the stylelint configuration instead:
either set the value-keyword-case rule to ignore font properties (add
"ignoreProperties": ["font-family","font"]) or add the specific mixed-case font
identifiers used in --mono and --sans to the rule's "ignoreKeywords" list so
those names are exempt; reference the --mono and --sans definitions when
applying the config change.
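
A minimal config sketch of the first option, assuming the rule lives in a JSON-style stylelint config (the file name and surrounding keys are illustrative, not taken from this repo):

```json
{
  "rules": {
    "value-keyword-case": [
      "lower",
      { "ignoreProperties": ["font-family", "font"] }
    ]
  }
}
```

With this in place, mixed-case names like `BlinkMacSystemFont` in `font-family` values are exempt from case checking while the rule still applies everywhere else.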

Comment on lines +3 to +5
```jsonc
// Example apps use a standalone biome config rather than extending the root
// config, to keep example-specific settings self-contained.
"extends": "//"
```

⚠️ Potential issue | 🟡 Minor

Fix contradictory inheritance comment.

Lines 3–4 describe a standalone, non-inheriting config, but line 5 uses `"extends": "//"`, which inherits the baseline config. The comment should be updated for accuracy.

✏️ Suggested edit

```diff
-  // Example apps use a standalone biome config rather than extending the root
-  // config, to keep example-specific settings self-contained.
+  // Example apps inherit the repo baseline via `extends: "//"` and keep
+  // any example-specific overrides local to this file.
```

Based on learnings: this repo intentionally uses extends: "//" in biome.jsonc to preserve inheritance from base/global configuration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/biome.jsonc` around lines 3 - 5, The comment above the
"extends": "//" entry is contradictory — it claims the example uses a
standalone/non-inheriting config but the file actually inherits the base config
via "extends": "//"; update the comment in examples/rsc-poc-mongo/biome.jsonc to
state that the example intentionally inherits baseline/global configuration
using "extends": "//" so the comment accurately reflects the behavior of the
"extends" key.

Comment on lines +3 to +12
Next.js 16 App Router proof-of-concept for **Prisma Next Mongo runtime
behavior under RSC concurrent rendering**. Paired with `rsc-poc-postgres`;
together they cover VP3 of the WS3 runtime-pipeline milestone (Linear:
[TML-2164][t]).

See [`docs/reference/rsc-concurrency-findings.md`](../../docs/reference/rsc-concurrency-findings.md)
for the full write-up, including the per-hypothesis results and the
recommended fix for the one performance bug this PoC surfaced (H2).

[t]: https://linear.app/prisma-company/issue/TML-2164/rsc-concurrency-safety-poc

⚠️ Potential issue | 🟡 Minor

Drop the Linear milestone link from this README.

This is durable example documentation, so tying it to a tracker issue will age badly. The stable findings doc is enough here.

Based on learnings, do not reference transient project artifacts (e.g., under projects/ such as specs, plans, milestone documents) from durable system documentation (package READMEs, architecture/docs under docs/).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-mongo/README.md` around lines 3 - 12, Remove the transient
Linear milestone reference by deleting the inline "Linear: [TML-2164][t]" text
and the corresponding reference label "[t]:
https://linear.app/prisma-company/issue/TML-2164/rsc-concurrency-safety-poc"
from the README, leaving the durable description and the link to
docs/reference/rsc-concurrency-findings.md intact so the README only contains
stable, long-lived references.

Comment on lines +243 to +256
`src/lib/pool.ts` defines `InstrumentedPool`, a subclass of `pg.Pool`.
Subclassing (not wrapping) is deliberate: `@prisma-next/postgres`'s
`resolvePostgresBinding()` uses `instanceof PgPool` to decide whether to
route the input into the `pgPool` binding branch. A composition wrapper
would fail that check.

`InstrumentedPool` overrides `connect()` to:

1. Count pool connection acquires.
2. Instrument the acquired `PoolClient` in place so that:
   - `client.query(sql, ...)` matches `sql` against the stable marker-read
     fragment (`prisma_contract.marker`) and bumps the marker-read counter
     if it's a verification query.
   - `client.release()` bumps the release counter.

⚠️ Potential issue | 🟡 Minor

Update the instrumentation notes to match the fixed pool hooks.

This still documents release counting via wrapping `client.release()`, which is the brittle approach the PoC later replaced. The implementation notes here should match the current `src/lib/pool.ts` behavior so readers do not copy the superseded path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/README.md` around lines 243 - 256, Update the
README notes to reflect the current InstrumentedPool implementation: state that
InstrumentedPool (and its connect() override) uses the pool's fixed hooks to
track acquires and releases and instruments clients via the pool-level hook
handlers rather than by wrapping client.release(), and clarify that marker-read
detection is implemented by intercepting queries through the pool/client hook
mechanism (not by replacing client.query/release directly); update the text that
currently describes wrapping client.release() to describe the hook-based
approach used in InstrumentedPool and connect().
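
The subclassing rationale in the README excerpt above can be sketched in isolation. This is a minimal illustration, not the PoC's code: `PgPool` here is a stand-in for `pg.Pool`, and the counter is simplified, but it shows why a subclass passes the `instanceof` dispatch that `resolvePostgresBinding()` relies on while a composition wrapper does not.

```typescript
// Stand-in for pg.Pool, so the sketch is self-contained. The real binding
// resolver performs the same `instanceof` check against the pg.Pool class.
class PgPool {
  connect(): Promise<string> {
    return Promise.resolve("client");
  }
}

// Subclassing preserves the prototype chain, so `instanceof PgPool` holds
// and the instrumented pool is routed into the pgPool binding branch.
class InstrumentedPool extends PgPool {
  acquires = 0;
  override connect(): Promise<string> {
    this.acquires += 1; // count pool connection acquires
    return super.connect();
  }
}

// A composition wrapper exposes the same surface but fails the check.
class WrappedPool {
  constructor(private inner: PgPool) {}
  connect(): Promise<string> {
    return this.inner.connect();
  }
}

const sub = new InstrumentedPool();
const wrapped = new WrappedPool(new PgPool());

void sub.connect(); // counter bumps synchronously before the promise settles
console.log(sub instanceof PgPool); // true — subclass passes the dispatch
console.log(wrapped instanceof PgPool); // false — wrapper would be misrouted
console.log(sub.acquires); // 1
```

The same trade-off generalizes: any `instanceof`-based dispatch forces instrumentation to subclass (or monkey-patch) rather than wrap.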

Comment on lines +136 to +142
```js
const diagRes = http.get(`${BASE_URL}/diag`);
const before = diagRes.status === 200 ? diagRes.json() : null;

return {
  startedAt: new Date().toISOString(),
  diagBefore: before,
};
```

⚠️ Potential issue | 🟡 Minor

Treat /diag failures as fatal for the run.

This script uses the before/after snapshots as its primary evidence, but a 404/500 currently degrades to `null` and the run still looks valid. Throw in `setup()`/`teardown()` when `/diag` is not 200 so broken diagnostics cannot silently skew the findings.

Also applies to: 177-180

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/scripts/stress.k6.js` around lines 136 - 142, In
setup() and teardown() ensure the script treats non-200 responses from the /diag
endpoint as fatal: after calling http.get(...) and computing diagRes, if
diagRes.status !== 200 throw an Error (including status and body/json) instead
of returning null so the k6 run fails fast; update the blocks that set "before"
and "after" (the diagRes/diagBefore and diagRes/diagAfter logic) to perform this
check and throw when not 200 to prevent silent degradation of results.
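
One way to implement the fail-fast behavior is a shared snapshot helper. This is a hedged sketch, not the PoC's code: `requireDiagSnapshot` is a hypothetical name, and the response objects below simulate the `.status`/`.json()` surface a k6 `http.Response` exposes.

```javascript
// Hypothetical helper for setup()/teardown(): return the parsed snapshot on
// 200, otherwise throw so the whole k6 run aborts instead of degrading to null.
function requireDiagSnapshot(res, phase) {
  if (!res || res.status !== 200) {
    const status = res ? res.status : "no response";
    throw new Error(`/diag returned ${status} during ${phase}; aborting run`);
  }
  return res.json();
}

// Simulated responses for illustration (k6 responses expose .status and .json()).
const ok = { status: 200, json: () => ({ markerReads: 5 }) };
const bad = { status: 500, json: () => null };

const before = requireDiagSnapshot(ok, "setup");
console.log(before.markerReads); // 5

let failed = false;
try {
  requireDiagSnapshot(bad, "teardown");
} catch (e) {
  failed = true; // non-200 aborts rather than silently returning null
}
console.log(failed); // true
```

In the real script, `setup()` would call the helper on the `http.get(...)` result before returning the snapshot in its data object.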

Comment on lines +197 to +211
```js
const beforeByMode = Object.fromEntries(beforeSnaps.map((s) => [s.verifyMode, s]));
const deltas = afterSnaps.map((a) => {
  const b = beforeByMode[a.verifyMode] || {
    markerReads: 0,
    connectionAcquires: 0,
    connectionReleases: 0,
  };
  return {
    verifyMode: a.verifyMode,
    markerReadsDelta: a.markerReads - b.markerReads,
    acquiresDelta: a.connectionAcquires - b.connectionAcquires,
    releasesDelta: a.connectionReleases - b.connectionReleases,
    poolFinal: a.pool,
  };
});
```

⚠️ Potential issue | 🟠 Major

Don't collapse /diag snapshots by verifyMode alone.

The runtime registry is keyed by `(verifyMode, poolMax)`, so `beforeByMode[a.verifyMode]` will merge distinct pools as soon as an `onFirstUse|10` entry and an `onFirstUse|5` entry coexist. That makes the reported deltas wrong for `pool_pressure` and any mixed/manual runs. Join on the full registry identity instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/scripts/stress.k6.js` around lines 197 - 211, The
current delta calc collapses snapshots using beforeByMode keyed only by
verifyMode (see beforeByMode, beforeSnaps, afterSnaps, and verifyMode), which
merges distinct pools with different poolMax and corrupts deltas; change the
keying to use the full registry identity (e.g., combine verifyMode and poolMax
or use a tuple/key string) when building the "before" lookup and when matching
each afterSnap so you compare snapshots for the same (verifyMode, poolMax) pair,
then compute markerReadsDelta, acquiresDelta, releasesDelta and poolFinal
against that specific prior snapshot.
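
A composite-key join along these lines would keep the two pools separate. This is a hedged sketch: the snapshot field names mirror the excerpt above, and the assumption is that each snapshot also carries its `poolMax`.

```javascript
// Key snapshots by the full registry identity, not verifyMode alone.
const identityKey = (s) => `${s.verifyMode}|${s.poolMax}`;

function computeDeltas(beforeSnaps, afterSnaps) {
  const before = Object.fromEntries(beforeSnaps.map((s) => [identityKey(s), s]));
  return afterSnaps.map((a) => {
    // Fall back to zeroed counters if a pool only appears in the after set.
    const b = before[identityKey(a)] ?? {
      markerReads: 0,
      connectionAcquires: 0,
      connectionReleases: 0,
    };
    return {
      verifyMode: a.verifyMode,
      poolMax: a.poolMax,
      markerReadsDelta: a.markerReads - b.markerReads,
      acquiresDelta: a.connectionAcquires - b.connectionAcquires,
      releasesDelta: a.connectionReleases - b.connectionReleases,
    };
  });
}

// Two pools sharing verifyMode but differing in poolMax stay distinct:
const deltas = computeDeltas(
  [
    { verifyMode: "onFirstUse", poolMax: 10, markerReads: 5, connectionAcquires: 11, connectionReleases: 11 },
    { verifyMode: "onFirstUse", poolMax: 5, markerReads: 1, connectionAcquires: 3, connectionReleases: 3 },
  ],
  [
    { verifyMode: "onFirstUse", poolMax: 10, markerReads: 5, connectionAcquires: 20, connectionReleases: 20 },
    { verifyMode: "onFirstUse", poolMax: 5, markerReads: 1, connectionAcquires: 9, connectionReleases: 9 },
  ],
);
console.log(deltas[0].acquiresDelta); // 9
console.log(deltas[1].acquiresDelta); // 6
```

With verifyMode-only keying, the second pool's "before" counters would have been overwritten or mismatched, corrupting both deltas.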

Comment on lines +55 to +67
```ts
interface GlobalWithRegistry {
  [REGISTRY_KEY]?: DbRegistry;
}

function getRegistry(): DbRegistry {
  const g = globalThis as unknown as GlobalWithRegistry;
  let registry = g[REGISTRY_KEY];
  if (!registry) {
    registry = new Map();
    g[REGISTRY_KEY] = registry;
  }
  return registry;
}
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```shell
# Check if file exists and read the relevant section
if [ -f "examples/rsc-poc-postgres/src/lib/db.ts" ]; then
  echo "=== File exists, showing lines 50-75 ===";
  sed -n '50,75p' "examples/rsc-poc-postgres/src/lib/db.ts";
else
  echo "File not found";
fi
```

Repository: prisma/prisma-next

Length of output: 795


🏁 Script executed:

```shell
# Search for usages of REGISTRY_KEY and registry variable to understand scope
echo "=== Searching for REGISTRY_KEY usages ===";
rg "REGISTRY_KEY" examples/rsc-poc-postgres/src/lib/db.ts -B 2 -A 2

echo "";
echo "=== Searching for getRegistry() calls ===";
rg "getRegistry" examples/rsc-poc-postgres/src/lib/ -A 2 -B 2
```

Repository: prisma/prisma-next

Length of output: 2976


🏁 Script executed:

```shell
# Check if GlobalWithRegistry type is used elsewhere
echo "=== Searching for GlobalWithRegistry usage ===";
rg "GlobalWithRegistry" examples/rsc-poc-postgres/src/lib/ -B 2 -A 2
```

Repository: prisma/prisma-next

Length of output: 795


Use an intersection type to eliminate the blind cast on `globalThis`.

This code uses `as unknown as GlobalWithRegistry`, which violates the repo's explicit ban on blind casts in example TypeScript. Replace the interface with an intersection type to preserve type safety without the intermediate `unknown`:

Fix

```diff
-interface GlobalWithRegistry {
-  [REGISTRY_KEY]?: DbRegistry;
-}
+type GlobalWithRegistry = typeof globalThis & {
+  [REGISTRY_KEY]?: DbRegistry;
+};

 function getRegistry(): DbRegistry {
-  const g = globalThis as unknown as GlobalWithRegistry;
+  const g = globalThis as GlobalWithRegistry;
   let registry = g[REGISTRY_KEY];
```

Per coding guidelines: "{packages,examples}/**/*.{ts,tsx}: Avoid blind casts like `as unknown as X`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/src/lib/db.ts` around lines 55 - 67, Remove the
interface and blind cast; instead declare an intersection type combining typeof
globalThis with the optional registry property (referencing REGISTRY_KEY and
DbRegistry) and use that as the declared type for the local variable in
getRegistry (the variable currently named g) so you assign globalThis to a
properly typed variable without using "as unknown as"; keep the rest of
getRegistry and usage of REGISTRY_KEY/DbRegistry unchanged.

Comment on lines +40 to +48
```ts
const seed = await db.orm.Post.select('id', 'title', 'embedding')
  .orderBy((post) => post.createdAt.asc())
  .take(1)
  .all();

const queryPost = seed[0];
const queryEmbedding = queryPost?.embedding;

if (!queryPost || !queryEmbedding) {
```

⚠️ Potential issue | 🟡 Minor

Pick the seed from posts that actually have embeddings.

This query takes the oldest post overall, not the oldest post with a non-null `embedding`. If the earliest row has `embedding = null`, the component falls into the empty state even when later seeded posts are searchable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/rsc-poc-postgres/src/server-components/similar-posts-sample.tsx`
around lines 40 - 48, The seed selection currently chooses the oldest post
regardless of embedding, causing queryPost/queryEmbedding to be null if that row
has no embedding; update the db.orm.Post.select call (the query that assigns
seed) to filter only posts with a non-null embedding (e.g., add a where clause
that checks embedding is not null) before ordering and taking 1 so queryPost and
queryEmbedding will reliably reference a post that has an embedding.
