normalizeStripeSubscription emitted { priceId } while syncProducts
stored { productId, priceId }. The symmetric key-set check in
prepareSubscribeCheckoutCompleted rejected the checkout.completed
webhook with PROVIDER_WEBHOOK_INVALID for any hosted-checkout flow.
Pull productId off the Price object so both paths emit the same shape.
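The fix amounts to deriving the product id from the Price rather than the Subscription. A minimal sketch (the type and function names here are illustrative, not the actual paykit symbols); Stripe returns a Price's parent `product` as either an id string or an expanded object:

```typescript
// Illustrative sketch: normalize a Stripe Price into the { productId, priceId }
// shape that syncProducts stores, so the webhook's key-set comparison matches.
type StripePriceLike = { id: string; product: string | { id: string } };

function providerProductFromPrice(price: StripePriceLike): { productId: string; priceId: string } {
  // Stripe returns `product` as an id string unless the call expanded it.
  const productId = typeof price.product === "string" ? price.product : price.product.id;
  return { productId, priceId: price.id };
}
```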
Implement completeCheckout in the Stripe harness with Playwright: it fills Stripe's hosted checkout form (card, expiry, CVC, name) via pressSequentially and submits. This replaces the console.log prompt in the three previously-manual tests (subscribe-paid-checkout, resubscribe-after-cancel, lifecycle/subscription). Drop vitest.automated.config.ts and vitest.manual.config.ts — with every test automated, the split is obsolete. Collapse scripts to a single "pnpm test" over smoke/vitest.config.ts.
Switch the smoke suite from single-threaded to parallel by replacing per-test binding on :4567 with a shared webhook hub:

- hub.ts: one long-lived HTTP server started in globalSetup. Routes Stripe events to per-test workers by provider customer ID, buffers briefly for customers not yet registered.
- Each test's webhook server now binds :0 (random free port) and registers its provider customer IDs with the hub on creation, unregisters on cleanup.
- DB name suffix adds randomness — Date.now() collided under parallelism.

Stripe test mode caps at 25 ops/sec. To keep parallel setup from tripping the limit:

- maxNetworkRetries: 3 on Stripe SDK clients (both paykit's internal client and all e2e harness clients). The SDK retries 429s with backoff that respects Retry-After — a strict production improvement.
- maxWorkers: 6 caps concurrent syncProducts bursts.

Full suite: 21/21 pass, ~100s wall-clock (down from ~525s sequential).
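The hub's routing state can be sketched as follows (class and method names are hypothetical, not the actual hub.ts API); the point is the register/route/unregister lifecycle and the short-lived buffer that keeps early webhooks from being lost:

```typescript
// Sketch of the hub's routing state: events for customers that have not yet
// registered are buffered with a TTL so early webhooks aren't dropped.
type BufferedEvent = { body: string; receivedAt: number };

class HubRegistry {
  private workers = new Map<string, string>(); // providerCustomerId -> workerUrl
  private buffers = new Map<string, BufferedEvent[]>();
  constructor(private ttlMs = 60_000) {}

  // Returns buffered events; the caller replays them to the worker.
  register(customerId: string, workerUrl: string): BufferedEvent[] {
    this.workers.set(customerId, workerUrl);
    const drained = this.buffers.get(customerId) ?? [];
    this.buffers.delete(customerId);
    return drained;
  }

  // Returns the worker URL to forward to, or null when the event was buffered
  // (the caller then responds 202 Accepted to the provider).
  route(customerId: string, body: string, now = Date.now()): string | null {
    const workerUrl = this.workers.get(customerId);
    if (workerUrl) return workerUrl;
    const buf = this.buffers.get(customerId) ?? [];
    buf.push({ body, receivedAt: now });
    this.buffers.set(customerId, buf.filter((e) => now - e.receivedAt < this.ttlMs));
    return null;
  }

  unregister(customerId: string): void {
    this.workers.delete(customerId);
    this.buffers.delete(customerId);
  }
}
```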
pnpm test and pnpm test:stripe pointed at the same command, which hid the fact that the default run was Stripe-only. Drop the generic test and test:watch; every invocation is now explicit about provider.
- Rename smoke/ → core/. "smoke" already has a specific meaning in other suites (platform/runtime sanity checks); our tests are full end-to-end provider flows, so the old name was misleading.
- Pull plumbing into test-utils/: env, harness, hub, setup, products.
- Extract test plans/features out of setup.ts into products.ts.
- Move Stripe's allow_incomplete override (test-only) out of the generic setup.ts into harness/stripe.ts as a new applyTestingOverrides hook on the ProviderHarness interface.
- Merge global-setup.ts into hub.ts: the Vitest globalSetup default export lives alongside the hub it starts.
- Collapse three per-folder vitest configs into one root vitest.config.ts using Vitest projects (core, cli).
- Scripts use --project=core / --project=cli.

21/21 tests pass.
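The collapsed root config might look roughly like this — a hedged sketch, where the globs, the globalSetup path, and the per-project maxWorkers placement are assumptions based on the layout described above, not the actual file:

```ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    projects: [
      {
        test: {
          name: "core",
          include: ["e2e/core/**/*.test.ts"],
          globalSetup: ["e2e/test-utils/hub.ts"], // hub + Vitest globalSetup live together
          maxWorkers: 6, // cap concurrent syncProducts bursts (Stripe test mode: 25 ops/sec)
        },
      },
      { test: { name: "cli", include: ["e2e/cli/**/*.test.ts"] } },
    ],
  },
});
```

Scripts then select a lane with `vitest --project=core` / `vitest --project=cli`.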
📝 Walkthrough: Reorganizes e2e test tooling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner as E2E Test Runner
    participant Hub as Webhook Hub
    participant Worker as Per-test Worker
    participant Stripe as Stripe Provider
    Runner->>Hub: POST /_hub/register { providerCustomerId, workerUrl }
    Hub->>Hub: store mapping, drain buffered events
    Stripe->>Hub: POST webhook (body includes providerCustomerId)
    Hub->>Hub: extract providerCustomerId, lookup workerUrl
    alt worker known
        Hub->>Worker: forward POST (preserve headers & body)
        Worker->>Worker: process webhook
        Worker-->>Hub: return response
        Hub-->>Stripe: proxy worker response
    else worker unknown
        Hub->>Hub: buffer event (TTL 60s)
        Hub-->>Stripe: 202 Accepted
    end
    Runner->>Hub: POST /_hub/unregister { providerCustomerId }
    Hub->>Hub: remove mapping, clear buffered events
```
@CodeRabbit review

✅ Actions performed: Review triggered.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
e2e/test-utils/setup.ts (1)
119-124: ⚠️ Potential issue | 🟡 Minor: Hardcoded 10s wait on cleanup is brittle and races with `unregisterCustomers`.

Two concerns with the cleanup ordering:

- `setTimeout(10_000)` is a worst-case guess for cleanup webhook delivery. Under any provider/network slowness this drops late webhooks without signal, making post-mortem of "missing event" failures harder. Consider polling for the expected terminal events (e.g., `customer.deleted`/test-clock-related) in `webhookEvent` rows up to a max timeout instead of a flat sleep.
- Once `unregisterCustomers` runs, any remaining in-flight webhooks for those customers will be dropped silently by the hub. That's intentional, but combined with (1) it means a slow webhook arriving at 10.5s after `harness.cleanup` won't be processed. If your suite ever depends on assertions happening after the cleanup webhook is recorded, this will silently break.

Not blocking — just flag it so it doesn't become an unexplained flake later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/setup.ts` around lines 119 - 124, Replace the brittle fixed 10s sleep by polling the webhookEvent table for the expected terminal events after calling harness.cleanup: implement a helper (e.g., waitForWebhookEvents or pollForTerminalEvents) that queries webhookEvent rows for the given customer IDs until you observe the expected terminal event types (like "customer.deleted" or test-clock related events) or a configurable timeout is reached, then only call unregisterCustomers once the helper resolves (or the timeout elapses); finally close the server after unregisterCustomers completes (server.close in the existing code path) so in-flight webhooks aren't dropped silently and test flakes are avoided.
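The suggested replacement for the flat sleep could look like this — a hedged sketch where `waitForTerminalEvents` and its fetcher callback are hypothetical names; the real implementation would query `webhookEvent` rows for the given customer IDs:

```typescript
// Poll until every expected terminal event type has been observed, or give up
// after timeoutMs. Returns true on success, false on timeout.
async function waitForTerminalEvents(
  fetchEventTypes: () => Promise<string[]>,
  expected: string[],
  { timeoutMs = 10_000, intervalMs = 250 } = {},
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const seen = new Set(await fetchEventTypes());
    if (expected.every((t) => seen.has(t))) return true;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return false;
}
```

Unlike a flat sleep, the fast path resolves as soon as the events land, and the slow path fails with a bounded, configurable wait.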
🧹 Nitpick comments (8)
packages/paykit/src/customer/__tests__/customer.service.test.ts (1)
451-503: Test name doesn't match the scenario being exercised.

The test is titled "calls provider when no snapshot exists (first sync)" but the fixture sets `provider.stripe.id = "cus_existing"` (i.e. a provider customer already exists, just without sync snapshot fields). Per `upsertProviderCustomer` in customer.service.ts, this path takes the `updateCustomer` branch — which is exactly what the assertion on line 493 verifies. A genuine "first sync" case (no provider id at all) would route to `createCustomer` instead and is not covered here.

Consider renaming to something like "calls updateCustomer when provider id exists but snapshot is missing", and/or adding a separate test for the true first-sync path that asserts `createCustomer` is invoked.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/paykit/src/customer/__tests__/customer.service.test.ts` around lines 451 - 503, The test title is misleading: update the spec name from "calls provider when no snapshot exists (first sync)" to something like "calls updateCustomer when provider id exists but snapshot is missing" to match the existing fixture that sets provider.stripe.id = "cus_existing"; alternatively add a new test case that simulates a true first-sync by removing provider.stripe.id (set to undefined or omit it), use a providerMock with createCustomer mocked, call upsertCustomer with upsertProviderCustomer: true, and assert providerMock.createCustomer was called and the provider update (providerUpdate.set) contains the syncedEmail — reference the upsertCustomer/upsertProviderCustomer flow and the providerMock.updateCustomer/createCustomer symbols when making changes.

landing/vercel.json (1)
3-3: Consider keeping `SKIP_INSTALL_SIMPLE_GIT_HOOKS=1` for deploy contexts to avoid unnecessary git hook setup.

The `prepare` script in root package.json (`bunx simple-git-hooks || true`) will execute during Vercel's `bun install` step. While the `|| true` prevents build failures, installing git hooks in an ephemeral deploy environment is wasteful work. Vercel deploys have no use for pre-commit hooks, and any resolution flakes will be silently swallowed. For cleaner deploy logs and faster builds, keep the env skip for Vercel/CI and consider enabling hooks explicitly for local installs only (via `postinstall` with a conditional or dedicated local script).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@landing/vercel.json` at line 3, Set SKIP_INSTALL_SIMPLE_GIT_HOOKS=1 in the Vercel install command so Vercel's "bun install" doesn't run the root package.json prepare script (bunx simple-git-hooks || true) and waste time installing git hooks in the ephemeral deploy environment; update the "installCommand" in vercel.json to prefix it with the environment variable and, if desired, move hook installation to a local-only step (e.g., a conditional postinstall or dedicated local script) so hooks run for developers but not CI/deploys.

vitest.unit.config.ts (1)
6-7: Optional: redundant exclude paths.

Since `include` already restricts discovery to `packages/**/__tests__/**/*.test.ts`, the `apps/**`, `e2e/**`, and `landing/**` excludes are no-ops. They're harmless documentation, but you could drop them for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vitest.unit.config.ts` around lines 6 - 7, The exclude array in vitest.unit.config.ts contains redundant patterns ("apps/**", "e2e/**", "landing/**") because the include setting already restricts test discovery to "packages/**/__tests__/**/*.test.ts"; remove those no-op entries from the exclude array (leave core exclusions like "**/dist/**" and "**/node_modules/**" intact) so that the exclude and include settings are clearer and less noisy.

.github/workflows/ci.yml (3)
17-101: Optional: deduplicate the three jobs.

`lint`, `typecheck`, and `unit` repeat the exact same 4-step setup (checkout → setup-node → setup-bun → install). Two cleaner options:

- Extract a local composite action under `.github/actions/setup` and call it from each job.
- Use a `strategy.matrix` over the command (lint/typecheck/test:unit).

Both keep concurrency-per-job and `timeout-minutes` overrides easy to express. Not blocking, just future-proofing as more checks are added.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 17 - 101, The workflow duplicates the checkout/setup/install steps across jobs (jobs named lint, typecheck, unit); refactor to deduplicate by either creating a local composite action (e.g., .github/actions/setup) that encapsulates checkout → setup-node → setup-bun → bun install and call it from each job, or replace the three jobs with a single job using strategy.matrix that iterates over the command (e.g., matrix: command: [ "bun lint", "bun typecheck", "bun test:unit" ]) and runs the appropriate run step; ensure you preserve each job’s concurrency grouping and timeout-minutes behavior (apply per-matrix entry or override where needed) and keep job names/identifiers (lint/typecheck/unit) or map them in the matrix to maintain logs and notifications.
28-28: `fetch-depth: 0` isn't needed for lint/typecheck/unit.

Full git history isn't required by any of these jobs — none of them diff against base or read tags. Default shallow checkout (`fetch-depth: 1`) cuts clone time materially on large repos.

♻️ Suggested change

```diff
  - uses: actions/checkout@v4
    with:
-     fetch-depth: 0
      persist-credentials: false
```

Also applies to: 58-58, 86-86

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml at line 28, The workflow currently sets fetch-depth: 0 which is unnecessary for the lint/typecheck/unit jobs and slows cloning; update the three occurrences of the fetch-depth setting in .github/workflows/ci.yml (the entries used by the lint, typecheck, and unit jobs) to either remove the fetch-depth key or set it to fetch-depth: 1 so the actions/checkout step uses a shallow clone; ensure you change all three instances to keep behavior consistent across those jobs.
39-40: Consider caching Bun installs across runs.

`bun install --frozen-lockfile` runs in all three jobs on every PR push. `oven-sh/setup-bun@v2` does not cache `node_modules` automatically; adding an `actions/cache` step keyed on `bun.lock` (and a composite action or reusable workflow to dedupe the boilerplate across the three jobs) would noticeably speed up the matrix.

This is a nice-to-have once the workflow is green — not a blocker for the initial CI lane.

Also applies to: 69-70, 97-98

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 39 - 40, The "Install" step runs `bun install --frozen-lockfile` in each job with no caching; add an `actions/cache` step before that "Install" step to cache node_modules and Bun's cache directory (e.g., node_modules and Bun cache path) and key it on the `bun.lock` file (use a cache key like runner-os-bun-lock-${{ hashFiles('bun.lock') }}), and factor that cache+install sequence into a reusable workflow or composite action so the same boilerplate is used across the three jobs (also update the other identical "Install" steps that run `bun install --frozen-lockfile`).

e2e/test-utils/harness/stripe.ts (1)
105-107: Test card expiry `12/34` will eventually expire.

`pressSequentially("1234")` types MM/YY → December 2034. Stripe test mode accepts this today, but once the date passes the suite will start failing for an entirely non-obvious reason. Consider rolling the year forward to a far-future value (e.g., `1299` → Dec 2099) or computing it dynamically.

```diff
- await cardExpiry.pressSequentially("1234");
+ await cardExpiry.pressSequentially("1299");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/harness/stripe.ts` around lines 105 - 107, The test types a fixed expiry "1234" via the cardExpiry locator using pressSequentially, which will eventually become a past date and break tests; update the code that calls cardExpiry.pressSequentially("1234") to use a far-future year (e.g., "1299" for 12/99) or compute the MM/YY dynamically so the expiry is always in the future (adjust in the harness where page.locator("#cardExpiry") and cardExpiry.pressSequentially are used).
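Computing the expiry dynamically removes the time bomb entirely. A small sketch (the helper name is hypothetical):

```typescript
// Build an always-future MMYY string for Stripe's hosted checkout expiry field.
function futureExpiry(yearsAhead = 5, now = new Date()): string {
  const mm = String(now.getMonth() + 1).padStart(2, "0");
  const yy = String((now.getFullYear() + yearsAhead) % 100).padStart(2, "0");
  return `${mm}${yy}`;
}
// e.g. await cardExpiry.pressSequentially(futureExpiry());
```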
187-210: Reuse `syncStripePaymentMethod` and move the Stripe-specific branch into the harness.

This block duplicates the logic in e2e/test-utils/harness/stripe.ts:159-185 (`syncStripePaymentMethod`) and reintroduces a provider switch in the shared setup, defeating one of the goals of the harness abstraction. It also re-instantiates a Stripe client and uses `env.E2E_STRIPE_SK!` even though `harness.validateEnv()` (and `createStripeHarness()`) already guard those.

Two options, in order of preference:

- Push the post-direct-subscription DB sync into the harness contract (e.g., have `setupCustomerForDirectSubscription` accept the PayKit DB / context and do the sync internally), so `createTestCustomerWithPM` becomes provider-agnostic.
- Otherwise at minimum call the existing helper:

♻️ Proposed minimal refactor

```diff
-  // For Stripe, sync the payment method into PayKit DB
-  if (input.t.harness.id === "stripe") {
-    const secretKey = env.E2E_STRIPE_SK!;
-    const stripeClient = new Stripe(secretKey, { maxNetworkRetries: 3 });
-    const pm = await stripeClient.paymentMethods.list({
-      customer: providerCustomerId,
-      type: "card",
-      limit: 1,
-    });
-    const method = pm.data[0];
-    if (method) {
-      await syncPaymentMethodByProviderCustomer(input.t.ctx.database, {
-        paymentMethod: {
-          providerMethodId: method.id,
-          type: method.type,
-          last4: method.card?.last4,
-          expiryMonth: method.card?.exp_month,
-          expiryYear: method.card?.exp_year,
-          isDefault: true,
-        },
-        providerCustomerId,
-        providerId: input.t.ctx.provider.id,
-      });
-    }
-  }
+  if (input.t.harness.id === "stripe") {
+    const stripeClient = new Stripe(env.E2E_STRIPE_SK, { maxNetworkRetries: 3 });
+    await syncStripePaymentMethod({
+      database: input.t.ctx.database,
+      providerCustomerId,
+      providerId: input.t.ctx.provider.id,
+      stripeClient,
+    });
+  }
```

Plus the corresponding `import { syncStripePaymentMethod } from "./harness/stripe";` at the top.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/setup.ts` around lines 187 - 210, The setup code is duplicating Stripe-specific sync logic and re-creating the Stripe client; replace that by calling the existing harness helper (syncStripePaymentMethod) or, preferably, move the post-direct-subscription DB sync into the harness contract so setupCustomerForDirectSubscription (or createTestCustomerWithPM) stays provider-agnostic; specifically, remove the Stripe branch that uses Stripe/pm/data and syncPaymentMethodByProviderCustomer, import and call syncStripePaymentMethod from e2e/test-utils/harness/stripe.ts (or add a new method on the Stripe harness that accepts the PayKit DB/context and does the sync internally), and rely on harness.validateEnv()/createStripeHarness() for the Stripe secret instead of env.E2E_STRIPE_SK!.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/test-utils/hub.ts`:
- Around line 54-87: The async server handler created in createServer needs
robust try/catch(s) so JSON.parse(body) and any other throws (e.g., in the
for-await read, dropExpired, forwardEvent) don't cause unhandled rejections or
hanging clients; wrap the handler body in a top-level try/catch that logs the
error and ensures res.end() is always called, and add a smaller try/catch around
JSON.parse for the "/_hub/register" and "/_hub/unregister" branches to return a
400 with a clear message on malformed JSON, while other unexpected errors return
a 500; reference the existing createServer callback, JSON.parse calls in the
"/_hub/register" and "/_hub/unregister" blocks, and operations using registry,
buffers, dropExpired, and forwardEvent when adding the error handling.
- Around line 21-34: The hub-only Stripe extractor extractStripeCustomerId is
being used while registerCustomer is invoked unconditionally, so non-Stripe
providers (e.g., harness.id !== "stripe") never get webhooks through the hub;
fix by either (A) guarding the registration call so registerCustomer is only
invoked when harness.id === "stripe" (mirror existing guards around
setupCustomerForDirectSubscription and applyTestingOverrides), or (B) make the
webhook handling provider-aware by adding a provider-specific extractor on the
harness (e.g., harness.extractCustomerId) and replacing calls to
extractStripeCustomerId with harness.extractCustomerId so Polar and others can
return their customer IDs; locate the unconditional registerCustomer invocation
and either wrap it in the harness.id === "stripe" check or replace the extractor
usage with a harness-provided method and implement that in the Polar harness.
In `@package.json`:
- Line 30: The package.json "prepare" script currently calls "bunx
simple-git-hooks || true", which will fetch simple-git-hooks from the registry
on every install (since it was removed from devDependencies) and also silences
real failures; fix by either re-adding "simple-git-hooks" to devDependencies (so
the "prepare" script uses the local package) or by changing the prepare script
to conditionally run only when not in CI (e.g., guard with an env var) and
remove the "|| true" so failures surface; update package.json entries
"devDependencies" to include "simple-git-hooks" or modify the "prepare" script
to check CI before invoking "bunx simple-git-hooks" and drop the
failure-silencing "|| true".
- Line 22: The npm script "test:unit" currently invokes Vitest via a hardcoded
node path; change it to use the package bin shim by replacing "node
./node_modules/vitest/vitest.mjs run --config vitest.unit.config.ts" with the
bin invocation "vitest run --config vitest.unit.config.ts" so package managers
can resolve the correct executable (update the "test:unit" script entry in
package.json accordingly).
In `@packages/stripe/src/stripe-provider.ts`:
- Around line 122-138: The new logic changes providerProduct shape and breaks
webhook comparison; revert to the backward-compatible shape by ensuring
providerProduct is either null or an object containing only priceId (no
productId), and drop the dead typeof price === "string" branch; update the block
that computes providerPriceId/providerProductId (references: firstItem, price,
providerPriceId, providerProductId, providerProduct) so providerProduct is set
to { priceId: providerPriceId } when providerPriceId exists and leave it null
otherwise, ensuring existing stored { priceId } objects continue to match
incoming webhook payloads.
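The hub hardening suggested above (a top-level try/catch so a throw never leaves the client hanging, plus a 400 on malformed JSON) can be sketched like this; the `/_hub/register` route name matches the hub's described endpoint, while `safeJsonParse` and the handler body are illustrative, not the actual hub.ts code:

```typescript
import { createServer, type IncomingMessage, type ServerResponse } from "node:http";

// Narrow helper: malformed JSON maps to null instead of throwing.
function safeJsonParse<T>(body: string): T | null {
  try {
    return JSON.parse(body) as T;
  } catch {
    return null;
  }
}

const server = createServer(async (req: IncomingMessage, res: ServerResponse) => {
  try {
    let body = "";
    for await (const chunk of req) body += chunk; // may throw on aborted requests
    if (req.url === "/_hub/register") {
      const payload = safeJsonParse<{ providerCustomerId?: string; workerUrl?: string }>(body);
      if (!payload?.providerCustomerId) {
        res.writeHead(400).end("malformed JSON"); // clear client error, not a crash
        return;
      }
      // ...store mapping, drain buffered events...
      res.writeHead(200).end();
      return;
    }
    res.writeHead(404).end();
  } catch (err) {
    // Unexpected errors: log, and always terminate the response.
    console.error("hub handler error", err);
    if (!res.writableEnded) res.writeHead(500).end();
  }
});
```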
⛔ Files ignored due to path filters (1)

- `bun.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (44)

- .github/workflows/ci.yml
- e2e/cli/setup.ts
- e2e/cli/vitest.config.ts
- e2e/core/cancel/cancel-then-upgrade.test.ts
- e2e/core/cancel/downgrade-change-target.test.ts
- e2e/core/checkout/resubscribe-after-cancel.test.ts
- e2e/core/checkout/subscribe-paid-checkout.test.ts
- e2e/core/customer/default-free.test.ts
- e2e/core/entitlements/check-boolean.test.ts
- e2e/core/entitlements/check-metered.test.ts
- e2e/core/entitlements/stacked-metered.test.ts
- e2e/core/lifecycle/subscription.test.ts
- e2e/core/subscribe/cancel-end-of-cycle.test.ts
- e2e/core/subscribe/cancel-resume.test.ts
- e2e/core/subscribe/downgrade-scheduled.test.ts
- e2e/core/subscribe/downgrade-to-free.test.ts
- e2e/core/subscribe/renewal.test.ts
- e2e/core/subscribe/same-plan-noop.test.ts
- e2e/core/subscribe/subscribe-paid.test.ts
- e2e/core/subscribe/upgrade-immediate.test.ts
- e2e/core/webhook/duplicate-webhook.test.ts
- e2e/core/webhook/subscription-deleted.test.ts
- e2e/package.json
- e2e/smoke/harness/stripe.ts
- e2e/smoke/vitest.automated.config.ts
- e2e/smoke/vitest.config.ts
- e2e/smoke/vitest.manual.config.ts
- e2e/smoke/vitest.shared.ts
- e2e/test-utils/env.ts
- e2e/test-utils/harness/index.ts
- e2e/test-utils/harness/polar.ts
- e2e/test-utils/harness/stripe.ts
- e2e/test-utils/harness/types.ts
- e2e/test-utils/hub.ts
- e2e/test-utils/index.ts
- e2e/test-utils/products.ts
- e2e/test-utils/setup.ts
- e2e/vitest.config.ts
- landing/src/lib/source.ts
- landing/vercel.json
- package.json
- packages/paykit/src/customer/__tests__/customer.service.test.ts
- packages/stripe/src/stripe-provider.ts
- vitest.unit.config.ts
💤 Files with no reviewable changes (6)
- e2e/smoke/vitest.manual.config.ts
- e2e/smoke/vitest.config.ts
- e2e/smoke/vitest.automated.config.ts
- e2e/smoke/vitest.shared.ts
- e2e/smoke/harness/stripe.ts
- e2e/cli/vitest.config.ts
```typescript
function extractStripeCustomerId(body: string): string | null {
  try {
    const parsed = JSON.parse(body) as {
      data?: { object?: { id?: string; customer?: string | null; object?: string } };
    };
    const obj = parsed.data?.object;
    if (!obj) return null;
    if (obj.object === "customer" && typeof obj.id === "string") return obj.id;
    if (typeof obj.customer === "string") return obj.customer;
    return null;
  } catch {
    return null;
  }
}
```
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Look at Polar harness to see how/if it expects to receive webhooks via the hub.
fd -t f 'polar.ts' e2e
echo '--- polar harness file(s) above ---'
fd -t f 'polar.ts' e2e --exec rg -n -C2 'webhook|HUB|register|workerUrl|extract' {}
echo '--- where registerCustomer/unregisterCustomers are called ---'
rg -nP '\b(registerCustomer|unregisterCustomers)\s*\(' --type=ts -C2
echo '--- where extractStripeCustomerId is referenced ---'
rg -nP 'extractStripeCustomerId|extractCustomerId' --type=ts -C2
echo '--- vitest config showing globalSetup wiring ---'
fd -t f 'vitest.config.ts' e2e --exec cat {}Repository: getpaykit/paykit
Length of output: 3502
🏁 Script executed:
# Check the full setup.ts to see if registerCustomer is conditional on provider
sed -n '140,180p' e2e/test-utils/setup.ts
# Check if there are Polar-specific webhook routing or handling
rg -n 'polar|POLAR' e2e/test-utils/hub.ts -i
# Look for any Polar webhook expectations in test files
fd -t f -e test.ts e2e --exec rg -l 'polar.*webhook|webhook.*polar' {} \;
# Check if registerCustomer has any provider guards
rg -n 'registerCustomer' e2e --type=ts -B5 -A2
# Look at how harness interface defines webhook/provider handling
fd -t f 'harness.ts' e2e --exec cat {}Repository: getpaykit/paykit
Length of output: 4233
🏁 Script executed:
# Get the harness type definition to understand the interface
cat e2e/test-utils/harness/types.ts
# Check stripe harness implementation
cat e2e/test-utils/harness/stripe.ts
# Look at what webhookServer methods each harness might implement
rg -n 'webhook|forwardWebhook|routeWebhook' e2e/test-utils/harness --type=ts -A2
# Check if there's any per-provider webhook setup in the main test setup
rg -n 'harness\.' e2e/test-utils/setup.ts | head -30Repository: getpaykit/paykit
Length of output: 10065
🏁 Script executed:
# Check if HUB_URL or HUB_PORT is used in provider setup
rg -n 'HUB_URL|HUB_PORT|hubUrl' e2e --type=ts -B2 -A2
# Look at what webhook delivery mechanism each provider uses
rg -n 'webhook|WEBHOOK' e2e/test-utils/env.ts
# Check if the hub.ts exports HUB_URL and where it's used
rg -n 'export.*HUB' e2e/test-utils/hub.ts -A2
# Search for any webhook routing or setup in Polar tests
fd -t f -e test.ts e2e --exec basename {} \; | sort | uniq
# Check if any Polar test file expects webhook delivery
fd -t f 'polar' e2e --type=ts --exec rg -l 'webhook' {} \;
Repository: getpaykit/paykit
Length of output: 2531
🏁 Script executed:
# Check env.ts for webhook configuration
cat e2e/test-utils/env.ts
# Search for how webhookSecret or webhook configuration is used per provider
rg -n 'webhookSecret|webhook' e2e/test-utils/harness/polar.ts -B2 -A2
# Look for any Polar-specific webhook setup in the main test harness setup
rg -n 'createProviderConfig|webhookSecret' e2e/test-utils/setup.ts -B2 -A2
# Check if registerCustomer is truly called for both providers or if it's conditional
rg -n 'registerCustomer' e2e/test-utils/setup.ts -B10 -A3 | head -60
Repository: getpaykit/paykit
Length of output: 3481
Hub routing is Stripe-only, but webhook registration happens for all providers.
extractStripeCustomerId only parses Stripe event envelopes. Yet registerCustomer is called unconditionally at setup.ts:168 regardless of harness.id. For PROVIDER=polar, any webhook reaching the hub returns null at lines 21-34 and is silently dropped at lines 91-96, meaning Polar tests never receive webhooks via the hub despite registering for them.
This is inconsistent with other provider-specific setup: setupCustomerForDirectSubscription (setup.ts:187) and applyTestingOverrides are both guarded with if (harness.id === "stripe").
Either:
- Guard registerCustomer on harness.id === "stripe", or
- Make the webhook extractor provider-aware (e.g., harness.extractCustomerId(body) or separate handlers per provider).
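The second option — a provider-aware extractor on the harness — could look roughly like this hypothetical sketch. The ProviderHarness shape follows the review's suggestion; the Stripe envelope shown is the standard event shape, but the Polar envelope here is an assumption, not Polar's documented payload:

```typescript
// Hypothetical sketch: each harness supplies its own customer-ID extractor.
interface ProviderHarness {
  id: string;
  extractCustomerId(body: string): string | null;
}

const stripeHarness: ProviderHarness = {
  id: "stripe",
  extractCustomerId(body) {
    try {
      // Stripe event envelope: the customer ID sits at data.object.customer.
      const event = JSON.parse(body) as { data?: { object?: { customer?: string } } };
      return event.data?.object?.customer ?? null;
    } catch {
      return null;
    }
  },
};

const polarHarness: ProviderHarness = {
  id: "polar",
  extractCustomerId(body) {
    try {
      // Assumed Polar envelope shape — adjust to the real webhook payload.
      const event = JSON.parse(body) as { data?: { customer_id?: string } };
      return event.data?.customer_id ?? null;
    } catch {
      return null;
    }
  },
};
```

The hub would then call harness.extractCustomerId(body) instead of the Stripe-only helper, and malformed bodies degrade to null instead of throwing.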
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/test-utils/hub.ts` around lines 21 - 34, The hub-only Stripe extractor
extractStripeCustomerId is being used while registerCustomer is invoked
unconditionally, so non-Stripe providers (e.g., harness.id !== "stripe") never
get webhooks through the hub; fix by either (A) guarding the registration call
so registerCustomer is only invoked when harness.id === "stripe" (mirror
existing guards around setupCustomerForDirectSubscription and
applyTestingOverrides), or (B) make the webhook handling provider-aware by
adding a provider-specific extractor on the harness (e.g.,
harness.extractCustomerId) and replacing calls to extractStripeCustomerId with
harness.extractCustomerId so Polar and others can return their customer IDs;
locate the unconditional registerCustomer invocation and either wrap it in the
harness.id === "stripe" check or replace the extractor usage with a
harness-provided method and implement that in the Polar harness.
const server = createServer(async (req: IncomingMessage, res: ServerResponse) => {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const body = Buffer.concat(chunks).toString();
  const path = req.url ?? "/";

  if (path === "/_hub/register") {
    const { providerCustomerId, workerUrl } = JSON.parse(body) as {
      providerCustomerId: string;
      workerUrl: string;
    };
    registry.set(providerCustomerId, workerUrl);
    // Drain any buffered events for this customer
    dropExpired(providerCustomerId);
    const pending = buffers.get(providerCustomerId) ?? [];
    buffers.delete(providerCustomerId);
    for (const event of pending) {
      await forwardEvent(workerUrl, event).catch(() => {});
    }
    res.writeHead(204);
    res.end();
    return;
  }

  if (path === "/_hub/unregister") {
    const { providerCustomerIds } = JSON.parse(body) as { providerCustomerIds: string[] };
    for (const id of providerCustomerIds) {
      registry.delete(id);
      buffers.delete(id);
    }
    res.writeHead(204);
    res.end();
    return;
  }
Add error handling around JSON.parse and the async handler body.
The createServer callback is async, and JSON.parse(body) at lines 61 and 79 (and the for-await body read above) are unguarded. A malformed POST to /_hub/register or /_hub/unregister, or any throw inside the handler before res.end() is reached, becomes an unhandled promise rejection — which under --unhandled-rejections=throw (Node default in recent versions) will crash the test runner and tear down all parallel workers. Even if it doesn't crash, the client hangs until socket timeout.
🛡️ Suggested wrapper
const server = createServer(async (req: IncomingMessage, res: ServerResponse) => {
- const chunks: Buffer[] = [];
- for await (const chunk of req) chunks.push(chunk as Buffer);
- const body = Buffer.concat(chunks).toString();
- const path = req.url ?? "/";
-
- if (path === "/_hub/register") {
- const { providerCustomerId, workerUrl } = JSON.parse(body) as {
- providerCustomerId: string;
- workerUrl: string;
- };
+ try {
+ const chunks: Buffer[] = [];
+ for await (const chunk of req) chunks.push(chunk as Buffer);
+ const body = Buffer.concat(chunks).toString();
+ const path = req.url ?? "/";
+
+ if (path === "/_hub/register") {
+ const { providerCustomerId, workerUrl } = JSON.parse(body) as {
+ providerCustomerId: string;
+ workerUrl: string;
+ };
// ...rest unchanged...
}
// ...other branches unchanged...
+ } catch (error) {
+ if (!res.headersSent) res.writeHead(400);
+ res.end(error instanceof Error ? error.message : "hub handler error");
+ }
});
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const server = createServer(async (req: IncomingMessage, res: ServerResponse) => {
  try {
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);
    const body = Buffer.concat(chunks).toString();
    const path = req.url ?? "/";

    if (path === "/_hub/register") {
      const { providerCustomerId, workerUrl } = JSON.parse(body) as {
        providerCustomerId: string;
        workerUrl: string;
      };
      registry.set(providerCustomerId, workerUrl);
      // Drain any buffered events for this customer
      dropExpired(providerCustomerId);
      const pending = buffers.get(providerCustomerId) ?? [];
      buffers.delete(providerCustomerId);
      for (const event of pending) {
        await forwardEvent(workerUrl, event).catch(() => {});
      }
      res.writeHead(204);
      res.end();
      return;
    }

    if (path === "/_hub/unregister") {
      const { providerCustomerIds } = JSON.parse(body) as { providerCustomerIds: string[] };
      for (const id of providerCustomerIds) {
        registry.delete(id);
        buffers.delete(id);
      }
      res.writeHead(204);
      res.end();
      return;
    }
  } catch (error) {
    if (!res.headersSent) res.writeHead(400);
    res.end(error instanceof Error ? error.message : "hub handler error");
  }
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/test-utils/hub.ts` around lines 54 - 87, The async server handler created
in createServer needs robust try/catch(s) so JSON.parse(body) and any other
throws (e.g., in the for-await read, dropExpired, forwardEvent) don't cause
unhandled rejections or hanging clients; wrap the handler body in a top-level
try/catch that logs the error and ensures res.end() is always called, and add a
smaller try/catch around JSON.parse for the "/_hub/register" and
"/_hub/unregister" branches to return a 400 with a clear message on malformed
JSON, while other unexpected errors return a 500; reference the existing
createServer callback, JSON.parse calls in the "/_hub/register" and
"/_hub/unregister" blocks, and operations using registry, buffers, dropExpired,
and forwardEvent when adding the error handling.
| "dev": "turbo run dev", | ||
| "dev:demo": "bun --filter demo dev", | ||
| "dev:web": "bun --filter landing dev", | ||
| "test:unit": "node ./node_modules/vitest/vitest.mjs run --config vitest.unit.config.ts", |
🧩 Analysis chain
🌐 Web query:
Does vitest 4.0 ship a vitest.mjs at the package root, or is the CLI entry under dist/?
💡 Result:
Vitest 4.0 ships a vitest.mjs file at the package root. The CLI entry is under dist/ (specifically dist/cli.js).
🏁 Script executed:
# Check vitest's package.json structure to understand the bin entry
npm info vitest@4.0 bin 2>/dev/null || echo "npm info unavailable, checking via unpkg"
# Also look for documentation on the recommended way to run vitest
web_query="vitest package.json bin entry how to invoke"
echo "Need to verify: bin entry point for vitest"Repository: getpaykit/paykit
Length of output: 834
🌐 Web query:
vitest package.json bin entry point how to run
💡 Result:
To run Vitest using the bin entry point from package.json, first install it as a dev dependency: npm install -D vitest This creates a node_modules/.bin/vitest executable symlink. The bin entry point in Vitest's package.json (in the monorepo packages/vitest/package.json) points to the CLI entry, typically "dist/cli.js" or similar built file (e.g., from rollup.config.js building cli-api.ts). To run it: 1. Via npm scripts in your package.json: { "scripts": { "test": "vitest", "test:run": "vitest run", "coverage": "vitest run --coverage" } } Then: npm test 2. Directly via npx (no install needed, downloads/runs latest): npx vitest 3. Directly via local bin (after install): ./node_modules/.bin/vitest Vitest CLI enters watch mode by default in dev, run mode in CI. Use --help for options. Official docs confirm this setup.
Citations:
- 1: https://vitest.dev/guide/
- 2: https://vitest.dev/guide/cli
- 3: https://main.vitest.dev/guide
- 4: https://www.npmjs.com/package/vitest
- 5: https://github.com/vitest-dev/vitest
- 6: https://github.com/vitest-dev/vitest/tree/main/packages/vitest
- 7: https://github.com/vitest-dev/vitest/blob/main/packages/vitest/src/node/core.ts
🏁 Script executed:
# Check if there are any discussions about node_modules path fragility with vitest
echo "Searching for vitest path stability issues across versions..."Repository: getpaykit/paykit
Length of output: 122
🌐 Web query:
vitest node_modules path hoisting bun workspace fragile vitest.mjs
💡 Result:
In Bun workspaces (monorepos), Vitest encounters fragility due to Bun's node_modules hoisting behavior and its interaction with Vite's dependency resolution and optimization. Key issues:
- Bun's default hoisting (pre-1.3) or isolated installs place dependencies in root node_modules/.bun/, creating non-flat structures that Vite/Vitest struggles to optimize properly, leading to CJS/ESM conversion failures in browser-like test environments or dev servers [3,24].
- Transitive CJS dependencies in .bun/ paths bypass Vite's pre-bundling, causing runtime SyntaxErrors when served raw via /@fs/ URLs [3].
- TypeScript resolution picks wrong @types/* versions from alphabetically-ordered hoisted symlinks in .bun/node_modules/ [17].
- Vitest worker pools (forks/threads) have Bun compatibility gaps like child_process.unref bugs, MessagePort handling, and inspector support, causing hangs or timeouts especially in run mode or CI [12,13,22,23,31].
- Phantom dependencies from hoisted flat node_modules allow cross-workspace access, breaking isolation [14,18].

Solutions:
1. Use isolated installs to prevent hoisting and phantom deps (default for new workspaces in Bun 1.3+): bunfig.toml: [install] linker = "isolated". This creates per-package isolation via symlinks to .bun/ [2,18,24].
2. For Vite deps in isolated mode, manually include transitive CJS deps in vite.config.ts optimizeDeps.include with nested paths (fragile) [3].
3. Configure Vitest pool for Bun: pool: 'forks', poolOptions: { forks: { singleFork: true } } to avoid thread/minThreads issues [22].
4. Alias 'bun' imports in vitest.config.ts if mocking code imports Bun [9].
5. Set ssr.resolve.conditions: ['bun'] for runtime-specific subpath imports [10].
6. Use bun install --linker=hoisted if isolation causes Vite optimizer issues, but risks phantom deps [24].

Vitest works with Bun (4.0.7+) but requires these configs for workspace stability. Official Bun docs recommend isolated for monorepos [24].
No direct mentions of vitest.mjs, likely an internal Vitest worker file affected by path resolution in hoisted setups.
Citations:
- 1: how to prevent module hoisting in workspace? oven-sh/bun#10382
- 2: Bun workspace transitive CJS deps break Vite dev server (/@fs/.bun/ paths skip ESM conversion) oven-sh/bun#29368
- 3: "bun install" inside the monorepo installs all "node_modules" from "packages" into the root folder oven-sh/bun#5688
- 4: @types/* packages in .bun/node_modules/ resolved based on alphabetical workspace package name ordering oven-sh/bun#26140
- 5: Bun Workspaces Considers All Packages From Projects Valid in Others oven-sh/bun#16656
- 6: 4.0.0-beta post 10 partially breaks for Bun runtime vitest-dev/vitest#8650
- 7: vitest hangs oven-sh/bun#11268
- 8: https://bun.sh/docs/pm/isolated-installs
Use the vitest bin entry instead of invoking node_modules directly.
node ./node_modules/vitest/vitest.mjs hardcodes a direct path to an internal implementation detail. The supported invocation is the vitest bin shim that package managers resolve via the bin entry, creating a symlink at node_modules/.bin/vitest. This indirection ensures compatibility across package manager hoisting strategies and prevents PATH fragility.
♻️ Suggested change
- "test:unit": "node ./node_modules/vitest/vitest.mjs run --config vitest.unit.config.ts",
+ "test:unit": "vitest run --config vitest.unit.config.ts",📝 Committable suggestion
| "test:unit": "node ./node_modules/vitest/vitest.mjs run --config vitest.unit.config.ts", | |
| "test:unit": "vitest run --config vitest.unit.config.ts", |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` at line 22, The npm script "test:unit" currently invokes Vitest
via a hardcoded node path; change it to use the package bin shim by replacing
"node ./node_modules/vitest/vitest.mjs run --config vitest.unit.config.ts" with
the bin invocation "vitest run --config vitest.unit.config.ts" so package
managers can resolve the correct executable (update the "test:unit" script entry
in package.json accordingly).
| "release": "turbo build --filter=./packages/* && bumpp", | ||
| "release:canary": "turbo build --filter=./packages/* && bumpp --tag canary", | ||
| "prepare": "simple-git-hooks", | ||
| "prepare": "bunx simple-git-hooks || true", |
prepare script: removed simple-git-hooks from devDeps but still invokes it via bunx, with errors silenced.
Two concerns:
- simple-git-hooks is no longer in devDependencies, so bunx simple-git-hooks will hit the registry on every bun install (including every CI run of the new lint/typecheck/unit jobs) instead of resolving locally.
- || true swallows real failures. If git hook installation breaks (e.g., script signature change), no one will notice until commits start landing un-linted.
If the goal was "skip in CI", the more targeted form is to gate on a CI env var; otherwise put simple-git-hooks back in devDependencies so the install is local and deterministic:
♻️ Suggested change
- "prepare": "bunx simple-git-hooks || true",
+ "prepare": "[ -n \"$CI\" ] && exit 0; simple-git-hooks",…and re-add "simple-git-hooks": "<version>" under devDependencies.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@package.json` at line 30, The package.json "prepare" script currently calls
"bunx simple-git-hooks || true", which will fetch simple-git-hooks from the
registry on every install (since it was removed from devDependencies) and also
silences real failures; fix by either re-adding "simple-git-hooks" to
devDependencies (so the "prepare" script uses the local package) or by changing
the prepare script to conditionally run only when not in CI (e.g., guard with an
env var) and remove the "|| true" so failures surface; update package.json
entries "devDependencies" to include "simple-git-hooks" or modify the "prepare"
script to check CI before invoking "bunx simple-git-hooks" and drop the
failure-silencing "|| true".
const price = firstItem?.price;
const providerPriceId = typeof price === "string" ? price : price?.id;
const providerProductId =
  price && typeof price !== "string"
    ? typeof price.product === "string"
      ? price.product
      : (price.product?.id ?? null)
    : null;
const periodStart = getEarliestPeriodStart(subscription);
const periodEnd = getLatestPeriodEnd(subscription);

let providerProduct: Record<string, string> | null = null;
if (providerPriceId && providerProductId) {
  providerProduct = { priceId: providerPriceId, productId: providerProductId };
} else if (providerPriceId) {
  providerProduct = { priceId: providerPriceId };
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find all consumers of providerProduct to confirm the new shape is handled safely.
rg -nP -C3 '\bproviderProduct\b' --type=ts
rg -nP -C3 'existingProviderProduct\b' --type=ts
Repository: getpaykit/paykit
Length of output: 28262
🏁 Script executed:
# Check Stripe SDK type definitions in the codebase or package.json
grep -r "SubscriptionItem" --include="*.ts" --include="*.d.ts" packages/stripe/
# Look at the full context of the key comparison in subscription.service.ts
sed -n '330,360p' packages/paykit/src/subscription/subscription.service.ts
# Check if there are any type imports from stripe SDK
grep -n "import.*Stripe" packages/stripe/src/stripe-provider.ts | head -5Repository: getpaykit/paykit
Length of output: 1188
🏁 Script executed:
# Look for web search to verify Stripe SDK types - check if SubscriptionItem.price can be string
# First, let's check what types are actually being used in the codebase
fd -e ts -e tsx packages/stripe | xargs grep -l "price" | head -3
Repository: getpaykit/paykit
Length of output: 426
🌐 Web query:
Stripe API SubscriptionItem price type - can it be a string or is it always an object
💡 Result:
In the Stripe API, the price field in a SubscriptionItem object is always an object representing the full Price object, not a string. Official Stripe documentation consistently describes it as price (object) across multiple API versions (e.g., 2024-10-28.acacia, 2025-11-17.clover, 2025-12-15.clover). Example responses from create, retrieve, and list endpoints show price as a nested object with fields like id, object: "price", active, billing_scheme, etc. While the input parameter for creating/updating a subscription item accepts price as a string (Price ID), the returned SubscriptionItem always expands it to the full object. No sources indicate it can be a plain string in the response.
Citations:
- 1: https://docs.stripe.com/api/subscription_items/object
- 2: https://stripe.com/docs/api/subscription_items/object
- 3: https://docs.stripe.com/api/subscription_items/object.md?api-version=2025-03-31.basil
- 4: https://docs.stripe.com/api/subscription_items/create?api-version=2025-09-30.preview
- 5: https://docs.stripe.com/api/subscription_items/object?api-version=2025-10-29.clover
- 6: https://docs.stripe.com/api/subscription_items/object?api-version=2025-12-15.clover
- 7: https://docs.stripe.com/api/subscription_items/create
- 8: https://docs.stripe.com/api/subscription_items/object?api-version=2024-10-28.acacia
- 9: https://docs.stripe.com/api/subscription_items/create?api-version=2025-11-17.clover
- 10: https://docs.stripe.com/api/subscription_items/object?api-version=2025-11-17.clover
- 11: https://stripe.com/docs/api/subscription_items/retrieve?disable_syntax_highlighting=1&expand_all_subspecs=1&lang=python
- 12: https://stripe.com/docs/api/subscription_items/list%3Fdisable_syntax_highlighting%3D1%26expand_all_subspecs%3D1%26lang%3Dpython
- 13: https://docs.stripe.com/api/subscription_items/retrieve?api-version=2025-09-30.preview
Behavior change in providerProduct shape — webhook processing at risk.
The providerProduct shape changed from always being { priceId } | null to { priceId, productId } | null. At subscription.service.ts lines 338–350, the webhook handler compares Object.keys() of both objects. When checkoutProviderProduct has 2 keys but storedProviderProduct has 1, this throws PROVIDER_WEBHOOK_INVALID and breaks webhook processing. On first deploy, existing subscriptions with { priceId } will fail to process incoming webhook events that now send { priceId, productId }.
Suggested refactor — the typeof price === "string" branch is dead code. Per Stripe API docs, SubscriptionItem.price is always an object in responses:
♻️ Simplification
- const price = firstItem?.price;
- const providerPriceId = typeof price === "string" ? price : price?.id;
- const providerProductId =
- price && typeof price !== "string"
- ? typeof price.product === "string"
- ? price.product
- : (price.product?.id ?? null)
- : null;
+ const price = firstItem?.price;
+ const providerPriceId = price?.id;
+ const productRef = price?.product;
+ const providerProductId =
+ typeof productRef === "string" ? productRef : (productRef?.id ?? null);
📝 Committable suggestion
const price = firstItem?.price;
const providerPriceId = price?.id;
const productRef = price?.product;
const providerProductId =
  typeof productRef === "string" ? productRef : (productRef?.id ?? null);
const periodStart = getEarliestPeriodStart(subscription);
const periodEnd = getLatestPeriodEnd(subscription);

let providerProduct: Record<string, string> | null = null;
if (providerPriceId && providerProductId) {
  providerProduct = { priceId: providerPriceId, productId: providerProductId };
} else if (providerPriceId) {
  providerProduct = { priceId: providerPriceId };
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/stripe/src/stripe-provider.ts` around lines 122 - 138, The new logic
changes providerProduct shape and breaks webhook comparison; revert to the
backward-compatible shape by ensuring providerProduct is either null or an
object containing only priceId (no productId), and drop the dead typeof price
=== "string" branch; update the block that computes
providerPriceId/providerProductId (references: firstItem, price,
providerPriceId, providerProductId, providerProduct) so providerProduct is set
to { priceId: providerPriceId } when providerPriceId exists and leave it null
otherwise, ensuring existing stored { priceId } objects continue to match
incoming webhook payloads.
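To make the failure mode concrete, here is a sketch of a symmetric key-set comparison of the kind described above — sameKeySet is a hypothetical name, not the identifier used in subscription.service.ts:

```typescript
// Strict symmetric comparison: both objects must have exactly the same keys.
// A stored { priceId } against an incoming { priceId, productId } fails.
function sameKeySet(a: Record<string, string>, b: Record<string, string>): boolean {
  const keysA = Object.keys(a).sort();
  const keysB = Object.keys(b).sort();
  return keysA.length === keysB.length && keysA.every((key, i) => key === keysB[i]);
}

const storedProviderProduct = { priceId: "price_123" }; // persisted before the change
const checkoutProviderProduct = { priceId: "price_123", productId: "prod_456" }; // new shape
```

Under a check like this, the mismatched shapes above reject the webhook — which is why either both emit paths must agree on the shape or the comparison must tolerate a subset.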
Actionable comments posted: 3
🧹 Nitpick comments (2)
.github/workflows/e2e.yml (2)
34-38: Scope test-only secrets to the steps that need them.
E2E_STRIPE_SK / E2E_STRIPE_WHSEC are job-level env vars, so they're exported to every step — including actions/checkout, setup-node, setup-bun, and bun install. None of those need them, and broader exposure increases the blast radius if any third-party action ever logs the environment. Same hygiene reasoning applies to CF_TUNNEL_TOKEN, which is already correctly scoped to two specific steps.
Consider moving the Stripe secrets onto the Run Stripe E2E suite step (and Validate Stripe E2E configuration if you still want it validated there). TEST_DATABASE_URL and CF_TUNNEL_HOST can stay at the job level since they aren't sensitive.
🤖 Prompt for AI Agents
Run Stripe E2E suitestep (andValidate Stripe E2E configurationif you still want it validated there).TEST_DATABASE_URLandCF_TUNNEL_HOSTcan stay at the job level since they aren't sensitive.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/e2e.yml around lines 34 - 38, Job-level environment exports include sensitive Stripe secrets (E2E_STRIPE_SK, E2E_STRIPE_WHSEC) which expose them to every step; move those two variables out of the job env block and add them only to the specific steps that need them (the "Run Stripe E2E suite" step and optionally the "Validate Stripe E2E configuration" step), keeping TEST_DATABASE_URL and CF_TUNNEL_HOST at the job level and leaving CF_TUNNEL_TOKEN scoped as it is; ensure the step-level env keys exactly match E2E_STRIPE_SK and E2E_STRIPE_WHSEC so the tests still receive them.
58-58: Hoist CF_TUNNEL_TOKEN to job-level env to avoid duplication.
CF_TUNNEL_TOKEN: ${{ secrets.CF_TUNNEL_TOKEN_STRIPE }} is repeated on both the validation step and the test step. If you keep secrets job-scoped (or even just promote this one), you remove a footgun where someone adds a third consumer and forgets the third declaration. (Not a functional issue today.)

Also applies to: 79-79
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/e2e.yml at line 58, Move the CF_TUNNEL_TOKEN environment declaration up from the individual steps into the job-level env in the e2e GitHub Actions workflow so both consumers (and any future steps) inherit it; specifically, remove CF_TUNNEL_TOKEN: ${{ secrets.CF_TUNNEL_TOKEN_STRIPE }} from the validation and test steps and add a single job-level env entry CF_TUNNEL_TOKEN: ${{ secrets.CF_TUNNEL_TOKEN_STRIPE }} in the e2e.yml job definition so the token is declared once for the job.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/e2e.yml:
- Around line 56-66: The validation step "Validate Stripe E2E configuration"
misses CF_TUNNEL_HOST and therefore lets vars.CF_TUNNEL_HOST_STRIPE fall back to
the hardcoded 't1.paykit.sh'; update that validation block so CF_TUNNEL_HOST is
included in the for-name list (or remove the default assignment) and ensure the
workflow checks vars.CF_TUNNEL_HOST_STRIPE/CF_TUNNEL_HOST explicitly so a
missing or empty host fails fast rather than silently using 't1.paykit.sh'.
- Around line 94-102: Replace the brittle fixed sleep with a bounded readiness
poll that verifies the tunnel is actually routable: implement a retry loop (e.g.
up to a timeout like 60s with short sleeps) that either (a) tails
cloudflared.log and waits for the "Registered tunnel connection" message or (b)
repeatedly curls https://$CF_TUNNEL_HOST/health (or another 204 endpoint) until
it returns a successful response; keep checking that the process identified by
cloudflared_pid is alive during the loop, and on timeout print cloudflared.log
(if present) and exit non‑zero so tests don't run against an unready tunnel.
- Line 82: Remove the ignored --url flag from the cloudflared invocation: update
the command that currently mentions "cloudflared tunnel --url
http://127.0.0.1:4567 run --token \"$CF_TUNNEL_TOKEN\"" to call the
remotely-managed tunnel with "cloudflared tunnel run --token
\"$CF_TUNNEL_TOKEN\"" (i.e., drop the --url argument) and ensure the named
tunnel’s ingress in the Cloudflare dashboard is configured to forward to
http://127.0.0.1:4567 so the workflow routes traffic correctly.
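The bounded readiness poll described above reduces to a small generic helper. This is a sketch under the assumption that the health check (log grep or HTTP probe) can be expressed as an async boolean; pollUntilReady is a hypothetical name:

```typescript
// Retry a readiness check until it passes or a deadline expires, instead of
// a fixed sleep. The check could probe the tunnel's health endpoint or grep
// cloudflared.log for "Registered tunnel connection".
async function pollUntilReady(
  check: () => Promise<boolean>,
  timeoutMs: number,
  intervalMs: number,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}
```

On a false return the workflow should dump cloudflared.log and exit non-zero rather than start the tests against an unready tunnel.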
---
Nitpick comments:
In @.github/workflows/e2e.yml:
- Around line 34-38: Job-level environment exports include sensitive Stripe
secrets (E2E_STRIPE_SK, E2E_STRIPE_WHSEC) which expose them to every step; move
those two variables out of the job env block and add them only to the specific
steps that need them (the "Run Stripe E2E suite" step and optionally the
"Validate Stripe E2E configuration" step), keeping TEST_DATABASE_URL and
CF_TUNNEL_HOST at the job level and leaving CF_TUNNEL_TOKEN scoped as it is;
ensure the step-level env keys exactly match E2E_STRIPE_SK and E2E_STRIPE_WHSEC
so the tests still receive them.
- Line 58: Move the CF_TUNNEL_TOKEN environment declaration up from the
individual steps into the job-level env in the e2e GitHub Actions workflow so
both consumers (and any future steps) inherit it; specifically, remove
CF_TUNNEL_TOKEN: ${{ secrets.CF_TUNNEL_TOKEN_STRIPE }} from the validation and
test steps and add a single job-level env entry CF_TUNNEL_TOKEN: ${{
secrets.CF_TUNNEL_TOKEN_STRIPE }} in the e2e.yml job definition so the token is
declared once for the job.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: cebe359d-75ac-489e-972a-d258b797840d
📒 Files selected for processing (1)
.github/workflows/e2e.yml
CF_TUNNEL_TOKEN: ${{ secrets.CF_TUNNEL_TOKEN_STRIPE }}
run: |
  set -euo pipefail
  cloudflared tunnel --url http://127.0.0.1:4567 run --token "$CF_TUNNEL_TOKEN" > cloudflared.log 2>&1 &
🧩 Analysis chain
🌐 Web query:
Does cloudflared tunnel run --token honor a --url flag, or is the origin URL always taken from the remote (dashboard) tunnel configuration?
💡 Result:
No, cloudflared tunnel run --token does not honor the --url flag. The origin URL is always taken from the remote tunnel configuration in the Cloudflare dashboard. When using --token, the command connects to a remotely-managed tunnel. Cloudflared fetches the configuration (including origin services) from the Cloudflare dashboard/API, overriding any local --url flag. This is confirmed in official documentation listing supported flags for --token (logging, etc., but no --url) and a Cloudflare GitHub issue where --token takes priority, pulling config like {"ingress":[{"service":"http://localhost:8080"}]} despite --url http://127.0.0.1:8081. The --url flag is for quick tunnels (cloudflared tunnel --url ...) or locally-managed tunnels (with config file or named tunnel ID without token). For remotely-managed tunnels, update the origin in the dashboard under the tunnel's public hostname routes.
Citations:
- 1: https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/run-parameters/
- 2: https://developers.cloudflare.com/tunnel/advanced/run-parameters/
- 3: The --url command line option seems to have no effect 🐛 cloudflare/cloudflared#998
- 4: https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/cloudflared-parameters/origin-parameters/
Remove --url flag; it is silently ignored when used with run --token.
cloudflared tunnel run --token uses a remotely-managed tunnel whose origin is configured in the Cloudflare dashboard—the --url parameter is not honored and has no effect. The ingress URL is determined solely by the tunnel configuration on the dashboard. Remove the flag and ensure the named tunnel is configured to route to http://127.0.0.1:4567 in the Cloudflare dashboard.
Proposed fix
- cloudflared tunnel --url http://127.0.0.1:4567 run --token "$CF_TUNNEL_TOKEN" > cloudflared.log 2>&1 &
+ cloudflared tunnel run --token "$CF_TUNNEL_TOKEN" > cloudflared.log 2>&1 &
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/e2e.yml at line 82, Remove the ignored --url flag from the
cloudflared invocation: update the command that currently mentions "cloudflared
tunnel --url http://127.0.0.1:4567 run --token \"$CF_TUNNEL_TOKEN\"" to call the
remotely-managed tunnel with "cloudflared tunnel run --token
\"$CF_TUNNEL_TOKEN\"" (i.e., drop the --url argument) and ensure the named
tunnel’s ingress in the Cloudflare dashboard is configured to forward to
http://127.0.0.1:4567 so the workflow routes traffic correctly.
Actionable comments posted: 1
🧹 Nitpick comments (2)
e2e/test-utils/harness/stripe.ts (2)
2-2: Use separated `import type` style.

The repo guideline mandates separated `import type` statements; line 2 mixes a value import (`chromium`) with inline `type` imports (`type Locator`, `type Page`). Other imports in this file (lines 5, 8) already follow the separated style.

♻️ Proposed fix

-import { chromium, type Locator, type Page } from "playwright";
+import { chromium } from "playwright";
+import type { Locator, Page } from "playwright";

As per coding guidelines: "Use `import type` with separated style and Node.js protocol (`node:fs`, `node:path`) for built-in modules".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/harness/stripe.ts` at line 2, The mixed import on the top line should be split into a runtime value import and a type-only import: keep the runtime import for the Playwright browser (chromium) as a regular import and move Locator and Page into a separate import type statement (e.g., import { chromium } from "playwright"; import type { Locator, Page } from "playwright"); also ensure any built-in modules elsewhere in this file follow the Node.js protocol (node:fs, node:path) as per repo guidelines.
102-107: Deduplicate env validation.

The same `E2E_STRIPE_SK`/`E2E_STRIPE_WHSEC` check appears in both the factory body (lines 105-107) and `validateEnv` (lines 239-241). Consider defining `validateEnv` once and calling it from the factory so the message and conditions can never drift.

♻️ Proposed fix

 export function createStripeHarness(): ProviderHarness {
-  const secretKey = env.E2E_STRIPE_SK;
-  const webhookSecret = env.E2E_STRIPE_WHSEC;
-  if (!secretKey || !webhookSecret) {
-    throw new Error("E2E_STRIPE_SK and E2E_STRIPE_WHSEC must be set");
-  }
+  const validateEnv = () => {
+    if (!env.E2E_STRIPE_SK || !env.E2E_STRIPE_WHSEC) {
+      throw new Error("E2E_STRIPE_SK and E2E_STRIPE_WHSEC must be set");
+    }
+  };
+  validateEnv();
+  const secretKey = env.E2E_STRIPE_SK;
+  const webhookSecret = env.E2E_STRIPE_WHSEC;
   const stripeClient = new Stripe(secretKey, { maxNetworkRetries: 3 });
   return {
     ...
-    validateEnv() {
-      if (!env.E2E_STRIPE_SK || !env.E2E_STRIPE_WHSEC) {
-        throw new Error("E2E_STRIPE_SK and E2E_STRIPE_WHSEC must be set");
-      }
-    },
+    validateEnv,
   };
 }

Also applies to: 238-242
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/harness/stripe.ts` around lines 102 - 107, The E2E_STRIPE_SK / E2E_STRIPE_WHSEC environment validation is duplicated in createStripeHarness and validateEnv; remove the inline check from createStripeHarness and instead call the existing validateEnv function (or consolidate the check into a single validateEnv implementation) from within createStripeHarness so there's one source of truth for the error message and conditions; update validateEnv (the function referenced around lines 239–241) to perform the exact same checks and throw the canonical Error("E2E_STRIPE_SK and E2E_STRIPE_WHSEC must be set") so the factory (createStripeHarness) simply obtains secretKey/webhookSecret after validateEnv has run.
Run ID: 01772cd6-81df-437b-a58b-142d003659d7
📒 Files selected for processing (1)
e2e/test-utils/harness/stripe.ts
async completeCheckout(url: string) {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();

  try {
    await page.goto(url, { waitUntil: "domcontentloaded" });
    await page.waitForLoadState("load").catch(() => {});

    // Stripe's hosted checkout uses custom inputs that require per-key events;
    // fill() does not dispatch them correctly, so use pressSequentially. In
    // CI Stripe sometimes renders these fields inside iframes rather than as
    // top-level inputs, so search both the page and all frames.
    await fillStripeField(page, stripeCardNumberSelectors, "4242424242424242", 60_000);
    await fillStripeField(page, stripeCardExpirySelectors, "1234", 30_000);
    await fillStripeField(page, stripeCardCvcSelectors, "123", 30_000);

    const billingName = await maybeGetVisibleLocator(page, stripeBillingNameSelectors, 5_000);
    if (billingName) {
      await billingName.click();
      await billingName.pressSequentially("Test Customer");
    }

    const submitBtn = await waitForVisibleLocator(page, stripeSubmitSelectors, 30_000);
    await submitBtn.click({ force: true });

    // Wait for Stripe to navigate away from the checkout page (success redirect
    // or embedded confirmation). Don't fail the test if this times out — the
    // webhook poll downstream is the real signal.
    await page
      .waitForURL((u) => !u.toString().includes("checkout.stripe.com"), {
        timeout: 60_000,
      })
      .catch(() => {});
  } finally {
    await browser.close();
  }
},
Browser leak if newPage() throws.
chromium.launch() (line 185) and browser.newPage() (line 186) both run before the try on line 188. If newPage() rejects, browser is never closed and a Chromium process is left dangling — over many CI runs this can accumulate and cause flakes.
🔒 Proposed fix
async completeCheckout(url: string) {
const browser = await chromium.launch({ headless: true });
- const page = await browser.newPage();
-
try {
+ const page = await browser.newPage();
     await page.goto(url, { waitUntil: "domcontentloaded" });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@e2e/test-utils/harness/stripe.ts` around lines 184 - 220, completeCheckout
can leak a Chromium process if browser.newPage() throws because browser is
created before the try; ensure the browser is always closed by either moving
newPage() inside the try block after launching or initializing browser to null
and adding a catch/finally that closes browser when newPage() fails; update the
completeCheckout function to reference chromium.launch() and browser.newPage()
so that any error from newPage() triggers browser.close() (or closes a non-null
browser) in the finally/catch path.
This reverts commit f16aa24.
Actionable comments posted: 1
♻️ Duplicate comments (1)
e2e/test-utils/harness/stripe.ts (1)
92-94: ⚠️ Potential issue | 🟡 Minor

Browser leak if `newPage()` throws.

`browser.newPage()` on line 94 still runs before the `try` on line 96, so a rejection there leaves the launched Chromium dangling. Move `newPage()` inside the `try` block (the first action after launch) so the `finally { browser.close() }` always runs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/harness/stripe.ts` around lines 92 - 94, In completeCheckout, newPage() is called before the try, so if newPage() throws the launched Chromium (browser) leaks; fix by moving the call to browser.newPage() inside the existing try block immediately after launch (i.e., declare const page = await browser.newPage() as the first action in the try), ensuring the finally block that calls browser.close() always runs and cleans up the browser even if newPage() rejects.
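The shape this comment asks for (acquire the browser first, then run everything fallible, including page creation, inside the try) can be sketched without Playwright. The `FakeBrowser` interface and `withBrowser` helper below are illustrative stand-ins, not the real Playwright API:

```typescript
// Illustrative stand-in for the small browser surface used here.
interface FakeBrowser {
  newPage(): Promise<string>;
  close(): Promise<void>;
}

// Leak-safe shape: only the outer acquisition happens before `try`;
// newPage() runs inside it, so a rejection there still reaches `finally`.
async function withBrowser<T>(
  launch: () => Promise<FakeBrowser>,
  fn: (page: string) => Promise<T>,
): Promise<T> {
  const browser = await launch();
  try {
    const page = await browser.newPage(); // failure here still closes the browser
    return await fn(page);
  } finally {
    await browser.close();
  }
}
```

Structuring `completeCheckout` this way guarantees `close()` runs even when page creation itself fails, which is exactly the gap the review flags.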
🧹 Nitpick comments (2)
e2e/test-utils/harness/stripe.ts (2)
118-119: Prefer `submitBtn.click()` unless there's a specific reason to bypass Playwright's auto-wait.

`evaluate((el) => el.click())` skips Playwright's actionability checks (visibility, stability, hit-testing); bypassing those checks is a common source of flakiness in hosted-checkout flows. If you've hit an overlay/iframe issue that forced this workaround, a short comment explaining why would help; otherwise:

Proposed change

- const submitBtn = page.locator(".SubmitButton-TextContainer").first();
- await submitBtn.evaluate((el) => (el as HTMLElement).click());
+ const submitBtn = page.locator(".SubmitButton-TextContainer").first();
+ await submitBtn.click();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/harness/stripe.ts` around lines 118 - 119, Replace the direct DOM click executed via submitBtn.evaluate((el) => (el as HTMLElement).click()) with Playwright's built-in action await submitBtn.click() to preserve auto-wait/actionability checks; update the code around the submitBtn locator in e2e/test-utils/harness/stripe.ts (the submitBtn constant and its click invocation) and if there is a specific reason to bypass Playwright (e.g., overlay/iframe workaround), keep the evaluate approach but add a short inline comment explaining that reason and reference the overlay/iframe details.
30-43: Unsafe type cast defeats compile-time checking of the override.

The double cast `(ctx.provider as unknown as Record<string, unknown>).createSubscription = …` bypasses type checking entirely. If the method's parameter or return shape diverges from the interface, this override won't catch it.

The `data` typing itself (including `providerProduct: Record<string, string>`) correctly matches the `PaymentProvider.createSubscription` signature, so no narrowing issue exists there. However, derive the override's parameter type directly from the interface (e.g., using `Parameters<typeof ctx.provider.createSubscription>[0]` or accessing it through the type system) to ensure the test harness stays aligned with the real contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/test-utils/harness/stripe.ts` around lines 30 - 43, The override on ctx.provider.createSubscription currently uses an unsafe double-cast ((ctx.provider as unknown as Record<string, unknown>)) which bypasses compile-time checks; change the override to preserve types by typing the parameter from the provider interface (e.g., derive the parameter type from ctx.provider.createSubscription using the type system such as Parameters<typeof ctx.provider.createSubscription>[0] or the PaymentProvider.createSubscription signature) and assign the function without the unknown cast so the compiler enforces the correct parameter/return shapes when implementing createSubscription in the test harness.
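A minimal sketch of the type-derived override the comment suggests; the `PaymentProvider` interface and field names below are illustrative, not paykit's actual contract:

```typescript
// Illustrative interface; paykit's real PaymentProvider has a different shape.
interface PaymentProvider {
  createSubscription(data: {
    customerId: string;
    providerProduct: Record<string, string>;
  }): Promise<{ id: string }>;
}

// Derive the argument type from the interface itself, so if the contract
// changes this override stops compiling instead of silently drifting.
type CreateSubscriptionArgs = Parameters<PaymentProvider["createSubscription"]>[0];

function applyTestOverride(provider: PaymentProvider): void {
  // Plain typed assignment: no `as unknown as Record<string, unknown>` cast,
  // so the compiler still checks the parameter and return shapes.
  provider.createSubscription = async (data: CreateSubscriptionArgs) => {
    return { id: `sub_test_${data.customerId}` };
  };
}
```

The same trick works in place with `Parameters<typeof ctx.provider.createSubscription>[0]` when the provider value (rather than the interface) is what's in scope.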
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/test-utils/harness/stripe.ts`:
- Around line 158-185: Remove the unused exported function
syncStripePaymentMethod: delete its entire implementation and export so the file
no longer declares syncStripePaymentMethod; confirm there are no remaining
references before deleting. Leave the underlying service
syncPaymentMethodByProviderCustomer intact (do not touch its implementation) and
ensure any imports used only by syncStripePaymentMethod are also removed if they
become unused after deletion.
---
Duplicate comments:
In `@e2e/test-utils/harness/stripe.ts`:
- Around line 92-94: In completeCheckout, newPage() is called before the try, so
if newPage() throws the launched Chromium (browser) leaks; fix by moving the
call to browser.newPage() inside the existing try block immediately after launch
(i.e., declare const page = await browser.newPage() as the first action in the
try), ensuring the finally block that calls browser.close() always runs and
cleans up the browser even if newPage() rejects.
---
Nitpick comments:
In `@e2e/test-utils/harness/stripe.ts`:
- Around line 118-119: Replace the direct DOM click executed via
submitBtn.evaluate((el) => (el as HTMLElement).click()) with Playwright's
built-in action await submitBtn.click() to preserve auto-wait/actionability
checks; update the code around the submitBtn locator in
e2e/test-utils/harness/stripe.ts (the submitBtn constant and its click
invocation) and if there is a specific reason to bypass Playwright (e.g.,
overlay/iframe workaround), keep the evaluate approach but add a short inline
comment explaining that reason and reference the overlay/iframe details.
- Around line 30-43: The override on ctx.provider.createSubscription currently
uses an unsafe double-cast ((ctx.provider as unknown as Record<string,
unknown>)) which bypasses compile-time checks; change the override to preserve
types by typing the parameter from the provider interface (e.g., derive the
parameter type from ctx.provider.createSubscription using the type system such
as Parameters<typeof ctx.provider.createSubscription>[0] or the
PaymentProvider.createSubscription signature) and assign the function without
the unknown cast so the compiler enforces the correct parameter/return shapes
when implementing createSubscription in the test harness.
Run ID: 55344cc3-be4c-4914-97f7-fc4165de41e8
📒 Files selected for processing (1)
e2e/test-utils/harness/stripe.ts
@CodeRabbit review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (3)
.github/workflows/e2e.yml (3)
152-152: ⚠️ Potential issue | 🟠 Major

Drop `--url` from `cloudflared tunnel run --token` (it is ignored here).

Line 152 and Line 256 pass `--url` alongside `run --token`; for remotely-managed token runs this flag is not honored, so the command is misleading.

Suggested fix

- cloudflared tunnel --url http://127.0.0.1:4567 run --token "$CF_TUNNEL_TOKEN" > cloudflared.log 2>&1 &
+ cloudflared tunnel run --token "$CF_TUNNEL_TOKEN" > cloudflared.log 2>&1 &
...
- cloudflared tunnel --url http://127.0.0.1:4567 run --token "$CF_TUNNEL_TOKEN" > cloudflared.log 2>&1 &
+ cloudflared tunnel run --token "$CF_TUNNEL_TOKEN" > cloudflared.log 2>&1 &

🌐 Web query: In current Cloudflare Tunnel docs, is `--url` a supported/effective flag for `cloudflared tunnel run --token` (remotely-managed tunnels), or is origin routing taken from tunnel ingress config?

Also applies to: 256-256
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/e2e.yml at line 152, Remove the unsupported --url flag from the cloudflared remotely-managed run invocation: update the command string "cloudflared tunnel --url http://127.0.0.1:4567 run --token \"$CF_TUNNEL_TOKEN\"" used in the workflow to omit --url (so use "cloudflared tunnel run --token \"$CF_TUNNEL_TOKEN\"" or the repo's canonical invocation), and apply the same change for the duplicate occurrence later in the file; origin routing should come from the tunnel ingress config rather than the --url flag.
164-172: ⚠️ Potential issue | 🟡 Minor

Replace fixed `sleep 5` with a bounded readiness poll.

Line 164 and Line 268 only gate on process liveness after a static sleep; this is flaky because tunnel registration may complete later than 5s.

Suggested fix

- sleep 5
-
- if ! kill -0 "$cloudflared_pid" 2>/dev/null; then
-   echo "::error::cloudflared exited before tests started"
-   if [ -f cloudflared.log ]; then
-     cat cloudflared.log
-   fi
-   exit 1
- fi
+ for i in $(seq 1 60); do
+   if ! kill -0 "$cloudflared_pid" 2>/dev/null; then
+     echo "::error::cloudflared exited before tests started"
+     cat cloudflared.log || true
+     exit 1
+   fi
+   if grep -q "Registered tunnel connection" cloudflared.log 2>/dev/null; then
+     break
+   fi
+   sleep 1
+ done
+ if ! grep -q "Registered tunnel connection" cloudflared.log 2>/dev/null; then
+   echo "::error::cloudflared did not become ready within timeout"
+   cat cloudflared.log || true
+   exit 1
+ fi

Also applies to: 268-276
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/e2e.yml around lines 164 - 172, Replace the fixed "sleep 5" wait with a bounded readiness poll that checks both process liveness (cloudflared_pid) and a readiness indicator in cloudflared.log (e.g., "Registered tunnel" or the line that indicates the tunnel/listener is ready), looping with a short sleep (e.g., 0.5s) until either the readiness is observed or a configurable timeout is reached; if the process dies (kill -0 "$cloudflared_pid") or the timeout elapses, print cloudflared.log and exit non-zero as the existing block does. Ensure you apply the same change to both occurrences around the two sleep 5 usages so they consistently wait for readiness rather than a fixed delay.
105-105: ⚠️ Potential issue | 🟡 Minor

Fail fast when `CF_TUNNEL_HOST` is unset instead of silently defaulting.

Line 105 and Line 209 default to hardcoded hosts, but validation (Line 131 and Line 235 loops) never checks `CF_TUNNEL_HOST`. This can hide env misconfig and produce confusing webhook-routing failures.

Suggested fix

- CF_TUNNEL_HOST: ${{ vars.CF_TUNNEL_HOST_STRIPE || 't1.paykit.sh' }}
+ CF_TUNNEL_HOST: ${{ vars.CF_TUNNEL_HOST_STRIPE }}
...
- for name in CF_TUNNEL_TOKEN E2E_STRIPE_SK E2E_STRIPE_WHSEC TEST_DATABASE_URL; do
+ for name in CF_TUNNEL_TOKEN CF_TUNNEL_HOST E2E_STRIPE_SK E2E_STRIPE_WHSEC TEST_DATABASE_URL; do
    if [ -z "${!name:-}" ]; then
      echo "::error::Missing required configuration: $name"
      exit 1
    fi
  done
...
- CF_TUNNEL_HOST: ${{ vars.CF_TUNNEL_HOST_POLAR || 't2.paykit.sh' }}
+ CF_TUNNEL_HOST: ${{ vars.CF_TUNNEL_HOST_POLAR }}
...
- for name in CF_TUNNEL_TOKEN E2E_POLAR_ACCESS_TOKEN E2E_POLAR_WHSEC TEST_DATABASE_URL; do
+ for name in CF_TUNNEL_TOKEN CF_TUNNEL_HOST E2E_POLAR_ACCESS_TOKEN E2E_POLAR_WHSEC TEST_DATABASE_URL; do
    if [ -z "${!name:-}" ]; then
      echo "::error::Missing required configuration: $name"
      exit 1
    fi
  done

Also applies to: 126-136, 209-209, 230-240
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/e2e.yml at line 105, The workflow currently silently defaults CF_TUNNEL_HOST to hardcoded values; change the workflow variable declarations to require CF_TUNNEL_HOST be provided (remove the fallback like || 't1.paykit.sh') and add an explicit validation step that checks the CF_TUNNEL_HOST environment variable (the same validation used for other envs) and exits non‑zero with a clear error if empty so the job fails fast rather than proceeding with a silent default; update both places where CF_TUNNEL_HOST is set and ensure the existing validation loops that run later also include CF_TUNNEL_HOST in their checks.
🧹 Nitpick comments (1)
.github/workflows/release.yml (1)
31-31: Increase check-runs page size to avoid false "missing" conclusions.

Line 31 queries check-runs without `per_page`; on commits with many checks, required names can be off-page and incorrectly fail the release gate.

Suggested fix

- conclusion=$(gh api "repos/${GITHUB_REPOSITORY}/commits/${GITHUB_SHA}/check-runs" --jq ".check_runs | map(select(.app.slug == \"github-actions\" and .name == \"${check_name}\")) | sort_by(.completed_at) | last | .conclusion // \"missing\"")
+ conclusion=$(gh api "repos/${GITHUB_REPOSITORY}/commits/${GITHUB_SHA}/check-runs?per_page=100" --jq ".check_runs | map(select(.app.slug == \"github-actions\" and .name == \"${check_name}\")) | sort_by(.completed_at) | last | .conclusion // \"missing\"")
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml at line 31, The gh api call that sets the conclusion variable may miss check runs because it doesn't request enough results; update the API invocation that populates conclusion (the gh api "repos/${GITHUB_REPOSITORY}/commits/${GITHUB_SHA}/check-runs" call using check_name) to include a larger page size (e.g. per_page=100) or otherwise paginate results so the map/select/sort_by/last pipeline can find the intended check run instead of returning "missing".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release.yml:
- Line 14: The workflow's permissions only grant "actions: read" but the job
uses the Checks API (check-runs); update the top-level permissions block in the
release workflow to include "checks: read" in addition to "actions: read" so the
Checks API calls succeed (modify the permissions entry that currently contains
actions: read to also grant checks: read).
---
Duplicate comments:
In @.github/workflows/e2e.yml:
- Line 152: Remove the unsupported --url flag from the cloudflared
remotely-managed run invocation: update the command string "cloudflared tunnel
--url http://127.0.0.1:4567 run --token \"$CF_TUNNEL_TOKEN\"" used in the
workflow to omit --url (so use "cloudflared tunnel run --token
\"$CF_TUNNEL_TOKEN\"" or the repo's canonical invocation), and apply the same
change for the duplicate occurrence later in the file; origin routing should
come from the tunnel ingress config rather than the --url flag.
- Around line 164-172: Replace the fixed "sleep 5" wait with a bounded readiness
poll that checks both process liveness (cloudflared_pid) and a readiness
indicator in cloudflared.log (e.g., "Registered tunnel" or the line that
indicates the tunnel/listener is ready), looping with a short sleep (e.g., 0.5s)
until either the readiness is observed or a configurable timeout is reached; if
the process dies (kill -0 "$cloudflared_pid") or the timeout elapses, print
cloudflared.log and exit non-zero as the existing block does. Ensure you apply
the same change to both occurrences around the two sleep 5 usages so they
consistently wait for readiness rather than a fixed delay.
- Line 105: The workflow currently silently defaults CF_TUNNEL_HOST to hardcoded
values; change the workflow variable declarations to require CF_TUNNEL_HOST be
provided (remove the fallback like || 't1.paykit.sh') and add an explicit
validation step that checks the CF_TUNNEL_HOST environment variable (the same
validation used for other envs) and exits non‑zero with a clear error if empty
so the job fails fast rather than proceeding with a silent default; update both
places where CF_TUNNEL_HOST is set and ensure the existing validation loops that
run later also include CF_TUNNEL_HOST in their checks.
---
Nitpick comments:
In @.github/workflows/release.yml:
- Line 31: The gh api call that sets the conclusion variable may miss check runs
because it doesn't request enough results; update the API invocation that
populates conclusion (the gh api
"repos/${GITHUB_REPOSITORY}/commits/${GITHUB_SHA}/check-runs" call using
check_name) to include a larger page size (e.g. per_page=100) or otherwise
paginate results so the map/select/sort_by/last pipeline can find the intended
check run instead of returning "missing".
Run ID: d1d649ef-13df-49dd-9419-8c195d8d0b5d
📒 Files selected for processing (2)
.github/workflows/e2e.yml
.github/workflows/release.yml
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/e2e.yml:
- Around line 26-28: The concurrency group for the CLI job is incorrectly set to
the same group as the Stripe job; update the CLI job's concurrency.group value
to a distinct group (e.g., change `${{ github.workflow }}-stripe` to `${{
github.workflow }}-cli`) so the `cli` job and `stripe` job no longer serialize;
locate the CLI job definition (job name `cli`) and modify its concurrency.group
accordingly while keeping `cancel-in-progress` behavior as intended.
Run ID: e3f301be-46dd-4b9a-9621-6edbf9e735f5
📒 Files selected for processing (2)
.github/workflows/e2e.yml
.github/workflows/release.yml
concurrency:
  group: ${{ github.workflow }}-stripe
  cancel-in-progress: false
Concurrency group typo: CLI job is in the Stripe lane.
The cli job's concurrency group is ${{ github.workflow }}-stripe, which is identical to the stripe job's group at line 85. Combined with cancel-in-progress: false, this forces CLI and Stripe E2E to serialize even though they have independent infrastructure (the CLI job doesn't even start cloudflared). This defeats the parallel-job split and roughly doubles wall time when both run via target: all or scheduled cron.
🔧 Proposed fix
concurrency:
- group: ${{ github.workflow }}-stripe
+ group: ${{ github.workflow }}-cli
  cancel-in-progress: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/e2e.yml around lines 26 - 28, The concurrency group for
the CLI job is incorrectly set to the same group as the Stripe job; update the
CLI job's concurrency.group value to a distinct group (e.g., change `${{
github.workflow }}-stripe` to `${{ github.workflow }}-cli`) so the `cli` job and
`stripe` job no longer serialize; locate the CLI job definition (job name `cli`)
and modify its concurrency.group accordingly while keeping `cancel-in-progress`
behavior as intended.
# Conflicts: # .github/workflows/e2e.yml
Summary
Notes
Summary by CodeRabbit
New Features
Refactoring
Chores