Complete Cloudflare worker-plane migration and production release path #47
Conversation
Caution: Review failed. The pull request is closed.
ℹ️ Recent review info: Configuration used: defaults · Review profile: CHILL · Plan: Pro
📒 Files selected for processing (7)
📝 Walkthrough
This PR migrates background processing from in-process Node/FFmpeg processors to a Cloudflare Workers + Queues worker plane: it removes local processors and R2/FFmpeg utilities, adds worker dispatch/claim/complete HTTP surfaces and queue handlers, extends the Convex schema and control-plane mutations/queries for dispatch/moderation/secondary assets, and updates UI/hooks/tests for media readiness and prewarming.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Convex as Convex Control Plane
    participant Worker as Cloudflare Worker
    participant Provider as External Provider
    participant R2 as Cloudflare R2
    Client->>Convex: startGeneration (persist record, dispatchStatus: pending)
    Convex->>Worker: dispatchSingleGeneration → POST /dispatch
    Worker->>Worker: enqueue job in queue
    Worker->>Convex: claimSingleGenerationHttp
    Convex-->>Worker: { claimed: true, continuation data }
    Worker->>Provider: perform generation request
    Provider-->>Worker: return media bytes
    Worker->>R2: upload media
    R2-->>Worker: { url, r2Key, sizeBytes }
    Worker->>Convex: completeSingleGenerationHttp (result)
    Convex->>Convex: finalize image record, schedule moderation/secondary dispatches
```
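The dispatch lifecycle implied by the diagram above can be sketched as a small state machine. This is a hypothetical illustration, not the project's actual code: the state names mirror the walkthrough's `dispatchStatus` values, but the transition table is an assumption.

```typescript
// Hypothetical sketch of the worker-plane dispatch lifecycle:
// pending -> claimed -> completed | failed, with failed jobs
// eligible for re-dispatch on retry.
type DispatchStatus = "pending" | "claimed" | "completed" | "failed";

const transitions: Record<DispatchStatus, DispatchStatus[]> = {
  pending: ["claimed"],
  claimed: ["completed", "failed"],
  completed: [], // terminal
  failed: ["pending"], // re-dispatch on retry
};

function canTransition(from: DispatchStatus, to: DispatchStatus): boolean {
  return transitions[from].includes(to);
}
```

A guard like this keeps both the Convex mutations and the worker callbacks from ever recording an impossible jump (e.g., `pending` straight to `completed` without a claim).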
```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Convex as Convex Control Plane
    participant Worker as Cloudflare Worker
    participant VisionAPI as Vision Provider
    Client->>Convex: setVisibility (private → public)
    Convex->>Convex: mark needsModeration, schedule analyzeImage
    Convex->>Worker: dispatchPromptInference
    Worker->>Convex: claimModerationHttp (prompt_inference)
    Convex-->>Worker: { claimed: true, prompt, imageUrl }
    Worker->>VisionAPI: run prompt/image analysis
    VisionAPI-->>Worker: analysis result
    Worker->>Convex: completePromptInferenceHttp / completeVisionAnalysisHttp
    Convex->>Convex: update moderation state
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Review Summary by Qodo
Complete Cloudflare Worker-plane migration with production release integration
Description
- **Completed Cloudflare Worker-plane migration**: Migrated single generation, batch generation, moderation, and video secondary-assets processing from Convex-native processors to distributed Cloudflare Workers with claim/continue/complete/fail state machines
- **Implemented comprehensive Worker runtime**: Added shared Cloudflare Worker with HTTP dispatch endpoints, queue consumers for five job types, and integrations with Groq, OpenRouter, Cerebras, and Pollinations APIs
- **Refactored state management**: Updated generatedImages, batchItems, and secondaryAssets tables with worker-plane dispatch tracking fields and recovery state queries
- **Integrated Media Transformations**: Switched video derivatives to Cloudflare Media Transformations with fallback handling and updated gallery/media UI for transformed thumbnails and previews
- **Extended cleanup paths**: Updated orphan cleanup and delete operations to handle preview assets alongside existing generated/thumbnail/reference assets
- **Removed legacy processor stack**: Deleted singleGenerationProcessor.ts, batchProcessor.ts, secondaryAssetsProcessor.ts, and ffmpeg-related helpers
- **Wired production release path**: Updated GitHub Actions release workflow to sync Cloudflare Worker secrets and deploy production Worker before tagging
- **Enhanced UI/UX**: Added desktop keyboard navigation for lightbox, media prewarming for adjacent images, and media URL ready-state tracking for faster re-renders
- **Validated production readiness**: Fixed worker/Convex shared-secret drift, re-validated secondary-assets dispatch, and set required GitHub Actions secrets
- **Added comprehensive documentation**: Created migration plan tracker and Convex architecture guidance for background work patterns

Diagram

```mermaid
flowchart LR
    A["Convex Scheduler"] -->|"dispatch via HTTP"| B["Cloudflare Worker"]
    B -->|"process jobs"| C["External APIs<br/>Groq/OpenRouter/Cerebras"]
    B -->|"generate media"| D["Pollinations API"]
    B -->|"transform video"| E["Media Transformations"]
    B -->|"callback with result"| F["Convex HTTP Endpoints"]
    F -->|"update state"| G["generatedImages<br/>batchItems<br/>secondaryAssets"]
    H["UI Components"] -->|"query status"| G
    H -->|"display media"| I["Gallery/Lightbox<br/>with Prewarming"]
```

File Changes
1. workers/bloomstudio-worker/src/index.ts
Code Review by Qodo
Hey - I've left some high level feedback:
- The Cloudflare worker entrypoint (`workers/bloomstudio-worker/src/index.ts`) has grown very large and mixes queue dispatching, provider calls, R2 logic, and moderation helpers; consider splitting it into smaller modules (e.g., generation, batch, moderation, secondary-assets, shared HTTP/Convex helpers) to keep each concern easier to reason about and evolve.
- The HTTP callback layer (`convex/cloudflareWorkerHttp.ts`) repeats a lot of similar zod schemas and request/response plumbing for claim/continue/complete/fail across different job types; pulling shared patterns (e.g., generic claim/continue helpers or base schemas) into small utilities would reduce duplication and make future worker endpoints less error-prone.
- The backoff computation for retries (`calculateBackoffDelayMs` in the worker and the mirrored logic in `convex/cloudflareDispatch.ts`) is duplicated; centralizing this in a shared module or at least documenting a single source of truth would help keep retry behavior consistent if the policy changes later.
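The duplicated backoff computation flagged above could live in one shared module imported by both the worker and `convex/cloudflareDispatch.ts`. A minimal sketch: the module path, base delay, cap, and full-jitter strategy here are illustrative assumptions, not the project's actual `calculateBackoffDelayMs` policy.

```typescript
// Hypothetical shared retry-policy module (e.g., shared/backoff.ts).
// Base delay, cap, and jitter strategy are assumed values for illustration.
const BASE_DELAY_MS = 1_000;
const MAX_DELAY_MS = 60_000;

function calculateBackoffDelayMs(attempt: number, jitter = Math.random()): number {
  // Exponential backoff: 1s, 2s, 4s, ... capped at 60s for attempt 1, 2, 3, ...
  const exponential = Math.min(BASE_DELAY_MS * 2 ** Math.max(attempt - 1, 0), MAX_DELAY_MS);
  // Full jitter keeps many retrying jobs from synchronizing their retries.
  return Math.round(exponential * jitter);
}
```

With both sides importing the same function, a policy change (say, raising the cap) cannot silently diverge between the queue consumer and the Convex dispatcher.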
@sourcery-ai review
Hey - I've found 1 issue, and left some high level feedback:
- The Cloudflare worker entrypoint has grown quite large and multi-purpose (`workers/bloomstudio-worker/src/index.ts`); consider splitting it into job-type-specific modules (single, batch, moderation, secondary assets) to keep the worker logic easier to reason about and evolve independently.
- The worker secret header name and shared-secret handling are duplicated between the worker (`index.ts`) and Convex (`cloudflareWorkerHttp.ts`); consider centralizing this constant and auth helper logic to reduce the risk of header or env name drift.
- The Zod request schemas in `cloudflareWorkerHttp.ts` are closely coupled to the worker payload types in `workers/bloomstudio-worker/src/index.ts`; consider extracting shared TypeScript types (and possibly schemas) so that queue payloads and HTTP callback contracts cannot diverge silently over time.
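Centralizing the shared-secret handling could look like the sketch below. The header name, module path, and fail-closed behavior are illustrative assumptions, not the project's actual values; the point is that both the worker and `convex/cloudflareWorkerHttp.ts` import the same constant and helper so names cannot drift.

```typescript
// Hypothetical shared auth module (e.g., shared/workerAuth.ts) imported by
// both the Cloudflare worker and the Convex HTTP layer.
const WORKER_SECRET_HEADER = "x-worker-shared-secret"; // assumed header name

function isAuthorizedWorkerRequest(
  headers: Headers,
  expectedSecret: string | undefined,
): boolean {
  // Fail closed if the env var is unset or empty on either side.
  if (!expectedSecret) return false;
  const provided = headers.get(WORKER_SECRET_HEADER);
  return provided !== null && provided === expectedSecret;
}
```

Both sides then call `isAuthorizedWorkerRequest(request.headers, env.WORKER_SHARED_SECRET)` (env name assumed), and renaming the header becomes a one-file change.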
## Individual Comments
### Comment 1
<location path="workers/bloomstudio-worker/src/index.ts" line_range="15-10" />
<code_context>
+ enqueuedAt: number
+}
+
+type BatchItemDispatchRequestBody = {
+ jobType: "batch_item"
+ batchJobId: string
+ itemIndex: number
+ apiKey: string
+ attempt: number
+ enqueuedAt: number
</code_context>
<issue_to_address>
**🚨 issue (security):** Avoid placing raw Pollinations API keys in queue messages to reduce blast radius if queue metadata is exposed.
`BatchItemDispatchRequestBody` (and `SingleGenerationDispatchRequestBody`) currently embed the user’s Pollinations API key in the queued payload. Depending on your Cloudflare configuration, queue messages may be visible in dashboards, logs, or via misconfigured tooling, increasing the impact of any leak.
Consider instead:
- Issuing a short-lived opaque token that the worker exchanges for the real API key via a Convex HTTP endpoint, or
- Having the worker use a server-side key to generate a signed Pollinations URL so user keys never enter the queue.
If you keep this design, please document this trust boundary and review queue access controls, since these keys are effectively user secrets.
</issue_to_address>
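The first remediation above, a short-lived opaque token the worker exchanges for the real key, could be sketched as below. This is a hypothetical illustration of the pattern, not the project's API: the TTL, the in-memory store (a stand-in for a real Convex-backed store), and the function names are all assumptions.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical token exchange: the dispatcher enqueues an opaque token
// instead of the raw Pollinations API key; the worker redeems it once via
// an authenticated callback, so the key itself never enters queue metadata.
const TOKEN_TTL_MS = 5 * 60 * 1000; // assumed 5-minute lifetime

type TokenRecord = { apiKey: string; expiresAt: number };
const tokenStore = new Map<string, TokenRecord>(); // stand-in for a real store

function issueDispatchToken(apiKey: string, now = Date.now()): string {
  const token = randomUUID(); // opaque, unguessable handle
  tokenStore.set(token, { apiKey, expiresAt: now + TOKEN_TTL_MS });
  return token;
}

function redeemDispatchToken(token: string, now = Date.now()): string | null {
  const record = tokenStore.get(token);
  tokenStore.delete(token); // single use: redeeming consumes the token
  if (!record || record.expiresAt <= now) return null;
  return record.apiKey;
}
```

Single-use plus a short TTL bounds the blast radius: a leaked queue message exposes only a token that is likely already consumed or expired.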
SourceryAI left a comment:
Hey - I've left some high level feedback:
- The moderation and secondary-assets status enums (e.g., dispatch statuses, stages) are duplicated across schema, contentAnalysis, secondaryAssets, and worker code; consider centralizing these definitions to a single shared module to avoid drift on future changes.
- The Cloudflare worker's queue handlers and the Convex dispatch actions each implement very similar retry/backoff and dispatch logic per job type; extracting shared helpers for building payloads, computing delays, and logging would reduce duplication and make it easier to tune behavior consistently.
- The new worker HTTP endpoints and their corresponding Convex mutations/queries rely on strict JSON schemas via zod, but many error responses surface only a generic `Invalid request body`; exposing slightly more structured context (e.g., which field failed validation) in logs while keeping responses generic would make diagnosing worker/Convex integration issues easier in production.
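Centralizing the duplicated status enums could use a single `as const` module that schema validators, worker payloads, and type guards all derive from. A sketch under assumed names; the project's actual status values may differ:

```typescript
// Hypothetical shared status module (e.g., shared/dispatchStatus.ts).
// Schema, Convex code, and the worker would all import from here, so adding
// or renaming a status happens in exactly one place.
const DISPATCH_STATUSES = ["pending", "claimed", "completed", "failed"] as const;

type DispatchStatus = (typeof DISPATCH_STATUSES)[number];

function isDispatchStatus(value: unknown): value is DispatchStatus {
  return (
    typeof value === "string" &&
    (DISPATCH_STATUSES as readonly string[]).includes(value)
  );
}
```

The same array can feed a zod enum (`z.enum(DISPATCH_STATUSES)`) on the Convex side and plain narrowing in the worker, keeping the wire contract and the database schema in lockstep.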
Actionable comments posted: 13
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
convex/orphanCleanup.ts (1)
323-323: ⚠️ Potential issue | 🟠 Major: Missing `previews/` prefix in cleanup action.
The `auditOrphanedR2Objects` action at line 212 scans four prefixes including `"previews/"`, but `cleanupOrphanedR2Objects` here only re-lists three prefixes, missing `"previews/"`. This means orphaned preview assets will be detected in audit but never deleted during cleanup.
🐛 Proposed fix
```diff
- const prefixes = ["generated/", "thumbnails/", "reference/"]
+ const prefixes = ["generated/", "thumbnails/", "previews/", "reference/"]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `convex/orphanCleanup.ts` at line 323, the cleanup action `cleanupOrphanedR2Objects` defines `prefixes = ["generated/", "thumbnails/", "reference/"]` but `auditOrphanedR2Objects` also scans `"previews/"`, so add `"previews/"` to the prefixes array used in `cleanupOrphanedR2Objects` (the `prefixes` variable) so that preview objects discovered by audit get re-listed and removed by the cleanup routine.

hooks/queries/use-generate-image.ts (1)
61-76: 🛠️ Refactor suggestion | 🟠 Major: Widen the lifecycle callback params to the actual request union.
Lines 63, 66, and 69 still narrow `params` to `ImageGenerationParams`, but this hook accepts `VideoGenerationParams` everywhere else and passes them through at runtime. At runtime, all callbacks (`onMutate`, `onSuccess`, `onError`, `onSettled`) receive the full union from `entry.params`, making the exported interface misleading for video callers and violating TypeScript best practices for public APIs.
🧩 Proposed fix
```diff
+type GenerationParams = ImageGenerationParams | VideoGenerationParams
+
 export interface UseGenerateImageOptions {
   /** Callback fired when generation starts */
-  onMutate?: (params: ImageGenerationParams) => void | Promise<void>
+  onMutate?: (params: GenerationParams) => void | Promise<void>
   /** Callback fired on successful generation */
-  onSuccess?: (image: GeneratedImage, params: ImageGenerationParams) => void
+  onSuccess?: (image: GeneratedImage, params: GenerationParams) => void
   /** Callback fired on generation error */
-  onError?: (error: ServerGenerationError, params: ImageGenerationParams) => void
+  onError?: (error: ServerGenerationError, params: GenerationParams) => void
   /** Callback fired after mutation settles (success or error) */
   onSettled?: (
     image: GeneratedImage | undefined,
     error: ServerGenerationError | null,
-    params: ImageGenerationParams | VideoGenerationParams
+    params: GenerationParams
   ) => void
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `hooks/queries/use-generate-image.ts` around lines 61-76, the public interface `UseGenerateImageOptions` incorrectly types the lifecycle callback parameter as `ImageGenerationParams` only; update the signatures for `onMutate`, `onSuccess`, `onError`, and `onSettled` so their params argument is the union `ImageGenerationParams | VideoGenerationParams` (matching the runtime `entry.params` passed through), and ensure any related type annotations that reference those callbacks use the same union to avoid narrowing and maintain correct public API typings.

hooks/use-media-player.ts (1)
158-169: ⚠️ Potential issue | 🟠 Major: Guard `markMediaUrlReady()` to only persist after frame data is available.
`dispatchLoad()` is called by `handleVideoLoadedMetadata`, `handleVideoLoadedData`, `handleVideoCanPlay`, and `handleVideoPlaying`. Currently, line 168 marks any URL as ready unconditionally, which means `loadedmetadata` (which fires before frame data exists) will cache the URL globally. On next render/navigation with the same URL, `isMediaUrlReady(url)` skips the loading state even though visual data isn't available yet, causing blank/black frames until `loadeddata` arrives.
Only persist readiness when `readyState >= HAVE_CURRENT_DATA` (indicating at least one frame is available) or for images:
Guard readiness on frame availability
```diff
- markMediaUrlReady(getMediaSource(element) || currentUrlRef.current)
+ const mediaSource = getMediaSource(element) || currentUrlRef.current
+ const canPersistReady =
+   !(element instanceof HTMLVideoElement) ||
+   element.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA
+
+ if (canPersistReady) {
+   markMediaUrlReady(mediaSource)
+ }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `hooks/use-media-player.ts` around lines 158-169, `dispatchLoad` currently calls `markMediaUrlReady` unconditionally, which lets `loadedmetadata` persist a URL before any frame is available; change the guard so `markMediaUrlReady(getMediaSource(element) || currentUrlRef.current)` is only called when the element is an image (`HTMLImageElement`) or when the media element's `readyState >= HTMLMediaElement.HAVE_CURRENT_DATA` (i.e., at least one frame available). Keep the existing `isCurrentMediaElement`, `clearErrorTimeout`, `setHasError`, `setIsLoading` behavior and only wrap the `markMediaUrlReady` call in the ready-state/image check so `handleVideoLoadedMetadata` no longer caches URLs prematurely.
🧹 Nitpick comments (11)
components/github-star-button.tsx (1)
12-14: Validate the GitHub payload from `unknown` instead of asserting it.
The new runtime guard is good, but `as GitHubRepoResponse` undercuts it and still trusts unvalidated JSON. Narrow from `unknown` first, then treat `stargazers_count` as required.
♻️ Proposed refactor
```diff
 type GitHubRepoResponse = {
-  stargazers_count?: number;
+  stargazers_count: number;
 };
+
+function isGitHubRepoResponse(value: unknown): value is GitHubRepoResponse {
+  return (
+    typeof value === "object" &&
+    value !== null &&
+    "stargazers_count" in value &&
+    typeof value.stargazers_count === "number"
+  );
+}
@@
-  const data = (await response.json()) as GitHubRepoResponse;
+  const data: unknown = await response.json();
   // Format large numbers (e.g., 1234 -> 1.2k)
-  const count = data.stargazers_count;
-  if (typeof count !== "number") {
+  if (!isGitHubRepoResponse(data)) {
     throw new Error("Invalid GitHub response");
   }
+  const count = data.stargazers_count;
```
As per coding guidelines, "Avoid `as` casts in TypeScript; prefer `satisfies`, generics, and runtime validation".
Also applies to: 26-31
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `components/github-star-button.tsx` around lines 12-14, the GitHub payload is being cast with `as GitHubRepoResponse`, which defeats the runtime guard; instead, parse the JSON into an unknown and perform a runtime type check that narrows it to the shape `{ stargazers_count: number }`, then treat `stargazers_count` as required. Locate where the JSON is parsed (the place using `as GitHubRepoResponse`), remove the `as` cast, add a small guard function or inline checks that verify the value is an object and has a numeric `stargazers_count`, and only then read `stargazers_count` (update usages around the `GitHubRepoResponse` type and the code paths mentioned at lines ~26-31).

components/studio/gallery/persistent-image-gallery.test.tsx (2)
266-270: Exercise the preload boundary, not just an obvious near-end case.
With 10 loaded items, `activeImageId="conv-8"` is only one item from the end, so this still passes if the threshold regresses from 5 to 1 or 2. I'd move the active image to the first index that should preload, and pair it with a negative case just outside the threshold, so the test actually locks in the boundary behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `components/studio/gallery/persistent-image-gallery.test.tsx` around lines 266-270, the test currently uses `activeImageId="conv-8"` which is too close to the end and doesn't lock the preload threshold; change the test for `PersistentImageGallery` to exercise the exact preload boundary by setting `activeImageId` to the first index that should trigger preload (e.g., the item at the preload threshold from the end) and assert `mockHistoryLoadMore` was called with 20, then add a second assertion/variant using an `activeImageId` one index further from the end (outside the threshold) and assert `mockHistoryLoadMore` was not called; reference the `PersistentImageGallery` render, `activeImageId` prop, and `mockHistoryLoadMore` to locate and update the test.
254-257: Avoid the `as` cast for this branded type.
Line 256 uses `as Id<"generatedImages">`, which bypasses type safety. Rather than assuming a helper exists, consider using TypeScript patterns like `satisfies` or creating a simple test helper if this pattern recurs. This aligns with the coding guideline: "Avoid `as` casts in TypeScript; prefer `satisfies`, generics, and runtime validation."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `components/studio/gallery/persistent-image-gallery.test.tsx` around lines 254-257, the test creates `extendedImages` using an unsafe `as Id<"generatedImages">` cast which bypasses type safety; update the test to avoid the cast by either (a) using TypeScript's `satisfies` on the array/object literal to ensure each item conforms to the expected shape, or (b) adding a small test helper factory (e.g., `makeGeneratedImage(id: string, overrides?)`) that returns a properly typed item and using that in the `Array.from` mapping; adjust the creation of `extendedImages` and any usage of `mockConvexImages[0]` accordingly so no `as` cast is needed.

components/studio/gallery/persistent-image-gallery.tsx (1)
384-402: Consider using `handleLoadMoreRef.current` for consistency.
The existing auto-load effect at lines 291-301 uses `handleLoadMoreRef.current()` to avoid effect re-runs when the callback is recreated. This new preload effect calls `handleLoadMore` directly, which is in the dependency array and may cause unnecessary effect executions.
While the early-return guards (`isLoadingMore`) should prevent duplicate API calls, using the ref pattern would be more consistent and reduce effect churn.
♻️ Suggested refactor
```diff
 React.useEffect(() => {
   if (!activeImageId || !canLoadMore || isLoadingMore) {
     return;
   }
   const activeImageIndex = mappedImages.findIndex(
     (image) => image.id === activeImageId,
   );
   if (activeImageIndex < 0) {
     return;
   }
   const remainingImages = mappedImages.length - activeImageIndex - 1;
   if (remainingImages <= LIGHTBOX_PRELOAD_THRESHOLD) {
-    handleLoadMore();
+    handleLoadMoreRef.current();
   }
-}, [activeImageId, canLoadMore, handleLoadMore, isLoadingMore, mappedImages]);
+}, [activeImageId, canLoadMore, isLoadingMore, mappedImages]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `components/studio/gallery/persistent-image-gallery.tsx` around lines 384-402, the effect should call the stable ref instead of the changing callback to prevent unnecessary re-runs: replace direct calls to `handleLoadMore` with invoking `handleLoadMoreRef.current()` inside the effect and remove `handleLoadMore` from the dependency array, keeping `activeImageId`, `canLoadMore`, `isLoadingMore`, and `mappedImages` (`LIGHTBOX_PRELOAD_THRESHOLD` is used unchanged); ensure you still guard with the existing early-return checks and verify `handleLoadMoreRef.current` exists before calling it.

wrangler.jsonc (1)
11-25: Verify the observability master switch.
Line 12 sets `observability.enabled` to `false` while lines 14-23 enable persisted logs underneath it. If Wrangler treats the top-level flag as authoritative, this worker will deploy without the invocation logs the post-release monitoring plan depends on.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `wrangler.jsonc` around lines 11-25, the top-level `observability.enabled` is set to `false` while nested settings (`observability.logs.persist` and `observability.logs.invocation_logs`) are enabled; verify whether Wrangler treats `observability.enabled` as authoritative and, if so, set `observability.enabled` to `true` so the nested log settings take effect, or conversely remove/disable the nested persistence/invocation flags if the top-level flag must remain `false`; check and adjust the `observability.enabled`, `observability.logs.persist`, and `observability.logs.invocation_logs` entries accordingly to ensure invocation logs are actually enabled at deploy time.

convex/contentAnalysis.ts (1)
117-123: Please verify the rate-limited path has another wake-up source.
These branches leave the image in `pending` but skip scheduling another `analyzeRecentImages` run whenever providers are rate-limited. If no other cron/job re-enters this action after the limits expire, moderation recovery will stall indefinitely on those rows.
Also applies to: 573-575, 660-667
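One low-risk shape for the fix is to compute a retry delay instead of returning null when all providers are rate-limited. A sketch with hypothetical names and constants; the real Convex action would pass the result to its scheduler (e.g., `ctx.scheduler.runAfter`) rather than return it directly, and the actual constants in `contentAnalysis.ts` may differ:

```typescript
// Hypothetical helper: decide whether/when analyzeRecentImages should
// reschedule itself. Constants and signature are illustrative assumptions.
const DELAY_BETWEEN_REQUESTS_MS = 2_000;
const RATE_LIMIT_RETRY_MS = 60_000; // assumed cool-down before re-checking providers

function nextAnalysisDelayMs(
  allProvidersRateLimited: boolean,
  queuedImageCount: number,
): number | null {
  if (allProvidersRateLimited) {
    // Previously this path effectively returned null and stalled; now it
    // always schedules a wake-up after the assumed rate-limit window.
    return RATE_LIMIT_RETRY_MS;
  }
  // No pending work: nothing to schedule.
  if (queuedImageCount === 0) return null;
  // Pending work: keep draining the queue at the normal cadence.
  return DELAY_BETWEEN_REQUESTS_MS;
}
```

Keeping this decision in a pure function also makes the stall regression trivially unit-testable.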
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `convex/contentAnalysis.ts` around lines 117-123, the rate-limited branch currently returns null when `args.allProvidersRateLimited`, leaving rows pending with no guaranteed wake-up; update the logic in the `analyzeRecentImages` path (the branch checking `args.allProvidersRateLimited` and the similar branches using `queuedImageCount`, `ANALYSIS_QUEUE_LOOKAHEAD` and `DELAY_BETWEEN_REQUESTS_MS`) to schedule a retry instead of returning null: either return a non-null delay (e.g., `DELAY_BETWEEN_REQUESTS_MS`) or explicitly enqueue/schedule another `analyzeRecentImages` invocation so that when provider limits expire the pending images are re-scanned; ensure the same fix is applied to the other occurrences referenced (lines around the `queuedImageCount`/`ANALYSIS_QUEUE_LOOKAHEAD` checks).

convex/lib/promptInference.ts (1)
23-29: Consider type narrowing instead of `as` cast for API responses.
The `as CerebrasChatCompletionResponse` cast at line 144 works because downstream code validates the structure, but per coding guidelines, prefer runtime validation over `as` casts. Since `parseResult` already validates the final output, this is low risk, but a type guard or schema validation at the response level would be more robust.
This is a minor suggestion given the existing validation in `parseResult` handles malformed responses gracefully.
🔧 Optional: Add minimal runtime check
```diff
- const json = (await result.data.json()) as CerebrasChatCompletionResponse;
+ const json: unknown = await result.data.json();
+ if (typeof json !== "object" || json === null) {
+   throw new Error("Invalid response format from Cerebras");
+ }
+ const response = json as CerebrasChatCompletionResponse;
- const content = json.choices?.[0]?.message?.content;
+ const content = response.choices?.[0]?.message?.content;
```
Also applies to: 144-144
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `convex/lib/promptInference.ts` around lines 23-29, replace the unsafe `as CerebrasChatCompletionResponse` cast with a lightweight runtime type guard or schema check before passing data to `parseResult`: implement a small validation function (e.g., `isCerebrasChatCompletionResponse`) that verifies the top-level shape (`choices` array with `message.content` string) and use it to narrow the response; if validation fails, handle the error path (throw or return a clear parse error) so `parseResult` only receives structurally validated data. Reference the `CerebrasChatCompletionResponse` type and the `parseResult` call to locate where the cast is applied and replace it with the guard.

app/api/images/delete-bulk/route.ts (1)
97-98: Redundant type check after Zod validation.
The `typeof key !== "string"` check on line 98 is now redundant since Zod's schema already guarantees all elements are strings. Consider removing it for cleaner code.
♻️ Remove redundant check
```diff
 for (const key of r2Keys) {
-  if (typeof key !== "string") continue
-
   if (key.includes(`/${userHash}/`)) {
     authorizedKeys.push(key)
   } else {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `app/api/images/delete-bulk/route.ts` around lines 97-98, the loop over `r2Keys` contains a redundant runtime type guard (`if (typeof key !== "string") continue`) because the Zod validation already guarantees each element is a string; remove that check from the `for (const key of r2Keys)` loop and ensure `r2Keys` is typed as `string[]` (or use the Zod-parsed type) so the code relies on static/validated types rather than repeating a runtime typeof guard.

convex/lib/batchGenerationState.test.ts (1)
102-144: Please cover the `pending` → `processing` path.
`getBatchStatusAfterItemSettlement()` has a dedicated branch for `status === "pending"`, but every new case starts from `"processing"`. If that transition regresses, batches can stay pending without this suite catching it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `convex/lib/batchGenerationState.test.ts` around lines 102-144, add a unit test to cover the pending → processing transition in `getBatchStatusAfterItemSettlement`: create a case that calls `getBatchStatusAfterItemSettlement` with `status: "pending"` and item counts that should move the batch to `"processing"` (e.g., completed count below total and no terminal condition), and assert the returned status is `"processing"`; place this alongside the existing tests (same describe block) so future regressions of the pending branch are detected.

convex/lib/openrouter.ts (1)
125-125: Validate the provider payload instead of asserting it.
`response.json()` is untrusted input, and `as OpenRouterChatCompletionResponse` suppresses contract drift right at the network boundary. Parse into `unknown` and narrow/validate `choices[0].message.content` before using it so malformed provider responses fail in a controlled place.
As per coding guidelines, "Avoid `as` casts in TypeScript; prefer `satisfies`, generics, and runtime validation."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `convex/lib/openrouter.ts` at line 125, the code currently casts `response.json()` to `OpenRouterChatCompletionResponse`, which bypasses runtime validation; change the handling in the OpenRouter call to parse JSON into unknown (e.g., `const raw = await response.json() as unknown`), then perform runtime narrowing/validation on `raw` to ensure it has the expected shape (an object with a `choices` array, `choices[0].message`, and a string `message.content`) before assigning to or using a typed variable; replace the direct cast to `OpenRouterChatCompletionResponse` with a validated value (or a type-guard function) and handle missing/malformed `choices[0].message.content` with a controlled error path so malformed provider responses fail safely.

convex/batchGeneration.ts (1)
173-194: Consider chunking batchItem creation for very large batches.

Creating up to 1000 `batchItems` synchronously in a single mutation (lines 173-184) could approach Convex mutation limits for very large batches. The current implementation should work for typical use cases, but if you observe issues with large batches, consider chunking the item creation similar to the dispatch chunking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/batchGeneration.ts` around lines 173 - 194, The loop that creates up to args.count batchItems in a single mutation (the for loop that calls ctx.db.insert("batchItems", ...)) can hit Convex mutation limits for very large batches; change it to insert items in configurable chunks (e.g., CHUNK_SIZE = 100) and for each chunk perform a batch of inserts then, if more remain, schedule the next chunk using ctx.scheduler.runAfter pointing to internal.batchGeneration.seedBatchDispatchChunk (or a new seedBatchItemChunk handler) with batchJobId and next startIndex so remaining items are inserted in subsequent mutations; ensure you update any identifiers (batchJobId, ownerId, itemIndex, status fields) and preserve createdAt/updatedAt semantics.
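For reference, the chunking arithmetic the prompt describes reduces to splitting `[0, count)` into bounded ranges, each handled by one mutation that schedules the next. A minimal sketch of that split (the `CHUNK_SIZE` value and `chunkRanges` helper are ours, not the codebase's):

```typescript
// Split [0, count) into ranges of at most CHUNK_SIZE items, so each Convex
// mutation inserts a bounded number of batchItems and schedules the next
// range via ctx.scheduler.runAfter. Names here are illustrative assumptions.
const CHUNK_SIZE = 100

export function chunkRanges(count: number, chunkSize = CHUNK_SIZE): Array<{ start: number; end: number }> {
  const ranges: Array<{ start: number; end: number }> = []
  for (let start = 0; start < count; start += chunkSize) {
    ranges.push({ start, end: Math.min(start + chunkSize, count) })
  }
  return ranges
}
```

Each range maps to one mutation invocation; the handler inserts items for `[start, end)` and, if `end < count`, schedules itself with the next `start`.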
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release.yml:
- Around line 35-36: Move the "Install dependencies" step (the step named
"Install dependencies" that runs "bun install --frozen-lockfile") so it executes
after the release-only hotfix merge/reset steps (the block currently around
lines 101-116 that merges hotfixes back into main and resets release), or add a
second "Install dependencies" invocation after that merge/reset block; ensure
the workflow uses the fresh repository state before running "bun install
--frozen-lockfile" so any changes to package.json or bun.lock introduced by the
hotfix merge are included in installed dependencies.
- Around line 107-112: In the "Sync production worker secret" step, replace the
use of echo piped into bunx wrangler secret put (which appends a trailing
newline) with a printf '%s' invocation so the value of the environment secret
BLOOMSTUDIO_WORKER_SHARED_SECRET is written without a newline; update the
command that pipes into bunx wrangler secret put
BLOOMSTUDIO_WORKER_SHARED_SECRET --env production accordingly to avoid storing a
newline-terminated secret.
In `@cloudflare-migration-plan.md`:
- Around line 799-800: Replace the absolute Windows paths in the markdown
references with repo-relative links: find occurrences of cloudflareDispatch.ts,
cloudflareWorkerHttp.ts, secondaryAssets.ts (and the other listed occurrences)
and update their link targets so they point to the files inside the repository
using relative paths instead of C:/Code/.... Ensure all similar links (the
occurrences noted around the other ranges) are corrected consistently to use
repo-relative referencing.
In `@components/images/image-lightbox.test.tsx`:
- Around line 765-786: The test currently sets
useImageDetailsMock.mockReturnValue(undefined) which leaks the mock
implementation to other tests; change this to
useImageDetailsMock.mockReturnValueOnce(undefined) inside the "renders gallery
media immediately while full details are still loading" test so the undefined
response is scoped to this test only, leaving subsequent tests unaffected (refer
to useImageDetailsMock and the test block around ImageLightbox render).
In `@components/images/image-lightbox.tsx`:
- Around line 532-537: canUseDesktopMediaNavigation currently allows
keyboard/side-button navigation even when the PromptLibrary modal is open; add a
gate on !libraryOpen so media navigation is disabled while the prompt-library
modal is showing. Locate the boolean expression that defines
canUseDesktopMediaNavigation and include && !libraryOpen (or the equivalent
modal-open state) to prevent navigation; apply the same change to the other
occurrences where desktop navigation is computed (the similar expressions around
the other media-navigation checks referenced in the comment). Ensure you
reference the same modal state variable (libraryOpen) used by PromptLibrary so
the gating is consistent.
In `@components/images/lightbox/lightbox-media-display.tsx`:
- Around line 13-29: The module-global Set readyImageUrls grows unbounded;
modify markLightboxImageUrlReady and isImageUrlReady to use a bounded cache
(e.g., a simple LRU or capped FIFO) instead of an unbounded Set: introduce a
MAX_READY_IMAGE_URLS constant and maintain insertion order (use Map or a small
LRU helper) so when markLightboxImageUrlReady(url) adds a new entry it evicts
the oldest entries once the cap is exceeded; keep isImageUrlReady(url) checking
that bounded cache. Ensure the symbols readyImageUrls,
markLightboxImageUrlReady, and isImageUrlReady are updated to use the new
bounded structure and eviction behavior.
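A minimal sketch of the bounded cache described above, using a `Map` (which iterates in insertion order) so the oldest entry can be evicted; the cap value is an assumption:

```typescript
// Capped, insertion-ordered cache for "ready" image URLs. Re-inserting a URL
// refreshes its position; once the cap is exceeded the oldest entry is evicted.
// MAX_READY_IMAGE_URLS is an assumed cap, not the module's real value.
const MAX_READY_IMAGE_URLS = 200
const readyImageUrls = new Map<string, true>()

export function markLightboxImageUrlReady(url: string): void {
  readyImageUrls.delete(url) // refresh insertion order on repeat marks
  readyImageUrls.set(url, true)
  if (readyImageUrls.size > MAX_READY_IMAGE_URLS) {
    // Map iterates in insertion order, so the first key is the oldest.
    const oldest = readyImageUrls.keys().next().value
    if (oldest !== undefined) readyImageUrls.delete(oldest)
  }
}

export function isImageUrlReady(url: string): boolean {
  return readyImageUrls.has(url)
}
```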
- Around line 197-200: Change toggleZoom and handleImageLoad to accept correctly
typed events instead of relying on "as unknown as" casts: update toggleZoom to
accept a union type (e.g., React.MouseEvent | React.KeyboardEvent | unknown) or
a single unknown and perform runtime narrowing inside (detect keyboard events by
checking e.type === 'keydown' or 'key' in event and call preventDefault() for
Space), then invoke the same zoom logic for mouse and keyboard without casting;
update the onKeyDown handler to call toggleZoom(e) directly and call
e.preventDefault() when handling Space/Enter to stop page scroll; change
handleImageLoad to accept React.SyntheticEvent<HTMLImageElement> (or narrow from
unknown) and remove the raw event cast, using the typed event properties
instead. Ensure references to toggleZoom and handleImageLoad in the component
are updated to the new signatures and internal narrowing logic.
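The runtime-narrowing approach can be sketched with a structural guard; the event shape below is a minimal stand-in for React's synthetic keyboard event, not the component's real types:

```typescript
// Narrow an unknown event structurally instead of using "as unknown as" casts.
// KeyboardLikeEvent is a minimal stand-in for React.KeyboardEvent.
interface KeyboardLikeEvent {
  key: string
  preventDefault(): void
}

export function isKeyboardLikeEvent(event: unknown): event is KeyboardLikeEvent {
  if (typeof event !== "object" || event === null) return false
  if (!("key" in event) || !("preventDefault" in event)) return false
  return typeof event.key === "string" && typeof event.preventDefault === "function"
}

// Shared zoom logic for mouse and keyboard, with Space/Enter gating and
// preventDefault to stop page scroll. The zoom state shape is illustrative.
export function toggleZoom(event: unknown, zoom: { active: boolean }): void {
  if (isKeyboardLikeEvent(event)) {
    if (event.key !== "Enter" && event.key !== " ") return
    event.preventDefault() // stop Space from scrolling the page
  }
  zoom.active = !zoom.active
}
```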
In `@convex/cloudflareDispatch.ts`:
- Around line 120-127: Create a shared timeout helper (e.g.,
fetchWithTimeout(url, options, timeoutMs)) that uses AbortController to abort
the request after a configurable timeout (suggest ~5000 ms) and throws on
timeout; then replace the bare fetch(...) calls used for the worker dispatch
POSTs (the calls building `${baseUrl.replace(/\/$/, "")}/dispatch` with headers
including "x-bloomstudio-worker-secret" and body JSON.stringify(payload)) with
await fetchWithTimeout(...) and pass the AbortController.signal into the request
options so the fetch is cancellable; apply this change to all five dispatch
locations shown in the diff so the Convex action fails fast and can schedule
retries instead of waiting for the platform timeout.
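The timeout helper reduces to racing a request against an `AbortController` timer. A generic sketch under the suggestion's assumptions (the 5000 ms default and `fetchWithTimeout` name come from the prompt, not from the actual `convex/cloudflareDispatch.ts`):

```typescript
// Reject any promise that outlives timeoutMs, so callers fail fast and retry.
export function withTimeout<T>(promise: Promise<T>, timeoutMs: number, label = "operation"): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`${label} timed out after ${timeoutMs}ms`)), timeoutMs)
    promise.then(
      (value) => { clearTimeout(timer); resolve(value) },
      (error) => { clearTimeout(timer); reject(error) }
    )
  })
}

// Thin fetch wrapper that also aborts the underlying request via AbortController,
// so the connection is torn down rather than left dangling after the timeout.
export async function fetchWithTimeout(
  url: string,
  options: Parameters<typeof fetch>[1] = {},
  timeoutMs = 5000
): Promise<Awaited<ReturnType<typeof fetch>>> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    return await fetch(url, { ...options, signal: controller.signal })
  } finally {
    clearTimeout(timer)
  }
}
```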
- Around line 11-26: The queue payload types SingleGenerationDispatchPayload and
BatchItemDispatchPayload currently include apiKey; remove apiKey from these
payloads (and any other queue types mentioned around lines 112-127 and 209-225)
so only identifiers (generationId, batchJobId, itemIndex, attempt, enqueuedAt)
are serialized, and update the worker that claims/continues queued jobs to fetch
or derive the provider API key server-side (e.g., from secure vault/secret store
or lookup by generationId/batchJobId) instead of reading it from the queue
message; ensure any code that constructs queue messages (enqueue functions)
stops adding apiKey and any claim/continue handlers obtain credentials at
runtime.
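The enqueue-side change amounts to stripping the secret before serialization; a sketch with an assumed payload shape (the real types live in `convex/cloudflareDispatch.ts`):

```typescript
// Keep credentials out of queue messages: ship identifiers only, and let the
// claim/continue handler re-resolve the provider key server-side. The payload
// shape here is an illustrative subset, not the project's actual type.
interface SingleGenerationDispatchPayload {
  generationId: string
  attempt: number
  enqueuedAt: number
}

export function toQueuePayload(
  input: SingleGenerationDispatchPayload & { apiKey?: string }
): SingleGenerationDispatchPayload {
  // Destructure the secret away so it is never serialized into the queue.
  const { apiKey: _apiKey, ...identifiersOnly } = input
  return identifiersOnly
}
```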
In `@convex/cloudflareWorkerHttp.ts`:
- Around line 208-213: The handler is using unsafe `as never` casts (e.g., when
calling internal.singleGeneration.claimGenerationForWorker with generationId)
which hides invalid IDs; update the Zod request schemas in
convex/cloudflareWorkerHttp.ts to validate and narrow plain string IDs into
Convex Id branded types (use .refine() or a reusable branded helper to produce
Id<"generation">-style types), remove the `as never` casts, and pass the
validated, correctly-typed IDs into mutations like
internal.singleGeneration.claimGenerationForWorker; apply the same pattern to
the other handler inputs listed (lines referenced in the comment) so IDs are
type-safe at the HTTP boundary.
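The branded-ID idea can also be sketched without a schema library, as a small validator that owns the single sanctioned cast; the `Id` brand and the format check below are illustrative assumptions, not Convex's real ID encoding:

```typescript
// Narrow a plain string into a branded table-scoped ID at the HTTP boundary,
// instead of scattering `as never` casts through handlers.
type Id<TableName extends string> = string & { readonly __tableName: TableName }

export function parseId<TableName extends string>(table: TableName, raw: unknown): Id<TableName> {
  // The alphanumeric length check is a placeholder for a real format check.
  if (typeof raw !== "string" || !/^[a-z0-9]{10,}$/i.test(raw)) {
    throw new Error(`Invalid ${table} id`)
  }
  // The one sanctioned cast lives inside the validator.
  return raw as Id<TableName>
}
```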
In `@convex/generatedImages.ts`:
- Around line 1164-1191: getRecoverableUnanalyzedImages currently only fetches a
fixed prefix (limit*4) of isSensitive==null/undefined rows and filters out
dispatched/processing afterwards, which can starve newer recoverable work;
modify the handler in getRecoverableUnanalyzedImages to either (a) query using
an index that includes moderationDispatchStatus and updatedAt (or
moderationDispatchStatus) so the DB returns non-dispatched/ non-processing rows
directly, or (b) implement a paginated scan loop using
withIndex("by_sensitivity", ...) and/or the legacy filter that repeatedly
fetches additional pages (increasing offset or using take in chunks) until you
accumulate args.limit recoverable items or exhaust results, applying the
isRecoverable predicate as you collect so you never stop at an early prefix
dominated by in-flight rows.
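Option (b) is essentially "page until enough matches"; a provider-agnostic sketch where `fetchPage` stands in for the Convex index query:

```typescript
// Keep fetching pages and applying the recoverability predicate until `limit`
// items are collected or the source is exhausted, so an early prefix of
// in-flight rows can never starve newer recoverable work.
export async function collectRecoverable<T>(
  fetchPage: (offset: number, pageSize: number) => Promise<T[]>,
  isRecoverable: (row: T) => boolean,
  limit: number,
  pageSize = 50
): Promise<T[]> {
  const collected: T[] = []
  let offset = 0
  while (collected.length < limit) {
    const page = await fetchPage(offset, pageSize)
    if (page.length === 0) break // source exhausted
    for (const row of page) {
      if (isRecoverable(row) && collected.length < limit) collected.push(row)
    }
    offset += page.length
  }
  return collected
}
```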
In `@workers/bloomstudio-worker/src/index.ts`:
- Around line 1071-1086: The upload can leak unless we always remove the R2
object when the generation does not finish successfully; wrap the call to
postConvexJsonWithRetry("/workers/single-generation/complete") in a
try/catch/finally (or try/finally) and ensure deleteR2Object(env,
uploadResult.r2Key) is called whenever completeResult?.completed !== true
(including when completeResult.duplicate === true or when the post call throws),
replacing the current conditional that only deletes on (!completed &&
!duplicate); mirror the same try/finally cleanup logic in the batch-item path
that uses postConvexJsonWithRetry and deleteR2Object (the code around
completeResult, uploadResult, generationId, and workerAttempt).
- Around line 410-451: postConvexJson currently calls fetch with no timeout
which can hang and prevent postConvexJsonWithRetry from retrying; wrap the fetch
in an AbortController with a configurable hard timeout (e.g., pass a timeoutMs
constant or parameter), pass controller.signal to fetch, and ensure the
controller is aborted after timeout; when fetch is aborted make postConvexJson
throw a retryable Error (detect abort via error.name === "AbortError" or
similar) so postConvexJsonWithRetry can catch and retry; update postConvexJson
signature or internal logic to use the timeout and ensure you still include
WORKER_SECRET_HEADER and getRequiredConfig unchanged.
---
Outside diff comments:
In `@convex/orphanCleanup.ts`:
- Line 323: The cleanup action cleanupOrphanedR2Objects defines prefixes =
["generated/", "thumbnails/", "reference/"] but auditOrphanedR2Objects also
scans "previews/", so add "previews/" to the prefixes array used in
cleanupOrphanedR2Objects (the prefixes variable) so that preview objects
discovered by audit get re-listed and removed by the cleanup routine.
In `@hooks/queries/use-generate-image.ts`:
- Around line 61-76: The public interface UseGenerateImageOptions incorrectly
types the lifecycle callback parameter as ImageGenerationParams only; update the
signatures for onMutate, onSuccess, onError, and onSettled in
UseGenerateImageOptions so their params argument is the union
ImageGenerationParams | VideoGenerationParams (matching the runtime entry.params
passed through), and ensure any related type annotations that reference those
callbacks use the same union to avoid narrowing and maintain correct public API
typings.
In `@hooks/use-media-player.ts`:
- Around line 158-169: dispatchLoad currently calls markMediaUrlReady
unconditionally which lets loadedmetadata persist a URL before any frame is
available; change the guard so markMediaUrlReady(getMediaSource(element) ||
currentUrlRef.current) is only called when the element is an image
(HTMLImageElement) or when the media element's readyState >=
HTMLMediaElement.HAVE_CURRENT_DATA (i.e., at least one frame available). Keep
the existing isCurrentMediaElement, clearErrorTimeout, setHasError, setIsLoading
behavior and only wrap the markMediaUrlReady call in the ready-state/image check
so handleVideoLoadedMetadata no longer caches URLs prematurely.
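The guard condition can be isolated into a small helper; the structural element type below stands in for the DOM interfaces so the logic is testable outside a browser:

```typescript
// Only persist a media URL once at least one frame is decodable: images are
// ready on load, videos only at readyState >= HAVE_CURRENT_DATA.
const HAVE_CURRENT_DATA = 2 // HTMLMediaElement.HAVE_CURRENT_DATA

interface MediaLikeElement {
  tagName: string
  readyState?: number
}

export function shouldMarkMediaUrlReady(element: MediaLikeElement): boolean {
  if (element.tagName === "IMG") return true
  return (element.readyState ?? 0) >= HAVE_CURRENT_DATA
}
```

With this helper, `handleVideoLoadedMetadata` can keep its existing error/loading bookkeeping and wrap only the `markMediaUrlReady` call in the check.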
---
Nitpick comments:
In `@app/api/images/delete-bulk/route.ts`:
- Around line 97-98: The loop over r2Keys contains a redundant runtime type
guard (if (typeof key !== "string") continue) because the Zod validation already
guarantees each element is a string; remove that check from the for (const key
of r2Keys) loop and ensure r2Keys is typed as string[] (or use the Zod-parsed
type) so the code relies on static/validated types rather than repeating a
runtime typeof guard.
In `@components/github-star-button.tsx`:
- Around line 12-14: The GitHub payload is being cast with `as
GitHubRepoResponse`, which defeats the runtime guard; instead, parse the JSON
into an unknown and perform a runtime type check that narrows it to the shape {
stargazers_count: number }, then treat `stargazers_count` as required. Locate
where the JSON is parsed (the place using `as GitHubRepoResponse`), remove the
`as` cast, add a small guard function or inline checks that verify the value is
an object and has a numeric `stargazers_count`, and only then read
`stargazers_count` (update usages around the `GitHubRepoResponse` type and the
code paths mentioned at lines ~26-31).
In `@components/studio/gallery/persistent-image-gallery.test.tsx`:
- Around line 266-270: The test currently uses activeImageId="conv-8" which is
too close to the end and doesn't lock the preload threshold; change the test for
PersistentImageGallery to exercise the exact preload boundary by setting
activeImageId to the first index that should trigger preload (e.g., the item at
preloadThreshold from the end) and assert mockHistoryLoadMore was called with
20, then add a second assertion/variant using an activeImageId one index further
from the end (outside the threshold) and assert mockHistoryLoadMore was not
called; reference the PersistentImageGallery render, activeImageId prop, and
mockHistoryLoadMore to locate and update the test.
- Around line 254-257: The test creates extendedImages using an unsafe "as
Id<\"generatedImages\">" cast which bypasses type safety; update the test to
avoid the cast by either (a) using TypeScript's "satisfies" on the array/object
literal to ensure each item conforms to Id<\"generatedImages\"> (e.g., apply
satisfies to the mapped object) or (b) add a small test helper factory (e.g.,
makeGeneratedImage(id: string, overrides?)) that returns a properly typed item
and use that in the Array.from mapping; adjust the creation of extendedImages
and any usage of mockConvexImages[0] accordingly so no "as" cast is needed.
In `@components/studio/gallery/persistent-image-gallery.tsx`:
- Around line 384-402: The effect should call the stable ref instead of the
changing callback to prevent unnecessary re-runs: replace direct calls to
handleLoadMore with invoking handleLoadMoreRef.current() inside the effect and
remove handleLoadMore from the dependency array, keeping activeImageId,
canLoadMore, isLoadingMore, and mappedImages (and LIGHTBOX_PRELOAD_THRESHOLD is
used unchanged); ensure you still guard with the existing early-return checks
and verify handleLoadMoreRef.current exists before calling it.
In `@convex/batchGeneration.ts`:
- Around line 173-194: The loop that creates up to args.count batchItems in a
single mutation (the for loop that calls ctx.db.insert("batchItems", ...)) can
hit Convex mutation limits for very large batches; change it to insert items in
configurable chunks (e.g., CHUNK_SIZE = 100) and for each chunk perform a batch
of inserts then, if more remain, schedule the next chunk using
ctx.scheduler.runAfter pointing to
internal.batchGeneration.seedBatchDispatchChunk (or a new seedBatchItemChunk
handler) with batchJobId and next startIndex so remaining items are inserted in
subsequent mutations; ensure you update any identifiers (batchJobId, ownerId,
itemIndex, status fields) and preserve createdAt/updatedAt semantics.
In `@convex/contentAnalysis.ts`:
- Around line 117-123: The rate-limited branch in convex/contentAnalysis.ts
currently returns null when args.allProvidersRateLimited, leaving rows pending
with no guaranteed wake-up; update the logic in the analyzeRecentImages path
(the branch checking args.allProvidersRateLimited and the similar branches using
queuedImageCount, ANALYSIS_QUEUE_LOOKAHEAD and DELAY_BETWEEN_REQUESTS_MS) to
schedule a retry instead of returning null—either return a non-null delay (e.g.,
DELAY_BETWEEN_REQUESTS_MS) or explicitly enqueue/schedule another
analyzeRecentImages invocation so that when provider limits expire the pending
images are re-scanned; ensure the same fix is applied to the other occurrences
referenced (lines around the queuedImageCount/ANALYSIS_QUEUE_LOOKAHEAD checks).
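The "always schedule a wake-up" rule can be captured in one pure function; the constants below are placeholders, not the module's real values:

```typescript
// Instead of returning null when every provider is rate limited (which strands
// pending rows with no guaranteed wake-up), return a retry delay so the scan
// resumes once limits expire. Constant values are illustrative assumptions.
const DELAY_BETWEEN_REQUESTS_MS = 2_000
const RATE_LIMIT_RETRY_DELAY_MS = 60_000

export function nextAnalysisDelayMs(options: {
  allProvidersRateLimited: boolean
  queuedImageCount: number
}): number | null {
  if (options.allProvidersRateLimited) {
    return RATE_LIMIT_RETRY_DELAY_MS // previously: return null
  }
  return options.queuedImageCount > 0 ? DELAY_BETWEEN_REQUESTS_MS : null
}
```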
In `@convex/lib/batchGenerationState.test.ts`:
- Around line 102-144: Add a unit test to cover the pending → processing
transition in getBatchStatusAfterItemSettlement: create a case that calls
getBatchStatusAfterItemSettlement with status: "pending" and item counts that
should move the batch to "processing" (e.g., completedCount < totalCount and no
terminal condition), and assert the returned status is "processing"; place this
alongside the existing tests (same describe block) so future regressions of the
pending branch are detected.
In `@convex/lib/openrouter.ts`:
- Line 125: The code currently casts response.json() to
OpenRouterChatCompletionResponse which bypasses runtime validation; change the
handling in the OpenRouter call to parse JSON into unknown (e.g., const raw =
await response.json() as unknown), then perform runtime narrowing/validation on
raw to ensure it has the expected shape (object with choices array,
choices[0].message and message.content string) before assigning to or using a
typed variable; replace the direct cast to OpenRouterChatCompletionResponse with
a validated value (or a type-guard function) and handle missing/malformed
choices[0].message.content with a controlled error path so malformed provider
responses fail safely.
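This prompt and the analogous Cerebras one call for the same guard shape: parse the body as `unknown`, then verify `choices[0].message.content` before trusting it. A sketch (the interface mirrors the OpenAI-style chat-completion responses these providers return; anything beyond that path is an assumption):

```typescript
// Runtime guard narrowing an unknown JSON body to a chat-completion-like shape.
interface ChatCompletionLikeResponse {
  choices: Array<{ message: { content: string } }>
}

export function isChatCompletionLikeResponse(value: unknown): value is ChatCompletionLikeResponse {
  if (typeof value !== "object" || value === null || !("choices" in value)) return false
  const choices = value.choices
  if (!Array.isArray(choices) || choices.length === 0) return false
  const first: unknown = choices[0]
  if (typeof first !== "object" || first === null || !("message" in first)) return false
  const message: unknown = first.message
  if (typeof message !== "object" || message === null || !("content" in message)) return false
  return typeof message.content === "string"
}
```

Callers would do `const raw: unknown = await response.json()` and throw a controlled error when the guard returns false, instead of casting.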
In `@convex/lib/promptInference.ts`:
- Around line 23-29: Replace the unsafe `as CerebrasChatCompletionResponse` cast
with a lightweight runtime type guard or schema check before passing data to
parseResult: implement a small validation function (e.g.,
isCerebrasChatCompletionResponse) that verifies the top-level shape (choices
array with message.content string) and use it to narrow the response; if
validation fails, handle the error path (throw or return a clear parse error) so
parseResult only receives structurally validated data. Reference the
CerebrasChatCompletionResponse type and the parseResult call to locate where the
cast is applied and replace it with the guard.
In `@wrangler.jsonc`:
- Around line 11-25: The top-level observability.enabled is set to false while
nested settings (observability.logs.persist and
observability.logs.invocation_logs) are enabled; verify whether Wrangler treats
observability.enabled as authoritative and, if so, set observability.enabled to
true so the nested log settings take effect, or conversely remove/disable the
nested persistence/invocation flags if the top-level flag must remain false;
check and adjust the observability.enabled, observability.logs.persist, and
observability.logs.invocation_logs entries accordingly to ensure invocation logs
are actually enabled at deploy time.
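If the top-level flag does prove authoritative, the fix is a one-line change; a sketch of the relevant `wrangler.jsonc` fragment, assuming the nested log settings should take effect:

```jsonc
{
  "observability": {
    // Must be true for the nested log settings below to apply.
    "enabled": true,
    "logs": {
      "persist": true,
      "invocation_logs": true
    }
  }
}
```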
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8599c94b-28e7-462d-a850-9812a862dbed
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `convex/_generated/api.d.ts` is excluded by `!**/_generated/**`
📒 Files selected for processing (65)
- .dev.vars.example
- .github/workflows/release.yml
- .gitignore
- app/api/images/delete-bulk/route.ts
- app/api/images/delete/route.ts
- cloudflare-migration-plan.md
- components/debug/limit-tester.tsx
- components/github-star-button.tsx
- components/images/image-lightbox.test.tsx
- components/images/image-lightbox.tsx
- components/images/lightbox/index.ts
- components/images/lightbox/lightbox-media-display.tsx
- components/studio/gallery/gallery-thumbnail.test.tsx
- components/studio/gallery/gallery-thumbnail.tsx
- components/studio/gallery/persistent-image-gallery.test.tsx
- components/studio/gallery/persistent-image-gallery.tsx
- components/studio/layout/studio-shell.test.tsx
- components/studio/layout/studio-shell.tsx
- components/ui/media-player.test.tsx
- components/ui/media-player.tsx
- convex-architecture-guidance.md
- convex.json
- convex/batchGeneration.ts
- convex/batchProcessor.ts
- convex/cloudflareDispatch.ts
- convex/cloudflareWorkerHttp.ts
- convex/contentAnalysis.test.ts
- convex/contentAnalysis.ts
- convex/generatedImages.ts
- convex/http.ts
- convex/lib/batchGenerationState.test.ts
- convex/lib/batchGenerationState.ts
- convex/lib/groq.test.ts
- convex/lib/groq.ts
- convex/lib/index.ts
- convex/lib/openrouter.ts
- convex/lib/promptInference.test.ts
- convex/lib/promptInference.ts
- convex/lib/r2.ts
- convex/lib/retry.ts
- convex/lib/videoPreview.ts
- convex/lib/videoThumbnail.ts
- convex/orphanCleanup.ts
- convex/orphanCleanupQueries.ts
- convex/promptInference.ts
- convex/schema.ts
- convex/secondaryAssets.ts
- convex/secondaryAssetsProcessor.ts
- convex/singleGeneration.ts
- convex/singleGenerationProcessor.ts
- convex/thumbnailMigration.ts
- convex/thumbnailMigrationActions.ts
- convex/users.ts
- eslint.config.mjs
- hooks/mutations/use-delete-image.test.tsx
- hooks/mutations/use-delete-image.ts
- hooks/queries/use-generate-image.test.tsx
- hooks/queries/use-generate-image.ts
- hooks/use-media-player.ts
- package.json
- scripts/migrate-local.ts
- untested-files.md
- vercel.json
- workers/bloomstudio-worker/src/index.ts
- wrangler.jsonc
💤 Files with no reviewable changes (12)
- convex/secondaryAssetsProcessor.ts
- convex/batchProcessor.ts
- hooks/queries/use-generate-image.test.tsx
- convex/lib/index.ts
- convex/lib/r2.ts
- scripts/migrate-local.ts
- convex/thumbnailMigration.ts
- convex/lib/videoThumbnail.ts
- convex/thumbnailMigrationActions.ts
- untested-files.md
- convex/lib/videoPreview.ts
- convex/singleGenerationProcessor.ts
```ts
async function postConvexJson<TResponse>(env: Env, path: string, body: unknown): Promise<TResponse> {
  const { convexSiteUrl, sharedSecret } = getRequiredConfig(env)
  const response = await fetch(`${convexSiteUrl}${path}`, {
    method: "POST",
    headers: {
      "content-type": "application/json",
      [WORKER_SECRET_HEADER]: sharedSecret,
    },
    body: JSON.stringify(body),
  })

  if (!response.ok) {
    const errorText = await response.text().catch(() => "")
    throw new Error(
      `Convex callback ${path} failed with ${response.status} ${response.statusText}${errorText ? `: ${errorText}` : ""}`
    )
  }

  return (await response.json()) as TResponse
}

async function postConvexJsonWithRetry<TResponse>(
  env: Env,
  path: string,
  body: unknown,
  maxAttempts = 3
): Promise<TResponse> {
  let lastError: Error | null = null

  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    try {
      return await postConvexJson<TResponse>(env, path, body)
    } catch (error) {
      lastError = error instanceof Error ? error : new Error("Unknown Convex callback error")
      if (attempt === maxAttempts) {
        break
      }
    }
  }

  throw lastError ?? new Error("Unknown Convex callback error")
}
```
Time-box worker→Convex callbacks.
postConvexJson() uses bare fetch, so a stalled Convex edge/network hop can pin a queue consumer until the platform timeout before postConvexJsonWithRetry() gets a chance to retry. Add an abort timeout here and treat timeouts as retryable callback failures.
Suggested hardening
```diff
+const CONVEX_CALLBACK_TIMEOUT_MS = 10_000
+
 async function postConvexJson<TResponse>(env: Env, path: string, body: unknown): Promise<TResponse> {
   const { convexSiteUrl, sharedSecret } = getRequiredConfig(env)
-  const response = await fetch(`${convexSiteUrl}${path}`, {
-    method: "POST",
-    headers: {
-      "content-type": "application/json",
-      [WORKER_SECRET_HEADER]: sharedSecret,
-    },
-    body: JSON.stringify(body),
-  })
+  const controller = new AbortController()
+  const timeoutId = setTimeout(() => controller.abort(), CONVEX_CALLBACK_TIMEOUT_MS)
+  let response: Response
+  try {
+    response = await fetch(`${convexSiteUrl}${path}`, {
+      method: "POST",
+      headers: {
+        "content-type": "application/json",
+        [WORKER_SECRET_HEADER]: sharedSecret,
+      },
+      body: JSON.stringify(body),
+      signal: controller.signal,
+    })
+  } catch (error) {
+    if (error instanceof Error && error.name === "AbortError") {
+      throw new Error(`Convex callback ${path} timed out after ${CONVEX_CALLBACK_TIMEOUT_MS}ms`)
+    }
+    throw error
+  } finally {
+    clearTimeout(timeoutId)
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workers/bloomstudio-worker/src/index.ts` around lines 410 - 451,
postConvexJson currently calls fetch with no timeout which can hang and prevent
postConvexJsonWithRetry from retrying; wrap the fetch in an AbortController with
a configurable hard timeout (e.g., pass a timeoutMs constant or parameter), pass
controller.signal to fetch, and ensure the controller is aborted after timeout;
when fetch is aborted make postConvexJson throw a retryable Error (detect abort
via error.name === "AbortError" or similar) so postConvexJsonWithRetry can catch
and retry; update postConvexJson signature or internal logic to use the timeout
and ensure you still include WORKER_SECRET_HEADER and getRequiredConfig
unchanged.
```ts
const completeResult = await postConvexJsonWithRetry<CompleteGenerationResponse>(env, "/workers/single-generation/complete", {
  generationId: message.body.generationId,
  claimToken,
  r2Key: uploadResult.r2Key,
  url: uploadResult.url,
  width: params.width ?? (isVideoRequest(params) ? 1024 : dirtberrySourceDimensions?.width ?? 1024),
  height: params.height ?? (isVideoRequest(params) ? 1024 : dirtberrySourceDimensions?.height ?? 1024),
  seed,
  contentType,
  sizeBytes: uploadResult.sizeBytes,
  retryCount: workerAttempt > 1 ? workerAttempt - 1 : undefined,
})

if (!completeResult.completed && !completeResult.duplicate) {
  await deleteR2Object(env, uploadResult.r2Key)
}
```
Delete uploads on every non-complete exit.
After the R2 upload succeeds, any path other than completeResult.completed === true leaks the new object. That includes duplicate completions and exceptions from the /complete callback itself, so retries can accumulate orphaned media.
Suggested fix
```diff
-  const completeResult = await postConvexJsonWithRetry<CompleteGenerationResponse>(env, "/workers/single-generation/complete", {
-    generationId: message.body.generationId,
-    claimToken,
-    r2Key: uploadResult.r2Key,
-    url: uploadResult.url,
-    width: params.width ?? (isVideoRequest(params) ? 1024 : dirtberrySourceDimensions?.width ?? 1024),
-    height: params.height ?? (isVideoRequest(params) ? 1024 : dirtberrySourceDimensions?.height ?? 1024),
-    seed,
-    contentType,
-    sizeBytes: uploadResult.sizeBytes,
-    retryCount: workerAttempt > 1 ? workerAttempt - 1 : undefined,
-  })
-
-  if (!completeResult.completed && !completeResult.duplicate) {
-    await deleteR2Object(env, uploadResult.r2Key)
-  }
+  try {
+    const completeResult = await postConvexJsonWithRetry<CompleteGenerationResponse>(env, "/workers/single-generation/complete", {
+      generationId: message.body.generationId,
+      claimToken,
+      r2Key: uploadResult.r2Key,
+      url: uploadResult.url,
+      width: params.width ?? (isVideoRequest(params) ? 1024 : dirtberrySourceDimensions?.width ?? 1024),
+      height: params.height ?? (isVideoRequest(params) ? 1024 : dirtberrySourceDimensions?.height ?? 1024),
+      seed,
+      contentType,
+      sizeBytes: uploadResult.sizeBytes,
+      retryCount: workerAttempt > 1 ? workerAttempt - 1 : undefined,
+    })
+
+    if (!completeResult.completed) {
+      await deleteR2Object(env, uploadResult.r2Key)
+    }
+  } catch (error) {
+    await deleteR2Object(env, uploadResult.r2Key)
+    throw error
+  }
```

Mirror the same cleanup in the batch-item path.
Also applies to: 1164-1179
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workers/bloomstudio-worker/src/index.ts` around lines 1071 - 1086, The upload
can leak unless we always remove the R2 object when the generation does not
finish successfully; wrap the call to
postConvexJsonWithRetry("/workers/single-generation/complete") in a
try/catch/finally (or try/finally) and ensure deleteR2Object(env,
uploadResult.r2Key) is called whenever completeResult?.completed !== true
(including when completeResult.duplicate === true or when the post call throws),
replacing the current conditional that only deletes on (!completed &&
!duplicate); mirror the same try/finally cleanup logic in the batch-item path
that uses postConvexJsonWithRetry and deleteR2Object (the code around
completeResult, uploadResult, generationId, and workerAttempt).
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
hooks/use-media-player.test.ts (1)
5-8: ⚠️ Potential issue | 🟡 Minor

Add `resetReadyMediaUrlsForTest()` to ensure test isolation.

The `beforeEach` block only calls `vi.clearAllMocks()` but doesn't reset the global `readyMediaUrls` Set. Since the hook now uses a global cache, tests could leak readiness state between runs, potentially causing flaky tests depending on execution order.

🛠️ Proposed fix

```diff
 import { renderHook, act } from "@testing-library/react"
 import { describe, expect, it, vi, beforeEach } from "vitest"
-import { useMediaPlayer } from "./use-media-player"
+import { useMediaPlayer, resetReadyMediaUrlsForTest } from "./use-media-player"

 describe("useMediaPlayer", () => {
   beforeEach(() => {
     vi.clearAllMocks()
+    resetReadyMediaUrlsForTest()
   })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hooks/use-media-player.test.ts` around lines 5 - 8, The tests currently only call vi.clearAllMocks() in beforeEach but do not clear the global readyMediaUrls cache; update the beforeEach in use-media-player.test.ts to also call resetReadyMediaUrlsForTest() so each test starts with a fresh readyMediaUrls Set and avoids cross-test leakage; keep vi.clearAllMocks() and call resetReadyMediaUrlsForTest() (the helper that clears the global Set) before each test.

convex/batchGeneration.ts (1)
123-169: ⚠️ Potential issue | 🟠 Major

Don't store the raw provider key on the batch job.

Line 168 persists the Pollinations API key directly on `batchJobs` so the worker can read it later. That leaves a plaintext credential at rest for the full job lifetime and sends it back through continuation queries. Store an encrypted secret or a durable key reference here, then resolve/decrypt it server-side when the worker continues.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/batchGeneration.ts` around lines 123 - 169, The handler in convex/batchGeneration.ts currently persists the raw Pollinations API key into the batchJobs document via ctx.db.insert("batchJobs", { apiKey: args.apiKey, ... }), which exposes plaintext credentials; instead, store a durable secret reference or an encrypted value: save the API key into a secure secrets store (or encrypt it server-side) and insert only the secretId/encryptedBlob reference on the batchJobs document (e.g., secretRef or encryptedApiKeyId), remove the plain apiKey field, and update the worker/processor logic that reads batchJobs to resolve/decrypt that secret server-side before using it (ensure resolution happens in the worker code path, not in client-accessible continuations).
🧹 Nitpick comments (12)
convex/lib/batchGenerationState.test.ts (1)
125-143: Consider using a stronger assertion for the first case.

The `.not.toBe("failed")` assertion at lines 127-133 is weaker than necessary—it would pass even if the function returned an unexpected value. Since both test cases (90/10/100 and 3/1/4) represent the same scenario (finished batch with mixed results), consider using `.toBe("completed")` for both.

💡 Suggested improvement

```diff
 it("marks a mixed-result finished batch as completed", () => {
   expect(
     getBatchStatusAfterItemSettlement({
       completedCount: 90,
       failedCount: 10,
       totalCount: 100,
       status: "processing",
     })
-  ).not.toBe("failed")
+  ).toBe("completed")
   expect(
     getBatchStatusAfterItemSettlement({
       completedCount: 3,
       failedCount: 1,
       totalCount: 4,
       status: "processing",
     })
   ).toBe("completed")
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/lib/batchGenerationState.test.ts` around lines 125 - 143, The first assertion uses a weak check (.not.toBe("failed")) for getBatchStatusAfterItemSettlement; update the test case that calls getBatchStatusAfterItemSettlement with completedCount: 90, failedCount: 10, totalCount: 100, status: "processing" to assert the explicit expected value "completed" (use .toBe("completed")) so both mixed-result finished scenarios assert the same concrete outcome.

app/api/images/delete/route.test.ts (2)
53-67: Assert the delete side effect stays untouched on `MISSING_KEY`.

Line 53 checks the error payload, but this failure case does not verify that `deleteImage` was never called. For a destructive endpoint, that extra assertion helps keep validation regressions fail-closed.

Suggested diff

```diff
 expect(data).toEqual({
   success: false,
   error: {
     code: "MISSING_KEY",
     message: "Missing r2Key",
   },
 })
+expect(mockDeleteImage).not.toHaveBeenCalled()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/images/delete/route.test.ts` around lines 53 - 67, The test for the "returns 400 when r2Key is missing" case should also assert that the destructive side-effect is not invoked: after calling POST(buildRequest(...)) and checking the 400/MISSING_KEY payload, add an assertion that deleteImage (the function that performs the delete) was never called (e.g., expect(deleteImage).not.toHaveBeenCalled()) and ensure mocks like mockAuth remain as used; locate the assertion near the existing expects in the "returns 400 when r2Key is missing" test to keep the validation fail-closed.
27-100: Add an unauthenticated-path test.

Every scenario in this suite seeds `auth()` with a `userId`, so the route's primary auth guard can regress without any failing test. I'd add one case that leaves the request unauthenticated and asserts the delete is rejected and `deleteImage` is not called.

Suggested diff

```diff
+it("rejects unauthenticated requests", async () => {
+  mockAuth.mockResolvedValue({ userId: null })
+
+  const response = await POST(
+    buildRequest(JSON.stringify({ r2Key: "generated/some-user/image.png" })),
+  )
+  const data = await response.json()
+
+  expect(response.status).toBe(401)
+  expect(data.success).toBe(false)
+  expect(mockDeleteImage).not.toHaveBeenCalled()
+})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/images/delete/route.test.ts` around lines 27 - 100, Add a new test case in the describe("/api/images/delete") suite that simulates an unauthenticated request by having mockAuth resolve to null/undefined (e.g., mockAuth.mockResolvedValue(null)), call POST(buildRequest(...)) with a valid payload (an r2Key or empty body as appropriate), then assert the response is an authentication failure (expect response.status to be 401) and/or the expected error JSON for unauthenticated access, and assert mockDeleteImage was not called; place the test near the other cases and reference mockAuth, POST, buildRequest, and mockDeleteImage so the route's auth guard is covered.

workers/bloomstudio-worker/src/secondary-assets.ts (1)
145-152: Consider adding 503 explicitly to the retryable status codes documentation.

The current logic `response.status >= 500` covers 503 (Service Unavailable), but it might be worth adding a comment clarifying that this is intentional for Cloudflare Media Transformations, which may return 503 during high load.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workers/bloomstudio-worker/src/secondary-assets.ts` around lines 145 - 152, Add a short clarifying comment next to the retryable-status check in the media transform probe branch so reviewers know 503 is intentionally treated as retryable; update the block around the if (!response.ok) and the retryable calculation (where SecondaryAssetsError is thrown for "media_transform_probe") to explicitly mention that response.status >= 500 includes 503 (Cloudflare Media Transformations may return 503 under high load), and keep the existing logic (response.status >= 500 || response.status === 408 || response.status === 429) unchanged.

hooks/use-media-player.ts (1)
5-5: Consider documenting or bounding the global readiness cache.

The `readyMediaUrls` Set grows unboundedly during a session. For typical usage this is fine, but in long-running sessions with many distinct media URLs, memory could accumulate. Consider documenting this trade-off or adding a cache eviction strategy (e.g., LRU with a max size) in a future iteration.

🤖 Prompt for AI Agents
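A bounded variant of this readiness cache is small to sketch. This is a hedged illustration, not the hook's current API; `MAX_READY_URLS`, `markReady`, and `isReady` are hypothetical names:

```typescript
// Sketch of an LRU-ish bounded cache built on Map insertion order.
// Hypothetical names; the real hook currently uses a plain Set.
const MAX_READY_URLS = 500

const readyMediaUrls = new Map<string, true>()

function markReady(url: string): void {
  // Re-inserting moves the key to the back of the Map's iteration order.
  readyMediaUrls.delete(url)
  readyMediaUrls.set(url, true)
  if (readyMediaUrls.size > MAX_READY_URLS) {
    // The first key in iteration order is the least recently marked.
    const oldest = readyMediaUrls.keys().next().value
    if (oldest !== undefined) readyMediaUrls.delete(oldest)
  }
}

function isReady(url: string): boolean {
  return readyMediaUrls.has(url)
}
```

Because JavaScript `Map` iterates in insertion order, re-inserting on every mark gives eviction of the least recently touched URL without any extra bookkeeping.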
Verify each finding against the current code and only fix it if needed. In `@hooks/use-media-player.ts` at line 5, The global Set readyMediaUrls can grow unbounded across a session; either document this trade-off in the hook's comments (useMediaPlayer / readyMediaUrls) or replace the Set with a bounded cache (e.g., an LRU with a max size) to evict old entries — implement an LRU using a small Map-based helper or an existing tiny LRU util, expose a MAX_READY_URLS constant, and update add/check logic to drop the oldest entries when the cap is exceeded so memory cannot grow without bound.

convex/lib/cloudflareWorkerHttp.ts (1)
43-66: Handler receives `unknown` but parsed data has schema type.

The handler signature uses `body: unknown`, but after `schema.safeParse(json)` succeeds, `parsed.data` has the inferred schema type. Consider using a generic to preserve type safety:

♻️ Proposed improvement for type safety

```diff
-export function createWorkerHttpAction(
-  schema: z.ZodTypeAny,
-  handler: (ctx: WorkerHttpContext, body: unknown) => Promise<unknown>
+export function createWorkerHttpAction<T extends z.ZodTypeAny>(
+  schema: T,
+  handler: (ctx: WorkerHttpContext, body: z.infer<T>) => Promise<unknown>
 ) {
   return httpAction(async (ctx, request) => {
     ...
-    const result = await handler(ctx, parsed.data)
+    const result = await handler(ctx, parsed.data as z.infer<T>)
     return Response.json(result)
   })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/lib/cloudflareWorkerHttp.ts` around lines 43 - 66, The handler currently accepts body: unknown even though schema.safeParse(json) yields parsed.data of the schema type; make createWorkerHttpAction generic (e.g., <T>) so schema is typed as z.ZodType<T> (or similar) and the handler signature becomes (ctx: WorkerHttpContext, body: T) => Promise<unknown>, then use that generic type for parsed.data passed into handler; update references to createWorkerHttpAction, schema, handler, and parsed.data accordingly to preserve type safety throughout.

convex/contentAnalysis.ts (1)
619-623: Redundant null check after array access.

Line 620 assigns `image = images[0]` and Line 621 checks `if (!image)`. Since `images.length === 0` is already checked at Line 615-617, `images[0]` is guaranteed to exist when Line 620 executes. The null check is redundant.

♻️ Simplify by removing redundant check

```diff
 if (images.length === 0) {
   return
 }

 let allProvidersRateLimited = false
 const image = images[0]
-if (!image) {
-  return
-}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/contentAnalysis.ts` around lines 619 - 623, Remove the redundant null check after reading the first image: since earlier code already returns when images.length === 0, the lines "const image = images[0]" followed by "if (!image) { return }" should be simplified by dropping the conditional return; keep the single assignment to image (or inline usage) and rely on the prior existence check so functions/logic using image (the const image = images[0] reference in contentAnalysis.ts) proceed without the unnecessary guard.

workers/bloomstudio-worker/src/core.ts (1)
105-125: Consider adding delay between retry attempts.

The retry loop immediately retries on failure without any backoff delay. For transient network issues or Convex rate limits, immediate retries may not give the system time to recover.
♻️ Proposed fix to add exponential backoff
```diff
 export async function postConvexJsonWithRetry<TResponse>(
   env: Env,
   path: string,
   body: unknown,
   maxAttempts = 3
 ): Promise<TResponse> {
   let lastError: Error | null = null

   for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
     try {
       return await postConvexJson<TResponse>(env, path, body)
     } catch (error) {
       lastError =
         error instanceof Error ? error : new Error("Unknown Convex callback error")
       if (attempt === maxAttempts) {
         break
       }
+      // Exponential backoff: 100ms, 200ms, 400ms...
+      await new Promise((r) => setTimeout(r, 100 * Math.pow(2, attempt - 1)))
     }
   }

   throw lastError ?? new Error("Unknown Convex callback error")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workers/bloomstudio-worker/src/core.ts` around lines 105 - 125, postConvexJsonWithRetry currently retries immediately on failure; add an exponential backoff (with optional jitter) between attempts to avoid hammering Convex or transient network flaps. Inside postConvexJsonWithRetry (using the existing attempt loop and maxAttempts), after catching an error and before the next attempt, await a delay computed like baseDelayMs * 2 ** (attempt - 1) (optionally add small random jitter) using a promise-based sleep (e.g., await new Promise(res => setTimeout(res, ms))); do not delay after the final attempt and preserve lastError behavior.

hooks/queries/use-generate-image.ts (1)
306-306: Consider narrowing instead of `as` cast for params.

The `as GeneratedImage["params"]` cast bypasses type checking. Since `entry.params` is typed as `ImageGenerationParams | VideoGenerationParams`, consider using a type guard or restructuring to avoid the cast, or add a brief comment explaining why this cast is safe here.

As per coding guidelines: "Avoid `as` casts in TypeScript; prefer `satisfies`, generics, and runtime validation."

🤖 Prompt for AI Agents
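One way to drop the cast is a runtime type guard over the params union. The field names below (`width`/`height` vs `duration`) are assumptions about the two param shapes, so treat this as a sketch rather than the project's actual types:

```typescript
// Sketch: narrow a params union with a guard instead of an `as` cast.
// The shapes are illustrative stand-ins for the real generation param types.
interface ImageGenerationParams { width: number; height: number; prompt: string }
interface VideoGenerationParams { duration: number; prompt: string }
type Params = ImageGenerationParams | VideoGenerationParams

function isImageParams(p: Params): p is ImageGenerationParams {
  return "width" in p && "height" in p
}

function describeParams(p: Params): string {
  if (isImageParams(p)) {
    // p is ImageGenerationParams in this branch; no cast needed.
    return `image ${p.width}x${p.height}`
  }
  // p narrows to VideoGenerationParams here.
  return `video ${p.duration}s`
}
```

The compiler then enforces which fields are visible in each branch, which a blanket cast silently skips.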
Verify each finding against the current code and only fix it if needed. In `@hooks/queries/use-generate-image.ts` at line 306, Replace the unsafe cast of entry.params to GeneratedImage["params"] by narrowing the union at runtime: implement or use a type guard (e.g., isImageParams) that checks a discriminant or presence of image-specific fields on entry.params (or use entry.type if available), then assign entryParams using that narrowed branch so TypeScript infers ImageGenerationParams without `as`; if a cast is absolutely unavoidable, add a concise comment explaining why the cast is safe and reference the invariant used (e.g., entry.type === "image") and the symbols entry.params, entryParams, GeneratedImage["params"], ImageGenerationParams, VideoGenerationParams to make the rationale explicit.

workers/bloomstudio-worker/src/queue.ts (1)
11-28: Make the queue router exhaustive and drop the `as Message<...>` casts.

This switch currently depends on five manual casts, so a new `jobType` can be added without the compiler forcing this router to handle it. Refactoring around a narrowed body/helper plus an unreachable `never` branch will keep the dispatch path type-safe without `as`.
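The shape of such a router can be sketched with a discriminated union and a `never` default; the job types below are illustrative, not the worker's real message payloads:

```typescript
// Sketch: a queue router the compiler forces to stay exhaustive.
// Job shapes are hypothetical stand-ins for the worker's queue messages.
type GenerationJob = { jobType: "generation"; imageId: string }
type ModerationJob = { jobType: "moderation"; imageId: string }
type QueueJob = GenerationJob | ModerationJob

function routeJob(job: QueueJob): string {
  switch (job.jobType) {
    case "generation":
      return `generate:${job.imageId}`
    case "moderation":
      return `moderate:${job.imageId}`
    default: {
      // Adding a new jobType to QueueJob without a case here turns this
      // assignment into a compile-time error.
      const unreachable: never = job
      throw new Error(`Unhandled job type: ${JSON.stringify(unreachable)}`)
    }
  }
}
```

Each `case` narrows `job` to one member of the union, so the body needs no casts at all.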
ascasts in TypeScript; prefersatisfies, generics, and runtime validation".components/images/image-lightbox.test.tsx (1)
379-384: Preserve the imported module's type instead of erasing it to `Record<string, unknown>`.

The cast here throws away the real module shape right where this mock is trying to preserve it. Keeping `actual` typed avoids another `as` escape hatch in the test surface.

As per coding guidelines, "Avoid `as` casts in TypeScript; prefer `satisfies`, generics, and runtime validation".

🤖 Prompt for AI Agents
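The `satisfies` shape-check can be sketched outside the test harness; `actual` below is a stand-in for the object `importOriginal()` would return, so its member names are assumptions:

```typescript
// Sketch: override one export while `satisfies` keeps the module's shape.
// `actual` is an illustrative stand-in for `await importOriginal()`.
const actual = {
  getModelDisplayName: (model: string) => `Display: ${model}`,
  someOtherExport: 42,
}

const mocked = {
  ...actual,
  // Same signature as the original; a mismatched override fails to compile.
  getModelDisplayName: (model: string) => model,
} satisfies typeof actual
```

Unlike a `Record<string, unknown>` cast, a typo in the overridden key or a changed signature is a compile error rather than a silent mismatch.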
Verify each finding against the current code and only fix it if needed. In `@components/images/image-lightbox.test.tsx` around lines 379 - 384, The mock currently erases the imported module's type by casting to Record<string, unknown>; instead preserve the actual module shape by removing the cast and applying a TypeScript 'satisfies' check on the returned object so the mock keeps the real module type: inside the vi.mock callback (the importOriginal usage), keep const actual = await importOriginal(); then return { ...actual, getModelDisplayName: (model: string) => model } satisfies typeof actual so the mock respects the original module shape and avoids an 'as' escape hatch while keeping references to importOriginal, actual, and getModelDisplayName.

coderabbit-review-comments.md (1)
1-163: Keep the review transcript out of the repo.

This file is a transient review checklist with line-specific amendments, not durable project documentation. Keeping it versioned will go stale quickly and adds noise for future contributors unless review artifacts are intentionally part of the repo.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@coderabbit-review-comments.md` around lines 1 - 163, The review transcript file coderabbit-review-comments.md is transient and should not be committed to source; remove this file from the repository (or move it to a non-versioned location such as a personal notes folder or a dedicated `.reviews/` archive that is excluded by .gitignore) and commit the deletion, then update the PR to reflect the removal (use a succinct commit message like "remove transient review transcript"); if you must preserve the content for internal tracking, export it to an external notes system and ensure any persisted copy is excluded from version control by adding the chosen path to .gitignore.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@components/images/image-lightbox.tsx`:
- Around line 53-55: The current isGeneratedImageId guard accepts any non-empty
string; tighten it to validate the runtime shape of a Convex Id for the
generatedImages collection by checking the collection prefix and expected format
(e.g., ensure the value is a string that begins with the "generatedImages"
collection identifier/prefix and contains only allowed ID characters such as
alphanumerics, dashes/underscores, and an expected separator like ":"), and
reject values that don't match that pattern; update the isGeneratedImageId
function accordingly and keep callers like warmCandidateDetails (which does
candidate?._id ?? candidate?.id) unchanged but now safe because the guard
enforces the Convex Id shape.
In `@components/images/lightbox/lightbox-media-display.tsx`:
- Around line 159-164: The wrapper div (role="presentation",
className="relative") currently stops all key events via onKeyDown={(e) =>
e.stopPropagation()}, which prevents the outer lightbox from receiving Escape
for dismissal; change the onKeyDown handler on that wrapper (and the other
similar handlers in the same file) to only stop propagation for keys other than
Escape (e.key === 'Escape' and also accept the legacy 'Esc' if you check it) so
Escape is allowed to bubble to the lightbox container while other keys remain
isolated.
In `@convex/batchGeneration.ts`:
- Around line 603-613: Update the claim logic so paused jobs cannot start new
pre-dispatched chunks: remove "paused" from the allowed statuses in the
canClaimPending condition (keep checks on batchItem.status === "pending",
batchItem.dispatchStatus === "dispatched" and batchJob.status being only
"pending" or "processing"), and ensure the reclaim path (canReclaimProcessing
using args.workerAttempt > currentAttempt) still works; additionally add a short
handler such that when batchJob.status === "paused" and batchItem.dispatchStatus
=== "dispatched" you either set that batchItem back to pending (release it) or
explicitly block claims—use the existing symbols canClaimPending,
canReclaimProcessing, batchItem.status, batchItem.dispatchStatus,
batchJob.status, and args.workerAttempt to locate and implement the change.
In `@convex/cloudflareDispatch.ts`:
- Around line 257-274: The catch block currently always calls
internal.batchGeneration.recordBatchItemDispatchFailure which leaves items in
dispatchStatus "pending" and never marks them terminal when retries are
exhausted; modify the catch so that if dispatchResult.dispatchAttempts >=
WORKER_RETRY_MAX_ATTEMPTS you call a terminal mutation (e.g.
internal.batchGeneration.recordBatchItemDispatchTerminalFailure or similar) to
mark the item as failed/terminal (set dispatchStatus to "failed" and increment
failedCount) instead of scheduling another retry, then throw the error;
otherwise keep the existing retry scheduling using calculateDispatchDelayMs and
internal.cloudflareDispatch.dispatchBatchItem.
In `@workers/bloomstudio-worker/src/moderation.ts`:
- Around line 232-260: Change normalizeLevel and parseVisionResponse to fail
loudly on malformed/missing fields instead of returning benign defaults: in
normalizeLevel(value, allowed) throw a descriptive error if value is not a
string or its lowercased form is not one of allowed (do not return allowed[0]);
in parseVisionResponse(text) validate that parsed JSON contains required keys
(nudity, sexual_content, violence, confidence, reasoning) and that each has an
expected type/value (nudity must be "full" or "none"; sexual_content must be one
of ["none","suggestive","explicit"]; violence one of ["none","mild","graphic"];
confidence a number between 0 and 1; reasoning a string) and throw if any check
fails (use getObjectProperty to locate values and include which key failed in
the error) so callers can retry/escalate instead of silently treating malformed
provider responses as safe.
---
Outside diff comments:
In `@convex/batchGeneration.ts`:
- Around line 123-169: The handler in convex/batchGeneration.ts currently
persists the raw Pollinations API key into the batchJobs document via
ctx.db.insert("batchJobs", { apiKey: args.apiKey, ... }), which exposes
plaintext credentials; instead, store a durable secret reference or an encrypted
value: save the API key into a secure secrets store (or encrypt it server-side)
and insert only the secretId/encryptedBlob reference on the batchJobs document
(e.g., secretRef or encryptedApiKeyId), remove the plain apiKey field, and
update the worker/processor logic that reads batchJobs to resolve/decrypt that
secret server-side before using it (ensure resolution happens in the worker code
path, not in client-accessible continuations).
In `@hooks/use-media-player.test.ts`:
- Around line 5-8: The tests currently only call vi.clearAllMocks() in
beforeEach but do not clear the global readyMediaUrls cache; update the
beforeEach in use-media-player.test.ts to also call resetReadyMediaUrlsForTest()
so each test starts with a fresh readyMediaUrls Set and avoids cross-test
leakage; keep vi.clearAllMocks() and call resetReadyMediaUrlsForTest() (the
helper that clears the global Set) before each test.
---
Nitpick comments:
In `@app/api/images/delete/route.test.ts`:
- Around line 53-67: The test for the "returns 400 when r2Key is missing" case
should also assert that the destructive side-effect is not invoked: after
calling POST(buildRequest(...)) and checking the 400/MISSING_KEY payload, add an
assertion that deleteImage (the function that performs the delete) was never
called (e.g., expect(deleteImage).not.toHaveBeenCalled()) and ensure mocks like
mockAuth remain as used; locate the assertion near the existing expects in the
"returns 400 when r2Key is missing" test to keep the validation fail-closed.
- Around line 27-100: Add a new test case in the describe("/api/images/delete")
suite that simulates an unauthenticated request by having mockAuth resolve to
null/undefined (e.g., mockAuth.mockResolvedValue(null)), call
POST(buildRequest(...)) with a valid payload (an r2Key or empty body as
appropriate), then assert the response is an authentication failure (expect
response.status to be 401) and/or the expected error JSON for unauthenticated
access, and assert mockDeleteImage was not called; place the test near the other
cases and reference mockAuth, POST, buildRequest, and mockDeleteImage so the
route's auth guard is covered.
In `@coderabbit-review-comments.md`:
- Around line 1-163: The review transcript file coderabbit-review-comments.md is
transient and should not be committed to source; remove this file from the
repository (or move it to a non-versioned location such as a personal notes
folder or a dedicated `.reviews/` archive that is excluded by .gitignore) and
commit the deletion, then update the PR to reflect the removal (use a succinct
commit message like "remove transient review transcript"); if you must preserve
the content for internal tracking, export it to an external notes system and
ensure any persisted copy is excluded from version control by adding the chosen
path to .gitignore.
In `@components/images/image-lightbox.test.tsx`:
- Around line 379-384: The mock currently erases the imported module's type by
casting to Record<string, unknown>; instead preserve the actual module shape by
removing the cast and applying a TypeScript 'satisfies' check on the returned
object so the mock keeps the real module type: inside the vi.mock callback (the
importOriginal usage), keep const actual = await importOriginal(); then return {
...actual, getModelDisplayName: (model: string) => model } satisfies typeof
actual so the mock respects the original module shape and avoids an 'as' escape
hatch while keeping references to importOriginal, actual, and
getModelDisplayName.
In `@convex/contentAnalysis.ts`:
- Around line 619-623: Remove the redundant null check after reading the first
image: since earlier code already returns when images.length === 0, the lines
"const image = images[0]" followed by "if (!image) { return }" should be
simplified by dropping the conditional return; keep the single assignment to
image (or inline usage) and rely on the prior existence check so functions/logic
using image (the const image = images[0] reference in contentAnalysis.ts)
proceed without the unnecessary guard.
In `@convex/lib/batchGenerationState.test.ts`:
- Around line 125-143: The first assertion uses a weak check
(.not.toBe("failed")) for getBatchStatusAfterItemSettlement; update the test
case that calls getBatchStatusAfterItemSettlement with completedCount: 90,
failedCount: 10, totalCount: 100, status: "processing" to assert the explicit
expected value "completed" (use .toBe("completed")) so both mixed-result
finished scenarios assert the same concrete outcome.
In `@convex/lib/cloudflareWorkerHttp.ts`:
- Around line 43-66: The handler currently accepts body: unknown even though
schema.safeParse(json) yields parsed.data of the schema type; make
createWorkerHttpAction generic (e.g., <T>) so schema is typed as z.ZodType<T>
(or similar) and the handler signature becomes (ctx: WorkerHttpContext, body: T)
=> Promise<unknown>, then use that generic type for parsed.data passed into
handler; update references to createWorkerHttpAction, schema, handler, and
parsed.data accordingly to preserve type safety throughout.
In `@hooks/queries/use-generate-image.ts`:
- Line 306: Replace the unsafe cast of entry.params to GeneratedImage["params"]
by narrowing the union at runtime: implement or use a type guard (e.g.,
isImageParams) that checks a discriminant or presence of image-specific fields
on entry.params (or use entry.type if available), then assign entryParams using
that narrowed branch so TypeScript infers ImageGenerationParams without `as`; if
a cast is absolutely unavoidable, add a concise comment explaining why the cast
is safe and reference the invariant used (e.g., entry.type === "image") and the
symbols entry.params, entryParams, GeneratedImage["params"],
ImageGenerationParams, VideoGenerationParams to make the rationale explicit.
In `@hooks/use-media-player.ts`:
- Line 5: The global Set readyMediaUrls can grow unbounded across a session;
either document this trade-off in the hook's comments (useMediaPlayer /
readyMediaUrls) or replace the Set with a bounded cache (e.g., an LRU with a max
size) to evict old entries — implement an LRU using a small Map-based helper or
an existing tiny LRU util, expose a MAX_READY_URLS constant, and update
add/check logic to drop the oldest entries when the cap is exceeded so memory
cannot grow without bound.
In `@workers/bloomstudio-worker/src/core.ts`:
- Around line 105-125: postConvexJsonWithRetry currently retries immediately on
failure; add an exponential backoff (with optional jitter) between attempts to
avoid hammering Convex or transient network flaps. Inside
postConvexJsonWithRetry (using the existing attempt loop and maxAttempts), after
catching an error and before the next attempt, await a delay computed like
baseDelayMs * 2 ** (attempt - 1) (optionally add small random jitter) using a
promise-based sleep (e.g., await new Promise(res => setTimeout(res, ms))); do
not delay after the final attempt and preserve lastError behavior.
In `@workers/bloomstudio-worker/src/secondary-assets.ts`:
- Around line 145-152: Add a short clarifying comment next to the
retryable-status check in the media transform probe branch so reviewers know 503
is intentionally treated as retryable; update the block around the if
(!response.ok) and the retryable calculation (where SecondaryAssetsError is
thrown for "media_transform_probe") to explicitly mention that response.status
>= 500 includes 503 (Cloudflare Media Transformations may return 503 under high
load), and keep the existing logic (response.status >= 500 || response.status
=== 408 || response.status === 429) unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5d505aed-8cff-4ac0-9d3e-9e07744ac102
⛔ Files ignored due to path filters (12)
- `temp/8cwmSrjuaVE5bduS3A-_-4.png` is excluded by `!**/*.png`
- `temp/Applestyle_product_ad_202601081739_q8poc.mp4` is excluded by `!**/*.mp4`
- `temp/High-Volume_Pro_Content_version_2(1).png` is excluded by `!**/*.png`
- `temp/Ratio_916_brandingcreative_202601081726_y1.mp4` is excluded by `!**/*.mp4`
- `temp/Slickly_designed_like_202601081740_ypmho.mp4` is excluded by `!**/*.mp4`
- `temp/Slickly_designed_like_202601081742_ijls7.mp4` is excluded by `!**/*.mp4`
- `temp/Visual__scene_202601081826_uq1ob.mp4` is excluded by `!**/*.mp4`
- `temp/Visual_slickly_designed_202601081802_tezjf.mp4` is excluded by `!**/*.mp4`
- `temp/Visual_slickly_designed_202601081816_t2lm4.mp4` is excluded by `!**/*.mp4`
- `temp/Visual_slickly_designed_202601081828_bjtay.mp4` is excluded by `!**/*.mp4`
- `temp/aaiNlKMr3effkQHI1yy_Y0.png` is excluded by `!**/*.png`
- `temp/pomelli_image(4).png` is excluded by `!**/*.png`
📒 Files selected for processing (71)
- `.github/workflows/release.yml`
- `app/api/images/delete-bulk/route.ts`
- `app/api/images/delete/route.test.ts`
- `app/api/images/delete/route.ts`
- `cloudflare-migration-plan.md`
- `coderabbit-review-comments.md`
- `components/github-star-button.tsx`
- `components/images/image-lightbox.test.tsx`
- `components/images/image-lightbox.tsx`
- `components/images/lightbox/lightbox-media-display.tsx`
- `components/studio/features/history/gallery-feature.tsx`
- `components/studio/features/history/gallery-view.tsx`
- `components/studio/gallery/lightbox-preload.ts`
- `components/studio/gallery/persistent-image-gallery.test.tsx`
- `components/studio/gallery/persistent-image-gallery.tsx`
- `components/studio/layout/studio-shell.test.tsx`
- `components/studio/layout/studio-shell.tsx`
- `components/ui/media-player.test.tsx`
- `convex/batchGeneration.ts`
- `convex/cloudflareDispatch.ts`
- `convex/cloudflareWorkerHttp.ts`
- `convex/contentAnalysis.test.ts`
- `convex/contentAnalysis.ts`
- `convex/generatedImages.ts`
- `convex/lib/batchGenerationState.test.ts`
- `convex/lib/cloudflareWorkerHttp.ts`
- `convex/lib/groq.test.ts`
- `convex/lib/groq.ts`
- `convex/lib/openrouter.ts`
- `convex/lib/promptInference.test.ts`
- `convex/lib/promptInference.ts`
- `convex/orphanCleanup.ts`
- `convex/singleGeneration.ts`
- `hooks/queries/use-generate-image.ts`
- `hooks/use-image-lightbox.ts`
- `hooks/use-media-player.test.ts`
- `hooks/use-media-player.ts`
- `lib/cloudflare-worker/retry.ts`
- `playground/test-grid.html`
- `stories/01-seo-shared-architecture.md`
- `stories/02-seo-solutions-pages.md`
- `stories/03-seo-support-pages.md`
- `stories/04-seo-deployment-configs.md`
- `stories/studio-layout.md`
- `todo/bandwidth-audit-analysis.md`
- `todo/convex-bandwidth-audit.md`
- `todo/generating-state-design.md`
- `todo/living-strip-story.md`
- `todo/mobile-experience-spec.md`
- `todo/music-generation/suno-discovery.md`
- `todo/next-cache-implementation-plan.md`
- `todo/nsfw-sensitive-content-detection.md`
- `todo/paused-batch-cleanup.md`
- `todo/product-hunt-launch-plan.md`
- `todo/public-feed-phase2-story1-landing-integration.md`
- `todo/public-feed-phase2-story2-image-quality.md`
- `todo/public-feed-unauthenticated-access.md`
- `todo/reference-next-cache-and-actions-implementation.md`
- `todo/reference-next-cache-research.md`
- `todo/studio-optimizations.md`
- `todo/video-preview-refactor-plan.md`
- `untested-files.md`
- `workers/bloomstudio-worker/src/core.ts`
- `workers/bloomstudio-worker/src/generation.ts`
- `workers/bloomstudio-worker/src/http.ts`
- `workers/bloomstudio-worker/src/index.ts`
- `workers/bloomstudio-worker/src/moderation.ts`
- `workers/bloomstudio-worker/src/queue.ts`
- `workers/bloomstudio-worker/src/secondary-assets.ts`
- `workers/bloomstudio-worker/src/types.ts`
- `wrangler.jsonc`
💤 Files with no reviewable changes (24)
- todo/reference-next-cache-and-actions-implementation.md
- stories/03-seo-support-pages.md
- todo/public-feed-unauthenticated-access.md
- todo/studio-optimizations.md
- todo/bandwidth-audit-analysis.md
- todo/next-cache-implementation-plan.md
- todo/convex-bandwidth-audit.md
- todo/public-feed-phase2-story1-landing-integration.md
- todo/paused-batch-cleanup.md
- todo/living-strip-story.md
- stories/04-seo-deployment-configs.md
- todo/mobile-experience-spec.md
- stories/01-seo-shared-architecture.md
- todo/product-hunt-launch-plan.md
- todo/video-preview-refactor-plan.md
- stories/02-seo-solutions-pages.md
- todo/music-generation/suno-discovery.md
- playground/test-grid.html
- todo/nsfw-sensitive-content-detection.md
- todo/reference-next-cache-research.md
- untested-files.md
- todo/generating-state-design.md
- stories/studio-layout.md
- todo/public-feed-phase2-story2-image-quality.md
✅ Files skipped from review due to trivial changes (1)
- components/studio/gallery/lightbox-preload.ts
🚧 Files skipped from review as they are similar to previous changes (10)
- convex/lib/promptInference.test.ts
- convex/orphanCleanup.ts
- convex/contentAnalysis.test.ts
- app/api/images/delete/route.ts
- convex/lib/promptInference.ts
- app/api/images/delete-bulk/route.ts
- components/studio/layout/studio-shell.test.tsx
- components/github-star-button.tsx
- workers/bloomstudio-worker/src/index.ts
- .github/workflows/release.yml
```typescript
function isGeneratedImageId(value: string | undefined): value is Id<"generatedImages"> {
  return typeof value === "string" && value.length > 0;
}
```
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify whether `LightboxImage.id` is always the same logical type as `_id`
# before using it in `api.generatedImages.getById`.
fd 'use-image-lightbox.ts' --type f -x sed -n '1,220p' {}
fd 'pollinations.schema.ts' --type f -x rg -n -C3 'type GeneratedImage|interface GeneratedImage|\bid:|\b_id:'
rg -n -C3 'nextImage|previousImage|_id:|id:' components hooks app -g '!**/*.test.*'Repository: Simplereally/bloomstudio
Length of output: 50380
🏁 Script executed:
cat -n components/images/image-lightbox.tsx | sed -n '500,530p'Repository: Simplereally/bloomstudio
Length of output: 1011
🏁 Script executed:
rg -n "LightboxImage|setLightboxImage" components -A 5 -B 2 | grep -E "(id:|_id:|LightboxImage)" | head -40
Repository: Simplereally/bloomstudio
Length of output: 3831
🏁 Script executed:
rg -n "id:" components/studio/layout/studio-shell.tsx | head -20
Repository: Simplereally/bloomstudio
Length of output: 290
🏁 Script executed:
rg -n "const.*LightboxImage|nextVersion.*=" components/images/image-lightbox.tsx -A 10 | grep -E "id:|_id:"
Repository: Simplereally/bloomstudio
Length of output: 50
🏁 Script executed:
sed -n '220,240p' components/images/image-lightbox.tsx
Repository: Simplereally/bloomstudio
Length of output: 694
🏁 Script executed:
sed -n '180,230p' components/images/image-lightbox.tsx
Repository: Simplereally/bloomstudio
Length of output: 1474
🏁 Script executed:
rg -n "id:" components -t tsx -t ts | rg "LightboxImage|GeneratedImage" -B 2 -A 2 | head -60
Repository: Simplereally/bloomstudio
Length of output: 95
🏁 Script executed:
rg -n "id: newImage.id|_id: newImage._id" components/images/image-lightbox.tsx
Repository: Simplereally/bloomstudio
Length of output: 121
🏁 Script executed:
rg -n "GeneratedImage\|GeneratedImageSchema" convex -A 8 -B 2 | grep -E "id:|_id:|generatedImages" | head -40
Repository: Simplereally/bloomstudio
Length of output: 50
🏁 Script executed:
cat -n convex/pollinations.schema.ts | sed -n '72,92p'
Repository: Simplereally/bloomstudio
Length of output: 129
🏁 Script executed:
fd "schema" convex --type f | head -5
Repository: Simplereally/bloomstudio
Length of output: 84
🏁 Script executed:
cat -n convex/schema.ts | sed -n '70,100p'
Repository: Simplereally/bloomstudio
Length of output: 1380
🏁 Script executed:
rg -n "GeneratedImage|export const generatedImages" convex/schema.ts -A 25 | head -80
Repository: Simplereally/bloomstudio
Length of output: 50
🏁 Script executed:
rg -n "generatedImages" convex/schema.ts -B 5 -A 15
Repository: Simplereally/bloomstudio
Length of output: 5766
🏁 Script executed:
rg -n "function isGeneratedImages" components -B 2 -A 3
Repository: Simplereally/bloomstudio
Length of output: 515
Strengthen the type guard for isGeneratedImageId().
The predicate at lines 53–55 accepts any non-empty string, which is overly permissive for a Convex document ID. While the fallback in warmCandidateDetails() (lines 507–521) uses candidate?._id ?? candidate?.id defensively, the type guard should validate the actual shape of a Convex Id<"generatedImages"> rather than just checking for a non-empty string. Consider a stricter guard (e.g., checking for expected ID format) to prevent accidental misuse of non-Convex identifiers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/images/image-lightbox.tsx` around lines 53 - 55, The current
isGeneratedImageId guard accepts any non-empty string; tighten it to validate
the runtime shape of a Convex Id for the generatedImages collection by checking
the collection prefix and expected format (e.g., ensure the value is a string
that begins with the "generatedImages" collection identifier/prefix and contains
only allowed ID characters such as alphanumerics, dashes/underscores, and an
expected separator like ":"), and reject values that don't match that pattern;
update the isGeneratedImageId function accordingly and keep callers like
warmCandidateDetails (which does candidate?._id ?? candidate?.id) unchanged but
now safe because the guard enforces the Convex Id shape.
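A stricter guard along the lines the comment suggests could look like this sketch. The ID pattern and the locally branded `GeneratedImageId` type are assumptions: Convex treats document IDs as opaque strings, so confirm the character set and length against IDs your deployment actually produces before relying on this.

```typescript
// Hypothetical stricter guard. CONVEX_ID_PATTERN is an assumption --
// Convex document IDs are opaque, so verify the pattern against real
// IDs from the deployment before enforcing it.
type GeneratedImageId = string & { readonly __table: "generatedImages" };

const CONVEX_ID_PATTERN = /^[a-z0-9]{20,40}$/;

function isGeneratedImageId(value: string | undefined): value is GeneratedImageId {
  return typeof value === "string" && CONVEX_ID_PATTERN.test(value);
}
```

This keeps callers like `warmCandidateDetails` unchanged while rejecting obvious non-Convex identifiers (empty strings, URLs, slugs with punctuation).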
Actionable comments posted: 8
🧹 Nitpick comments (1)
workers/bloomstudio-worker/src/queue.ts (1)
17-31: Replace these dispatcher casts with a real type guard. The five `as QueueMessage<Extract<...>>` assertions sidestep the queue contract on the hot path. A small predicate/helper keeps the switch exhaustive without dropping type safety. As per coding guidelines, "Avoid `as` casts in TypeScript; prefer `satisfies`, generics, and runtime validation."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workers/bloomstudio-worker/src/queue.ts` around lines 17 - 31, Create a proper TypeScript type guard instead of using the five "as" casts: add a helper like isQueueMessageOfType<T extends GenerationQueueMessage["jobType"]>(msg: unknown, jobType: T): msg is QueueMessage<Extract<GenerationQueueMessage, { jobType: T }>> that checks runtime shape (msg && typeof msg === "object" && "body" in msg && msg.body?.jobType === jobType) and use it in each switch case to narrow the type before calling the handlers (handleSingleGenerationMessage, handleBatchItemMessage, handleSecondaryAssetsMessage, handlePromptInferenceMessage, handleVisionAnalysisMessage); remove the casts, and keep the switch exhaustive (add a default that handles unexpected types or asserts never) so the compiler and runtime both enforce the queue contract.
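A minimal sketch of that suggestion, with simplified stand-in types for the worker's real `QueueMessage`/`GenerationQueueMessage` shapes (the actual definitions presumably live in `workers/bloomstudio-worker/src/types.ts` and are richer):

```typescript
// Stand-in shapes; the worker's actual types carry more fields.
type GenerationQueueMessage =
  | { jobType: "single-generation"; generationId: string }
  | { jobType: "batch-item"; batchItemId: string };

interface QueueMessage<T> {
  body: T;
  ack(): void;
}

// Narrows a message to one jobType at runtime, replacing the `as` casts.
function isQueueMessageOfType<T extends GenerationQueueMessage["jobType"]>(
  msg: QueueMessage<GenerationQueueMessage>,
  jobType: T,
): msg is QueueMessage<Extract<GenerationQueueMessage, { jobType: T }>> {
  return typeof msg === "object" && msg !== null && msg.body?.jobType === jobType;
}

const sample: QueueMessage<GenerationQueueMessage> = {
  body: { jobType: "single-generation", generationId: "gen_1" },
  ack() {},
};
```

Inside the dispatcher switch, each case then calls its handler only after the guard narrows the message, and a `default` branch can assert `never` to keep the switch exhaustive.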
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@components/images/image-lightbox.tsx`:
- Around line 614-625: The isTypingTarget guard currently only blocks text
inputs; expand it so global arrow shortcuts also ignore focused media and
interactive controls: detect and return true for HTMLMediaElement instances
(e.g., <video>, <audio>), HTMLButtonElement, elements with
role="button"/"slider"/"tab", elements with a non-negative tabindex or
contentEditable (already covered), and any element matching controls selectors
(e.g., video controls, audio controls, .media-controls). Update the
isTypingTarget function and any equivalent checks used for arrow navigation
(e.g., the handlers referenced near lines 628-641) to use this enhanced
predicate so ArrowLeft/ArrowRight are skipped when focus is on media or
interactive controls.
- Around line 94-100: The display URL is being swapped when details arrive
causing the preview to reload; keep the preview stable by only promoting the
fetched value into the "original" field and not replacing the display URL—so set
resolvedOriginalUrl from resolvedDetails?.url ?? image.originalUrl ?? image.url,
set resolvedContentType from resolvedDetails?.contentType ?? image.contentType,
but ensure resolvedDisplayUrl remains image.url (or image.originalUrl when it
already exists) instead of using resolvedOriginalUrl; update the logic around
resolvedDisplayUrl (and references used by LightboxMediaDisplay) so the fetched
URL only populates originalUrl and does not replace the currently rendered
preview URL.
- Around line 581-586: Preload currently uses new window.Image() with
candidateUrl and calls markLightboxImageUrlReady immediately when that preload
finishes, but LightboxMediaDisplay may render an optimized next/image path
(/_next/image?url=...) so the preload completion can mismatch the actual render
URL; change the logic to compute the actual render URL used by
LightboxMediaDisplay (the same transformation used when rendering next/image
with unoptimized=false or the image URL resolver you use for optimized images),
then only call markLightboxImageUrlReady when the preloaded img.src equals that
computed render URL (or otherwise normalize both URLs and compare their final
paths), falling back to the existing behavior if you cannot derive an optimized
URL; ensure you reference markLightboxImageUrlReady, candidateUrl and the
LightboxMediaDisplay rendering path in your change.
In `@convex/batchGeneration.ts`:
- Around line 678-690: The pause branch that flips batchItem.dispatchStatus back
to "pending" must also release the parent in-flight slot to avoid leaking
inFlightCount; modify the block handling batchItem._id (inside the if checking
batchJob.status === "paused" && batchItem.status === "pending" &&
batchItem.dispatchStatus === "dispatched") so after patching batchItem you also
decrement the corresponding batchJob.inFlightCount (the same job referenced by
batchJob._id) via ctx.db.patch/update and update timestamps; this mirrors the
inverse of markBatchItemDispatched() which increments inFlightCount and ensures
the batch returns to zero when all slots are released.
In `@workers/bloomstudio-worker/src/generation.ts`:
- Around line 256-274: The finally block currently deletes the R2 object
whenever completeResult?.completed !== true, which can remove an object if the
/workers/single-generation/complete response was lost/ambiguous; change this to
only call deleteR2Object(uploadResult.r2Key) when the Convex response explicitly
indicates a definitive negative acknowledgement (for example
completeResult?.completed === false AND an explicit failure status/code field
such as completeResult?.status === "failed" or completeResult?.acknowledged ===
false); otherwise do not delete and let the orphan sweeper handle it. Update the
postConvexJsonWithRetry handling around completeResult, adjust the conditional
before calling deleteR2Object in the block using completeResult and
uploadResult, and make the same defensive change in the other similar block that
uses postConvexJsonWithRetry/completeResult.
- Around line 295-303: When continuation.canContinue is false we currently just
ack the message and leak a claimed item; instead call the batch-item
finish/decrement flow used elsewhere so the parent inFlightCount is decremented
and the item is settled. Locate where ContinueBatchItemResponse (variable
continuation) is checked in generation.ts and, when !continuation.canContinue
(or missing ownerId/generationParams/apiKey), invoke the same finish/decrement
logic used by finishBatchItemForWorker / decrementInFlightCount (or POST to the
existing "/workers/batch-item/finish" endpoint) to mark the item
finished/cancelled before calling message.ack(); do the same fix for the similar
branch around lines 333-341.
- Around line 206-214: The current logic acknowledges the message even when
continuation.canContinue is true but continuation.apiKey is undefined, which
silently leaves the generation stuck; update the check around
postConvexJsonWithRetry/ContinueGenerationResponse so you only ack when either
continuation.canContinue is false (so nothing to do) or when all required fields
including continuation.apiKey, continuation.ownerId and
continuation.generationParams are present; if canContinue is true but apiKey is
missing, do not call message.ack() — instead return without acknowledging or
raise a retryable error so the message will be retried; locate this logic around
the continuation variable returned from postConvexJsonWithRetry and adjust the
conditional that currently calls message.ack().
In `@workers/bloomstudio-worker/src/queue.ts`:
- Around line 40-55: The current max-attempt branch just calls message.ack(),
which can leave Convex rows stuck in processing if the job was claimed by
handleSingleGenerationMessage or handleBatchItemMessage; update the branch so
that before ack() you explicitly settle the claimed job using the same
failure/release path those handlers use (e.g., invoke the generation fail or
release/complete mutation used inside handleSingleGenerationMessage and
handleBatchItemMessage, or call the shared cleanup function they call on error),
then ack the message; ensure you reference and call the job-settling function(s)
used by handleSingleGenerationMessage/handleBatchItemMessage so the Convex row
is marked failed or released rather than left in processing.
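The R2 cleanup guard described in the `generation.ts` comment above (delete only on a definitive negative acknowledgement) can be isolated as a small predicate. Field names like `status === "failed"` mirror the examples in the comment and are assumptions, not the worker's confirmed response schema:

```typescript
// Assumed response shape; the real completion payload may differ.
interface CompleteResult {
  completed?: boolean;
  status?: string;
}

// Delete the uploaded R2 object only when Convex explicitly rejected the
// completion. Lost or ambiguous responses keep the object in place and
// leave reconciliation to the orphan sweeper.
function shouldDeleteR2Object(result: CompleteResult | undefined): boolean {
  return result?.completed === false && result?.status === "failed";
}
```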
---
Nitpick comments:
In `@workers/bloomstudio-worker/src/queue.ts`:
- Around line 17-31: Create a proper TypeScript type guard instead of using the
five "as" casts: add a helper like isQueueMessageOfType<T extends
GenerationQueueMessage["jobType"]>(msg: unknown, jobType: T): msg is
QueueMessage<Extract<GenerationQueueMessage, { jobType: T }>> that checks
runtime shape (msg && typeof msg === "object" && "body" in msg &&
msg.body?.jobType === jobType) and use it in each switch case to narrow the type
before calling the handlers (handleSingleGenerationMessage,
handleBatchItemMessage, handleSecondaryAssetsMessage,
handlePromptInferenceMessage, handleVisionAnalysisMessage); remove the casts,
and keep the switch exhaustive (add a default that handles unexpected types or
asserts never) so the compiler and runtime both enforce the queue contract.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ed8cfaa6-158b-4824-9df6-342c43bcc7bb
📒 Files selected for processing (10)
- components/images/image-lightbox.tsx
- components/images/lightbox/lightbox-media-display.tsx
- convex/batchGeneration.ts
- convex/cloudflareDispatch.ts
- workers/bloomstudio-worker/src/generation.ts
- workers/bloomstudio-worker/src/index.ts
- workers/bloomstudio-worker/src/moderation.ts
- workers/bloomstudio-worker/src/queue.ts
- workers/bloomstudio-worker/src/secondary-assets.ts
- workers/bloomstudio-worker/src/types.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- workers/bloomstudio-worker/src/index.ts
- workers/bloomstudio-worker/src/moderation.ts
const img = new window.Image();
img.decoding = "async";
img.onload = () => {
  markLightboxImageUrlReady(candidateUrl);
};
img.src = candidateUrl;
🧩 Analysis chain
🌐 Web query:
In Next.js 15, when `next/image` receives a non-HTTP `src` and `unoptimized={false}`, does the browser fetch the original `src` directly or an optimized `/_next/image?...` URL?
💡 Result:
For data: and blob: src values, Next.js 15 forces unoptimized = true internally, even if you pass unoptimized={false}. In that case, the rendered <img> uses the original src directly, so the browser does not request an optimized /_next/image?... URL. (raw.githubusercontent.com)
(Only “normal” URLs—e.g. /foo.png or https://...—use the optimization proxy and become /_next/image?... when unoptimized is not enabled.) (nextjs.org)
Citations:
- 1: https://raw.githubusercontent.com/vercel/next.js/canary/packages/next/src/shared/lib/get-img-props.ts
- 2: https://nextjs.org/docs/app/api-reference/components/image
🏁 Script executed:
# Find the files mentioned in the review
fd -t f "image-lightbox.tsx" -o -t f "lightbox-media-display.tsx"
Repository: Simplereally/bloomstudio
Length of output: 184
🏁 Script executed:
# Find the files using git ls-files
git ls-files | grep -E "(image-lightbox|lightbox-media-display)"
Repository: Simplereally/bloomstudio
Length of output: 261
🏁 Script executed:
# Read the image-lightbox.tsx file around lines 581-586
wc -l components/images/image-lightbox.tsx
Repository: Simplereally/bloomstudio
Length of output: 108
🏁 Script executed:
# Read around lines 581-586 with context
sed -n '570,600p' components/images/image-lightbox.tsx
Repository: Simplereally/bloomstudio
Length of output: 749
🏁 Script executed:
# Get more context to understand the warmCandidateMedia function and candidateUrl sources
sed -n '550,600p' components/images/image-lightbox.tsx
Repository: Simplereally/bloomstudio
Length of output: 1255
🏁 Script executed:
# Find where markLightboxImageUrlReady is defined and used
rg "markLightboxImageUrlReady" -A 5 -B 5 components/images/image-lightbox.tsx | head -50
Repository: Simplereally/bloomstudio
Length of output: 660
🏁 Script executed:
# Find the definition of markLightboxImageUrlReady
rg "export.*markLightboxImageUrlReady|function markLightboxImageUrlReady|const markLightboxImageUrlReady" -A 10
Repository: Simplereally/bloomstudio
Length of output: 856
🏁 Script executed:
# Read the entire lightbox-media-display.tsx file to understand the usage
wc -l components/images/lightbox/lightbox-media-display.tsx
Repository: Simplereally/bloomstudio
Length of output: 125
🏁 Script executed:
# Read the entire lightbox-media-display.tsx file
cat -n components/images/lightbox/lightbox-media-display.tsx
Repository: Simplereally/bloomstudio
Length of output: 11194
Only mark images as ready when the preload matches the render path.
The plain Image() preload on line 584 loads candidateUrl directly, but LightboxMediaDisplay renders non-HTTP URLs through next/image with unoptimized=false, which requests /_next/image?url=... instead. This mismatch causes the loader to hide when the preload completes, before the actual optimized image request finishes, leaving a blank frame during navigation.
Guard the ready state until paths align
const img = new window.Image();
img.decoding = "async";
img.onload = () => {
- markLightboxImageUrlReady(candidateUrl);
+ if (candidateUrl.startsWith("http")) {
+ markLightboxImageUrlReady(candidateUrl);
+ }
};
img.src = candidateUrl;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const img = new window.Image();
img.decoding = "async";
img.onload = () => {
  if (candidateUrl.startsWith("http")) {
    markLightboxImageUrlReady(candidateUrl);
  }
};
img.src = candidateUrl;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/images/image-lightbox.tsx` around lines 581 - 586, Preload
currently uses new window.Image() with candidateUrl and calls
markLightboxImageUrlReady immediately when that preload finishes, but
LightboxMediaDisplay may render an optimized next/image path
(/_next/image?url=...) so the preload completion can mismatch the actual render
URL; change the logic to compute the actual render URL used by
LightboxMediaDisplay (the same transformation used when rendering next/image
with unoptimized=false or the image URL resolver you use for optimized images),
then only call markLightboxImageUrlReady when the preloaded img.src equals that
computed render URL (or otherwise normalize both URLs and compare their final
paths), falling back to the existing behavior if you cannot derive an optimized
URL; ensure you reference markLightboxImageUrlReady, candidateUrl and the
LightboxMediaDisplay rendering path in your change.
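One way to approach that fix is to derive the URL the optimized `next/image` would actually request and compare the preload against it. The `/_next/image?url=&w=&q=` query shape matches the default Next.js loader, but the width and quality values below are illustrative assumptions, not values read from `LightboxMediaDisplay`:

```typescript
// Sketch: compute the URL an optimized <Image> would fetch so a preload
// can be matched against the real render path. Width/quality defaults
// here are placeholder assumptions.
function expectedRenderUrl(src: string, width: number, quality = 75): string {
  // data:/blob: sources are forced unoptimized by Next.js and load as-is.
  if (src.startsWith("data:") || src.startsWith("blob:")) {
    return src;
  }
  const params = new URLSearchParams({ url: src, w: String(width), q: String(quality) });
  return `/_next/image?${params.toString()}`;
}
```

`markLightboxImageUrlReady` would then fire only when the preloaded `img.src` equals `expectedRenderUrl(candidateUrl, …)` for the size the lightbox renders, falling back to the current behavior when no optimized URL can be derived.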
if (!canClaimPending && !canReclaimProcessing) {
  // When the job is paused and the item was pre-dispatched but not
  // yet claimed, release it back to pending so it can be
  // re-dispatched when the job resumes.
  if (
    batchJob.status === "paused" &&
    batchItem.status === "pending" &&
    batchItem.dispatchStatus === "dispatched"
  ) {
    await ctx.db.patch(batchItem._id, {
      dispatchStatus: "pending",
      updatedAt: Date.now(),
    })
Release the parent in-flight slot when pause undoes a pre-dispatch.
markBatchItemDispatched() already increments batchJobs.inFlightCount. This pause branch only flips the item back to dispatchStatus: "pending", so each pre-dispatched item released during pause leaks one in-flight count and the batch never returns to zero after resume.
🩹 Proposed fix
if (
batchJob.status === "paused" &&
batchItem.status === "pending" &&
batchItem.dispatchStatus === "dispatched"
) {
+ const now = Date.now()
await ctx.db.patch(batchItem._id, {
dispatchStatus: "pending",
- updatedAt: Date.now(),
+ updatedAt: now,
})
+ await ctx.db.patch(args.batchJobId, {
+ inFlightCount: Math.max(0, (batchJob.inFlightCount ?? 0) - 1),
+ updatedAt: now,
+ })
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@convex/batchGeneration.ts` around lines 678 - 690, The pause branch that
flips batchItem.dispatchStatus back to "pending" must also release the parent
in-flight slot to avoid leaking inFlightCount; modify the block handling
batchItem._id (inside the if checking batchJob.status === "paused" &&
batchItem.status === "pending" && batchItem.dispatchStatus === "dispatched") so
after patching batchItem you also decrement the corresponding
batchJob.inFlightCount (the same job referenced by batchJob._id) via
ctx.db.patch/update and update timestamps; this mirrors the inverse of
markBatchItemDispatched() which increments inFlightCount and ensures the batch
returns to zero when all slots are released.
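The accounting side of that fix can be isolated as a pure helper. Field names mirror the snippet above, and the optional `inFlightCount` allows for rows written before the counter existed:

```typescript
// Sketch of the release-side accounting: the inverse of the
// dispatch-side increment, clamped so a double release can never
// drive the counter negative.
interface BatchJobCounters {
  inFlightCount?: number;
}

function releasedInFlightCount(job: BatchJobCounters): number {
  return Math.max(0, (job.inFlightCount ?? 0) - 1);
}
```

The pause branch would then patch the parent job with `inFlightCount: releasedInFlightCount(batchJob)` alongside the item's `dispatchStatus` reset.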
const continuation = await postConvexJsonWithRetry<ContinueGenerationResponse>(env, "/workers/single-generation/continue", {
  generationId: message.body.generationId,
  claimToken,
})

if (!continuation.canContinue || !continuation.ownerId || !continuation.generationParams || !continuation.apiKey) {
  message.ack()
  return
}
Don't acknowledge a claimed generation when /continue only failed to supply the API key.
convex/singleGeneration.ts:660-710 explicitly catches decrypt/user lookup failures and can return { canContinue: true, apiKey: undefined }. These branches treat that as success and ack() the message, which silently strands the generation in processing instead of retrying or failing it.
Also applies to: 242-250
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workers/bloomstudio-worker/src/generation.ts` around lines 206 - 214, The
current logic acknowledges the message even when continuation.canContinue is
true but continuation.apiKey is undefined, which silently leaves the generation
stuck; update the check around
postConvexJsonWithRetry/ContinueGenerationResponse so you only ack when either
continuation.canContinue is false (so nothing to do) or when all required fields
including continuation.apiKey, continuation.ownerId and
continuation.generationParams are present; if canContinue is true but apiKey is
missing, do not call message.ack() — instead return without acknowledging or
raise a retryable error so the message will be retried; locate this logic around
the continuation variable returned from postConvexJsonWithRetry and adjust the
conditional that currently calls message.ack().
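The decision table the comment describes can be sketched as a small pure function. The response field names follow the snippet above; the real `ContinueGenerationResponse` type in the worker may carry more fields:

```typescript
interface ContinueGenerationResponse {
  canContinue: boolean;
  ownerId?: string;
  generationParams?: Record<string, unknown>;
  apiKey?: string;
}

// "ack": nothing to do, settle the message.
// "retry": claimed but continuation data is incomplete -- redeliver
//          instead of stranding the row in processing.
// "process": all required fields present, run the generation.
function continuationDisposition(c: ContinueGenerationResponse): "ack" | "retry" | "process" {
  if (!c.canContinue) return "ack";
  if (!c.ownerId || !c.generationParams || !c.apiKey) return "retry";
  return "process";
}
```

The worker would only call `message.ack()` on `"ack"`, throw a retryable error on `"retry"`, and proceed on `"process"`.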
- Expand lightbox keyboard navigation guard to skip arrow keys when focus is on media elements, buttons, or ARIA interactive controls
- Stabilize lightbox display URL so the preview doesn't reload when full image details arrive asynchronously
- Add releaseBatchItemFromWorker mutation to release paused batch items back to pending state and decrement inFlightCount
- Fix R2 object deletion to only occur on definitive negative acknowledgement, not on ambiguous/lost Convex responses
- Settle claimed generation/batch items when worker continuation is denied instead of silently acking and leaking Convex rows
- Throw retryable error when continuation response is missing required fields (apiKey, ownerId) rather than silently dropping the message
- Add best-effort settlement on final queue retry attempt using skipClaimTokenCheck to prevent stuck processing rows
- Replace unsafe type casts in queue handler with type-safe narrowing and exhaustive switch

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
What changed
- convex/singleGenerationProcessor.ts
- convex/batchProcessor.ts
- convex/secondaryAssetsProcessor.ts
- convex/lib/r2.ts
- convex/lib/videoThumbnail.ts
- convex/lib/videoPreview.ts
- `bunx convex deploy --cmd 'bun run build'`
Production readiness work completed
- `401 Unauthorized` on worker dispatch callbacks
- `CLOUDFLARE_API_TOKEN`
- `BLOOMSTUDIO_WORKER_SHARED_SECRET`
Validation
- `bunx tsc --noEmit`
- `bun run cf:types`
- `bun run test C:\Code\pixelstream\components\studio\gallery\gallery-thumbnail.test.tsx`
- `cloudflareDispatch:dispatchSecondaryAssets` after secret correction
Remaining follow-up
Summary by Sourcery
Migrate generation, moderation, and video secondary-asset processing from Convex Node actions to a Cloudflare worker plane while preserving Convex as the control plane and wiring Cloudflare into the production release path.
New Features:
Bug Fixes:
Enhancements:
Build:
CI:
Documentation:
Tests:
Chores:
Summary by CodeRabbit
New Features
Improvements
Chores