fix: finalize convex optimization remediations #44
📝 Walkthrough

This pull request refactors the single-generation API (replacing `singleGeneration.dispatchGeneration` with `singleGenerationProcessor.processGeneration`).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ConvexAction as Convex Action
    participant Processor as Generation Processor
    participant Poll as Pollinations Fetch
    participant R2 as R2 Storage
    participant DB as Convex DB

    Client->>ConvexAction: call singleGenerationProcessor.processGeneration(generationId, apiKey)
    ConvexAction->>Processor: validate & start processing
    alt Dev mock enabled
        Processor->>Processor: generate placeholder image
    else Production
        Processor->>Poll: fetchPollinationsWithTimeout(url, apiKey)
        alt Success
            Poll-->>Processor: image bytes/response
        else Retryable failure
            Poll-->>Processor: error + retryKind
            Processor->>DB: adjustAdaptiveDelay(batchId, outcome="throttle"/"transient_error")
            Processor->>ConvexAction: schedule retry (attempt+1)
        else Permanent failure
            Poll-->>Processor: permanent error
            Processor->>DB: recordBatchItemResult(failure)
        end
    end
    Processor->>R2: upload image (on success)
    Processor->>DB: recordBatchItemResult(success), adjustAdaptiveDelay(success)
    Processor-->>Client: processing result / status
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d9220d2556
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
convex/batchGeneration.ts (1)
722-728: ⚠️ Potential issue | 🟠 Major

Bug: `resumeDecision.itemIndex` is computed but not used.

`getResumeBatchDecision` computes the correct `itemIndex` that accounts for already-settled items, but `resumeBatchJob` ignores it and uses `batchJob.currentIndex` instead:

```ts
if (batchJob.currentIndex < batchJob.totalCount) {
  await ctx.scheduler.runAfter(0, internal.batchProcessor.processBatchItem, {
    batchJobId: args.batchJobId,
    itemIndex: batchJob.currentIndex, // Should be resumeDecision.itemIndex
  })
}
```

This undermines the settled-item tracking and could lead to duplicate item processing on resume.
🐛 Suggested fix
```diff
-  // Schedule the next item for processing
-  if (batchJob.currentIndex < batchJob.totalCount) {
+  // Schedule the next item for processing using the decision's computed index
+  if (resumeDecision.itemIndex !== null) {
     await ctx.scheduler.runAfter(0, internal.batchProcessor.processBatchItem, {
       batchJobId: args.batchJobId,
-      itemIndex: batchJob.currentIndex,
+      itemIndex: resumeDecision.itemIndex,
     })
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/batchGeneration.ts` around lines 722 - 728, resumeBatchJob is scheduling the next item using batchJob.currentIndex instead of the adjusted index returned by getResumeBatchDecision; change the scheduled payload to use resumeDecision.itemIndex so settled items are skipped. Locate where resumeBatchJob calls ctx.scheduler.runAfter(..., internal.batchProcessor.processBatchItem, { batchJobId: args.batchJobId, itemIndex: batchJob.currentIndex }) and replace batchJob.currentIndex with the computed resumeDecision.itemIndex (from getResumeBatchDecision) and ensure the surrounding conditional uses resumeDecision.itemIndex when deciding to schedule the next item.
🧹 Nitpick comments (5)
convex-optimiziations.md (1)
9-9: Stale reference to removed API.

This line references `singleGeneration.dispatchGeneration`, which is being removed in this PR. Consider updating to reflect the new `singleGenerationProcessor.processGeneration` path, or note that this was the prior state being optimized.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex-optimiziations.md` at line 9, Update the stale API reference: replace mentions of singleGeneration.dispatchGeneration with the new singleGenerationProcessor.processGeneration symbol (or explicitly state that dispatchGeneration was the prior implementation being optimized away) so the doc reflects the current code paths; update the text around the Convex/ctx.runAction comment to reference singleGenerationProcessor.processGeneration and remove or rephrase the note about crossing runtimes accordingly.

hooks/queries/use-generate-image.test.tsx (1)
126-128: Minor indentation inconsistency.

The `singleGenerationProcessor` object has extra indentation compared to sibling properties like `singleGeneration` and `generatedImages`.

🔧 Suggested fix

```diff
   },
-    singleGenerationProcessor: {
-      processGeneration: "singleGenerationProcessor.processGeneration",
-    },
+  singleGenerationProcessor: {
+    processGeneration: "singleGenerationProcessor.processGeneration",
+  },
   generatedImages: {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hooks/queries/use-generate-image.test.tsx` around lines 126 - 128, The object key `singleGenerationProcessor` is over-indented compared to its sibling properties (`singleGeneration`, `generatedImages`); adjust the indentation of the `singleGenerationProcessor` block (and its nested `processGeneration: "singleGenerationProcessor.processGeneration"`) so it aligns with the other top-level properties to maintain consistent formatting.

convex/singleGenerationProcessor.ts (1)
172-461: Consider extracting helper functions for readability.

`runGenerationProcessor` is ~290 lines handling validation, API calls, cropping, upload, persistence, and error recovery. While the sequential flow is logical, extracting some phases into named helpers would improve readability and testability:
- Validation + cancellation checks
- Pollinations API call with retry scheduling
- Post-processing (crop + upload)
- Persistence and completion
This is optional given the function's clear linear structure, but would help as the codebase grows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/singleGenerationProcessor.ts` around lines 172 - 461, The runGenerationProcessor function is very large and should be split into clear helper functions to improve readability and testability: extract the validation/ownership/cancellation logic into a helper (e.g., validateAndClaimGeneration using runGenerationProcessor, getGeneration, isCancelled, claimPendingGeneration), extract the Pollinations request + retry scheduling into a helper (e.g., callPollinationsWithRetry using buildPollinationsUrl, fetchPollinationsWithTimeout, and scheduler.runAfter), extract post-processing (crop + prepare upload buffer) into a helper (e.g., processImageBuffer using shouldCropDirtberry, cropDirtberryImageBuffer), and extract persistence/completion into a helper (e.g., persistGenerationResult using uploadMediaWithThumbnail, storeGeneratedImage, updateGenerationStatus); move the corresponding try/catch and cancellation checks into these helpers and call them sequentially from runGenerationProcessor so the top-level function becomes a concise orchestrator.

convex/lib/batchGenerationState.test.ts (1)
39-53: Test coverage gap: sparse settled indexes not covered.

This test only verifies the case where `settledItemIndexes` are consecutive from 0 (`[0, 1, 2, 3, 4]`). However, the implementation in `getResumeBatchDecision` only increments by 1 when the current index is settled; it does not scan forward through multiple consecutive settled items.

For sparse settled sets (e.g., `settledItemIndexes = [0, 1, 5]` with `currentIndex = 0`), the function would return `itemIndex = 1`, but index 1 is already settled. This could lead to duplicate scheduling.

Consider adding a test case with non-consecutive settled indexes to verify the implementation handles or rejects this scenario correctly.
💡 Suggested additional test case
```ts
it("handles sparse settled indexes correctly", () => {
  // If indexes 0 and 2 are settled but 1 is not, should return 1
  expect(
    getResumeBatchDecision({
      currentIndex: 0,
      totalCount: 10,
      inFlightCount: 0,
      settledItemIndexes: [0, 2],
    })
  ).toEqual({
    canSchedule: true,
    itemIndex: 1, // or should scan to find first unsettled?
    nextInFlightCount: 1,
    reason: null,
  })
})
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/lib/batchGenerationState.test.ts` around lines 39 - 53, The current behavior of getResumeBatchDecision can return an itemIndex that is already in settledItemIndexes when settled indexes are sparse; update getResumeBatchDecision to treat settledItemIndexes as a set and advance itemIndex repeatedly (starting from currentIndex) until you find the first index < totalCount that is not in settledItemIndexes (or determine canSchedule=false if none), adjusting nextInFlightCount accordingly, and add a unit test in batchGenerationState.test.ts (e.g., "handles sparse settled indexes correctly") that asserts for inputs like currentIndex=0, totalCount=10, inFlightCount=0, settledItemIndexes=[0,2] that the returned itemIndex is the first unsettled index (1) and canSchedule/nextInFlightCount are correct.

convex/batchProcessor.ts (1)
67-160: Code duplication with singleGenerationProcessor.ts.

`formatApiErrorText`, the `PollinationsAttemptResult` type, and `fetchPollinationsWithTimeout` are nearly identical to their counterparts in `singleGenerationProcessor.ts`. The only difference is that `batchProcessor` includes a `retryKind` field.

Consider extracting these into a shared module (e.g., `convex/lib/pollinationsClient.ts`) to reduce duplication and ensure consistent behavior.

♻️ Suggested shared module structure

```ts
// convex/lib/pollinationsClient.ts
export type RetryKind = "throttle" | "transient_error" | "permanent"

export type PollinationsAttemptResult =
  | { success: true; response: Response }
  | {
      success: false
      errorMessage: string
      statusCode?: number
      retryable: boolean
      retryKind?: RetryKind // optional for single-gen, required for batch
    }

export function formatApiErrorText(rawErrorText: string): string { ... }

export async function fetchPollinationsWithTimeout(
  url: string,
  apiKey: string,
  timeoutMs: number,
  logger: string,
  includeRetryKind?: boolean
): Promise<PollinationsAttemptResult> { ... }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/batchProcessor.ts` around lines 67 - 160, Extract the duplicated types and functions into a shared module (e.g., convex/lib/pollinationsClient.ts): move RetryKind, PollinationsAttemptResult, formatApiErrorText, and fetchPollinationsWithTimeout into that file; change PollinationsAttemptResult’s failed variant to make retryKind optional (retryKind?: RetryKind) so singleGenerationProcessor can ignore it, and add an optional parameter to fetchPollinationsWithTimeout (e.g., includeRetryKind?: boolean) that controls whether the returned result includes retryKind for batchProcessor; then update batchProcessor.ts and singleGenerationProcessor.ts to import RetryKind, PollinationsAttemptResult, formatApiErrorText, and fetchPollinationsWithTimeout from the new module and remove the duplicated implementations (ensure fetchPollinationsWithTimeout still computes retryable and retryKind internally and only attaches retryKind when includeRetryKind is true).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@convex/lib/batchGenerationState.ts`:
- Around line 26-30: getResumeBatchDecision currently only skips one settled
index which can reschedule already-settled items; replace the single-step logic
that computes itemIndex with a scan to the first unsettled index: use the
settledItemIndexes (defaulting to []) and advance from state.currentIndex while
settledItemIndexes.includes(index) to find the first index not settled, then use
that as itemIndex (handle empty/default arrays and preserve use of
state.currentIndex when nothing is settled). Update the logic in
batchGenerationState.ts where settledItemIndexes and itemIndex are computed to
perform this loop so duplicate scheduling is prevented.
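The loop this prompt describes can be sketched as follows. The names (`currentIndex`, `totalCount`, `settledItemIndexes`) mirror the review text and are assumptions about the repo's actual shapes, not its real code:

```typescript
// Sketch of the advance-past-settled-indexes scan described above:
// skip every already-settled index, not just the current one.
function nextItemIndex(
  currentIndex: number,
  totalCount: number,
  settledItemIndexes: number[] = []
): number {
  let itemIndex = currentIndex
  while (settledItemIndexes.includes(itemIndex) && itemIndex < totalCount) {
    itemIndex += 1
  }
  return itemIndex
}
```

With `currentIndex = 0` and `settledItemIndexes = [0, 1, 5]` this returns 2 rather than the already-settled index 1, which is the duplicate-scheduling case the review calls out.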
In `@hooks/use-batch-mode.ts`:
- Around line 304-307: The catch block that handles errors from resumeBatchJob
should not blindly cast the thrown value to Error; instead detect ConvexError
shape and extract a usable message before calling showErrorToast. Update the
catch to check if error has a 'data' object with a 'message' string (the
ConvexError payload) or fallback to error.message or String(error), then pass
that extracted message (or wrap it in an Error) to showErrorToast; remove the
"as Error" cast and ensure console.error logs the original error and the
extracted message for clarity. Reference resumeBatchJob, ConvexError
(error.data.message), showErrorToast, and the catch block in useBatchMode.
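A minimal sketch of the extraction logic this prompt describes. The `{ data: { message } }` payload shape is taken from the review's description of ConvexError and is an assumption here, not a verified convex API:

```typescript
// Hypothetical helper extracting a user-facing message from a thrown value.
// The ConvexError-style payload shape ({ data: { message } }) is assumed
// from the review text above.
function extractErrorMessage(error: unknown): string {
  if (typeof error === "object" && error !== null && "data" in error) {
    const data = (error as { data?: { message?: unknown } }).data
    if (typeof data?.message === "string") return data.message
  }
  if (error instanceof Error) return error.message
  return String(error)
}
```

The catch block would then call `showErrorToast(extractErrorMessage(error))` while still passing the original `error` to `console.error`, with no `as Error` cast needed.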
---
Outside diff comments:
In `@convex/batchGeneration.ts`:
- Around line 722-728: resumeBatchJob is scheduling the next item using
batchJob.currentIndex instead of the adjusted index returned by
getResumeBatchDecision; change the scheduled payload to use
resumeDecision.itemIndex so settled items are skipped. Locate where
resumeBatchJob calls ctx.scheduler.runAfter(...,
internal.batchProcessor.processBatchItem, { batchJobId: args.batchJobId,
itemIndex: batchJob.currentIndex }) and replace batchJob.currentIndex with the
computed resumeDecision.itemIndex (from getResumeBatchDecision) and ensure the
surrounding conditional uses resumeDecision.itemIndex when deciding to schedule
the next item.
---
Nitpick comments:
In `@convex-optimiziations.md`:
- Line 9: Update the stale API reference: replace mentions of
singleGeneration.dispatchGeneration with the new
singleGenerationProcessor.processGeneration symbol (or explicitly state that
dispatchGeneration was the prior implementation being optimized away) so the doc
reflects the current code paths; update the text around the Convex/ctx.runAction
comment to reference singleGenerationProcessor.processGeneration and remove or
rephrase the note about crossing runtimes accordingly.
In `@convex/batchProcessor.ts`:
- Around line 67-160: Extract the duplicated types and functions into a shared
module (e.g., convex/lib/pollinationsClient.ts): move RetryKind,
PollinationsAttemptResult, formatApiErrorText, and fetchPollinationsWithTimeout
into that file; change PollinationsAttemptResult’s failed variant to make
retryKind optional (retryKind?: RetryKind) so singleGenerationProcessor can
ignore it, and add an optional parameter to fetchPollinationsWithTimeout (e.g.,
includeRetryKind?: boolean) that controls whether the returned result includes
retryKind for batchProcessor; then update batchProcessor.ts and
singleGenerationProcessor.ts to import RetryKind, PollinationsAttemptResult,
formatApiErrorText, and fetchPollinationsWithTimeout from the new module and
remove the duplicated implementations (ensure fetchPollinationsWithTimeout still
computes retryable and retryKind internally and only attaches retryKind when
includeRetryKind is true).
In `@convex/lib/batchGenerationState.test.ts`:
- Around line 39-53: The current behavior of getResumeBatchDecision can return
an itemIndex that is already in settledItemIndexes when settled indexes are
sparse; update getResumeBatchDecision to treat settledItemIndexes as a set and
advance itemIndex repeatedly (starting from currentIndex) until you find the
first index < totalCount that is not in settledItemIndexes (or determine
canSchedule=false if none), adjusting nextInFlightCount accordingly, and add a
unit test in batchGenerationState.test.ts (e.g., "handles sparse settled indexes
correctly") that asserts for inputs like currentIndex=0, totalCount=10,
inFlightCount=0, settledItemIndexes=[0,2] that the returned itemIndex is the
first unsettled index (1) and canSchedule/nextInFlightCount are correct.
In `@convex/singleGenerationProcessor.ts`:
- Around line 172-461: The runGenerationProcessor function is very large and
should be split into clear helper functions to improve readability and
testability: extract the validation/ownership/cancellation logic into a helper
(e.g., validateAndClaimGeneration using runGenerationProcessor, getGeneration,
isCancelled, claimPendingGeneration), extract the Pollinations request + retry
scheduling into a helper (e.g., callPollinationsWithRetry using
buildPollinationsUrl, fetchPollinationsWithTimeout, and scheduler.runAfter),
extract post-processing (crop + prepare upload buffer) into a helper (e.g.,
processImageBuffer using shouldCropDirtberry, cropDirtberryImageBuffer), and
extract persistence/completion into a helper (e.g., persistGenerationResult
using uploadMediaWithThumbnail, storeGeneratedImage, updateGenerationStatus);
move the corresponding try/catch and cancellation checks into these helpers and
call them sequentially from runGenerationProcessor so the top-level function
becomes a concise orchestrator.
In `@hooks/queries/use-generate-image.test.tsx`:
- Around line 126-128: The object key `singleGenerationProcessor` is
over-indented compared to its sibling properties (`singleGeneration`,
`generatedImages`); adjust the indentation of the `singleGenerationProcessor`
block (and its nested `processGeneration:
"singleGenerationProcessor.processGeneration"`) so it aligns with the other
top-level properties to maintain consistent formatting.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2e8ea68c-af6e-4792-b063-942d6325d2c7
⛔ Files ignored due to path filters (1)
`convex/_generated/api.d.ts` is excluded by `!**/_generated/**`
📒 Files selected for processing (15)
- components/debug/limit-tester.tsx
- components/studio/batch/batch-action-button.test.tsx
- components/studio/batch/batch-action-button.tsx
- convex-optimization-branch-review.md
- convex-optimiziations.md
- convex/batchGeneration.ts
- convex/batchProcessor.ts
- convex/lib/batchGenerationState.test.ts
- convex/lib/batchGenerationState.ts
- convex/schema.ts
- convex/singleGeneration.ts
- convex/singleGenerationProcessor.ts
- hooks/queries/use-generate-image.test.tsx
- hooks/queries/use-generate-image.ts
- hooks/use-batch-mode.ts
…ogic

- Move `fetchPollinationsWithTimeout` and `formatApiErrorText` to pollinations lib
- Add optional `retryKind` parameter to `fetchPollinationsWithTimeout`
- Export `PollinationsAttemptResult` and `RetryKind` types from lib index
- Fix batch resume to skip all settled indexes, not just current
- Update `resumeBatchJob` to use `itemIndex` from resume decision
- Add test coverage for sparse settled indexes in batch generation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4db6f31291
```ts
let itemIndex = state.currentIndex

while (settledItemIndexes.includes(itemIndex) && itemIndex < state.totalCount) {
  itemIndex += 1
```
Resume from earliest unsettled item index
getResumeBatchDecision starts scanning at currentIndex, which assumes every lower index is already settled. That assumption is broken by the new scheduled-retry flow in batchProcessor.processBatchItem: if an item fails transiently after a pause is requested, its retry is queued, later exits early on paused status, and leaves that lower itemIndex unsettled while currentIndex has already advanced. Resuming from currentIndex then skips the missed item permanently, so completedCount + failedCount can stay below totalCount and the batch can get stuck instead of completing.
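One way to repair this, sketched under the assumption that the settled set and `totalCount` are available at resume time (names are illustrative, not the repo's): scan from index 0 for the earliest unsettled item instead of trusting `currentIndex`:

```typescript
// Illustrative sketch: find the earliest unsettled index from 0 upward,
// so a lower-index item whose retry was dropped during a pause is not
// skipped permanently. Returns null when every item has settled.
function firstUnsettledIndex(
  totalCount: number,
  settledItemIndexes: number[]
): number | null {
  const settled = new Set(settledItemIndexes)
  for (let i = 0; i < totalCount; i++) {
    if (!settled.has(i)) return i
  }
  return null
}
```

Unlike a scan that starts at `currentIndex`, this recovers item 1 in the failure mode described above, where indexes 0, 2, 3, 4 settled but the retry for index 1 was dropped during a pause.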
🧹 Nitpick comments (3)
lib/config/models.test.ts (1)
190-201: Consolidate these into one exact model-id assertion.

This block is now a second source of truth for image models: Line 194 hard-codes the count, and the membership list here already diverges from `expectedImageModels` above because `dirtberry` was only added in this section. Line 190 also checks counts only, so duplicate/misaligned `model.id` values could still slip through. Prefer one shared expected list and compare exact sets.

♻️ Suggested refactor

```diff
- it("should have all model IDs", () => {
-   expect(ALL_MODEL_IDS.length).toBe(IMAGE_MODEL_IDS.length + VIDEO_MODEL_IDS.length)
- })
-
- it("should have correct image model IDs", () => {
-   expect(IMAGE_MODEL_IDS.length).toBe(16)
-   expect(IMAGE_MODEL_IDS).toContain("zimage")
-   expect(IMAGE_MODEL_IDS).toContain("gptimage")
-   expect(IMAGE_MODEL_IDS).toContain("flux")
-   expect(IMAGE_MODEL_IDS).toContain("klein")
-   expect(IMAGE_MODEL_IDS).toContain("imagen-4")
-   expect(IMAGE_MODEL_IDS).toContain("grok-imagine")
-   expect(IMAGE_MODEL_IDS).toContain("dirtberry")
-   expect(IMAGE_MODEL_IDS).toContain("flux-2-dev")
-   expect(IMAGE_MODEL_IDS).not.toContain("veo")
- })
+ const expectedImageModelIds = [
+   "zimage",
+   "turbo",
+   "gptimage",
+   "gptimage-large",
+   "seedream",
+   "kontext",
+   "flux",
+   "nanobanana",
+   "seedream-pro",
+   "nanobanana-pro",
+   "klein",
+   "klein-large",
+   "imagen-4",
+   "grok-imagine",
+   "dirtberry",
+   "flux-2-dev",
+ ]
+
+ it("should have correct image model IDs", () => {
+   expect(new Set(IMAGE_MODEL_IDS)).toEqual(new Set(expectedImageModelIds))
+ })
+
+ it("should have all model IDs", () => {
+   expect(new Set(ALL_MODEL_IDS)).toEqual(
+     new Set([...IMAGE_MODEL_IDS, ...VIDEO_MODEL_IDS]),
+   )
+ })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/config/models.test.ts` around lines 190 - 201, The test duplicates image model expectations across IMAGE_MODEL_IDS and a hard-coded count; replace the many membership checks and the length assertion with a single exact equality assertion against the canonical expectedImageModels array (or rename that canonical list if needed) — update the "should have correct image model IDs" spec to assert that IMAGE_MODEL_IDS (or Array.from(IMAGE_MODEL_IDS) if it's a Set) equals expectedImageModels as a set/array (compare sorted arrays or use a set-equality helper) and remove the separate expect(IMAGE_MODEL_IDS.length). Ensure the test references the single source of truth variable (expectedImageModels) so additions like "dirtberry" are not duplicated in another place.

convex/lib/batchGenerationState.test.ts (1)
1-6: Consider: `@testing-library/jest-dom/vitest` import may be unnecessary.

This import provides DOM matchers (like `toBeInTheDocument`), but these tests only use Vitest's built-in `toEqual` and `toBe`. The import is harmless but adds unused setup overhead.

🧹 Optional cleanup

```diff
-import "@testing-library/jest-dom/vitest"
 import { describe, expect, it } from "vitest"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/lib/batchGenerationState.test.ts` around lines 1 - 6, The test file imports "@testing-library/jest-dom/vitest" which is unused; remove that import line at the top so only Vitest's utilities are imported and avoid unnecessary DOM matcher setup (update the top imports that currently include buildRecordBatchItemResultTransition and getResumeBatchDecision to only import from "vitest" and "./batchGenerationState" as shown, ensuring no other tests rely on jest-dom matchers).

convex/singleGenerationProcessor.ts (1)
44-68: Dev mock configuration is duplicated from batchProcessor.ts.

The constants and `shouldUseDevGenerationMock` function are nearly identical between this file and `convex/batchProcessor.ts:43-54`. Consider extracting to a shared utility in `convex/lib/` if these need to stay in sync.

💡 Optional: Extract shared dev mock utilities

Create a new file `convex/lib/devMock.ts`:

```ts
export const ENABLE_DEV_GENERATION_MOCK =
  process.env.CONVEX_ENABLE_DEV_GENERATION_MOCK === "true"

export const MOCK_PNG_BASE64 =
  "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/x8AAwMCAO7k6xwAAAAASUVORK5CYII="

export const MOCK_IMAGE_BUFFER = Buffer.from(MOCK_PNG_BASE64, "base64")

export function shouldUseDevGenerationMock(isVideoRequest: boolean): boolean {
  const deployment = process.env.CONVEX_DEPLOYMENT ?? ""
  const isDevDeployment = deployment.startsWith("dev:")
  return ENABLE_DEV_GENERATION_MOCK && isDevDeployment && !isVideoRequest
}
```

Then import from both processors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@convex/singleGenerationProcessor.ts` around lines 44 - 68, Duplicate dev-mock config (ENABLE_DEV_GENERATION_MOCK, MOCK_PNG_BASE64, MOCK_IMAGE_BUFFER) and shouldUseDevGenerationMock are defined here and in batchProcessor; extract them into a single shared module (e.g., convex/lib/devMock) that exports those constants and the shouldUseDevGenerationMock function, update singleGenerationProcessor to import and use them, and remove the duplicated local definitions so both processors consume the shared utility.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@convex/lib/batchGenerationState.test.ts`:
- Around line 1-6: The test file imports "@testing-library/jest-dom/vitest"
which is unused; remove that import line at the top so only Vitest's utilities
are imported and avoid unnecessary DOM matcher setup (update the top imports
that currently include buildRecordBatchItemResultTransition and
getResumeBatchDecision to only import from "vitest" and "./batchGenerationState"
as shown, ensuring no other tests rely on jest-dom matchers).
In `@convex/singleGenerationProcessor.ts`:
- Around line 44-68: Duplicate dev-mock config (ENABLE_DEV_GENERATION_MOCK,
MOCK_PNG_BASE64, MOCK_IMAGE_BUFFER) and shouldUseDevGenerationMock are defined
here and in batchProcessor; extract them into a single shared module (e.g.,
convex/lib/devMock) that exports those constants and the
shouldUseDevGenerationMock function, update singleGenerationProcessor to import
and use them, and remove the duplicated local definitions so both processors
consume the shared utility.
In `@lib/config/models.test.ts`:
- Around line 190-201: The test duplicates image model expectations across
IMAGE_MODEL_IDS and a hard-coded count; replace the many membership checks and
the length assertion with a single exact equality assertion against the
canonical expectedImageModels array (or rename that canonical list if needed) —
update the "should have correct image model IDs" spec to assert that
IMAGE_MODEL_IDS (or Array.from(IMAGE_MODEL_IDS) if it's a Set) equals
expectedImageModels as a set/array (compare sorted arrays or use a set-equality
helper) and remove the separate expect(IMAGE_MODEL_IDS.length). Ensure the test
references the single source of truth variable (expectedImageModels) so
additions like "dirtberry" are not duplicated in another place.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 801a11a5-ee9c-4dfc-8a27-a9d4b241dab2
📒 Files selected for processing (10)
- convex/batchGeneration.ts
- convex/batchProcessor.ts
- convex/lib/batchGenerationState.test.ts
- convex/lib/batchGenerationState.ts
- convex/lib/index.ts
- convex/lib/pollinations.ts
- convex/singleGenerationProcessor.ts
- hooks/queries/use-generate-image.test.tsx
- hooks/use-batch-mode.ts
- lib/config/models.test.ts
Summary
Validation