feat: restructure monorepo, add shared package, and background remover #17
gaureshpai wants to merge 1 commit into main from
Conversation
- Introduce 'package/' directory for shared CLI and core utilities.
- Centralize dependency management using pnpm workspace catalogs.
- Add ImageBackgroundRemover component to the frontend.
- Update backend and frontend to align with the new workspace structure.
- Replace markdownlint with Biome and refine Turbo tasks.
- Perform a global refactor of routes, middleware, and UI components.
📝 Walkthrough

This PR applies extensive indentation reformatting (spaces to tabs) across backend and frontend code, refactors the GitHub Actions publishing workflow to unconditional version bumping, updates the README to emphasize npm package distribution, replaces Sharp image processing with Jimp in image routes, adds a new image background removal component, and updates configuration tooling (Biome) with comprehensive linting rules.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 inconclusive)
This comment was marked as outdated.
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (20)
frontend/src/components/PdfSplitter.jsx-88-90 (1)
88-90: ⚠️ Potential issue | 🟠 Major

Reject malformed range tokens like `1-2-3` during validation.

On Line 88 and Line 89, `split("-")` + destructuring accepts extra separators silently (`1-2-3` becomes `start=1, end=2`). That makes invalid input appear valid and may send unintended ranges downstream.

Proposed fix:
```diff
 for (const part of parts) {
   if (part.includes("-")) {
-    const [start, end] = part.split("-").map(Number);
+    const bounds = part.split("-").map((v) => v.trim());
+    if (bounds.length !== 2 || bounds[0] === "" || bounds[1] === "") {
+      isValidRange = false;
+      break;
+    }
+    const [start, end] = bounds.map(Number);
     if (
       Number.isNaN(start) ||
       Number.isNaN(end) ||
       start <= 0 ||
       end > totalPages ||
       start > end
     ) {
       isValidRange = false;
       break;
     }
```

Also applies to: 96-99
frontend/src/components/ImageResizer.jsx-81-98 (1)
81-98: ⚠️ Potential issue | 🟠 Major

Handle async resize failures and a null canvas context to avoid a stuck loading state.

`setLoading(true)` is called at Line 78, but the resize pipeline has no `reader.onerror`/`img.onerror` handling, and `ctx` at Line 88 can be `null` before Line 89. Any failure here can leave the UI stuck on "Resizing...".

Suggested fix:
```diff
 const reader = new FileReader();
+reader.onerror = () => {
+  toast.error("Failed to read the image file.");
+  setLoading(false);
+};
 reader.onload = (event) => {
   const img = new Image();
+  img.onerror = () => {
+    toast.error("Failed to decode the uploaded image.");
+    setLoading(false);
+  };
   img.onload = () => {
     const canvas = document.createElement("canvas");
     canvas.width = width;
     canvas.height = height;
     const ctx = canvas.getContext("2d");
+    if (!ctx) {
+      toast.error("Canvas is not supported in this browser.");
+      setLoading(false);
+      return;
+    }
     ctx.drawImage(img, 0, 0, width, height);
     const dataUrl = canvas.toDataURL(originalImage.type);
     setResizedImageSrc(dataUrl);
     handleDownload(dataUrl, `resized-${originalImage ? originalImage.name : "image"}`);
     setLoading(false);
   };
   img.src = event.target.result;
 };
```
frontend/src/components/ImageCropper.jsx-53-79 (1)
53-79: ⚠️ Potential issue | 🟠 Major

Guard the crop flow with `try/finally` to prevent a sticky loading state.

If any canvas/image operation throws, `setLoading(false)` is skipped and the button can remain permanently disabled. Add ref/context guards and wrap the crop body in `try/finally`.

Proposed fix:
```diff
 const handleCrop = () => {
   if (!imageSrc) {
     toast.error("Please upload an image first.");
     return;
   }
-  setLoading(true);
-  trackToolUsage("ImageCropper", "image");
-  const image = imageRef.current;
-  const canvas = canvasRef.current;
-  const ctx = canvas.getContext("2d");
+  const image = imageRef.current;
+  const canvas = canvasRef.current;
+  if (!image || !canvas) {
+    toast.error("Image is not ready yet. Please try again.");
+    return;
+  }
+  const ctx = canvas.getContext("2d");
+  if (!ctx) {
+    toast.error("Canvas is unavailable in this browser.");
+    return;
+  }
+
+  setLoading(true);
+  try {
+    trackToolUsage("ImageCropper", "image");

-  const cropX = image.naturalWidth * 0.25;
-  const cropY = image.naturalHeight * 0.25;
-  const cropWidth = image.naturalWidth * 0.5;
-  const cropHeight = image.naturalHeight * 0.5;
+    const cropX = image.naturalWidth * 0.25;
+    const cropY = image.naturalHeight * 0.25;
+    const cropWidth = image.naturalWidth * 0.5;
+    const cropHeight = image.naturalHeight * 0.5;

-  canvas.width = cropWidth;
-  canvas.height = cropHeight;
+    canvas.width = cropWidth;
+    canvas.height = cropHeight;

-  ctx.drawImage(image, cropX, cropY, cropWidth, cropHeight, 0, 0, cropWidth, cropHeight);
-  const dataUrl = canvas.toDataURL(image.type);
-  setCroppedImageSrc(dataUrl);
+    ctx.drawImage(image, cropX, cropY, cropWidth, cropHeight, 0, 0, cropWidth, cropHeight);
+    const dataUrl = canvas.toDataURL("image/png");
+    setCroppedImageSrc(dataUrl);

-  handleDownload(dataUrl, `cropped-image-${Date.now()}.png`);
-  setLoading(false);
+    handleDownload(dataUrl, `cropped-image-${Date.now()}.png`);
+  } finally {
+    setLoading(false);
+  }
 };
```
frontend/src/components/HashGenerator.jsx-34-36 (1)
34-36: ⚠️ Potential issue | 🟠 Major

MD5 is cryptographically broken and should not be used for security-sensitive operations.
MD5 is vulnerable to collision attacks and is considered cryptographically insecure. If this hash generator is used for security-sensitive purposes (password hashing, digital signatures, data integrity verification, authentication tokens), you should remove the MD5 option or add a prominent warning.
MD5 is acceptable only for non-security use cases such as checksums, cache keys, or quick content hashing where collision resistance is not critical.
⚠️ Recommended actions

If security-sensitive:
- Remove MD5 option entirely, or
- Add a visible warning in the UI about MD5's security limitations
- Consider offering only SHA-256, SHA-384, SHA-512, or bcrypt/scrypt for sensitive data
If non-security use case:
- Document clearly that MD5 is provided only for legacy/non-security purposes
frontend/src/components/ImageGrayscaler.jsx-50-53 (1)
50-53: ⚠️ Potential issue | 🟠 Major

Add a timeout to the upload request.

The `axios.post` call on lines 50-53 lacks a timeout configuration. Without it, a stalled network connection can leave the UI stuck in a loading state indefinitely. No global timeout is configured in the frontend.

Suggested fix:
```diff
 const res = await axios.post(
   `${import.meta.env.VITE_API_BASE_URL}/api/convert/image-grayscale`,
   formData,
+  { timeout: 30000 },
 );
```
frontend/src/components/Base64TextConverter.jsx-16-23 (1)
16-23: ⚠️ Potential issue | 🟠 Major

Guard conversion and always clear the loading state.

Line 20 runs conversion inside `setTimeout`, but if any runtime error occurs there, `setLoading(false)` on Line 21 is skipped and the button can stay disabled indefinitely. Wrap the conversion in `try/catch/finally` inside the timeout callback.

Proposed fix:
```diff
 const encodeBase64 = () => {
   setLoading(true);
   trackToolUsage("Base64TextConverter", "text");
   setTimeout(() => {
-    setConvertedText(btoa(text));
-    setLoading(false);
+    try {
+      setConvertedText(btoa(text));
+    } catch (error) {
+      setConvertedText("");
+      toast.error("Failed to encode text.");
+    } finally {
+      setLoading(false);
+    }
   }, 500);
 };
```
backend/routes/textConverter.js-9-20 (1)
9-20: ⚠️ Potential issue | 🟠 Major

Harden request validation for the `text` type before conversion.

Current checks only verify presence. A non-string `text` can still reach conversion and produce a server error path. Validate the type early and return `400` for invalid payloads.

Proposed fix:
```diff
-if (!text || !type) {
+if (typeof text !== "string" || text.length === 0 || !type) {
   return res.status(400).json({ msg: "Text and type (encode/decode) are required." });
 }
```
backend/routes/redirectChecker.js-20-23 (1)
20-23: ⚠️ Potential issue | 🟠 Major

Bound the outbound requests.

Neither probe sets a timeout, and the fallback `axios.get` buffers the full response body by default. A slow or very large target can pin a worker and burn memory/bandwidth.

Also applies to: 42-46
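One way to keep both probes bounded is a shared config factory. This is a sketch, not code from the PR: `boundedProbeConfig` and the specific limits are illustrative assumptions, though every option name is a real axios request option.

```javascript
// Hypothetical helper: one place to define the bounds every outbound
// probe should carry. Limits are illustrative; tune per deployment.
function boundedProbeConfig(overrides = {}) {
  return {
    timeout: 10_000,              // abort the request after 10 seconds
    maxRedirects: 0,              // the checker follows redirects manually
    maxContentLength: 1_048_576,  // cap a buffered response body at 1 MiB
    maxBodyLength: 1_048_576,     // cap the outgoing body the same way
    validateStatus: () => true,   // 3xx/4xx/5xx resolve instead of throwing
    ...overrides,
  };
}
```

Both the `axios.head` probe and the fallback `axios.get` could then pass `boundedProbeConfig()` so they share identical limits.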
frontend/src/components/ExcelToPdfConverter.jsx-10-18 (1)
10-18: ⚠️ Potential issue | 🟠 Major

Read `isAuthenticated` from `state` here too.

Other `AuthContext` consumers in this PR use `state.isAuthenticated`; this destructure pulls a nonexistent top-level field, so signed-in users still hit the 10MB limit.

Suggested fix:
```diff
-const { isAuthenticated } = useContext(AuthContext);
+const {
+  state: { isAuthenticated },
+} = useContext(AuthContext);
```
frontend/src/components/ImageToPdfConverter.jsx-29-53 (1)
29-53: ⚠️ Potential issue | 🟠 Major

Reset selection state before returning on invalid files.

These early returns skip the post-loop reset, so a failed reselection leaves the previous valid files in `selectedFiles`. The next submit can upload the wrong batch.

Suggested fix:
```diff
-const validFiles = [];
-let hasInvalidFile = false;
+const validFiles = [];
 for (const file of files) {
   if (!allowedTypes.includes(file.type)) {
     toast.error(
       `Invalid file type: ${file.name}. Only images (JPEG, PNG, GIF, WebP, TIFF, AVIF) are allowed.`,
     );
-    hasInvalidFile = true;
+    setSelectedFiles([]);
+    e.target.value = "";
     return;
   }
   if (file.size > maxSize) {
     toast.error(
       `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
     );
-    hasInvalidFile = true;
+    setSelectedFiles([]);
+    e.target.value = "";
     return;
   }
   validFiles.push(file);
 }
 setSelectedFiles(validFiles);
-if (hasInvalidFile) {
-  e.target.value = "";
-}
```
frontend/src/components/PngToJpgConverter.jsx-21-43 (1)
21-43: ⚠️ Potential issue | 🟠 Major

Clear stale files before bailing out on validation errors.

This has the same state bug as the image-to-PDF picker: the early returns skip the reset path, so an invalid reselection can leave the previous PNG batch in `selectedFiles`.

Suggested fix:
```diff
-const validFiles = [];
-let hasInvalidFile = false;
+const validFiles = [];
 for (const file of files) {
   if (!allowedTypes.includes(file.type)) {
     toast.error(`Invalid file type: ${file.name}. Only PNG images are allowed.`);
-    hasInvalidFile = true;
+    setSelectedFiles([]);
+    e.target.value = "";
     return;
   }
   if (file.size > maxSize) {
     toast.error(
       `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
     );
-    hasInvalidFile = true;
+    setSelectedFiles([]);
+    e.target.value = "";
     return;
   }
   validFiles.push(file);
 }
 setSelectedFiles(validFiles);
-if (hasInvalidFile) {
-  e.target.value = "";
-}
```
frontend/src/components/Navbar.jsx-86-92 (1)
86-92: ⚠️ Potential issue | 🟠 Major

Add an accessible name to the mobile menu trigger.

This is an icon-only control. `aria-expanded` and `aria-controls` help, but they do not tell screen-reader users what the button does.

Suggested fix:
```diff
 <Button
   variant="ghost"
   size="icon"
   onClick={() => setMobileMenuOpen(!mobileMenuOpen)}
+  aria-label={mobileMenuOpen ? "Close navigation menu" : "Open navigation menu"}
   aria-expanded={mobileMenuOpen}
   aria-controls="mobile-menu"
 >
```
backend/routes/pdfConverter.js-118-120 (1)
118-120: ⚠️ Potential issue | 🟠 Major

Sanitize user-controlled filenames before building storage keys.

Lines 118, 219, and 313 derive object names from `file.originalname` without sanitization. This can introduce unsafe characters/path-like segments into stored filenames.

🛡️ Proposed fix:
```diff
 const router = require("express").Router();
 const path = require("node:path");
@@
 const { handlePdfError, validatePdfFile, validatePageRange } = require("../utils/pdfErrorHandler");
+
+const safeBaseName = (originalname) => {
+  const base = path.parse(String(originalname || "file")).name.normalize("NFKC");
+  const sanitized = base.replace(/[^a-zA-Z0-9._-]/g, "_").replace(/_+/g, "_").slice(0, 80);
+  return sanitized || "file";
+};
@@
-  const nameWithoutExt = path.parse(file.originalname).name;
+  const nameWithoutExt = safeBaseName(file.originalname);
   const outputFileName = `${nameWithoutExt}_dkutils_split_${Date.now()}.pdf`;
@@
-  const nameWithoutExt = path.parse(file.originalname).name;
+  const nameWithoutExt = safeBaseName(file.originalname);
   const outputFileName = `${nameWithoutExt}_dkutils_rotated_${Date.now()}.pdf`;
@@
-  const nameWithoutExt = path.parse(file.originalname).name;
+  const nameWithoutExt = safeBaseName(file.originalname);
   const outputFileName = `${nameWithoutExt}_dkutils_compressed_${Date.now()}.pdf`;
```

Also applies to: 219-221, 313-315
backend/routes/analytics.js-26-44 (1)
26-44: ⚠️ Potential issue | 🟠 Major

Harden `/track` input validation before the DB update.

Line 28 only checks presence. Non-string or oversized `toolName`/`category` values can still reach `findOneAndUpdate`, risking polluted analytics data and avoidable DB stress.

🔒 Proposed fix:
```diff
 const { toolName, category } = req.body;

-if (!toolName || !category) {
+if (typeof toolName !== "string" || typeof category !== "string") {
+  return res.status(400).json({ msg: "Tool name and category must be strings." });
+}
+
+const normalizedToolName = toolName.trim();
+const normalizedCategory = category.trim();
+
+if (!normalizedToolName || !normalizedCategory) {
   return res.status(400).json({ msg: "Tool name and category are required." });
 }

-if (!isValidCategory(category)) {
+if (normalizedToolName.length > 120) {
+  return res.status(400).json({ msg: "Tool name is too long." });
+}
+
+if (!isValidCategory(normalizedCategory)) {
   return res.status(400).json({ msg: "Invalid category" });
 }

 // Find and update or create new entry
 const toolUsage = await ToolUsage.findOneAndUpdate(
-  { toolName, category },
+  { toolName: normalizedToolName, category: normalizedCategory },
```
backend/routes/officeConverter.js-89-90 (1)
89-90: ⚠️ Potential issue | 🟠 Major

Unsanitized user-supplied filename creates a potential security risk.

`file.originalname` is provided by the client and is not sanitized by the `uploadLimiter` middleware (which uses `multer.memoryStorage()` without filename filtering). While `path.parse().name` strips the extension and directory components, it does not sanitize against:
- Null bytes or control characters
- Unicode normalization attacks
- Excessively long filenames
- Reserved names or characters that may cause issues in storage systems
Consider sanitizing the filename before using it in the storage path:
🛡️ Proposed fix to sanitize filename
```diff
+const sanitizeFilename = (name) => {
+  // Remove path separators, null bytes, and control characters
+  return name
+    .replace(/[\x00-\x1f\x80-\x9f]/g, '')
+    .replace(/[/\\:*?"<>|]/g, '_')
+    .substring(0, 100); // Limit length
+};
+
 const nameWithoutExt = path.parse(file.originalname).name;
-const fileName = `${nameWithoutExt}_dkutils_converted_${Date.now()}.docx`;
+const fileName = `${sanitizeFilename(nameWithoutExt)}_dkutils_converted_${Date.now()}.docx`;
```

Apply the same pattern to lines 145-146 for the Excel conversion route.
As noted in `backend/middleware/uploadLimiter.js` (lines 1-35), there is no filename sanitization in the upload middleware chain.

Also applies to: 145-146
frontend/src/components/ImageCompressor.jsx-31-55 (1)
31-55: ⚠️ Potential issue | 🟠 Major

Clear stale selections before returning on validation errors.

The early `return` now skips both `setSelectedFiles(validFiles)` and the input reset. If a user already had valid files selected, then picks a batch with one invalid file, submit still uses the previous selection.
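One way to make the reset unconditional is to pull validation into a pure helper that always returns the next selection state. This is a sketch under assumed names (`validateBatch` is hypothetical, not in the PR):

```javascript
// Hypothetical pure helper: returns the exact selection to store, so a failed
// batch always yields an empty selection instead of leaving stale files behind.
function validateBatch(files, allowedTypes, maxSize) {
  const validFiles = [];
  for (const file of files) {
    if (!allowedTypes.includes(file.type)) {
      return { validFiles: [], error: `Invalid file type: ${file.name}` };
    }
    if (file.size > maxSize) {
      return { validFiles: [], error: `File too large: ${file.name}` };
    }
    validFiles.push(file);
  }
  return { validFiles, error: null };
}
```

The change handler would then call `setSelectedFiles(result.validFiles)` unconditionally, toast `result.error` when present, and clear `e.target.value` on any error, so no code path can keep the old batch alive.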
backend/middleware/apiActivityTracker.js-7-18 (1)
7-18: ⚠️ Potential issue | 🟠 Major

Strip query strings from the endpoint before persisting and counting.

`req.originalUrl` includes query strings in the request, causing each `/download?filename=...` request to create a distinct `ServiceUsage` row (due to the unique constraint on the `endpoint` field) instead of incrementing a single counter for the `/download` endpoint. This also stores user-supplied filenames in `ApiActivity.endpoint`. Use `req.path` instead to track endpoints by their route pattern.
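The difference is easy to see outside Express. A small sketch (`endpointKey` is an illustrative stand-in for what `req.path` already gives you inside a handler):

```javascript
// Mimics Express: req.path is the pathname portion of req.originalUrl,
// with the query string stripped. The base URL here is a throwaway.
function endpointKey(originalUrl) {
  return new URL(originalUrl, "http://placeholder.local").pathname;
}
```

Two downloads with different `filename` query values then aggregate under the single `/download` key instead of creating one row each.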
backend/routes/imageConverter.js-4-7 (1)
4-7: ⚠️ Potential issue | 🟠 Major

The Jimp swap breaks the WEBP/AVIF conversion that this endpoint still advertises.

Jimp 1.6.0's default build supports only BMP, GIF, JPEG, PNG, and TIFF. WEBP and AVIF require explicitly imported WASM plugins (`@jimp/wasm-webp`, `@jimp/wasm-avif`) and a custom Jimp instance created via `createJimp()`. This code allowlists `webp` and `avif` at line 522, then calls `Jimp.read()` and `getBuffer("image/webp"|"image/avif")` directly on the default instance (lines 537-545 and batch paths), causing these requests to fail with unsupported format errors. (https://jimp-dev.github.io/jimp/guides/webp/)
.github/workflows/publish.yml-105-107 (1)
105-107: ⚠️ Potential issue | 🟠 Major

This staging list makes optional docs mandatory again.

Lines 84-92 explicitly treat these README/docs files as optional, but this `git add` hardcodes them anyway. If any one of those paths is absent, the job dies here before it reaches the `git diff --cached` guard. `git add -u` is safer for this workflow because every file it edits is already tracked.

Safer staging:
```diff
-git add package.json frontend/package.json backend/package.json package/package.json \
-  backend/docs/api-documentation.json backend/docs/README.md README.md \
-  frontend/README.md package/README.md
+git add -u
```
.github/workflows/publish.yml-6-14 (1)
6-14: ⚠️ Potential issue | 🟠 Major
Prerelease functionality is unreachable with current input options.
The workflow defines prerelease handling on line 137 using `${{ contains(github.event.inputs.release_type, 'pre') }}`, but the available `release_type` options (`patch`, `minor`, `major`) do not contain the substring `pre`. This evaluates to `false` for all selections, making the prerelease parameter always false and preventing GitHub Release creation as a prerelease. Since `npm version` supports `prepatch`, `preminor`, `premajor`, and `prerelease`, restore the `pre*` options or add a separate boolean input to enable prerelease publishing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/publish.yml around lines 6 - 14, The workflow's release_type input currently only offers "patch", "minor", "major" but the prerelease check uses contains(github.event.inputs.release_type, 'pre'), which will always be false; update the release_type input (symbol: release_type) to include the pre variants ("prepatch", "preminor", "premajor", "prerelease") OR add a new boolean input (e.g., prerelease) and change the prerelease conditional (the contains(...) usage) to instead check that boolean (e.g., github.event.inputs.prerelease) so prerelease publishing can be triggered as intended.
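If the boolean-input route is taken, the wiring might look like the following. This is a sketch only; the `prerelease` input name, description, and default are assumptions, not current workflow content:

```yaml
# Hypothetical wiring: a separate boolean input instead of parsing
# the substring "pre" out of release_type.
on:
  workflow_dispatch:
    inputs:
      release_type:
        type: choice
        options: [patch, minor, major]
      prerelease:
        description: "Mark the GitHub Release as a prerelease"
        type: boolean
        default: false

# Later, in the release step, check the boolean instead of the substring:
#   prerelease: ${{ github.event.inputs.prerelease }}
```

This keeps `release_type` limited to stable bumps while still letting a maintainer opt into a prerelease flag at dispatch time.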
🟡 Minor comments (13)
frontend/src/components/ToolCard.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Remove the hidden BOM at file start.
Line 1 appears to include a UTF-8 BOM (`\uFEFF`) before `import`. This can create noisy diffs or occasional tooling/parsing inconsistencies. Please save the file as UTF-8 without BOM.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ToolCard.jsx` at line 1, Remove the hidden UTF-8 BOM at the start of the file that precedes the import statement (the invisible character before "import PropTypes from \"prop-types\";") by re-saving the file as UTF-8 without BOM or stripping the leading BOM character so the file begins directly with the import line; ensure no other files are saved with a BOM to avoid tooling/parsing issues.frontend/src/components/WebsiteScreenshotGenerator.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Remove the UTF-8 BOM character at file start.
Line 1 appears to include a BOM before the `import` statement. This can cause avoidable tooling/lint inconsistencies across environments.
Suggested fix
-import axios from "axios";
+import axios from "axios";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/WebsiteScreenshotGenerator.jsx` at line 1, The file starts with a UTF-8 BOM before the import statement which can break tools; remove the BOM character so the file begins exactly with "import axios from 'axios';" and re-save the file as UTF-8 without BOM (update editor/IDE save settings or run a file encoding conversion), then verify linting/build passes for WebsiteScreenshotGenerator.jsx.frontend/src/components/MarkdownToHtmlConverter.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Remove the hidden BOM character at file start.
Line 1 contains a UTF-8 BOM (`\uFEFF`) before the `import`. This can cause inconsistent lint/format behavior across tooling/editors.
🔧 Suggested fix
-import { marked } from "marked";
+import { marked } from "marked";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/MarkdownToHtmlConverter.jsx` at line 1, Remove the hidden UTF-8 BOM character at the start of the file so the line "import { marked } from \"marked\";" begins with a normal ASCII character; open frontend/src/components/MarkdownToHtmlConverter.jsx in your editor or run a tool to strip the BOM and re-save the file without BOM (ensure the first character is "i" of "import"), then re-run lint/format to confirm no BOM-related issues remain.frontend/src/components/PdfToWordConverter.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Remove UTF-8 BOM from the first import line.
Line 1 appears to include a BOM (`\uFEFF`) before `import`. This can cause inconsistent linting/parsing behavior across tooling.
🔧 Proposed fix
-import axios from "axios";
+import axios from "axios";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfToWordConverter.jsx` at line 1, The first line of PdfToWordConverter.jsx contains a UTF-8 BOM before the import which can break tooling; open the file and remove the leading BOM so the line starts exactly with "import axios from \"axios\";" (verify there are no hidden characters before the import), then save the file without BOM/with UTF-8 encoding and run lint/parse to confirm the issue is resolved.frontend/src/components/LinkShortener.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor
Remove the BOM prefix from the module file.
Line 1 appears to include a UTF-8 BOM character (`\uFEFF`) before `import`. This can create avoidable parser/lint inconsistencies in some toolchains/editors.
🔧 Proposed fix
-import axios from "axios";
+import axios from "axios";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/LinkShortener.jsx` at line 1, The file LinkShortener.jsx contains a UTF-8 BOM character at the start of the module (before the import axios statement) which can break parsers; open the file containing the import axios line, remove the hidden BOM character at the start of the file, and re-save the file as UTF-8 without BOM (ensure the first character of the file is 'i' of "import"); also scan other source files for a leading BOM and remove if found to avoid toolchain inconsistencies.frontend/src/components/PdfCompressor.jsx-16-35 (1)
16-35: ⚠️ Potential issue | 🟡 Minor
Handle empty file selection without showing an error toast.
If the file picker is canceled, `file` is undefined and the current `else` path shows "Please select a PDF file." This creates noisy UX for a non-error action.
💡 Suggested fix
const onFileChange = (e) => { const file = e.target.files[0]; const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024; + if (!file) { + return; + } - if (file && file.type === "application/pdf") { + if (file.type === "application/pdf") { if (file.size > maxFileSize) { toast.error( `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`, ); setSelectedFile(null); e.target.value = null; } else { setSelectedFile(file); } } else { toast.error("Please select a PDF file."); setSelectedFile(null); e.target.value = null; } };🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfCompressor.jsx` around lines 16 - 35, The onFileChange handler currently treats a canceled file picker as an error; update onFileChange to first check for no selection (e.g., if !file or e.target.files.length === 0) and simply clear state (setSelectedFile(null) and e.target.value = null if needed) and return early without calling toast.error; after that, continue existing checks for file.type === "application/pdf", file.size against maxFileSize (using isAuthenticated to compute the limit), and call toast.error only for real invalid files (wrong type or too large) to avoid noise when the picker is canceled.frontend/src/components/TextCaseConverter.jsx-19-23 (1)
19-23: ⚠️ Potential issue | 🟡 Minor
Add timeout cleanup to avoid stale async state updates.
These `setTimeout` callbacks can still fire after unmount and attempt state updates. Track timeout IDs and clear them in a cleanup effect on unmount.
Applies to: lines 19-23, 28-32, 37-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/TextCaseConverter.jsx` around lines 19 - 23, The setTimeout callbacks in TextCaseConverter.jsx (used in the conversion handlers that call setConvertedText and setLoading) can fire after the component unmounts; modify the conversion functions (e.g., convertToUpper, convertToLower, convertToTitle) to capture each timeout ID (const id = setTimeout(...)) and store it (ref or state array), clear any previous timeout before creating a new one, and add a useEffect cleanup that clears all stored timeout IDs on unmount using clearTimeout to prevent stale state updates.frontend/src/components/ImageCropper.jsx-74-74 (1)
74-74: ⚠️ Potential issue | 🟡 Minor
Store the original file MIME type in state and pass it explicitly to `toDataURL()`.
`image` is an HTMLImageElement and does not have a `type` property. Passing `image.type` (which is `undefined`) to `canvas.toDataURL()` silently defaults to PNG output. The original file's MIME type is available in `handleFileChange` (line 28: `file.type`), but it's not stored. Either store it in state and use it here, or explicitly pass `"image/png"` to `toDataURL()` to make the behavior deterministic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCropper.jsx` at line 74, The code calls canvas.toDataURL(image.type) but image is an HTMLImageElement without a type; update handleFileChange to save the uploaded file's MIME (file.type) into component state (e.g., originalMimeType) and then call canvas.toDataURL(originalMimeType) inside the cropping/export logic (where canvas.toDataURL is invoked) so the exported dataURL uses the actual original file MIME; ensure the state key name matches usages in ImageCropper.jsx and update any references accordingly.frontend/src/components/ImageToBase64Converter.jsx-24-25 (1)
24-25: ⚠️ Potential issue | 🟡 Minor
Fix misleading limit message for authenticated users.
On Line 24, the toast always says "Login for a higher limit (50MB)." That's incorrect when `isAuthenticated` is already true.
Proposed fix
- toast.error( - `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`, - ); + const limitMb = maxFileSize / (1024 * 1024); + const hint = isAuthenticated ? "" : " Login for a higher limit (50MB)."; + toast.error(`File too large: ${file.name}. Maximum size is ${limitMb}MB.${hint}`);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageToBase64Converter.jsx` around lines 24 - 25, The toast message in ImageToBase64Converter.jsx currently always tells users to “Login for a higher limit (50MB)”, which is wrong when isAuthenticated is true; update the error string construction (where the toast is called) to conditionally include the login prompt only if isAuthenticated is false — e.g., build the message using isAuthenticated ? `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB.` : `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`, locating the change inside the ImageToBase64Converter component where the toast is invoked.frontend/src/components/TextDifferenceChecker.jsx-55-65 (1)
55-65: ⚠️ Potential issue | 🟡 Minor
Display and clipboard content mismatch due to HTML entity handling.
With `dangerouslySetInnerHTML` removed, `{diffResult}` renders escaped HTML entities literally (e.g., the user sees `&lt;` on screen). However, `copyToClipboard` uses an `innerHTML` assignment which interprets those entities, so the clipboard receives unescaped text (e.g., `<`).
If the critical diff logic issue above is fixed by restoring `dangerouslySetInnerHTML`, this inconsistency resolves itself. If you keep the current plain-text rendering approach instead, remove `escapeHtml` usage entirely and simplify the copy logic:
♻️ Simplified copy for plain-text approach (if not using dangerouslySetInnerHTML)
const copyToClipboard = async () => { try { - const tempElement = document.createElement("div"); - tempElement.innerHTML = diffResult; - await navigator.clipboard.writeText(tempElement.innerText); + await navigator.clipboard.writeText(diffResult); toast.success("Copied to clipboard!"); } catch (error) {And remove
escapeHtmlcalls since React already escapes text content.Also applies to: 129-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/TextDifferenceChecker.jsx` around lines 55 - 65, The copy/clipboard behavior is inconsistent because copyToClipboard sets tempElement.innerHTML (which unescapes entities) while the UI renders escaped text (dangerouslySetInnerHTML was removed) or uses escapeHtml; fix by making clipboard content match displayed text: either restore the original HTML rendering (use dangerouslySetInnerHTML with diffResult and keep current innerHTML copy logic) or, if you want plain-text rendering, remove escapeHtml usage and change copyToClipboard to write navigator.clipboard.writeText(diffResult) (or tempElement.textContent if you must use a DOM node) so the copied text exactly matches what's shown; update both the copyToClipboard function and any other copy uses (also referenced around lines 129-131) to be consistent.frontend/src/components/ImageGrayscaler.jsx-16-34 (1)
16-34:⚠️ Potential issue | 🟡 MinorDon’t clear a valid selection when file dialog is canceled.
On Line 17 and Line 30, canceling the picker (
fileisundefined) falls into the error branch, clearsselectedFile, and shows a toast. That’s a UX regression if the user already had a valid file selected.💡 Suggested fix
const onFileChange = (e) => { - const file = e.target.files[0]; + const file = e.target.files?.[0]; + if (!file) return; const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024; - if (file?.type.startsWith("image/")) { + if (file.type.startsWith("image/")) {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageGrayscaler.jsx` around lines 16 - 34, The onFileChange handler incorrectly treats a canceled file dialog (file undefined) as an error and clears the existing selection; update onFileChange to first check for absence of e.target.files[0] and simply return early (do not call setSelectedFile or show toast) so an existing selected file is preserved; keep the existing image/type/size validation (using file.type.startsWith("image/") and file.size > maxFileSize) and only clear setSelectedFile and show toast for real invalid files or oversize cases; reference the onFileChange function, setSelectedFile, isAuthenticated and toast calls when making the change.backend/routes/passwordStrength.js-50-53 (1)
50-53: ⚠️ Potential issue | 🟡 Minor
Reject non-string payloads before scoring.
`123` or `{}` pass the current guard and then get coerced by `.length` and the regex checks, so this endpoint can return nonsense scores instead of `400`.
Suggested fix
- if (!password) { + if (typeof password !== "string" || password.length === 0) { return res.status(400).json({ msg: "Password is required." }); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/passwordStrength.js` around lines 50 - 53, The handler currently only checks for presence of password after "const { password } = req.body;" but allows non-string values (e.g., 123 or {}) which cause invalid scoring; update the guard to reject any non-string by checking typeof password !== 'string' (and keep the empty check) and return res.status(400).json({ msg: "Password is required and must be a string." }); so the rest of the code that uses password.length and regexes only runs on bona fide strings.frontend/src/components/SeoTools.jsx-83-90 (1)
83-90: ⚠️ Potential issue | 🟡 Minor
Fix sitemap "not found" state assignment.
Line 88 stores the error message in `sitemapXmlContent` instead of `sitemapXmlError`, so the UI treats an error as valid content.
💡 Proposed fix
if (res.data.exists) { setSitemapXmlContent(res.data.content); trackToolUsage("SeoTools:sitemap_xml_success", "web"); toast.success("sitemap.xml fetched successfully!"); } else { - setSitemapXmlContent(res.data.error || "sitemap.xml not found or accessible."); + setSitemapXmlError(res.data.error || "sitemap.xml not found or accessible."); + setSitemapXmlContent(""); trackToolUsage("SeoTools:sitemap_xml_not_found", "web"); toast.info("sitemap.xml not found or accessible."); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/SeoTools.jsx` around lines 83 - 90, The current else branch in the SeoTools component assigns error text to sitemapXmlContent instead of the error state; update the else branch to call setSitemapXmlError(res.data.error || "sitemap.xml not found or accessible.") (instead of setSitemapXmlContent) so that sitemapXmlError holds the failure message; keep the existing trackToolUsage("SeoTools:sitemap_xml_not_found", "web") and toast.info unchanged.
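The corrected branching above reduces to a small pure function that routes the response into either a content slot or an error slot. The sketch below is illustrative only; the helper name and return shape are assumptions, not the component's actual code:

```javascript
// Sketch: route a sitemap.xml probe result into (content, error) slots,
// mirroring the fix above: errors never land in the content field.
function splitSitemapResult(data) {
	if (data.exists) {
		// Success: keep the fetched XML, clear any prior error.
		return { content: data.content, error: "" };
	}
	// Failure: surface the server's error (or a default) as the error state.
	return {
		content: "",
		error: data.error || "sitemap.xml not found or accessible.",
	};
}

console.log(splitSitemapResult({ exists: true, content: "<urlset/>" }));
// { content: '<urlset/>', error: '' }
console.log(splitSitemapResult({ exists: false }));
// { content: '', error: 'sitemap.xml not found or accessible.' }
```

In the component, the two fields would feed `setSitemapXmlContent` and `setSitemapXmlError` respectively, so the render path never mistakes a failure message for fetched XML.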
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 86a01362-068b-425a-b4c7-33edc4643e78
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (134)
.github/workflows/publish.yml, .markdownlint.json, .markdownlintignore, README.md, backend/docs/api-documentation.json, backend/index.js, backend/middleware/apiActivityTracker.js, backend/middleware/auth.js, backend/middleware/authorize.js, backend/middleware/uploadLimiter.js, backend/models/ApiActivity.js, backend/models/ServiceUsage.js, backend/models/ToolUsage.js, backend/models/Url.js, backend/models/User.js, backend/package.json, backend/routes/analytics.js, backend/routes/auth.js, backend/routes/cleanSupabase.js, backend/routes/favicon.js, backend/routes/imageConverter.js, backend/routes/jsonXmlConverter.js, backend/routes/keepAlive.js, backend/routes/officeConverter.js, backend/routes/passwordStrength.js, backend/routes/pdfConverter.js, backend/routes/redirectChecker.js, backend/routes/screenshot.js, backend/routes/seoTools.js, backend/routes/shortener.js, backend/routes/textConverter.js, backend/routes/textToPdf.js, backend/utils/ipValidation.js, backend/utils/pdfErrorHandler.js, backend/utils/supabaseCleaner.js, backend/utils/supabaseClient.js, biome.json, frontend/package.json, frontend/public/manifest.json, frontend/src/App.jsx, frontend/src/components/Base64TextConverter.jsx, frontend/src/components/CsvToJsonConverter.jsx, frontend/src/components/ExcelToPdfConverter.jsx, frontend/src/components/FaviconExtractor.jsx, frontend/src/components/Footer.jsx, frontend/src/components/HashGenerator.jsx, frontend/src/components/HtmlToMarkdownConverter.jsx, frontend/src/components/ImageBackgroundRemover.jsx, frontend/src/components/ImageCompressor.jsx, frontend/src/components/ImageCropper.jsx, frontend/src/components/ImageFlipper.jsx, frontend/src/components/ImageFormatConverter.jsx, frontend/src/components/ImageGrayscaler.jsx, frontend/src/components/ImageResizer.jsx, frontend/src/components/ImageToBase64Converter.jsx, frontend/src/components/ImageToPdfConverter.jsx, frontend/src/components/JsonFormatterValidator.jsx, frontend/src/components/JsonXmlConverter.jsx, frontend/src/components/LinkShortener.jsx, frontend/src/components/MarkdownToHtmlConverter.jsx, frontend/src/components/Navbar.jsx, frontend/src/components/PasswordGenerator.jsx, frontend/src/components/PasswordStrengthChecker.jsx, frontend/src/components/PdfCompressor.jsx, frontend/src/components/PdfMerger.jsx, frontend/src/components/PdfPageDeleter.jsx, frontend/src/components/PdfRotator.jsx, frontend/src/components/PdfSplitter.jsx, frontend/src/components/PdfToExcelConverter.jsx, frontend/src/components/PdfToTextConverter.jsx, frontend/src/components/PdfToWordConverter.jsx, frontend/src/components/PngToJpgConverter.jsx, frontend/src/components/QrCodeGenerator.jsx, frontend/src/components/QrCodeScanner.jsx, frontend/src/components/SeoTools.jsx, frontend/src/components/TextCaseConverter.jsx, frontend/src/components/TextDifferenceChecker.jsx, frontend/src/components/TextToPdfGenerator.jsx, frontend/src/components/ToolCard.jsx, frontend/src/components/UrlRedirectChecker.jsx, frontend/src/components/WebsiteScreenshotGenerator.jsx, frontend/src/components/auth/Login.jsx, frontend/src/components/auth/Register.jsx, frontend/src/components/mode-toggle.jsx, frontend/src/components/theme-provider.jsx, frontend/src/components/ui/badge.jsx, frontend/src/components/ui/button.jsx, frontend/src/components/ui/card.jsx, frontend/src/components/ui/input.jsx, frontend/src/components/ui/label.jsx, frontend/src/components/ui/textarea.jsx, frontend/src/context/AuthContext.jsx, frontend/src/index.css, frontend/src/index.jsx, frontend/src/lib/utils.js, frontend/src/pages/HomePage.jsx, frontend/src/pages/ImageToolsPage.jsx, frontend/src/pages/PdfToolsPage.jsx, frontend/src/pages/TextToolsPage.jsx, frontend/src/pages/WebToolsPage.jsx, frontend/src/utils/setAuthToken.js, frontend/src/utils/useAnalytics.js, frontend/src/utils/useSortedTools.js, frontend/tailwind.config.js, frontend/vercel.json, frontend/vite.config.js, package.json, package/README.md, package/package.json, package/rslib.config.ts, package/scripts/postbuild.mjs, package/src/branding.ts, package/src/cli.ts, package/src/constants/index.ts, package/src/image/index.ts, package/src/index.ts, package/src/interactive.ts, package/src/interfaces/common.ts, package/src/interfaces/index.ts, package/src/pdf/index.ts, package/src/types/common.ts, package/src/types/shims.d.ts, package/src/utils/branding.ts, package/src/utils/config.ts, package/src/utils/errors.ts, package/src/utils/ffmpeg.ts, package/src/utils/files.ts, package/src/utils/index.ts, package/src/utils/watermark.ts, package/src/video/index.ts, package/tests/dist.test.js, package/tsconfig.json, pnpm-workspace.yaml, turbo.json
💤 Files with no reviewable changes (2)
- .markdownlintignore
- .markdownlint.json
- name: Bump versions
  run: |
    # Fetch tags and get the last release tag
    git fetch --tags
    LAST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
    if [ -n "$LAST_TAG" ]; then
      BASE=$(git rev-list -n1 "$LAST_TAG")
      echo "Using base commit $BASE from last tag $LAST_TAG"
    else
      BASE=$(git rev-parse --verify 4b825dc642cb6eb9a060e54bf8d69288fbee4904)
      echo "No tags found, using empty-tree SHA $BASE"
    fi
    CHANGED_FILES=$(git diff --name-only "$BASE" HEAD)

    # Determine which workspaces need version bumps
    if echo "$CHANGED_FILES" | grep -q "^frontend/"; then
      echo "Frontend changes detected, bumping frontend version"
      cd frontend
      npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
      cd ..
    fi
    echo "Bumping NPM package version (always patch)"
    cd package
    npm version patch --no-git-tag-version
    cd ..

    if echo "$CHANGED_FILES" | grep -q "^backend/"; then
      echo "Backend changes detected, bumping backend version"
      cd backend
      npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
      cd ..
    fi
    echo "Bumping website versions (${{ github.event.inputs.release_type }})"
    # Always bump root, frontend, and backend versions
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version

    cd frontend
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
    cd ..

    # Always bump root version
    cd backend
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
    cd ..
🧩 Analysis chain
🌐 Web query:
npm version command clean tree requirement
💡 Result:
The npm version command requires a clean Git working directory (no uncommitted changes) when run in a Git repository and git-tag-version is enabled (default: true). It fails with "Git working directory not clean" if there are unstaged or uncommitted changes. To bypass this requirement, use the --force or -f flag: npm version patch --force. This allows versioning even with a dirty tree, though it warns about potential risks. Alternatively, disable Git tagging with --no-git-tag-version to update package.json without Git operations: npm version patch --no-git-tag-version. npm ci performs a clean install by deleting node_modules and strictly using package-lock.json, but is unrelated to npm version or Git tree requirements. Sources: Official npm docs confirm the clean directory requirement and force option [1]. GitHub issues and Stack Overflow discussions detail the error and workarounds [16,17,23].
Citations:
- 1: https://docs.npmjs.com/cli/v8/commands/npm-version/
- 2: npm version fails with "Git working directory not clean" npm/npm#6879
- 3: https://stackoverflow.com/questions/52075039/update-the-version-in-package-json-without-clean-git-working-directory-without
- 4: [BUG] npm version `--force` option is not implemented npm/cli#3238
- 5: https://docs.npmjs.com/cli/v9/commands/npm-ci/
🏁 Script executed:
# Check the npm version documentation and flags
npm --version
npm version --help 2>/dev/null | head -50
Repository: gaureshpai/dkutils
Length of output: 526
🌐 Web query:
npm version --force flag documentation
💡 Result:
The --force (or -f) flag for the npm version command skips the check to ensure the git working directory is clean before bumping the version and creating a commit/tag. Normally, npm version fails if there are uncommitted changes, but --force allows it to proceed anyway. This is documented in the official npm docs for the version command (across npm v8 and v9). It warns: "It will fail if the working directory is not clean, unless the -f or --force flag is set." The step "Check to make sure the git working directory is clean before we get started" is explicitly skipped with --force. Note: There was a bug in earlier npm versions (e.g., npm 7) where --force didn't fully work with certain options like from-git, but this was fixed by npm 10.7.0 (2024), and current versions (as of 2026) support it properly. Usage example: npm version patch --force Related config: allow-same-version prevents errors when setting the same version, but --force specifically handles the git dirty state.
Citations:
- 1: https://docs.npmjs.com/cli/v8/commands/npm-version/
- 2: https://docs.npmjs.com/cli/v9/commands/npm-version
- 3: [BUG] npm version `--force` option is not implemented npm/cli#3238
🏁 Script executed:
# Get full npm version help
npm version --help
Repository: gaureshpai/dkutils
Length of output: 519
Chained npm version calls will fail after the first bump.
npm version requires a clean git working tree before running. After the first npm version patch at line 41 changes package/package.json, the working tree becomes dirty, causing subsequent npm version calls to fail. The --no-git-tag-version flag suppresses git commit/tag operations but does NOT bypass the clean-tree check.
Add --force to each npm version call to skip the working directory validation:
Fix
echo "Bumping NPM package version (always patch)"
cd package
- npm version patch --no-git-tag-version
+ npm version patch --no-git-tag-version --force
cd ..
echo "Bumping website versions (${{ github.event.inputs.release_type }})"
# Always bump root, frontend, and backend versions
- npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
+ npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
cd frontend
- npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
+ npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
cd ..
cd backend
- npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
+ npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
cd ..📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

- name: Bump versions
  run: |
    # Fetch tags and get the last release tag
    git fetch --tags
    LAST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
    if [ -n "$LAST_TAG" ]; then
      BASE=$(git rev-list -n1 "$LAST_TAG")
      echo "Using base commit $BASE from last tag $LAST_TAG"
    else
      BASE=$(git rev-parse --verify 4b825dc642cb6eb9a060e54bf8d69288fbee4904)
      echo "No tags found, using empty-tree SHA $BASE"
    fi
    CHANGED_FILES=$(git diff --name-only "$BASE" HEAD)
    # Determine which workspaces need version bumps
    if echo "$CHANGED_FILES" | grep -q "^frontend/"; then
      echo "Frontend changes detected, bumping frontend version"
      cd frontend
      npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
      cd ..
    fi
    echo "Bumping NPM package version (always patch)"
    cd package
    npm version patch --no-git-tag-version
    cd ..
    if echo "$CHANGED_FILES" | grep -q "^backend/"; then
      echo "Backend changes detected, bumping backend version"
      cd backend
      npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
      cd ..
    fi
    echo "Bumping website versions (${{ github.event.inputs.release_type }})"
    # Always bump root, frontend, and backend versions
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
    cd frontend
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
    cd ..
    # Always bump root version
    cd backend
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
    cd ..

After:

- name: Bump versions
  run: |
    echo "Bumping NPM package version (always patch)"
    cd package
    npm version patch --no-git-tag-version --force
    cd ..
    echo "Bumping website versions (${{ github.event.inputs.release_type }})"
    # Always bump root, frontend, and backend versions
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
    cd frontend
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
    cd ..
    cd backend
    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
    cd ..
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/publish.yml around lines 37 - 54, The "Bump versions"
workflow step uses multiple npm version calls (the commands shown for package,
root, frontend, and backend) which fail after the first bump because npm
enforces a clean git tree even with --no-git-tag-version; update each npm
version invocation in that block to include the --force flag (add --force to the
npm version patch in package and to the npm version ${{
github.event.inputs.release_type }} calls for root, frontend, and backend) so
subsequent bumps skip the working-tree validation and succeed.
TAG_NAME="v${{ steps.get_versions.outputs.website_version }}"
if git rev-parse -q --verify "refs/tags/$TAG_NAME"; then
  echo "Tag $TAG_NAME already exists, skipping tag creation"
  # Only set did_commit=true if we actually made a new commit
  if [ "$(git rev-parse HEAD)" != "$(git rev-parse HEAD~1)" ]; then
    echo "did_commit=true" >> $GITHUB_OUTPUT
  fi
else
  echo "Creating new tag $TAG_NAME"
  git tag -a "$TAG_NAME" -m "Release $TAG_NAME"
  echo "did_commit=true" >> $GITHUB_OUTPUT
fi
echo "did_commit=true" >> $GITHUB_OUTPUT
🧩 Analysis chain
🏁 Script executed:
cat -n .github/workflows/publish.yml
Repository: gaureshpai/dkutils
Length of output: 6023
🌐 Web query:
softprops/action-gh-release existing tag behavior documentation
💡 Result:
When using softprops/action-gh-release with an existing tag, the action reuses the existing GitHub release associated with that tag (if one exists) and updates it by uploading new assets. Existing assets with the same name are overwritten. If release metadata fields like name, body, draft, prerelease, etc., are not explicitly provided in the action inputs, the existing release retains its original metadata values. This behavior is documented in the official README: - "If a tag already has a GitHub release, the existing release will be updated with the release assets." - "When the release info keys (such as name, body, prerelease, etc.) are not explicitly set and there is already an existing release for the tag, the release will retain its original info." Recent releases (e.g., v2.6.1 as of 2026-03-16) include documentation clarifications on reused draft release behavior and immutable prerelease uploads, where GitHub platform constraints apply (e.g., prereleases become immutable after publication). Note: There is no built-in input to fail or skip on existing tags/releases; it always attempts to reuse and update. For asset overwriting issues in concurrent/matrix jobs, sequential uploads or separate jobs may be needed.
Citations:
- 1: https://github.com/softprops/action-gh-release
- 2: Support release asset patching behavior to overwrite existing release assets softprops/action-gh-release#45
Don't continue to publish when tag creation is skipped.
When v${{ steps.get_versions.outputs.website_version }} already exists, the workflow still sets did_commit=true (line 123), allowing the push and release steps to run. However, no new tag was created, so softprops/action-gh-release will target the existing old tag instead. This creates a mismatch: a new commit is created with version bumps, but the GitHub release is published against the old tag. This is unsafe—the release should either target the new tag or fail.
Exit with code 1 when a tag collision is detected, as this indicates the version bump logic or repository state has a problem that needs investigation.
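A minimal sketch of the fail-fast variant, based on the hunk quoted above (the step name is assumed; the tag variable and outputs come from the workflow itself):

```yaml
- name: Create release tag
  run: |
    TAG_NAME="v${{ steps.get_versions.outputs.website_version }}"
    if git rev-parse -q --verify "refs/tags/$TAG_NAME" >/dev/null; then
      echo "::error::Tag $TAG_NAME already exists; aborting instead of releasing against a stale tag"
      exit 1
    fi
    git tag -a "$TAG_NAME" -m "Release $TAG_NAME"
    echo "did_commit=true" >> "$GITHUB_OUTPUT"
```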
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/publish.yml around lines 116 - 123, The workflow sets
did_commit=true even when TAG_NAME (v${{
steps.get_versions.outputs.website_version }}) already exists, which causes the
publish/release steps (e.g., softprops/action-gh-release) to run against an old
tag; change the logic in the TAG_NAME check so that when git rev-parse finds the
tag you immediately fail the job (exit with non‑zero) and do not set did_commit,
e.g., replace the "Tag ... already exists, skipping tag creation" branch with an
explicit error exit (or set did_commit=false and exit 1) so subsequent steps do
not run and the mismatch is prevented.
| { $inc: { totalCount: 1 } }, | ||
| { upsert: true, new: true }, | ||
| ); | ||
| await TotalUsage.findOneAndUpdate({}, { $inc: { totalCount: 1 } }, { upsert: true, new: true }); |
🧩 Analysis chain
🏁 Script executed:
# First, find and read the middleware file
find . -name "apiActivityTracker.js" -type f
Repository: gaureshpai/dkutils
Length of output: 104
🏁 Script executed:
# Find the TotalUsage model definition
find . -type f \( -name "*.js" -o -name "*.ts" \) | xargs grep -l "TotalUsage" | head -20
Repository: gaureshpai/dkutils
Length of output: 137
🏁 Script executed:
# Get context around line 14 in apiActivityTracker.js
cat -n backend/middleware/apiActivityTracker.js | head -30
Repository: gaureshpai/dkutils
Length of output: 998
🌐 Web query:
Mongoose findOneAndUpdate upsert empty filter atomicity concurrent writes
💡 Result:
In Mongoose (as of v8.15+ per recent updates), Model.findOneAndUpdate({}, update, { upsert: true }) with an empty filter {} is valid and expected behavior. It finds the first document in natural order (typically oldest insertion order, non-deterministic without a sort), updates it if any exist, or inserts a new document from the update fields if the collection is empty.

Atomicity: The underlying MongoDB findOneAndUpdate is atomic at the single-document level for matched updates — the find and the update happen as one operation. However, with an empty filter:

- Without a sort, which document is "first" is non-deterministic across queries.
- In the upsert case, an empty collection gets an atomic insert; a non-empty collection gets an update of the first document (atomic per op).

Concurrent writes: Each concurrent findOneAndUpdate({}, ..., { upsert: true }) operates atomically and independently. Since an empty filter matches all documents:

- Multiple ops race to update different "first" docs (MongoDB doesn't lock the collection).
- Result: multiple docs may be updated, one per concurrent op, with no duplicates created but no guarantee about which docs were touched.
- Each document is safe from lost updates, but expect multiple documents modified non-deterministically.

This is dangerous in production: unintended broad updates. Mongoose issue #14913 discusses this; since ~2025, a { strictFilter: true } / { requireFilter: true } option throws on empty filters to prevent accidents (default false for backwards compatibility). Official MongoDB docs confirm that an empty filter updates the first document, and Mongoose mirrors this. For concurrency safety with upsert, always filter on uniquely indexed fields so operations serialize via the index.

Recommendation: Never use an empty filter. Adding sort: { _id: 1 } restores determinism, but a specific filter is the better fix.
Citations:
- 1: https://stackoverflow.com/questions/40794558/is-an-upsert-in-mongodb-atomic-with-the-filter-and-the-actual-update
- 2: https://stackoverflow.com/questions/75583865/is-mongooses-findoneandupdate-atomic
- 3: findOneAndUpdate method - produces inappropriate results on missing to pass first argument Automattic/mongoose#14913
- 4: https://mongoosejs.com/docs/tutorials/findoneandupdate.html
- 5: https://www.mongodb.com/docs/manual/reference/method/db.collection.findoneandupdate/
- 6: https://mongoosejs.com/docs/7.x/docs/tutorials/findoneandupdate.html
- 7: Fix: Add strictFilter option to findOneAndUpdate (#14913) Automattic/mongoose#15402
🏁 Script executed:
# Read the ServiceUsage model to understand TotalUsage schema
cat -n backend/models/ServiceUsage.js
Repository: gaureshpai/dkutils
Length of output: 729
Use a fixed unique key instead of an empty filter for the global counter.
findOneAndUpdate({}, ..., { upsert: true }) with an empty filter matches an arbitrary "first" document in natural order. Under concurrent writes, each operation can atomically update a different "first" document, so increments end up spread across multiple TotalUsage documents instead of accumulating in a single global counter. Add a unique identifier field to the schema and filter on it so that every update serializes against the same single document.
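A tiny in-memory model of why the filter matters once more than one counter document exists. This is an illustration of the semantics only, not Mongoose code; the document shapes are assumptions:

```javascript
// Simplified stand-in for findOneAndUpdate with upsert: `matches` plays
// the role of the filter, and the first matching document is updated.
function findOneAndUpdateSim(collection, matches, seed) {
  const doc = collection.find(matches);
  if (doc) {
    doc.totalCount += 1;
    return doc;
  }
  const created = { ...seed, totalCount: 1 };
  collection.push(created);
  return created;
}

// Suppose a past race already left two counter documents behind:
const counters = [
  { key: "stray", totalCount: 3 },
  { key: "global", totalCount: 5 },
];

// Empty filter: the increment lands on whichever document is "first".
findOneAndUpdateSim(counters, () => true, {});
// counters[0] is now 4; the "global" counter was never touched.

// Fixed unique key: every caller converges on the single global document.
findOneAndUpdateSim(counters, (d) => d.key === "global", { key: "global" });
// counters[1] is now 6.
```

With a real unique index on the key field, MongoDB serializes the concurrent upserts through the index, which is exactly what the empty filter cannot provide.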
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/middleware/apiActivityTracker.js` at line 14,
TotalUsage.findOneAndUpdate is using an empty filter which can create multiple
documents under concurrent writes; change the update to target a fixed unique
key (e.g., use filter { key: 'global' } or { _id: 'global' }) and ensure the
TotalUsage schema has that field with a unique constraint; perform
findOneAndUpdate(filter, { $inc: { totalCount: 1 }, $setOnInsert: { key:
'global' } }, { upsert: true, new: true }) so updates are serialized to a single
global counter document.
| router.post("/", async (req, res) => { | ||
| try { | ||
| await cleanSupabaseStorage(); | ||
| res.status(200).json({ msg: "Supabase cleanup triggered successfully." }); | ||
| } catch (error) { | ||
| console.error("Error triggering Supabase cleanup:", error); | ||
| res.status(500).json({ | ||
| msg: "Failed to trigger Supabase cleanup.", | ||
| error: error.message, | ||
| }); | ||
| } | ||
| try { | ||
| await cleanSupabaseStorage(); | ||
| res.status(200).json({ msg: "Supabase cleanup triggered successfully." }); | ||
| } catch (error) { | ||
| console.error("Error triggering Supabase cleanup:", error); | ||
| res.status(500).json({ | ||
| msg: "Failed to trigger Supabase cleanup.", | ||
| error: error.message, | ||
| }); | ||
| } | ||
| }); |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Expected: the clean-supabase mount chain shows auth/signature middleware before this router.
rg -n -C4 'cleanSupabase|clean-supabase|/trigger' .
rg -n -C4 '\b(app|router)\.use\s*\(' backend
rg -n -C4 'auth|authenticate|verify|secret|token|signature|cron' backend .github
Repository: gaureshpai/dkutils
Length of output: 30330
🏁 Script executed:
cat -n backend/middleware/auth.js
Repository: gaureshpai/dkutils
Length of output: 598
Add mandatory authentication enforcement to the cleanup routes.
The authMiddleware at the mount point (line 90 in backend/index.js) is optional—it calls next() without blocking if no token is provided (see backend/middleware/auth.js, lines 6–7). This allows unauthenticated requests to reach the handlers.
Additionally, the GET /trigger route performs a destructive side effect, which violates HTTP semantics and risks accidental triggering via prefetch or crawlers. The production workflow (.github/workflows/trigger-supabase-cleanup.yml, line 24) explicitly calls this endpoint without authentication, confirming the endpoint is accessible to anyone.
Either:
- Replace the optional middleware with a middleware that enforces authentication (rejecting requests without a valid token), or
- Use a signed token/secret in the request to validate intentional triggering (suitable for cron/external calls).
Also convert the destructive GET route to POST, or remove it entirely and use only the POST handler.
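A sketch of an enforcing variant for the cron/external-call case. The middleware name, header name, and `CLEANUP_SECRET` env var are assumptions, not the repo's actual API:

```javascript
// Rejects the request outright unless a shared secret is presented;
// contrast with the optional middleware that calls next() regardless.
function requireCleanupSecret(req, res, next) {
  const provided = req.get("x-cleanup-secret");
  if (!provided || provided !== process.env.CLEANUP_SECRET) {
    return res.status(401).json({ msg: "Unauthorized" });
  }
  next();
}

module.exports = requireCleanupSecret; // e.g. router.post("/", requireCleanupSecret, handler)
```

The GitHub Actions workflow would then send the same secret as a header, keeping the endpoint unusable for anonymous callers.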
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/cleanSupabase.js` around lines 7 - 18, The cleanup endpoints
currently allow unauthenticated access; update backend/routes/cleanSupabase.js
to require enforced authentication and remove the destructive GET handler:
attach or call the strict auth validator (not the optional authMiddleware)
before the router handlers (or add an explicit token/secret check inside the
POST handler that verifies a signed secret from the request) so that
cleanSupabaseStorage() can only be invoked with a valid credential, and remove
or replace the GET "/trigger" route with a POST-only endpoint that uses the
enforced auth/secret validation for cron/external calls; make sure to reference
the router.post handler and cleanSupabaseStorage function when implementing the
checks.
| router.get("/trigger", async (req, res) => { | ||
| try { | ||
| await cleanSupabaseStorage(); | ||
| res.status(200).json({ msg: "Supabase cleanup triggered successfully." }); | ||
| } catch (error) { | ||
| console.error("Error triggering Supabase cleanup:", error); | ||
| res.status(500).json({ | ||
| msg: "Failed to trigger Supabase cleanup.", | ||
| error: error.message, | ||
| }); | ||
| } | ||
| try { | ||
| await cleanSupabaseStorage(); | ||
| res.status(200).json({ msg: "Supabase cleanup triggered successfully." }); | ||
| } catch (error) { | ||
| console.error("Error triggering Supabase cleanup:", error); | ||
| res.status(500).json({ | ||
| msg: "Failed to trigger Supabase cleanup.", | ||
| error: error.message, | ||
| }); | ||
| } | ||
| }); |
Remove the destructive GET trigger.
GET /trigger performs a side effect that can be fired by crawlers, link previews, caches, or browser prefetch. Scheduled jobs can call the POST endpoint instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/cleanSupabase.js` around lines 23 - 34, Remove the destructive
GET route handler so the cleanup can't be triggered by crawlers or prefetch;
delete the router.get("/trigger", ...) block and expose the same behavior only
via router.post("/trigger", async (req, res) => { ... }) calling
cleanSupabaseStorage(), returning the same 200/500 responses. Ensure the unique
symbols router.get("/trigger") and cleanSupabaseStorage() are replaced by
router.post("/trigger") so scheduled jobs use POST instead of GET.
| /^fc00:/, // IPv6 unique local address (fc00::/7) | ||
| /^fd00:/, // IPv6 unique local address (fd00::/8) | ||
| /^fe80:/, // IPv6 link-local address | ||
| /^2001:db8::/, // IPv6 documentation prefix |
Critical: IPv6 documentation prefix regex won't match expanded addresses.
The regex /^2001:db8::/ uses :: which will only match the zero-compressed form 2001:db8::x. It will fail to match expanded addresses like 2001:db8:1::1 or 2001:db8:0:0:0:0:0:1. The documentation range is 2001:db8::/32, so all addresses beginning with 2001:db8: should match.
🔧 Proposed fix
- /^2001:db8::/, // IPv6 documentation prefix
+ /^2001:db8:/, // IPv6 documentation prefix (2001:db8::/32)
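The difference is easy to check directly; `narrow` and `fixed` below correspond to the before/after patterns in the diff:

```javascript
const narrow = /^2001:db8::/; // current pattern: requires the zero-compressed "::" form
const fixed = /^2001:db8:/; // corrected: any address in 2001:db8::/32

const expanded = "2001:db8:1::1"; // documentation address that is not zero-compressed
const compressed = "2001:db8::1";

narrow.test(expanded); // → false — slips past the current check
fixed.test(expanded); // → true
fixed.test(compressed); // → true
```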
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/utils/ipValidation.js` at line 22, The IPv6 documentation prefix
regex /^2001:db8::/ is too narrow; update the entry in
backend/utils/ipValidation.js (the array or variable that holds the
reserved/documentation prefixes) to match the entire 2001:db8::/32 block by
using a start-anchor prefix match like /^2001:db8:/i (or /^2001:db8:/ if you
prefer case-sensitive) so any address beginning with "2001:db8:" (including
expanded forms) is matched; replace the old /^2001:db8::/ regex with
/^2001:db8:/i and run tests.
| @@ -1,167 +1,162 @@ | |||
| import React, { useState } from "react"; | |||
| import md5 from "js-md5"; | |||
Remove the BOM character at the start of the file.
The file begins with a UTF-8 Byte Order Mark (BOM) character before the import statement. This invisible character can cause JavaScript parsing errors, module resolution failures, and unexpected runtime behavior in various environments and bundlers.
🔧 Proposed fix
Save the file with UTF-8 encoding without BOM, or manually remove the invisible character. Most modern editors have an option to save as "UTF-8 without BOM":
-import md5 from "js-md5";
+import md5 from "js-md5";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/HashGenerator.jsx` at line 1, The file starts with a
UTF-8 BOM character before the import which can break parsing; open
frontend/src/components/HashGenerator.jsx and remove the leading invisible BOM
character (save the file as "UTF-8 without BOM" in your editor) so the import
md5 from "js-md5"; line begins at the very first byte; ensure no other files in
the same module have BOMs.
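If a build step reads files as strings, a small guard of this shape can also catch BOM regressions in tooling (a hypothetical helper, not something the repo ships):

```javascript
// Strips a leading UTF-8 BOM (U+FEFF) if present, e.g. from a string
// produced by fs.readFileSync(path, "utf8").
function stripBom(text) {
  return text.charCodeAt(0) === 0xfeff ? text.slice(1) : text;
}

stripBom("\uFEFFimport md5 from \"js-md5\";"); // → 'import md5 from "js-md5";'
```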
| for (const file of files) { | ||
| if (file.type !== "application/pdf") { | ||
| toast.error(`Invalid file type: ${file.name}. Only PDF files are allowed.`); | ||
| hasError = true; | ||
| return; | ||
| } | ||
| if (file.size > maxFileSize) { | ||
| toast.error( | ||
| `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`, | ||
| ); | ||
| hasError = true; | ||
| return; | ||
| } | ||
| validFiles.push(file); | ||
| } |
Critical: Early return breaks validation error handling.
The return statements on lines 27 and 34 exit the entire onFileChange function immediately when an invalid file is encountered. This makes the error handling code at lines 40-45 completely unreachable—the hasError flag is set but never checked, the file input is never cleared, and the error state message is never set.
Current broken behavior:
- User selects files including an invalid one
- Toast error appears
- Function exits early via `return`
- File input still displays the invalid files (not cleared)
- `selectedFiles` state is not updated
- Error message below input is not shown
Impact: Validation appears broken to users—they see a toast but the file input isn't cleared and no persistent error message appears. The dead code suggests this refactoring from forEach to for...of introduced the bug.
🔧 Fix: Replace `return` with `break` to reach error handling code
for (const file of files) {
if (file.type !== "application/pdf") {
toast.error(`Invalid file type: ${file.name}. Only PDF files are allowed.`);
hasError = true;
- return;
+ break;
}
if (file.size > maxFileSize) {
toast.error(
`File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
);
hasError = true;
- return;
+ break;
}
validFiles.push(file);
}Alternatively, restructure to validate all files first, accumulate errors, and then handle them after the loop completes.
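A standalone sketch of the corrected control flow — the names mirror the component, but nothing here is the actual repo code:

```javascript
function validateFiles(files, maxFileSize) {
  const validFiles = [];
  let hasError = false;
  for (const file of files) {
    if (file.type !== "application/pdf" || file.size > maxFileSize) {
      hasError = true;
      break; // with `return` here, the cleanup below would never run
    }
    validFiles.push(file);
  }
  // Reachable again after the loop: clear the selection and surface the error.
  if (hasError) {
    return { selectedFiles: [], errorShown: true };
  }
  return { selectedFiles: validFiles, errorShown: false };
}
```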
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/PdfMerger.jsx` around lines 23 - 37, The loop inside
onFileChange exits early on an invalid file due to `return`, preventing
post-loop error handling, clearing the file input, and updating selectedFiles;
change the control flow so the loop does not return early (e.g., replace
`return` with `break` or continue validating remaining files and accumulate
errors), ensure hasError is checked after the loop and if true clear the file
input element, do not call setSelectedFiles(validFiles) when hasError is true,
and set the persistent error state/message so the UI shows the error; reference
the onFileChange function, variables validFiles, hasError, and the
selectedFiles/state update to locate where to modify logic.
| const pagesToDeleteArray = pagesToDelete | ||
| .split(",") | ||
| .map((s) => s.trim()) | ||
| .filter(Boolean) | ||
| .flatMap((range) => { | ||
| if (range.includes("-")) { | ||
| const [start, end] = range.split("-").map(Number); | ||
| if ( | ||
| Number.isNaN(start) || | ||
| Number.isNaN(end) || | ||
| start < 1 || | ||
| end > numPages || | ||
| start > end | ||
| ) { | ||
| throw new Error(`Invalid page range: ${range}`); | ||
| } | ||
| return Array.from({ length: end - start + 1 }, (_, i) => start + i); | ||
| } | ||
| }); |
Single-page delete tokens are currently ignored.
flatMap only returns values for a-b ranges. Inputs like 1, 3 (shown in UI) produce no deletions for those tokens. This breaks core behavior.
✅ Suggested fix
const pagesToDeleteArray = pagesToDelete
.split(",")
.map((s) => s.trim())
.filter(Boolean)
- .flatMap((range) => {
- if (range.includes("-")) {
- const [start, end] = range.split("-").map(Number);
- if (
- Number.isNaN(start) ||
- Number.isNaN(end) ||
- start < 1 ||
- end > numPages ||
- start > end
- ) {
- throw new Error(`Invalid page range: ${range}`);
- }
- return Array.from({ length: end - start + 1 }, (_, i) => start + i);
- }
- });
+ .flatMap((token) => {
+ const parts = token.split("-").map((p) => p.trim());
+
+ if (parts.length === 1) {
+ const page = Number(parts[0]);
+ if (!Number.isInteger(page) || page < 1 || page > numPages) {
+ throw new Error(`Invalid page number: ${token}`);
+ }
+ return [page];
+ }
+
+ if (parts.length === 2) {
+ const [start, end] = parts.map(Number);
+ if (
+ !Number.isInteger(start) ||
+ !Number.isInteger(end) ||
+ start < 1 ||
+ end > numPages ||
+ start > end
+ ) {
+ throw new Error(`Invalid page range: ${token}`);
+ }
+ return Array.from({ length: end - start + 1 }, (_, i) => start + i);
+ }
+
+ throw new Error(`Invalid page selector: ${token}`);
+ });
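Extracted as a standalone function, the token handling in the suggested fix can be exercised directly (the function name is illustrative):

```javascript
function parsePageTokens(input, numPages) {
  return input
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean)
    .flatMap((token) => {
      const parts = token.split("-").map((p) => p.trim());
      if (parts.length === 1) {
        const page = Number(parts[0]);
        if (!Number.isInteger(page) || page < 1 || page > numPages) {
          throw new Error(`Invalid page number: ${token}`);
        }
        return [page]; // single-page tokens now contribute a deletion
      }
      if (parts.length === 2) {
        const [start, end] = parts.map(Number);
        if (
          !Number.isInteger(start) ||
          !Number.isInteger(end) ||
          start < 1 ||
          end > numPages ||
          start > end
        ) {
          throw new Error(`Invalid page range: ${token}`);
        }
        return Array.from({ length: end - start + 1 }, (_, i) => start + i);
      }
      throw new Error(`Invalid page selector: ${token}`);
    });
}

parsePageTokens("1, 3, 5-7", 10); // → [1, 3, 5, 6, 7]
```

Every branch of the `flatMap` callback now returns or throws, so malformed tokens like `1-2-3` fail loudly instead of silently yielding `undefined` entries.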
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/PdfPageDeleter.jsx` around lines 52 - 70, The current
flatMap callback only returns arrays for range tokens and ignores single-page
tokens (e.g., "1,3"), causing missing deletions; update the pagesToDeleteArray
construction (the pagesToDelete .split(...).flatMap callback) to handle
non-range tokens by parsing the token to a Number, validating it (not NaN, >=1,
<= numPages), and returning an array with that single page (e.g., [page]) so
every branch returns a value; keep the existing range validation and throw on
invalid tokens as before.
| const html = diff | ||
| .map((part) => { | ||
| const [type, text] = part; | ||
| const safeText = escapeHtml(text); | ||
| if (type === 0) { | ||
| return safeText; | ||
| } | ||
| return ""; | ||
| }) | ||
| .join(""); |
Critical: Diff logic discards all additions and deletions.
The diff-match-patch library uses type === 0 for equal (unchanged) parts, type === -1 for deletions, and type === 1 for additions. This implementation only returns text when type === 0 and returns "" for everything else—meaning the component now displays only the parts that are identical between the two texts, completely losing the actual differences.
This defeats the core purpose of a "Text Difference Checker."
🐛 Proposed fix to restore diff visualization
const html = diff
.map((part) => {
const [type, text] = part;
const safeText = escapeHtml(text);
- if (type === 0) {
- return safeText;
- }
- return "";
+ if (type === -1) {
+ return `<del class="bg-red-200 dark:bg-red-900">${safeText}</del>`;
+ }
+ if (type === 1) {
+ return `<ins class="bg-green-200 dark:bg-green-900 no-underline">${safeText}</ins>`;
+ }
+ return safeText;
})
.join("");Then update the rendering at line 129-131 to use dangerouslySetInnerHTML:
- <div className="bg-background border border-input text-foreground text-sm rounded-lg block w-full p-2.5 h-max">
- {diffResult}{" "}
- </div>
+ <div
+ className="bg-background border border-input text-foreground text-sm rounded-lg block w-full p-2.5 h-max whitespace-pre-wrap"
+ dangerouslySetInnerHTML={{ __html: diffResult }}
+ />🤖 Prompt for AI Agents
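The mapping is small enough to verify in isolation. diff-match-patch emits `[op, text]` tuples with `-1` = delete, `0` = equal, `1` = insert; the CSS classes from the fix above are omitted here for brevity:

```javascript
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function diffToHtml(diff) {
  return diff
    .map(([op, text]) => {
      const safe = escapeHtml(text);
      if (op === -1) return `<del>${safe}</del>`;
      if (op === 1) return `<ins>${safe}</ins>`;
      return safe; // op === 0: unchanged text passes through
    })
    .join("");
}

diffToHtml([[0, "the "], [-1, "cat"], [1, "dog"]]);
// → 'the <del>cat</del><ins>dog</ins>'
```

Escaping before wrapping keeps user-supplied text from being interpreted as markup when the result is rendered via dangerouslySetInnerHTML.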
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/TextDifferenceChecker.jsx` around lines 39 - 48, The
current diff mapping only returns unchanged parts (type === 0) and discards
additions/deletions; update the logic that builds the html variable (the
diff.map callback around part, type, text and escapeHtml) to return escaped text
wrapped in distinct markers for each diff type (e.g., wrap deletions (type ===
-1) in a <del> ... </del>, additions (type === 1) in an <ins> ... </ins>, and
leave equals (type === 0) unwrapped) so additions and deletions are preserved
for rendering, and then ensure the component renders that html string as HTML
(use the existing rendering spot that currently renders html to switch to
dangerouslySetInnerHTML or equivalent).