
feat: restructure monorepo, add shared package, and background remover #17

Open
gaureshpai wants to merge 1 commit into main from feat/monorepo-restructure-cli

Conversation

@gaureshpai
Owner

@gaureshpai gaureshpai commented Mar 23, 2026

  • Introduce 'package/' directory for shared CLI and core utilities.
  • Centralize dependency management using pnpm workspace catalogs.
  • Add ImageBackgroundRemover component to the frontend.
  • Update backend and frontend to align with the new workspace structure.
  • Replace markdownlint with Biome and refine Turbo tasks.
  • Perform a global refactor of routes, middleware, and UI components.
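For readers unfamiliar with the workspace-catalog bullet above: pnpm catalogs let a monorepo pin shared dependency versions once in pnpm-workspace.yaml, which individual packages then reference with the `catalog:` specifier. A minimal sketch is shown below; the workspace names and versions are illustrative, not taken from this repo.

```yaml
# pnpm-workspace.yaml (illustrative layout)
packages:
  - frontend
  - backend
  - package

# Shared version pins; packages reference these with "catalog:".
catalog:
  react: ^18.3.1
  axios: ^1.7.0
```

A consuming package.json would then declare, for example, `"axios": "catalog:"`, and pnpm resolves it to the catalog's version, keeping all workspaces on one pin.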

Summary by CodeRabbit

  • New Features

    • Added image background removal tool powered by AI.
  • Documentation

    • Updated README to emphasize npm package distribution and monorepo structure.
    • Clarified package availability and supported features.
  • Chores

    • Updated linting and formatting configuration for improved code quality.

@vercel

vercel bot commented Mar 23, 2026

The latest updates on your projects.

Project | Deployment | Actions | Updated (UTC)
dkutility | Ready | Preview, Comment | Mar 23, 2026 6:17pm
dkutils | Ready | Preview, Comment | Mar 23, 2026 6:17pm

@coderabbitai
Contributor

coderabbitai bot commented Mar 23, 2026

📝 Walkthrough


This PR applies extensive indentation reformatting (spaces to tabs) across backend and frontend code, refactors the GitHub Actions publishing workflow to unconditional version bumping, updates the README to emphasize npm package distribution, replaces Sharp image processing with Jimp in image routes, adds a new image background removal component, and updates configuration tooling (Biome) with comprehensive linting rules.

Changes

Cohort / File(s) Summary
GitHub Actions Publishing Workflow
.github/workflows/publish.yml
Replaced conditional version-bump logic with unconditional bumps for website and package; changed version extraction from single to dual versions (website_version, package_version); simplified git staging to hardcoded file paths; altered tagging to use only website_version; removed conditional commit tracking.
Documentation & Linting Configuration
README.md, .markdownlint.json, .markdownlintignore
Rewrote README to focus on npm package as primary distribution with monorepo layout details, updated pnpm build/test instructions, and removed marketing/features sections; deleted .markdownlint.json config entirely; removed backend/docs/README.md from linting ignore list.
Biome Configuration
biome.json
Added comprehensive $schema, files.ignore globs, global formatter with tabs and 100-char width, linter with recommended rules (excluding noNonNullAssertion and noExplicitAny), and JavaScript/JSON formatter configurations.
Backend & Frontend Package Dependencies & Scripts
backend/package.json, frontend/package.json
Updated multiple dependencies to catalog: version specifiers; removed sharp from backend; added jimp to backend and @imgly/background-removal to frontend; added lint scripts using Biome and updated test scripts with informational echo messages.
Backend Formatting (Models & Middleware)
backend/models/*, backend/middleware/*, backend/utils/*, backend/index.js
Converted indentation from spaces to tabs across schema definitions, middleware logic, and utility functions (ApiActivity, ServiceUsage, ToolUsage, Url, User, auth, authorize, uploadLimiter, apiActivityTracker, ipValidation, pdfErrorHandler, supabaseClient, supabaseCleaner) without functional logic changes.
Backend Route Refactoring (Image & File Processing)
backend/routes/imageConverter.js
Replaced sharp-based image processing (PNG-to-JPG, image-to-PDF, resize, compress, flip, grayscale) with jimp-based operations; updated output filename patterns with _dkutils_ variants; changed /download filename validation to include dkutils- prefix and alternate _dkutils_ acceptance path.
Backend Route Changes (Favicon, PDF, Office)
backend/routes/favicon.js, backend/routes/pdfConverter.js, backend/routes/officeConverter.js
Updated favicon filenames to favicon_dkutils_${Date.now()} pattern and replaced forEach with for...of loops; modified PDF rotation to accumulate current page rotation before applying requested angle with modulo normalization; used path.parse() for office converter output filenames from uploaded PDF base names.
Backend Route Formatting
backend/routes/screenshot.js, backend/routes/cleanSupabase.js, backend/routes/jsonXmlConverter.js, backend/routes/keepAlive.js, backend/routes/shortener.js, backend/routes/textToPdf.js, backend/routes/passwordStrength.js, backend/routes/textConverter.js, backend/routes/redirectChecker.js, backend/routes/seoTools.js, backend/routes/auth.js, backend/routes/analytics.js
Indentation reformatting (spaces to tabs), Node node: prefix imports, for...of loop refactoring in analytics, and minor filename pattern updates (e.g., screenshot_dkutils_${Date.now()}.png, dkutils-text-output-${Date.now()}.pdf); core request/response logic and error handling unchanged.
Backend API Documentation
backend/docs/api-documentation.json
Reformatted JSON indentation from spaces to tabs; no structural or content changes to endpoints, middleware lists, or metadata.
Frontend Component Formatting & Imports
frontend/src/components/Base64TextConverter, frontend/src/components/CsvToJsonConverter, frontend/src/components/ExcelToPdfConverter, frontend/src/App, frontend/src/components/Footer, frontend/src/components/Navbar, frontend/src/components/ToolCard, and 20+ others
Reordered module imports (typically moving axios before React or rearranging hook imports), converted space indentation to tabs, replaced forEach with for...of loops in file validation, updated optional chaining (file?.type vs file && file.type), and converted parseInt/isNaN to Number.parseInt/Number.isNaN; core component logic and JSX structure unchanged.
Frontend New Component
frontend/src/components/ImageBackgroundRemover.jsx
Added new React component using @imgly/background-removal library to upload images, validate file type, remove background, and trigger PNG download with -no-bg suffix; includes file validation, loading state, error toasts, and tool usage tracking.
Frontend File Validation Refactoring
frontend/src/components/ImageCompressor, frontend/src/components/ImageFlipper, frontend/src/components/ImageGrayscaler, frontend/src/components/ImageFormatConverter, frontend/src/components/ImageToBase64Converter, frontend/src/components/PngToJpgConverter, frontend/src/components/PdfMerger
Refactored file validation from forEach with hasInvalidFile flag to early-exit for...of loops; updated optional chaining for file type checks; consolidated error toast expressions to single lines.
Frontend Advanced Component Changes
frontend/src/components/TextDifferenceChecker, frontend/src/components/QrCodeScanner, frontend/src/components/PasswordStrengthChecker, frontend/src/components/SeoTools, frontend/src/components/PdfPageDeleter, frontend/src/components/PdfSplitter
Updated optional chaining for safety checks, replaced dangerouslySetInnerHTML in TextDifferenceChecker with direct rendering, adjusted effect dependency arrays (trackToolUsage), changed parseInt to Number.parseInt for form input parsing, and refactored range validation logic.
Frontend Public Assets & Configuration
frontend/public/manifest.json
Reformatted JSON indentation from spaces to tabs; manifest fields and values unchanged.
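The pdfConverter.js rotation change summarized above (accumulating the current page rotation, then applying the requested angle with modulo normalization) can be sketched as a pure helper, independent of whatever PDF library the route uses. The function name is illustrative, not taken from the PR.

```javascript
// Normalize an accumulated page rotation to one of 0, 90, 180, 270.
// currentAngle: the page's existing rotation in degrees.
// requestedAngle: the user-requested delta (may be negative).
function normalizeRotation(currentAngle, requestedAngle) {
	const total = currentAngle + requestedAngle;
	// Double modulo keeps the result non-negative even for negative inputs.
	return ((total % 360) + 360) % 360;
}
```

For example, a page already rotated 270° with a further 180° requested lands on 90°, and a -90° request on an unrotated page yields 270° rather than a negative angle.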

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

enhancement, refactoring


🐰 A hop, a skip, through tabs so neat,
Sharp to Jimp, a swappy treat!
Background vanished, images bright,
Workflow trimmed to morning light.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

- Description check — ❓ Inconclusive: The description lists specific, concrete changes but lacks detail on testing, related issues, and self-review confirmation; key sections from the template are missing or incomplete. Resolution: add 'Related Issues' and 'How Has This Been Tested?' sections and complete the 'Checklist' to align with the template and provide comprehensive context.
✅ Passed checks (2 passed)
- Title check — ✅ Passed: The title clearly summarizes the main changes (introducing a shared package, restructuring the monorepo, and adding a new component) and accurately reflects the primary objectives.
- Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.




@coderabbitai coderabbitai bot added the enhancement New feature or request label Mar 23, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🟠 Major comments (20)
frontend/src/components/PdfSplitter.jsx-88-90 (1)

88-90: ⚠️ Potential issue | 🟠 Major

Reject malformed range tokens like 1-2-3 during validation.

On Line 88 and Line 89, split("-") + destructuring accepts extra separators silently (1-2-3 becomes start=1, end=2). That makes invalid input appear valid and may send unintended ranges downstream.

Proposed fix
 			for (const part of parts) {
 				if (part.includes("-")) {
-					const [start, end] = part.split("-").map(Number);
+					const bounds = part.split("-").map((v) => v.trim());
+					if (bounds.length !== 2 || bounds[0] === "" || bounds[1] === "") {
+						isValidRange = false;
+						break;
+					}
+					const [start, end] = bounds.map(Number);
 					if (
 						Number.isNaN(start) ||
 						Number.isNaN(end) ||
 						start <= 0 ||
 						end > totalPages ||
 						start > end
 					) {
 						isValidRange = false;
 						break;
 					}

Also applies to: 96-99

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/PdfSplitter.jsx` around lines 88 - 90, The
range-parsing logic in PdfSplitter.jsx currently uses part.split("-") with array
destructuring (seen around the block handling part.includes("-")) which silently
accepts malformed tokens like "1-2-3"; update that logic to first split and
verify the split result has exactly two segments, confirm both segments parse to
finite numbers, and enforce start <= end before accepting the range (apply the
same fix to the second occurrence around lines 96-99); locate the parsing code
in the PdfSplitter component / function that iterates parts, replace the direct
destructuring with an explicit check of splitParts.length === 2 and numeric
validation before converting to numbers and pushing the range.
frontend/src/components/ImageResizer.jsx-81-98 (1)

81-98: ⚠️ Potential issue | 🟠 Major

Handle async resize failures and null canvas context to avoid stuck loading state

setLoading(true) is called at Line 78, but the resize pipeline has no reader.onerror / img.onerror handling, and ctx at Line 88 can be null before Line 89. Any failure here can leave the UI stuck on “Resizing...”.

Suggested fix
 	const reader = new FileReader();
+	reader.onerror = () => {
+		toast.error("Failed to read the image file.");
+		setLoading(false);
+	};
 	reader.onload = (event) => {
 		const img = new Image();
+		img.onerror = () => {
+			toast.error("Failed to decode the uploaded image.");
+			setLoading(false);
+		};
 		img.onload = () => {
 			const canvas = document.createElement("canvas");
 			canvas.width = width;
 			canvas.height = height;
 			const ctx = canvas.getContext("2d");
+			if (!ctx) {
+				toast.error("Canvas is not supported in this browser.");
+				setLoading(false);
+				return;
+			}
 			ctx.drawImage(img, 0, 0, width, height);
 			const dataUrl = canvas.toDataURL(originalImage.type);
 			setResizedImageSrc(dataUrl);

 			handleDownload(dataUrl, `resized-${originalImage ? originalImage.name : "image"}`);
 			setLoading(false);
 		};
 		img.src = event.target.result;
 	};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageResizer.jsx` around lines 81 - 98, The resize
pipeline can fail and leave the UI stuck because reader.onerror and img.onerror
are not handled and the canvas getContext("2d") (ctx) may be null; add
reader.onerror and img.onerror handlers that call setLoading(false),
setResizedImageSrc(null) or an error state, and surface/log the error instead of
proceeding, and after creating ctx verify it's not null before calling
ctx.drawImage / canvas.toDataURL / handleDownload; ensure every early-exit or
error path calls setLoading(false) so the UI won’t remain in “Resizing...”.
frontend/src/components/ImageCropper.jsx-53-79 (1)

53-79: ⚠️ Potential issue | 🟠 Major

Guard crop flow with try/finally to prevent sticky loading state.

If any canvas/image operation throws, setLoading(false) is skipped and the button can remain permanently disabled. Add ref/context guards and wrap the crop body in try/finally.

Proposed fix
 const handleCrop = () => {
 	if (!imageSrc) {
 		toast.error("Please upload an image first.");
 		return;
 	}

-	setLoading(true);
-	trackToolUsage("ImageCropper", "image");
-	const image = imageRef.current;
-	const canvas = canvasRef.current;
-	const ctx = canvas.getContext("2d");
+	const image = imageRef.current;
+	const canvas = canvasRef.current;
+	if (!image || !canvas) {
+		toast.error("Image is not ready yet. Please try again.");
+		return;
+	}
+	const ctx = canvas.getContext("2d");
+	if (!ctx) {
+		toast.error("Canvas is unavailable in this browser.");
+		return;
+	}
+
+	setLoading(true);
+	try {
+		trackToolUsage("ImageCropper", "image");

-	const cropX = image.naturalWidth * 0.25;
-	const cropY = image.naturalHeight * 0.25;
-	const cropWidth = image.naturalWidth * 0.5;
-	const cropHeight = image.naturalHeight * 0.5;
+		const cropX = image.naturalWidth * 0.25;
+		const cropY = image.naturalHeight * 0.25;
+		const cropWidth = image.naturalWidth * 0.5;
+		const cropHeight = image.naturalHeight * 0.5;

-	canvas.width = cropWidth;
-	canvas.height = cropHeight;
+		canvas.width = cropWidth;
+		canvas.height = cropHeight;

-	ctx.drawImage(image, cropX, cropY, cropWidth, cropHeight, 0, 0, cropWidth, cropHeight);
-	const dataUrl = canvas.toDataURL(image.type);
-	setCroppedImageSrc(dataUrl);
+		ctx.drawImage(image, cropX, cropY, cropWidth, cropHeight, 0, 0, cropWidth, cropHeight);
+		const dataUrl = canvas.toDataURL("image/png");
+		setCroppedImageSrc(dataUrl);

-	handleDownload(dataUrl, `cropped-image-${Date.now()}.png`);
-	setLoading(false);
+		handleDownload(dataUrl, `cropped-image-${Date.now()}.png`);
+	} finally {
+		setLoading(false);
+	}
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageCropper.jsx` around lines 53 - 79, The cropping
flow in handleCrop can throw and skip setLoading(false), so wrap the main crop
logic in try/finally and validate refs/contexts up front; call setLoading(true)
before try, then in finally call setLoading(false). Inside try, guard
imageRef.current, canvasRef.current and canvas.getContext(...) and bail with a
toast.error if missing, then perform ctx.drawImage, canvas.toDataURL and
setCroppedImageSrc and handleDownload; optionally catch errors to toast.error
before rethrowing or letting finally run.
frontend/src/components/HashGenerator.jsx-34-36 (1)

34-36: ⚠️ Potential issue | 🟠 Major

MD5 is cryptographically broken and should not be used for security-sensitive operations.

MD5 is vulnerable to collision attacks and is considered cryptographically insecure. If this hash generator is used for security-sensitive purposes (password hashing, digital signatures, data integrity verification, authentication tokens), you should remove the MD5 option or add a prominent warning.

MD5 is acceptable only for non-security use cases such as checksums, cache keys, or quick content hashing where collision resistance is not critical.

⚠️ Recommended actions

If security-sensitive:

  • Remove MD5 option entirely, or
  • Add a visible warning in the UI about MD5's security limitations
  • Consider offering only SHA-256, SHA-384, SHA-512, or bcrypt/scrypt for sensitive data

If non-security use case:

  • Document clearly that MD5 is provided only for legacy/non-security purposes
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/HashGenerator.jsx` around lines 34 - 36, The
HashGenerator component currently enables MD5 handling in the branch where
algorithm === "MD5" (calling setHashMd5 and trackToolUsage); since MD5 is
cryptographically insecure, either remove the MD5 option from the algorithm
selection and delete the MD5-specific branch (remove references to "MD5",
setHashMd5, and associated tracking), or keep MD5 only for non-security use by
adding a prominent UI warning shown when algorithm === "MD5" (render a visible
warning element in the HashGenerator component and update any docs) and ensure
any wording clarifies MD5 is for legacy/checksum use only.
frontend/src/components/ImageGrayscaler.jsx-50-53 (1)

50-53: ⚠️ Potential issue | 🟠 Major

Add timeout to the upload request.

The axios.post call on lines 50-53 lacks a timeout configuration. Without it, a stalled network connection can leave the UI stuck in a loading state indefinitely. No global timeout is configured in the frontend.

Suggested fix
			const res = await axios.post(
 				`${import.meta.env.VITE_API_BASE_URL}/api/convert/image-grayscale`,
 				formData,
+				{ timeout: 30000 },
 			);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageGrayscaler.jsx` around lines 50 - 53, The
axios.post call in ImageGrayscaler.jsx that sends formData to
`${import.meta.env.VITE_API_BASE_URL}/api/convert/image-grayscale` lacks a
timeout, so add a per-request timeout to the axios.post call (e.g., pass a
config object with timeout: 30000 or an appropriate value) so the request fails
fast on network stalls; ensure the request signature that sets res is updated to
include the config object (axios.post(..., formData, { timeout: <ms> })) and the
surrounding error handling for the res/formData upload path handles timeout
errors gracefully.
frontend/src/components/Base64TextConverter.jsx-16-23 (1)

16-23: ⚠️ Potential issue | 🟠 Major

Guard conversion and always clear loading state.

Line 20 runs conversion inside setTimeout, but if any runtime error occurs there, setLoading(false) on Line 21 is skipped and the button can stay disabled indefinitely. Wrap conversion in try/catch/finally inside the timeout callback.

Proposed fix
 const encodeBase64 = () => {
 	setLoading(true);
 	trackToolUsage("Base64TextConverter", "text");
 	setTimeout(() => {
-		setConvertedText(btoa(text));
-		setLoading(false);
+		try {
+			setConvertedText(btoa(text));
+		} catch (error) {
+			setConvertedText("");
+			toast.error("Failed to encode text.");
+		} finally {
+			setLoading(false);
+		}
 	}, 500);
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/Base64TextConverter.jsx` around lines 16 - 23, The
encodeBase64 function runs btoa inside a setTimeout and may throw, leaving
loading true; modify the setTimeout callback to wrap the conversion in
try/catch/finally so any exception is caught (log or surface the error) and
ensure setLoading(false) is always executed in finally; specifically update the
encodeBase64 implementation that calls setLoading(true),
trackToolUsage("Base64TextConverter","text"), setConvertedText(btoa(text)) to
instead call btoa(text) inside try, call setConvertedText on success, handle
errors in catch (e.g., setConvertedText("") or surface an error), and always
call setLoading(false) in finally.
backend/routes/textConverter.js-9-20 (1)

9-20: ⚠️ Potential issue | 🟠 Major

Harden request validation for text type before conversion.

Current checks only verify presence. A non-string text can still reach conversion and produce a server error path. Validate type early and return 400 for invalid payloads.

Proposed fix
-	if (!text || !type) {
+	if (typeof text !== "string" || text.length === 0 || !type) {
 		return res.status(400).json({ msg: "Text and type (encode/decode) are required." });
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/textConverter.js` around lines 9 - 20, The handler in
textConverter.js currently only checks presence of text and type but not their
types; update the request validation to ensure text is a non-empty string and
type is a string equal to "encode" or "decode" before attempting conversion:
check typeof text === "string" (and optionally text.trim().length > 0) and
typeof type === "string" and validate type against the allowed set, returning
res.status(400).json({ msg: "Text must be a string and type must be 'encode' or
'decode'." }) for invalid payloads; keep the existing conversion logic
(Buffer.from(...)) unchanged once validation passes and reference the variables
text, type and the result assignment for where to apply the checks.
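The type check this comment proposes can be isolated into a small guard that the route handler calls before converting. validateTextPayload is a hypothetical helper for illustration, not code from the route.

```javascript
// Returns an error message for invalid payloads, or null when the payload is usable.
function validateTextPayload(body) {
	const { text, type } = body ?? {};
	if (typeof text !== "string" || text.length === 0) {
		return "Text must be a non-empty string.";
	}
	// Restrict type to the two supported operations.
	if (type !== "encode" && type !== "decode") {
		return "Type must be 'encode' or 'decode'.";
	}
	return null;
}
```

In the handler, a non-null result maps directly to `res.status(400).json({ msg })`, keeping the existing Buffer-based conversion untouched for valid input.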
backend/routes/redirectChecker.js-20-23 (1)

20-23: ⚠️ Potential issue | 🟠 Major

Bound the outbound requests.

Neither probe sets a timeout, and the fallback axios.get buffers the full response body by default. A slow or very large target can pin a worker and burn memory/bandwidth.

Also applies to: 42-46

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/redirectChecker.js` around lines 20 - 23, The outbound HTTP
probes (the axios.head call for currentUrl and its fallback axios.get) lack
timeouts and allow unbounded response bodies; update the axios requests
(reference axios.head for currentUrl and the fallback axios.get) to include a
reasonable timeout value and protect against large bodies by either using
responseType: 'stream' or setting maxContentLength/maxBodyLength limits, and
ensure validateStatus remains appropriate; apply the same changes to the other
probe block (the second axios.head / axios.get pair) so both have timeouts and
bounded response handling.
frontend/src/components/ExcelToPdfConverter.jsx-10-18 (1)

10-18: ⚠️ Potential issue | 🟠 Major

Read isAuthenticated from state here too.

Other AuthContext consumers in this PR use state.isAuthenticated; this destructure pulls a nonexistent top-level field, so signed-in users still hit the 10MB limit.

Suggested fix
-	const { isAuthenticated } = useContext(AuthContext);
+	const {
+		state: { isAuthenticated },
+	} = useContext(AuthContext);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ExcelToPdfConverter.jsx` around lines 10 - 18, The
component is reading isAuthenticated incorrectly from AuthContext; replace the
current destructure that pulls a top-level isAuthenticated with reading
state.isAuthenticated from the context (i.e., change the useContext/AuthContext
usage so onFileChange uses context.state.isAuthenticated) so the maxFileSize
calculation uses the signed-in flag used elsewhere; update references in this
file (e.g., the existing isAuthenticated variable usage in onFileChange) to use
the context.state.isAuthenticated value.
frontend/src/components/ImageToPdfConverter.jsx-29-53 (1)

29-53: ⚠️ Potential issue | 🟠 Major

Reset selection state before returning on invalid files.

These early returns skip the post-loop reset, so a failed reselection leaves the previous valid files in selectedFiles. The next submit can upload the wrong batch.

Suggested fix
-		const validFiles = [];
-		let hasInvalidFile = false;
+		const validFiles = [];
 
 		for (const file of files) {
 			if (!allowedTypes.includes(file.type)) {
 				toast.error(
 					`Invalid file type: ${file.name}. Only images (JPEG, PNG, GIF, WebP, TIFF, AVIF) are allowed.`,
 				);
-				hasInvalidFile = true;
+				setSelectedFiles([]);
+				e.target.value = "";
 				return;
 			}
 			if (file.size > maxSize) {
 				toast.error(
 					`File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
 				);
-				hasInvalidFile = true;
+				setSelectedFiles([]);
+				e.target.value = "";
 				return;
 			}
 			validFiles.push(file);
 		}
 
 		setSelectedFiles(validFiles);
-		if (hasInvalidFile) {
-			e.target.value = "";
-		}
 	};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageToPdfConverter.jsx` around lines 29 - 53, In
ImageToPdfConverter.jsx fix the early-return logic inside the file loop so that
when an invalid file is detected you clear the selection state before returning:
call setSelectedFiles([]) and reset the input value (e.target.value = "")
immediately when detecting invalid type or oversize (inside the branches where
you currently set hasInvalidFile = true and return), then return; also remove
the separate post-loop hasInvalidFile handling since state is already cleared;
this ensures selectedFiles and the file input are reset when validation fails.
frontend/src/components/PngToJpgConverter.jsx-21-43 (1)

21-43: ⚠️ Potential issue | 🟠 Major

Clear stale files before bailing out on validation errors.

This has the same state bug as the image-to-PDF picker: the early returns skip the reset path, so an invalid reselection can leave the previous PNG batch in selectedFiles.

Suggested fix
-		const validFiles = [];
-		let hasInvalidFile = false;
+		const validFiles = [];
 
 		for (const file of files) {
 			if (!allowedTypes.includes(file.type)) {
 				toast.error(`Invalid file type: ${file.name}. Only PNG images are allowed.`);
-				hasInvalidFile = true;
+				setSelectedFiles([]);
+				e.target.value = "";
 				return;
 			}
 			if (file.size > maxSize) {
 				toast.error(
 					`File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
 				);
-				hasInvalidFile = true;
+				setSelectedFiles([]);
+				e.target.value = "";
 				return;
 			}
 			validFiles.push(file);
 		}
 
 		setSelectedFiles(validFiles);
-		if (hasInvalidFile) {
-			e.target.value = "";
-		}
 	};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/PngToJpgConverter.jsx` around lines 21 - 43, The
handler currently returns early on validation failures and skips clearing
previous state, so stale selectedFiles remain; modify the loop in the file
validation block (where files, allowedTypes, maxSize are checked) to call
setSelectedFiles([]) and reset the file input (e.target.value = "") immediately
before any return on invalid type or size, and ensure the local validFiles array
is not used after a failure; keep setSelectedFiles(validFiles) only when the
loop completes without errors.
frontend/src/components/Navbar.jsx-86-92 (1)

86-92: ⚠️ Potential issue | 🟠 Major

Add an accessible name to the mobile menu trigger.

This is an icon-only control. aria-expanded and aria-controls help, but they do not tell screen-reader users what the button does.

Suggested fix
 						<Button
 							variant="ghost"
 							size="icon"
 							onClick={() => setMobileMenuOpen(!mobileMenuOpen)}
+							aria-label={mobileMenuOpen ? "Close navigation menu" : "Open navigation menu"}
 							aria-expanded={mobileMenuOpen}
 							aria-controls="mobile-menu"
 						>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/Navbar.jsx` around lines 86 - 92, The mobile menu
Button (the icon-only control that calls setMobileMenuOpen and uses
mobileMenuOpen with aria-expanded and aria-controls="mobile-menu") lacks an
accessible name; add one by providing either an aria-label or aria-labelledby
that conveys action and state (e.g., dynamic aria-label using mobileMenuOpen
such as "Open mobile menu" / "Close mobile menu"), or include visually-hidden
descriptive text inside the Button so screen readers know what the control does.
backend/routes/pdfConverter.js-118-120 (1)

118-120: ⚠️ Potential issue | 🟠 Major

Sanitize user-controlled filenames before building storage keys.

Lines 118, 219, and 313 derive object names from file.originalname without sanitization. This can introduce unsafe characters/path-like segments into stored filenames.

🛡️ Proposed fix
 const router = require("express").Router();
 const path = require("node:path");
@@
 const { handlePdfError, validatePdfFile, validatePageRange } = require("../utils/pdfErrorHandler");
+
+const safeBaseName = (originalname) => {
+	const base = path.parse(String(originalname || "file")).name.normalize("NFKC");
+	const sanitized = base.replace(/[^a-zA-Z0-9._-]/g, "_").replace(/_+/g, "_").slice(0, 80);
+	return sanitized || "file";
+};
@@
-			const nameWithoutExt = path.parse(file.originalname).name;
+			const nameWithoutExt = safeBaseName(file.originalname);
 			const outputFileName = `${nameWithoutExt}_dkutils_split_${Date.now()}.pdf`;
@@
-			const nameWithoutExt = path.parse(file.originalname).name;
+			const nameWithoutExt = safeBaseName(file.originalname);
 			const outputFileName = `${nameWithoutExt}_dkutils_rotated_${Date.now()}.pdf`;
@@
-			const nameWithoutExt = path.parse(file.originalname).name;
+			const nameWithoutExt = safeBaseName(file.originalname);
 			const outputFileName = `${nameWithoutExt}_dkutils_compressed_${Date.now()}.pdf`;

Also applies to: 219-221, 313-315

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/pdfConverter.js` around lines 118 - 120, The code builds
storage object keys from the user-controlled file.originalname (see variables
nameWithoutExt and outputFileName and the supabase.storage upload calls) without
sanitization; update the logic that derives nameWithoutExt/outputFileName to
first normalize and sanitize file.originalname by removing any path segments,
replacing or removing unsafe characters (e.g., ../, slashes, nulls), restricting
to a safe character whitelist (letters, numbers, dashes, underscores),
collapsing whitespace, and providing a deterministic fallback (e.g., UUID or
timestamp) if the sanitized name is empty; apply this same sanitization helper
wherever file.originalname is used (the occurrences producing nameWithoutExt at
the start and the other two spots referenced in the diff) so storage keys are
safe and predictable.
backend/routes/analytics.js-26-44 (1)

26-44: ⚠️ Potential issue | 🟠 Major

Harden /track input validation before DB update.

Line 28 only checks presence. Non-string or oversized toolName/category values can still reach findOneAndUpdate, risking polluted analytics data and avoidable DB stress.

🔒 Proposed fix
 		const { toolName, category } = req.body;
 
-		if (!toolName || !category) {
+		if (typeof toolName !== "string" || typeof category !== "string") {
+			return res.status(400).json({ msg: "Tool name and category must be strings." });
+		}
+
+		const normalizedToolName = toolName.trim();
+		const normalizedCategory = category.trim();
+
+		if (!normalizedToolName || !normalizedCategory) {
 			return res.status(400).json({ msg: "Tool name and category are required." });
 		}
 
-		if (!isValidCategory(category)) {
+		if (normalizedToolName.length > 120) {
+			return res.status(400).json({ msg: "Tool name is too long." });
+		}
+
+		if (!isValidCategory(normalizedCategory)) {
 			return res.status(400).json({ msg: "Invalid category" });
 		}
 
 		// Find and update or create new entry
 		const toolUsage = await ToolUsage.findOneAndUpdate(
-			{ toolName, category },
+			{ toolName: normalizedToolName, category: normalizedCategory },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/analytics.js` around lines 26 - 44, Validate and sanitize
request inputs before calling ToolUsage.findOneAndUpdate: ensure
req.body.toolName and req.body.category are strings, trim them, enforce length
limits (e.g., max 100 for toolName, max 50 for category), and reject non-string
or oversized values with 400; continue to call isValidCategory(category) after
trimming; this prevents malformed or excessively large values from reaching the
database and keeps the update in ToolUsage.findOneAndUpdate safe and consistent.
backend/routes/officeConverter.js-89-90 (1)

89-90: ⚠️ Potential issue | 🟠 Major

Unsanitized user-supplied filename creates potential security risk.

file.originalname is provided by the client and is not sanitized by the uploadLimiter middleware (which uses multer.memoryStorage() without filename filtering). While path.parse().name strips the extension and directory components, it does not sanitize against:

  • Null bytes or control characters
  • Unicode normalization attacks
  • Excessively long filenames
  • Reserved names or characters that may cause issues in storage systems

Consider sanitizing the filename before using it in the storage path:

🛡️ Proposed fix to sanitize filename
+const sanitizeFilename = (name) => {
+	// Remove path separators, null bytes, and control characters
+	return name
+		.replace(/[\x00-\x1f\x80-\x9f]/g, '')
+		.replace(/[/\\:*?"<>|]/g, '_')
+		.substring(0, 100); // Limit length
+};
+
 const nameWithoutExt = path.parse(file.originalname).name;
-const fileName = `${nameWithoutExt}_dkutils_converted_${Date.now()}.docx`;
+const fileName = `${sanitizeFilename(nameWithoutExt)}_dkutils_converted_${Date.now()}.docx`;

Apply the same pattern to line 145-146 for the Excel conversion route.

As noted in backend/middleware/uploadLimiter.js (lines 1-35), there is no filename sanitization in the upload middleware chain.

Also applies to: 145-146

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/officeConverter.js` around lines 89 - 90, The code uses
unsanitized file.originalname to build nameWithoutExt and fileName, creating
security risks; implement a sanitizeFilename utility (e.g., normalize Unicode,
strip control/null chars, replace path/reserved characters with safe fallback
like '-', trim and enforce a max length, and disallow reserved names) and call
it before path.parse in the office conversion handlers that set
nameWithoutExt/fileName (also apply the same sanitizeFilename call to the Excel
conversion route where fileName is built); optionally integrate or call this
sanitizer from the uploadLimiter/multer handling to centralize validation and
ensure a safe fallback filename if the sanitized result is empty.
frontend/src/components/ImageCompressor.jsx-31-55 (1)

31-55: ⚠️ Potential issue | 🟠 Major

Clear stale selections before returning on validation errors.

The early return now skips both setSelectedFiles(validFiles) and the input reset. If a user already had valid files selected, then picks a batch with one invalid file, submit still uses the previous selection.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageCompressor.jsx` around lines 31 - 55, The loop
in the file picker exits early on a validation error, skipping
setSelectedFiles(validFiles) and the input reset so previous selections remain;
update the handler (the block using allowedTypes, maxSize, toast, and
setSelectedFiles) so that on detecting an invalid file you immediately clear the
selection state and the input (call setSelectedFiles([]) and set e.target.value
= "" or equivalent) before returning, or alternatively stop using early returns
and instead set hasInvalidFile = true and break the loop, then after the loop
always call setSelectedFiles(validFiles) and if hasInvalidFile clear
e.target.value; ensure you reference the same symbols (allowedTypes, maxSize,
toast, setSelectedFiles, e.target.value) when editing the code.
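The suggested "collect-then-commit" flow can be sketched as a framework-free helper (the helper name is hypothetical; `allowedTypes` and `maxSize` mirror the component's existing checks):

```javascript
// Hypothetical helper: validate the whole batch first, and report whether
// anything was invalid so the caller can clear state in one place instead
// of returning early mid-loop.
function partitionSelectedFiles(files, allowedTypes, maxSize) {
	const validFiles = [];
	let hasInvalidFile = false;
	for (const file of files) {
		if (!allowedTypes.includes(file.type) || file.size > maxSize) {
			hasInvalidFile = true;
			continue;
		}
		validFiles.push(file);
	}
	return { validFiles, hasInvalidFile };
}

// In the handler this would drive state, roughly:
// const { validFiles, hasInvalidFile } =
// 	partitionSelectedFiles([...e.target.files], allowedTypes, maxSize);
// if (hasInvalidFile) {
// 	toast.error("Invalid file in selection.");
// 	setSelectedFiles([]);
// 	e.target.value = "";
// 	return;
// }
// setSelectedFiles(validFiles);
```

Either keeping or dropping the valid remainder of a mixed batch is defensible; the key point is that stale selections never survive a failed validation.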
backend/middleware/apiActivityTracker.js-7-18 (1)

7-18: ⚠️ Potential issue | 🟠 Major

Strip query strings from endpoint before persisting and counting.

req.originalUrl includes query strings in the request, causing each /download?filename=... request to create a distinct ServiceUsage row (due to the unique constraint on the endpoint field) instead of incrementing a single counter for the /download endpoint. This also stores user-supplied filenames in ApiActivity.endpoint. Use req.path instead to track endpoints by their route pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/middleware/apiActivityTracker.js` around lines 7 - 18, The middleware
currently uses req.originalUrl when creating ApiActivity and when updating
ServiceUsage and TotalUsage, which preserves query strings and user-supplied
data; change these to use req.path for the endpoint value so query strings are
stripped before persisting and counting. Update the code that constructs
ApiActivity (where apiActivity.endpoint is set) to use req.path instead of
req.originalUrl, and update the ServiceUsage.findOneAndUpdate lookup and
increment key to use req.path as the endpoint identifier; keep TotalUsage logic
unchanged. Ensure any references to req.originalUrl in this file (e.g., around
apiActivity.save, ServiceUsage.findOneAndUpdate, and the endpoint field) are
replaced with req.path so counts aggregate per route and no query params are
stored.
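In Express, `req.path` already gives the origin-form path without the query string, so no extra code is needed there. For illustration, this is the normalization `req.path` effectively performs (helper name is hypothetical):

```javascript
// Illustrative sketch: derive the endpoint key the way req.path would,
// by dropping the query string and fragment from an origin-form URL
// such as "/download?filename=a.pdf".
function normalizeEndpoint(originalUrl) {
	const end = Math.min(
		...["?", "#"].map((c) => {
			const i = originalUrl.indexOf(c);
			return i === -1 ? originalUrl.length : i;
		}),
	);
	return originalUrl.slice(0, end) || "/";
}
```

With this in place, every `/download?filename=...` request aggregates under the single `/download` key instead of creating a new `ServiceUsage` row per filename.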
backend/routes/imageConverter.js-4-7 (1)

4-7: ⚠️ Potential issue | 🟠 Major

The Jimp swap breaks WEBP/AVIF conversion that this endpoint still advertises.

Jimp 1.6.0's default build supports only BMP, GIF, JPEG, PNG, and TIFF. WEBP and AVIF require explicitly imported WASM plugins (@jimp/wasm-webp, @jimp/wasm-avif) and a custom Jimp instance created via createJimp(). This code allowlists webp and avif at line 522, then calls Jimp.read() and getBuffer("image/webp"|"image/avif") directly on the default instance (lines 537–545 and batch paths), causing these requests to fail with unsupported format errors. (https://jimp-dev.github.io/jimp/guides/webp/)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/imageConverter.js` around lines 4 - 7, The endpoint currently
allowlists "webp" and "avif" but uses the default Jimp instance (Jimp.read and
image.getBuffer) which lacks WASM plugin support; fix by importing the WASM
plugins (`@jimp/wasm-webp` and `@jimp/wasm-avif`), create a custom Jimp instance via
createJimp() that registers those plugins, and use that instance for reads and
getBuffer calls (replace direct Jimp.read/getBuffer usage with the
plugin-enabled instance) so WEBP/AVIF conversions succeed.
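A sketch of the plugin-enabled instance, following the pattern in the Jimp v1 WebP guide. The import paths and ESM/CJS interop should be verified against the installed Jimp version (the WASM plugins may be ESM-only, so a CommonJS route could need a dynamic `import()`):

```js
import { createJimp } from "@jimp/core";
import { defaultFormats, defaultPlugins } from "jimp";
import webp from "@jimp/wasm-webp";
import avif from "@jimp/wasm-avif";

// Custom Jimp with WEBP/AVIF registered alongside the default codecs.
const Jimp = createJimp({
	formats: [...defaultFormats, webp, avif],
	plugins: defaultPlugins,
});

// Then use this instance everywhere the route reads or encodes:
// const image = await Jimp.read(req.file.buffer);
// const out = await image.getBuffer("image/webp");
```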
.github/workflows/publish.yml-105-107 (1)

105-107: ⚠️ Potential issue | 🟠 Major

This staging list makes optional docs mandatory again.

Lines 84-92 explicitly treat these README/docs files as optional, but this git add hardcodes them anyway. If any one of those paths is absent, the job dies here before it reaches the git diff --cached guard. git add -u is safer for this workflow because every file it edits is already tracked.

Safer staging
-          git add package.json frontend/package.json backend/package.json package/package.json \
-                  backend/docs/api-documentation.json backend/docs/README.md README.md \
-                  frontend/README.md package/README.md
+          git add -u
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/publish.yml around lines 105 - 107, Replace the hardcoded
git add listing in the publish workflow with a safer staged-update invocation:
change the explicit "git add package.json frontend/package.json
backend/package.json package/package.json backend/docs/api-documentation.json
backend/docs/README.md README.md frontend/README.md package/README.md" step to
"git add -u" so only tracked files that were modified are staged (avoid failing
when optional docs are missing); locate the git add invocation in the .github
workflow step and update it to use the "-u" flag rather than the long fixed file
list.
.github/workflows/publish.yml-6-14 (1)

6-14: ⚠️ Potential issue | 🟠 Major

Prerelease functionality is unreachable with current input options.

The workflow defines prerelease handling on line 137 using ${{ contains(github.event.inputs.release_type, 'pre') }}, but the available release_type options (patch, minor, major) do not contain the substring pre. This evaluates to false for all selections, making the prerelease parameter always false and preventing GitHub Release creation as a prerelease. Since npm version supports prepatch, preminor, premajor, and prerelease, restore the pre* options or add a separate boolean input to enable prerelease publishing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/publish.yml around lines 6 - 14, The workflow's
release_type input currently only offers "patch", "minor", "major" but the
prerelease check uses contains(github.event.inputs.release_type, 'pre'), which
will always be false; update the release_type input (symbol: release_type) to
include the pre variants ("prepatch", "preminor", "premajor", "prerelease") OR
add a new boolean input (e.g., prerelease) and change the prerelease conditional
(the contains(...) usage) to instead check that boolean (e.g.,
github.event.inputs.prerelease) so prerelease publishing can be triggered as
intended.
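The first option (restoring the pre* choices so the existing `contains(github.event.inputs.release_type, 'pre')` check can evaluate to true) would look roughly like this; field values are illustrative:

```yaml
on:
  workflow_dispatch:
    inputs:
      release_type:
        description: "npm version bump to apply"
        required: true
        type: choice
        options:
          - patch
          - minor
          - major
          - prepatch
          - preminor
          - premajor
          - prerelease
```

Alternatively, a separate `type: boolean` input checked via `github.event.inputs.prerelease == 'true'` decouples the bump type from the GitHub Release prerelease flag.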
🟡 Minor comments (13)
frontend/src/components/ToolCard.jsx-1-1 (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Remove the hidden BOM at file start.

Line 1 appears to include a UTF-8 BOM (\uFEFF) before import. This can create noisy diffs or occasional tooling/parsing inconsistencies. Please save the file as UTF-8 without BOM.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ToolCard.jsx` at line 1, Remove the hidden UTF-8 BOM
at the start of the file that precedes the import statement (the invisible
character before "import PropTypes from \"prop-types\";") by re-saving the file
as UTF-8 without BOM or stripping the leading BOM character so the file begins
directly with the import line; ensure no other files are saved with a BOM to
avoid tooling/parsing issues.
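One way to detect and strip a leading UTF-8 BOM (bytes EF BB BF) from the command line, sketched here against a temp file standing in for the component:

```shell
# Create a demo file with a leading BOM (octal escapes for EF BB BF).
printf '\357\273\277import PropTypes from "prop-types";\n' > /tmp/bom-demo.jsx

# Detect: the first three bytes of a BOM'd file are ef bb bf.
head -c 3 /tmp/bom-demo.jsx | od -An -tx1

# Strip: rewrite the file without the first three bytes when they are a BOM.
if [ "$(head -c 3 /tmp/bom-demo.jsx)" = "$(printf '\357\273\277')" ]; then
  tail -c +4 /tmp/bom-demo.jsx > /tmp/bom-demo.jsx.tmp \
    && mv /tmp/bom-demo.jsx.tmp /tmp/bom-demo.jsx
fi
```

The same check works for the other components flagged below; most editors also offer a "UTF-8 without BOM" save option.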
frontend/src/components/WebsiteScreenshotGenerator.jsx-1-1 (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Remove the UTF-8 BOM character at file start.

Line 1 appears to include a BOM before the import statement. This can cause avoidable tooling/lint inconsistencies across environments.

Suggested fix
-import axios from "axios";
+import axios from "axios";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/WebsiteScreenshotGenerator.jsx` at line 1, The file
starts with a UTF-8 BOM before the import statement which can break tools;
remove the BOM character so the file begins exactly with "import axios from
'axios';" and re-save the file as UTF-8 without BOM (update editor/IDE save
settings or run a file encoding conversion), then verify linting/build passes
for WebsiteScreenshotGenerator.jsx.
frontend/src/components/MarkdownToHtmlConverter.jsx-1-1 (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Remove the hidden BOM character at file start.

Line 1 contains a UTF-8 BOM (\uFEFF) before the import. This can cause inconsistent lint/format behavior across tooling/editors.

🔧 Suggested fix
-import { marked } from "marked";
+import { marked } from "marked";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/MarkdownToHtmlConverter.jsx` at line 1, Remove the
hidden UTF-8 BOM character at the start of the file so the line "import { marked
} from \"marked\";" begins with a normal ASCII character; open
frontend/src/components/MarkdownToHtmlConverter.jsx in your editor or run a tool
to strip the BOM and re-save the file without BOM (ensure the first character is
"i" of "import"), then re-run lint/format to confirm no BOM-related issues
remain.
frontend/src/components/PdfToWordConverter.jsx-1-1 (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Remove UTF-8 BOM from the first import line.

Line 1 appears to include a BOM (\uFEFF) before import. This can cause inconsistent linting/parsing behavior across tooling.

🔧 Proposed fix
-import axios from "axios";
+import axios from "axios";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/PdfToWordConverter.jsx` at line 1, The first line of
PdfToWordConverter.jsx contains a UTF-8 BOM before the import which can break
tooling; open the file and remove the leading BOM so the line starts exactly
with "import axios from \"axios\";" (verify there are no hidden characters
before the import), then save the file without BOM/with UTF-8 encoding and run
lint/parse to confirm the issue is resolved.
frontend/src/components/LinkShortener.jsx-1-1 (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Remove the BOM prefix from the module file.

Line 1 appears to include a UTF-8 BOM character (\uFEFF) before import. This can create avoidable parser/lint inconsistencies in some toolchains/editors.

🔧 Proposed fix
-import axios from "axios";
+import axios from "axios";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/LinkShortener.jsx` at line 1, The file
LinkShortener.jsx contains a UTF-8 BOM character at the start of the module
(before the import axios statement) which can break parsers; open the file
containing the import axios line, remove the hidden BOM character at the start
of the file, and re-save the file as UTF-8 without BOM (ensure the first
character of the file is 'i' of "import"); also scan other source files for a
leading BOM and remove if found to avoid toolchain inconsistencies.
frontend/src/components/PdfCompressor.jsx-16-35 (1)

16-35: ⚠️ Potential issue | 🟡 Minor

Handle empty file selection without showing an error toast.

If the file picker is canceled, file is undefined and the current else path shows “Please select a PDF file.” This creates noisy UX for a non-error action.

💡 Suggested fix
 const onFileChange = (e) => {
 	const file = e.target.files[0];
 	const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;
+	if (!file) {
+		return;
+	}
 
-	if (file && file.type === "application/pdf") {
+	if (file.type === "application/pdf") {
 		if (file.size > maxFileSize) {
 			toast.error(
 				`File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
 			);
 			setSelectedFile(null);
 			e.target.value = null;
 		} else {
 			setSelectedFile(file);
 		}
 	} else {
 		toast.error("Please select a PDF file.");
 		setSelectedFile(null);
 		e.target.value = null;
 	}
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/PdfCompressor.jsx` around lines 16 - 35, The
onFileChange handler currently treats a canceled file picker as an error; update
onFileChange to first check for no selection (e.g., if !file or
e.target.files.length === 0) and simply clear state (setSelectedFile(null) and
e.target.value = null if needed) and return early without calling toast.error;
after that, continue existing checks for file.type === "application/pdf",
file.size against maxFileSize (using isAuthenticated to compute the limit), and
call toast.error only for real invalid files (wrong type or too large) to avoid
noise when the picker is canceled.
frontend/src/components/TextCaseConverter.jsx-19-23 (1)

19-23: ⚠️ Potential issue | 🟡 Minor

Add timeout cleanup to avoid stale async state updates.

These setTimeout callbacks can still fire after unmount and attempt state updates. Track timeout IDs and clear them in a cleanup effect on unmount.

Applies to: lines 19-23, 28-32, 37-45

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/TextCaseConverter.jsx` around lines 19 - 23, The
setTimeout callbacks in TextCaseConverter.jsx (used in the conversion handlers
that call setConvertedText and setLoading) can fire after the component
unmounts; modify the conversion functions (e.g., convertToUpper, convertToLower,
convertToTitle) to capture each timeout ID (const id = setTimeout(...)) and
store it (ref or state array), clear any previous timeout before creating a new
one, and add a useEffect cleanup that clears all stored timeout IDs on unmount
using clearTimeout to prevent stale state updates.
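The tracking pattern can be sketched framework-free (names are illustrative). In the component, the registry would live in a `useRef` and `clearAll` would run in a `useEffect` cleanup on unmount, e.g. `useEffect(() => () => registry.clearAll(), [])`:

```javascript
// Minimal sketch of a timeout registry: every scheduled timeout is tracked
// so all pending ones can be cleared at once on unmount.
function createTimeoutRegistry() {
	const pending = new Set();
	return {
		schedule(fn, ms) {
			const id = setTimeout(() => {
				pending.delete(id);
				fn();
			}, ms);
			pending.add(id);
			return id;
		},
		clearAll() {
			for (const id of pending) clearTimeout(id);
			pending.clear();
		},
		get size() {
			return pending.size;
		},
	};
}
```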
frontend/src/components/ImageCropper.jsx-74-74 (1)

74-74: ⚠️ Potential issue | 🟡 Minor

Store the original file MIME type in state and pass it explicitly to toDataURL().

image is an HTMLImageElement and does not have a type property. Passing image.type (which is undefined) to canvas.toDataURL() silently defaults to PNG output. The original file's MIME type is available in handleFileChange (line 28: file.type), but it's not stored. Either store it in state and use it here, or explicitly pass "image/png" to toDataURL() to make the behavior deterministic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageCropper.jsx` at line 74, The code calls
canvas.toDataURL(image.type) but image is an HTMLImageElement without a type;
update handleFileChange to save the uploaded file's MIME (file.type) into
component state (e.g., originalMimeType) and then call
canvas.toDataURL(originalMimeType) inside the cropping/export logic (where
canvas.toDataURL is invoked) so the exported dataURL uses the actual original
file MIME; ensure the state key name matches usages in ImageCropper.jsx and
update any references accordingly.
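The deterministic fallback can be isolated into a small helper (name is illustrative): keep the MIME captured from `file.type` in `handleFileChange`, and fall back to `"image/png"`, the same default `canvas.toDataURL()` applies, when it is missing or not an image type:

```javascript
// Hypothetical helper: choose the export MIME for canvas.toDataURL().
// Returns the stored original type when it is a usable image MIME,
// otherwise the canvas default "image/png".
function exportMimeType(storedType) {
	return typeof storedType === "string" && storedType.startsWith("image/")
		? storedType
		: "image/png";
}

// In the crop/export logic:
// canvas.toDataURL(exportMimeType(originalMimeType));
```

Note that browsers silently fall back to PNG for unsupported encoder types anyway, so normalizing up front just makes that behavior explicit.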
frontend/src/components/ImageToBase64Converter.jsx-24-25 (1)

24-25: ⚠️ Potential issue | 🟡 Minor

Fix misleading limit message for authenticated users.

On Line 24, the toast always says “Login for a higher limit (50MB).” That’s incorrect when isAuthenticated is already true.

Proposed fix
-				toast.error(
-					`File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
-				);
+				const limitMb = maxFileSize / (1024 * 1024);
+				const hint = isAuthenticated ? "" : " Login for a higher limit (50MB).";
+				toast.error(`File too large: ${file.name}. Maximum size is ${limitMb}MB.${hint}`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageToBase64Converter.jsx` around lines 24 - 25, The
toast message in ImageToBase64Converter.jsx currently always tells users to
“Login for a higher limit (50MB)”, which is wrong when isAuthenticated is true;
update the error string construction (where the toast is called) to
conditionally include the login prompt only if isAuthenticated is false — e.g.,
build the message using isAuthenticated ? `File too large: ${file.name}. Maximum
size is ${maxFileSize / (1024 * 1024)}MB.` : `File too large: ${file.name}.
Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit
(50MB).`, locating the change inside the ImageToBase64Converter component where
the toast is invoked.
frontend/src/components/TextDifferenceChecker.jsx-55-65 (1)

55-65: ⚠️ Potential issue | 🟡 Minor

Display and clipboard content mismatch due to HTML entity handling.

With dangerouslySetInnerHTML removed, {diffResult} renders escaped HTML entities literally (e.g., user sees &lt; on screen). However, copyToClipboard uses innerHTML assignment which interprets those entities, so the clipboard receives unescaped text (e.g., <).

If the critical diff logic issue above is fixed by restoring dangerouslySetInnerHTML, this inconsistency resolves itself. If you keep the current plain-text rendering approach instead, remove escapeHtml usage entirely and simplify the copy logic:

♻️ Simplified copy for plain-text approach (if not using dangerouslySetInnerHTML)
 	const copyToClipboard = async () => {
 		try {
-			const tempElement = document.createElement("div");
-			tempElement.innerHTML = diffResult;
-			await navigator.clipboard.writeText(tempElement.innerText);
+			await navigator.clipboard.writeText(diffResult);
 			toast.success("Copied to clipboard!");
 		} catch (error) {

And remove escapeHtml calls since React already escapes text content.

Also applies to: 129-131

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/TextDifferenceChecker.jsx` around lines 55 - 65, The
copy/clipboard behavior is inconsistent because copyToClipboard sets
tempElement.innerHTML (which unescapes entities) while the UI renders escaped
text (dangerouslySetInnerHTML was removed) or uses escapeHtml; fix by making
clipboard content match displayed text: either restore the original HTML
rendering (use dangerouslySetInnerHTML with diffResult and keep current
innerHTML copy logic) or, if you want plain-text rendering, remove escapeHtml
usage and change copyToClipboard to write
navigator.clipboard.writeText(diffResult) (or tempElement.textContent if you
must use a DOM node) so the copied text exactly matches what's shown; update
both the copyToClipboard function and any other copy uses (also referenced
around lines 129-131) to be consistent.
frontend/src/components/ImageGrayscaler.jsx-16-34 (1)

16-34: ⚠️ Potential issue | 🟡 Minor

Don’t clear a valid selection when file dialog is canceled.

On Line 17 and Line 30, canceling the picker (file is undefined) falls into the error branch, clears selectedFile, and shows a toast. That’s a UX regression if the user already had a valid file selected.

💡 Suggested fix
 const onFileChange = (e) => {
-	const file = e.target.files[0];
+	const file = e.target.files?.[0];
+	if (!file) return;
 	const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;

-	if (file?.type.startsWith("image/")) {
+	if (file.type.startsWith("image/")) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ImageGrayscaler.jsx` around lines 16 - 34, The
onFileChange handler incorrectly treats a canceled file dialog (file undefined)
as an error and clears the existing selection; update onFileChange to first
check for absence of e.target.files[0] and simply return early (do not call
setSelectedFile or show toast) so an existing selected file is preserved; keep
the existing image/type/size validation (using file.type.startsWith("image/")
and file.size > maxFileSize) and only clear setSelectedFile and show toast for
real invalid files or oversize cases; reference the onFileChange function,
setSelectedFile, isAuthenticated and toast calls when making the change.
backend/routes/passwordStrength.js-50-53 (1)

50-53: ⚠️ Potential issue | 🟡 Minor

Reject non-string payloads before scoring.

123 or {} pass the current guard and then get coerced by .length and the regex checks, so this endpoint can return nonsense scores instead of 400.

Suggested fix
-	if (!password) {
+	if (typeof password !== "string" || password.length === 0) {
 		return res.status(400).json({ msg: "Password is required." });
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/passwordStrength.js` around lines 50 - 53, The handler
currently only checks for presence of password after "const { password } =
req.body;" but allows non-string values (e.g., 123 or {}) which cause invalid
scoring; update the guard to reject any non-string by checking typeof password
!== 'string' (and keep the empty check) and return res.status(400).json({ msg:
"Password is required and must be a string." }); so the rest of the code that
uses password.length and regexes only runs on bona fide strings.
frontend/src/components/SeoTools.jsx-83-90 (1)

83-90: ⚠️ Potential issue | 🟡 Minor

Fix sitemap “not found” state assignment.

Line 88 stores the error message in sitemapXmlContent instead of sitemapXmlError, so the UI treats an error as valid content.

💡 Proposed fix
 			if (res.data.exists) {
 				setSitemapXmlContent(res.data.content);
 				trackToolUsage("SeoTools:sitemap_xml_success", "web");
 				toast.success("sitemap.xml fetched successfully!");
 			} else {
-				setSitemapXmlContent(res.data.error || "sitemap.xml not found or accessible.");
+				setSitemapXmlError(res.data.error || "sitemap.xml not found or accessible.");
+				setSitemapXmlContent("");
 				trackToolUsage("SeoTools:sitemap_xml_not_found", "web");
 				toast.info("sitemap.xml not found or accessible.");
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/SeoTools.jsx` around lines 83 - 90, The current else
branch in the SeoTools component assigns error text to sitemapXmlContent instead
of the error state; update the else branch to call
setSitemapXmlError(res.data.error || "sitemap.xml not found or accessible.")
(instead of setSitemapXmlContent) so that sitemapXmlError holds the failure
message; keep the existing trackToolUsage("SeoTools:sitemap_xml_not_found",
"web") and toast.info unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 86a01362-068b-425a-b4c7-33edc4643e78

📥 Commits

Reviewing files that changed from the base of the PR and between 1f9b6e7 and db16532.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (134)
  • .github/workflows/publish.yml
  • .markdownlint.json
  • .markdownlintignore
  • README.md
  • backend/docs/api-documentation.json
  • backend/index.js
  • backend/middleware/apiActivityTracker.js
  • backend/middleware/auth.js
  • backend/middleware/authorize.js
  • backend/middleware/uploadLimiter.js
  • backend/models/ApiActivity.js
  • backend/models/ServiceUsage.js
  • backend/models/ToolUsage.js
  • backend/models/Url.js
  • backend/models/User.js
  • backend/package.json
  • backend/routes/analytics.js
  • backend/routes/auth.js
  • backend/routes/cleanSupabase.js
  • backend/routes/favicon.js
  • backend/routes/imageConverter.js
  • backend/routes/jsonXmlConverter.js
  • backend/routes/keepAlive.js
  • backend/routes/officeConverter.js
  • backend/routes/passwordStrength.js
  • backend/routes/pdfConverter.js
  • backend/routes/redirectChecker.js
  • backend/routes/screenshot.js
  • backend/routes/seoTools.js
  • backend/routes/shortener.js
  • backend/routes/textConverter.js
  • backend/routes/textToPdf.js
  • backend/utils/ipValidation.js
  • backend/utils/pdfErrorHandler.js
  • backend/utils/supabaseCleaner.js
  • backend/utils/supabaseClient.js
  • biome.json
  • frontend/package.json
  • frontend/public/manifest.json
  • frontend/src/App.jsx
  • frontend/src/components/Base64TextConverter.jsx
  • frontend/src/components/CsvToJsonConverter.jsx
  • frontend/src/components/ExcelToPdfConverter.jsx
  • frontend/src/components/FaviconExtractor.jsx
  • frontend/src/components/Footer.jsx
  • frontend/src/components/HashGenerator.jsx
  • frontend/src/components/HtmlToMarkdownConverter.jsx
  • frontend/src/components/ImageBackgroundRemover.jsx
  • frontend/src/components/ImageCompressor.jsx
  • frontend/src/components/ImageCropper.jsx
  • frontend/src/components/ImageFlipper.jsx
  • frontend/src/components/ImageFormatConverter.jsx
  • frontend/src/components/ImageGrayscaler.jsx
  • frontend/src/components/ImageResizer.jsx
  • frontend/src/components/ImageToBase64Converter.jsx
  • frontend/src/components/ImageToPdfConverter.jsx
  • frontend/src/components/JsonFormatterValidator.jsx
  • frontend/src/components/JsonXmlConverter.jsx
  • frontend/src/components/LinkShortener.jsx
  • frontend/src/components/MarkdownToHtmlConverter.jsx
  • frontend/src/components/Navbar.jsx
  • frontend/src/components/PasswordGenerator.jsx
  • frontend/src/components/PasswordStrengthChecker.jsx
  • frontend/src/components/PdfCompressor.jsx
  • frontend/src/components/PdfMerger.jsx
  • frontend/src/components/PdfPageDeleter.jsx
  • frontend/src/components/PdfRotator.jsx
  • frontend/src/components/PdfSplitter.jsx
  • frontend/src/components/PdfToExcelConverter.jsx
  • frontend/src/components/PdfToTextConverter.jsx
  • frontend/src/components/PdfToWordConverter.jsx
  • frontend/src/components/PngToJpgConverter.jsx
  • frontend/src/components/QrCodeGenerator.jsx
  • frontend/src/components/QrCodeScanner.jsx
  • frontend/src/components/SeoTools.jsx
  • frontend/src/components/TextCaseConverter.jsx
  • frontend/src/components/TextDifferenceChecker.jsx
  • frontend/src/components/TextToPdfGenerator.jsx
  • frontend/src/components/ToolCard.jsx
  • frontend/src/components/UrlRedirectChecker.jsx
  • frontend/src/components/WebsiteScreenshotGenerator.jsx
  • frontend/src/components/auth/Login.jsx
  • frontend/src/components/auth/Register.jsx
  • frontend/src/components/mode-toggle.jsx
  • frontend/src/components/theme-provider.jsx
  • frontend/src/components/ui/badge.jsx
  • frontend/src/components/ui/button.jsx
  • frontend/src/components/ui/card.jsx
  • frontend/src/components/ui/input.jsx
  • frontend/src/components/ui/label.jsx
  • frontend/src/components/ui/textarea.jsx
  • frontend/src/context/AuthContext.jsx
  • frontend/src/index.css
  • frontend/src/index.jsx
  • frontend/src/lib/utils.js
  • frontend/src/pages/HomePage.jsx
  • frontend/src/pages/ImageToolsPage.jsx
  • frontend/src/pages/PdfToolsPage.jsx
  • frontend/src/pages/TextToolsPage.jsx
  • frontend/src/pages/WebToolsPage.jsx
  • frontend/src/utils/setAuthToken.js
  • frontend/src/utils/useAnalytics.js
  • frontend/src/utils/useSortedTools.js
  • frontend/tailwind.config.js
  • frontend/vercel.json
  • frontend/vite.config.js
  • package.json
  • package/README.md
  • package/package.json
  • package/rslib.config.ts
  • package/scripts/postbuild.mjs
  • package/src/branding.ts
  • package/src/cli.ts
  • package/src/constants/index.ts
  • package/src/image/index.ts
  • package/src/index.ts
  • package/src/interactive.ts
  • package/src/interfaces/common.ts
  • package/src/interfaces/index.ts
  • package/src/pdf/index.ts
  • package/src/types/common.ts
  • package/src/types/shims.d.ts
  • package/src/utils/branding.ts
  • package/src/utils/config.ts
  • package/src/utils/errors.ts
  • package/src/utils/ffmpeg.ts
  • package/src/utils/files.ts
  • package/src/utils/index.ts
  • package/src/utils/watermark.ts
  • package/src/video/index.ts
  • package/tests/dist.test.js
  • package/tsconfig.json
  • pnpm-workspace.yaml
  • turbo.json
💤 Files with no reviewable changes (2)
  • .markdownlintignore
  • .markdownlint.json

Comment on lines +37 to +54
- name: Bump versions
run: |
# Fetch tags and get the last release tag
git fetch --tags
LAST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
if [ -n "$LAST_TAG" ]; then
BASE=$(git rev-list -n1 "$LAST_TAG")
echo "Using base commit $BASE from last tag $LAST_TAG"
else
BASE=$(git rev-parse --verify 4b825dc642cb6eb9a060e54bf8d69288fbee4904)
echo "No tags found, using empty-tree SHA $BASE"
fi
CHANGED_FILES=$(git diff --name-only "$BASE" HEAD)

# Determine which workspaces need version bumps
if echo "$CHANGED_FILES" | grep -q "^frontend/"; then
echo "Frontend changes detected, bumping frontend version"
cd frontend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..
fi
echo "Bumping NPM package version (always patch)"
cd package
npm version patch --no-git-tag-version
cd ..

if echo "$CHANGED_FILES" | grep -q "^backend/"; then
echo "Backend changes detected, bumping backend version"
cd backend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..
fi
echo "Bumping website versions (${{ github.event.inputs.release_type }})"
# Always bump root, frontend, and backend versions
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version

cd frontend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..

# Always bump root version
cd backend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

npm version command clean tree requirement

💡 Result:

The npm version command requires a clean Git working directory (no uncommitted changes) when run in a Git repository and git-tag-version is enabled (default: true). It fails with "Git working directory not clean" if there are unstaged or uncommitted changes. To bypass this requirement, use the --force or -f flag: npm version patch --force. This allows versioning even with a dirty tree, though it warns about potential risks. Alternatively, disable Git tagging with --no-git-tag-version to update package.json without Git operations: npm version patch --no-git-tag-version. npm ci performs a clean install by deleting node_modules and strictly using package-lock.json, but is unrelated to npm version or Git tree requirements. Sources: Official npm docs confirm the clean directory requirement and force option [1]. GitHub issues and Stack Overflow discussions detail the error and workarounds [16,17,23].


🏁 Script executed:

# Check the npm version documentation and flags
npm --version
npm version --help 2>/dev/null | head -50

Repository: gaureshpai/dkutils

Length of output: 526


🌐 Web query:

npm version --force flag documentation

💡 Result:

The --force (or -f) flag for the npm version command skips the check to ensure the git working directory is clean before bumping the version and creating a commit/tag. Normally, npm version fails if there are uncommitted changes, but --force allows it to proceed anyway. This is documented in the official npm docs for the version command (across npm v8 and v9). It warns: "It will fail if the working directory is not clean, unless the -f or --force flag is set." The step "Check to make sure the git working directory is clean before we get started" is explicitly skipped with --force. Note: There was a bug in earlier npm versions (e.g., npm 7) where --force didn't fully work with certain options like from-git, but this was fixed by npm 10.7.0 (2024), and current versions (as of 2026) support it properly. Usage example: npm version patch --force Related config: allow-same-version prevents errors when setting the same version, but --force specifically handles the git dirty state.


🏁 Script executed:

# Get full npm version help
npm version --help

Repository: gaureshpai/dkutils

Length of output: 519


Chained npm version calls will fail after the first bump.

npm version requires a clean git working tree before running. After the first npm version patch at line 41 changes package/package.json, the working tree becomes dirty, causing subsequent npm version calls to fail. The --no-git-tag-version flag suppresses git commit/tag operations but does NOT bypass the clean-tree check.

Add --force to each npm version call to skip the working directory validation:

Fix
           echo "Bumping NPM package version (always patch)"
           cd package
-          npm version patch --no-git-tag-version
+          npm version patch --no-git-tag-version --force
           cd ..

           echo "Bumping website versions (${{ github.event.inputs.release_type }})"
           # Always bump root, frontend, and backend versions
-          npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
+          npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
           
           cd frontend
-          npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
+          npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
           cd ..

           cd backend
-          npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
+          npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
           cd ..
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- name: Bump versions
run: |
# Fetch tags and get the last release tag
git fetch --tags
LAST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
if [ -n "$LAST_TAG" ]; then
BASE=$(git rev-list -n1 "$LAST_TAG")
echo "Using base commit $BASE from last tag $LAST_TAG"
else
BASE=$(git rev-parse --verify 4b825dc642cb6eb9a060e54bf8d69288fbee4904)
echo "No tags found, using empty-tree SHA $BASE"
fi
CHANGED_FILES=$(git diff --name-only "$BASE" HEAD)
# Determine which workspaces need version bumps
if echo "$CHANGED_FILES" | grep -q "^frontend/"; then
echo "Frontend changes detected, bumping frontend version"
cd frontend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..
fi
echo "Bumping NPM package version (always patch)"
cd package
npm version patch --no-git-tag-version
cd ..
if echo "$CHANGED_FILES" | grep -q "^backend/"; then
echo "Backend changes detected, bumping backend version"
cd backend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..
fi
echo "Bumping website versions (${{ github.event.inputs.release_type }})"
# Always bump root, frontend, and backend versions
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd frontend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..
# Always bump root version
cd backend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
cd ..
- name: Bump versions
run: |
echo "Bumping NPM package version (always patch)"
cd package
npm version patch --no-git-tag-version --force
cd ..
echo "Bumping website versions (${{ github.event.inputs.release_type }})"
# Always bump root, frontend, and backend versions
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
cd frontend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
cd ..
cd backend
npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
cd ..
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/publish.yml around lines 37 - 54, The "Bump versions"
workflow step uses multiple npm version calls (the commands shown for package,
root, frontend, and backend) which fail after the first bump because npm
enforces a clean git tree even with --no-git-tag-version; update each npm
version invocation in that block to include the --force flag (add --force to the
npm version patch in package and to the npm version ${{
github.event.inputs.release_type }} calls for root, frontend, and backend) so
subsequent bumps skip the working-tree validation and succeed.

Comment on lines +116 to +123
TAG_NAME="v${{ steps.get_versions.outputs.website_version }}"
if git rev-parse -q --verify "refs/tags/$TAG_NAME"; then
echo "Tag $TAG_NAME already exists, skipping tag creation"
# Only set did_commit=true if we actually made a new commit
if [ "$(git rev-parse HEAD)" != "$(git rev-parse HEAD~1)" ]; then
echo "did_commit=true" >> $GITHUB_OUTPUT
fi
else
echo "Creating new tag $TAG_NAME"
git tag -a "$TAG_NAME" -m "Release $TAG_NAME"
echo "did_commit=true" >> $GITHUB_OUTPUT
fi
echo "did_commit=true" >> $GITHUB_OUTPUT

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n .github/workflows/publish.yml

Repository: gaureshpai/dkutils

Length of output: 6023


🌐 Web query:

softprops/action-gh-release existing tag behavior documentation

💡 Result:

When using softprops/action-gh-release with an existing tag, the action reuses the existing GitHub release associated with that tag (if one exists) and updates it by uploading new assets. Existing assets with the same name are overwritten. If release metadata fields like name, body, draft, prerelease, etc., are not explicitly provided in the action inputs, the existing release retains its original metadata values. This behavior is documented in the official README: - "If a tag already has a GitHub release, the existing release will be updated with the release assets." - "When the release info keys (such as name, body, prerelease, etc.) are not explicitly set and there is already an existing release for the tag, the release will retain its original info." Recent releases (e.g., v2.6.1 as of 2026-03-16) include documentation clarifications on reused draft release behavior and immutable prerelease uploads, where GitHub platform constraints apply (e.g., prereleases become immutable after publication). Note: There is no built-in input to fail or skip on existing tags/releases; it always attempts to reuse and update. For asset overwriting issues in concurrent/matrix jobs, sequential uploads or separate jobs may be needed.


Don't continue to publish when tag creation is skipped.

When v${{ steps.get_versions.outputs.website_version }} already exists, the workflow still sets did_commit=true (line 123), allowing the push and release steps to run. However, no new tag was created, so softprops/action-gh-release will target the existing old tag instead. This creates a mismatch: a new commit is created with version bumps, but the GitHub release is published against the old tag. This is unsafe—the release should either target the new tag or fail.

Exit with code 1 when a tag collision is detected, as this indicates the version bump logic or repository state has a problem that needs investigation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/publish.yml around lines 116 - 123, The workflow sets
did_commit=true even when TAG_NAME (v${{
steps.get_versions.outputs.website_version }}) already exists, which causes the
publish/release steps (e.g., softprops/action-gh-release) to run against an old
tag; change the logic in the TAG_NAME check so that when git rev-parse finds the
tag you immediately fail the job (exit with non‑zero) and do not set did_commit,
e.g., replace the "Tag ... already exists, skipping tag creation" branch with an
explicit error exit (or set did_commit=false and exit 1) so subsequent steps do
not run and the mismatch is prevented.

{ $inc: { totalCount: 1 } },
{ upsert: true, new: true },
);
await TotalUsage.findOneAndUpdate({}, { $inc: { totalCount: 1 } }, { upsert: true, new: true });

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, find and read the middleware file
find . -name "apiActivityTracker.js" -type f

Repository: gaureshpai/dkutils

Length of output: 104


🏁 Script executed:

# Find the TotalUsage model definition
find . -type f \( -name "*.js" -o -name "*.ts" \) | xargs grep -l "TotalUsage" | head -20

Repository: gaureshpai/dkutils

Length of output: 137


🏁 Script executed:

# Get context around line 14 in apiActivityTracker.js
cat -n backend/middleware/apiActivityTracker.js | head -30

Repository: gaureshpai/dkutils

Length of output: 998


🌐 Web query:

Mongoose findOneAndUpdate upsert empty filter atomicity concurrent writes

💡 Result:

In Mongoose (as of v8.15+ per recent updates), Model.findOneAndUpdate({}, update, { upsert: true }) with an empty filter {} is valid and expected behavior. It finds the first document in natural order (typically oldest insertion order, non-deterministic without sort), updates it if any exist, or inserts a new document using the update fields if none match (which empty filter always "matches" at least the first if collection non-empty, but with upsert it would insert if collection empty). Atomicity: The underlying MongoDB findOneAndUpdate is atomic at the single-document level for matched updates. The find-and-update happens atomically, so no changes between find and update. However, with empty filter: - Without sort: Which document is first is non-deterministic across queries. - Upsert case: If collection empty, inserts atomically. If non-empty, updates first (atomic per op). Concurrent writes: Each concurrent findOneAndUpdate({}, ..., { upsert: true }) operates atomically independently. Since empty filter matches all docs: - Multiple ops will race to update different "first" docs (MongoDB doesn't lock the collection). - Results: Multiple docs updated (one per concurrent op), no duplicates since updating existing. - Safe from lost updates per doc, but expect multiple docs modified, non-deterministic which ones. This is dangerous for production: unintended broad updates. Mongoose issue #14913 discusses; since ~2025, { strictFilter: true } or { requireFilter: true } option throws on empty filter to prevent accidents (default false for BC). Official docs confirm empty filter updates first doc (MongoDB docs); Mongoose mirrors this. For concurrency safety with upsert, always use filter on uniquely indexed fields to serialize via index. Recommendation: Never use empty filter. Add sort: { _id: 1 } for determinism, but better use specific filter.


🏁 Script executed:

# Read the ServiceUsage model to understand TotalUsage schema
cat -n backend/models/ServiceUsage.js

Repository: gaureshpai/dkutils

Length of output: 729


Use a fixed unique key instead of an empty filter for the global counter.

findOneAndUpdate({}, ..., { upsert: true }) with an empty filter is non-deterministic: it matches the "first" document in non-deterministic order. Under concurrent writes, each operation atomically updates a different "first" document, resulting in multiple TotalUsage documents being incremented rather than a single global counter. Add a unique identifier field to the schema and filter on it to serialize updates and ensure only one document exists.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/middleware/apiActivityTracker.js` at line 14,
TotalUsage.findOneAndUpdate is using an empty filter which can create multiple
documents under concurrent writes; change the update to target a fixed unique
key (e.g., use filter { key: 'global' } or { _id: 'global' }) and ensure the
TotalUsage schema has that field with a unique constraint; perform
findOneAndUpdate(filter, { $inc: { totalCount: 1 }, $setOnInsert: { key:
'global' } }, { upsert: true, new: true }) so updates are serialized to a single
global counter document.

Comment on lines 7 to 18
router.post("/", async (req, res) => {
try {
await cleanSupabaseStorage();
res.status(200).json({ msg: "Supabase cleanup triggered successfully." });
} catch (error) {
console.error("Error triggering Supabase cleanup:", error);
res.status(500).json({
msg: "Failed to trigger Supabase cleanup.",
error: error.message,
});
}
try {
await cleanSupabaseStorage();
res.status(200).json({ msg: "Supabase cleanup triggered successfully." });
} catch (error) {
console.error("Error triggering Supabase cleanup:", error);
res.status(500).json({
msg: "Failed to trigger Supabase cleanup.",
error: error.message,
});
}
});

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Expected: the clean-supabase mount chain shows auth/signature middleware before this router.
rg -n -C4 'cleanSupabase|clean-supabase|/trigger' .
rg -n -C4 '\b(app|router)\.use\s*\(' backend
rg -n -C4 'auth|authenticate|verify|secret|token|signature|cron' backend .github

Repository: gaureshpai/dkutils

Length of output: 30330


🏁 Script executed:

cat -n backend/middleware/auth.js

Repository: gaureshpai/dkutils

Length of output: 598


Add mandatory authentication enforcement to the cleanup routes.

The authMiddleware at the mount point (line 90 in backend/index.js) is optional—it calls next() without blocking if no token is provided (see backend/middleware/auth.js, lines 6–7). This allows unauthenticated requests to reach the handlers.

Additionally, the GET /trigger route performs a destructive side effect, which violates HTTP semantics and risks accidental triggering via prefetch or crawlers. The production workflow (.github/workflows/trigger-supabase-cleanup.yml, line 24) explicitly calls this endpoint without authentication, confirming the endpoint is accessible to anyone.

Either:

  1. Replace the optional middleware with a middleware that enforces authentication (rejecting requests without a valid token), or
  2. Use a signed token/secret in the request to validate intentional triggering (suitable for cron/external calls).

Also convert the destructive GET route to POST, or remove it entirely and use only the POST handler.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/cleanSupabase.js` around lines 7 - 18, The cleanup endpoints
currently allow unauthenticated access; update backend/routes/cleanSupabase.js
to require enforced authentication and remove the destructive GET handler:
attach or call the strict auth validator (not the optional authMiddleware)
before the router handlers (or add an explicit token/secret check inside the
POST handler that verifies a signed secret from the request) so that
cleanSupabaseStorage() can only be invoked with a valid credential, and remove
or replace the GET "/trigger" route with a POST-only endpoint that uses the
enforced auth/secret validation for cron/external calls; make sure to reference
the router.post handler and cleanSupabaseStorage function when implementing the
checks.
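
A strict variant of the middleware might look like the sketch below. The header name `x-cleanup-secret` and the secret value are illustrative assumptions, not taken from the repository; the point is that the middleware rejects the request outright instead of calling `next()` unconditionally:

```javascript
// Hypothetical enforced-auth middleware: unlike the optional authMiddleware,
// it blocks the request when the shared secret is missing or wrong.
function requireCleanupSecret(expectedSecret) {
	return (req, res, next) => {
		const provided = req.headers["x-cleanup-secret"];
		// Note: a production version should use a timing-safe comparison.
		if (!provided || provided !== expectedSecret) {
			res.status(401).json({ msg: "Unauthorized" });
			return;
		}
		next();
	};
}

// Minimal fake req/res objects demonstrate the behavior without Express.
function fakeRes() {
	return {
		code: null,
		body: null,
		status(c) {
			this.code = c;
			return this;
		},
		json(b) {
			this.body = b;
		},
	};
}

const middleware = requireCleanupSecret("s3cret");

const denied = fakeRes();
let reachedHandler = false;
middleware({ headers: {} }, denied, () => {
	reachedHandler = true;
});
console.log(denied.code, reachedHandler); // 401 false

const allowed = fakeRes();
middleware({ headers: { "x-cleanup-secret": "s3cret" } }, allowed, () => {
	reachedHandler = true;
});
console.log(reachedHandler); // true
```

Mounted before the cleanup router (`router.post("/", requireCleanupSecret(process.env.CLEANUP_SECRET), handler)`), this keeps cron jobs working while blocking anonymous callers.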

Comment on lines 23 to 34
router.get("/trigger", async (req, res) => {
try {
await cleanSupabaseStorage();
res.status(200).json({ msg: "Supabase cleanup triggered successfully." });
} catch (error) {
console.error("Error triggering Supabase cleanup:", error);
res.status(500).json({
msg: "Failed to trigger Supabase cleanup.",
error: error.message,
});
}
try {
await cleanSupabaseStorage();
res.status(200).json({ msg: "Supabase cleanup triggered successfully." });
} catch (error) {
console.error("Error triggering Supabase cleanup:", error);
res.status(500).json({
msg: "Failed to trigger Supabase cleanup.",
error: error.message,
});
}
});

⚠️ Potential issue | 🔴 Critical

Remove the destructive GET trigger.

GET /trigger performs a side effect that can be fired by crawlers, link previews, caches, or browser prefetch. Scheduled jobs can call the POST endpoint instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/routes/cleanSupabase.js` around lines 23 - 34, Remove the destructive
GET route handler so the cleanup can't be triggered by crawlers or prefetch;
delete the router.get("/trigger", ...) block and expose the same behavior only
via router.post("/trigger", async (req, res) => { ... }) calling
cleanSupabaseStorage(), returning the same 200/500 responses. Ensure the unique
symbols router.get("/trigger") and cleanSupabaseStorage() are replaced by
router.post("/trigger") so scheduled jobs use POST instead of GET.

/^fc00:/, // IPv6 unique local address (fc00::/7)
/^fd00:/, // IPv6 unique local address (fd00::/8)
/^fe80:/, // IPv6 link-local address
/^2001:db8::/, // IPv6 documentation prefix

⚠️ Potential issue | 🔴 Critical

Critical: IPv6 documentation prefix regex won't match expanded addresses.

The regex /^2001:db8::/ uses :: which will only match the zero-compressed form 2001:db8::x. It will fail to match expanded addresses like 2001:db8:1::1 or 2001:db8:0:0:0:0:0:1. The documentation range is 2001:db8::/32, so all addresses beginning with 2001:db8: should match.

🔧 Proposed fix
-		/^2001:db8::/, // IPv6 documentation prefix
+		/^2001:db8:/, // IPv6 documentation prefix (2001:db8::/32)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/^2001:db8::/, // IPv6 documentation prefix
/^2001:db8:/, // IPv6 documentation prefix (2001:db8::/32)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/utils/ipValidation.js` at line 22, The IPv6 documentation prefix
regex /^2001:db8::/ is too narrow; update the entry in
backend/utils/ipValidation.js (the array or variable that holds the
reserved/documentation prefixes) to match the entire 2001:db8::/32 block by
using a start-anchor prefix match like /^2001:db8:/i (or /^2001:db8:/ if you
prefer case-sensitive) so any address beginning with "2001:db8:" (including
expanded forms) is matched; replace the old /^2001:db8::/ regex with
/^2001:db8:/i and run tests.
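
The difference between the two patterns can be checked directly (the sample addresses are illustrative):

```javascript
// The narrow pattern only matches the zero-compressed form "2001:db8::…",
// while the broad pattern matches the entire 2001:db8::/32 documentation
// block, including partially or fully expanded addresses.
const narrow = /^2001:db8::/;
const broad = /^2001:db8:/;

const samples = ["2001:db8::1", "2001:db8:1::1", "2001:db8:0:0:0:0:0:1"];
console.log(samples.map((ip) => narrow.test(ip))); // [ true, false, false ]
console.log(samples.map((ip) => broad.test(ip))); // [ true, true, true ]
```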

@@ -1,167 +1,162 @@
import React, { useState } from "react";
import md5 from "js-md5";

⚠️ Potential issue | 🔴 Critical

Remove the BOM character at the start of the file.

The file begins with a UTF-8 Byte Order Mark (BOM, U+FEFF) before the import statement. This invisible character can cause JavaScript parsing errors, module resolution failures, and unexpected runtime behavior in various environments and bundlers.

🔧 Proposed fix

Save the file with UTF-8 encoding without BOM, or manually remove the invisible character. Most modern editors have an option to save as "UTF-8 without BOM":

-import md5 from "js-md5";
+import md5 from "js-md5";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/HashGenerator.jsx` at line 1, The file starts with a
UTF-8 BOM character before the import which can break parsing; open
frontend/src/components/HashGenerator.jsx and remove the leading invisible BOM
character (save the file as "UTF-8 without BOM" in your editor) so the import
md5 from "js-md5"; line begins at the very first byte; ensure no other files in
the same module have BOMs.
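
A BOM can also be detected and stripped programmatically. A small Node.js sketch (file-path handling is left out; the same check works on any Buffer, e.g. one from `fs.readFileSync(path)`):

```javascript
// A UTF-8 BOM is the three-byte sequence 0xEF 0xBB 0xBF at the very start
// of the file. These helpers detect it and return a BOM-free view.
function hasUtf8Bom(buf) {
	return buf.length >= 3 && buf[0] === 0xef && buf[1] === 0xbb && buf[2] === 0xbf;
}

function stripUtf8Bom(buf) {
	return hasUtf8Bom(buf) ? buf.subarray(3) : buf;
}

const withBom = Buffer.from([0xef, 0xbb, 0xbf, 0x69, 0x6d, 0x70]); // BOM + "imp"
console.log(hasUtf8Bom(withBom)); // true
console.log(stripUtf8Bom(withBom).toString("utf8")); // "imp"

const noBom = Buffer.from("import", "utf8");
console.log(hasUtf8Bom(noBom)); // false
```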

Comment on lines +23 to +37
for (const file of files) {
if (file.type !== "application/pdf") {
toast.error(`Invalid file type: ${file.name}. Only PDF files are allowed.`);
hasError = true;
return;
}
if (file.size > maxFileSize) {
toast.error(
`File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
);
hasError = true;
return;
}
validFiles.push(file);
}

⚠️ Potential issue | 🔴 Critical

Critical: Early return breaks validation error handling.

The return statements on lines 27 and 34 exit the entire onFileChange function immediately when an invalid file is encountered. This makes the error handling code at lines 40-45 completely unreachable—the hasError flag is set but never checked, the file input is never cleared, and the error state message is never set.

Current broken behavior:

  1. User selects files including an invalid one
  2. Toast error appears
  3. Function exits early via return
  4. File input still displays the invalid files (not cleared)
  5. selectedFiles state is not updated
  6. Error message below input is not shown

Impact: Validation appears broken to users—they see a toast but the file input isn't cleared and no persistent error message appears. The dead code suggests this refactoring from forEach to for...of introduced the bug.

🔧 Fix: Replace `return` with `break` to reach error handling code
 	for (const file of files) {
 		if (file.type !== "application/pdf") {
 			toast.error(`Invalid file type: ${file.name}. Only PDF files are allowed.`);
 			hasError = true;
-			return;
+			break;
 		}
 		if (file.size > maxFileSize) {
 			toast.error(
 				`File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
 			);
 			hasError = true;
-			return;
+			break;
 		}
 		validFiles.push(file);
 	}

Alternatively, restructure to validate all files first, accumulate errors, and then handle them after the loop completes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/PdfMerger.jsx` around lines 23 - 37, The loop inside
onFileChange exits early on an invalid file due to `return`, preventing
post-loop error handling, clearing the file input, and updating selectedFiles;
change the control flow so the loop does not return early (e.g., replace
`return` with `break` or continue validating remaining files and accumulate
errors), ensure hasError is checked after the loop and if true clear the file
input element, do not call setSelectedFiles(validFiles) when hasError is true,
and set the persistent error state/message so the UI shows the error; reference
the onFileChange function, variables validFiles, hasError, and the
selectedFiles/state update to locate where to modify logic.

Comment on lines +52 to +70
const pagesToDeleteArray = pagesToDelete
.split(",")
.map((s) => s.trim())
.filter(Boolean)
.flatMap((range) => {
if (range.includes("-")) {
const [start, end] = range.split("-").map(Number);
if (
Number.isNaN(start) ||
Number.isNaN(end) ||
start < 1 ||
end > numPages ||
start > end
) {
throw new Error(`Invalid page range: ${range}`);
}
return Array.from({ length: end - start + 1 }, (_, i) => start + i);
}
});

⚠️ Potential issue | 🔴 Critical

Single-page delete tokens are currently ignored.

flatMap only returns values for a-b ranges. Inputs like 1, 3 (shown in UI) produce no deletions for those tokens. This breaks core behavior.

✅ Suggested fix
 			const pagesToDeleteArray = pagesToDelete
 				.split(",")
 				.map((s) => s.trim())
 				.filter(Boolean)
-				.flatMap((range) => {
-					if (range.includes("-")) {
-						const [start, end] = range.split("-").map(Number);
-						if (
-							Number.isNaN(start) ||
-							Number.isNaN(end) ||
-							start < 1 ||
-							end > numPages ||
-							start > end
-						) {
-							throw new Error(`Invalid page range: ${range}`);
-						}
-						return Array.from({ length: end - start + 1 }, (_, i) => start + i);
-					}
-				});
+				.flatMap((token) => {
+					const parts = token.split("-").map((p) => p.trim());
+
+					if (parts.length === 1) {
+						const page = Number(parts[0]);
+						if (!Number.isInteger(page) || page < 1 || page > numPages) {
+							throw new Error(`Invalid page number: ${token}`);
+						}
+						return [page];
+					}
+
+					if (parts.length === 2) {
+						const [start, end] = parts.map(Number);
+						if (
+							!Number.isInteger(start) ||
+							!Number.isInteger(end) ||
+							start < 1 ||
+							end > numPages ||
+							start > end
+						) {
+							throw new Error(`Invalid page range: ${token}`);
+						}
+						return Array.from({ length: end - start + 1 }, (_, i) => start + i);
+					}
+
+					throw new Error(`Invalid page selector: ${token}`);
+				});
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const pagesToDeleteArray = pagesToDelete
.split(",")
.map((s) => s.trim())
.filter(Boolean)
.flatMap((range) => {
if (range.includes("-")) {
const [start, end] = range.split("-").map(Number);
if (
Number.isNaN(start) ||
Number.isNaN(end) ||
start < 1 ||
end > numPages ||
start > end
) {
throw new Error(`Invalid page range: ${range}`);
}
return Array.from({ length: end - start + 1 }, (_, i) => start + i);
}
});
const pagesToDeleteArray = pagesToDelete
.split(",")
.map((s) => s.trim())
.filter(Boolean)
.flatMap((token) => {
const parts = token.split("-").map((p) => p.trim());
if (parts.length === 1) {
const page = Number(parts[0]);
if (!Number.isInteger(page) || page < 1 || page > numPages) {
throw new Error(`Invalid page number: ${token}`);
}
return [page];
}
if (parts.length === 2) {
const [start, end] = parts.map(Number);
if (
!Number.isInteger(start) ||
!Number.isInteger(end) ||
start < 1 ||
end > numPages ||
start > end
) {
throw new Error(`Invalid page range: ${token}`);
}
return Array.from({ length: end - start + 1 }, (_, i) => start + i);
}
throw new Error(`Invalid page selector: ${token}`);
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/PdfPageDeleter.jsx` around lines 52 - 70, The current
flatMap callback only returns arrays for range tokens and ignores single-page
tokens (e.g., "1,3"), causing missing deletions; update the pagesToDeleteArray
construction (the pagesToDelete .split(...).flatMap callback) to handle
non-range tokens by parsing the token to a Number, validating it (not NaN, >=1,
<= numPages), and returning an array with that single page (e.g., [page]) so
every branch returns a value; keep the existing range validation and throw on
invalid tokens as before.

Comment on lines +39 to +48
```jsx
const html = diff
	.map((part) => {
		const [type, text] = part;
		const safeText = escapeHtml(text);
		if (type === 0) {
			return safeText;
		}
		return "";
	})
	.join("");
```
⚠️ Potential issue | 🔴 Critical

Critical: Diff logic discards all additions and deletions.

The diff-match-patch library uses type === 0 for equal (unchanged) parts, type === -1 for deletions, and type === 1 for additions. This implementation only returns text when type === 0 and returns "" for everything else—meaning the component now displays only the parts that are identical between the two texts, completely losing the actual differences.

This defeats the core purpose of a "Text Difference Checker."

🐛 Proposed fix to restore diff visualization
```diff
 			const html = diff
 				.map((part) => {
 					const [type, text] = part;
 					const safeText = escapeHtml(text);
-					if (type === 0) {
-						return safeText;
-					}
-					return "";
+					if (type === -1) {
+						return `<del class="bg-red-200 dark:bg-red-900">${safeText}</del>`;
+					}
+					if (type === 1) {
+						return `<ins class="bg-green-200 dark:bg-green-900 no-underline">${safeText}</ins>`;
+					}
+					return safeText;
 				})
 				.join("");
```

Then update the rendering at line 129-131 to use dangerouslySetInnerHTML:

```diff
-					<div className="bg-background border border-input text-foreground text-sm rounded-lg block w-full p-2.5 h-max">
-						{diffResult}{" "}
-					</div>
+					<div
+						className="bg-background border border-input text-foreground text-sm rounded-lg block w-full p-2.5 h-max whitespace-pre-wrap"
+						dangerouslySetInnerHTML={{ __html: diffResult }}
+					/>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/TextDifferenceChecker.jsx` around lines 39 - 48, The
current diff mapping only returns unchanged parts (type === 0) and discards
additions/deletions; update the logic that builds the html variable (the
diff.map callback around part, type, text and escapeHtml) to return escaped text
wrapped in distinct markers for each diff type (e.g., wrap deletions (type ===
-1) in a <del> ... </del>, additions (type === 1) in an <ins> ... </ins>, and
leave equals (type === 0) unwrapped) so additions and deletions are preserved
for rendering, and then ensure the component renders that html string as HTML
(use the existing rendering spot that currently renders html to switch to
dangerouslySetInnerHTML or equivalent).
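For reference, diff-match-patch emits an array of `[op, text]` tuples, where the op is `-1` for a deletion, `0` for an equal run, and `1` for an insertion. The corrected mapping can be sketched self-contained, with a hand-written tuple array standing in for a real `dmp.diff_main()` result:

```javascript
// Hand-written tuples in diff-match-patch's [op, text] format;
// in the component these come from dmp.diff_main(text1, text2).
const diff = [
	[0, "the "],
	[-1, "quick"],
	[1, "slow"],
	[0, " fox"],
];

const escapeHtml = (s) =>
	s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

const html = diff
	.map(([type, text]) => {
		const safeText = escapeHtml(text);
		if (type === -1) return `<del>${safeText}</del>`; // deletion
		if (type === 1) return `<ins>${safeText}</ins>`; // addition
		return safeText; // unchanged run
	})
	.join("");

console.log(html); // the <del>quick</del><ins>slow</ins> fox
```

Because every part is escaped before being wrapped, the resulting string is safe to hand to `dangerouslySetInnerHTML`.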
