feat: background auto-update with SHA256 verification (#5)
Conversation
- Add src/lib/auto-update.ts + src/lib/release.ts: 24h-throttled
detached background check, POSIX rename-in-place swap on next launch,
SHA256 hash verification before staging (tri-state distinguishes
legacy releases from fetch failures), version sidecar for the
'auto-updated to vX.Y.Z' stderr note
- Opt out via AUTO_UPDATE=false in ~/.worktreerc or WORKTREE_NO_UPDATE=1
- Argv guard prevents detached child from re-running hooks
- release.yml: --minify, cross-arch strip (llvm-strip on Linux), ad-hoc
codesign on macOS (fixes SIGKILL on Apple Silicon from unsigned
binaries), native-target smoke tests, SHA256SUMS publishing
- Bump to 1.3.0
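The rename-in-place swap described above can be sketched roughly like this (paths and helper names here are illustrative for a demo under the system temp dir; the real module derives them from `process.execPath` and also handles hash re-checks and error logging):

```typescript
import {
  existsSync,
  mkdirSync,
  readFileSync,
  renameSync,
  unlinkSync,
  writeFileSync,
} from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical layout for the demo.
const DIR = join(tmpdir(), `worktree-apply-demo-${process.pid}`);
const EXEC = join(DIR, "worktree");
const STAGED = join(DIR, ".worktree.next");
const SIDECAR = join(DIR, ".worktree.next.version");

/**
 * Apply a previously staged binary: rename(2) on the same filesystem is
 * atomic, and a running process keeps its old inode open, so replacing
 * the executable underneath it is safe on POSIX.
 */
function applyPendingUpdate(): string | null {
  if (!existsSync(STAGED) || !existsSync(SIDECAR)) return null;
  const version = readFileSync(SIDECAR, "utf8").trim();
  renameSync(STAGED, EXEC); // atomic swap-in-place
  unlinkSync(SIDECAR); // consume the version sidecar
  return version; // caller prints the "auto-updated to vX.Y.Z" stderr note
}

// Demo: stage a fake binary, then apply it on "next launch".
mkdirSync(DIR, { recursive: true });
writeFileSync(STAGED, "new-binary-bytes");
writeFileSync(SIDECAR, "1.3.0\n");
const applied = applyPendingUpdate();
```

Staging to a sibling path and renaming (rather than writing the executable directly) is what makes the swap crash-safe: the live binary is never in a half-written state.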
📝 Walkthrough

Adds a staged, SHA256-verified auto-update system with a detached 24-hour background checker, atomic apply of staged binaries, new release/asset utilities and fs write-error helpers, config AUTO_UPDATE handling and tests, a hidden internal check command, CI/release workflow changes, docs updates, and a package bump to 1.3.0.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI Entrypoint
    participant UpdateMgr as Auto-Update Manager
    participant BG as Background Worker (detached)
    participant GitHub as GitHub Release API
    participant FS as File System
    participant Probe as Probe Process
    User->>CLI: start worktree
    CLI->>UpdateMgr: applyPendingUpdate()
    UpdateMgr->>FS: check for staged binary + sidecar
    alt valid staged artifact
        UpdateMgr->>FS: verify sha256, rename staged → exec (atomic)
        UpdateMgr->>CLI: emit "updated vA → vB"
    else
        UpdateMgr->>FS: cleanup staged artifacts
    end
    CLI->>UpdateMgr: scheduleBackgroundUpdateCheck()
    UpdateMgr->>FS: read last-check, config (throttle/disable)
    alt allowed
        UpdateMgr->>BG: spawn detached internal check
        BG->>GitHub: fetch latest release metadata
        BG->>GitHub: download asset and optional SHA256SUMS
        BG->>FS: write temp binary + temp sidecar
        BG->>Probe: run temp --version probe (with updates disabled)
        Probe-->>BG: probe result
        alt probe ok & hash ok
            BG->>FS: chmod, rename temp → staged, commit sidecar
        else
            BG->>FS: remove temp artifacts
        end
        BG->>FS: update last-check timestamp
    end
    CLI->>User: continue startup
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (warning)
Actionable comments posted: 3
🧹 Nitpick comments (13)
.github/workflows/release.yml (2)
96-105: SHA256SUMS aggregation: simpler and more robust approach.

The `tail -c1 ... | od -An -c | grep -q '\n'` check to detect a trailing newline is fragile (relies on `od`'s `\n` rendering of the newline character). `shasum -a 256` always terminates lines with `\n`, so each per-target file already ends in a newline and plain `cat` concatenation is sufficient.

♻️ Simpler aggregation

```diff
   - name: Aggregate SHA256SUMS
     run: |
       cd artifacts
-      : > SHA256SUMS
-      for f in worktree-*.sha256; do
-        cat "$f" >> SHA256SUMS
-        tail -c1 "$f" | od -An -c | grep -q '\\n' || echo >> SHA256SUMS
-      done
+      cat worktree-*.sha256 > SHA256SUMS
       rm worktree-*.sha256
       cat SHA256SUMS
```
68-72: Smoke test skips `linux-arm64`; consider QEMU or note this gap.

The native smoke test correctly skips `linux-arm64` since it can't run on x64 runners, but this means a broken `linux-arm64` binary could still be released. Consider running it under QEMU (`docker run --platform=linux/arm64 ...`) or at minimum documenting the known gap.

src/index.ts (1)
15-18: Argv guard uses `includes` — prefer positional check.

`process.argv.slice(2).includes(INTERNAL_CHECK_SUBCOMMAND)` would also match if the token appeared as a flag value (e.g. `worktree create --base __internal_update_check`), bypassing `applyPendingUpdate()` / `scheduleBackgroundUpdateCheck()` for a normal user invocation. brocli subcommands are always the first positional arg, so tighten the check:

♻️ Proposed change

```diff
-if (!process.argv.slice(2).includes(INTERNAL_CHECK_SUBCOMMAND)) {
+if (process.argv[2] !== INTERNAL_CHECK_SUBCOMMAND) {
   applyPendingUpdate();
   void scheduleBackgroundUpdateCheck();
 }
```

Collision risk is low given the `__internal_` prefix, but the positional form also documents intent more clearly.

README.md (1)
63-71: Config table missing new `AUTO_UPDATE` key.

The Config table documents `DEFAULT_BASE` and `WORKTREE_DIR` but not the new `AUTO_UPDATE` schema field added in `src/lib/config.ts`. Users reading this section won't know it's a recognized option alongside the Update section.

♻️ Proposed addition

```diff
 | `DEFAULT_BASE` | Default base branch for new worktrees | `origin/dev` |
 | `WORKTREE_DIR` | Directory name for worktrees (default: `.worktrees`) | `.worktrees` |
+| `AUTO_UPDATE` | Enable background update checks (default: `true`) | `false` |
```

src/lib/config.ts (1)
64-77: Silent fallback to defaults masks invalid config.

`readConfigFile` swallows parse/validation errors and returns defaults. If a user sets `AUTO_UPDATE=junk` in `~/.worktreerc`, the file is silently ignored and auto-update stays enabled with no signal — contrary to the test at `config.test.ts:114-116`, which expects `validateConfig` to throw on junk.

Consider logging a stderr warning (per the user-facing-output-to-stderr guideline) when `parseError` is non-null so misconfiguration is visible:

♻️ Proposed change

```diff
-  if (parseError || !parsed) return validateConfig({});
+  if (parseError || !parsed) {
+    console.error(
+      `warning: ignoring invalid config at ${filePath}: ${parseError?.message ?? "unknown error"}`
+    );
+    return validateConfig({});
+  }
```

src/lib/auto-update.ts (4)
178-189: Stale `.worktree.next.<pid>.tmp` files can accumulate next to the binary.

If the detached update check is SIGKILLed (reboot, OOM, user kills terminal) between `downloadAsset` and any of the cleanup `safeUnlink` calls, the PID-tagged tmp file is left behind in `getBinaryDir()` and is never reaped — the next run uses a new PID. Over time this wastes disk and clutters the install directory (especially annoying when the binary lives under `/usr/local/bin`).

Consider sweeping stale `${STAGING_FILENAME}.*.tmp` entries at the top of `runBackgroundUpdateCheck` (ignoring any whose PID is still live, or simply any older than e.g. 1h).
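One possible shape for that sweep, assuming the PID-in-filename layout described above (the liveness probe via signal 0 and the 1h threshold are illustrative choices, not the PR's code):

```typescript
import {
  mkdirSync,
  readdirSync,
  rmSync,
  statSync,
  unlinkSync,
  utimesSync,
  writeFileSync,
} from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

const STAGING_FILENAME = ".worktree.next";
const STALE_MS = 60 * 60 * 1000; // 1h threshold, as suggested above

/** Reap `.worktree.next.<pid>.tmp` files whose owner is dead or that are old. */
function sweepStaleTmp(dir: string, now = Date.now()): string[] {
  const pattern = new RegExp(
    `^${STAGING_FILENAME.replace(/\./g, "\\.")}\\.(\\d+)\\.tmp$`
  );
  const removed: string[] = [];
  for (const name of readdirSync(dir)) {
    const match = pattern.exec(name);
    if (!match) continue;
    let alive = false;
    try {
      process.kill(Number(match[1]), 0); // signal 0: existence check only
      alive = true;
    } catch (err) {
      // EPERM means the pid exists but belongs to another user.
      alive = (err as NodeJS.ErrnoException).code === "EPERM";
    }
    const path = join(dir, name);
    const old = now - statSync(path).mtimeMs > STALE_MS;
    if (!alive || old) {
      try {
        unlinkSync(path);
        removed.push(name);
      } catch {} // best-effort: another sweep may have won the race
    }
  }
  return removed;
}
```

Running this before creating a new tmp path keeps active processes' files intact while reclaiming leftovers from killed runs.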
136-142: Redundant stdio config in `Bun.spawn`.

You're passing both the `stdio` array form and individual `stderr`/`stdout`/`stdin` options. Bun accepts either, but mixing them is redundant and obscures intent — future readers will wonder which wins. Pick one form.

♻️ Proposed cleanup

```diff
 Bun.spawn({
   cmd: [process.execPath, INTERNAL_CHECK_SUBCOMMAND],
-  stdio: ["ignore", "ignore", "ignore"],
-  stderr: "ignore",
-  stdout: "ignore",
-  stdin: "ignore",
+  stdio: ["ignore", "ignore", "ignore"],
 }).unref();
```
67-99: Consider validating the staged binary's signature/size before the rename.

`applyPendingUpdate` trusts the staged file unconditionally — it was only hash-verified at stage time by `runBackgroundUpdateCheck`, possibly days earlier. Between staging and apply, a local attacker with write access to `getBinaryDir()` could swap `.worktree.next` for arbitrary code; at the next launch this function happily `rename`s it over `process.execPath` and the user's next invocation executes it.

Mitigations to consider (not all are necessary, pick what fits the threat model):

- Re-verify the hash against the sidecar right before `renameSync` (requires storing the hash alongside the version in the sidecar, or a separate `.sha256` file).
- Restrict the binary directory's permissions at install time and document that expectation.
- At minimum, document the trust assumption ("filesystem ACLs on the install dir are the security boundary") in the module header.

Given the module already goes through the trouble of SHA256+probe gating at stage time, letting the staged artifact age un-reverified somewhat undermines that defense-in-depth.
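The first mitigation could look roughly like this (helper names are hypothetical, and storing the hash alongside the version in the sidecar is the suggested extension, not what the PR does today):

```typescript
import { createHash } from "node:crypto";
import { readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

function sha256Hex(path: string): string {
  // Whole-file read is fine for a one-shot re-check at apply time;
  // stream instead for very large binaries.
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

/** Re-verify the staged artifact against the hash recorded at stage time. */
function stagedHashStillValid(stagedPath: string, expectedHex: string): boolean {
  return sha256Hex(stagedPath) === expectedHex.toLowerCase();
}
```

`applyPendingUpdate` would call this right before the rename and bail out (logging to last-error) on a mismatch.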
255-282: Improve timeout detection in `probeBinaryVersion` for clearer error messages.

`Bun.spawnSync` does not set `exitCode` to `null` on timeout; instead, the process receives a kill signal (e.g., SIGTERM) and exits with a non-zero code (typically 130). The current `exitCode !== 0` check already catches this, but explicitly checking `signalCode` and the `exitedDueToTimeout` flag makes the timeout case clearer in diagnostics.

♻️ Suggested refactor

```diff
+  if (result.exitedDueToTimeout || result.signalCode) {
+    return {
+      ok: false,
+      reason: `killed by ${result.signalCode || "timeout"} (${PROBE_TIMEOUT_MS}ms limit)`,
+    };
+  }
   if (result.exitCode !== 0) {
     return { ok: false, reason: `exit ${result.exitCode}` };
   }
```

src/lib/release.ts (3)
156-163: `constantTimeEquals` on hex strings is slightly misleading.

A truly constant-time comparison requires both inputs to be the same length before the early-return check, and to operate on bytes. Since `expectedSha256` always comes from a public SHA256SUMS file (not a secret), this early-length-exit is fine in practice, but the name oversells the guarantee. Consider renaming to `secureEquals` or documenting the assumption (both operands are fixed-length 64-char hex and neither is confidential), so future reuse with secrets doesn't accidentally inherit a leakable comparator.
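One way to act on the rename suggestion, sketched with Node's `timingSafeEqual` doing the byte-level work (the name `secureEquals` follows the comment above; this is not the PR's implementation):

```typescript
import { timingSafeEqual } from "node:crypto";

/**
 * Compare two SHA-256 hex digests. Inputs are assumed to be non-secret,
 * fixed-length 64-char hex from a public SHA256SUMS file, so the early
 * length check leaks nothing of consequence.
 */
function secureEquals(a: string, b: string): boolean {
  if (a.length !== b.length) return false;
  return timingSafeEqual(Buffer.from(a, "utf8"), Buffer.from(b, "utf8"));
}
```

The length guard is required because `timingSafeEqual` throws on buffers of unequal length.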
140-154: Stream the file into the SHA-256 hasher.

`file.arrayBuffer()` loads the entire staged binary into memory solely to feed a single `hasher.update`. For binaries in the tens-of-MB range this doubles RSS needlessly; streaming over `file.stream()` keeps memory bounded.

♻️ Proposed refactor

```diff
-async function verifyBinaryHash(
-  filePath: string,
-  expectedSha256: string
-): Promise<boolean> {
-  const hasher = new Bun.CryptoHasher("sha256");
-  const file = Bun.file(filePath);
-  const { data: buffer, error } = await tryCatch(file.arrayBuffer());
-  if (error || !buffer) return false;
-  hasher.update(buffer);
-  const actual = hasher.digest("hex");
-  return constantTimeEquals(
-    actual.toLowerCase(),
-    expectedSha256.toLowerCase()
-  );
-}
+async function verifyBinaryHash(
+  filePath: string,
+  expectedSha256: string
+): Promise<boolean> {
+  const hasher = new Bun.CryptoHasher("sha256");
+  try {
+    for await (const chunk of Bun.file(filePath).stream()) {
+      hasher.update(chunk);
+    }
+  } catch {
+    return false;
+  }
+  const actual = hasher.digest("hex");
+  return constantTimeEquals(
+    actual.toLowerCase(),
+    expectedSha256.toLowerCase()
+  );
+}
```
113-138: Stream the download instead of buffering the whole asset in memory.

`response.arrayBuffer()` materializes the full binary (tens of MB with debug symbols, still several MB minified) in RSS before the `Bun.write` call. Bun 1.2's `Bun.write` accepts a `Response` directly and pipes it to disk, which is both simpler and constant-memory.

Note: Since `fetchWithTimeout` returns a native fetch `Response`, this refactor avoids the Bun 1.2 bugs (issues #13237, #15796, #21455) that affect non-fetch Responses. However, be aware of these edge cases if streaming patterns change in the future.

♻️ Proposed refactor

```diff
   if (!response.ok) {
     throw new Error(
       `Download ${asset.name} failed: ${response.status} ${response.statusText}`
     );
   }
-  const { data: buffer, error: bufError } = await tryCatch(
-    response.arrayBuffer()
-  );
-  if (bufError || !buffer) {
-    throw new Error(`Failed to read ${asset.name} body`);
-  }
-  await Bun.write(destPath, buffer);
+  const { error: writeError } = await tryCatch(Bun.write(destPath, response));
+  if (writeError) {
+    throw new Error(`Failed to write ${asset.name}: ${writeError.message}`);
+  }
```

src/commands/update.ts (1)
16-26: Consider hoisting `isStandaloneBinary` and `isEaccesError` into shared helpers.

`isStandaloneBinary` is byte-identical to `isStandalone` in `src/lib/auto-update.ts` (lines 23-27), and `isEaccesError` is a general-purpose errno check. Extracting both (e.g. into `src/lib/runtime.ts` / `src/lib/errors.ts`, or exporting them from `src/lib/release.ts`) avoids drift if the `bunfs` sentinel ever changes.
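The errno check, at least, is trivial to hoist; a sketch of the shared helper (the `src/lib/errors.ts` placement is one of the options floated above, and it would be exported in practice):

```typescript
/** Narrow an unknown error to a Node errno error with code EACCES. */
function isEaccesError(err: unknown): err is NodeJS.ErrnoException {
  return (
    typeof err === "object" &&
    err !== null &&
    (err as NodeJS.ErrnoException).code === "EACCES"
  );
}
```

`isStandalone` would move the same way, so the `bunfs` sentinel lives in exactly one place.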
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 90-92: The fenced code block containing the line
"AUTO_UPDATE=false" is missing a language tag; update the markdown fence to
include a language identifier (use "ini") so the block becomes ```ini ... ``` to
satisfy MD040 and enable proper syntax highlighting for the AUTO_UPDATE=false
snippet.
In `@src/commands/update.ts`:
- Around line 98-137: The code makes the downloaded file executable before
verifying its SHA256; change the order so fetchSha256Sums(...) and
verifyBinaryHash(tmpPath, expected) run before calling fs.chmod(tmpPath, 0o755).
Specifically, move the chmod block that references tmpPath (and its
chmodError/unlink/error handling) to after the verification block that uses
fetchSha256Sums and verifyBinaryHash, preserving the current
unlink/printError/EXIT_CODES handling for each failure path and keeping the same
messages and cleanup logic for tmpPath.
In `@src/lib/auto-update.ts`:
- Around line 118-146: scheduleBackgroundUpdateCheck currently writes the
"last-check" timestamp before calling Bun.spawn, so if spawn fails the timestamp
prevents retries for 24h; change the flow to only update the timestamp after
Bun.spawn succeeds (or roll it back on spawn failure). Locate
scheduleBackgroundUpdateCheck and move the fs.writeFileSync(getLastCheckPath(),
String(now)) call to after the Bun.spawn(...) call (or surround the spawn with
try/catch and delete/restore the last-check file on error) so the timestamp is
persisted only when the child process is successfully created.
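The ordering fix described in that last comment amounts to this (a sketch; `spawnChild` stands in for the real `Bun.spawn` call and the path handling is simplified):

```typescript
import { existsSync, rmSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

/** Persist the 24h throttle timestamp only once the child actually spawned. */
function scheduleCheck(lastCheckPath: string, spawnChild: () => void): boolean {
  try {
    spawnChild(); // e.g. Bun.spawn({ cmd: [...] }).unref() in the real module
  } catch {
    return false; // spawn failed: no timestamp written, next launch retries
  }
  writeFileSync(lastCheckPath, String(Date.now()));
  return true;
}
```

With this shape, a transient spawn failure no longer burns the 24-hour window.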
---
Nitpick comments:
In @.github/workflows/release.yml:
- Around line 96-105: The "Aggregate SHA256SUMS" step uses a fragile
newline-detection pipeline; simplify by just concatenating the per-target files
(the for loop handling worktree-*.sha256 and the final cat SHA256SUMS) and
remove the tail/od/grep check—since shasum -a 256 outputs terminating newlines,
simply append each worktree-*.sha256 to SHA256SUMS and then remove the
worktree-*.sha256 files (refer to the step name "Aggregate SHA256SUMS" and the
for f in worktree-*.sha256 loop in the diff).
- Around line 68-72: The workflow's "Smoke-test --version (native targets only)"
step skips matrix.target == 'linux-arm64', leaving that binary untested; either
add a QEMU-based invocation to exercise the linux-arm64 artifact (e.g., run the
built binary inside a container with --platform=linux/arm64 via docker or a
qemu-user-static runner) so ./dist/worktree-${{ matrix.target }} is executed for
linux-arm64, or explicitly document the gap in the release process (e.g., update
the workflow description or repo release docs) so reviewers know linux-arm64
isn't smoke-tested; modify the workflow step that checks matrix.target and/or
add a separate job that runs the linux-arm64 binary under QEMU to address this.
In `@README.md`:
- Around line 63-71: The README Config table is missing the new AUTO_UPDATE
schema key; update the table to include a row for `AUTO_UPDATE` (referencing the
AUTO_UPDATE field added in src/lib/config.ts) with a short description such as
"Enable automatic update checks for worktrees" and an example value (e.g. `true`
or `false`), and ensure the entry aligns with the existing Update section
wording so readers know it's a recognized option alongside the Update
configuration.
In `@src/commands/update.ts`:
- Around line 16-26: The two functions isStandaloneBinary and isEaccesError are
duplicated utilities and should be hoisted into a shared helper module (e.g., a
runtime/errors helper) and imported where needed; extract the implementations
into a single exported function named isStandalone (or keep isStandaloneBinary)
and a single exported isEaccesError, replace the local copies in
src/commands/update.ts and in src/lib/auto-update.ts with imports from the new
shared module, and ensure the exported names and behavior remain byte-identical
so callers (e.g., any use sites referencing isStandaloneBinary or isStandalone
and isEaccesError) continue to work without behavioral drift.
In `@src/index.ts`:
- Around line 15-18: The argv guard currently uses
process.argv.slice(2).includes(INTERNAL_CHECK_SUBCOMMAND) which can match the
token anywhere (e.g. as a flag value); change it to check the first positional
argument instead (e.g. compare process.argv[2] or equivalent) so the internal
subcommand only triggers when it is the subcommand token; update the condition
guarding applyPendingUpdate and scheduleBackgroundUpdateCheck (functions
applyPendingUpdate and scheduleBackgroundUpdateCheck) to use this positional
check referencing INTERNAL_CHECK_SUBCOMMAND.
In `@src/lib/auto-update.ts`:
- Around line 178-189: runBackgroundUpdateCheck can leave behind PID-tagged tmp
files (STAGING_FILENAME.<pid>.tmp) if killed between downloadAsset and cleanup;
add a startup sweep in runBackgroundUpdateCheck that lists files in
getBinaryDir() matching the pattern `${STAGING_FILENAME}.*.tmp`, parse the PID
from each filename and remove the file if the PID is not alive (or if file mtime
is older than a safe threshold like 1 hour) using safeUnlink; reference the
functions/vars getBinaryDir, runBackgroundUpdateCheck, STAGING_FILENAME,
safeUnlink and ensure the sweep runs before creating a new tmpPath so stale
files are reaped but active processes’ tmp files are preserved.
- Around line 136-142: The Bun.spawn call is using both the array form stdio and
the individual stderr/stdout/stdin options which is redundant; update the
Bun.spawn invocation that launches [process.execPath, INTERNAL_CHECK_SUBCOMMAND]
(the call ending with .unref()) to use a single stdio form—prefer keeping the
stdio: ["ignore","ignore","ignore"] array and remove the separate stderr,
stdout, and stdin properties (or vice‑versa if you prefer the named options) so
only one style is present and intent is clear.
- Around line 67-99: applyPendingUpdate currently renames the staged file
unconditionally, allowing a swapped/malicious .worktree.next to replace
process.execPath; re-verify the staged artifact's integrity right before
applying by reading the expected hash from the sidecar (extend
getVersionSidecarPath or add a getStagingHashSidecar, whatever matches the
diff), compute the staged file's SHA256, compare it to the stored hash, and
abort (appendLastError and return) if it doesn't match; alternatively (or
additionally) document the filesystem-ACL trust boundary and/or ensure installer
sets restrictive perms on getBinaryDir(), but prefer implementing the immediate
re-check in applyPendingUpdate around the call to
fs.renameSync(getStagingPath(), process.execPath).
- Around line 255-282: The probeBinaryVersion function should explicitly detect
when Bun.spawnSync timed out by inspecting the spawn result's timeout/signal
indicators (e.g., result.signalCode and/or result.exitedDueToTimeout) in
addition to the generic exitCode check; update the logic after the try/catch to
check if result.exitedDueToTimeout is true or result.signalCode is set and
return a clear timeout reason such as `ok: false, reason: "timeout (SIGTERM)"`
or similar, otherwise keep the existing non-zero exitCode handling and the
version-regex parsing; reference probeBinaryVersion, Bun.spawnSync,
PROBE_TIMEOUT_MS, result.signalCode and result.exitedDueToTimeout when making
the change.
In `@src/lib/config.ts`:
- Around line 64-77: readConfigFile currently swallows read/parse/validation
errors and returns defaults; change it so that when file.text() or the
parse/validate step (validateConfig(parseConfigContent(...))) produces an error
you write a user-facing warning to stderr (including the error message and
filename) and then rethrow or propagate the error instead of returning
validateConfig({}); update the error branches that check "error" and
"parseError" in readConfigFile to use console.error/stderr and rethrow so
callers/tests (and validateConfig behavior) see the invalid-config error; keep
the normal return of parsed when no error.
In `@src/lib/release.ts`:
- Around line 156-163: The function constantTimeEquals currently exits early on
differing lengths and operates on JS chars, which makes the name misleading for
true constant-time semantics; either rename the function to secureEquals and
update all callers (e.g., any use sites that compare SHA256 strings) to reflect
the weaker guarantee, or keep the name and add a clear doc comment on
constantTimeEquals stating it assumes non-secret, fixed-length 64-character hex
inputs (from SHA256SUMS) and may early-return on length mismatch; implement the
chosen change and update any unit tests or references so callers like
expectedSha256 comparisons continue to compile and reflect the new name/docs.
- Around line 140-154: verifyBinaryHash currently reads the whole file into
memory via file.arrayBuffer(), which can blow up RSS for large binaries; change
it to stream the file through Bun.CryptoHasher instead. Replace the
tryCatch(file.arrayBuffer()) usage in verifyBinaryHash with iteration over the
async iterator from file.stream() (or for await (const chunk of file.stream())),
calling hasher.update(chunk) for each chunk and catching stream errors; keep the
final digest calculation and the constantTimeEquals comparison
(actual.toLowerCase(), expectedSha256.toLowerCase()) unchanged. Ensure any
stream/read errors cause the function to return false (or propagate
consistently) and that you still return the constant-time equality check at the
end.
- Around line 113-138: downloadAsset currently calls response.arrayBuffer()
which buffers the entire asset in memory; replace that pattern with streaming
the Response directly to disk by passing the native fetch Response to Bun.write
(i.e., avoid response.arrayBuffer() and the intermediate buffer), keeping the
existing error checks on response.ok and the tryCatch around fetchWithTimeout,
and ensure you still handle fetch errors (from fetchWithTimeout) before calling
Bun.write so downloadAsset (and symbols response.arrayBuffer, Bun.write,
fetchWithTimeout, downloadAsset) use constant memory and stream the payload.
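To make the documented-assumptions option for `constantTimeEquals` concrete, here is a minimal sketch (not the project's actual implementation) that keeps the early length check but avoids short-circuiting on the first differing character:

```typescript
/**
 * Compares two SHA256 hex digests. NOT a general constant-time primitive:
 * it assumes non-secret, fixed-length 64-character hex inputs (from
 * SHA256SUMS) and may early-return on a length mismatch.
 */
function constantTimeEquals(a: string, b: string): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    // XOR-accumulate so every character is visited regardless of mismatches
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}
```

The doc comment carries the weaker guarantee, so callers comparing `expectedSha256` strings know not to reuse this for secret material.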
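The streaming-hash shape suggested for `verifyBinaryHash` can be sketched as below. Note the substitutions: `node:crypto`'s `createHash` stands in for `Bun.CryptoHasher`, and the `chunked` generator stands in for `file.stream()` — both are illustrative only, not the project's code:

```typescript
import { createHash } from "node:crypto";

// Hash an async stream of chunks without buffering the whole file:
// only one chunk is resident at a time, so RSS stays roughly constant.
async function hashChunks(chunks: AsyncIterable<Uint8Array>): Promise<string> {
  const hasher = createHash("sha256");
  for await (const chunk of chunks) {
    hasher.update(chunk);
  }
  return hasher.digest("hex");
}

// Stand-in for file.stream(): yields fixed-size slices of a buffer.
async function* chunked(data: Uint8Array, size: number): AsyncGenerator<Uint8Array> {
  for (let i = 0; i < data.length; i += size) {
    yield data.subarray(i, i + size);
  }
}
```

The final digest can then feed the same lowercased comparison against `expectedSha256` described above; stream errors surface as a rejected promise for the caller to map to `false`.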
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9b34c931-87d0-4279-9c62-715ca2e3140e
📒 Files selected for processing (14)
- .github/workflows/ci.yml
- .github/workflows/release.yml
- .gitignore
- CHANGELOG.md
- README.md
- package.json
- src/commands/internal-update-check.ts
- src/commands/update.ts
- src/index.ts
- src/lib/auto-update.ts
- src/lib/config.test.ts
- src/lib/config.ts
- src/lib/release.test.ts
- src/lib/release.ts
- Move last-check write to child; failed checks don't burn 24h window
- Simplify probe to runs-only; use release.version for sidecar
- SemVer 2.0 prerelease ordering in compareVersions
- Rotate last-error log at 64 KB (keep last 20 lines)
- Smoke-test linux-arm64 on native runner (ubuntu-24.04-arm)
- Extract shared isStandalone() to release.ts
- Stream verifyBinaryHash instead of loading full binary into memory
- Consistent COLORS import site (logger.ts)
- Log sidecar write failures to last-error
- Extract safeUnlink helper in update.ts
|
@coderabbitai full review |
✅ Actions performed

Full review triggered. |
Actionable comments posted: 2
♻️ Duplicate comments (1)
README.md (1)
`90-92`: ⚠️ Potential issue | 🟡 Minor — **Add language to fenced code block (MD040).**

The fence for `AUTO_UPDATE=false` still lacks a language identifier; use `ini` (or `dotenv`) to satisfy markdownlint MD040.

💚 Proposed fix

````diff
-```
+```ini
 AUTO_UPDATE=false
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 90 - 92, The fenced code block containing the line
"AUTO_UPDATE=false" is missing a language identifier (MD040); update that fence
to include a language like ini or dotenv so markdownlint passes (e.g., change
the opening backticks to ```ini or ```dotenv) for the block that contains
AUTO_UPDATE=false.

🧹 Nitpick comments (5)

.gitignore (1)

`5-5`: **LGTM!**

The addition of `tasks/` to `.gitignore` follows the existing pattern and is appropriate for excluding local development or build-related directories from version control.

📝 Optional: Add inline comment for clarity

Consider adding a brief comment to document the purpose of this directory, especially if it's used for build/release tasks:

```diff
 docs/
+# Local build/release tasks
 tasks/
```

This helps future contributors understand what `tasks/` contains.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In @.gitignore at line 5, Add a short inline comment above the tasks/ entry in .gitignore explaining what the tasks/ directory contains and why it's ignored (e.g., build artifacts, local task wrappers, or release scripts) so future contributors understand its purpose; update the .gitignore near the existing tasks/ line to include that descriptive comment.

src/index.ts (1)

`15-18`: **Argv guard is slightly over-broad.**

`process.argv.slice(2).includes(INTERNAL_CHECK_SUBCOMMAND)` matches the token anywhere in argv, not just in the subcommand position. If a user ever passes that literal as an argument value (e.g., a worktree name), `applyPendingUpdate()` and scheduling will be skipped unexpectedly. Given the name is `__internal_update_check` this is extremely unlikely in practice, but tightening to `process.argv[2] === INTERNAL_CHECK_SUBCOMMAND` makes the intent unambiguous.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@src/index.ts` around lines 15 - 18, The argv guard is too broad: replace the includes-based check with a strict positional check so the internal token only matches when used as the subcommand; change the condition that gates applyPendingUpdate() and scheduleBackgroundUpdateCheck() to test process.argv[2] === INTERNAL_CHECK_SUBCOMMAND (keeping the same functions applyPendingUpdate and scheduleBackgroundUpdateCheck and the constant INTERNAL_CHECK_SUBCOMMAND) so the update logic only skips when the subcommand is exactly the third argv entry.

README.md (1)

`67-71`: **Consider documenting `AUTO_UPDATE` in the Config table.**

The table lists supported `.worktreerc` keys but omits the new `AUTO_UPDATE` key. Users skimming the table for the full option set may miss it. The Update section covers it, but duplicating it in the table improves discoverability.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 67 - 71, The config keys table is missing the AUTO_UPDATE entry; add a new row for `AUTO_UPDATE` in the same Config table (where `DEFAULT_BASE` and `WORKTREE_DIR` are listed) that briefly describes its purpose (e.g., "Automatically update worktrees from their base branch when syncing") and shows an example value (e.g., `true` or `false`), making sure the wording and default match the existing "Update" section's description of AUTO_UPDATE.

src/lib/config.ts (1)

`64-77`: **Silent fallback may hide user config mistakes.**

`readConfigFile` returns `validateConfig({})` for every failure mode (missing file, read error, parse error).
For the missing-file case this is correct, but when the user has a `.worktreerc` with an invalid value (e.g., `AUTO_UPDATE=junk`, per your own test on line 115), they'll silently get defaults with no diagnostic — including silently keeping auto-update enabled when they thought they disabled it. Consider differentiating "file absent" from "file present but invalid" and writing a one-line warning to stderr in the latter case.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@src/lib/config.ts` around lines 64 - 77, readConfigFile currently returns validateConfig({}) for missing file and for read/parse errors, which hides user mistakes; change it so that when file.exists() is true but reading or parsing fails (the branches where tryCatch(file.text()) or tryCatch(...parseConfigContent...) return an error or no parsed data) you still return validateConfig({}) but also write a one-line warning to stderr informing the user their config file is present but invalid. Update the readConfigFile implementation to detect the "file present but invalid" case (use the existing file.exists(), tryCatch results, parseConfigContent and validateConfig calls) and emit a concise stderr message (e.g., via console.error or Bun.stderr.write) that includes the config file path and a short note that defaults are being used.

src/lib/release.test.ts (1)

`59-63`: **Consider matching real BSD shasum format (single space + asterisk).**

`shasum -a 256 -b` emits `HASH *FILENAME` (single space, asterisk-prefixed path). The current test uses two spaces before `*worktree-darwin-x64`, which passes because the regex is permissive but doesn't precisely mirror what the `-b` flag would produce.
Consider tightening for realism, and optionally add a numeric-prerelease ordering case (e.g., `1.2.3-rc.2` vs `1.2.3-rc.10`) to lock in the intentional non-strict-SemVer behavior of `comparePrerelease`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@src/lib/release.test.ts` around lines 59 - 63, The test should more accurately mirror BSD shasum -b output and add a numeric-prerelease ordering case: update the "parses BSD-style asterisk prefix" test to use a single space before the asterisk (e.g., const text = "c".repeat(64) + " *worktree-darwin-x64") so parseSha256Sums is validated against real output, and add a small new test that exercises comparePrerelease with numeric prerelease segments (e.g., assert comparePrerelease("rc.2","rc.10") indicates rc.2 < rc.10) to lock in the intended non-strict-SemVer behavior.

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/auto-update.ts`:
- Around line 256-261: The probe spawns the downloaded binary with `--version`,
which causes the child's entrypoint logic to call `applyPendingUpdate()` and
`scheduleBackgroundUpdateCheck()` and spawn an unnecessary grandchild; fix by
making the probe invoke the child with a sentinel arg and having the entrypoint
honor it, e.g. update `probeBinaryRuns` to pass an internal flag (reuse
`INTERNAL_CHECK_SUBCOMMAND` or add `__probe`) and update the entrypoint guard
(the logic that decides to run `applyPendingUpdate()` /
`scheduleBackgroundUpdateCheck()`) to skip scheduling when that sentinel is
present, or alternatively add a guard so `scheduleBackgroundUpdateCheck()` is
not called when `--version` / `--help` / the probe sentinel is detected.

In `@src/lib/release.ts`:
- Around line 225-228: The current block in release.ts collapses real read
errors from response.text() into the misleading reason "empty SHA256SUMS body";
modify the error return logic in the tryCatch handling around response.text()
(variables text, textError and the call response.text()) so that if textError
exists you return a kind: "error" with the actual error message/details (e.g.,
textError.message or serialized error) and only use "empty SHA256SUMS body" when
no error occurred but text is empty; keep the existing shape ({ kind: "error",
reason: ... }) so callers of the sums result (e.g., update.ts and
auto-update.ts) receive the real failure reason.
Duplicate comments:
In `@README.md`:
- Around line 90-92: The fenced code block containing the line
"AUTO_UPDATE=false" is missing a language identifier (MD040); update that fence
to include a language like ini or dotenv so markdownlint passes (e.g., change
the opening backticks to ```ini or ```dotenv) for the block that contains
AUTO_UPDATE=false.
Nitpick comments:
In @.gitignore:
- Line 5: Add a short inline comment above the tasks/ entry in .gitignore
explaining what the tasks/ directory contains and why it’s ignored (e.g., build
artifacts, local task wrappers, or release scripts) so future contributors
understand its purpose; update the .gitignore near the existing tasks/ line to
include that descriptive comment.

In `@README.md`:
- Around line 67-71: The config keys table is missing the AUTO_UPDATE entry; add
a new row for `AUTO_UPDATE` in the same Config table (where `DEFAULT_BASE` and
`WORKTREE_DIR` are listed) that briefly describes its purpose (e.g.,
"Automatically update worktrees from their base branch when syncing") and shows
an example value (e.g., `true` or `false`), making sure the wording and default
match the existing "Update" section's description of AUTO_UPDATE.

In `@src/index.ts`:
- Around line 15-18: The argv guard is too broad: replace the includes-based
check with a strict positional check so the internal token only matches when
used as the subcommand; change the condition that gates applyPendingUpdate() and
scheduleBackgroundUpdateCheck() to test process.argv[2] ===
INTERNAL_CHECK_SUBCOMMAND (keeping the same functions applyPendingUpdate and
scheduleBackgroundUpdateCheck and the constant INTERNAL_CHECK_SUBCOMMAND) so the
update logic only skips when the subcommand is exactly the third argv entry.

In `@src/lib/config.ts`:
- Around line 64-77: readConfigFile currently returns validateConfig({}) for
missing file and for read/parse errors, which hides user mistakes; change it so
that when file.exists() is true but reading or parsing fails (the branches where
tryCatch(file.text()) or tryCatch(...parseConfigContent...) return an error or
no parsed data) you still return validateConfig({}) but also write a one-line
warning to stderr informing the user their config file is present but invalid.
Update the readConfigFile implementation to detect the "file present but
invalid" case (use the existing file.exists(), tryCatch results,
parseConfigContent and validateConfig calls) and emit a concise stderr message
(e.g., via console.error or Bun.stderr.write) that includes the config file path
and a short note that defaults are being used.

In `@src/lib/release.test.ts`:
- Around line 59-63: The test should more accurately mirror BSD shasum -b output
and add a numeric-prerelease ordering case: update the "parses BSD-style
asterisk prefix" test to use a single space before the asterisk (e.g., const
text = "c".repeat(64) + " *worktree-darwin-x64") so parseSha256Sums is validated
against real output, and add a small new test that exercises comparePrerelease
with numeric prerelease segments (e.g., assert comparePrerelease("rc.2","rc.10")
indicates rc.2 < rc.10) to lock in the intended non-strict-SemVer behavior.

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes

ℹ️ Review info

⚙️ Run configuration

Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: `9d33a09a-a576-4f9f-ae3d-a08135085db2`

📥 Commits

Reviewing files that changed from the base of the PR and between 5b80977a6b4bb92d77091309b533237b8f0a678b and 121de31ad1f4fed32ef7bbe8270dab92234ad902.

📒 Files selected for processing (14)

- .github/workflows/ci.yml
- .github/workflows/release.yml
- .gitignore
- CHANGELOG.md
- README.md
- package.json
- src/commands/internal-update-check.ts
- src/commands/update.ts
- src/index.ts
- src/lib/auto-update.ts
- src/lib/config.test.ts
- src/lib/config.ts
- src/lib/release.test.ts
- src/lib/release.ts
🧹 Nitpick comments (2)
src/lib/release.test.ts (1)
`1-98`: **LGTM — solid coverage of both parsers.**

Good coverage of the main semantics: v-prefix stripping, missing components, prerelease ordering, BSD vs sha256sum formats, comments/blank lines, case-normalization, and malformed-line skipping. The explicit comment on lines 40-45 documenting the intentional deviation from strict SemVer §11 (numeric segment ordering) is especially helpful for future maintainers.
A couple of small, optional suggestions if you want to harden the suite further — none are blockers:
- `parseSha256Sums` "ignores malformed lines" (lines 90-97) only tests a non-hex prefix. Consider also exercising a short hash (e.g., 63 hex chars) and a line missing the filename, since both are plausible corruption modes of a `SHA256SUMS` file.
- The BSD test (lines 67-71) uses a single space before `*`, but sha256sum/shasum typically emit two spaces. The regex `\s+` handles both, but asserting the canonical two-space form too would lock in compatibility with the actual `shasum -b -a 256` output produced by the release workflow.
- Consider one test for duplicate filenames in the input (last-wins vs first-wins) to pin down behavior, since `SHA256SUMS` generation bugs could produce duplicates and the current implementation silently overwrites.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/release.test.ts` around lines 1 - 98, Tests currently miss a couple of malformed-line variants and a duplicate-file behavior check; update the parseSha256Sums test suite to (1) add a malformed-line case with a short hash (63 hex chars) and a case with a hash but no filename to ensure parseSha256Sums drops those lines, (2) change the BSD-style test input in parseSha256Sums (the "*worktree-..." case) to use two spaces before the asterisk to mirror canonical "shasum -b" output, and (3) add a duplicate-filename test that feeds two lines with the same filename and asserts the intended overwrite rule (e.g., last-wins) so parseSha256Sums' behavior on duplicates is pinned down.

src/lib/config.ts (1)
`7-27`: **Use Zod's structured error handling in the booleanLike transformer.**

The transformer currently throws a plain `Error` for invalid values, which bypasses Zod v4's native error handling. Since the project uses Zod v4, use `ctx.addIssue()` with a custom issue code and return `z.NEVER` instead to keep validation failures within Zod's structured error system.

♻️ Zod-native validation refactor
```diff
 const booleanLike = z
   .union([z.boolean(), z.string()])
-  .transform(function (value) {
+  .transform(function (value, ctx) {
     if (typeof value === "boolean") return value;
     const normalized = value.trim().toLowerCase();
     if (
       normalized === "true" ||
       normalized === "1" ||
       normalized === "yes"
     ) {
       return true;
     }
     if (
       normalized === "false" ||
       normalized === "0" ||
       normalized === "no"
     ) {
       return false;
     }
-    throw new Error(`Expected boolean-like value, got "${value}"`);
+    ctx.addIssue({
+      code: z.ZodIssueCode.custom,
+      message: `Expected boolean-like value, got "${value}"`,
+    });
+    return z.NEVER;
   });
```

This improves compliance with the coding guideline: "Use Zod schema validation for configuration file validation" by leveraging Zod's native validation issue reporting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/config.ts` around lines 7 - 27, The transformer for booleanLike currently throws a plain Error which bypasses Zod v4 structured errors; change the transform signature to accept (value, ctx), and when the input is not a recognized boolean-like string call ctx.addIssue({ code: z.ZodIssueCode.custom, message: `Expected boolean-like value, got "${value}"` }) and return z.NEVER instead of throwing; keep the existing true/false branches and ensure z and z.NEVER / z.ZodIssueCode are referenced inside the booleanLike transformer.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/lib/config.ts`:
- Around line 7-27: The transformer for booleanLike currently throws a plain
Error which bypasses Zod v4 structured errors; change the transform signature to
accept (value, ctx), and when the input is not a recognized boolean-like string
call ctx.addIssue({ code: z.ZodIssueCode.custom, message: `Expected boolean-like
value, got "${value}"` }) and return z.NEVER instead of throwing; keep the
existing true/false branches and ensure z and z.NEVER / z.ZodIssueCode are
referenced inside the booleanLike transformer.
In `@src/lib/release.test.ts`:
- Around line 1-98: Tests currently miss a couple of malformed-line variants and
a duplicate-file behavior check; update the parseSha256Sums test suite to (1)
add a malformed-line case with a short hash (63 hex chars) and a case with a
hash but no filename to ensure parseSha256Sums drops those lines, (2) change the
BSD-style test input in parseSha256Sums (the "*worktree-..." case) to use two
spaces before the asterisk to mirror canonical "shasum -b" output, and (3) add a
duplicate-filename test that feeds two lines with the same filename and asserts
the intended overwrite rule (e.g., last-wins) so parseSha256Sums' behavior on
duplicates is pinned down.
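To make the formats these test suggestions discuss concrete, here is a toy re-implementation of the parsing rules they describe: 64-hex-char digest, one or more spaces, optional BSD `*` binary marker, malformed/blank lines skipped, digests lowercased. The real `parseSha256Sums` in `src/lib/release.ts` may differ in detail (for example, a later revision reportedly throws on duplicate filenames, which this sketch does not):

```typescript
// Toy SHA256SUMS parser: maps filename -> lowercased 64-char hex digest.
// Lines that don't match the digest-then-filename shape are skipped.
function parseSha256SumsSketch(text: string): Map<string, string> {
  const entries = new Map<string, string>();
  for (const line of text.split("\n")) {
    // 64 hex chars, whitespace, optional BSD "*" marker, then the filename
    const match = line.trim().match(/^([0-9a-fA-F]{64})\s+\*?(.+)$/);
    if (!match) continue; // blank, comment, or malformed line
    entries.set(match[2], match[1].toLowerCase());
  }
  return entries;
}
```

A short hash (63 chars) or a missing filename simply fails the regex, which is exactly the corruption-mode coverage the review asks the tests to add.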
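The numeric-prerelease ordering the tests are asked to lock in (`rc.2` before `rc.10`) can be sketched like this; the function name and exact tie-breaking rules are illustrative, not the project's `comparePrerelease`:

```typescript
// SemVer-style prerelease ordering: dot-separated fields, numeric fields
// compared as numbers (so rc.2 < rc.10), numeric before alphanumeric,
// and a shorter field list sorts first when all shared fields are equal.
function comparePrereleaseSketch(a: string, b: string): number {
  const as = a.split(".");
  const bs = b.split(".");
  const len = Math.max(as.length, bs.length);
  for (let i = 0; i < len; i++) {
    const x = as[i];
    const y = bs[i];
    if (x === undefined) return -1; // fewer fields -> lower precedence
    if (y === undefined) return 1;
    const xNum = /^\d+$/.test(x);
    const yNum = /^\d+$/.test(y);
    if (xNum && yNum) {
      const diff = Number(x) - Number(y);
      if (diff !== 0) return diff < 0 ? -1 : 1;
    } else if (xNum !== yNum) {
      return xNum ? -1 : 1; // numeric identifiers sort before alphanumeric
    } else if (x !== y) {
      return x < y ? -1 : 1; // plain lexical compare for alphanumeric fields
    }
  }
  return 0;
}
```

A naive string compare would put `rc.10` before `rc.2`, which is the regression the suggested test case guards against.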
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ea16c8a3-c749-43eb-b0ef-4bf43852cbb7
📒 Files selected for processing (6)
- README.md
- src/index.ts
- src/lib/auto-update.ts
- src/lib/config.ts
- src/lib/release.test.ts
- src/lib/release.ts
✅ Files skipped from review due to trivial changes (2)
- README.md
- src/lib/release.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/index.ts
- src/lib/auto-update.ts
…g, discriminated hash result
- auto-update.ts: re-verify staged binary hash before swap (meta sidecar now stores version+sha256); WORKTREE_NO_UPDATE guards applyPendingUpdate; user-visible stderr on apply failure; recordCheckCompleted on structural mismatches; probe stderr capture; log spawn failures; orphan-staging cleanup
- release.ts: verifyBinaryHash now returns discriminated HashResult (mismatch vs io-error); new verifyBinaryHashSync; withTimeout callback form covers body reads
- config.ts: readConfigFile throws on parse error; loadGlobalConfig propagates → isAutoUpdateDisabled fails closed; warn once per path with ~-prefixed display
- update.ts: reorder SHA256 verification BEFORE chmod; handle HashResult discriminated union
- fs-utils.ts (new): shared safeUnlink / safeUnlinkSync
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/auto-update.ts`:
- Around line 179-185: applyPendingUpdate() currently renames the staged binary
and prints the "auto-updated" message but doesn't refresh the background-update
throttle, so src/index.ts may call scheduleBackgroundUpdateCheck() and
re-download the same release; after the successful apply (immediately after
fs.renameSync(stagedPath, process.execPath) and safeUnlinkSync(metaPath)),
invoke a throttle-refresh function (e.g., refreshUpdateCheckThrottle() or
resetUpdateCheckTimestamp()) to set the last-update-check timestamp to now (or
have applyPendingUpdate() return a boolean like didApplyNewBinary = true and
have the entrypoint skip calling scheduleBackgroundUpdateCheck() when true) so
the same release is not re-staged within 24 hours; reference
applyPendingUpdate(), stagedPath, process.execPath, meta.version and
scheduleBackgroundUpdateCheck() when making the change.
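The rename-in-place mechanics this comment refers to can be sketched as follows. Paths, the helper name, and the boolean return are illustrative; the real `applyPendingUpdate()` also re-verifies the staged hash, cleans up the sidecar, and refreshes the throttle as suggested:

```typescript
import * as fs from "node:fs";

// Atomic swap sketch: rename() within one filesystem replaces the target
// in a single step, so the running binary is never half-written on disk.
// Returning a flag lets the entrypoint skip scheduling another check.
function applyStagedBinary(stagedPath: string, execPath: string): boolean {
  try {
    fs.renameSync(stagedPath, execPath);
  } catch {
    return false; // e.g. EACCES on a root-owned install dir, or no staged file
  }
  return true; // caller can skip scheduleBackgroundUpdateCheck()
}
```

Because POSIX `rename(2)` is atomic on a single filesystem, the staging directory must live next to the installed binary; a cross-device rename would fail with EXDEV.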
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a7dd4814-272a-4f4d-addc-a3d5f82a0d99
📒 Files selected for processing (5)
- src/commands/update.ts
- src/lib/auto-update.ts
- src/lib/config.ts
- src/lib/fs-utils.ts
- src/lib/release.ts
- safeUnlink*: silent on ENOENT only; emit dim stderr warning on EACCES/EPERM/etc.
- applyPendingUpdate: wrap rename in tryCatchSync; on EACCES/EPERM/EROFS clean up staged artifacts and suggest 'sudo worktree update' (breaks infinite apply-fail loop)
- runBackgroundUpdateCheck: don't burn 24h throttle on asset-missing (transient)
- verifyBinaryHashSync: switch to 64 KB chunked openSync+readSync; no full-file Buffer
- downloadAsset: stream Response directly to disk via Bun.write(destPath, response)
- release.ts: extract computeSha256Async/computeSha256Sync; dedupe 3 hashing sites
- fetchLatestRelease/downloadAsset: preserve errno via Error.cause chain
- isEaccesError: walk .cause chain so download EACCES suggests sudo
- parseSha256Sums: throw on duplicate filename entries
- probeBinaryRuns: surface 'timed out after 2000ms' instead of 'exit null'
- scheduleBackgroundUpdateCheck: rethrow non-errno errors so TypeErrors surface
- scheduleBackgroundUpdateCheck: spawn with detached:true (setsid) so background download survives terminal close
- release.yml: drop dead tail/od newline guard; shasum always emits trailing newline
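The `.cause`-chain walk mentioned for `isEaccesError` can be sketched as below; the helper name comes from the changelog entry, but the body is an illustrative guess, not the project's code:

```typescript
// Walk err and its Error.cause chain looking for an EACCES errno, so a
// wrapped download error (errno preserved via Error.cause) still triggers
// the "try sudo worktree update" suggestion.
function isEaccesError(err: unknown): boolean {
  let current: unknown = err;
  while (current instanceof Error) {
    if ((current as { code?: unknown }).code === "EACCES") return true;
    current = (current as { cause?: unknown }).cause;
  }
  return false;
}
```

The loop terminates because each step moves to `.cause`, which is `undefined` once the chain ends (and `undefined instanceof Error` is false).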
|
@coderabbitai full review |
✅ Actions performed

Full review triggered. |
… fixes — see CHANGELOG)
Actionable comments posted: 4
🧹 Nitpick comments (2)
.github/workflows/release.yml (2)
`63-66`: **Verification step prints `file` output but doesn't assert the architecture.**

A cross-compile misconfiguration (e.g., matrix `target` vs `bun_target` drift) would currently still pass this step — `file` output is only printed, never matched. Consider failing fast when the produced binary doesn't match the expected architecture.

♻️ Example hardening

```diff
 - name: Verify binary architecture and size
   run: |
     file dist/worktree-${{ matrix.target }}
     ls -lh dist/worktree-${{ matrix.target }}
+    case "${{ matrix.target }}" in
+      linux-x64) file dist/worktree-${{ matrix.target }} | grep -E 'ELF 64-bit.*x86-64' ;;
+      linux-arm64) file dist/worktree-${{ matrix.target }} | grep -E 'ELF 64-bit.*aarch64|ARM aarch64' ;;
+      darwin-x64) file dist/worktree-${{ matrix.target }} | grep -E 'Mach-O 64-bit.*x86_64' ;;
+      darwin-arm64) file dist/worktree-${{ matrix.target }} | grep -E 'Mach-O 64-bit.*arm64' ;;
+    esac
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml around lines 63 - 66, The "Verify binary architecture and size" step currently only prints file output and doesn't assert the architecture; update that step to compute the expected architecture from the build matrix (e.g., using matrix.target or bun_target) and then test the produced binary with the file command and grep the expected string, failing the job if the pattern doesn't match; locate the step by its name "Verify binary architecture and size" and the commands invoking file dist/worktree-${{ matrix.target }} and ls -lh, add a check that compares file output against the expected architecture/token and exit non-zero when it doesn't match so cross-compile drift fails fast.
`10-13`: **Add a `concurrency` guard to the release workflow.**

If a tag is force-pushed or two tags land close together, two `Release` workflow runs can race and either overwrite the draft or publish a half-uploaded release. Since the whole design hinges on `SHA256SUMS` being present before the release is made public (see the comment on lines 104–109), this is worth pinning down at the workflow level.

♻️ Suggested addition

```diff
 permissions:
   contents: write

+concurrency:
+  group: release-${{ github.ref }}
+  cancel-in-progress: false
+
 jobs:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/release.yml around lines 10 - 13, Add a top-level concurrency guard to the workflow to prevent concurrent Release runs: insert a concurrency block at the workflow root (above jobs) using a group that includes the ref (e.g. group: "release-${{ github.ref }}" or "release-${{ github.ref_name }}") and set cancel-in-progress: false so multiple runs for the same tag/ref cannot race; this ensures the existing jobs (e.g., the build job with name "Build ${{ matrix.target }}") won’t overlap and avoids races around the SHA256SUMS/publish steps referenced in the file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/release.yml:
- Around line 63-71: Add the WORKTREE_NO_UPDATE=1 environment variable to the
"Smoke-test --version" workflow step so the binary won't spawn the background
updater during CI; locate the step named "Smoke-test --version" in the release
workflow and add an env mapping (WORKTREE_NO_UPDATE: "1") that applies before
running chmod and executing ./dist/worktree-${{ matrix.target }} --version so
scheduleBackgroundUpdateCheck()/isAutoUpdateDisabled() sees the guard.
In `@src/lib/auto-update.ts`:
- Around line 360-470: The code returns early on structural binary-directory
write failures (chmodError, metaWriteError, metaRenameError, renameError and
similar) without calling recordCheckCompleted(), causing incessant retries;
update each early-return branch that handles permission/readonly/IO errors (the
blocks handling chmodError, metaWriteError, metaRenameError, renameError and the
download failure path around downloadAsset/tmpPath) to call
recordCheckCompleted() (with the same failure semantics used elsewhere in this
file) before returning so these persistent failures are throttled; locate these
branches via symbols chmodSync tryCatchSync block,
fs.writeFileSync/metaWriteError, fs.renameSync/metaRenameError,
fs.renameSync/renameError, downloadAsset and ensure recordCheckCompleted() is
invoked right before each appendLastError() return.
In `@src/lib/config.ts`:
- Around line 99-108: When scope === "project" and raw contains "AUTO_UPDATE",
remove that key from the config object before calling validateConfig so the
ignored project-level AUTO_UPDATE cannot cause validation to fail; i.e., after
parseConfigContent(content) and after calling warnOnce for
`${filePath}:AUTO_UPDATE`, create a sanitized copy of raw (or delete
raw.AUTO_UPDATE) and pass that sanitized object into tryCatchSync(() =>
validateConfig(sanitizedRaw)) so validateConfig never sees project AUTO_UPDATE.
In `@src/lib/release.ts`:
- Around line 168-180: The code only checks content-length header and then calls
Bun.write(destPath, response), which can still write arbitrarily large responses
if the header is absent or forged; replace the single Bun.write(...) call with
manual streaming from response.body (or response.arrayBuffer/reader) into the
destination while counting bytes and aborting if the running total exceeds
MAX_ASSET_BYTES, throwing an error that includes asset.name and deleting any
partial destPath on overflow; keep the existing header pre-check for early
rejection but enforce the cap during the actual stream write path referenced
here (response, destPath, MAX_ASSET_BYTES, asset.name).
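The project-scope sanitization the config.ts comment asks for can be sketched as below; the function name is hypothetical and the real `readConfigFile` in `src/lib/config.ts` also warns once per path:

```typescript
// Drop the ignored project-level AUTO_UPDATE key before validation so
// validateConfig never sees it (AUTO_UPDATE is honored globally only).
function stripIgnoredKeys(
  raw: Record<string, string>,
  scope: "global" | "project",
): Record<string, string> {
  if (scope === "project" && "AUTO_UPDATE" in raw) {
    const { AUTO_UPDATE: _ignored, ...rest } = raw;
    return rest; // sanitized copy; the original raw object is untouched
  }
  return raw;
}
```

Passing the sanitized copy into `validateConfig` means a project `.worktreerc` with `AUTO_UPDATE=junk` can no longer fail validation for a key that would be ignored anyway.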
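The enforce-while-streaming cap for `downloadAsset` can be sketched generically: the chunk source below stands in for `response.body`, the in-memory collector for the disk write, and the `MAX_ASSET_BYTES` value is illustrative:

```typescript
// Count bytes as chunks arrive and abort as soon as the running total
// exceeds the cap, so a missing or forged content-length header cannot
// cause an unbounded write.
async function collectWithCap(
  chunks: AsyncIterable<Uint8Array>,
  maxBytes: number,
): Promise<Uint8Array> {
  const parts: Uint8Array[] = [];
  let total = 0;
  for await (const chunk of chunks) {
    total += chunk.byteLength;
    if (total > maxBytes) {
      // the real code would also unlink the partially written destPath here
      throw new Error(`asset exceeds ${maxBytes} bytes`);
    }
    parts.push(chunk);
  }
  const out = new Uint8Array(total);
  let offset = 0;
  for (const part of parts) {
    out.set(part, offset);
    offset += part.byteLength;
  }
  return out;
}

// Stand-in chunk source for the sketch.
async function* toChunks(data: Uint8Array, size: number): AsyncGenerator<Uint8Array> {
  for (let i = 0; i < data.length; i += size) {
    yield data.subarray(i, i + size);
  }
}
```

Keeping the header pre-check as an early rejection and this in-stream counter as the hard limit gives defense in depth without buffering the asset.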
---
Nitpick comments:
In @.github/workflows/release.yml:
- Around line 63-66: The "Verify binary architecture and size" step currently
only prints file output and doesn't assert the architecture; update that step to
compute the expected architecture from the build matrix (e.g., using
matrix.target or bun_target) and then test the produced binary with the file
command and grep the expected string, failing the job if the pattern doesn't
match; locate the step by its name "Verify binary architecture and size" and the
commands invoking file dist/worktree-${{ matrix.target }} and ls -lh, add a
check that compares file output against the expected architecture/token and exit
non-zero when it doesn't match so cross-compile drift fails fast.
- Around line 10-13: Add a top-level concurrency guard to the workflow to
prevent concurrent Release runs: insert a concurrency block at the workflow root
(above jobs) using a group that includes the ref (e.g. group: "release-${{
github.ref }}" or "release-${{ github.ref_name }}") and set cancel-in-progress:
false so multiple runs for the same tag/ref cannot race; this ensures the
existing jobs (e.g., the build job with name "Build ${{ matrix.target }}") won’t
overlap and avoids races around the SHA256SUMS/publish steps referenced in the
file.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e32c0328-4649-4ffb-8ba6-22f12e06aca6
📒 Files selected for processing (8)
- .github/workflows/release.yml
- CHANGELOG.md
- README.md
- src/commands/update.ts
- src/lib/auto-update.ts
- src/lib/config.ts
- src/lib/fs-utils.ts
- src/lib/release.ts
✅ Files skipped from review due to trivial changes (2)
- README.md
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (2)
- src/lib/fs-utils.ts
- src/commands/update.ts
…fail-loud config (F1-F23)
@coderabbitai resume

✅ Actions performed: Reviews resumed.
Actionable comments posted: 6
♻️ Duplicate comments (1)
src/lib/auto-update.ts (1)
430-480: ⚠️ Potential issue | 🟡 Minor

Throttle persistent download/chmod write failures too.

The structural-failure throttling was added for sidecar/stage renames, but download and chmod errors still return without recordCheckCompleted(). Root-owned or read-only install directories can keep retrying every launch and burn GitHub quota.

🛠️ Proposed fix

```diff
   if (dlError) {
     safeUnlinkSync(tmpPath);
     appendLastError("check", `download: ${dlError.message}`);
+    if (classifyWriteError(dlError) !== null) {
+      recordCheckCompleted();
+    }
     return;
   }
@@
   if (chmodError) {
     safeUnlinkSync(tmpPath);
     appendLastError("check", `chmod: ${chmodError.message}`);
+    if (classifyWriteError(chmodError) !== null) {
+      recordCheckCompleted();
+    }
     return;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/auto-update.ts` around lines 430 - 480, The download and chmod failure branches never call recordCheckCompleted(), so persistent failures (e.g., root-owned/read-only installs) keep retrying; update the dlError and chmodError handlers to call recordCheckCompleted() before returning. Specifically, in the block handling const { error: dlError } = await tryCatch(downloadAsset(...)) add recordCheckCompleted() after appendLastError("check", `download: ${dlError.message}`) (and after safeUnlinkSync), and in the chmodError branch (where tryCatchSync wraps fs.chmodSync) add recordCheckCompleted() after appendLastError("check", `chmod: ${chmodError.message}`) (and after safeUnlinkSync). Keep the existing safeUnlinkSync and appendLastError calls and only add recordCheckCompleted() to throttle persistent structural failures.
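The `classifyWriteError` helper the proposed fix leans on exists in the codebase, but its exact contract isn't shown here. A hypothetical stand-in with the behavior the fix assumes (non-null for "structural" failures that won't fix themselves, null otherwise) could look like:

```typescript
// Hypothetical stand-in for classifyWriteError: which errno codes count as
// "structural" is an assumption for illustration, not the project's real list.
type ErrnoLike = { code?: string };

const STRUCTURAL_CODES = new Set(["EACCES", "EPERM", "EROFS", "ENOSPC"]);

function classifyWriteError(error: unknown): string | null {
  const code = (error as ErrnoLike)?.code;
  if (code !== undefined && STRUCTURAL_CODES.has(code)) return code;
  return null; // transient or unknown: let the next launch retry
}
```

Returning the code (rather than a boolean) lets callers both throttle and record a specific reason via `appendLastError`.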
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CHANGELOG.md`:
- Around line 8-63: Add a top-level "## [Unreleased]" section above the existing
"## [1.3.0] - 2026-04-17" and move only user-facing items into the 1.3.0 release
notes; rewrite entries that mention internal symbols (e.g.
verifyAssetAgainstSums, Object.create(null), timingSafeEqual, MAX_ASSET_BYTES,
applyPendingUpdate, fetchLatestRelease, downloadAsset, appendLastError) to plain
user-facing outcomes (security, reliability, performance, UX) and collapse
implementation details into short "Changed" or "Security" bullets; remove
developer-oriented plumbing notes and keep the categories
Added/Changed/Fixed/Security/Tests concise and in Keep a Changelog tone so the
file reads as a user-facing changelog with an Unreleased header.
In `@src/lib/auto-update.ts`:
- Around line 162-165: applyPendingUpdate currently only respects
WORKTREE_NO_UPDATE and can install a staged binary even when the user has
AUTO_UPDATE=false; update applyPendingUpdate to also honor the same auto-update
decision by either (a) synchronously checking the user's config flag before
proceeding (read the RC/ENV that backs shouldAutoUpdate) or (b) converting the
startup path that calls applyPendingUpdate to await shouldAutoUpdate() and skip
applying when it returns false; modify the function (applyPendingUpdate) to call
the same gate (shouldAutoUpdate) or its synchronous equivalent alongside the
existing PROCESS env check and keep the early return behavior used with
isStandalone().
- Around line 105-119: rotateErrorLogIfOversized currently warns via
warnLogFailureOnce when fs.statSync or other file ops throw ENOENT on first
append; change the helper (rotateErrorLogIfOversized) to treat a missing file as
non-error by detecting error.code === "ENOENT" (or error instanceof Error &&
(error as any).code === "ENOENT") and simply return without calling
warnLogFailureOnce, while preserving existing behavior for other errors so they
still call warnLogFailureOnce with the error message.
- Around line 341-350: The tryCatchSync call that opens the cache log
(ensureCacheDir/getLastErrorPath) currently swallows errors and falls back to
stderr: "ignore", allowing Bun.spawn to run even when cache is unwritable;
modify the logic around the tryCatchSync result (the stderrFd variable) to
detect and capture the error, set/latch the hasCacheWriteFailed flag, call the
warning path once, and return early (skip the Bun.spawn that calls
process.execPath with INTERNAL_CHECK_SUBCOMMAND) when the open failed so the
updater is not spawned; keep references to tryCatchSync, ensureCacheDir,
getLastErrorPath, stderrFd, hasCacheWriteFailed, Bun.spawn, and
INTERNAL_CHECK_SUBCOMMAND to locate and update the code.
In `@src/lib/release.ts`:
- Around line 238-272: The current download loop in response.body.getReader()
buffers all chunks into memory then allocates a second full-size Uint8Array
before calling Bun.write(destPath, buffer), which can double memory usage;
instead, open the destination file for streaming and write each chunk as it
arrives while still counting bytesReceived against MAX_ASSET_BYTES and
preserving the same empty-body and cap errors that reference asset.name.
Concretely: replace the chunks array + final buffer assembly with a file writer
(or Bun.file/Bun.open) opened before the read loop, call its write method for
each chunk inside the loop, enforce the bytesReceived > MAX_ASSET_BYTES check
before writing, and ensure the file handle and the reader.releaseLock() are both
closed in finally (and only keep the empty-body error if bytesReceived === 0).
- Around line 153-164: The fetch currently follows redirects automatically and
only validates the final response URL after connections to intermediate hosts;
change the logic in the block around the fetch call so you call fetch(url, {
signal: controller.signal, headers: buildGitHubHeaders(), redirect: 'manual' })
and then manually follow redirects in a loop with a hop limit (e.g., 5). On each
3xx response read the Location header, resolve it against the current response
URL, call isAllowedReleaseHost(resolvedLocation) before issuing the next fetch,
and only fetch the next hop if allowed; once you reach a non-redirect response
call handler(response) as before. Ensure you still pass controller.signal to
each subsequent fetch, handle missing/invalid Location headers as errors, and
reuse the existing isAllowedReleaseHost, buildGitHubHeaders, controller, and
handler symbols.
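The hop-resolution step of the manual-redirect loop described above can be isolated as pure logic. The allowed-host list below is an assumption standing in for the project's `isAllowedReleaseHost`; the loop calling this function would still pass `controller.signal` and a hop limit.

```typescript
// Assumed allowlist for illustration; GitHub asset downloads commonly redirect
// to hosts like objects.githubusercontent.com.
const ALLOWED_HOSTS = new Set([
  "github.com",
  "api.github.com",
  "objects.githubusercontent.com",
]);

function isAllowedReleaseHost(url: URL): boolean {
  return ALLOWED_HOSTS.has(url.hostname);
}

// Resolve a Location header against the current response URL and vet the host
// before the next fetch. Throws on missing Location or a disallowed hop.
function resolveRedirectHop(currentUrl: string, location: string | null): URL {
  if (!location) throw new Error("redirect with missing Location header");
  const next = new URL(location, currentUrl); // handles relative Locations
  if (!isAllowedReleaseHost(next)) {
    throw new Error(`redirect to disallowed host: ${next.hostname}`);
  }
  return next;
}
```

Because the host check runs before each `fetch`, no connection is ever opened to an intermediate host that fails the allowlist, which is the property the comment is after.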
---
Duplicate comments:
In `@src/lib/auto-update.ts`:
- Around line 430-480: The download and chmod failure branches never call
recordCheckCompleted(), so persistent failures (e.g., root-owned/read-only
installs) keep retrying; update the dlError and chmodError handlers to call
recordCheckCompleted() before returning. Specifically, in the block handling
const { error: dlError } = await tryCatch(downloadAsset(...)) add
recordCheckCompleted() after appendLastError("check", `download:
${dlError.message}`) (and after safeUnlinkSync), and in the chmodError branch
(where tryCatchSync wraps fs.chmodSync) add recordCheckCompleted() after
appendLastError("check", `chmod: ${chmodError.message}`) (and after
safeUnlinkSync). Keep the existing safeUnlinkSync and appendLastError calls and
only add recordCheckCompleted() to throttle persistent structural failures.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4305d304-3cff-494e-8a2a-80c51a2e51c1
📒 Files selected for processing (11)
- .github/workflows/release.yml
- CHANGELOG.md
- README.md
- src/commands/internal-update-check.ts
- src/commands/update.ts
- src/index.ts
- src/lib/auto-update.ts
- src/lib/config.ts
- src/lib/fs-utils.ts
- src/lib/release.test.ts
- src/lib/release.ts
✅ Files skipped from review due to trivial changes (1)
- src/lib/release.test.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- src/commands/internal-update-check.ts
- README.md
- src/index.ts
♻️ Duplicate comments (1)
CHANGELOG.md (1)
8-53: ⚠️ Potential issue | 🟡 Minor

Add `[Unreleased]` section and fold "Tests" into a standard category.

Two deviations from the repo's changelog format:

- Missing `## [Unreleased]` header above `## [1.3.0]`. The coding guidelines require it, and the PR's own commit message (docs(changelog): add [Unreleased] and rewrite 1.3.0 user-facing) claims it was added — current file state still jumps straight to 1.3.0.
- `### Tests` (line 53) is not one of the allowed categories. Move those bullets under `### Added` (or drop them — test additions generally aren't user-facing).

📝 Proposed diff

```diff
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [Unreleased]
+
+### Added
+
+### Changed
+
+### Fixed
+
+### Removed

 ## [1.3.0] - 2026-04-17
@@
-### Tests
-
-- Added coverage for the SHA256 verification flow across all result shapes (legacy, normal, transient, permanent, missing-entry, hash-mismatch, hash-io-error) using stubbed `fetch` and a precomputed-hash asset file — pins the safety contract against future refactors.
-- Added coverage for the `SHA256SUMS` parser's duplicate-entry rejection.
-
 ## [1.2.0] - 2026-04-17
```

As per coding guidelines: "Update CHANGELOG.md before committing changes, following Keep a Changelog format with [Unreleased] section and categories: Added, Changed, Fixed, Removed".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` around lines 8 - 53, Add a top-level "## [Unreleased]" section above the existing "## [1.3.0] - 2026-04-17" header and move the bullets currently under the nonstandard "### Tests" section into an allowed category (preferably "### Added") or remove them if non-user-facing; ensure headings follow the allowed set (Added, Changed, Fixed, Removed) and that the "### Tests" header no longer appears in CHANGELOG.md.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@CHANGELOG.md`:
- Around line 8-53: Add a top-level "## [Unreleased]" section above the existing
"## [1.3.0] - 2026-04-17" header and move the bullets currently under the
nonstandard "### Tests" section into an allowed category (preferably "###
Added") or remove them if non-user-facing; ensure headings follow the allowed
set (Added, Changed, Fixed, Removed) and that the "### Tests" header no longer
appears in CHANGELOG.md.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 97209920-945a-4200-afac-81f99b45f82d
📒 Files selected for processing (5)
- CHANGELOG.md
- src/lib/auto-update.ts
- src/lib/config.ts
- src/lib/fs-utils.ts
- src/lib/release.ts
✅ Files skipped from review due to trivial changes (1)
- src/lib/fs-utils.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/auto-update.ts
🧹 Nitpick comments (1)
src/lib/config.ts (1)
94-105: Inconsistent `file.exists()` error handling vs. `shouldAutoUpdate`.

`shouldAutoUpdate` (line 162) explicitly wraps `Bun.file(...).exists()` in `tryCatch` with the rationale "`file.exists()` can throw EACCES; guard to avoid crashing the scheduler." The same call here is unguarded, so an EACCES (or other stat error) on a project `.worktreerc` will throw out of `loadConfig` and crash every command that loads config — despite the existing `warnOnce` + fallback-to-defaults pattern right below. Consider mirroring the guard so read-phase failures all funnel through `warnOnce` and default-config fallback.

Proposed fix

```diff
   const file = Bun.file(filePath);
-  const isExists = await file.exists();
+  const display = displayPath(filePath);
+  const { data: isExists, error: existsError } = await tryCatch(
+    file.exists()
+  );
+  if (existsError) {
+    warnOnce(
+      filePath,
+      `warning: could not stat ${display}: ${existsError.message}. Using defaults.`
+    );
+    return validateConfig({});
+  }
   if (!isExists) return validateConfig({});
-  const display = displayPath(filePath);
   const { data: content, error: readError } = await tryCatch(file.text());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/config.ts` around lines 94 - 105, The unguarded call to file.exists() in loadConfig can throw (EACCES) and should be wrapped with tryCatch like shouldAutoUpdate does; change the logic around file.exists() to use const { data: isExists, error: existsError } = await tryCatch(file.exists()), and if existsError treat it like a read failure by calling warnOnce(filePath, `warning: could not read ${display}: ${existsError.message}. Using defaults.`) and return validateConfig({}); keep using displayPath, filePath, tryCatch, warnOnce and validateConfig so all read-phase failures consistently fall back to defaults.
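The `{ data, error }` destructuring in the suggested fix relies on the project's `tryCatch` result-wrapper. A minimal sketch of that pattern (the real helper in this repo may differ in shape) is:

```typescript
// Sketch of a tryCatch result wrapper: converts a throwing promise into a
// { data, error } pair so call sites can branch without try/catch blocks.
type Result<T> = { data: T; error: null } | { data: null; error: Error };

async function tryCatch<T>(promise: Promise<T>): Promise<Result<T>> {
  try {
    return { data: await promise, error: null };
  } catch (e) {
    return { data: null, error: e instanceof Error ? e : new Error(String(e)) };
  }
}
```

With this shape, guarding the stat becomes `const { data: isExists, error } = await tryCatch(file.exists());` followed by an early defaults return when `error` is set.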
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/lib/config.ts`:
- Around line 94-105: The unguarded call to file.exists() in loadConfig can
throw (EACCES) and should be wrapped with tryCatch like shouldAutoUpdate does;
change the logic around file.exists() to use const { data: isExists, error:
existsError } = await tryCatch(file.exists()), and if existsError treat it like
a read failure by calling warnOnce(filePath, `warning: could not read
${display}: ${existsError.message}. Using defaults.`) and return
validateConfig({}); keep using displayPath, filePath, tryCatch, warnOnce and
validateConfig so all read-phase failures consistently fall back to defaults.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0dd2e80a-46ab-469e-be50-43a7050c13d0
📒 Files selected for processing (7)
- .github/workflows/release.yml
- CHANGELOG.md
- src/commands/update.ts
- src/lib/auto-update.ts
- src/lib/config.ts
- src/lib/release.test.ts
- src/lib/release.ts
✅ Files skipped from review due to trivial changes (1)
- src/lib/release.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/commands/update.ts
- src/lib/auto-update.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/config.ts`:
- Around line 94-96: The unguarded await on file.exists() in loadConfig can
throw (e.g., permission/stat errors) and crash loadConfig; wrap the exists()
call in the same try/catch behavior used elsewhere (or call the existing
tryCatch helper) so errors are caught and we fall back to validateConfig({})
with a logged warning instead of throwing; specifically, update the logic around
Bun.file(filePath) / file.exists() in loadConfig to catch exceptions from
file.exists() and handle them by logging and returning validateConfig({}).
Summary by CodeRabbit

New Features

- `update` command, and a hidden internal check runner.

Integrity

Bug Fixes

Documentation

Tests

Chores