From e752467e6f78234869a2bca6fc93aa714b819e6f Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Sun, 15 Feb 2026 16:40:19 +0100 Subject: [PATCH 01/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Enhance documentation and implementation for deploy label and merge flow. Added new sections in README and index.mdx, and improved MergeRepository to wait for PR-specific check runs before merging. Updated tests for MergeRepository and DeployedActionUseCase to cover new behavior and ensure correctness in handling merges and check waits. --- README.md | 1 + .../single-actions/deploy-label-and-merge.mdx | 69 ++++++++ docs/single-actions/index.mdx | 4 + .../__tests__/merge_repository.test.ts | 88 +++++++++- src/data/repository/merge_repository.ts | 46 +++++- .../deployed_action_use_case.test.ts | 154 +++++++++--------- .../actions/deployed_action_use_case.ts | 10 ++ 7 files changed, 286 insertions(+), 86 deletions(-) create mode 100644 docs/single-actions/deploy-label-and-merge.mdx diff --git a/README.md b/README.md index 614f99e8..8fc42bba 100644 --- a/README.md +++ b/README.md @@ -23,6 +23,7 @@ Full documentation: **[docs.page/vypdev/copilot](https://docs.page/vypdev/copilo | [OpenCode (AI)](https://docs.page/vypdev/copilot/opencode-integration) | Progress, Bugbot, think, AI PR description | | [Testing OpenCode locally](https://docs.page/vypdev/copilot/testing-opencode-plan-locally) | Run check-progress, detect-potential-problems, recommend-steps via CLI | | [Single actions](https://docs.page/vypdev/copilot/single-actions) | On-demand: check progress, think, create release/tag, deployed | +| [Deploy label and merge flow](docs/single-actions/deploy-label-and-merge.mdx) | Deploy/deployed labels, post-deploy merges, waiting for checks per PR | | [Issues](https://docs.page/vypdev/copilot/issues) | Issue configuration and types (feature, bugfix, hotfix, release, docs, chore) | | [Pull requests](https://docs.page/vypdev/copilot/pull-requests) | PR 
configuration and AI description | | [Troubleshooting](https://docs.page/vypdev/copilot/troubleshooting) | Common issues and solutions | diff --git a/docs/single-actions/deploy-label-and-merge.mdx b/docs/single-actions/deploy-label-and-merge.mdx new file mode 100644 index 00000000..9cba6484 --- /dev/null +++ b/docs/single-actions/deploy-label-and-merge.mdx @@ -0,0 +1,69 @@ +--- +title: Deploy label and merge flow +description: How the deploy and deployed labels work and how post-deploy merges and check runs are handled. +--- + +# Deploy label and merge flow + +This page describes how the **deploy** label is used and what happens when you run the **deployed** single action after a successful deployment. It also explains how branch merges and check runs are handled so that both release→default and release→develop (or hotfix equivalents) complete correctly. + +## The deploy label + +- **`deploy`** — Applied to the issue when the release (or hotfix) is ready to be deployed. Your CI/CD or team adds this label when deployment has been triggered or is about to be. +- **`deployed`** — Set by Copilot when the **deployed** single action runs and the issue had the **deploy** label. It indicates "deployment is done and post-deploy steps have been run." + +The **deployed** single action is intended to be run **after a successful deployment** (e.g. from a pipeline step or a manual trigger). It requires: + +- An issue number (single action is run with `single-action-issue: `). +- The issue must have the **deploy** label and must **not** already have the **deployed** label. + +## What the deployed action does + +1. **Label update** + Replaces the **deploy** label with **deployed** on the issue. + +2. **Branch merges** (if a release or hotfix branch is configured): + - **Release:** merges the release branch into the default branch (e.g. `main`), then into the development branch (e.g. `develop`). 
+ - **Hotfix:** merges the hotfix branch into the default branch, then merges the default branch into the development branch. + +3. **Issue closure** + If **all** merge operations succeed, the issue is closed. If any merge fails, the issue is left open and the action reports that one or more merge operations failed. + +Each merge is done via a **pull request**: create PR, wait for checks (see below), then merge the PR. If the PR workflow fails (e.g. checks fail or timeout), the code falls back to a **direct** Git merge when possible. + +## Waiting for checks before merging + +For each PR we create (e.g. release→main, release→develop), we wait until the PR is ready to merge before calling the merge API. + +### Per-PR check runs + +- GitHub's Checks API returns check runs **by ref** (branch), not by PR. The same branch can be the head of **multiple PRs** (e.g. release→main and release→develop). +- We **only consider check runs that belong to the current PR**, using the `pull_requests` array on each check run (filter by `pull_request.number === this PR`). We do **not** use check runs from another PR (e.g. the first merge) to decide that the second PR is ready. +- We wait until all check runs for **this** PR are completed and none have failed. Only then do we merge. + +### When the PR has no check runs + +- If there are **no** check runs on the ref, we use **commit status checks** (legacy) and merge when none are pending. +- If there **are** check runs on the ref but **none** for this PR (e.g. they are from another PR with the same head, or this base branch has no required checks), we wait a short number of polls (~30 seconds). If no check runs appear for this PR in that window, we assume this PR does not require check runs and **proceed to merge**. If the branch actually requires checks, GitHub will reject the merge and we fall back to the direct merge path (which may also fail under branch protection). 
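+The per-PR filtering described above can be sketched as follows. This is an illustrative simplification, not the repository's actual code: the interface is a hand-picked subset of the Checks API response shape, and the function names (`runsForPr`, `isReadyToMerge`) are hypothetical.
+
+```typescript
+// Illustrative subset of a GitHub check run; names mirror the API fields we rely on.
+interface CheckRun {
+  name: string;
+  status: string;              // 'queued' | 'in_progress' | 'completed'
+  conclusion: string | null;   // 'success' | 'failure' | ... | null until completed
+  pull_requests?: Array<{ number: number }>;
+}
+
+// Keep only the runs tied to the given PR; runs from another PR that shares
+// the same head branch are ignored.
+function runsForPr(checkRuns: CheckRun[], prNumber: number): CheckRun[] {
+  return checkRuns.filter(run =>
+    run.pull_requests?.some(pr => pr.number === prNumber),
+  );
+}
+
+// A PR is ready to merge when all of its runs are completed and none failed.
+// When it has no runs at all, the caller decides (wait a few polls, then merge).
+function isReadyToMerge(checkRuns: CheckRun[], prNumber: number): boolean {
+  const runs = runsForPr(checkRuns, prNumber);
+  if (runs.length === 0) return false;
+  return runs.every(r => r.status === 'completed')
+    && runs.every(r => r.conclusion !== 'failure');
+}
+```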
+ +### Timeout + +- A configurable **merge timeout** (input `merge-timeout`, default 600 seconds) limits how long we wait for checks **per merge**. If we hit the timeout, we throw and the code attempts a direct merge as fallback. + +## Summary table + +| Scenario | Behaviour | +|----------|-----------| +| PR has check runs for this PR | Wait until all are completed and none failed, then merge. | +| PR has no check runs on ref | Use status checks; if none pending, merge. | +| Check runs on ref but none for this PR | Wait a few polls (~30 s); if still none, proceed to merge (branch may have no required checks). | +| Timeout waiting for checks | Attempt direct merge. | +| PR merge fails, direct merge fails | Return failure; issue is not closed. | + +## Related code and tests + +- **Use case:** `src/usecase/actions/deployed_action_use_case.ts` — deploy label handling, merge order, issue close. +- **Merge and checks:** `src/data/repository/merge_repository.ts` — PR creation, per-PR check filtering, status checks fallback, direct merge fallback. +- **Tests:** + - `src/usecase/actions/__tests__/deployed_action_use_case.test.ts` — deploy label validation, release/hotfix merge order, close on success, no close on merge failure. + - `src/data/repository/__tests__/merge_repository.test.ts` — check runs per PR, no check runs, timeout, direct merge fallback. diff --git a/docs/single-actions/index.mdx b/docs/single-actions/index.mdx index 160b7ee2..c8557812 100644 --- a/docs/single-actions/index.mdx +++ b/docs/single-actions/index.mdx @@ -14,6 +14,9 @@ When you set the **`single-action`** input (and any required targets such as `si single-action, single-action-issue, single-action-version, and other inputs. + + Deploy/deployed labels, post-deploy merges (release/hotfix → default and develop), and how we wait for checks per PR. + Run from a GitHub Actions workflow or from the giik CLI. 
@@ -38,5 +41,6 @@ When you set the **`single-action`** input (and any required targets such as `si ## Next steps - **[Available actions](/single-actions/available-actions)** — Complete list with inputs and descriptions. +- **[Deploy label and merge flow](/single-actions/deploy-label-and-merge)** — Deploy/deployed labels and post-deploy merges. - **[Workflow & CLI](/single-actions/workflow-and-cli)** — How to run from a workflow and from the CLI. - **[Examples](/single-actions/examples)** — YAML and CLI examples. diff --git a/src/data/repository/__tests__/merge_repository.test.ts b/src/data/repository/__tests__/merge_repository.test.ts index 470ab780..7008cfe6 100644 --- a/src/data/repository/__tests__/merge_repository.test.ts +++ b/src/data/repository/__tests__/merge_repository.test.ts @@ -1,5 +1,8 @@ /** - * Unit tests for MergeRepository: mergeBranch (PR merge and direct merge fallback). + * Unit tests for MergeRepository.mergeBranch: PR creation, waiting for checks per PR, + * fallback when the PR has no check runs, timeout, and direct merge fallback. + * + * Used by the deploy flow (release/hotfix → default and develop). See docs/single-actions/deploy-label-and-merge.mdx. 
*/ import { MergeRepository } from '../merge_repository'; @@ -50,6 +53,7 @@ describe('MergeRepository', () => { mockReposGetCombinedStatusForRef.mockReset(); }); + describe('PR creation and merge (timeout <= 10 skips waiting for checks)', () => { it('creates PR, updates body, merges and returns success (timeout <= 10 skips wait)', async () => { mockPullsCreate.mockResolvedValue({ data: { number: 42 }, @@ -136,7 +140,9 @@ describe('MergeRepository', () => { expect(result.some(r => r.success === false && r.steps?.some(s => s.includes('Failed to merge')))).toBe(true); expect(result.length).toBeGreaterThanOrEqual(2); }); + }); + describe('waiting for checks (timeout > 10): per-PR check runs, no checks, timeout', () => { it('when timeout > 10 waits for check runs (all completed) then merges', async () => { mockPullsCreate.mockResolvedValue({ data: { number: 1 } }); mockPullsListCommits.mockResolvedValue({ data: [{ commit: { message: 'msg' } }] }); @@ -145,7 +151,7 @@ describe('MergeRepository', () => { mockChecksListForRef.mockResolvedValue({ data: { check_runs: [ - { name: 'ci', status: 'completed', conclusion: 'success' }, + { name: 'ci', status: 'completed', conclusion: 'success', pull_requests: [{ number: 1 }] }, ], }, }); @@ -179,7 +185,7 @@ describe('MergeRepository', () => { mockChecksListForRef.mockResolvedValue({ data: { check_runs: [ - { name: 'ci', status: 'completed', conclusion: 'failure' }, + { name: 'ci', status: 'completed', conclusion: 'failure', pull_requests: [{ number: 1 }] }, ], }, }); @@ -214,6 +220,79 @@ describe('MergeRepository', () => { expect(mockReposGetCombinedStatusForRef).toHaveBeenCalled(); }); + it('when timeout > 10 waits only for check runs tied to this PR (ignores runs from other PRs with same head)', async () => { + jest.useFakeTimers(); + mockPullsCreate.mockResolvedValue({ data: { number: 42 } }); + mockPullsListCommits.mockResolvedValue({ data: [{ commit: { message: 'msg' } }] }); + mockPullsUpdate.mockResolvedValue({}); + 
mockPullsMerge.mockResolvedValue({}); + // First poll: runs exist but for another PR (e.g. release→master already merged). We must not treat as completed. + mockChecksListForRef + .mockResolvedValueOnce({ + data: { + check_runs: [ + { name: 'ci', status: 'completed', conclusion: 'success', pull_requests: [{ number: 1 }] }, + ], + }, + }) + .mockResolvedValueOnce({ + data: { + check_runs: [ + { name: 'ci', status: 'completed', conclusion: 'success', pull_requests: [{ number: 42 }] }, + ], + }, + }); + mockReposGetCombinedStatusForRef.mockResolvedValue({ + data: { state: 'success', statuses: [] }, + }); + + const promise = repo.mergeBranch( + 'owner', + 'repo', + 'release/1.0', + 'develop', + 30, + 'token', + ); + await jest.runAllTimersAsync(); + const result = await promise; + + jest.useRealTimers(); + expect(result).toHaveLength(1); + expect(result[0].success).toBe(true); + expect(mockChecksListForRef).toHaveBeenCalledTimes(2); + expect(mockPullsMerge).toHaveBeenCalled(); + }); + + it('when timeout > 10 and no check runs for this PR after a few polls, proceeds to merge (branch may have no required checks)', async () => { + jest.useFakeTimers(); + mockPullsCreate.mockResolvedValue({ data: { number: 99 } }); + mockPullsListCommits.mockResolvedValue({ data: [] }); + mockPullsUpdate.mockResolvedValue({}); + mockPullsMerge.mockResolvedValue({}); + // Check runs on ref are for another PR only; this PR (99) has none (e.g. develop has no required checks). 
+ mockChecksListForRef.mockResolvedValue({ + data: { + check_runs: [ + { name: 'ci', status: 'completed', conclusion: 'success', pull_requests: [{ number: 1 }] }, + ], + }, + }); + mockReposGetCombinedStatusForRef.mockResolvedValue({ + data: { state: 'success', statuses: [] }, + }); + + const promise = repo.mergeBranch('o', 'r', 'release/1.0', 'develop', 60, 'token'); + await jest.runAllTimersAsync(); + const result = await promise; + + jest.useRealTimers(); + expect(result).toHaveLength(1); + expect(result[0].success).toBe(true); + expect(mockChecksListForRef).toHaveBeenCalledTimes(3); + expect(mockPullsMerge).toHaveBeenCalled(); + }); + it('when timeout > 10 and checks never complete throws then direct merge succeeds', async () => { jest.useFakeTimers(); mockPullsCreate.mockResolvedValue({ data: { number: 1 } }); @@ -221,7 +300,7 @@ describe('MergeRepository', () => { mockPullsUpdate.mockResolvedValue({}); mockChecksListForRef.mockResolvedValue({ data: { - check_runs: [{ name: 'ci', status: 'in_progress', conclusion: null }], + check_runs: [{ name: 'ci', status: 'in_progress', conclusion: null, pull_requests: [{ number: 1 }] }], }, }); mockReposGetCombinedStatusForRef.mockResolvedValue({ @@ -236,4 +315,5 @@ describe('MergeRepository', () => { jest.useRealTimers(); expect(result.some(r => r.success === true && r.steps?.some(s => s.includes('direct merge')))).toBe(true); }); + }); }); diff --git a/src/data/repository/merge_repository.ts b/src/data/repository/merge_repository.ts index 5181bdf5..3b84fcc7 100644 --- a/src/data/repository/merge_repository.ts +++ b/src/data/repository/merge_repository.ts @@ -3,8 +3,14 @@ import { logDebugInfo, logError } from '../../utils/logger'; import { Result } from '../model/result'; /** - * Repository for merging branches (via PR or direct merge). - * Isolated to allow unit tests with mocked Octokit. 
+ * Repository for merging branches: creates a PR, waits for that PR's check runs (or status checks), + * then merges the PR; on failure, falls back to a direct Git merge. + * + * Check runs are filtered by PR (pull_requests) so we only wait for the current PR's checks, + * not those of another PR sharing the same head (e.g. release→main vs release→develop). + * If the PR has no check runs after a short wait, we proceed to merge (branch may have no required checks). + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the deploy flow and check-wait behaviour. */ export class MergeRepository { @@ -61,10 +67,13 @@ This PR merges **${head}** into **${base}**. }); const iteration = 10; + /** Give workflows a short window to register check runs for this PR; after this, we allow merge with no check runs (e.g. branch has no required checks). */ + const maxWaitForPrChecksAttempts = 3; if (timeout > iteration) { // Wait for checks to complete - can use regular token for reading checks let checksCompleted = false; let attempts = 0; + let waitForPrChecksAttempts = 0; const maxAttempts = timeout > iteration ? Math.floor(timeout / iteration) : iteration; while (!checksCompleted && attempts < maxAttempts) { @@ -74,6 +83,14 @@ This PR merges **${head}** into **${base}**. ref: head, }); + // Only consider check runs that are for this PR. When the same branch is used in + // multiple PRs (e.g. release→master and release→develop), listForRef returns runs + // for all PRs; we must wait for runs tied to the current PR or we may see completed + // runs from the other PR and merge before this PR's checks have run. 
+ const runsForThisPr = (checkRuns.check_runs as Array<{ status: string; conclusion: string | null; name: string; pull_requests?: Array<{ number: number }> }>).filter( + run => run.pull_requests?.some(pr => pr.number === pullRequest.number), + ); + // Get commit status checks for the PR head commit const { data: commitStatus } = await octokit.rest.repos.getCombinedStatusForRef({ owner: owner, @@ -82,11 +99,11 @@ This PR merges **${head}** into **${base}**. }); logDebugInfo(`Combined status state: ${commitStatus.state}`); - logDebugInfo(`Number of check runs: ${checkRuns.check_runs.length}`); + logDebugInfo(`Number of check runs for this PR: ${runsForThisPr.length} (total on ref: ${checkRuns.check_runs.length})`); - // If there are check runs, prioritize those over status checks - if (checkRuns.check_runs.length > 0) { - const pendingCheckRuns = checkRuns.check_runs.filter( + // If there are check runs for this PR, wait for them to complete + if (runsForThisPr.length > 0) { + const pendingCheckRuns = runsForThisPr.filter( check => check.status !== 'completed', ); @@ -95,7 +112,7 @@ This PR merges **${head}** into **${base}**. logDebugInfo('All check runs have completed.'); // Verify if all checks passed - const failedChecks = checkRuns.check_runs.filter( + const failedChecks = runsForThisPr.filter( check => check.conclusion === 'failure', ); @@ -111,6 +128,21 @@ This PR merges **${head}** into **${base}**. attempts++; continue; } + } else if (checkRuns.check_runs.length > 0 && runsForThisPr.length === 0) { + // There are runs on the ref but none for this PR. Either workflows for this PR + // haven't registered yet, or this PR/base has no required checks. 
+ waitForPrChecksAttempts++; + if (waitForPrChecksAttempts >= maxWaitForPrChecksAttempts) { + checksCompleted = true; + logDebugInfo( + `No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; proceeding to merge (branch may have no required checks).`, + ); + } else { + logDebugInfo('Check runs exist on ref but none for this PR yet; waiting for workflows to register.'); + await new Promise(resolve => setTimeout(resolve, iteration * 1000)); + attempts++; + } + continue; } else { // Fall back to status checks if no check runs exist const pendingChecks = commitStatus.statuses.filter(status => { diff --git a/src/usecase/actions/__tests__/deployed_action_use_case.test.ts b/src/usecase/actions/__tests__/deployed_action_use_case.test.ts index d1ea4a54..6ed8b2d2 100644 --- a/src/usecase/actions/__tests__/deployed_action_use_case.test.ts +++ b/src/usecase/actions/__tests__/deployed_action_use_case.test.ts @@ -1,3 +1,13 @@ +/** + * Tests for DeployedActionUseCase: deploy label flow and post-deploy merges. + * + * The "deployed" single action runs after a successful deployment. It requires the issue to have + * the "deploy" label (and not already "deployed"). It then: sets the "deployed" label, merges + * release/hotfix branches into default and develop, and closes the issue when all merges succeed. + * + * See docs/single-actions/deploy-label-and-merge.mdx for the full flow and merge/check behaviour. 
+ */ + import { DeployedActionUseCase } from '../deployed_action_use_case'; import { Result } from '../../../data/model/result'; import type { Execution } from '../../../data/model/execution'; @@ -84,7 +94,8 @@ describe('DeployedActionUseCase', () => { mockMergeBranch.mockClear(); }); - it('returns failure when there is no deploy label', async () => { + describe('deploy label validation', () => { + it('returns failure when there is no deploy label', async () => { const param = baseParam({ labels: { isDeploy: false, @@ -105,7 +116,7 @@ describe('DeployedActionUseCase', () => { expect(mockCloseIssue).not.toHaveBeenCalled(); }); - it('returns failure when deployed label is already set', async () => { + it('returns failure when deployed label is already set', async () => { const param = baseParam({ labels: { isDeploy: true, @@ -125,8 +136,10 @@ describe('DeployedActionUseCase', () => { expect(mockMergeBranch).not.toHaveBeenCalled(); expect(mockCloseIssue).not.toHaveBeenCalled(); }); + }); - it('updates labels but does not close issue when no release or hotfix branch (no merges)', async () => { + describe('label update and merge (no release/hotfix branch)', () => { + it('updates labels but does not close issue when no release or hotfix branch (no merges)', async () => { const param = baseParam(); const results = await useCase.invoke(param); @@ -143,8 +156,10 @@ describe('DeployedActionUseCase', () => { expect(results.some((r) => r.steps?.some((s) => s.includes('Label') && s.includes('deployed')))).toBe(true); expect(results.some((r) => r.steps?.some((s) => s.includes('not closed because no release or hotfix branch was configured')))).toBe(true); }); + }); - it('with releaseBranch: merges both branches and closes issue when all merges succeed', async () => { + describe('release branch: merge to default then develop, close issue when all succeed', () => { + it('merges release into default then into develop and closes issue when all merges succeed', async () => { 
mockMergeBranch .mockResolvedValueOnce(successResult('Merged release into main')) .mockResolvedValueOnce(successResult('Merged release into develop')); @@ -180,43 +195,7 @@ describe('DeployedActionUseCase', () => { expect(results.some((r) => r.steps?.some((s) => s.includes('closed after merge')))).toBe(true); }); - it('with hotfixBranch: merges both branches and closes issue when all merges succeed', async () => { - mockMergeBranch - .mockResolvedValueOnce(successResult('Merged hotfix into main')) - .mockResolvedValueOnce(successResult('Merged main into develop')); - const param = baseParam({ - currentConfiguration: { - releaseBranch: undefined, - hotfixBranch: 'hotfix/1.0.1', - }, - }); - - const results = await useCase.invoke(param); - - expect(mockMergeBranch).toHaveBeenCalledTimes(2); - expect(mockMergeBranch).toHaveBeenNthCalledWith( - 1, - 'owner', - 'repo', - 'hotfix/1.0.1', - 'main', - 60, - 'token' - ); - expect(mockMergeBranch).toHaveBeenNthCalledWith( - 2, - 'owner', - 'repo', - 'main', - 'develop', - 60, - 'token' - ); - expect(mockCloseIssue).toHaveBeenCalledWith('owner', 'repo', 42, 'token'); - expect(results.some((r) => r.steps?.some((s) => s.includes('closed after merge')))).toBe(true); - }); - - it('with releaseBranch: does not close issue when first merge fails', async () => { + it('does not close issue when first merge (to default) fails', async () => { mockMergeBranch .mockResolvedValueOnce(failureResult('Failed to merge into main')) .mockResolvedValueOnce(successResult('Merged into develop')); @@ -234,7 +213,7 @@ describe('DeployedActionUseCase', () => { expect(results.some((r) => r.success === false && r.steps?.some((s) => s.includes('not closed because one or more merge operations failed')))).toBe(true); }); - it('with releaseBranch: does not close issue when second merge fails', async () => { + it('does not close issue when second merge (to develop) fails', async () => { mockMergeBranch .mockResolvedValueOnce(successResult('Merged into 
main')) .mockResolvedValueOnce(failureResult('Failed to merge into develop')); @@ -251,24 +230,7 @@ describe('DeployedActionUseCase', () => { expect(results.some((r) => r.steps?.some((s) => s.includes('not closed because one or more merge operations failed')))).toBe(true); }); - it('with hotfixBranch: does not close issue when one merge fails', async () => { - mockMergeBranch - .mockResolvedValueOnce(successResult('Merged hotfix into main')) - .mockResolvedValueOnce(failureResult('Failed to merge main into develop')); - const param = baseParam({ - currentConfiguration: { - releaseBranch: undefined, - hotfixBranch: 'hotfix/1.0.1', - }, - }); - - const results = await useCase.invoke(param); - - expect(mockCloseIssue).not.toHaveBeenCalled(); - expect(results.some((r) => r.steps?.some((s) => s.includes('not closed because one or more merge operations failed')))).toBe(true); - }); - - it('pushes merge results into returned array (release path)', async () => { + it('pushes merge results into returned array', async () => { mockMergeBranch .mockResolvedValueOnce(successResult('Step A')) .mockResolvedValueOnce(successResult('Step B')); @@ -286,33 +248,75 @@ describe('DeployedActionUseCase', () => { expect(mergeSteps).toContain('Step B'); }); - it('when setLabels throws, returns error result and does not call merge or close', async () => { - mockSetLabels.mockRejectedValueOnce(new Error('API error')); - const param = baseParam(); + it('when all merges succeed but closeIssue returns false, does not push close step', async () => { + mockMergeBranch + .mockResolvedValueOnce(successResult('Merged into main')) + .mockResolvedValueOnce(successResult('Merged into develop')); + mockCloseIssue.mockResolvedValue(false); + const param = baseParam({ + currentConfiguration: { + releaseBranch: 'release/1.0.0', + hotfixBranch: undefined, + }, + }); + + const results = await useCase.invoke(param); + + expect(mockCloseIssue).toHaveBeenCalledWith('owner', 'repo', 42, 'token'); + 
expect(results.some((r) => r.steps?.some((s) => s.includes('closed after merge')))).toBe(false); + }); + }); + + describe('hotfix branch: merge hotfix to default, then default to develop', () => { + it('merges hotfix into default then default into develop and closes issue when all succeed', async () => { + mockMergeBranch + .mockResolvedValueOnce(successResult('Merged hotfix into main')) + .mockResolvedValueOnce(successResult('Merged main into develop')); + const param = baseParam({ + currentConfiguration: { + releaseBranch: undefined, + hotfixBranch: 'hotfix/1.0.1', + }, + }); const results = await useCase.invoke(param); - expect(results.some((r) => r.success === false)).toBe(true); - expect(results.some((r) => r.steps?.some((s) => s.includes('assign members to issue')))).toBe(true); - expect(mockMergeBranch).not.toHaveBeenCalled(); - expect(mockCloseIssue).not.toHaveBeenCalled(); + expect(mockMergeBranch).toHaveBeenCalledTimes(2); + expect(mockMergeBranch).toHaveBeenNthCalledWith(1, 'owner', 'repo', 'hotfix/1.0.1', 'main', 60, 'token'); + expect(mockMergeBranch).toHaveBeenNthCalledWith(2, 'owner', 'repo', 'main', 'develop', 60, 'token'); + expect(mockCloseIssue).toHaveBeenCalledWith('owner', 'repo', 42, 'token'); + expect(results.some((r) => r.steps?.some((s) => s.includes('closed after merge')))).toBe(true); }); - it('with releaseBranch and all merges succeed: when closeIssue returns false, does not push close result', async () => { + it('does not close issue when one merge fails', async () => { mockMergeBranch - .mockResolvedValueOnce(successResult('Merged into main')) - .mockResolvedValueOnce(successResult('Merged into develop')); - mockCloseIssue.mockResolvedValue(false); + .mockResolvedValueOnce(successResult('Merged hotfix into main')) + .mockResolvedValueOnce(failureResult('Failed to merge main into develop')); const param = baseParam({ currentConfiguration: { - releaseBranch: 'release/1.0.0', - hotfixBranch: undefined, + releaseBranch: undefined, + 
hotfixBranch: 'hotfix/1.0.1', }, }); const results = await useCase.invoke(param); - expect(mockCloseIssue).toHaveBeenCalledWith('owner', 'repo', 42, 'token'); - expect(results.some((r) => r.steps?.some((s) => s.includes('closed after merge')))).toBe(false); + expect(mockCloseIssue).not.toHaveBeenCalled(); + expect(results.some((r) => r.steps?.some((s) => s.includes('not closed because one or more merge operations failed')))).toBe(true); + }); + }); + + describe('errors', () => { + it('when setLabels throws, returns error result and does not call merge or close', async () => { + mockSetLabels.mockRejectedValueOnce(new Error('API error')); + const param = baseParam(); + + const results = await useCase.invoke(param); + + expect(results.some((r) => r.success === false)).toBe(true); + expect(results.some((r) => r.steps?.some((s) => s.includes('assign members to issue')))).toBe(true); + expect(mockMergeBranch).not.toHaveBeenCalled(); + expect(mockCloseIssue).not.toHaveBeenCalled(); + }); }); }); diff --git a/src/usecase/actions/deployed_action_use_case.ts b/src/usecase/actions/deployed_action_use_case.ts index 6dfd2113..a2df96d2 100644 --- a/src/usecase/actions/deployed_action_use_case.ts +++ b/src/usecase/actions/deployed_action_use_case.ts @@ -6,6 +6,16 @@ import { logDebugInfo, logError, logInfo } from "../../utils/logger"; import { getTaskEmoji } from "../../utils/task_emoji"; import { ParamUseCase } from "../base/param_usecase"; +/** + * Single action run after a successful deployment (triggered with the "deployed" action and an issue number). + * + * Requires the issue to have the "deploy" label and not already have the "deployed" label. Then: + * 1. Replaces the "deploy" label with "deployed". + * 2. If a release or hotfix branch is configured: merges it into default and develop (each via PR, waiting for that PR's checks). + * 3. Closes the issue only when all merges succeed. 
+ * + * @see docs/single-actions/deploy-label-and-merge.mdx for the full flow and how merge/check waiting works. + */ export class DeployedActionUseCase implements ParamUseCase { taskId: string = 'DeployedActionUseCase'; private issueRepository = new IssueRepository(); From 3959fd172659024368f150de5c817c94f7aab729 Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Sun, 15 Feb 2026 16:41:27 +0100 Subject: [PATCH 02/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Enhance MergeRepository and DeployedActionUseCase documentation to clarify PR-specific check run handling and deployment flow. Updated comments and type definitions to reflect new behavior, ensuring better understanding of the merging process and check wait logic. --- build/cli/index.js | 53 ++++++++++++++++--- .../src/data/repository/merge_repository.d.ts | 10 +++- .../actions/deployed_action_use_case.d.ts | 10 ++++ build/github_action/index.js | 53 ++++++++++++++++--- .../src/data/repository/merge_repository.d.ts | 10 +++- .../actions/deployed_action_use_case.d.ts | 10 ++++ 6 files changed, 128 insertions(+), 18 deletions(-) diff --git a/build/cli/index.js b/build/cli/index.js index 734e447e..1d1fc499 100755 --- a/build/cli/index.js +++ b/build/cli/index.js @@ -51275,8 +51275,14 @@ const github = __importStar(__nccwpck_require__(5438)); const logger_1 = __nccwpck_require__(8836); const result_1 = __nccwpck_require__(7305); /** - * Repository for merging branches (via PR or direct merge). - * Isolated to allow unit tests with mocked Octokit. + * Repository for merging branches: creates a PR, waits for that PR's check runs (or status checks), + * then merges the PR; on failure, falls back to a direct Git merge. + * + * Check runs are filtered by PR (pull_requests) so we only wait for the current PR's checks, + * not those of another PR sharing the same head (e.g. release→main vs release→develop). 
+ * If the PR has no check runs after a short wait, we proceed to merge (branch may have no required checks). + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the deploy flow and check-wait behaviour. */ class MergeRepository { constructor() { @@ -51318,10 +51324,13 @@ This PR merges **${head}** into **${base}**. '\n\nThis PR was automatically created by [`copilot`](https://github.com/vypdev/copilot).', }); const iteration = 10; + /** Give workflows a short window to register check runs for this PR; after this, we allow merge with no check runs (e.g. branch has no required checks). */ + const maxWaitForPrChecksAttempts = 3; if (timeout > iteration) { // Wait for checks to complete - can use regular token for reading checks let checksCompleted = false; let attempts = 0; + let waitForPrChecksAttempts = 0; const maxAttempts = timeout > iteration ? Math.floor(timeout / iteration) : iteration; while (!checksCompleted && attempts < maxAttempts) { const { data: checkRuns } = await octokit.rest.checks.listForRef({ @@ -51329,6 +51338,11 @@ This PR merges **${head}** into **${base}**. repo: repository, ref: head, }); + // Only consider check runs that are for this PR. When the same branch is used in + // multiple PRs (e.g. release→master and release→develop), listForRef returns runs + // for all PRs; we must wait for runs tied to the current PR or we may see completed + // runs from the other PR and merge before this PR's checks have run. + const runsForThisPr = checkRuns.check_runs.filter(run => run.pull_requests?.some(pr => pr.number === pullRequest.number)); // Get commit status checks for the PR head commit const { data: commitStatus } = await octokit.rest.repos.getCombinedStatusForRef({ owner: owner, @@ -51336,15 +51350,15 @@ This PR merges **${head}** into **${base}**. 
ref: head, }); (0, logger_1.logDebugInfo)(`Combined status state: ${commitStatus.state}`); - (0, logger_1.logDebugInfo)(`Number of check runs: ${checkRuns.check_runs.length}`); - // If there are check runs, prioritize those over status checks - if (checkRuns.check_runs.length > 0) { - const pendingCheckRuns = checkRuns.check_runs.filter(check => check.status !== 'completed'); + (0, logger_1.logDebugInfo)(`Number of check runs for this PR: ${runsForThisPr.length} (total on ref: ${checkRuns.check_runs.length})`); + // If there are check runs for this PR, wait for them to complete + if (runsForThisPr.length > 0) { + const pendingCheckRuns = runsForThisPr.filter(check => check.status !== 'completed'); if (pendingCheckRuns.length === 0) { checksCompleted = true; (0, logger_1.logDebugInfo)('All check runs have completed.'); // Verify if all checks passed - const failedChecks = checkRuns.check_runs.filter(check => check.conclusion === 'failure'); + const failedChecks = runsForThisPr.filter(check => check.conclusion === 'failure'); if (failedChecks.length > 0) { throw new Error(`Checks failed: ${failedChecks.map(check => check.name).join(', ')}`); } @@ -51359,6 +51373,21 @@ This PR merges **${head}** into **${base}**. continue; } } + else if (checkRuns.check_runs.length > 0 && runsForThisPr.length === 0) { + // There are runs on the ref but none for this PR. Either workflows for this PR + // haven't registered yet, or this PR/base has no required checks. 
+ waitForPrChecksAttempts++; + if (waitForPrChecksAttempts >= maxWaitForPrChecksAttempts) { + checksCompleted = true; + (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; proceeding to merge (branch may have no required checks).`); + } + else { + (0, logger_1.logDebugInfo)('Check runs exist on ref but none for this PR yet; waiting for workflows to register.'); + await new Promise(resolve => setTimeout(resolve, iteration * 1000)); + attempts++; + } + continue; + } else { // Fall back to status checks if no check runs exist const pendingChecks = commitStatus.statuses.filter(status => { @@ -53911,6 +53940,16 @@ const branch_repository_1 = __nccwpck_require__(7701); const issue_repository_1 = __nccwpck_require__(57); const logger_1 = __nccwpck_require__(8836); const task_emoji_1 = __nccwpck_require__(9785); +/** + * Single action run after a successful deployment (triggered with the "deployed" action and an issue number). + * + * Requires the issue to have the "deploy" label and not already have the "deployed" label. Then: + * 1. Replaces the "deploy" label with "deployed". + * 2. If a release or hotfix branch is configured: merges it into default and develop (each via PR, waiting for that PR's checks). + * 3. Closes the issue only when all merges succeed. + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the full flow and how merge/check waiting works. + */ class DeployedActionUseCase { constructor() { this.taskId = 'DeployedActionUseCase'; diff --git a/build/cli/src/data/repository/merge_repository.d.ts b/build/cli/src/data/repository/merge_repository.d.ts index a152b014..d13d541a 100644 --- a/build/cli/src/data/repository/merge_repository.d.ts +++ b/build/cli/src/data/repository/merge_repository.d.ts @@ -1,7 +1,13 @@ import { Result } from '../model/result'; /** - * Repository for merging branches (via PR or direct merge). - * Isolated to allow unit tests with mocked Octokit. 
+ * Repository for merging branches: creates a PR, waits for that PR's check runs (or status checks), + * then merges the PR; on failure, falls back to a direct Git merge. + * + * Check runs are filtered by PR (pull_requests) so we only wait for the current PR's checks, + * not those of another PR sharing the same head (e.g. release→main vs release→develop). + * If the PR has no check runs after a short wait, we proceed to merge (branch may have no required checks). + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the deploy flow and check-wait behaviour. */ export declare class MergeRepository { mergeBranch: (owner: string, repository: string, head: string, base: string, timeout: number, token: string) => Promise<Result[]>; } diff --git a/build/cli/src/usecase/actions/deployed_action_use_case.d.ts b/build/cli/src/usecase/actions/deployed_action_use_case.d.ts index bb20b27c..80f821f5 100644 --- a/build/cli/src/usecase/actions/deployed_action_use_case.d.ts +++ b/build/cli/src/usecase/actions/deployed_action_use_case.d.ts @@ -1,6 +1,16 @@ import { Execution } from "../../data/model/execution"; import { Result } from "../../data/model/result"; import { ParamUseCase } from "../base/param_usecase"; +/** + * Single action run after a successful deployment (triggered with the "deployed" action and an issue number). + * + * Requires the issue to have the "deploy" label and not already have the "deployed" label. Then: + * 1. Replaces the "deploy" label with "deployed". + * 2. If a release or hotfix branch is configured: merges it into default and develop (each via PR, waiting for that PR's checks). + * 3. Closes the issue only when all merges succeed. + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the full flow and how merge/check waiting works.
+ */ export declare class DeployedActionUseCase implements ParamUseCase { taskId: string; private issueRepository; diff --git a/build/github_action/index.js b/build/github_action/index.js index f2d088a0..79eb65a0 100644 --- a/build/github_action/index.js +++ b/build/github_action/index.js @@ -46343,8 +46343,14 @@ const github = __importStar(__nccwpck_require__(5438)); const logger_1 = __nccwpck_require__(8836); const result_1 = __nccwpck_require__(7305); /** - * Repository for merging branches (via PR or direct merge). - * Isolated to allow unit tests with mocked Octokit. + * Repository for merging branches: creates a PR, waits for that PR's check runs (or status checks), + * then merges the PR; on failure, falls back to a direct Git merge. + * + * Check runs are filtered by PR (pull_requests) so we only wait for the current PR's checks, + * not those of another PR sharing the same head (e.g. release→main vs release→develop). + * If the PR has no check runs after a short wait, we proceed to merge (branch may have no required checks). + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the deploy flow and check-wait behaviour. */ class MergeRepository { constructor() { @@ -46386,10 +46392,13 @@ This PR merges **${head}** into **${base}**. '\n\nThis PR was automatically created by [`copilot`](https://github.com/vypdev/copilot).', }); const iteration = 10; + /** Give workflows a short window to register check runs for this PR; after this, we allow merge with no check runs (e.g. branch has no required checks). */ + const maxWaitForPrChecksAttempts = 3; if (timeout > iteration) { // Wait for checks to complete - can use regular token for reading checks let checksCompleted = false; let attempts = 0; + let waitForPrChecksAttempts = 0; const maxAttempts = timeout > iteration ? 
Math.floor(timeout / iteration) : iteration; while (!checksCompleted && attempts < maxAttempts) { const { data: checkRuns } = await octokit.rest.checks.listForRef({ @@ -46397,6 +46406,11 @@ This PR merges **${head}** into **${base}**. repo: repository, ref: head, }); + // Only consider check runs that are for this PR. When the same branch is used in + // multiple PRs (e.g. release→master and release→develop), listForRef returns runs + // for all PRs; we must wait for runs tied to the current PR or we may see completed + // runs from the other PR and merge before this PR's checks have run. + const runsForThisPr = checkRuns.check_runs.filter(run => run.pull_requests?.some(pr => pr.number === pullRequest.number)); // Get commit status checks for the PR head commit const { data: commitStatus } = await octokit.rest.repos.getCombinedStatusForRef({ owner: owner, @@ -46404,15 +46418,15 @@ This PR merges **${head}** into **${base}**. ref: head, }); (0, logger_1.logDebugInfo)(`Combined status state: ${commitStatus.state}`); - (0, logger_1.logDebugInfo)(`Number of check runs: ${checkRuns.check_runs.length}`); - // If there are check runs, prioritize those over status checks - if (checkRuns.check_runs.length > 0) { - const pendingCheckRuns = checkRuns.check_runs.filter(check => check.status !== 'completed'); + (0, logger_1.logDebugInfo)(`Number of check runs for this PR: ${runsForThisPr.length} (total on ref: ${checkRuns.check_runs.length})`); + // If there are check runs for this PR, wait for them to complete + if (runsForThisPr.length > 0) { + const pendingCheckRuns = runsForThisPr.filter(check => check.status !== 'completed'); if (pendingCheckRuns.length === 0) { checksCompleted = true; (0, logger_1.logDebugInfo)('All check runs have completed.'); // Verify if all checks passed - const failedChecks = checkRuns.check_runs.filter(check => check.conclusion === 'failure'); + const failedChecks = runsForThisPr.filter(check => check.conclusion === 'failure'); if 
(failedChecks.length > 0) { throw new Error(`Checks failed: ${failedChecks.map(check => check.name).join(', ')}`); } @@ -46427,6 +46441,21 @@ This PR merges **${head}** into **${base}**. continue; } } + else if (checkRuns.check_runs.length > 0 && runsForThisPr.length === 0) { + // There are runs on the ref but none for this PR. Either workflows for this PR + // haven't registered yet, or this PR/base has no required checks. + waitForPrChecksAttempts++; + if (waitForPrChecksAttempts >= maxWaitForPrChecksAttempts) { + checksCompleted = true; + (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; proceeding to merge (branch may have no required checks).`); + } + else { + (0, logger_1.logDebugInfo)('Check runs exist on ref but none for this PR yet; waiting for workflows to register.'); + await new Promise(resolve => setTimeout(resolve, iteration * 1000)); + attempts++; + } + continue; + } else { // Fall back to status checks if no check runs exist const pendingChecks = commitStatus.statuses.filter(status => { @@ -48979,6 +49008,16 @@ const branch_repository_1 = __nccwpck_require__(7701); const issue_repository_1 = __nccwpck_require__(57); const logger_1 = __nccwpck_require__(8836); const task_emoji_1 = __nccwpck_require__(9785); +/** + * Single action run after a successful deployment (triggered with the "deployed" action and an issue number). + * + * Requires the issue to have the "deploy" label and not already have the "deployed" label. Then: + * 1. Replaces the "deploy" label with "deployed". + * 2. If a release or hotfix branch is configured: merges it into default and develop (each via PR, waiting for that PR's checks). + * 3. Closes the issue only when all merges succeed. + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the full flow and how merge/check waiting works. 
+ */ class DeployedActionUseCase { constructor() { this.taskId = 'DeployedActionUseCase'; diff --git a/build/github_action/src/data/repository/merge_repository.d.ts b/build/github_action/src/data/repository/merge_repository.d.ts index a152b014..d13d541a 100644 --- a/build/github_action/src/data/repository/merge_repository.d.ts +++ b/build/github_action/src/data/repository/merge_repository.d.ts @@ -1,7 +1,13 @@ import { Result } from '../model/result'; /** - * Repository for merging branches (via PR or direct merge). - * Isolated to allow unit tests with mocked Octokit. + * Repository for merging branches: creates a PR, waits for that PR's check runs (or status checks), + * then merges the PR; on failure, falls back to a direct Git merge. + * + * Check runs are filtered by PR (pull_requests) so we only wait for the current PR's checks, + * not those of another PR sharing the same head (e.g. release→main vs release→develop). + * If the PR has no check runs after a short wait, we proceed to merge (branch may have no required checks). + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the deploy flow and check-wait behaviour. */ export declare class MergeRepository { mergeBranch: (owner: string, repository: string, head: string, base: string, timeout: number, token: string) => Promise<Result[]>; } diff --git a/build/github_action/src/usecase/actions/deployed_action_use_case.d.ts b/build/github_action/src/usecase/actions/deployed_action_use_case.d.ts index bb20b27c..80f821f5 100644 --- a/build/github_action/src/usecase/actions/deployed_action_use_case.d.ts +++ b/build/github_action/src/usecase/actions/deployed_action_use_case.d.ts @@ -1,6 +1,16 @@ import { Execution } from "../../data/model/execution"; import { Result } from "../../data/model/result"; import { ParamUseCase } from "../base/param_usecase"; +/** + * Single action run after a successful deployment (triggered with the "deployed" action and an issue number).
+ * + * Requires the issue to have the "deploy" label and not already have the "deployed" label. Then: + * 1. Replaces the "deploy" label with "deployed". + * 2. If a release or hotfix branch is configured: merges it into default and develop (each via PR, waiting for that PR's checks). + * 3. Closes the issue only when all merges succeed. + * + * @see docs/single-actions/deploy-label-and-merge.mdx for the full flow and how merge/check waiting works. + */ export declare class DeployedActionUseCase implements ParamUseCase<Execution, Result[]> { taskId: string; private issueRepository; From b9106208ffab00ea278a0ada7484d869290976a8 Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Sun, 15 Feb 2026 20:35:19 +0100 Subject: [PATCH 03/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Enhance MergeRepository to handle fallback to status checks when no PR-specific check runs are available after polling. Added test case to verify that the merge process correctly waits for pending status checks before proceeding. This improves the robustness of the merging logic and ensures proper handling of check statuses.
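
The per-poll decision this patch implements inside `mergeBranch` can be sketched as a pure function. This is an illustrative model only — `pollDecision`, `CheckRun`, and `StatusCheck` are hypothetical names, not the actual Octokit types or helpers used by `MergeRepository`:

```typescript
type PollDecision = 'merge' | 'wait' | 'fail';

interface CheckRun {
  name: string;
  status: string;            // 'queued' | 'in_progress' | 'completed'
  conclusion: string | null; // e.g. 'success' | 'failure' once completed
  prNumbers: number[];       // PRs this check run is associated with
}

interface StatusCheck { context: string; state: string } // 'pending' | 'success' | 'failure'

// One polling step of the merge-wait loop described above.
function pollDecision(
  prNumber: number,
  runsOnRef: CheckRun[],
  statuses: StatusCheck[],
  pollsWithoutPrRuns: number,
  maxPollsWithoutPrRuns: number,
): PollDecision {
  // Only consider check runs tied to *this* PR; a release branch often has a
  // second PR (release→develop) whose completed runs must not unblock us.
  const runsForThisPr = runsOnRef.filter(r => r.prNumbers.includes(prNumber));

  if (runsForThisPr.length > 0) {
    if (runsForThisPr.some(r => r.status !== 'completed')) return 'wait';
    return runsForThisPr.some(r => r.conclusion === 'failure') ? 'fail' : 'merge';
  }

  // Runs exist on the ref but none for this PR yet: give workflows a few polls to register.
  if (runsOnRef.length > 0 && pollsWithoutPrRuns < maxPollsWithoutPrRuns) return 'wait';

  // Fall back to commit status checks (the PR may have required statuses but no check runs).
  return statuses.some(s => s.state === 'pending') ? 'wait' : 'merge';
}
```

Modelling one poll as a pure decision like this makes the behaviour testable without fake timers; the real implementation additionally sleeps between polls and gives up after `timeout / iteration` attempts.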
--- .../__tests__/merge_repository.test.ts | 43 +++++++++++++++++++ src/data/repository/merge_repository.ts | 26 +++++++++-- 2 files changed, 65 insertions(+), 4 deletions(-) diff --git a/src/data/repository/__tests__/merge_repository.test.ts b/src/data/repository/__tests__/merge_repository.test.ts index 7008cfe6..9bfd2fd2 100644 --- a/src/data/repository/__tests__/merge_repository.test.ts +++ b/src/data/repository/__tests__/merge_repository.test.ts @@ -293,6 +293,49 @@ describe('MergeRepository', () => { expect(mockPullsMerge).toHaveBeenCalled(); }); + it('when no check runs for this PR after max polls but status checks are pending, falls back to status checks and waits then merges', async () => { + jest.useFakeTimers(); + mockPullsCreate.mockResolvedValue({ data: { number: 99 } }); + mockPullsListCommits.mockResolvedValue({ data: [] }); + mockPullsUpdate.mockResolvedValue({}); + mockPullsMerge.mockResolvedValue({}); + // Check runs on ref are for another PR only; this PR (99) has none. + mockChecksListForRef.mockResolvedValue({ + data: { + check_runs: [ + { name: 'ci', status: 'completed', conclusion: 'success', pull_requests: [{ number: 1 }] }, + ], + }, + }); + // First 4 polls: pending status check; 5th: completed so we proceed to merge. 
+ mockReposGetCombinedStatusForRef + .mockResolvedValueOnce({ + data: { state: 'pending', statuses: [{ context: 'ci', state: 'pending' }] }, + }) + .mockResolvedValueOnce({ + data: { state: 'pending', statuses: [{ context: 'ci', state: 'pending' }] }, + }) + .mockResolvedValueOnce({ + data: { state: 'pending', statuses: [{ context: 'ci', state: 'pending' }] }, + }) + .mockResolvedValueOnce({ + data: { state: 'pending', statuses: [{ context: 'ci', state: 'pending' }] }, + }) + .mockResolvedValue({ + data: { state: 'success', statuses: [{ context: 'ci', state: 'success' }] }, + }); + + const promise = repo.mergeBranch('o', 'r', 'release/1.0', 'develop', 60, 'token'); + await jest.runAllTimersAsync(); + const result = await promise; + + jest.useRealTimers(); + expect(result).toHaveLength(1); + expect(result[0].success).toBe(true); + expect(mockReposGetCombinedStatusForRef).toHaveBeenCalled(); + expect(mockPullsMerge).toHaveBeenCalled(); + }); + it('when timeout > 10 and checks never complete throws then direct merge succeeds', async () => { jest.useFakeTimers(); mockPullsCreate.mockResolvedValue({ data: { number: 1 } }); diff --git a/src/data/repository/merge_repository.ts b/src/data/repository/merge_repository.ts index 3b84fcc7..812341f5 100644 --- a/src/data/repository/merge_repository.ts +++ b/src/data/repository/merge_repository.ts @@ -133,10 +133,28 @@ This PR merges **${head}** into **${base}**. // haven't registered yet, or this PR/base has no required checks. waitForPrChecksAttempts++; if (waitForPrChecksAttempts >= maxWaitForPrChecksAttempts) { - checksCompleted = true; - logDebugInfo( - `No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; proceeding to merge (branch may have no required checks).`, - ); + // Give up waiting for PR-specific check runs; fall back to status checks + // before proceeding to merge (PR may have required status checks). 
+ const pendingChecksFallback = commitStatus.statuses.filter(status => { + logDebugInfo(`Status check (fallback): ${status.context} (State: ${status.state})`); + return status.state === 'pending'; + }); + + if (pendingChecksFallback.length === 0) { + checksCompleted = true; + logDebugInfo( + `No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; no pending status checks; proceeding to merge.`, + ); + } else { + logDebugInfo( + `No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; falling back to status checks. Waiting for ${pendingChecksFallback.length} status checks to complete.`, + ); + pendingChecksFallback.forEach(check => { + logDebugInfo(` - ${check.context} (State: ${check.state})`); + }); + await new Promise(resolve => setTimeout(resolve, iteration * 1000)); + attempts++; + } } else { logDebugInfo('Check runs exist on ref but none for this PR yet; waiting for workflows to register.'); await new Promise(resolve => setTimeout(resolve, iteration * 1000)); From b199bde7cb9ff6b4c8168442d0416273b348c4ed Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Sun, 15 Feb 2026 20:43:22 +0100 Subject: [PATCH 04/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Enhance logging functionality by introducing accumulated log management. Added methods to clear and retrieve accumulated logs, and updated log functions to support accumulation. Implemented tests to verify the correct behavior of log accumulation and formatting, ensuring robust logging capabilities. 
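
The log-accumulation pattern this patch adds to `src/utils/logger.ts` reduces to the following sketch (a simplified illustration — the real logger also handles structured JSON output, debug gating, and a `skipAccumulation` flag so `logDebugInfo` delegating to `logInfo` does not record an entry twice):

```typescript
type Level = 'info' | 'warn' | 'error' | 'debug';

interface LogEntry {
  level: Level;
  message: string;
  timestamp: number;
  metadata?: Record<string, unknown>;
}

// Module-level buffer, cleared at the start of each run so one execution's
// logs never leak into the next published comment.
const accumulated: LogEntry[] = [];

function log(level: Level, message: string, metadata?: Record<string, unknown>): void {
  accumulated.push({ level, message, timestamp: Date.now(), metadata });
  console.log(`[${level.toUpperCase()}] ${message}`);
}

// Render the buffer as plain text, appending an error's stack below its message.
function getAccumulatedLogsAsText(): string {
  return accumulated
    .map(e => {
      const stack = e.metadata?.stack ? `\n${String(e.metadata.stack)}` : '';
      return `[${e.level.toUpperCase()}] ${e.message}${stack}`;
    })
    .join('\n');
}

function clearAccumulatedLogs(): void {
  accumulated.length = 0; // truncate in place so existing references stay valid
}
```

Truncating with `accumulated.length = 0` (rather than reassigning a new array) is deliberate: any module that captured a reference to the buffer keeps seeing the live, emptied array.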
--- build/cli/index.js | 62 ++++++++++++-- build/cli/src/utils/logger.d.ts | 5 +- build/github_action/index.js | 80 ++++++++++++++++--- build/github_action/src/utils/logger.d.ts | 5 +- src/actions/__tests__/common_action.test.ts | 5 +- src/actions/common_action.ts | 3 +- .../__tests__/publish_resume_use_case.test.ts | 47 +++++++++++ .../steps/common/publish_resume_use_case.ts | 21 ++++- src/utils/__tests__/logger.test.ts | 78 ++++++++++++++++++ src/utils/logger.ts | 44 ++++++++-- 10 files changed, 320 insertions(+), 30 deletions(-) diff --git a/build/cli/index.js b/build/cli/index.js index 1d1fc499..8a80670b 100755 --- a/build/cli/index.js +++ b/build/cli/index.js @@ -46694,6 +46694,7 @@ const queue_utils_1 = __nccwpck_require__(9800); async function mainRun(execution) { const results = []; await execution.setup(); + (0, logger_1.clearAccumulatedLogs)(); if (!execution.welcome) { /** * Wait for previous runs to finish @@ -51378,8 +51379,24 @@ This PR merges **${head}** into **${base}**. // haven't registered yet, or this PR/base has no required checks. waitForPrChecksAttempts++; if (waitForPrChecksAttempts >= maxWaitForPrChecksAttempts) { - checksCompleted = true; - (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; proceeding to merge (branch may have no required checks).`); + // Give up waiting for PR-specific check runs; fall back to status checks + // before proceeding to merge (PR may have required status checks). 
+ const pendingChecksFallback = commitStatus.statuses.filter(status => { + (0, logger_1.logDebugInfo)(`Status check (fallback): ${status.context} (State: ${status.state})`); + return status.state === 'pending'; + }); + if (pendingChecksFallback.length === 0) { + checksCompleted = true; + (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; no pending status checks; proceeding to merge.`); + } + else { + (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; falling back to status checks. Waiting for ${pendingChecksFallback.length} status checks to complete.`); + pendingChecksFallback.forEach(check => { + (0, logger_1.logDebugInfo)(` - ${check.context} (State: ${check.state})`); + }); + await new Promise(resolve => setTimeout(resolve, iteration * 1000)); + attempts++; + } } else { (0, logger_1.logDebugInfo)('Check runs exist on ref but none for this PR yet; waiting for workflows to register.'); @@ -60265,6 +60282,9 @@ exports.getRandomElement = getRandomElement; "use strict"; Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.getAccumulatedLogEntries = getAccumulatedLogEntries; +exports.getAccumulatedLogsAsText = getAccumulatedLogsAsText; +exports.clearAccumulatedLogs = clearAccumulatedLogs; exports.setGlobalLoggerDebug = setGlobalLoggerDebug; exports.setStructuredLogging = setStructuredLogging; exports.logInfo = logInfo; @@ -60277,6 +60297,25 @@ exports.logDebugError = logDebugError; let loggerDebug = false; let loggerRemote = false; let structuredLogging = false; +const accumulatedLogEntries = []; +function pushLogEntry(entry) { + accumulatedLogEntries.push(entry); +} +function getAccumulatedLogEntries() { + return [...accumulatedLogEntries]; +} +function getAccumulatedLogsAsText() { + return accumulatedLogEntries + .map((e) => { + const prefix = `[${e.level.toUpperCase()}]`; + const meta = e.metadata?.stack ? 
`\n${String(e.metadata.stack)}` : ''; + return `${prefix} ${e.message}${meta}`; + }) + .join('\n'); +} +function clearAccumulatedLogs() { + accumulatedLogEntries.length = 0; +} function setGlobalLoggerDebug(debug, isRemote = false) { loggerDebug = debug; loggerRemote = isRemote; @@ -60287,7 +60326,10 @@ function setStructuredLogging(enabled) { function formatStructuredLog(entry) { return JSON.stringify(entry); } -function logInfo(message, previousWasSingleLine = false, metadata) { +function logInfo(message, previousWasSingleLine = false, metadata, skipAccumulation) { + if (!skipAccumulation) { + pushLogEntry({ level: 'info', message, timestamp: Date.now(), metadata }); + } if (previousWasSingleLine && !loggerRemote) { console.log(); } @@ -60304,6 +60346,7 @@ function logInfo(message, previousWasSingleLine = false, metadata) { } } function logWarn(message, metadata) { + pushLogEntry({ level: 'warn', message, timestamp: Date.now(), metadata }); if (structuredLogging) { console.warn(formatStructuredLog({ level: 'warn', @@ -60321,15 +60364,17 @@ function logWarning(message) { } function logError(message, metadata) { const errorMessage = message instanceof Error ? message.message : String(message); + const metaWithStack = { + ...metadata, + stack: message instanceof Error ? message.stack : undefined + }; + pushLogEntry({ level: 'error', message: errorMessage, timestamp: Date.now(), metadata: metaWithStack }); if (structuredLogging) { console.error(formatStructuredLog({ level: 'error', message: errorMessage, timestamp: Date.now(), - metadata: { - ...metadata, - stack: message instanceof Error ? 
message.stack : undefined - } + metadata: metaWithStack })); } else { @@ -60338,6 +60383,7 @@ function logError(message, metadata) { } function logDebugInfo(message, previousWasSingleLine = false, metadata) { if (loggerDebug) { + pushLogEntry({ level: 'debug', message, timestamp: Date.now(), metadata }); if (structuredLogging) { console.log(formatStructuredLog({ level: 'debug', @@ -60347,7 +60393,7 @@ function logDebugInfo(message, previousWasSingleLine = false, metadata) { })); } else { - logInfo(message, previousWasSingleLine); + logInfo(message, previousWasSingleLine, undefined, true); } } } diff --git a/build/cli/src/utils/logger.d.ts b/build/cli/src/utils/logger.d.ts index cb1bcc34..14c6d4d8 100644 --- a/build/cli/src/utils/logger.d.ts +++ b/build/cli/src/utils/logger.d.ts @@ -4,9 +4,12 @@ export interface LogEntry { timestamp: number; metadata?: Record; } +export declare function getAccumulatedLogEntries(): LogEntry[]; +export declare function getAccumulatedLogsAsText(): string; +export declare function clearAccumulatedLogs(): void; export declare function setGlobalLoggerDebug(debug: boolean, isRemote?: boolean): void; export declare function setStructuredLogging(enabled: boolean): void; -export declare function logInfo(message: string, previousWasSingleLine?: boolean, metadata?: Record): void; +export declare function logInfo(message: string, previousWasSingleLine?: boolean, metadata?: Record, skipAccumulation?: boolean): void; export declare function logWarn(message: string, metadata?: Record): void; export declare function logWarning(message: string): void; export declare function logError(message: unknown, metadata?: Record): void; diff --git a/build/github_action/index.js b/build/github_action/index.js index 79eb65a0..561f9284 100644 --- a/build/github_action/index.js +++ b/build/github_action/index.js @@ -42212,6 +42212,7 @@ const queue_utils_1 = __nccwpck_require__(9800); async function mainRun(execution) { const results = []; await execution.setup(); 
+ (0, logger_1.clearAccumulatedLogs)(); if (!execution.welcome) { /** * Wait for previous runs to finish @@ -46446,8 +46447,24 @@ This PR merges **${head}** into **${base}**. // haven't registered yet, or this PR/base has no required checks. waitForPrChecksAttempts++; if (waitForPrChecksAttempts >= maxWaitForPrChecksAttempts) { - checksCompleted = true; - (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; proceeding to merge (branch may have no required checks).`); + // Give up waiting for PR-specific check runs; fall back to status checks + // before proceeding to merge (PR may have required status checks). + const pendingChecksFallback = commitStatus.statuses.filter(status => { + (0, logger_1.logDebugInfo)(`Status check (fallback): ${status.context} (State: ${status.state})`); + return status.state === 'pending'; + }); + if (pendingChecksFallback.length === 0) { + checksCompleted = true; + (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; no pending status checks; proceeding to merge.`); + } + else { + (0, logger_1.logDebugInfo)(`No check runs for this PR after ${maxWaitForPrChecksAttempts} polls; falling back to status checks. Waiting for ${pendingChecksFallback.length} status checks to complete.`); + pendingChecksFallback.forEach(check => { + (0, logger_1.logDebugInfo)(` - ${check.context} (State: ${check.state})`); + }); + await new Promise(resolve => setTimeout(resolve, iteration * 1000)); + attempts++; + } } else { (0, logger_1.logDebugInfo)('Check runs exist on ref but none for this PR yet; waiting for workflows to register.'); @@ -52586,6 +52603,22 @@ ${errors} Check your project configuration, if everything is okay consider [opening an issue](https://github.com/vypdev/copilot/issues/new/choose). `; } + let debugLogSection = ''; + if (param.debug) { + const logsText = (0, logger_1.getAccumulatedLogsAsText)(); + if (logsText.length > 0) { + debugLogSection = ` + +
+<details> +<summary>Debug log</summary> + +\`\`\` +${logsText} +\`\`\` +</details> +
+`; + } + } const commentBody = `# ${title} ${content} ${errors.length > 0 ? errors : ''} @@ -52593,7 +52626,7 @@ ${errors.length > 0 ? errors : ''} ${stupidGif} ${footer} - +${debugLogSection} 🚀 Happy coding! `; if (content.length === 0) { @@ -55552,6 +55585,9 @@ exports.getRandomElement = getRandomElement; "use strict"; Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.getAccumulatedLogEntries = getAccumulatedLogEntries; +exports.getAccumulatedLogsAsText = getAccumulatedLogsAsText; +exports.clearAccumulatedLogs = clearAccumulatedLogs; exports.setGlobalLoggerDebug = setGlobalLoggerDebug; exports.setStructuredLogging = setStructuredLogging; exports.logInfo = logInfo; @@ -55564,6 +55600,25 @@ exports.logDebugError = logDebugError; let loggerDebug = false; let loggerRemote = false; let structuredLogging = false; +const accumulatedLogEntries = []; +function pushLogEntry(entry) { + accumulatedLogEntries.push(entry); +} +function getAccumulatedLogEntries() { + return [...accumulatedLogEntries]; +} +function getAccumulatedLogsAsText() { + return accumulatedLogEntries + .map((e) => { + const prefix = `[${e.level.toUpperCase()}]`; + const meta = e.metadata?.stack ? 
`\n${String(e.metadata.stack)}` : ''; + return `${prefix} ${e.message}${meta}`; + }) + .join('\n'); +} +function clearAccumulatedLogs() { + accumulatedLogEntries.length = 0; +} function setGlobalLoggerDebug(debug, isRemote = false) { loggerDebug = debug; loggerRemote = isRemote; @@ -55574,7 +55629,10 @@ function setStructuredLogging(enabled) { function formatStructuredLog(entry) { return JSON.stringify(entry); } -function logInfo(message, previousWasSingleLine = false, metadata) { +function logInfo(message, previousWasSingleLine = false, metadata, skipAccumulation) { + if (!skipAccumulation) { + pushLogEntry({ level: 'info', message, timestamp: Date.now(), metadata }); + } if (previousWasSingleLine && !loggerRemote) { console.log(); } @@ -55591,6 +55649,7 @@ function logInfo(message, previousWasSingleLine = false, metadata) { } } function logWarn(message, metadata) { + pushLogEntry({ level: 'warn', message, timestamp: Date.now(), metadata }); if (structuredLogging) { console.warn(formatStructuredLog({ level: 'warn', @@ -55608,15 +55667,17 @@ function logWarning(message) { } function logError(message, metadata) { const errorMessage = message instanceof Error ? message.message : String(message); + const metaWithStack = { + ...metadata, + stack: message instanceof Error ? message.stack : undefined + }; + pushLogEntry({ level: 'error', message: errorMessage, timestamp: Date.now(), metadata: metaWithStack }); if (structuredLogging) { console.error(formatStructuredLog({ level: 'error', message: errorMessage, timestamp: Date.now(), - metadata: { - ...metadata, - stack: message instanceof Error ? 
message.stack : undefined - } + metadata: metaWithStack })); } else { @@ -55625,6 +55686,7 @@ function logError(message, metadata) { } function logDebugInfo(message, previousWasSingleLine = false, metadata) { if (loggerDebug) { + pushLogEntry({ level: 'debug', message, timestamp: Date.now(), metadata }); if (structuredLogging) { console.log(formatStructuredLog({ level: 'debug', @@ -55634,7 +55696,7 @@ function logDebugInfo(message, previousWasSingleLine = false, metadata) { })); } else { - logInfo(message, previousWasSingleLine); + logInfo(message, previousWasSingleLine, undefined, true); } } } diff --git a/build/github_action/src/utils/logger.d.ts b/build/github_action/src/utils/logger.d.ts index cb1bcc34..14c6d4d8 100644 --- a/build/github_action/src/utils/logger.d.ts +++ b/build/github_action/src/utils/logger.d.ts @@ -4,9 +4,12 @@ export interface LogEntry { timestamp: number; metadata?: Record; } +export declare function getAccumulatedLogEntries(): LogEntry[]; +export declare function getAccumulatedLogsAsText(): string; +export declare function clearAccumulatedLogs(): void; export declare function setGlobalLoggerDebug(debug: boolean, isRemote?: boolean): void; export declare function setStructuredLogging(enabled: boolean): void; -export declare function logInfo(message: string, previousWasSingleLine?: boolean, metadata?: Record): void; +export declare function logInfo(message: string, previousWasSingleLine?: boolean, metadata?: Record, skipAccumulation?: boolean): void; export declare function logWarn(message: string, metadata?: Record): void; export declare function logWarning(message: string): void; export declare function logError(message: unknown, metadata?: Record): void; diff --git a/src/actions/__tests__/common_action.test.ts b/src/actions/__tests__/common_action.test.ts index 78a9af1d..8f23a5ad 100644 --- a/src/actions/__tests__/common_action.test.ts +++ b/src/actions/__tests__/common_action.test.ts @@ -21,6 +21,7 @@ jest.mock('@actions/core', () => ({ 
jest.mock('../../utils/logger', () => ({ logInfo: jest.fn(), logError: jest.fn(), + clearAccumulatedLogs: jest.fn(), })); jest.mock('../../utils/queue_utils', () => ({ @@ -66,6 +67,7 @@ jest.mock('../../usecase/commit_use_case', () => ({ })); const core = require('@actions/core'); +const logger = require('../../utils/logger'); const { waitForPreviousRuns } = require('../../utils/queue_utils'); function mockExecution(overrides: Record = {}): Execution { @@ -103,11 +105,12 @@ describe('mainRun', () => { mockCommitInvoke.mockResolvedValue([]); }); - it('calls execution.setup()', async () => { + it('calls execution.setup() and clearAccumulatedLogs()', async () => { const setupMock = jest.fn().mockResolvedValue(undefined); const execution = mockExecution({ setup: setupMock }); await mainRun(execution); expect(setupMock).toHaveBeenCalledTimes(1); + expect(logger.clearAccumulatedLogs).toHaveBeenCalledTimes(1); }); it('waits for previous runs when welcome is false', async () => { diff --git a/src/actions/common_action.ts b/src/actions/common_action.ts index b6608dde..469e6cb2 100644 --- a/src/actions/common_action.ts +++ b/src/actions/common_action.ts @@ -7,7 +7,7 @@ import { IssueUseCase } from '../usecase/issue_use_case'; import { PullRequestReviewCommentUseCase } from '../usecase/pull_request_review_comment_use_case'; import { PullRequestUseCase } from '../usecase/pull_request_use_case'; import { SingleActionUseCase } from '../usecase/single_action_use_case'; -import { logError, logInfo } from '../utils/logger'; +import { clearAccumulatedLogs, logError, logInfo } from '../utils/logger'; import { TITLE } from '../utils/constants'; import chalk from 'chalk'; import boxen from 'boxen'; @@ -17,6 +17,7 @@ export async function mainRun(execution: Execution): Promise { const results: Result[] = [] await execution.setup(); + clearAccumulatedLogs(); if (!execution.welcome) { /** diff --git a/src/usecase/steps/common/__tests__/publish_resume_use_case.test.ts 
b/src/usecase/steps/common/__tests__/publish_resume_use_case.test.ts index 29722e36..4dd27760 100644 --- a/src/usecase/steps/common/__tests__/publish_resume_use_case.test.ts +++ b/src/usecase/steps/common/__tests__/publish_resume_use_case.test.ts @@ -1,9 +1,11 @@ import { Result } from '../../../../data/model/result'; import { PublishResultUseCase } from '../publish_resume_use_case'; +const mockGetAccumulatedLogsAsText = jest.fn(() => ''); jest.mock('../../../../utils/logger', () => ({ logInfo: jest.fn(), logError: jest.fn(), + getAccumulatedLogsAsText: () => mockGetAccumulatedLogsAsText(), })); jest.mock('../../../../utils/list_utils', () => ({ @@ -53,6 +55,7 @@ function baseParam(overrides: Record = {}) { release: { active: false }, hotfix: { active: false }, issueNotBranched: false, + debug: false, ...overrides, } as unknown as Parameters[0]; } @@ -63,6 +66,7 @@ describe('PublishResultUseCase', () => { beforeEach(() => { useCase = new PublishResultUseCase(); mockAddComment.mockReset(); + mockGetAccumulatedLogsAsText.mockReturnValue(''); }); it('does not call addComment when content is empty (no steps in results)', async () => { @@ -100,6 +104,49 @@ describe('PublishResultUseCase', () => { expect(mockAddComment).toHaveBeenCalledWith('o', 'r', 99, expect.stringContaining('1. 
Step 1'), 't'); }); + it('includes debug log section in comment body when debug is true and logs are present', async () => { + mockAddComment.mockResolvedValue(undefined); + mockGetAccumulatedLogsAsText.mockReturnValue('[INFO] line1\n[WARN] line2'); + const param = baseParam({ + isIssue: true, + debug: true, + currentConfiguration: { results: [new Result({ id: 'a', success: true, executed: true, steps: ['Step 1'] })] }, + }); + + await useCase.invoke(param); + + const commentBody = mockAddComment.mock.calls[0][3] as string; + expect(commentBody).toContain('Debug log'); + expect(commentBody).toContain('[INFO] line1'); + expect(commentBody).toContain('[WARN] line2'); + }); + + it('does not include debug log section when debug is false', async () => { + mockAddComment.mockResolvedValue(undefined); + mockGetAccumulatedLogsAsText.mockReturnValue('[INFO] line1'); + const param = baseParam({ isIssue: true, debug: false }); + + await useCase.invoke(param); + + const commentBody = mockAddComment.mock.calls[0][3] as string; + expect(commentBody).not.toContain('Debug log'); + }); + + it('does not include debug log section when debug is true but accumulated logs are empty', async () => { + mockAddComment.mockResolvedValue(undefined); + mockGetAccumulatedLogsAsText.mockReturnValue(''); + const param = baseParam({ + isIssue: true, + debug: true, + currentConfiguration: { results: [new Result({ id: 'a', success: true, executed: true, steps: ['Step 1'] })] }, + }); + + await useCase.invoke(param); + + const commentBody = mockAddComment.mock.calls[0][3] as string; + expect(commentBody).not.toContain('Debug log'); + }); + it('pushes failure result to currentConfiguration.results when addComment throws', async () => { mockAddComment.mockRejectedValue(new Error('API error')); const param = baseParam({ isIssue: true }); diff --git a/src/usecase/steps/common/publish_resume_use_case.ts b/src/usecase/steps/common/publish_resume_use_case.ts index 2eb08953..f6242708 100644 --- 
a/src/usecase/steps/common/publish_resume_use_case.ts +++ b/src/usecase/steps/common/publish_resume_use_case.ts @@ -2,7 +2,7 @@ import { Execution } from "../../../data/model/execution"; import { Result } from "../../../data/model/result"; import { IssueRepository } from "../../../data/repository/issue_repository"; import { getRandomElement } from "../../../utils/list_utils"; -import { logError, logInfo } from "../../../utils/logger"; +import { getAccumulatedLogsAsText, logError, logInfo } from "../../../utils/logger"; import { getTaskEmoji } from "../../../utils/task_emoji"; import { ParamUseCase } from "../../base/param_usecase"; @@ -128,6 +128,23 @@ Check your project configuration, if everything is okay consider [opening an iss ` } + let debugLogSection = ''; + if (param.debug) { + const logsText = getAccumulatedLogsAsText(); + if (logsText.length > 0) { + debugLogSection = ` + +
+Debug log + +\`\`\` +${logsText} +\`\`\` +
+`; + } + } + const commentBody = `# ${title} ${content} ${errors.length > 0 ? errors : ''} @@ -135,7 +152,7 @@ ${errors.length > 0 ? errors : ''} ${stupidGif} ${footer} - +${debugLogSection} 🚀 Happy coding! `; diff --git a/src/utils/__tests__/logger.test.ts b/src/utils/__tests__/logger.test.ts index 41ede95c..fb74a66a 100644 --- a/src/utils/__tests__/logger.test.ts +++ b/src/utils/__tests__/logger.test.ts @@ -1,6 +1,9 @@ import { setGlobalLoggerDebug, setStructuredLogging, + clearAccumulatedLogs, + getAccumulatedLogEntries, + getAccumulatedLogsAsText, logInfo, logWarn, logWarning, @@ -19,6 +22,7 @@ describe('logger', () => { jest.clearAllMocks(); setGlobalLoggerDebug(false); setStructuredLogging(false); + clearAccumulatedLogs(); }); afterAll(() => { @@ -138,4 +142,78 @@ describe('logger', () => { expect(consoleErrorSpy).toHaveBeenCalledWith('de'); }); }); + + describe('accumulated logs', () => { + it('accumulates logInfo entries', () => { + logInfo('a'); + logInfo('b'); + const entries = getAccumulatedLogEntries(); + expect(entries).toHaveLength(2); + expect(entries[0]).toMatchObject({ level: 'info', message: 'a' }); + expect(entries[1]).toMatchObject({ level: 'info', message: 'b' }); + }); + + it('accumulates logWarn and logError entries', () => { + logWarn('w'); + logError('e'); + const entries = getAccumulatedLogEntries(); + expect(entries).toHaveLength(2); + expect(entries[0]).toMatchObject({ level: 'warn', message: 'w' }); + expect(entries[1]).toMatchObject({ level: 'error', message: 'e' }); + }); + + it('getAccumulatedLogsAsText formats entries with level prefix', () => { + logInfo('hello'); + logWarn('world'); + expect(getAccumulatedLogsAsText()).toBe('[INFO] hello\n[WARN] world'); + }); + + it('getAccumulatedLogsAsText includes stack for errors with stack', () => { + const err = new Error('fail'); + err.stack = 'Error: fail\n at foo.js:1:1'; + logError(err); + const text = getAccumulatedLogsAsText(); + expect(text).toContain('[ERROR] fail'); + 
expect(text).toContain('Error: fail'); + }); + + it('clearAccumulatedLogs removes all entries', () => { + logInfo('x'); + expect(getAccumulatedLogEntries()).toHaveLength(1); + clearAccumulatedLogs(); + expect(getAccumulatedLogEntries()).toHaveLength(0); + expect(getAccumulatedLogsAsText()).toBe(''); + }); + + it('logDebugInfo only accumulates when debug is on', () => { + setGlobalLoggerDebug(false); + logDebugInfo('hidden'); + expect(getAccumulatedLogEntries()).toHaveLength(0); + setGlobalLoggerDebug(true); + logDebugInfo('visible'); + const entries = getAccumulatedLogEntries(); + expect(entries).toHaveLength(1); + expect(entries[0]).toMatchObject({ level: 'debug', message: 'visible' }); + }); + + it('logInfo with skipAccumulation does not accumulate', () => { + logInfo('skip', false, undefined, true); + expect(getAccumulatedLogEntries()).toHaveLength(0); + }); + + it('logWarning accumulates via logWarn', () => { + logWarning('warn msg'); + const entries = getAccumulatedLogEntries(); + expect(entries).toHaveLength(1); + expect(entries[0]).toMatchObject({ level: 'warn', message: 'warn msg' }); + }); + + it('getAccumulatedLogEntries returns a copy so mutating it does not affect internal buffer', () => { + logInfo('one'); + const entries = getAccumulatedLogEntries(); + entries.push({ level: 'info', message: 'fake', timestamp: 0 }); + expect(getAccumulatedLogEntries()).toHaveLength(1); + expect(getAccumulatedLogsAsText()).toBe('[INFO] one'); + }); + }); }); diff --git a/src/utils/logger.ts b/src/utils/logger.ts index df6cb031..2daebd89 100644 --- a/src/utils/logger.ts +++ b/src/utils/logger.ts @@ -9,6 +9,30 @@ export interface LogEntry { metadata?: Record; } +const accumulatedLogEntries: LogEntry[] = []; + +function pushLogEntry(entry: LogEntry): void { + accumulatedLogEntries.push(entry); +} + +export function getAccumulatedLogEntries(): LogEntry[] { + return [...accumulatedLogEntries]; +} + +export function getAccumulatedLogsAsText(): string { + return 
accumulatedLogEntries + .map((e) => { + const prefix = `[${e.level.toUpperCase()}]`; + const meta = e.metadata?.stack ? `\n${String(e.metadata.stack)}` : ''; + return `${prefix} ${e.message}${meta}`; + }) + .join('\n'); +} + +export function clearAccumulatedLogs(): void { + accumulatedLogEntries.length = 0; +} + export function setGlobalLoggerDebug(debug: boolean, isRemote: boolean = false) { loggerDebug = debug; loggerRemote = isRemote; @@ -22,7 +46,10 @@ function formatStructuredLog(entry: LogEntry): string { return JSON.stringify(entry); } -export function logInfo(message: string, previousWasSingleLine: boolean = false, metadata?: Record) { +export function logInfo(message: string, previousWasSingleLine: boolean = false, metadata?: Record, skipAccumulation?: boolean) { + if (!skipAccumulation) { + pushLogEntry({ level: 'info', message, timestamp: Date.now(), metadata }); + } if (previousWasSingleLine && !loggerRemote) { console.log() } @@ -40,6 +67,7 @@ export function logInfo(message: string, previousWasSingleLine: boolean = false, } export function logWarn(message: string, metadata?: Record) { + pushLogEntry({ level: 'warn', message, timestamp: Date.now(), metadata }); if (structuredLogging) { console.warn(formatStructuredLog({ level: 'warn', @@ -58,16 +86,17 @@ export function logWarning(message: string) { export function logError(message: unknown, metadata?: Record) { const errorMessage = message instanceof Error ? message.message : String(message); - + const metaWithStack = { + ...metadata, + stack: message instanceof Error ? message.stack : undefined + }; + pushLogEntry({ level: 'error', message: errorMessage, timestamp: Date.now(), metadata: metaWithStack }); if (structuredLogging) { console.error(formatStructuredLog({ level: 'error', message: errorMessage, timestamp: Date.now(), - metadata: { - ...metadata, - stack: message instanceof Error ? 
message.stack : undefined - } + metadata: metaWithStack })); } else { console.error(errorMessage); @@ -76,6 +105,7 @@ export function logError(message: unknown, metadata?: Record) { export function logDebugInfo(message: string, previousWasSingleLine: boolean = false, metadata?: Record) { if (loggerDebug) { + pushLogEntry({ level: 'debug', message, timestamp: Date.now(), metadata }); if (structuredLogging) { console.log(formatStructuredLog({ level: 'debug', @@ -84,7 +114,7 @@ export function logDebugInfo(message: string, previousWasSingleLine: boolean = f metadata })); } else { - logInfo(message, previousWasSingleLine); + logInfo(message, previousWasSingleLine, undefined, true); } } } From d64a8740c2ce6b5e8b3113cda34b12a4fde36b36 Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Sun, 15 Feb 2026 20:54:34 +0100 Subject: [PATCH 05/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Enhance IssueRepository and publishFindings to support commit SHA watermarks in comments. Update the addComment and updateComment methods to accept an options parameter for commitSha, allowing comments to include a link to the specific commit. Adjust related tests to verify the inclusion of watermarks in comment bodies.
--- build/cli/index.js | 95 ++++++++++++++++--- .../src/data/repository/issue_repository.d.ts | 8 +- .../bugbot/publish_findings_use_case.d.ts | 2 + build/cli/src/utils/comment_watermark.d.ts | 11 +++ build/github_action/index.js | 95 ++++++++++++++++--- .../src/data/repository/issue_repository.d.ts | 8 +- .../bugbot/publish_findings_use_case.d.ts | 2 + .../src/utils/comment_watermark.d.ts | 11 +++ .../__tests__/issue_repository.test.ts | 44 ++++++--- src/data/repository/issue_repository.ts | 15 ++- ...detect_potential_problems_use_case.test.ts | 6 +- .../publish_findings_use_case.test.ts | 4 +- .../bugbot/publish_findings_use_case.ts | 22 +++-- .../detect_potential_problems_use_case.ts | 2 + src/utils/__tests__/comment_watermark.test.ts | 25 +++++ src/utils/comment_watermark.ts | 27 ++++++ 16 files changed, 324 insertions(+), 53 deletions(-) create mode 100644 build/cli/src/utils/comment_watermark.d.ts create mode 100644 build/github_action/src/utils/comment_watermark.d.ts create mode 100644 src/utils/__tests__/comment_watermark.test.ts create mode 100644 src/utils/comment_watermark.ts diff --git a/build/cli/index.js b/build/cli/index.js index 8a80670b..8f19b8d6 100755 --- a/build/cli/index.js +++ b/build/cli/index.js @@ -50335,6 +50335,7 @@ Object.defineProperty(exports, "__esModule", ({ value: true })); exports.IssueRepository = exports.PROGRESS_LABEL_PATTERN = void 0; const core = __importStar(__nccwpck_require__(2186)); const github = __importStar(__nccwpck_require__(5438)); +const comment_watermark_1 = __nccwpck_require__(4467); const logger_1 = __nccwpck_require__(8836); const milestone_1 = __nccwpck_require__(2298); /** Matches labels that are progress percentages (e.g. "0%", "85%"). Used for setProgressLabel and syncing. 
*/ @@ -50691,23 +50692,27 @@ class IssueRepository { }); return pullRequest.data.head.ref; }; - this.addComment = async (owner, repository, issueNumber, comment, token) => { + this.addComment = async (owner, repository, issueNumber, comment, token, options) => { + const watermark = (0, comment_watermark_1.getCommentWatermark)(options?.commitSha ? { commitSha: options.commitSha, owner, repo: repository } : undefined); + const body = `${comment}\n\n${watermark}`; const octokit = github.getOctokit(token); await octokit.rest.issues.createComment({ owner: owner, repo: repository, issue_number: issueNumber, - body: comment, + body, }); (0, logger_1.logDebugInfo)(`Comment added to Issue ${issueNumber}.`); }; - this.updateComment = async (owner, repository, issueNumber, commentId, comment, token) => { + this.updateComment = async (owner, repository, issueNumber, commentId, comment, token, options) => { + const watermark = (0, comment_watermark_1.getCommentWatermark)(options?.commitSha ? { commitSha: options.commitSha, owner, repo: repository } : undefined); + const body = `${comment}\n\n${watermark}`; const octokit = github.getOctokit(token); await octokit.rest.issues.updateComment({ owner: owner, repo: repository, comment_id: commentId, - body: comment, + body, }); (0, logger_1.logDebugInfo)(`Comment ${commentId} updated in Issue ${issueNumber}.`); }; @@ -56273,12 +56278,13 @@ Object.defineProperty(exports, "__esModule", ({ value: true })); exports.publishFindings = publishFindings; const issue_repository_1 = __nccwpck_require__(57); const pull_request_repository_1 = __nccwpck_require__(634); +const comment_watermark_1 = __nccwpck_require__(4467); const logger_1 = __nccwpck_require__(8836); const marker_1 = __nccwpck_require__(2401); const path_validation_1 = __nccwpck_require__(1999); /** Creates or updates issue comments for each finding; creates PR review comments only when finding.file is in prFiles. 
*/ async function publishFindings(param) { - const { execution, context, findings, overflowCount = 0, overflowTitles = [] } = param; + const { execution, context, findings, commitSha, overflowCount = 0, overflowTitles = [] } = param; const { existingByFindingId, openPrNumbers, prContext } = context; const issueNumber = execution.issueNumber; const token = execution.tokens.token; @@ -56286,18 +56292,22 @@ async function publishFindings(param) { const repo = execution.repo; const issueRepository = new issue_repository_1.IssueRepository(); const pullRequestRepository = new pull_request_repository_1.PullRequestRepository(); + const bugbotWatermark = commitSha && owner && repo + ? (0, comment_watermark_1.getCommentWatermark)({ commitSha, owner, repo }) + : (0, comment_watermark_1.getCommentWatermark)(); const prFiles = prContext?.prFiles ?? []; const pathToFirstDiffLine = prContext?.pathToFirstDiffLine ?? {}; const prCommentsToCreate = []; for (const finding of findings) { const existing = existingByFindingId[finding.id]; const commentBody = (0, marker_1.buildCommentBody)(finding, false); + const bodyWithWatermark = `${commentBody}\n\n${bugbotWatermark}`; if (existing?.issueCommentId != null) { - await issueRepository.updateComment(owner, repo, issueNumber, existing.issueCommentId, commentBody, token); + await issueRepository.updateComment(owner, repo, issueNumber, existing.issueCommentId, commentBody, token, commitSha ? { commitSha } : undefined); (0, logger_1.logDebugInfo)(`Updated bugbot comment for finding ${finding.id} on issue.`); } else { - await issueRepository.addComment(owner, repo, issueNumber, commentBody, token); + await issueRepository.addComment(owner, repo, issueNumber, commentBody, token, commitSha ? { commitSha } : undefined); (0, logger_1.logDebugInfo)(`Added bugbot comment for finding ${finding.id} on issue.`); } // PR review comment: only if this finding's file is in the PR changed files (so GitHub can attach the comment). 
@@ -56306,10 +56316,10 @@ async function publishFindings(param) { if (path) { const line = finding.line ?? pathToFirstDiffLine[path] ?? 1; if (existing?.prCommentId != null && existing.prNumber === openPrNumbers[0]) { - await pullRequestRepository.updatePullRequestReviewComment(owner, repo, existing.prCommentId, commentBody, token); + await pullRequestRepository.updatePullRequestReviewComment(owner, repo, existing.prCommentId, bodyWithWatermark, token); } else { - prCommentsToCreate.push({ path, line, body: commentBody }); + prCommentsToCreate.push({ path, line, body: bodyWithWatermark }); } } else if (finding.file != null && String(finding.file).trim() !== "") { @@ -56327,7 +56337,7 @@ async function publishFindings(param) { const overflowBody = `## More findings (comment limit) There are **${overflowCount}** more finding(s) that were not published as individual comments. Review locally or in the full diff to see the list.${titlesList}`; - await issueRepository.addComment(owner, repo, issueNumber, overflowBody, token); + await issueRepository.addComment(owner, repo, issueNumber, overflowBody, token, commitSha ? { commitSha } : undefined); (0, logger_1.logDebugInfo)(`Added overflow comment: ${overflowCount} additional finding(s) not published individually.`); } } @@ -56587,12 +56597,46 @@ exports.CheckChangesIssueSizeUseCase = CheckChangesIssueSizeUseCase; /***/ }), /***/ 7395: -/***/ ((__unused_webpack_module, exports, __nccwpck_require__) => { +/***/ (function(__unused_webpack_module, exports, __nccwpck_require__) { "use strict"; +var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + var desc = Object.getOwnPropertyDescriptor(m, k); + if (!desc || ("get" in desc ? 
!m.__esModule : desc.writable || desc.configurable)) { + desc = { enumerable: true, get: function() { return m[k]; } }; + } + Object.defineProperty(o, k2, desc); +}) : (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + o[k2] = m[k]; +})); +var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) { + Object.defineProperty(o, "default", { enumerable: true, value: v }); +}) : function(o, v) { + o["default"] = v; +}); +var __importStar = (this && this.__importStar) || (function () { + var ownKeys = function(o) { + ownKeys = Object.getOwnPropertyNames || function (o) { + var ar = []; + for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k; + return ar; + }; + return ownKeys(o); + }; + return function (mod) { + if (mod && mod.__esModule) return mod; + var result = {}; + if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]); + __setModuleDefault(result, mod); + return result; + }; +})(); Object.defineProperty(exports, "__esModule", ({ value: true })); exports.DetectPotentialProblemsUseCase = void 0; +const github = __importStar(__nccwpck_require__(5438)); const result_1 = __nccwpck_require__(7305); const ai_repository_1 = __nccwpck_require__(8307); const constants_1 = __nccwpck_require__(8593); @@ -56671,6 +56715,7 @@ class DetectPotentialProblemsUseCase { execution: param, context, findings: toPublish, + commitSha: github.context.sha, overflowCount: overflowCount > 0 ? overflowCount : undefined, overflowTitles: overflowCount > 0 ? overflowTitles : undefined, }); @@ -59766,6 +59811,34 @@ ${this.translatedKey} exports.CheckPullRequestCommentLanguageUseCase = CheckPullRequestCommentLanguageUseCase; +/***/ }), + +/***/ 4467: +/***/ ((__unused_webpack_module, exports) => { + +"use strict"; + +/** + * Watermark appended to comments (issues and PRs) to attribute Copilot. 
+ * Bugbot comments include commit link and note about auto-update on new commits. + */ +Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.COPILOT_MARKETPLACE_URL = void 0; +exports.getCommentWatermark = getCommentWatermark; +exports.COPILOT_MARKETPLACE_URL = 'https://github.com/marketplace/actions/copilot-github-with-super-powers'; +const DEFAULT_WATERMARK = `Made with ❤️ by [vypdev/copilot](${exports.COPILOT_MARKETPLACE_URL})`; +function commitUrl(owner, repo, sha) { + return `https://github.com/${owner}/${repo}/commit/${sha}`; +} +function getCommentWatermark(options) { + if (options?.commitSha && options?.owner && options?.repo) { + const url = commitUrl(options.owner, options.repo, options.commitSha); + return `Written by [vypdev/copilot](${exports.COPILOT_MARKETPLACE_URL}) for commit [${options.commitSha}](${url}). This will update automatically on new commits.`; + } + return DEFAULT_WATERMARK; +} + + /***/ }), /***/ 8593: diff --git a/build/cli/src/data/repository/issue_repository.d.ts b/build/cli/src/data/repository/issue_repository.d.ts index dbd004e0..f69e3668 100644 --- a/build/cli/src/data/repository/issue_repository.d.ts +++ b/build/cli/src/data/repository/issue_repository.d.ts @@ -37,8 +37,12 @@ export declare class IssueRepository { isIssue: (owner: string, repository: string, issueNumber: number, token: string) => Promise; isPullRequest: (owner: string, repository: string, issueNumber: number, token: string) => Promise; getHeadBranch: (owner: string, repository: string, issueNumber: number, token: string) => Promise; - addComment: (owner: string, repository: string, issueNumber: number, comment: string, token: string) => Promise; - updateComment: (owner: string, repository: string, issueNumber: number, commentId: number, comment: string, token: string) => Promise; + addComment: (owner: string, repository: string, issueNumber: number, comment: string, token: string, options?: { + commitSha?: string; + }) => Promise; + 
updateComment: (owner: string, repository: string, issueNumber: number, commentId: number, comment: string, token: string, options?: { + commitSha?: string; + }) => Promise; /** * Lists all comments on an issue (for bugbot: find existing findings by marker). * Uses pagination to fetch every comment (default API returns only 30 per page). diff --git a/build/cli/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts b/build/cli/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts index 22a093cc..712e16b8 100644 --- a/build/cli/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts +++ b/build/cli/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts @@ -12,6 +12,8 @@ export interface PublishFindingsParam { execution: Execution; context: BugbotContext; findings: BugbotFinding[]; + /** Commit SHA for bugbot watermark (commit link). When set, comment uses "for commit ..." watermark. */ + commitSha?: string; /** When findings were limited by max comments, add one summary comment with this overflow info. */ overflowCount?: number; overflowTitles?: string[]; diff --git a/build/cli/src/utils/comment_watermark.d.ts b/build/cli/src/utils/comment_watermark.d.ts new file mode 100644 index 00000000..858f7dc1 --- /dev/null +++ b/build/cli/src/utils/comment_watermark.d.ts @@ -0,0 +1,11 @@ +/** + * Watermark appended to comments (issues and PRs) to attribute Copilot. + * Bugbot comments include commit link and note about auto-update on new commits. 
+ */ +export declare const COPILOT_MARKETPLACE_URL = "https://github.com/marketplace/actions/copilot-github-with-super-powers"; +export interface BugbotWatermarkOptions { + commitSha: string; + owner: string; + repo: string; +} +export declare function getCommentWatermark(options?: BugbotWatermarkOptions): string; diff --git a/build/github_action/index.js b/build/github_action/index.js index 561f9284..30cc551d 100644 --- a/build/github_action/index.js +++ b/build/github_action/index.js @@ -45403,6 +45403,7 @@ Object.defineProperty(exports, "__esModule", ({ value: true })); exports.IssueRepository = exports.PROGRESS_LABEL_PATTERN = void 0; const core = __importStar(__nccwpck_require__(2186)); const github = __importStar(__nccwpck_require__(5438)); +const comment_watermark_1 = __nccwpck_require__(4467); const logger_1 = __nccwpck_require__(8836); const milestone_1 = __nccwpck_require__(2298); /** Matches labels that are progress percentages (e.g. "0%", "85%"). Used for setProgressLabel and syncing. */ @@ -45759,23 +45760,27 @@ class IssueRepository { }); return pullRequest.data.head.ref; }; - this.addComment = async (owner, repository, issueNumber, comment, token) => { + this.addComment = async (owner, repository, issueNumber, comment, token, options) => { + const watermark = (0, comment_watermark_1.getCommentWatermark)(options?.commitSha ? 
{ commitSha: options.commitSha, owner, repo: repository } : undefined); + const body = `${comment}\n\n${watermark}`; const octokit = github.getOctokit(token); await octokit.rest.issues.createComment({ owner: owner, repo: repository, issue_number: issueNumber, - body: comment, + body, }); (0, logger_1.logDebugInfo)(`Comment added to Issue ${issueNumber}.`); }; - this.updateComment = async (owner, repository, issueNumber, commentId, comment, token) => { + this.updateComment = async (owner, repository, issueNumber, commentId, comment, token, options) => { + const watermark = (0, comment_watermark_1.getCommentWatermark)(options?.commitSha ? { commitSha: options.commitSha, owner, repo: repository } : undefined); + const body = `${comment}\n\n${watermark}`; const octokit = github.getOctokit(token); await octokit.rest.issues.updateComment({ owner: owner, repo: repository, comment_id: commentId, - body: comment, + body, }); (0, logger_1.logDebugInfo)(`Comment ${commentId} updated in Issue ${issueNumber}.`); }; @@ -51341,12 +51346,13 @@ Object.defineProperty(exports, "__esModule", ({ value: true })); exports.publishFindings = publishFindings; const issue_repository_1 = __nccwpck_require__(57); const pull_request_repository_1 = __nccwpck_require__(634); +const comment_watermark_1 = __nccwpck_require__(4467); const logger_1 = __nccwpck_require__(8836); const marker_1 = __nccwpck_require__(2401); const path_validation_1 = __nccwpck_require__(1999); /** Creates or updates issue comments for each finding; creates PR review comments only when finding.file is in prFiles. 
*/ async function publishFindings(param) { - const { execution, context, findings, overflowCount = 0, overflowTitles = [] } = param; + const { execution, context, findings, commitSha, overflowCount = 0, overflowTitles = [] } = param; const { existingByFindingId, openPrNumbers, prContext } = context; const issueNumber = execution.issueNumber; const token = execution.tokens.token; @@ -51354,18 +51360,22 @@ async function publishFindings(param) { const repo = execution.repo; const issueRepository = new issue_repository_1.IssueRepository(); const pullRequestRepository = new pull_request_repository_1.PullRequestRepository(); + const bugbotWatermark = commitSha && owner && repo + ? (0, comment_watermark_1.getCommentWatermark)({ commitSha, owner, repo }) + : (0, comment_watermark_1.getCommentWatermark)(); const prFiles = prContext?.prFiles ?? []; const pathToFirstDiffLine = prContext?.pathToFirstDiffLine ?? {}; const prCommentsToCreate = []; for (const finding of findings) { const existing = existingByFindingId[finding.id]; const commentBody = (0, marker_1.buildCommentBody)(finding, false); + const bodyWithWatermark = `${commentBody}\n\n${bugbotWatermark}`; if (existing?.issueCommentId != null) { - await issueRepository.updateComment(owner, repo, issueNumber, existing.issueCommentId, commentBody, token); + await issueRepository.updateComment(owner, repo, issueNumber, existing.issueCommentId, commentBody, token, commitSha ? { commitSha } : undefined); (0, logger_1.logDebugInfo)(`Updated bugbot comment for finding ${finding.id} on issue.`); } else { - await issueRepository.addComment(owner, repo, issueNumber, commentBody, token); + await issueRepository.addComment(owner, repo, issueNumber, commentBody, token, commitSha ? { commitSha } : undefined); (0, logger_1.logDebugInfo)(`Added bugbot comment for finding ${finding.id} on issue.`); } // PR review comment: only if this finding's file is in the PR changed files (so GitHub can attach the comment). 
@@ -51374,10 +51384,10 @@ async function publishFindings(param) { if (path) { const line = finding.line ?? pathToFirstDiffLine[path] ?? 1; if (existing?.prCommentId != null && existing.prNumber === openPrNumbers[0]) { - await pullRequestRepository.updatePullRequestReviewComment(owner, repo, existing.prCommentId, commentBody, token); + await pullRequestRepository.updatePullRequestReviewComment(owner, repo, existing.prCommentId, bodyWithWatermark, token); } else { - prCommentsToCreate.push({ path, line, body: commentBody }); + prCommentsToCreate.push({ path, line, body: bodyWithWatermark }); } } else if (finding.file != null && String(finding.file).trim() !== "") { @@ -51395,7 +51405,7 @@ async function publishFindings(param) { const overflowBody = `## More findings (comment limit) There are **${overflowCount}** more finding(s) that were not published as individual comments. Review locally or in the full diff to see the list.${titlesList}`; - await issueRepository.addComment(owner, repo, issueNumber, overflowBody, token); + await issueRepository.addComment(owner, repo, issueNumber, overflowBody, token, commitSha ? { commitSha } : undefined); (0, logger_1.logDebugInfo)(`Added overflow comment: ${overflowCount} additional finding(s) not published individually.`); } } @@ -51655,12 +51665,46 @@ exports.CheckChangesIssueSizeUseCase = CheckChangesIssueSizeUseCase; /***/ }), /***/ 7395: -/***/ ((__unused_webpack_module, exports, __nccwpck_require__) => { +/***/ (function(__unused_webpack_module, exports, __nccwpck_require__) { "use strict"; +var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + var desc = Object.getOwnPropertyDescriptor(m, k); + if (!desc || ("get" in desc ? 
!m.__esModule : desc.writable || desc.configurable)) { + desc = { enumerable: true, get: function() { return m[k]; } }; + } + Object.defineProperty(o, k2, desc); +}) : (function(o, m, k, k2) { + if (k2 === undefined) k2 = k; + o[k2] = m[k]; +})); +var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) { + Object.defineProperty(o, "default", { enumerable: true, value: v }); +}) : function(o, v) { + o["default"] = v; +}); +var __importStar = (this && this.__importStar) || (function () { + var ownKeys = function(o) { + ownKeys = Object.getOwnPropertyNames || function (o) { + var ar = []; + for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k; + return ar; + }; + return ownKeys(o); + }; + return function (mod) { + if (mod && mod.__esModule) return mod; + var result = {}; + if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]); + __setModuleDefault(result, mod); + return result; + }; +})(); Object.defineProperty(exports, "__esModule", ({ value: true })); exports.DetectPotentialProblemsUseCase = void 0; +const github = __importStar(__nccwpck_require__(5438)); const result_1 = __nccwpck_require__(7305); const ai_repository_1 = __nccwpck_require__(8307); const constants_1 = __nccwpck_require__(8593); @@ -51739,6 +51783,7 @@ class DetectPotentialProblemsUseCase { execution: param, context, findings: toPublish, + commitSha: github.context.sha, overflowCount: overflowCount > 0 ? overflowCount : undefined, overflowTitles: overflowCount > 0 ? overflowTitles : undefined, }); @@ -55069,6 +55114,34 @@ ${this.translatedKey} exports.CheckPullRequestCommentLanguageUseCase = CheckPullRequestCommentLanguageUseCase; +/***/ }), + +/***/ 4467: +/***/ ((__unused_webpack_module, exports) => { + +"use strict"; + +/** + * Watermark appended to comments (issues and PRs) to attribute Copilot. 
+ * Bugbot comments include commit link and note about auto-update on new commits. + */ +Object.defineProperty(exports, "__esModule", ({ value: true })); +exports.COPILOT_MARKETPLACE_URL = void 0; +exports.getCommentWatermark = getCommentWatermark; +exports.COPILOT_MARKETPLACE_URL = 'https://github.com/marketplace/actions/copilot-github-with-super-powers'; +const DEFAULT_WATERMARK = `Made with ❤️ by [vypdev/copilot](${exports.COPILOT_MARKETPLACE_URL})`; +function commitUrl(owner, repo, sha) { + return `https://github.com/${owner}/${repo}/commit/${sha}`; +} +function getCommentWatermark(options) { + if (options?.commitSha && options?.owner && options?.repo) { + const url = commitUrl(options.owner, options.repo, options.commitSha); + return `Written by [vypdev/copilot](${exports.COPILOT_MARKETPLACE_URL}) for commit [${options.commitSha}](${url}). This will update automatically on new commits.`; + } + return DEFAULT_WATERMARK; +} + + /***/ }), /***/ 8593: diff --git a/build/github_action/src/data/repository/issue_repository.d.ts b/build/github_action/src/data/repository/issue_repository.d.ts index dbd004e0..f69e3668 100644 --- a/build/github_action/src/data/repository/issue_repository.d.ts +++ b/build/github_action/src/data/repository/issue_repository.d.ts @@ -37,8 +37,12 @@ export declare class IssueRepository { isIssue: (owner: string, repository: string, issueNumber: number, token: string) => Promise; isPullRequest: (owner: string, repository: string, issueNumber: number, token: string) => Promise; getHeadBranch: (owner: string, repository: string, issueNumber: number, token: string) => Promise; - addComment: (owner: string, repository: string, issueNumber: number, comment: string, token: string) => Promise; - updateComment: (owner: string, repository: string, issueNumber: number, commentId: number, comment: string, token: string) => Promise; + addComment: (owner: string, repository: string, issueNumber: number, comment: string, token: string, options?: { + 
commitSha?: string; + }) => Promise; + updateComment: (owner: string, repository: string, issueNumber: number, commentId: number, comment: string, token: string, options?: { + commitSha?: string; + }) => Promise; /** * Lists all comments on an issue (for bugbot: find existing findings by marker). * Uses pagination to fetch every comment (default API returns only 30 per page). diff --git a/build/github_action/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts b/build/github_action/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts index 22a093cc..712e16b8 100644 --- a/build/github_action/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts +++ b/build/github_action/src/usecase/steps/commit/bugbot/publish_findings_use_case.d.ts @@ -12,6 +12,8 @@ export interface PublishFindingsParam { execution: Execution; context: BugbotContext; findings: BugbotFinding[]; + /** Commit SHA for bugbot watermark (commit link). When set, comment uses "for commit ..." watermark. */ + commitSha?: string; /** When findings were limited by max comments, add one summary comment with this overflow info. */ overflowCount?: number; overflowTitles?: string[]; diff --git a/build/github_action/src/utils/comment_watermark.d.ts b/build/github_action/src/utils/comment_watermark.d.ts new file mode 100644 index 00000000..858f7dc1 --- /dev/null +++ b/build/github_action/src/utils/comment_watermark.d.ts @@ -0,0 +1,11 @@ +/** + * Watermark appended to comments (issues and PRs) to attribute Copilot. + * Bugbot comments include commit link and note about auto-update on new commits. 
+ */ +export declare const COPILOT_MARKETPLACE_URL = "https://github.com/marketplace/actions/copilot-github-with-super-powers"; +export interface BugbotWatermarkOptions { + commitSha: string; + owner: string; + repo: string; +} +export declare function getCommentWatermark(options?: BugbotWatermarkOptions): string; diff --git a/src/data/repository/__tests__/issue_repository.test.ts b/src/data/repository/__tests__/issue_repository.test.ts index ce105459..b591fefd 100644 --- a/src/data/repository/__tests__/issue_repository.test.ts +++ b/src/data/repository/__tests__/issue_repository.test.ts @@ -367,15 +367,24 @@ describe('IssueRepository', () => { }); describe('addComment', () => { - it('calls issues.createComment with owner, repo, issue_number, body', async () => { + it('calls issues.createComment with owner, repo, issue_number, body including default watermark', async () => { mockRest.issues.createComment.mockResolvedValue(undefined); await repo.addComment('owner', 'repo', 10, 'Hello comment', 'token'); - expect(mockRest.issues.createComment).toHaveBeenCalledWith({ - owner: 'owner', - repo: 'repo', - issue_number: 10, - body: 'Hello comment', - }); + const call = mockRest.issues.createComment.mock.calls[0][0]; + expect(call).toMatchObject({ owner: 'owner', repo: 'repo', issue_number: 10 }); + expect(call.body).toContain('Hello comment'); + expect(call.body).toContain('Made with'); + expect(call.body).toContain('vypdev/copilot'); + }); + + it('appends bugbot watermark when commitSha option is provided', async () => { + mockRest.issues.createComment.mockResolvedValue(undefined); + await repo.addComment('owner', 'repo', 10, 'Bugbot finding', 'token', { commitSha: 'abc123' }); + const call = mockRest.issues.createComment.mock.calls[0][0]; + expect(call.body).toContain('Bugbot finding'); + expect(call.body).toContain('Written by'); + expect(call.body).toContain('abc123'); + expect(call.body).toContain('github.com/owner/repo/commit/abc123'); }); }); @@ -588,15 +597,22 @@ 
describe('IssueRepository', () => { }); describe('updateComment', () => { - it('calls issues.updateComment with comment_id and body', async () => { + it('calls issues.updateComment with comment_id and body including default watermark', async () => { mockRest.issues.updateComment.mockResolvedValue(undefined); await repo.updateComment('o', 'r', 1, 100, 'Updated body', 'token'); - expect(mockRest.issues.updateComment).toHaveBeenCalledWith({ - owner: 'o', - repo: 'r', - comment_id: 100, - body: 'Updated body', - }); + const call = mockRest.issues.updateComment.mock.calls[0][0]; + expect(call).toMatchObject({ owner: 'o', repo: 'r', comment_id: 100 }); + expect(call.body).toContain('Updated body'); + expect(call.body).toContain('Made with'); + }); + + it('appends bugbot watermark when commitSha option is provided', async () => { + mockRest.issues.updateComment.mockResolvedValue(undefined); + await repo.updateComment('o', 'r', 1, 100, 'Finding update', 'token', { commitSha: 'sha1' }); + const call = mockRest.issues.updateComment.mock.calls[0][0]; + expect(call.body).toContain('Finding update'); + expect(call.body).toContain('Written by'); + expect(call.body).toContain('sha1'); }); }); diff --git a/src/data/repository/issue_repository.ts b/src/data/repository/issue_repository.ts index fd264305..a0446c29 100644 --- a/src/data/repository/issue_repository.ts +++ b/src/data/repository/issue_repository.ts @@ -1,5 +1,6 @@ import * as core from "@actions/core"; import * as github from "@actions/github"; +import { getCommentWatermark } from "../../utils/comment_watermark"; import { logDebugInfo, logError } from "../../utils/logger"; import { Labels } from "../model/labels"; import { Milestone } from "../model/milestone"; @@ -503,13 +504,18 @@ export class IssueRepository { issueNumber: number, comment: string, token: string, + options?: { commitSha?: string }, ) => { + const watermark = getCommentWatermark( + options?.commitSha ? 
{ commitSha: options.commitSha, owner, repo: repository } : undefined + ); + const body = `${comment}\n\n${watermark}`; const octokit = github.getOctokit(token); await octokit.rest.issues.createComment({ owner: owner, repo: repository, issue_number: issueNumber, - body: comment, + body, }); logDebugInfo(`Comment added to Issue ${issueNumber}.`); @@ -522,13 +528,18 @@ export class IssueRepository { commentId: number, comment: string, token: string, + options?: { commitSha?: string }, ) => { + const watermark = getCommentWatermark( + options?.commitSha ? { commitSha: options.commitSha, owner, repo: repository } : undefined + ); + const body = `${comment}\n\n${watermark}`; const octokit = github.getOctokit(token); await octokit.rest.issues.updateComment({ owner: owner, repo: repository, comment_id: commentId, - body: comment, + body, }); logDebugInfo(`Comment ${commentId} updated in Issue ${issueNumber}.`); diff --git a/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts b/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts index 62eed6b7..69586731 100644 --- a/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts +++ b/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts @@ -225,7 +225,7 @@ describe('DetectPotentialProblemsUseCase', () => { expect(results[0].success).toBe(true); expect(results[0].steps?.[0]).toContain('1 new/current finding(s)'); expect(mockAddComment).toHaveBeenCalledTimes(1); - expect(mockAddComment).toHaveBeenCalledWith('owner', 'repo', 42, expect.any(String), 'token'); + expect(mockAddComment).toHaveBeenCalledWith('owner', 'repo', 42, expect.any(String), 'token', undefined); expect(mockAddComment.mock.calls[0][3]).toContain('Possible null dereference'); expect(mockAddComment.mock.calls[0][3]).toContain('copilot-bugbot'); expect(mockAddComment.mock.calls[0][3]).toContain('finding_id:"src/foo.ts:10:possible-null"'); @@ -282,7 +282,7 @@ 
describe('DetectPotentialProblemsUseCase', () => { await useCase.invoke(baseParam()); - expect(mockUpdateComment).toHaveBeenCalledWith('owner', 'repo', 42, 999, expect.any(String), 'token'); + expect(mockUpdateComment).toHaveBeenCalledWith('owner', 'repo', 42, 999, expect.any(String), 'token', undefined); expect(mockAddComment).not.toHaveBeenCalled(); }); @@ -423,7 +423,7 @@ describe('DetectPotentialProblemsUseCase', () => { await useCase.invoke(baseParam()); - expect(mockAddComment).toHaveBeenCalledWith('owner', 'repo', 42, expect.any(String), 'token'); + expect(mockAddComment).toHaveBeenCalledWith('owner', 'repo', 42, expect.any(String), 'token', undefined); expect(mockCreateReviewWithComments).not.toHaveBeenCalled(); }); diff --git a/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts b/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts index aef60701..f488f133 100644 --- a/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts +++ b/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts @@ -74,7 +74,7 @@ describe("publishFindings", () => { }); expect(mockAddComment).toHaveBeenCalledTimes(1); - expect(mockAddComment).toHaveBeenCalledWith("o", "r", 42, expect.stringContaining("## Test"), "t"); + expect(mockAddComment).toHaveBeenCalledWith("o", "r", 42, expect.stringContaining("## Test"), "t", undefined); expect(mockUpdateComment).not.toHaveBeenCalled(); }); @@ -87,7 +87,7 @@ describe("publishFindings", () => { findings: [finding()], }); - expect(mockUpdateComment).toHaveBeenCalledWith("o", "r", 42, 100, expect.any(String), "t"); + expect(mockUpdateComment).toHaveBeenCalledWith("o", "r", 42, 100, expect.any(String), "t", undefined); expect(mockAddComment).not.toHaveBeenCalled(); }); diff --git a/src/usecase/steps/commit/bugbot/publish_findings_use_case.ts b/src/usecase/steps/commit/bugbot/publish_findings_use_case.ts index 8c0ca06d..65414265 100644 --- 
a/src/usecase/steps/commit/bugbot/publish_findings_use_case.ts +++ b/src/usecase/steps/commit/bugbot/publish_findings_use_case.ts @@ -9,6 +9,7 @@ import type { Execution } from "../../../../data/model/execution"; import { IssueRepository } from "../../../../data/repository/issue_repository"; import { PullRequestRepository } from "../../../../data/repository/pull_request_repository"; +import { getCommentWatermark } from "../../../../utils/comment_watermark"; import { logDebugInfo, logInfo } from "../../../../utils/logger"; import type { BugbotContext } from "./types"; import type { BugbotFinding } from "./types"; @@ -19,6 +20,8 @@ export interface PublishFindingsParam { execution: Execution; context: BugbotContext; findings: BugbotFinding[]; + /** Commit SHA for bugbot watermark (commit link). When set, comment uses "for commit ..." watermark. */ + commitSha?: string; /** When findings were limited by max comments, add one summary comment with this overflow info. */ overflowCount?: number; overflowTitles?: string[]; @@ -26,7 +29,7 @@ export interface PublishFindingsParam { /** Creates or updates issue comments for each finding; creates PR review comments only when finding.file is in prFiles. */ export async function publishFindings(param: PublishFindingsParam): Promise { - const { execution, context, findings, overflowCount = 0, overflowTitles = [] } = param; + const { execution, context, findings, commitSha, overflowCount = 0, overflowTitles = [] } = param; const { existingByFindingId, openPrNumbers, prContext } = context; const issueNumber = execution.issueNumber; const token = execution.tokens.token; @@ -36,6 +39,11 @@ export async function publishFindings(param: PublishFindingsParam): Promise = []; @@ -43,6 +51,7 @@ export async function publishFindings(param: PublishFindingsParam): Promise 0 ? overflowCount : undefined, overflowTitles: overflowCount > 0 ? 
overflowTitles : undefined, }); diff --git a/src/utils/__tests__/comment_watermark.test.ts b/src/utils/__tests__/comment_watermark.test.ts new file mode 100644 index 00000000..01ded15c --- /dev/null +++ b/src/utils/__tests__/comment_watermark.test.ts @@ -0,0 +1,25 @@ +import { COPILOT_MARKETPLACE_URL, getCommentWatermark } from '../comment_watermark'; + +describe('comment_watermark', () => { + it('exports marketplace URL', () => { + expect(COPILOT_MARKETPLACE_URL).toBe( + 'https://github.com/marketplace/actions/copilot-github-with-super-powers' + ); + }); + + it('returns default watermark when no options', () => { + const w = getCommentWatermark(); + expect(w).toContain('Made with'); + expect(w).toContain('vypdev/copilot'); + expect(w).toContain(COPILOT_MARKETPLACE_URL); + }); + + it('returns bugbot watermark with commit link when options provided', () => { + const w = getCommentWatermark({ commitSha: 'abc123', owner: 'o', repo: 'r' }); + expect(w).toContain('Written by'); + expect(w).toContain('vypdev/copilot'); + expect(w).toContain('abc123'); + expect(w).toContain('github.com/o/r/commit/abc123'); + expect(w).toContain('This will update automatically on new commits'); + }); +}); diff --git a/src/utils/comment_watermark.ts b/src/utils/comment_watermark.ts new file mode 100644 index 00000000..49423345 --- /dev/null +++ b/src/utils/comment_watermark.ts @@ -0,0 +1,27 @@ +/** + * Watermark appended to comments (issues and PRs) to attribute Copilot. + * Bugbot comments include commit link and note about auto-update on new commits. 
+ */ + +export const COPILOT_MARKETPLACE_URL = + 'https://github.com/marketplace/actions/copilot-github-with-super-powers'; + +const DEFAULT_WATERMARK = `Made with ❤️ by [vypdev/copilot](${COPILOT_MARKETPLACE_URL})`; + +export interface BugbotWatermarkOptions { + commitSha: string; + owner: string; + repo: string; +} + +function commitUrl(owner: string, repo: string, sha: string): string { + return `https://github.com/${owner}/${repo}/commit/${sha}`; +} + +export function getCommentWatermark(options?: BugbotWatermarkOptions): string { + if (options?.commitSha && options?.owner && options?.repo) { + const url = commitUrl(options.owner, options.repo, options.commitSha); + return `Written by [vypdev/copilot](${COPILOT_MARKETPLACE_URL}) for commit [${options.commitSha}](${url}). This will update automatically on new commits.`; + } + return DEFAULT_WATERMARK; +} From b637168b4018a3c02c0e93694ab71440b3820d0e Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Sun, 15 Feb 2026 20:58:48 +0100 Subject: [PATCH 06/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Enhance tests for DetectPotentialProblemsUseCase by mocking GitHub actions context to handle undefined SHA. This improves test reliability and ensures proper handling of commit context in potential problem detection. 
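The watermark helper added above can be sketched in isolation. This is a standalone reimplementation for illustration only — the real code lives in `src/utils/comment_watermark.ts`; the copy here just shows the branching between the default attribution and the Bugbot commit-linked variant:

```typescript
// Standalone sketch of the watermark helper introduced in this patch.
// Not the repo's module — a self-contained copy to illustrate behavior.
const COPILOT_MARKETPLACE_URL =
  'https://github.com/marketplace/actions/copilot-github-with-super-powers';

interface BugbotWatermarkOptions {
  commitSha: string;
  owner: string;
  repo: string;
}

function getCommentWatermark(options?: BugbotWatermarkOptions): string {
  if (options?.commitSha && options?.owner && options?.repo) {
    // Bugbot variant: links the exact commit the findings were generated for.
    const url = `https://github.com/${options.owner}/${options.repo}/commit/${options.commitSha}`;
    return `Written by [vypdev/copilot](${COPILOT_MARKETPLACE_URL}) for commit [${options.commitSha}](${url}). This will update automatically on new commits.`;
  }
  // Default variant: plain attribution with no commit context.
  return `Made with ❤️ by [vypdev/copilot](${COPILOT_MARKETPLACE_URL})`;
}

const plain = getCommentWatermark();
const bugbot = getCommentWatermark({ commitSha: 'abc123', owner: 'o', repo: 'r' });
```

Callers that lack a commit SHA (e.g. non-Bugbot issue comments) simply omit the options and get the default watermark, which is why `addComment`/`updateComment` take an optional `{ commitSha }` rather than a required one.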
--- .../__tests__/detect_potential_problems_use_case.test.ts | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts b/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts index 69586731..caf71ada 100644 --- a/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts +++ b/src/usecase/steps/commit/__tests__/detect_potential_problems_use_case.test.ts @@ -8,6 +8,14 @@ import { DetectPotentialProblemsUseCase } from '../detect_potential_problems_use import { Ai } from '../../../../data/model/ai'; import type { Execution } from '../../../../data/model/execution'; +jest.mock('@actions/github', () => { + const actual = jest.requireActual('@actions/github'); + return { + ...actual, + context: { ...actual.context, sha: undefined }, + }; +}); + jest.mock('../../../../utils/logger', () => ({ logInfo: jest.fn(), logError: jest.fn(), From f4da070f07237710c18e70f73cb4df6b41b87042 Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Tue, 17 Feb 2026 22:55:06 +0100 Subject: [PATCH 07/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Fix commit URL generation to properly encode owner and repository names. Added tests to verify URL encoding for special characters in commit links. 
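The encoding fix in this commit can be seen in isolation: `encodeURIComponent` leaves typical GitHub owner/repo names untouched (dots and hyphens are unreserved) while escaping characters that would otherwise break the Markdown link. The helper name below mirrors the patch; the example inputs are illustrative:

```typescript
// Sketch of the encoded commit-URL builder from this patch.
function commitUrl(owner: string, repo: string, sha: string): string {
  // Encode owner and repo; the SHA is hex and needs no escaping.
  return `https://github.com/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/commit/${sha}`;
}

const normal = commitUrl('my.org', 'my-repo', 'abc123');
// → https://github.com/my.org/my-repo/commit/abc123  (dot and hyphen pass through)
const spaced = commitUrl('org with spaces', 'repo', 'def456');
// → https://github.com/org%20with%20spaces/repo/commit/def456
```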
--- src/utils/__tests__/comment_watermark.test.ts | 15 +++++++++++++++ src/utils/comment_watermark.ts | 2 +- 2 files changed, 16 insertions(+), 1 deletion(-) diff --git a/src/utils/__tests__/comment_watermark.test.ts b/src/utils/__tests__/comment_watermark.test.ts index 01ded15c..e09d1fff 100644 --- a/src/utils/__tests__/comment_watermark.test.ts +++ b/src/utils/__tests__/comment_watermark.test.ts @@ -22,4 +22,19 @@ describe('comment_watermark', () => { expect(w).toContain('github.com/o/r/commit/abc123'); expect(w).toContain('This will update automatically on new commits'); }); + + it('URL-encodes owner and repo in commit link when they contain special characters', () => { + const w = getCommentWatermark({ + commitSha: 'abc123', + owner: 'my.org', + repo: 'my-repo', + }); + expect(w).toContain('github.com/my.org/my-repo/commit/abc123'); + const w2 = getCommentWatermark({ + commitSha: 'def456', + owner: 'org with spaces', + repo: 'repo', + }); + expect(w2).toContain('github.com/org%20with%20spaces/repo/commit/def456'); + }); }); diff --git a/src/utils/comment_watermark.ts b/src/utils/comment_watermark.ts index 49423345..bd417060 100644 --- a/src/utils/comment_watermark.ts +++ b/src/utils/comment_watermark.ts @@ -15,7 +15,7 @@ export interface BugbotWatermarkOptions { } function commitUrl(owner: string, repo: string, sha: string): string { - return `https://github.com/${owner}/${repo}/commit/${sha}`; + return `https://github.com/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/commit/${sha}`; } export function getCommentWatermark(options?: BugbotWatermarkOptions): string { From 8c840d810c943e8a728e049c726d3c8662be4b43 Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Tue, 17 Feb 2026 22:55:49 +0100 Subject: [PATCH 08/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Fix commit URL generation in CLI and GitHub action files to properly encode owner and repository names, ensuring correct handling of special characters in URLs. 
--- build/cli/index.js | 2 +- build/github_action/index.js | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/build/cli/index.js b/build/cli/index.js index 8f19b8d6..7341479a 100755 --- a/build/cli/index.js +++ b/build/cli/index.js @@ -59828,7 +59828,7 @@ exports.getCommentWatermark = getCommentWatermark; exports.COPILOT_MARKETPLACE_URL = 'https://github.com/marketplace/actions/copilot-github-with-super-powers'; const DEFAULT_WATERMARK = `Made with ❤️ by [vypdev/copilot](${exports.COPILOT_MARKETPLACE_URL})`; function commitUrl(owner, repo, sha) { - return `https://github.com/${owner}/${repo}/commit/${sha}`; + return `https://github.com/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/commit/${sha}`; } function getCommentWatermark(options) { if (options?.commitSha && options?.owner && options?.repo) { diff --git a/build/github_action/index.js b/build/github_action/index.js index 30cc551d..04d2399c 100644 --- a/build/github_action/index.js +++ b/build/github_action/index.js @@ -55131,7 +55131,7 @@ exports.getCommentWatermark = getCommentWatermark; exports.COPILOT_MARKETPLACE_URL = 'https://github.com/marketplace/actions/copilot-github-with-super-powers'; const DEFAULT_WATERMARK = `Made with ❤️ by [vypdev/copilot](${exports.COPILOT_MARKETPLACE_URL})`; function commitUrl(owner, repo, sha) { - return `https://github.com/${owner}/${repo}/commit/${sha}`; + return `https://github.com/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/commit/${sha}`; } function getCommentWatermark(options) { if (options?.commitSha && options?.owner && options?.repo) { From 4dd336ae7fb7b2e403b0851e3a8a212a4fb9bbb4 Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Tue, 17 Feb 2026 23:07:09 +0100 Subject: [PATCH 09/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Enhance publishFindings tests to validate commitSha handling in comments and overflow scenarios. 
Added cases for using commitSha as a watermark, passing it to addComment and updateComment, and ensuring correct behavior when findings lack associated files. Improved test coverage for comment creation logic in various conditions. --- .../publish_findings_use_case.test.ts | 154 ++++++++++++++++++ 1 file changed, 154 insertions(+) diff --git a/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts b/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts index f488f133..3980f99b 100644 --- a/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts +++ b/src/usecase/steps/commit/bugbot/__tests__/publish_findings_use_case.test.ts @@ -224,4 +224,158 @@ describe("publishFindings", () => { expect(overflowCall[3]).toContain("Finding 0"); expect(overflowCall[3]).not.toContain("Finding 19"); }); + + it("uses commitSha for watermark and passes commitSha to addComment when provided", async () => { + await publishFindings({ + execution: baseExecution, + context: baseContext(), + findings: [finding()], + commitSha: "abc123", + }); + + expect(mockAddComment).toHaveBeenCalledWith( + "o", + "r", + 42, + expect.any(String), + "t", + { commitSha: "abc123" } + ); + }); + + it("passes commitSha to updateComment when finding has issueCommentId and commitSha is provided", async () => { + await publishFindings({ + execution: baseExecution, + context: baseContext({ + existingByFindingId: { f1: { issueCommentId: 100, resolved: false } }, + }), + findings: [finding()], + commitSha: "def456", + }); + + expect(mockUpdateComment).toHaveBeenCalledWith( + "o", + "r", + 42, + 100, + expect.any(String), + "t", + { commitSha: "def456" } + ); + }); + + it("uses line 1 when finding has no line and pathToFirstDiffLine has no entry for path", async () => { + await publishFindings({ + execution: baseExecution, + context: baseContext({ + openPrNumbers: [50], + prContext: { + prHeadSha: "sha1", + prFiles: [{ filename: "src/b.ts", status: 
"modified" }], + pathToFirstDiffLine: {}, + }, + }), + findings: [finding({ id: "f2", file: "src/b.ts" })], + }); + + expect(mockCreateReviewWithComments).toHaveBeenCalledWith( + "o", + "r", + 50, + "sha1", + expect.arrayContaining([ + expect.objectContaining({ path: "src/b.ts", line: 1 }), + ]), + "t" + ); + }); + + it("creates new PR review comment when existing prCommentId is for a different PR", async () => { + await publishFindings({ + execution: baseExecution, + context: baseContext({ + openPrNumbers: [50], + existingByFindingId: { f1: { prCommentId: 300, prNumber: 99, resolved: false } }, + prContext: { + prHeadSha: "sha1", + prFiles: [{ filename: "src/foo.ts", status: "modified" }], + pathToFirstDiffLine: {}, + }, + }), + findings: [finding({ file: "src/foo.ts" })], + }); + + expect(mockUpdatePullRequestReviewComment).not.toHaveBeenCalled(); + expect(mockCreateReviewWithComments).toHaveBeenCalledWith( + "o", + "r", + 50, + "sha1", + expect.arrayContaining([ + expect.objectContaining({ path: "src/foo.ts", body: expect.any(String) }), + ]), + "t" + ); + }); + + it("adds overflow comment with no titles list when overflowTitles is empty", async () => { + await publishFindings({ + execution: baseExecution, + context: baseContext(), + findings: [], + overflowCount: 2, + overflowTitles: [], + }); + + const overflowCall = mockAddComment.mock.calls.find( + (c: unknown[]) => (c[3] as string).includes("More findings") + ); + expect(overflowCall).toBeDefined(); + expect(overflowCall[3]).toContain("**2**"); + expect(overflowCall[3]).not.toMatch(/\n- /); + }); + + it("passes commitSha to addComment when adding overflow comment", async () => { + await publishFindings({ + execution: baseExecution, + context: baseContext(), + findings: [], + overflowCount: 1, + overflowTitles: [], + commitSha: "overflow-sha", + }); + + const overflowCall = mockAddComment.mock.calls.find( + (c: unknown[]) => (c[3] as string).includes("More findings") + ); + 
expect(overflowCall).toBeDefined(); + expect(overflowCall[5]).toEqual({ commitSha: "overflow-sha" }); + }); + + it("does not log when finding.file is not in prFiles but file is null or empty", async () => { + const { logInfo } = await import("../../../../../utils/logger"); + (logInfo as jest.Mock).mockClear(); + await publishFindings({ + execution: baseExecution, + context: baseContext({ + openPrNumbers: [50], + prContext: { + prHeadSha: "sha1", + prFiles: [{ filename: "src/only.ts", status: "modified" }], + pathToFirstDiffLine: {}, + }, + }), + findings: [ + finding({ id: "no-file", file: undefined }), + finding({ id: "empty-file", file: "" }), + finding({ id: "whitespace-file", file: " " }), + ], + }); + + expect(mockAddComment).toHaveBeenCalledTimes(3); + expect(mockCreateReviewWithComments).not.toHaveBeenCalled(); + expect(logInfo).not.toHaveBeenCalledWith( + expect.stringContaining("not in PR changed files") + ); + }); }); From 53f8788e701072c990559af4dd69445170d48f32 Mon Sep 17 00:00:00 2001 From: Efra Espada Date: Tue, 17 Feb 2026 23:12:40 +0100 Subject: [PATCH 10/16] bugfix-314-error-merging-release-branch-after-successful-deployment: Refactor publishFindings to improve commitSha handling in comments. Updated logic to ensure correct behavior in scenarios with missing associated files and added tests for various conditions, enhancing overall test coverage. 
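The get/update pattern that the new `ContentInterface` tests exercise can be sketched independently. The delimiter strings here are hypothetical (the real marker text is elided in the diff below); only the shape of the logic — extract between markers, replace in place, append when absent, fail on an orphan marker — reflects what the tests assert:

```typescript
// Illustrative marker-block helpers; START/END are made-up delimiters,
// not the ones ContentInterface actually emits.
const START = '[block:test-block:start]';
const END = '[block:test-block:end]';

// Return the text between the markers, or undefined when either is absent.
function getContent(description?: string): string | undefined {
  if (description === undefined) return undefined;
  const s = description.indexOf(START);
  const e = description.indexOf(END);
  if (s === -1 || e === -1 || e < s) return undefined;
  return description.slice(s + START.length, e);
}

// Replace an existing block, or append a new one when none exists.
// An orphan marker (start without end) makes the update fail (undefined).
function updateContent(description?: string, content?: string): string | undefined {
  if (description === undefined || content === undefined) return undefined;
  const existing = getContent(description);
  if (existing === undefined) {
    if (description.includes(START) || description.includes(END)) return undefined;
    return `${description}\n\n${START}\n${content}\n${END}`;
  }
  return description.replace(`${START}${existing}${END}`, `${START}\n${content}\n${END}`);
}
```

This append-or-replace design is what lets issue descriptions carry multiple independent hidden blocks (config, progress, etc.) that can each be rewritten without disturbing the surrounding text.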
---
 .../base/__tests__/content_interface.test.ts  | 127 ++++++++
 .../__tests__/issue_content_interface.test.ts | 281 ++++++++++++++++++
 2 files changed, 408 insertions(+)
 create mode 100644 src/manager/description/base/__tests__/content_interface.test.ts
 create mode 100644 src/manager/description/base/__tests__/issue_content_interface.test.ts

diff --git a/src/manager/description/base/__tests__/content_interface.test.ts b/src/manager/description/base/__tests__/content_interface.test.ts
new file mode 100644
index 00000000..6a8af641
--- /dev/null
+++ b/src/manager/description/base/__tests__/content_interface.test.ts
@@ -0,0 +1,127 @@
+import { ContentInterface } from '../content_interface';
+
+jest.mock('../../../../utils/logger', () => ({
+  logError: jest.fn(),
+}));
+
+/** Concrete implementation for testing the abstract ContentInterface. */
+class TestContent extends ContentInterface {
+  constructor(
+    public readonly testId: string,
+    public readonly testVisible: boolean,
+  ) {
+    super();
+  }
+  get id(): string {
+    return this.testId;
+  }
+  get visibleContent(): boolean {
+    return this.testVisible;
+  }
+}
+
+describe('ContentInterface', () => {
+  describe('visibleContent: true (HTML comment style)', () => {
+    const handler = new TestContent('foo', true);
+    const start = '';
+    const end = '';
+
+    describe('getContent', () => {
+      it('returns undefined when description is undefined', () => {
+        expect(handler.getContent(undefined)).toBeUndefined();
+      });
+
+      it('returns undefined when start pattern is missing', () => {
+        const desc = `pre\n${end}\npost`;
+        expect(handler.getContent(desc)).toBeUndefined();
+      });
+
+      it('returns undefined when end pattern is missing', () => {
+        const desc = `pre\n${start}\nmid`;
+        expect(handler.getContent(desc)).toBeUndefined();
+      });
+
+      it('returns content between start and end patterns', () => {
+        const desc = `pre\n${start}\ninner\n${end}\npost`;
+        expect(handler.getContent(desc)).toBe('\ninner\n');
+      });
+
+      it('returns first block when multiple blocks exist', () => {
+        const desc = `${start}\nfirst\n${end}\n${start}\nsecond\n${end}`;
+        expect(handler.getContent(desc)).toBe('\nfirst\n');
+      });
+
+      it('returns undefined when only start tag is present', () => {
+        const desc = `pre\n${start}\norphan`;
+        expect(handler.getContent(desc)).toBeUndefined();
+      });
+    });
+
+    describe('updateContent', () => {
+      it('returns undefined when description is undefined', () => {
+        expect(handler.updateContent(undefined, 'x')).toBeUndefined();
+      });
+
+      it('returns undefined when content is undefined', () => {
+        expect(handler.updateContent('body', undefined)).toBeUndefined();
+      });
+
+      it('appends new block when no existing block', () => {
+        const desc = 'some body';
+        const result = handler.updateContent(desc, 'new');
+        expect(result).toBe(`some body\n\n${start}\nnew\n${end}`);
+      });
+
+      it('replaces existing block when block exists', () => {
+        const desc = `pre\n${start}\nold\n${end}\npost`;
+        const result = handler.updateContent(desc, 'new');
+        expect(result).toBe(`pre\n${start}\nnew\n${end}\npost`);
+      });
+
+      it('returns undefined when only start tag exists (cannot add, update fails)', () => {
+        const desc = `pre\n${start}\norphan`;
+        const result = handler.updateContent(desc, 'new');
+        expect(result).toBeUndefined();
+      });
+    });
+  });
+
+  describe('visibleContent: false (hidden comment style)', () => {
+    const handler = new TestContent('config', false);
+    const start = '';
+    const end = '';
+
+    describe('getContent', () => {
+      it('returns undefined when description is undefined', () => {
+        expect(handler.getContent(undefined)).toBeUndefined();
+      });
+
+      it('returns undefined when start pattern is missing', () => {
+        expect(handler.getContent(`pre\n${end}\npost`)).toBeUndefined();
+      });
+
+      it('returns undefined when end pattern is missing', () => {
+        expect(handler.getContent(`pre\n${start}\nmid`)).toBeUndefined();
+      });
+
+      it('returns content between start and end patterns', () => {
+        const desc = `pre\n${start}\n{"x":1}\n${end}\npost`;
+        expect(handler.getContent(desc)).toBe('\n{"x":1}\n');
+      });
+    });
+
+    describe('updateContent', () => {
+      it('appends new block when no existing block', () => {
+        const desc = 'body';
+        const result = handler.updateContent(desc, 'data');
+        expect(result).toBe(`body\n\n${start}\ndata\n${end}`);
+      });
+
+      it('replaces existing block when block exists', () => {
+        const desc = `pre\n${start}\nold\n${end}\npost`;
+        const result = handler.updateContent(desc, 'new');
+        expect(result).toBe(`pre\n${start}\nnew\n${end}\npost`);
+      });
+    });
+  });
+});
diff --git a/src/manager/description/base/__tests__/issue_content_interface.test.ts b/src/manager/description/base/__tests__/issue_content_interface.test.ts
new file mode 100644
index 00000000..8dac3c9a
--- /dev/null
+++ b/src/manager/description/base/__tests__/issue_content_interface.test.ts
@@ -0,0 +1,281 @@
+import type { Execution } from '../../../../data/model/execution';
+import { IssueContentInterface } from '../issue_content_interface';
+
+jest.mock('../../../../utils/logger', () => ({
+  logError: jest.fn(),
+}));
+
+const mockGetDescription = jest.fn();
+const mockUpdateDescription = jest.fn();
+
+jest.mock('../../../../data/repository/issue_repository', () => ({
+  IssueRepository: jest.fn().mockImplementation(() => ({
+    getDescription: mockGetDescription,
+    updateDescription: mockUpdateDescription,
+  })),
+}));
+
+/** Concrete implementation for testing IssueContentInterface. */
+class TestIssueContent extends IssueContentInterface {
+  get id(): string {
+    return 'test-block';
+  }
+  get visibleContent(): boolean {
+    return false;
+  }
+}
+
+const START = '';
+const END = '';
+
+function descriptionWithBlock(body: string): string {
+  return `pre\n${START}\n${body}\n${END}\npost`;
+}
+
+function minimalExecution(overrides: Record<string, unknown> = {}): Execution {
+  return {
+    owner: 'o',
+    repo: 'r',
+    tokens: { token: 't' },
+    isIssue: true,
+    isPullRequest: false,
+    isPush: false,
+    isSingleAction: false,
+    issue: { number: 42 },
+    pullRequest: { number: 99 },
+    issueNumber: 42,
+    singleAction: { issue: 123, isIssue: false, isPullRequest: false, isPush: false },
+    ...overrides,
+  } as unknown as Execution;
+}
+
+describe('IssueContentInterface', () => {
+  let handler: TestIssueContent;
+
+  beforeEach(() => {
+    jest.clearAllMocks();
+    handler = new TestIssueContent();
+  });
+
+  describe('internalGetter', () => {
+    it('uses issue.number when isIssue and not single action', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('data'));
+      const execution = minimalExecution({ isIssue: true, isSingleAction: false });
+
+      const result = await handler.internalGetter(execution);
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 42, 't');
+      expect(result).toBe('\ndata\n');
+    });
+
+    it('uses pullRequest.number when isPullRequest and not single action', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('pr-data'));
+      const execution = minimalExecution({
+        isIssue: false,
+        isPullRequest: true,
+        isSingleAction: false,
+      });
+
+      const result = await handler.internalGetter(execution);
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 99, 't');
+      expect(result).toBe('\npr-data\n');
+    });
+
+    it('uses issueNumber when isPush', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('push-data'));
+      const execution = minimalExecution({
+        isIssue: false,
+        isPullRequest: false,
+        isPush: true,
+        issueNumber: 7,
+      });
+
+      const result = await handler.internalGetter(execution);
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 7, 't');
+      expect(result).toBe('\npush-data\n');
+    });
+
+    it('uses issueNumber when isSingleAction', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('single'));
+      const execution = minimalExecution({
+        isSingleAction: true,
+        issueNumber: 5,
+      });
+
+      const result = await handler.internalGetter(execution);
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 5, 't');
+      expect(result).toBe('\nsingle\n');
+    });
+
+    it('returns undefined when execution is not issue, PR, push or single action', async () => {
+      const execution = minimalExecution({
+        isIssue: false,
+        isPullRequest: false,
+        isPush: false,
+        isSingleAction: false,
+      });
+
+      const result = await handler.internalGetter(execution);
+
+      expect(mockGetDescription).not.toHaveBeenCalled();
+      expect(result).toBeUndefined();
+    });
+
+    it('returns undefined when getContent finds no block', async () => {
+      mockGetDescription.mockResolvedValue('no block here');
+      const execution = minimalExecution();
+
+      const result = await handler.internalGetter(execution);
+
+      expect(result).toBeUndefined();
+    });
+
+    it('throws when getDescription rejects', async () => {
+      mockGetDescription.mockRejectedValue(new Error('api error'));
+      const execution = minimalExecution();
+
+      await expect(handler.internalGetter(execution)).rejects.toThrow('api error');
+    });
+  });
+
+  describe('internalUpdate', () => {
+    it('fetches description, updates content and calls updateDescription when isIssue', async () => {
+      const desc = descriptionWithBlock('old');
+      mockGetDescription.mockResolvedValue(desc);
+      mockUpdateDescription.mockResolvedValue(undefined);
+      const execution = minimalExecution({ isIssue: true, isSingleAction: false });
+
+      const result = await handler.internalUpdate(execution, 'new');
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 42, 't');
+      expect(result).toContain('\nnew\n');
+      expect(mockUpdateDescription).toHaveBeenCalledWith('o', 'r', 42, expect.any(String), 't');
+    });
+
+    it('uses pullRequest.number when isPullRequest', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('x'));
+      mockUpdateDescription.mockResolvedValue(undefined);
+      const execution = minimalExecution({
+        isIssue: false,
+        isPullRequest: true,
+        isSingleAction: false,
+      });
+
+      await handler.internalUpdate(execution, 'y');
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 99, 't');
+      expect(mockUpdateDescription).toHaveBeenCalledWith('o', 'r', 99, expect.any(String), 't');
+    });
+
+    it('uses issueNumber when isPush', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('a'));
+      mockUpdateDescription.mockResolvedValue(undefined);
+      const execution = minimalExecution({
+        isIssue: false,
+        isPullRequest: false,
+        isPush: true,
+        issueNumber: 11,
+      });
+
+      await handler.internalUpdate(execution, 'b');
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 11, 't');
+      expect(mockUpdateDescription).toHaveBeenCalledWith('o', 'r', 11, expect.any(String), 't');
+    });
+
+    it('when isSingleAction and isIssue uses issue.number', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('c'));
+      mockUpdateDescription.mockResolvedValue(undefined);
+      const execution = minimalExecution({
+        isSingleAction: true,
+        isIssue: true,
+        issue: { number: 88 },
+      });
+
+      await handler.internalUpdate(execution, 'd');
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 88, 't');
+    });
+
+    it('when isSingleAction and isPullRequest uses pullRequest.number', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('e'));
+      mockUpdateDescription.mockResolvedValue(undefined);
+      const execution = minimalExecution({
+        isSingleAction: true,
+        isIssue: false,
+        isPullRequest: true,
+        pullRequest: { number: 77 },
+      });
+
+      await handler.internalUpdate(execution, 'f');
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 77, 't');
+    });
+
+    it('when isSingleAction and isPush uses issueNumber', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('g'));
+      mockUpdateDescription.mockResolvedValue(undefined);
+      const execution = minimalExecution({
+        isSingleAction: true,
+        isIssue: false,
+        isPullRequest: false,
+        isPush: true,
+        issueNumber: 33,
+      });
+
+      await handler.internalUpdate(execution, 'h');
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 33, 't');
+    });
+
+    it('when isSingleAction and not issue/PR/push uses singleAction.issue', async () => {
+      mockGetDescription.mockResolvedValue(descriptionWithBlock('i'));
+      mockUpdateDescription.mockResolvedValue(undefined);
+      const execution = minimalExecution({
+        isSingleAction: true,
+        isIssue: false,
+        isPullRequest: false,
+        isPush: false,
+        singleAction: { issue: 999, isIssue: false, isPullRequest: false, isPush: false },
+      });
+
+      await handler.internalUpdate(execution, 'j');
+
+      expect(mockGetDescription).toHaveBeenCalledWith('o', 'r', 999, 't');
+    });
+
+    it('returns undefined when execution is not issue, PR, push or single action', async () => {
+      const execution = minimalExecution({
+        isIssue: false,
+        isPullRequest: false,
+        isPush: false,
+        isSingleAction: false,
+      });
+
+      const result = await handler.internalUpdate(execution, 'content');
+
+      expect(mockGetDescription).not.toHaveBeenCalled();
+      expect(result).toBeUndefined();
+    });
+
+    it('returns undefined when updateContent returns undefined', async () => {
+      mockGetDescription.mockResolvedValue('only start