Conversation
Add 10 new test files covering main routes, auth, database, catalog, compose generator, exchange rates, and all 14 collectors. Update CI to report coverage via Codecov. Add codecov.yml with 90% target (orchestrator.py and worker_api.py excluded as they require docker SDK). 777 tests passing, 93% coverage on tracked code.
⚠️ Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 52 minutes and 32 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration — Configuration used: Path: .coderabbit.yaml | Review profile: CHILL | Plan: Pro | Run ID:
📒 Files selected for processing (8)
📝 Walkthrough

This PR adds comprehensive test coverage and CI/CD configuration. It introduces GitHub Actions workflow modifications for pytest execution with coverage reporting, a Codecov configuration file, and 13 new test modules covering authentication, catalog loading, collectors, Docker Compose generation, database operations, exchange rates, and API endpoints. A total of 5,008 lines of tests is added.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches: 🧪 Generate unit tests (beta)
```python
def test_tailscale_dns_allowed(self):
    from app.main import _validate_worker_url
    result = _validate_worker_url("http://worker.mango.ts.net:8081")
    assert "worker.mango.ts.net" in result
```
Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests. Thanks for integrating Codecov - we've got you covered ☂️
Add 55 tests covering specific uncovered lines across catalog, collectors, database, auth, fleet_key, and main modules. Update codecov.yml with full project/patch targets, ignore patterns, and comment layout.
Fix import sorting (I001), nested with statements (SIM117), unused variable (F841), and blind exception catch (B017).
Fix unused imports (F401), ambiguous variable name (E741), nested with statements (SIM117), and HTTPException test assertion.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/test.yml (1)
Line 17: ⚠️ Potential issue | 🟡 Minor

Python version mismatch with project guideline.

Workflow pins Python 3.14, but the coding guideline mandates Python 3.12 for the backend. Either intentionally bump the guideline or align CI back to 3.12; running tests on a different minor than the deployment target risks Codecov reporting coverage that doesn't match production behavior. As per coding guidelines: "Use FastAPI with Python 3.12 and async patterns for the backend framework".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test.yml at line 17, The GitHub Actions CI is pinned to python-version "3.14" but project guidelines require Python 3.12; update the workflow's python-version key in the test pipeline to "3.12" (in the .github/workflows/test.yml workflow where the python-version: "3.14" entry exists) so CI runs with the same Python minor as the backend; if the project intent is to move to 3.14 instead, coordinate and update the guideline instead of changing the workflow.
🧹 Nitpick comments (13)
tests/test_compose.py (1)
48-52: Redundant assignment.

`_mock_service(image="")` already sets `docker.image = ""`; the explicit `svc["docker"]["image"] = ""` on line 50 is dead.

🔧 Cleanup

```diff
 def test_no_image_returns_none(self):
     svc = _mock_service(image="")
-    svc["docker"]["image"] = ""
     result = compose_generator._service_to_compose(svc)
     assert result is None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_compose.py` around lines 48-52: the test test_no_image_returns_none contains a redundant explicit assignment `svc["docker"]["image"] = ""` because `_mock_service(image="")` already sets docker.image to an empty string; remove the explicit assignment line to clean up the test while keeping the call to `_mock_service(image="")` and the assertion that `compose_generator._service_to_compose(svc)` returns None.

tests/test_exchange_rates_extended.py (1)
19-58: Module-level state isn't isolated between tests.

`refresh()` mutates the package globals `_crypto_usd`, `_fiat_rates`, and `_last_fetch`. These tests don't snapshot/restore them, so order-dependent leakage into `tests/test_exchange_rates.py` (or vice versa) is possible — particularly `_last_fetch > 0` carrying over and masking a real failure later. Consider an autouse fixture that snapshots and restores the module state, similar to the try/finally pattern already used in `test_exchange_rates.py::TestToUsd`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_exchange_rates_extended.py` around lines 19-58: tests mutate module-level state (`_crypto_usd`, `_fiat_rates`, `_last_fetch`) in exchange_rates.refresh, which can leak between tests; add an autouse fixture in tests/test_exchange_rates_extended.py that snapshots these globals before each test and restores them after (try/finally) so tests like TestRefreshSuccess, TestRefresh_crypto_failure_still_fetches_fiat, and TestRefresh_total_failure run isolated and `_last_fetch` cannot carry over between tests.
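A minimal sketch of such an isolation helper for `tests/test_exchange_rates_extended.py`, using a stand-in namespace since the real `app.exchange_rates` module isn't shown here (attribute names taken from the comment above); in the suite the same body would live in an `@pytest.fixture(autouse=True)`:

```python
import types
from contextlib import contextmanager

# Stand-in for the real module; actual tests would operate on
# app.exchange_rates (module path assumed).
exchange_rates = types.SimpleNamespace(
    _crypto_usd={"BTC": 60000.0},
    _fiat_rates={"EUR": 1.08},
    _last_fetch=0.0,
)

@contextmanager
def isolated_rate_state():
    # Snapshot the module-level caches before the test...
    snapshot = (
        dict(exchange_rates._crypto_usd),
        dict(exchange_rates._fiat_rates),
        exchange_rates._last_fetch,
    )
    try:
        yield
    finally:
        # ...and restore them afterwards so mutations cannot leak
        # into later tests regardless of execution order.
        exchange_rates._crypto_usd = snapshot[0]
        exchange_rates._fiat_rates = snapshot[1]
        exchange_rates._last_fetch = snapshot[2]

with isolated_rate_state():
    exchange_rates._last_fetch = 12345.0  # simulate refresh() side effects
assert exchange_rates._last_fetch == 0.0  # state restored on exit
```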
tests/test_database.py (2)

174-178: Empty assertion test.

`test_delete_empty_keys_noop` only verifies the call doesn't raise — it's a pure coverage probe. Add an assertion that no rows were deleted (e.g., insert a key first, call with `[]`, verify the key still exists) to give it actual semantic weight.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_database.py` around lines 174-178: add a real assertion to test_delete_empty_keys_noop by inserting a known config row via the database API (e.g., using database.set_config or an insert helper), then call database.delete_config_keys([]) as currently done, and finally query the config (e.g., database.get_config or a direct select) to assert the inserted key still exists and no rows were deleted; update the async run closure to perform insert, delete with empty list, and the existence assertion so the test verifies behavior rather than only ensuring no exception.
1-445: Inconsistent async pattern across the test suite.

This module uses sync tests wrapping `asyncio.run(...)` while `tests/test_exchange_rates.py` uses `@pytest.mark.asyncio`. Both work, but the duplicated `async def run(): ...; asyncio.run(run())` boilerplate in every test bloats the file. Since `pytest-asyncio` is already a dependency, consider standardizing on `@pytest.mark.asyncio` to drop the inner-function pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_database.py` around lines 1-445: tests use sync wrappers calling asyncio.run(...) for each async test, causing repetitive boilerplate; replace these with native async pytest tests by converting each inner `async def run(): ...; asyncio.run(run())` pattern into top-level async test coroutines decorated with `@pytest.mark.asyncio` (or make the whole module async via a module-level marker), remove all asyncio.run(...) calls, and update the test methods (e.g., TestInitDb.test_creates_tables and all methods in TestEarnings, TestConfig, TestDeployments, TestUsers, TestUserPreferences, TestWorkers, TestHealthEvents, TestDataRetention) to `async def` so they run directly under pytest-asyncio.
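Side by side, the two styles look like this; the earnings helper is a stand-in, and with `pytest-asyncio` installed the second form replaces the first:

```python
import asyncio

async def fake_total_earnings() -> float:
    # Stand-in for an async database helper (assumed name).
    return 42.0

# Current pattern: each sync test defines an inner coroutine and
# drives it with asyncio.run().
def test_total_sync_style():
    async def run():
        assert await fake_total_earnings() == 42.0
    asyncio.run(run())

# Suggested pattern: the test itself is a coroutine. Under pytest-asyncio
# it would carry @pytest.mark.asyncio so pytest awaits it directly and the
# inner-function boilerplate disappears.
async def test_total_async_style():
    assert await fake_total_earnings() == 42.0

test_total_sync_style()
asyncio.run(test_total_async_style())
```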
tests/test_collectors_deep.py (2)

19-32: Minor: the default `json_data=None` pattern is fine, but the `or {}` coalesces falsy JSON payloads like `[]`, `0`, or `""` into `{}`.

Not triggered by current tests, but if a future test passes `json_data=[]` or `json_data=0` expecting the collector to see that exact value, `resp.json()` will unexpectedly return `{}`. Consider `json_data if json_data is not None else {}` to future-proof the helper (also applies to the identical helpers in `tests/test_collectors_extended.py` and `tests/test_coverage_gaps.py`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_collectors_deep.py` around lines 19-32: the helper _mock_response currently uses `json_data or {}`, which coerces falsy but valid payloads (e.g., [], 0, "") into {}; change the assignment in _mock_response to use an explicit None check, i.e., set resp.json.return_value to `json_data if json_data is not None else {}`. Apply the same change in the identical helper functions in tests/test_collectors_extended.py and tests/test_coverage_gaps.py to preserve intended falsy JSON values.
405-418: Redundant `text=` assignments in `_mock_response` calls.

`_mock_response(..., text="<p>no balance</p>", ...)` already sets `resp.text`; the subsequent `html_resp.text = "<p>no balance</p>"` on lines 408 and 424 is a no-op duplicate. Safe to drop. The same pattern appears in `tests/test_collectors_extended.py` around the PacketStream/Bytelixir HTML tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_collectors_deep.py` around lines 405-418: the test contains redundant manual assignments to the mock response body after calling the helper; remove the duplicate lines like `html_resp.text = "<p>no balance</p>"` since `_mock_response(..., text="...")` already sets `resp.text`. Update tests/test_collectors_deep.py (and similar spots in tests/test_collectors_extended.py) to only pass the text arg to `_mock_response` and delete the subsequent `html_resp.text = ...` assignments so the mocked responses are set in one place.

tests/test_collectors_extended.py (2)
137-151: Helper `httpx_cookies_mock` is used before it's defined in source order.

Line 137 references `httpx_cookies_mock` inside `TestEarnAppCollector.test_collect_403`, but the function isn't declared until line 147 (below that class). It works only because Python resolves free names at call time, but anyone reading top-to-bottom will trip over it. Move the helper up next to `_make_async_client`/`_mock_response`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_collectors_extended.py` around lines 137-151: move the helper function httpx_cookies_mock so it's defined before its first use in TestEarnAppCollector.test_collect_403; specifically, relocate httpx_cookies_mock above the test class (or at least above the test method), near the other test helpers like _make_async_client and _mock_response, so readers see helper functions in one place and the test no longer references a function declared later.
29-33: Already imported at top? Prefer a single top-level `import httpx` instead of re-importing inside the helper.

`_mock_response` does `import httpx as _httpx` on every call. `tests/test_collectors_deep.py` and `tests/test_coverage_gaps.py` already import `httpx` at module scope; matching that convention here would be cheaper and more consistent (ruff may also flag the inline import depending on config).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_collectors_extended.py` around lines 29-33: the helper currently does an inline `import httpx as _httpx` when setting resp.raise_for_status.side_effect; move that import to module scope by adding `import httpx` at the top of tests/test_collectors_extended.py and update the helper to reference httpx.HTTPStatusError instead of _httpx.HTTPStatusError so the inline import is removed and the test file matches the module-level import convention used by tests/test_collectors_deep.py and tests/test_coverage_gaps.py.

tests/test_coverage_gaps.py (1)
969-982: Nice targeted checks; consider also asserting the encrypted-prefix round-trip.

The `_is_secret_key` coverage here is solid. As a follow-up, a one-liner verifying `decrypt_value(encrypt_value("x"))` returns `"x"` would pin the happy path for the Fernet helpers alongside the invalid-ciphertext branch on line 971.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_coverage_gaps.py` around lines 969-982: add a one-line positive round-trip assertion to verify the Fernet helpers; call database.encrypt_value on a plain string (e.g., "x"), pass the output to database.decrypt_value, and assert the result equals the original string. Update test_decrypt_invalid_token or test_is_secret_key_various (e.g., add a new assert after the existing checks) to include `assert database.decrypt_value(database.encrypt_value("x")) == "x"` so the happy path for encrypt_value and decrypt_value is covered alongside the invalid-token case.

tests/test_auth_extended.py (2)
141-153: `test_known_default_ignored` silently depends on `"changeme"` being in `_KNOWN_DEFAULTS`.

If `_KNOWN_DEFAULTS` ever drops `"changeme"` (or renames the sentinel), this test passes by generating a new key anyway — but for the wrong reason, since the env var would now be accepted. Assert the precondition explicitly:

```diff
+ assert "changeme" in auth._KNOWN_DEFAULTS
  result = auth._resolve_secret_key()
  assert result != "changeme"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_auth_extended.py` around lines 141-153: the test test_known_default_ignored relies on "changeme" being treated as a known default by auth._resolve_secret_key but doesn't assert that precondition; add an explicit assertion at the start of the test that "changeme" is present in auth._KNOWN_DEFAULTS (or otherwise check the sentinel exists) so the test fails loudly if the sentinel is removed/renamed, then proceed with the existing patched env/CASHPILOT_SECRET_KEY behavior to verify the function generates a new key.
74-77: Weak assertion — the `or` makes this effectively tautological.

`args[1]["httponly"] is True or args[0][1] == "test-token"` passes as long as either is true, and the real implementation sets both, so this never exercises `httponly` meaningfully. Split into two explicit asserts:

```diff
- args = resp.set_cookie.call_args
- assert args[1]["httponly"] is True or args[0][1] == "test-token"
- assert result is resp
+ args = resp.set_cookie.call_args
+ assert args.args[1] == "test-token"
+ assert args.kwargs["httponly"] is True
+ assert result is resp
```

The same weakening appears on lines 87 and 94 (`call_kwargs[1].get("secure") is True or call_kwargs.kwargs.get("secure") is True` — `call_args[1]` and `call_args.kwargs` are the same dict, so one branch is redundant).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_auth_extended.py` around lines 74-77: the test uses an `or` in assertions that makes them ineffective; replace the combined assertion with explicit checks: after resp.set_cookie.assert_called_once() and args = resp.set_cookie.call_args, assert args[1].get("httponly") is True and, separately, assert args[0][1] == "test-token". Do the same for the other weak checks that use call_args[1] vs call_args.kwargs — replace the `or` expressions with two explicit asserts, e.g., assert call_args.kwargs.get("secure") is True, and separately assert the expected cookie value where applicable, removing the redundant branch.
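Concretely, with a mocked response the split assertions exercise each property independently; the `set_cookie` argument shapes here are assumptions for illustration:

```python
from unittest.mock import MagicMock

resp = MagicMock()
# Simulate what the auth helper is expected to do when issuing the cookie.
resp.set_cookie("session", "test-token", httponly=True, secure=True)

args = resp.set_cookie.call_args
# Two independent assertions instead of one or-joined expression:
# each can now fail on its own and point at the actual regression.
assert args.args[1] == "test-token"
assert args.kwargs["httponly"] is True
assert args.kwargs["secure"] is True
```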
tests/test_main_deploy_routes.py (2)

449-477: Verify `slug` matching against catalog services — test uses raw `"honeygain"`.

`catalog.get_service` is patched to always return `svc` regardless of slug, so the assertion `data[0]["slug"] == "honeygain"` is really asserting the container's slug, not the catalog's. That's fine for this test's intent, but be aware this won't catch regressions where the aggregation keys by catalog slug vs. container slug. A follow-up test that returns `None` for unknown slugs would complement this.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_main_deploy_routes.py` around lines 449-477: the test test_aggregation_with_workers_and_earnings currently patches app.main.catalog.get_service to always return svc, which masks whether aggregation uses the catalog slug vs. the container slug; update the patch to behave like the real catalog by making app.main.catalog.get_service return None for unknown slugs (or use a side_effect that checks the passed slug and returns svc only for "honeygain"), and/or add an additional assertion/test case that patches get_service to return None to ensure aggregation keys are taken from the catalog when available; locate the patch referencing "app.main.catalog.get_service" and change its return behavior accordingly.
508-513: Brittle: assumes `cpu` is returned as a `"3.0%"`-style string.

`float(data[0]["cpu"].rstrip("%"))` will blow up with `AttributeError` if the endpoint ever switches to returning a numeric value (which is a common, reasonable change). Consider asserting against the shape the API contract actually guarantees, e.g. `assert data[0]["cpu"] in ("3.0%", "3.00%")`, or parsing defensively.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_main_deploy_routes.py` around lines 508-513: the test assumes data[0]["cpu"] is a "%"-suffixed string; change the assertion in tests/test_main_deploy_routes.py to parse defensively: read cpu = data[0]["cpu"], if it's a string strip a trailing "%" then float() it, otherwise convert the numeric value to float, then assert the resulting float equals 3.0; update the assertions around resp, data and data[0]["cpu"] accordingly so the test passes whether the API returns "3.0%" or a numeric 3.0.
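The defensive parse could be a tiny helper; a sketch, since the endpoint's exact response contract is an assumption here:

```python
def cpu_as_float(value) -> float:
    # Accept either a "3.0%"-style string or a plain number, so the
    # test survives a change in the endpoint's serialization.
    if isinstance(value, str):
        return float(value.rstrip("%"))
    return float(value)

assert cpu_as_float("3.0%") == 3.0
assert cpu_as_float(3.0) == 3.0
```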
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/test.yml:
- Around line 32-34: The workflow uses the nonexistent input name "file" for
codecov-action@v5, so update the action input to use the correct "files" input;
locate the codecov step referencing codecov-action@v5 (the line containing
"file: ./coverage.xml") and replace that input with "files: ./coverage.xml" (or
a files array/glob if multiple paths) so the explicit coverage path is actually
uploaded.
In `@tests/test_collectors_extended.py`:
- Around line 505-518: The test_collect_html_scrape_success assertion uses a
loose tolerance and expects 0.0403 even though the HTML '<span>$</span>0.04<span
class="text-2xs">025</span>' is parsed by BytelixirCollector as "0.04" + "025" →
0.04025; update the test (test_collect_html_scrape_success) to assert the
precise parsed value (e.g., pytest.approx(0.04025, abs=1e-6) or an exact
equality if preferred) so that the expectation matches the actual concatenation
result and will fail on real regressions.
In `@tests/test_coverage_gaps.py`:
- Around line 945-958: The test test_grass_epoch_fallback_no_devices currently
only asserts result.balance == 0.0 and will not catch cases where the collector
returned an error alongside a zero balance; update the assertion to also
validate the error state for GrassCollector by asserting result.error is None
(or assert the exact expected error string if the no-devices path should set an
error), so change the test to check both result.balance and result.error after
calling c.collect().
- Around line 302-309: The fixture `client` is mutating
`app.router.lifespan_context` to `_noop_lifespan` without restoring it, causing
global state leakage; update the fixture to save the original `orig =
app.router.lifespan_context` before assigning `_noop_lifespan` and restore it in
a finally block after the `TestClient` context (or move the override into a
`conftest.py` scoped fixture so it’s applied intentionally for the whole test
session). Ensure you reference `client`, `app.router.lifespan_context`, and
`_noop_lifespan` when making the change so the original lifespan is reliably
restored after the fixture runs.
- Around line 115-154: The tests mutate module-level caches catalog._services
and catalog._by_slug and do not restore them, causing test pollution; update the
tests (test_get_services_lazy_loads, test_get_service_lazy_loads,
TestCatalogSighup.test_sighup_handler_reloads) to snapshot-and-restore or
replace those dicts for the duration of each test (e.g., accept and use the
provided _catalog_cache fixture or use patch.object to temporarily set
catalog._services and catalog._by_slug to new empty dicts), run
get_services/get_service/_sighup_handler while the temporary dicts are in place,
and restore the original dict objects after the test so the module-level caches
are not left populated with synthetic entries.
In `@tests/test_main_deploy_routes.py`:
- Around line 22-26: The module-level override of app.router.lifespan_context
with _noop_lifespan leaks across tests; instead create a fixture that
temporarily replaces app.router.lifespan_context and restores it after the test.
Implement a pytest async fixture (e.g., noop_lifespan_fixture) that saves
original = app.router.lifespan_context, sets app.router.lifespan_context =
_noop_lifespan, yields control to the test, then in teardown restores
app.router.lifespan_context = original; update tests to use that fixture (or use
FastAPI dependency overrides) so the replacement is scoped per-test rather than
global.
In `@tests/test_main_routes.py`:
- Around line 1021-1024: Update the test_tailscale_dns_allowed to assert the
exact output from the validator instead of a substring: call
_validate_worker_url("http://worker.mango.ts.net:8081") and assert result ==
"http://worker.mango.ts.net:8081" so the test verifies _validate_worker_url's
precise return value rather than merely containing the substring; reference the
test function test_tailscale_dns_allowed and the validator function
_validate_worker_url when making the change.
- Around line 21-27: The module-level replacement of app.router.lifespan_context
with _noop_lifespan leaks across test modules; change this to a fixture that
saves the original, sets app.router.lifespan_context = _noop_lifespan for the
duration of the test, yields, and then restores the original value (or use
monkeypatch to set and restore) and move that fixture into the test module or
conftest.py; also remove the standalone client fixture that depended on the
global swap and instead depend on the new lifespan fixture so the real lifespan
is not mutated globally.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 73a057eb-953b-458e-a811-d11b79352c6c
📒 Files selected for processing (13)
- .github/workflows/test.yml
- codecov.yml
- tests/test_auth_extended.py
- tests/test_catalog_loader.py
- tests/test_collectors_deep.py
- tests/test_collectors_extended.py
- tests/test_compose.py
- tests/test_coverage_gaps.py
- tests/test_database.py
- tests/test_exchange_rates.py
- tests/test_exchange_rates_extended.py
- tests/test_main_deploy_routes.py
- tests/test_main_routes.py
```yaml
      with:
        file: ./coverage.xml
        token: ${{ secrets.CODECOV_TOKEN }}
```
🧩 Analysis chain
🌐 Web query:
codecov/codecov-action v5 inputs file vs files
💡 Result:
In codecov/codecov-action v5, there is no `file` input; only `files` is available. The singular `file` input was deprecated in favor of the plural `files` starting from v4/v5. The `files` input accepts a comma-separated list of explicit files (or globs, including exclusions with `!`) to upload. These are added to any coverage files found via search in the specified `directory` (defaults to the current working directory). To upload ONLY the specified files (without searching for others), combine with `disable_search: true`. Examples from the official docs: `files: ./coverage1.xml,./coverage2.xml,!./cache` or `files: ./coverage1.xml,./coverage2.xml` with `disable_search: true`. Note: even with `files`, the action may still find and upload other files unless `disable_search` is set; search runs by default, but specified files take precedence. This behavior is confirmed in the v5 action.yml and README (e.g., v5.5.2).
Citations:
- 1: https://github.com/codecov/codecov-action/tree/v5.5.0
- 2: https://github.com/codecov/codecov-action/blob/v5.5.2/action.yml
- 3: https://github.com/codecov/codecov-action
- 4: https://github.com/codecov/codecov-action/tree/v5.4.1
- 5: https://github.com/codecov/codecov-action/tree/v4.4.0
- 6: https://github.com/codecov/codecov-action/blob/main/README.md
- 7: https://github.com/codecov/codecov-action/blob/main/action.yml
Change file to files for codecov-action@v5.
The file input doesn't exist in codecov-action v5 (only files is valid). This causes the input to be silently ignored, leaving Codecov to fall back to auto-discovery instead of uploading the explicit path you specified.
Fix
```diff
       with:
-        file: ./coverage.xml
+        files: ./coverage.xml
         token: ${{ secrets.CODECOV_TOKEN }}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      with:
        files: ./coverage.xml
        token: ${{ secrets.CODECOV_TOKEN }}
```
🧰 Tools
🪛 actionlint (1.7.12)
[error] 33-33: input "file" is not defined in action "codecov/codecov-action@v5". available inputs are "base_sha", "binary", "codecov_yml_path", "commit_parent", "directory", "disable_file_fixes", "disable_safe_directory", "disable_search", "disable_telem", "dry_run", "env_vars", "exclude", "fail_ci_if_error", "files", "flags", "force", "gcov_args", "gcov_executable", "gcov_ignore", "gcov_include", "git_service", "handle_no_reports_found", "job_code", "name", "network_filter", "network_prefix", "os", "override_branch", "override_build", "override_build_url", "override_commit", "override_pr", "plugins", "recurse_submodules", "report_code", "report_type", "root_dir", "run_command", "skip_validation", "slug", "swift_project", "token", "url", "use_legacy_upload_endpoint", "use_oidc", "use_pypi", "verbose", "version", "working-directory"
(action)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/test.yml around lines 32 - 34, The workflow uses the
nonexistent input name "file" for codecov-action@v5, so update the action input
to use the correct "files" input; locate the codecov step referencing
codecov-action@v5 (the line containing "file: ./coverage.xml") and replace that
input with "files: ./coverage.xml" (or a files array/glob if multiple paths) so
the explicit coverage path is actually uploaded.
```python
def test_collect_html_scrape_success(self):
    from app.collectors.bytelixir import BytelixirCollector

    html = '<span>$</span>0.04<span class="text-2xs">025</span>'
    resp = _mock_response(200, text=html, url="https://dash.bytelixir.com/en")
    resp.text = html
    client = _make_async_client()
    client.get.return_value = resp

    with patch.object(BytelixirCollector, "_make_client", return_value=client):
        c = BytelixirCollector(session_cookie="valid-sess")
        result = asyncio.run(c.collect())
        assert result.error is None
        assert result.balance == pytest.approx(0.0403, abs=0.001)
```
Double-check the expected balance for the split-span parse.
Given HTML '<span>$</span>0.04<span class="text-2xs">025</span>', the parser concatenates "0.04" + "025" → 0.04025. The assertion pytest.approx(0.0403, abs=0.001) passes today because the tolerance is loose, but the intent reads like you expected 0.0403. If the true value is 0.04025, tighten the expectation so a real regression fails:
```diff
-        assert result.balance == pytest.approx(0.0403, abs=0.001)
+        assert result.balance == pytest.approx(0.04025, abs=1e-6)
```
📝 Committable suggestion
```python
def test_collect_html_scrape_success(self):
    from app.collectors.bytelixir import BytelixirCollector

    html = '<span>$</span>0.04<span class="text-2xs">025</span>'
    resp = _mock_response(200, text=html, url="https://dash.bytelixir.com/en")
    resp.text = html
    client = _make_async_client()
    client.get.return_value = resp

    with patch.object(BytelixirCollector, "_make_client", return_value=client):
        c = BytelixirCollector(session_cookie="valid-sess")
        result = asyncio.run(c.collect())
        assert result.error is None
        assert result.balance == pytest.approx(0.04025, abs=1e-6)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_collectors_extended.py` around lines 505 - 518, The
test_collect_html_scrape_success assertion uses a loose tolerance and expects
0.0403 even though the HTML '<span>$</span>0.04<span
class="text-2xs">025</span>' is parsed by BytelixirCollector as "0.04" + "025" →
0.04025; update the test (test_collect_html_scrape_success) to assert the
precise parsed value (e.g., pytest.approx(0.04025, abs=1e-6) or an exact
equality if preferred) so that the expectation matches the actual concatenation
result and will fail on real regressions.
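For reference, the concatenation the comment describes reduces to this (variable names are illustrative; the span values come from the sample HTML above):

```python
# The dashboard splits "$0.04025" across spans: "$", "0.04", and a
# smaller-font "025". The scraper joins the two numeric parts before parsing.
whole, small_tail = "0.04", "025"
balance = float(whole + small_tail)
```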
```python
def test_get_services_lazy_loads(self, tmp_path):
    svc_dir = tmp_path / "services" / "bandwidth"
    svc_dir.mkdir(parents=True)
    (svc_dir / "lazy.yml").write_text(_make_service_yaml("lazy"))

    # Clear cache
    catalog._services.clear()
    catalog._by_slug.clear()

    with patch.object(catalog, "SERVICES_DIR", tmp_path / "services"):
        services = catalog.get_services()
        assert len(services) >= 1

def test_get_service_lazy_loads(self, tmp_path):
    svc_dir = tmp_path / "services" / "bandwidth"
    svc_dir.mkdir(parents=True)
    (svc_dir / "lazysvc.yml").write_text(_make_service_yaml("lazysvc"))

    # Clear cache
    catalog._services.clear()
    catalog._by_slug.clear()

    with patch.object(catalog, "SERVICES_DIR", tmp_path / "services"):
        svc = catalog.get_service("lazysvc")
        assert svc is not None
        assert svc["slug"] == "lazysvc"


class TestCatalogSighup:
    """Cover lines 121-128: SIGHUP handler and registration."""

    def test_sighup_handler_reloads(self, tmp_path):
        svc_dir = tmp_path / "services" / "bandwidth"
        svc_dir.mkdir(parents=True)
        (svc_dir / "svc.yml").write_text(_make_service_yaml("svc"))

        with patch.object(catalog, "SERVICES_DIR", tmp_path / "services"):
            catalog._sighup_handler(0, None)
            # After reload, service should be in cache
            assert len(catalog._services) >= 1
```
Test pollution: clears catalog._services/_by_slug without restoration.
catalog._services.clear() (and _by_slug.clear()) mutates module-level caches. After the with patch.object(catalog, "SERVICES_DIR", ...) block exits and get_services() / _sighup_handler has populated the cache from tmp_path, the caches still contain the synthetic lazy/lazysvc/svc entries. Any subsequent test (in this file or others) that trusts the real on-disk catalog will now see stale/test data, and ordering determines whether it passes.
Use a fixture that snapshots and restores the caches, or patch the dicts themselves:
```python
@pytest.fixture
def _catalog_cache():
    services = catalog._services.copy()
    by_slug = catalog._by_slug.copy()
    catalog._services.clear()
    catalog._by_slug.clear()
    try:
        yield
    finally:
        catalog._services.clear(); catalog._services.extend(services)
        catalog._by_slug.clear(); catalog._by_slug.update(by_slug)
```

and take `_catalog_cache` as a parameter on each of these tests (also applies to TestCatalogSighup.test_sighup_handler_reloads).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_coverage_gaps.py` around lines 115 - 154, The tests mutate
module-level caches catalog._services and catalog._by_slug and do not restore
them, causing test pollution; update the tests (test_get_services_lazy_loads,
test_get_service_lazy_loads, TestCatalogSighup.test_sighup_handler_reloads) to
snapshot-and-restore or replace those dicts for the duration of each test (e.g.,
accept and use the provided _catalog_cache fixture or use patch.object to
temporarily set catalog._services and catalog._by_slug to new empty dicts), run
get_services/get_service/_sighup_handler while the temporary dicts are in place,
and restore the original dict objects after the test so the module-level caches
are not left populated with synthetic entries.
```python
@pytest.fixture
def client():
    from app.main import app
    app.router.lifespan_context = _noop_lifespan
    from fastapi.testclient import TestClient
    with TestClient(app, raise_server_exceptions=False) as c:
        yield c
```
Same lifespan_context override here as in test_main_deploy_routes.py — assignment is scoped to the fixture but never restored.
Line 305 replaces app.router.lifespan_context per-fixture but doesn't save/restore the original, so it stays as _noop_lifespan for the rest of the pytest session after the first use. Combined with the identical module-level override in tests/test_main_deploy_routes.py, this makes ordering matter for anything that later imports app.main.app expecting a real lifespan. Restore in a try/finally or (better) move to conftest.py.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_coverage_gaps.py` around lines 302 - 309, The fixture `client` is
mutating `app.router.lifespan_context` to `_noop_lifespan` without restoring it,
causing global state leakage; update the fixture to save the original `orig =
app.router.lifespan_context` before assigning `_noop_lifespan` and restore it in
a finally block after the `TestClient` context (or move the override into a
`conftest.py` scoped fixture so it’s applied intentionally for the whole test
session). Ensure you reference `client`, `app.router.lifespan_context`, and
`_noop_lifespan` when making the change so the original lifespan is reliably
restored after the fixture runs.
```python
def test_grass_epoch_fallback_no_devices(self):
    """Cover grass.py lines 100-101, 107-108, 117: active epoch with empty devices."""
    from app.collectors.grass import GrassCollector

    user_resp = _mock_response(200, {"result": {"data": {"totalPoints": 0}}})
    devices_resp = _mock_response(200, {"result": {"data": []}})

    client = _make_async_client()
    client.get.side_effect = [user_resp, devices_resp]

    with patch("app.collectors.grass.httpx.AsyncClient", return_value=client):
        c = GrassCollector(access_token="test-token")
        result = asyncio.run(c.collect())
        assert result.balance == 0.0
```
test_grass_epoch_fallback_no_devices — good edge case, but the assertion result.balance == 0.0 won't catch an error result.
If the collector sets balance=0.0 with error="...", the test still passes. Add assert result.error is None if the zero-devices branch is supposed to be a clean zero, or assert the specific error message if not.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_coverage_gaps.py` around lines 945 - 958, The test
test_grass_epoch_fallback_no_devices currently only asserts result.balance ==
0.0 and will not catch cases where the collector returned an error alongside a
zero balance; update the assertion to also validate the error state for
GrassCollector by asserting result.error is None (or assert the exact expected
error string if the no-devices path should set an error), so change the test to
check both result.balance and result.error after calling c.collect().
```python
@asynccontextmanager
async def _noop_lifespan(a):
    yield


app.router.lifespan_context = _noop_lifespan
```
Module-level mutation of app.router.lifespan_context leaks into the rest of the test suite.
Because app is a singleton imported from app.main, overwriting lifespan_context here replaces the real lifespan for every subsequent test module that imports the same app. Test ordering will decide whether those later tests see the real or the no-op lifespan, which is a classic flakiness source. Prefer scoping the override to a fixture (and restoring it) or using FastAPI dependency overrides.
🧪 Suggested scoping
```diff
-@asynccontextmanager
-async def _noop_lifespan(a):
-    yield
-
-app.router.lifespan_context = _noop_lifespan
-
-
 def _owner_user():
     return {"uid": 1, "u": "admin", "r": "owner"}
@@
 @pytest.fixture
 def client():
-    with TestClient(app, raise_server_exceptions=False) as c:
-        yield c
+    @asynccontextmanager
+    async def _noop_lifespan(_a):
+        yield
+    original = app.router.lifespan_context
+    app.router.lifespan_context = _noop_lifespan
+    try:
+        with TestClient(app, raise_server_exceptions=False) as c:
+            yield c
+    finally:
+        app.router.lifespan_context = original
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_main_deploy_routes.py` around lines 22 - 26, The module-level
override of app.router.lifespan_context with _noop_lifespan leaks across tests;
instead create a fixture that temporarily replaces app.router.lifespan_context
and restores it after the test. Implement a pytest async fixture (e.g.,
noop_lifespan_fixture) that saves original = app.router.lifespan_context, sets
app.router.lifespan_context = _noop_lifespan, yields control to the test, then
in teardown restores app.router.lifespan_context = original; update tests to use
that fixture (or use FastAPI dependency overrides) so the replacement is scoped
per-test rather than global.
```python
# Replace the real lifespan with a no-op for tests
@asynccontextmanager
async def _noop_lifespan(a):
    yield


app.router.lifespan_context = _noop_lifespan
```
Module-level lifespan mutation leaks across the entire test session.
Setting app.router.lifespan_context = _noop_lifespan at import time permanently replaces the app's real lifespan for every subsequent test module that imports app.main.app (e.g., tests/test_main_deploy_routes.py, tests/test_coverage_gaps.py). Test ordering then silently determines whether other suites get the real or the no-op lifespan — a classic flakiness vector.
Move the swap into a fixture (or conftest.py) that saves and restores the original.
🔧 Suggested fix
```diff
-# Replace the real lifespan with a no-op for tests
-@asynccontextmanager
-async def _noop_lifespan(a):
-    yield
-
-
-app.router.lifespan_context = _noop_lifespan
+# Replace the real lifespan with a no-op for tests
+@asynccontextmanager
+async def _noop_lifespan(a):
+    yield
+
+
+@pytest.fixture
+def client():
+    """TestClient with no-op lifespan to avoid scheduler/DB issues."""
+    original = app.router.lifespan_context
+    app.router.lifespan_context = _noop_lifespan
+    try:
+        with TestClient(app, raise_server_exceptions=False) as c:
+            yield c
+    finally:
+        app.router.lifespan_context = original
```

(and remove the standalone client fixture below)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_main_routes.py` around lines 21 - 27, The module-level replacement
of app.router.lifespan_context with _noop_lifespan leaks across test modules;
change this to a fixture that saves the original, sets
app.router.lifespan_context = _noop_lifespan for the duration of the test,
yields, and then restores the original value (or use monkeypatch to set and
restore) and move that fixture into the test module or conftest.py; also remove
the standalone client fixture that depended on the global swap and instead
depend on the new lifespan fixture so the real lifespan is not mutated globally.
```python
def test_tailscale_dns_allowed(self):
    from app.main import _validate_worker_url
    result = _validate_worker_url("http://worker.mango.ts.net:8081")
    assert "worker.mango.ts.net" in result
```
Test asserts only substring presence.
assert "worker.mango.ts.net" in result passes for any URL that contains the substring anywhere (e.g., http://evil.com/worker.mango.ts.net). Tighten to assert result == "http://worker.mango.ts.net:8081" so the test actually verifies the validator's output rather than mirroring the same weak substring check CodeQL flagged in production code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_main_routes.py` around lines 1021 - 1024, Update the
test_tailscale_dns_allowed to assert the exact output from the validator instead
of a substring: call _validate_worker_url("http://worker.mango.ts.net:8081") and
assert result == "http://worker.mango.ts.net:8081" so the test verifies
_validate_worker_url's precise return value rather than merely containing the
substring; reference the test function test_tailscale_dns_allowed and the
validator function _validate_worker_url when making the change.
Summary
- tests/test_coverage_gaps.py covering specific uncovered lines across catalog, collectors, database, auth, fleet_key, and main modules
- codecov.yml with full project/patch targets (90%), ignore patterns for untestable modules (orchestrator, worker_api), and comment layout config

Test plan
Summary by CodeRabbit
Tests
Chores