Conversation
Pull request overview
This PR hardens the fixed-mask (“pixel masking”) workflow by adding deterministic mask capture/scoring + provenance, exporting mask metadata alongside analysis outputs, and providing repeatable review tooling for both raw-video reruns and exported real-video bundles.
Changes:
- Added mask scoring/consensus utilities plus per-ROI provenance (`MaskCaptureMetadata`) and surfaced that provenance in the UI and exports.
- Updated workers/UI to use signal-based mask capture (global/per-ROI) and added a metadata sidecar export (`*_analysis_metadata.json`).
- Added review tooling (`tools/run_mask_review.py`, `tools/run_real_video_review.py`) and tests/docs for the new workflow.
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `tools/run_real_video_review.py` | New script to bundle/copy exported artifacts and validate fixed-mask metadata/confidence. |
| `tools/run_mask_review.py` | New script to auto-capture masks, run analysis from raw videos, and emit overlay images/review summaries. |
| `tools/real_video_review_manifest.example.json` | Example manifest for exported-run bundling/review. |
| `tools/mask_review_manifest.example.json` | Example manifest for raw-video rerun mask review. |
| `tests/unit/test_real_video_review.py` | Unit coverage for real-video review bundle generation. |
| `tests/unit/test_export_csv_exporter.py` | Updates expectations to include the new metadata sidecar in exports. |
| `tests/unit/test_analysis_masking.py` | Unit coverage for the new masking primitives (signal mask, candidate scoring, consensus warnings). |
| `tests/ui/test_mask_hardening.py` | UI test ensuring parameter changes invalidate fixed masks. |
| `tests/integration/test_workers.py` | Integration expectation updates for worker outputs (sources/masks/metadata). |
| `tests/conftest.py` | Test factory now initializes fixed-mask state fields on `VideoAnalyzer`. |
| `ecl_analysis/workers.py` | Reworked auto-capture workers to use candidate scoring plus consensus masks and return per-ROI provenance. |
| `ecl_analysis/video_analyzer.py` | UI: mask-quality display, settings persistence for mask params, new overlay semantics, export metadata snapshot. |
| `ecl_analysis/export/csv_exporter.py` | Writes a `*_analysis_metadata.json` sidecar containing run settings and mask provenance. |
| `ecl_analysis/constants.py` | Adds a default noise-floor threshold constant. |
| `ecl_analysis/analysis/models.py` | Adds `MaskCaptureMetadata` plus new fields on request/result for mask and analysis metadata. |
| `ecl_analysis/analysis/masking.py` | New module for building/cleaning/scoring masks and computing deterministic consensus and warnings. |
| `ecl_analysis/analysis/__init__.py` | Re-exports the new masking APIs and metadata type. |
| `README.md` | Updated guidance for the mask review workflow, overlay meaning, and review tooling. |
```python
    roi_overlay[adaptive_only_mask] = roi_overlay[adaptive_only_mask] * 0.55 + np.array([255, 0, 0]) * 0.45
    roi_overlay[overlap_mask] = roi_overlay[overlap_mask] * 0.45 + np.array([255, 0, 255]) * 0.55
else:
    roi_overlay[adaptive_mask] = roi_overlay[adaptive_mask] * 0.7 + np.array([0, 0, 255]) * 0.3
```
The overlay tooltip/README describe adaptive-only pixels as blue, but when there is no fixed mask the code tints adaptive_mask red. This makes the visualization inconsistent depending on whether a fixed mask exists. Consider tinting adaptive_mask blue in the fixed_mask is None branch, or updating the tooltip text to match the actual color semantics.
Suggested change:
```diff
-roi_overlay[adaptive_mask] = roi_overlay[adaptive_mask] * 0.7 + np.array([0, 0, 255]) * 0.3
+roi_overlay[adaptive_mask] = roi_overlay[adaptive_mask] * 0.7 + np.array([255, 0, 0]) * 0.3
```
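Assuming OpenCV-style BGR channel order (which the suggested `[255, 0, 0]` blue tint implies), the blend in the suggested line can be sketched in isolation; the mask and pixel values here are illustrative, not from the PR:

```python
import numpy as np

roi_overlay = np.full((2, 2, 3), 100.0)      # small gray ROI patch
adaptive_mask = np.array([[True, False],
                          [False, True]])    # which pixels the adaptive mask covers
blue_bgr = np.array([255, 0, 0])             # BGR: blue channel first

# 70/30 blend toward blue, exactly as in the suggested replacement line.
roi_overlay[adaptive_mask] = roi_overlay[adaptive_mask] * 0.7 + blue_bgr * 0.3

print(roi_overlay[0, 0])  # masked pixel: [146.5, 70.0, 70.0]
print(roi_overlay[0, 1])  # unmasked pixel unchanged: [100.0, 100.0, 100.0]
```

Keeping the same blend constants in both branches means only the tint color needs to change, which keeps the fix minimal.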
```python
background_value = compute_background_brightness(
    frame=frame,
    rects=rects,
    background_roi_idx=background_roi_idx,
    background_percentile=90.0,
    frame_l_star=l_star_frame,
)
```
compute_background_brightness is called with a hard-coded background_percentile=90.0, but each review case supports overriding background_percentile and the analysis run uses that value. This makes overlay masks potentially disagree with the capture/analysis settings. Pass the case's background_percentile through _save_overlay_images/_overlay_masks and use it here.
```python
fixed_mask = candidate.astype(bool)
# ...
if fixed_mask is None:
    roi[adaptive_mask] = roi[adaptive_mask] * 0.7 + np.array([0, 0, 255]) * 0.3
```
When fixed_mask is missing, the overlay tints the adaptive mask red. Elsewhere (and in the UI tooltip/README) adaptive-only pixels are described as blue. To keep the review overlays consistent with the UI semantics, consider tinting adaptive_mask blue here (or adjusting the documented color mapping).
Suggested change:
```diff
-roi[adaptive_mask] = roi[adaptive_mask] * 0.7 + np.array([0, 0, 255]) * 0.3
+roi[adaptive_mask] = roi[adaptive_mask] * 0.7 + np.array([255, 0, 0]) * 0.3
```
```diff
 def run(self) -> None:
     req = self._request
-    frame_indices = list(range(req.start_frame, req.end_frame + 1, max(1, req.step)))
+    frame_indices = list(range(req.start_frame, req.end_frame + 1))
```
MaskScanRequest includes a step field, but frame_indices is built with range(req.start_frame, req.end_frame + 1) so the scan always iterates every frame and ignores req.step. Either honor req.step here (e.g., range(..., max(1, req.step))) or remove the parameter to avoid a misleading API and unintended performance regressions for long videos.
Suggested change:
```diff
-frame_indices = list(range(req.start_frame, req.end_frame + 1))
+step = max(1, req.step)
+frame_indices = list(range(req.start_frame, req.end_frame + 1, step))
```
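The clamp in the suggestion can be sketched as a standalone helper (the `build_frame_indices` name is illustrative; only the `range` arithmetic comes from the suggested change):

```python
def build_frame_indices(start_frame, end_frame, step):
    # Clamp non-positive steps to 1 so a bad manifest value cannot make
    # range() raise or produce an empty scan.
    safe_step = max(1, step)
    return list(range(start_frame, end_frame + 1, safe_step))


print(build_frame_indices(0, 10, 3))  # [0, 3, 6, 9]
print(len(build_frame_indices(0, 10, 0)))  # step clamped to 1 -> 11 frames
```

With the clamp in place, a manifest that omits or zeroes `step` degrades to a full scan instead of an error, while `step > 1` restores the intended frame skipping.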
```diff
     return
 # ...
-frame_indices = list(range(req.start_frame, req.end_frame + 1, max(1, req.step)))
+frame_indices = list(range(req.start_frame, req.end_frame + 1))
```
MaskScanRequest.step is currently ignored because frame_indices iterates every frame (range(req.start_frame, req.end_frame + 1)). This makes the worker API misleading and can significantly increase scan time for long videos/manifests. Consider applying step (with max(1, req.step)) or dropping it from the request if full-frame scans are required.
Suggested change:
```diff
-frame_indices = list(range(req.start_frame, req.end_frame + 1))
+step = req.step if req.step and req.step > 0 else 1
+frame_indices = list(range(req.start_frame, req.end_frame + 1, step))
```
```python
top_global_frames: List[tuple[float, int]] = []
```
This new type annotation uses the built-in tuple[...] generic (List[tuple[float, int]]). The rest of the codebase appears to target Python 3.7 and consistently uses typing.Tuple (e.g., List[Tuple[float, int]]), which avoids issues if annotations are ever evaluated via typing.get_type_hints on older runtimes. Consider switching this to Tuple[float, int] and importing Tuple from typing for consistency.
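A minimal sketch of the 3.7-compatible style the comment suggests (the function and its values are illustrative): built-in `tuple[...]` subscripting only works at runtime from Python 3.9, whereas `typing.Tuple` is safe on 3.7+ and survives `get_type_hints`.

```python
from typing import List, Tuple, get_type_hints


def collect_top_frames() -> List[Tuple[float, int]]:
    # typing.Tuple instead of the built-in tuple[...] generic, matching
    # the rest of the codebase's 3.7-era annotation style.
    top_global_frames: List[Tuple[float, int]] = [(0.93, 12), (0.88, 40)]
    return top_global_frames


# Evaluating the annotations at runtime works on 3.7+ with this spelling.
hints = get_type_hints(collect_top_frames)
print(collect_top_frames())  # [(0.93, 12), (0.88, 40)]
```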
Summary
Testing