
Conversation

@lacoak21
Contributor

Change Summary

Overview

Joey is padding incomplete groups with fillvals instead of dropping them like we are. This is causing a mismatch along the epoch dimension. This PR updates our code to do the same.
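
For context, a minimal sketch of what fill-value padding for an incomplete group might look like. The function name, shapes, and dict layout are assumptions for illustration, not the actual `codice_l1a_de.py` implementation; the 65535 fill value matches the value asserted in the new test.

```python
import numpy as np

# Hypothetical constants; 65535 matches the fill value used in the test below.
FILLVAL = 65535
NUM_PRIORITIES = 8


def pad_incomplete_group(group: dict, n_events: int) -> np.ndarray:
    """Pad a priority group so every priority 0..NUM_PRIORITIES-1 has a row.

    ``group`` maps priority number -> 1-D array of TOF values for the
    priorities that actually arrived; missing priorities get FILLVAL rows.
    Returns a (NUM_PRIORITIES, n_events) array.
    """
    padded = np.full((NUM_PRIORITIES, n_events), FILLVAL, dtype=np.uint16)
    for priority, tof in group.items():
        # Copy the real data; any trailing events stay at FILLVAL.
        padded[priority, : len(tof)] = tof
    return padded
```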

*This PR was created with the help of Claude*

Updated Files

  • imap_processing/codice/codice_l1a_de.py
    • instead of dropping incomplete groups, pad them with fillvals for the missing priorities

Testing

Added a new test for incomplete priority group handling

@lacoak21 lacoak21 added this to the January 2026 milestone Jan 28, 2026
@lacoak21 lacoak21 requested review from Copilot and greglucas January 28, 2026 18:57
@lacoak21 lacoak21 self-assigned this Jan 28, 2026
@lacoak21 lacoak21 added this to IMAP Jan 28, 2026
@lacoak21
Contributor Author

@greglucas I ended up writing this because I wasn't sure where else to check when our data was mismatching. This helped me figure out what was going on with Joey's validation data. I guess it's a win-win because now we have this code.

Contributor

Copilot AI left a comment


Pull request overview

This pull request updates the CoDICE direct events L1A processing to pad incomplete priority groups with fill values instead of dropping them entirely. This change aligns the implementation with Joey's approach to prevent epoch dimension mismatches during data processing.

Changes:

  • Modified process_de_data function to pad incomplete groups with fill values for missing priorities instead of dropping incomplete groups
  • Added test case to verify incomplete priority group handling with fill value padding
  • Updated imports in test file to support the new test

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.

File Description
imap_processing/codice/codice_l1a_de.py Replaced logic that dropped incomplete priority groups with padding logic that creates fill-value packets for missing priorities and sorts them by priority
imap_processing/tests/codice/test_codice_l1a.py Added new test test_direct_events_incomplete_groups to verify padding behavior and added necessary imports (l1a_direct_event, CODICEAPID, packet_file_to_datasets)


Comment on lines 741 to 772
```python
def test_direct_events_incomplete_groups(codice_lut_path, caplog):
    """Tests lo-direct-events with a packet containing incomplete groups."""

    # Get science data which is L0 packet file
    science_file = codice_lut_path(descriptor="lo-direct-events", data_type="l0")[0]

    xtce_file = (
        imap_module_directory / "codice/packet_definitions/codice_packet_definition.xml"
    )
    # Decom packet
    datasets_by_apid = packet_file_to_datasets(
        science_file,
        xtce_file,
    )
    apid = CODICEAPID.COD_LO_PHA
    de_dataset = datasets_by_apid[apid]
    # Drop the first packet to test incomplete group handling
    len_epoch = de_dataset.sizes["epoch"]
    de_dataset = de_dataset.isel(epoch=slice(1, len_epoch))
    dataset = l1a_direct_event(de_dataset, apid)
    # Check that fillvals are used for the first missing priority for the first epoch
    assert np.all(dataset.tof[0, 0, :].values == 65535)
    # Check that there is data for the remaining priorities
    assert np.any(dataset.tof[0, 1:, :].values != 65535)
    # Check logs for incomplete groups
    assert (
        f"Found 1 incomplete priority group(s) for APID {apid}. "
        f"Expected 8 packets per group"
    ) in caplog.text
```


Copilot AI Jan 28, 2026


The test only covers the case where packets are missing (count < num_priorities). However, the code at lines 390-422 in codice_l1a_de.py identifies incomplete groups as those where counts != num_priorities, which includes cases where count > num_priorities (e.g., duplicate priorities or extra packets). Consider adding a test case for groups with too many packets to ensure the code handles this scenario correctly.
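
To illustrate Copilot's point, a `counts != num_priorities` check flags both under- and over-sized groups. This standalone sketch (variable names assumed to mirror the PR's, data invented) shows that an oversized group is flagged by the same comparison, so padding alone doesn't decide what happens to extras:

```python
import numpy as np

num_priorities = 8
# Hypothetical acquisition-start times: one complete group (t=10),
# one short group (t=20, 7 packets), one oversized group (t=30, 9 packets).
acq_start_seconds = np.array([10] * 8 + [20] * 7 + [30] * 9)

unique_times, counts = np.unique(acq_start_seconds, return_counts=True)
# Both the short group and the oversized group fail the equality check.
incomplete_times = unique_times[counts != num_priorities]
```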

@lacoak21 lacoak21 requested a review from laspsandoval January 28, 2026 19:28
Contributor

@laspsandoval laspsandoval left a comment


There is one bug (I think) that you should correct. Other comments are just for you to double check.

Fix this one:
This should be `count`, not `counts`, I think.

```python
padded_groups = []
for time, count in zip(unique_times, counts, strict=False):
    # Get the packets for this group
    group_ids = np.where(acq_start_seconds == time)[0]
```
Contributor


Just checking. Will the acquisition time always be the same for each group?

Collaborator


Yes, they are and that is the only way to identify a group from what we've found so far.
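
Since acquisition time is the group key, the grouping step can be sketched like this (a toy example with invented times; the real code keys off `acq_start_seconds` per the quoted snippet):

```python
import numpy as np

# Hypothetical acquisition-start seconds for five packets in two groups.
acq_start_seconds = np.array([100, 100, 100, 200, 200])

# Every packet sharing an acquisition time belongs to the same group.
unique_times = np.unique(acq_start_seconds)
groups = {t: np.where(acq_start_seconds == t)[0] for t in unique_times}
```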

Comment on lines 399 to 401
```python
missing_priorities = sorted(
    [p for p in range(num_priorities) if p not in existing_priorities]
)
```
Collaborator


Something like this instead? `set(range(num_priorities)) - existing_priorities`
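
The suggested set difference is equivalent to the list comprehension; a standalone example (assuming `existing_priorities` is, or is converted to, a set):

```python
num_priorities = 8
existing_priorities = {0, 1, 3, 5, 6, 7}  # hypothetical: priorities 2 and 4 missing

# Set difference yields the priorities that never arrived for this group;
# sorted() restores the deterministic ordering the original comprehension had.
missing_priorities = sorted(set(range(num_priorities)) - existing_priorities)
```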

Collaborator

@greglucas greglucas left a comment


I think this looks reasonable to me. I haven't done a thorough review, but seems like it is right at first glance.

@lacoak21 lacoak21 force-pushed the codice_direct_event_debug branch from 01c3dbf to 852ad91 Compare January 29, 2026 00:08
