Conversation

Review skipped: draft detected. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Configuration used: Path: .coderabbit.yaml · Review profile: ASSERTIVE · Plan: Pro

Walkthrough: Adds a period/time_optimize dimension to …
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
temoa/components/commodities.py (1)
784-806: ⚠️ Potential issue | 🟠 Major — Backfill defaults per (region, period, demand), not just demand.
`demands_specified` is still collapsed to `dem`, so one custom DSD for `RL` in a single period marks `RL` as "specified" everywhere. With the restored period index, omitted `(r, p, RL)` combinations are never defaulted from `segment_fraction`, and they fail the later per-period sum check instead of inheriting the default profile.
Suggested fix
```diff
-        demands_specified = set(
-            map(
-                demand_specific_distributon_dem,
-                (i for i in demand_specific_distribution.sparse_keys()),
-            )
-        )
-        unset_demand_distributions = used_dems.difference(
-            demands_specified
-        )  # the demands not mentioned in DSD *at all*
-
-        if unset_demand_distributions:
-            unset_distributions = set(
-                cross_product(
-                    model.regions,
-                    model.time_optimize,
-                    model.time_season,
-                    model.time_of_day,
-                    unset_demand_distributions,
-                )
-            )
-            for r, p, s, d, dem in unset_distributions:
-                demand_specific_distribution[r, p, s, d, dem] = value(
-                    model.segment_fraction[s, d]
-                )  # DSD._constructed = True
+        specified_rp_dems = {
+            (r, p, dem)
+            for r, p, _s, _d, dem in demand_specific_distribution.sparse_keys()
+        }
+        unset_rp_dems = {
+            (r, p, dem)
+            for r, p, dem in model.demand.sparse_keys()
+            if (r, p, dem) not in specified_rp_dems
+        }
+
+        for r, p, dem in unset_rp_dems:
+            for s, d in cross_product(model.time_season, model.time_of_day):
+                demand_specific_distribution[r, p, s, d, dem] = value(
+                    model.segment_fraction[s, d]
+                )  # DSD._constructed = True
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@temoa/components/commodities.py` around lines 784 - 806, demands_specified currently collapses keys to just dem, causing a single custom DSD to mark that demand as specified everywhere; change the computation of demands_specified to capture the full (region, period, demand) tuple from demand_specific_distribution.sparse_keys() (e.g., map to (r, p, dem) instead of dem), ensure used_dems is the same shape (set of (r,p,dem)), compute unset_demand_distributions as the difference of those tuples, and then build unset_distributions by crossing model.regions, model.time_optimize, model.time_season, model.time_of_day with those (r,p,dem) tuples so the loop assigns demand_specific_distribution[r, p, s, d, dem] = value(model.segment_fraction[s, d]) for the correct per-region/per-period combinations.
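The keying change the review asks for can be illustrated outside Pyomo with plain dicts; `demand_keys`, `dsd`, and `segment_fraction` below are hypothetical stand-ins for the model components, not the actual Temoa API:

```python
from itertools import product

# Hypothetical stand-ins for the Pyomo components discussed above.
segment_fraction = {('winter', 'day'): 0.3, ('winter', 'night'): 0.7}
seasons, tods = ['winter'], ['day', 'night']

# Demand keys: (region, period, demand_commodity).
demand_keys = {('R1', 2020, 'RH'), ('R1', 2030, 'RH')}

# One explicit DSD profile exists for R1/2020 only.
dsd = {
    ('R1', 2020, 'winter', 'day', 'RH'): 0.4,
    ('R1', 2020, 'winter', 'night', 'RH'): 0.6,
}

# Key by the full (region, period, demand) tuple, not just the demand name.
specified = {(r, p, dem) for r, p, _s, _d, dem in dsd}
unset = {k for k in demand_keys if k not in specified}

# Backfill only the truly unspecified (r, p, dem) combinations.
for r, p, dem in unset:
    for s, d in product(seasons, tods):
        dsd[r, p, s, d, dem] = segment_fraction[s, d]
```

With the collapsed-to-`dem` keying, R1/2030 would never be backfilled because `'RH'` was already "specified" in 2020; keying by the full tuple restores the default profile for the missing period.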
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/testing_data/test_system.sql`:
- Around line 254-285: The fixture repeats the 2020 RH splits for future years
so it doesn't catch period-specific bugs; change at least one future-year row in
demand_specific_distribution (e.g., any of the ('R1',2030,...,'RH',...) or
('R2',2025,...,'RH',...) entries) to a different RH value (for example adjust a
winter night/day RH from 0.4/0.3 to 0.45/0.25 or change a spring/night 0.1 to
0.15) so the 2025/2030 profile differs from 2020 and the restored period index
is exercised.
---
Outside diff comments:
In `@temoa/components/commodities.py`:
- Around line 784-806: demands_specified currently collapses keys to just dem,
causing a single custom DSD to mark that demand as specified everywhere; change
the computation of demands_specified to capture the full (region, period,
demand) tuple from demand_specific_distribution.sparse_keys() (e.g., map to (r,
p, dem) instead of dem), ensure used_dems is the same shape (set of (r,p,dem)),
compute unset_demand_distributions as the difference of those tuples, and then
build unset_distributions by crossing model.regions, model.time_optimize,
model.time_season, model.time_of_day with those (r,p,dem) tuples so the loop
assigns demand_specific_distribution[r, p, s, d, dem] =
value(model.segment_fraction[s, d]) for the correct per-region/per-period
combinations.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 21bfcac4-c799-4877-979a-6c9dddd65eb4
📒 Files selected for processing (14)
temoa/_internal/table_data_puller.py, temoa/components/capacity.py, temoa/components/commodities.py, temoa/components/emissions.py, temoa/components/reserves.py, temoa/core/model.py, temoa/data_io/component_manifest.py, temoa/db_schema/temoa_schema_v4.sql, temoa/tutorial_assets/utopia.sql, tests/testing_data/mediumville.sql, tests/testing_data/seasonal_storage.sql, tests/testing_data/storageville.sql, tests/testing_data/test_system.sql, tests/testing_data/utopia_data.sql
```sql
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'spring','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'spring','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'summer','day','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'summer','night','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'fall','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'fall','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'winter','day','RH',0.3,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2025,'winter','night','RH',0.4,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'spring','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'spring','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'summer','day','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'summer','night','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'fall','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'fall','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'winter','day','RH',0.3,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2025,'winter','night','RH',0.4,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'spring','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'spring','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'summer','day','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'summer','night','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'fall','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'fall','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'winter','day','RH',0.3,'');
REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'winter','night','RH',0.4,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'spring','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'spring','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'summer','day','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'summer','night','RH',0.0,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'fall','day','RH',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'fall','night','RH',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'winter','day','RH',0.3,'');
REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'winter','night','RH',0.4,'');
```
🧹 Nitpick | 🔵 Trivial
Use a period-specific RH split in at least one future year.
The 2025/2030 rows repeat the 2020 profile exactly, so this fixture still passes if a consumer accidentally ignores p and treats demand_specific_distribution as period-invariant. Making one future period intentionally different here, or in a small dedicated regression fixture, would actually exercise the restored index.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/testing_data/test_system.sql` around lines 254 - 285, The fixture
repeats the 2020 RH splits for future years so it doesn't catch period-specific
bugs; change at least one future-year row in demand_specific_distribution (e.g.,
any of the ('R1',2030,...,'RH',...) or ('R2',2025,...,'RH',...) entries) to a
different RH value (for example adjust a winter night/day RH from 0.4/0.3 to
0.45/0.25 or change a spring/night 0.1 to 0.15) so the 2025/2030 profile differs
from 2020 and the restored period index is exercised.
@idelder, here's a minimal change that keeps the sums valid while making the 2030 profile distinct from 2020/2025 (shifting winter demand slightly later into the night):
```diff
-REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'winter','day','RH',0.3,'');
-REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'winter','night','RH',0.4,'');
-REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'winter','day','RH',0.3,'');
-REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'winter','night','RH',0.4,'');
+REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'winter','day','RH',0.25,'');
+REPLACE INTO "demand_specific_distribution" VALUES('R1',2030,'winter','night','RH',0.45,'');
+REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'winter','day','RH',0.25,'');
+REPLACE INTO "demand_specific_distribution" VALUES('R2',2030,'winter','night','RH',0.45,'');
```

(The remaining 2030 rows — spring day 0.05, spring night 0.1, summer day/night 0.0, fall day 0.05, fall night 0.1 — are unchanged for both regions.)

The sum across all slices remains 1.0 (0.05 + 0.10 + 0.00 + 0.00 + 0.05 + 0.10 + 0.25 + 0.45 = 1.0). With 2020/2025 using winter/day=0.3 and 2030 using 0.25, any code path that ignores p and uses a stale distribution will produce a wrong result that a test can catch.
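The per-profile sum invariant the suggestion relies on can be sketched as a small standalone check; `profile_sums` is a hypothetical helper, not part of Temoa:

```python
# Hypothetical check that each (region, period, demand) profile sums to 1.0,
# mirroring the per-period sum validation the fixture is meant to exercise.
rows_2030 = [
    ('R1', 2030, 'spring', 'day', 'RH', 0.05),
    ('R1', 2030, 'spring', 'night', 'RH', 0.1),
    ('R1', 2030, 'summer', 'day', 'RH', 0.0),
    ('R1', 2030, 'summer', 'night', 'RH', 0.0),
    ('R1', 2030, 'fall', 'day', 'RH', 0.05),
    ('R1', 2030, 'fall', 'night', 'RH', 0.1),
    ('R1', 2030, 'winter', 'day', 'RH', 0.25),
    ('R1', 2030, 'winter', 'night', 'RH', 0.45),
]

def profile_sums(rows):
    """Accumulate slice fractions per (region, period, demand)."""
    totals = {}
    for r, p, _s, _d, dem, frac in rows:
        totals[r, p, dem] = totals.get((r, p, dem), 0.0) + frac
    return totals

sums = profile_sums(rows_2030)
assert abs(sums['R1', 2030, 'RH'] - 1.0) < 1e-9
```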
🧠 Learnings used
Learnt from: idelder
Repo: TemoaProject/temoa PR: 277
File: temoa/model_checking/network_model_data.py:357-361
Timestamp: 2026-03-16T18:27:06.688Z
Learning: In TemoaProject/temoa, `NetworkModelData.demand_commodities` in `temoa/model_checking/network_model_data.py` is used only as a starting-node set for the commodity network sweep, not as a validation or filter set. Having extra period keys outside the myopic window is harmless because `available_techs` is already strictly filtered to the active myopic periods; unreachable demand entries are simply ignored during the sweep. Do not flag unfiltered demand_commodities in myopic mode as a correctness issue.
Learnt from: idelder
Repo: TemoaProject/temoa PR: 264
File: temoa/tutorial_assets/utopia.sql:1441-1449
Timestamp: 2026-03-09T13:28:25.754Z
Learning: In the Temoa project (TemoaProject/temoa), the auto-population of the `time_season_sequential` table when it is empty is intentional behavior, not a runtime fallback. For `seasonal_timeslice`/`consecutive_days` models, the sequential seasons are designed to automatically match the regular seasons defined in `time_season`. Therefore, SQL assets for such models (e.g., `temoa/tutorial_assets/utopia.sql`) intentionally omit explicit INSERT statements for `time_season_sequential`.
@idelder, understood — no changes needed here. I'll drop the suggestion.
🧠 Learnings used
Learnt from: idelder
Repo: TemoaProject/temoa PR: 264
File: temoa/tutorial_assets/utopia.sql:898-912
Timestamp: 2026-03-09T13:29:38.792Z
Learning: In the Temoa project (TemoaProject/temoa), the `season` column on `limit_storage_level_fraction` in SQL assets (e.g., temoa/tutorial_assets/utopia.sql and data_files/example_dbs/*.sql) intentionally has no FK to `time_season`. Season validity for this table is enforced at the code/loader level rather than at the database schema level. Restoring a season_label FK table or splitting the table into seasonal/non-seasonal variants are considered worse alternatives. Do not flag the missing FK on this column as an issue.
Force-pushed 1b81750 to 8d08479 (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
temoa/components/commodities.py (1)
779-857: ⚠️ Potential issue | 🟠 Major — Track specified DSDs by (region, period, demand).
`demands_specified` still collapses sparse keys to `dem` only. With the restored period index, one explicit DSD for a demand suppresses default population for every other `(region, period)` of that demand; when validation reaches one of those missing tuples, it can also fall through to `max()` on an empty `keys` list.
Proposed fix
```diff
-        demands_specified = set(
-            map(
-                demand_specific_distributon_dem,
-                (i for i in demand_specific_distribution.sparse_keys()),
-            )
-        )
-        unset_demand_distributions = used_dems.difference(
-            demands_specified
-        )  # the demands not mentioned in DSD *at all*
+        used_rp_dems = {(r, p, dem) for r, p, dem in model.demand.sparse_keys()}
+        specified_rp_dems = {
+            (r, p, dem)
+            for r, p, _s, _d, dem in demand_specific_distribution.sparse_keys()
+        }
+        unset_rp_dems = used_rp_dems.difference(
+            specified_rp_dems
+        )  # the (region, period, demand) tuples not mentioned in DSD at all

-        if unset_demand_distributions:
-            unset_distributions = set(
-                cross_product(
-                    model.regions,
-                    model.time_optimize,
-                    model.time_season,
-                    model.time_of_day,
-                    unset_demand_distributions,
-                )
-            )
+        if unset_rp_dems:
+            unset_distributions = (
+                (r, p, s, d, dem)
+                for r, p, dem in unset_rp_dems
+                for s, d in cross_product(model.time_season, model.time_of_day)
+            )
             for r, p, s, d, dem in unset_distributions:
                 demand_specific_distribution[r, p, s, d, dem] = value(
                     model.segment_fraction[s, d]
                 )  # DSD._constructed = True

-        used_rp_dems = {(r, p, dem) for r, p, dem in model.demand.sparse_keys()}
         for r, p, dem in used_rp_dems:
             expected_key_length = len(model.time_season) * len(model.time_of_day)
             keys = [
                 k
                 for k in demand_specific_distribution.sparse_keys()
                 if demand_specific_distribution_region(k) == r
                 and demand_specific_distributon_period(k) == p
                 and demand_specific_distributon_dem(k) == dem
             ]
+            if not keys:
+                raise ValueError(
+                    f'No demand_specific_distribution entries found for {(r, p, dem)}'
+                )
             if len(keys) != expected_key_length:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@temoa/components/commodities.py` around lines 779 - 857, The bug is that demands_specified only records dem (collapsing the period), so one explicit DSD suppresses defaults across all periods and later validation can get empty keys; update the tracking to use the full (region, period, dem) tuple. Specifically, change the construction of demands_specified to map each sparse key via demand_specific_distribution_region, demand_specific_distributon_period and demand_specific_distributon_dem to produce (r, p, dem) tuples (instead of just dem), then compute unset_demand_distributions relative to used_rp_dems (or rename used_dems to used_rp_dems) and use that set in the cross_product/population loop that assigns demand_specific_distribution[r, p, s, d, dem]. Also ensure the final validation that builds keys filters by region, period and dem consistently using those same helper functions so keys is never empty for a used (r,p,dem).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/testing_data/utopia_data.sql`:
- Around line 252-281: The test fixture only inserts fully explicit
period-indexed rows into demand_specific_distribution so it doesn't exercise the
mixed explicit/default code path; add a row (for example for a chosen scenario
like city 'utopia' and year 2010) where one season/time combination has an
explicit entry in demand_specific_distribution while another combination for the
same city/year/time/season is omitted so that the code falls back to
segment_fraction; update the testing data block near the existing REPLACE INTO
"demand_specific_distribution" entries to include this mixed case to ensure the
path that merges explicit period entries with segment_fraction fallback is
exercised (refer to the table demand_specific_distribution and the
segment_fraction fallback behavior).
---
Outside diff comments:
In `@temoa/components/commodities.py`:
- Around line 779-857: The bug is that demands_specified only records dem
(collapsing the period), so one explicit DSD suppresses defaults across all
periods and later validation can get empty keys; update the tracking to use the
full (region, period, dem) tuple. Specifically, change the construction of
demands_specified to map each sparse key via
demand_specific_distribution_region, demand_specific_distributon_period and
demand_specific_distributon_dem to produce (r, p, dem) tuples (instead of just
dem), then compute unset_demand_distributions relative to used_rp_dems (or
rename used_dems to used_rp_dems) and use that set in the
cross_product/population loop that assigns demand_specific_distribution[r, p, s,
d, dem]. Also ensure the final validation that builds keys filters by region,
period and dem consistently using those same helper functions so keys is never
empty for a used (r,p,dem).
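The validation shape described above (one complete season × time-of-day profile per used tuple, with an explicit empty-keys guard) can be sketched with plain dicts; `validate` and its key layout are hypothetical, not the Temoa implementation:

```python
from itertools import product

seasons, tods = ['winter', 'summer'], ['day', 'night']
EXPECTED = len(seasons) * len(tods)  # complete profile size per (r, p, dem)

# A complete hypothetical profile for R1/2020/RH.
dsd = {('R1', 2020, s, d, 'RH'): 0.25 for s, d in product(seasons, tods)}

def validate(dsd, used_rp_dems):
    """Raise if any used (region, period, demand) lacks a complete profile."""
    for r, p, dem in used_rp_dems:
        keys = [k for k in dsd if k[0] == r and k[1] == p and k[4] == dem]
        if not keys:
            # Guard the empty case explicitly instead of failing later on max().
            raise ValueError(f'No DSD entries for {(r, p, dem)}')
        if len(keys) != EXPECTED:
            raise ValueError(f'Incomplete profile for {(r, p, dem)}: {len(keys)}/{EXPECTED}')

validate(dsd, {('R1', 2020, 'RH')})  # complete profile: passes
```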
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: f58b799c-b3fe-4526-bad2-a37e78215596
📒 Files selected for processing (14)
temoa/_internal/table_data_puller.py, temoa/components/capacity.py, temoa/components/commodities.py, temoa/components/emissions.py, temoa/components/reserves.py, temoa/core/model.py, temoa/data_io/component_manifest.py, temoa/db_schema/temoa_schema_v4.sql, temoa/tutorial_assets/utopia.sql, tests/testing_data/mediumville.sql, tests/testing_data/seasonal_storage.sql, tests/testing_data/storageville.sql, tests/testing_data/test_system.sql, tests/testing_data/utopia_data.sql
```sql
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'inter','day','RH',0.12,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'inter','night','RH',0.06,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'winter','day','RH',0.5467,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'winter','night','RH',0.2733,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'inter','day','RL',0.15,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'inter','night','RL',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'summer','day','RL',0.15,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'summer','night','RL',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'winter','day','RL',0.5,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',1990,'winter','night','RL',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'inter','day','RH',0.12,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'inter','night','RH',0.06,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'winter','day','RH',0.5467,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'winter','night','RH',0.2733,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'inter','day','RL',0.15,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'inter','night','RL',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'summer','day','RL',0.15,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'summer','night','RL',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'winter','day','RL',0.5,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2000,'winter','night','RL',0.1,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'inter','day','RH',0.12,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'inter','night','RH',0.06,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'winter','day','RH',0.5467,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'winter','night','RH',0.2733,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'inter','day','RL',0.15,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'inter','night','RL',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'summer','day','RL',0.15,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'summer','night','RL',0.05,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'winter','day','RL',0.5,'');
REPLACE INTO "demand_specific_distribution" VALUES('utopia',2010,'winter','night','RL',0.1,'');
```
🧹 Nitpick | 🔵 Trivial
Add a mixed explicit/default period fixture.
This block only exercises the fully explicit period-indexed DSD path. A case where one period is overridden and another period falls back to segment_fraction would cover the restored key shape more directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/testing_data/utopia_data.sql` around lines 252 - 281, The test fixture
only inserts fully explicit period-indexed rows into
demand_specific_distribution so it doesn't exercise the mixed explicit/default
code path; add a row (for example for a chosen scenario like city 'utopia' and
year 2010) where one season/time combination has an explicit entry in
demand_specific_distribution while another combination for the same
city/year/time/season is omitted so that the code falls back to
segment_fraction; update the testing data block near the existing REPLACE INTO
"demand_specific_distribution" entries to include this mixed case to ensure the
path that merges explicit period entries with segment_fraction fallback is
exercised (refer to the table demand_specific_distribution and the
segment_fraction fallback behavior).
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@temoa/components/capacity.py`:
- Around line 317-327: The subtraction of model.v_retired_capacity[r, p, t, v]
in the fallback branch is using period-p surviving-capacity units while cap_end
has already been projected to p_next via the survival ratio; scale the
retirement term by the same survival ratio used above
(value(model.lifetime_survival_curve[r, p_next, t, v]) /
value(model.lifetime_survival_curve[r, p, t, v])) before subtracting so the
units match when updating cap_end in the branch that handles p ==
model.time_optimize.last() or p_next == v + value(model.lifetime_process[r, t,
v]).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: cf7cf6b7-012f-42c2-bf24-26af219239b8
📒 Files selected for processing (4)
temoa/components/capacity.py, temoa/data_io/component_manifest.py, temoa/data_io/hybrid_loader.py, temoa/model_checking/commodity_network_manager.py
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
temoa/data_io/component_manifest.py (1)
321-330: ⚠️ Potential issue | 🔴 Critical — Myopic filtering is not applied when the custom loader is disabled.
The custom loader `_load_loan_rate` handles myopic period filtering (vintage >= base_year) but is commented out. The `viable_rtv_new` validator filters by viability set, not by myopic period. With the custom loader disabled, `is_period_filtered=False` is incorrect: myopic filtering is not happening. This affects both `cost_invest` (same pattern) and `loan_rate`. Either uncomment the custom loader or handle myopic period filtering in the standard loading path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@temoa/data_io/component_manifest.py` around lines 321 - 330, the LoadItem for model.loan_rate (and the similar cost_invest entries) relies on the custom loader _load_loan_rate to enforce myopic period filtering (vintage >= base_year) but the custom_loader_name is commented out while is_period_filtered=False, so myopic filtering is skipped; fix by either uncommenting and restoring custom_loader_name='_load_loan_rate' for the LoadItem(s) or, if you want to use the standard loader, set is_period_filtered=True and update the standard loading/validation path (or the validator used, e.g., viable_rtv_new) to explicitly apply vintage >= base_year filtering for those tables so myopic periods are excluded.
Verify each finding against the current code and only fix it if needed. In `@temoa/data_io/component_manifest.py` around lines 321 - 330, The LoadItem for model.loan_rate (and the similar cost_invest entries) relies on the custom loader _load_loan_rate to enforce myopic period filtering (vintage >= base_year) but the custom_loader_name is commented out while is_period_filtered=False, so myopic filtering is skipped; fix by either uncommenting and restoring custom_loader_name='_load_loan_rate' for the LoadItem(s) or, if you want to use the standard loader, set is_period_filtered=True and update the standard loading/validation path (or the validator used, e.g., viable_rtv_new) to explicitly apply vintage >= base_year filtering for those tables so myopic periods are excluded.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@temoa/components/capacity.py`:
- Around line 318-334: The nested conditional under the else block can be
flattened: replace the "else:" followed by "if p == model.time_optimize.last()
or v + value(model.lifetime_process[r, t, v]) == p_end:" with a single "elif" so
the check is directly attached to the prior "if" and remove the extra
indentation; leave the cap_end assignments for both branches unchanged (the
branch that uses value(model.lifetime_survival_curve[r, p_end, t, v]) /
value(model.lifetime_survival_curve[r, p, t, v]) remains for the p==... or
v+...==p_end case, and the branch that uses model.v_capacity[r, p_end, t, v] *
value(model.lifetime_survival_curve[r, p_end, t, v]) /
value(model.process_life_frac[r, p_end, t, v]) remains for the other case).
---
Outside diff comments:
In `@temoa/data_io/component_manifest.py`:
- Around line 321-330: The LoadItem for model.loan_rate (and the similar
cost_invest entries) relies on the custom loader _load_loan_rate to enforce
myopic period filtering (vintage >= base_year) but the custom_loader_name is
commented out while is_period_filtered=False, so myopic filtering is skipped;
fix by either uncommenting and restoring custom_loader_name='_load_loan_rate'
for the LoadItem(s) or, if you want to use the standard loader, set
is_period_filtered=True and update the standard loading/validation path (or the
validator used, e.g., viable_rtv_new) to explicitly apply vintage >= base_year
filtering for those tables so myopic periods are excluded.
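The vintage filter at issue is simple to state on its own; the row layout and `filter_myopic` helper below are hypothetical illustrations of the `vintage >= base_year` rule, not the actual loader:

```python
# Hypothetical sketch of the vintage >= base_year filter that the disabled
# _load_loan_rate custom loader was enforcing. Row layout is assumed:
# (region, tech, vintage, loan_rate).
base_year = 2025

rows = [
    ('R1', 'tech_a', 2020, 0.05),  # pre-window vintage: should be dropped
    ('R1', 'tech_a', 2025, 0.05),
    ('R1', 'tech_a', 2030, 0.06),
]

def filter_myopic(rows, base_year):
    """Keep only vintages inside the current myopic window."""
    return [row for row in rows if row[2] >= base_year]

kept = filter_myopic(rows, base_year)
```

Whether this lives in a restored custom loader or in the standard loading path (with `is_period_filtered=True`), the effect on the loaded data should be the same.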
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 0deb05a9-65f2-4c9a-a399-4b813acef9db
📒 Files selected for processing (2)
temoa/components/capacity.py, temoa/data_io/component_manifest.py
```diff
         else:
             # Mid-life period, ending capacity is beginning capacity of next period
-            p_next = model.time_future.next(p)
-
-            if p == model.time_optimize.last() or p_next == v + value(model.lifetime_process[r, t, v]):
+            if p == model.time_optimize.last() or v + value(model.lifetime_process[r, t, v]) == p_end:
                 # No v_capacity or v_retired_capacity for next period so just continue down the
-                # survival curve
+                # survival curve. If v+l=p then eol would be dumped in the next period
                 cap_end = (
                     cap_begin
-                    * value(model.lifetime_survival_curve[r, p_next, t, v])
+                    * value(model.lifetime_survival_curve[r, p_end, t, v])
                     / value(model.lifetime_survival_curve[r, p, t, v])
                 )
             else:
                 # Get the next period's beginning capacity
                 cap_end = (
-                    model.v_capacity[r, p_next, t, v]
-                    * value(model.lifetime_survival_curve[r, p_next, t, v])
-                    / value(model.process_life_frac[r, p_next, t, v])
+                    model.v_capacity[r, p_end, t, v]
+                    * value(model.lifetime_survival_curve[r, p_end, t, v])
+                    / value(model.process_life_frac[r, p_end, t, v])
                 )
```
🧹 Nitpick | 🔵 Trivial
Consider using elif for cleaner structure.
The static analysis correctly identifies that the nested if inside else could be simplified to elif for better readability.
♻️ Proposed refactor

```diff
-    if p <= v + value(model.lifetime_process[r, t, v]) < p_end:
+    if p <= v + value(model.lifetime_process[r, t, v]) < p_end:
         # EOL so capacity ends on zero
         cap_end = 0
-    else:
-        # Mid-life period, ending capacity is beginning capacity of next period
-        if p == model.time_optimize.last() or v + value(model.lifetime_process[r, t, v]) == p_end:
+    elif p == model.time_optimize.last() or v + value(model.lifetime_process[r, t, v]) == p_end:
+        # Mid-life period, ending capacity is beginning capacity of next period
+        # No v_capacity or v_retired_capacity for next period so just continue down the
+        # survival curve. If v+l=p_end then eol would be dumped in the next period
+        cap_end = (
+            cap_begin
+            * value(model.lifetime_survival_curve[r, p_end, t, v])
+            / value(model.lifetime_survival_curve[r, p, t, v])
+        )
+    else:
+        # Get the next period's beginning capacity
+        cap_end = (
+            model.v_capacity[r, p_end, t, v]
+            * value(model.lifetime_survival_curve[r, p_end, t, v])
+            / value(model.process_life_frac[r, p_end, t, v])
+        )
```

🧰 Tools
🪛 Ruff (0.15.7)
[warning] 318-320: Use elif instead of else then if, to reduce indentation
Convert to elif
(PLR5501)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@temoa/components/capacity.py` around lines 318 - 334, The nested conditional
under the else block can be flattened: replace the "else:" followed by "if p ==
model.time_optimize.last() or v + value(model.lifetime_process[r, t, v]) ==
p_end:" with a single "elif" so the check is directly attached to the prior "if"
and remove the extra indentation; leave the cap_end assignments for both
branches unchanged (the branch that uses value(model.lifetime_survival_curve[r,
p_end, t, v]) / value(model.lifetime_survival_curve[r, p, t, v]) remains for the
p==... or v+...==p_end case, and the branch that uses model.v_capacity[r, p_end,
t, v] * value(model.lifetime_survival_curve[r, p_end, t, v]) /
value(model.process_life_frac[r, p_end, t, v]) remains for the other case).
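The survival-curve projection in the first branch reduces to cap_end = cap_begin · S(p_end)/S(p). A numeric sketch, with made-up curve values (not Temoa data):

```python
# Hypothetical survival fractions S(p) per period for one (r, t, v) process.
survival = {2020: 1.0, 2025: 0.9, 2030: 0.75}

def project_capacity(cap_begin, p, p_end, survival):
    """Project period-p surviving capacity to the end of the segment by
    continuing down the survival curve: cap_begin * S(p_end) / S(p)."""
    return cap_begin * survival[p_end] / survival[p]

cap_end = project_capacity(100.0, 2025, 2030, survival)
# 100.0 * 0.75 / 0.9, i.e. roughly 83.33
```

Both diffs above keep this ratio intact; the `elif` flattening changes only the control-flow shape, not the arithmetic.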
Summary by CodeRabbit
- New Features
- Database Schema
- Updates
- Data & Load
- Bug Fixes