Summary
Add shared Core infrastructure for collecting, storing, and exposing gameplay/run analytics data that feature mods can use for dashboards, graphs, and post-mortem analysis.
This is a support/foundation issue for the QoL dashboard request:
Why this belongs in Core
A dashboard feature can start in QoL, but the underlying data collection should probably not be tightly coupled to a single QoL UI. Other mods may eventually want to register metrics, inspect run history, or display summaries. If each mod invents its own telemetry format, we get the usual beautiful ecosystem of five incompatible wheels, because apparently one wheel was too merciful.
Core should provide a small, reusable layer (a rough sketch of the surface follows this list) for:
- Registering metrics.
- Sampling values over time.
- Persisting per-run summaries.
- Loading previous-run data for comparison/ghost overlays.
- Letting feature mods consume analytics data without duplicating collection logic.
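A rough sketch of what that surface could look like, in TypeScript-flavored pseudocode. Every name here (`AnalyticsCore`, `registerMetric`, `Sample`, and so on) is a placeholder for discussion, not a committed API:

```ts
// Hypothetical sketch of the Core surface; every name is a placeholder,
// not a committed API.
export interface Sample {
  t: number;     // ms since run start
  value: number;
}

export interface AnalyticsCore {
  // Register a metric read function; returns an unregister handle.
  registerMetric(id: string, read: () => number): () => void;
  // Read-only view of the current run's sampled series.
  getSeries(metricId: string): ReadonlyArray<Sample>;
  // IDs of persisted previous runs.
  listRunIds(): string[];
  // A previous run's series for ghost overlays.
  loadRunSeries(runId: string, metricId: string): ReadonlyArray<Sample>;
}
```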
Initial use case
The first consumer would be a QoL analytics dashboard showing things like (one derived metric is sketched after the list):
- Total assets over time.
- Income as percentage of total assets.
- Research-related progress/rate metrics.
- Previous-run ghost/reference curves.
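As an illustration of the second item, income-as-percentage-of-assets is just a small derived read over two game values. The `game` accessors below are invented stand-ins for whatever Upload Labs actually exposes:

```ts
// Illustrative only: the `game` accessors below are invented stand-ins for
// whatever Upload Labs actually exposes.
function incomePercentOfAssets(game: {
  getIncomePerSecond(): number;
  getTotalAssets(): number;
}): number {
  const assets = game.getTotalAssets();
  // Guard against division by zero at the start of a run.
  return assets > 0 ? (game.getIncomePerSecond() / assets) * 100 : 0;
}
```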
Proposed Core API direction
Metric registration
Allow mods to register metric providers with metadata such as (a possible shape is sketched after the list):
- Metric ID.
- Display name.
- Unit/format type, e.g. currency/assets, percentage, rate, count.
- Optional category, e.g. economy, research, production.
- Sampling strategy or suggested interval.
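One possible shape for that metadata, as a TypeScript sketch; the field names are suggestions only, not a final API:

```ts
// Hypothetical metadata shape; field names are suggestions, not a final API.
export type MetricFormat = "currency" | "percentage" | "rate" | "count";

export interface MetricDescriptor {
  id: string;                // stable and namespaced, e.g. "qol.total_assets"
  displayName: string;       // label shown in dashboard UIs
  format: MetricFormat;      // drives number formatting in charts
  category?: string;         // e.g. "economy", "research", "production"
  sampleIntervalMs?: number; // suggested interval; Core may clamp or override
}

// A provider pairs the descriptor with a zero-argument read function.
export interface MetricProvider {
  descriptor: MetricDescriptor;
  read(): number;
}
```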
Time-series sampling
Provide a lightweight service that can (see the sketch after this list):
- Periodically sample registered metrics.
- Store timestamped values for the current run.
- Avoid excessive data growth by supporting downsampling or capped history.
- Expose read-only series data to UI consumers.
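A minimal sketch of the capped-history idea, assuming a simple halve-on-overflow downsampling scheme; none of this is an existing Core type:

```ts
// Minimal capped-history buffer; a sketch, not tied to any real Core type.
interface Sample { t: number; value: number; }

class SeriesBuffer {
  private samples: Sample[] = [];

  constructor(private readonly maxSamples = 2048) {}

  push(t: number, value: number): void {
    this.samples.push({ t, value });
    // On overflow, drop every other sample: the whole buffer halves in
    // resolution, and repeated overflows progressively coarsen old history
    // instead of discarding it outright.
    if (this.samples.length > this.maxSamples) {
      this.samples = this.samples.filter((_, i) => i % 2 === 0);
    }
  }

  // Read-only view for UI consumers.
  get series(): ReadonlyArray<Sample> {
    return this.samples;
  }
}
```

Halving keeps the long-run trend shape visible while bounding memory; a fixed-size ring buffer would be simpler but would forget the early game entirely.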
Run lifecycle/history
Provide a way to identify and persist run summaries (a possible persisted shape follows the list):
- Current run/session ID.
- Start/end timestamps if available.
- Basic run metadata.
- Final/summarized metric snapshots.
- Historical data for previous runs.
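A hypothetical persisted shape, assuming one JSON file per run; every field name here is a suggestion:

```ts
// Hypothetical persisted shape for one run; every field name is a suggestion.
interface RunSummary {
  runId: string;                        // e.g. a UUID minted at run start
  startedAt: number;                    // epoch ms, if the game exposes it
  endedAt?: number;                     // absent while a run is in progress
  meta: Record<string, string>;         // free-form run metadata
  finalValues: Record<string, number>;  // metricId -> final sampled value
}

// One JSON file per run keeps history cheap to list, prune, and diff.
function summaryPath(dir: string, runId: string): string {
  return `${dir}/runs/${runId}.json`;
}
```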
Ghost/reference data
Support loading one or more previous runs as comparison data (the alignment step is sketched after the list):
- Read previous metric series.
- Normalize/align times from run start.
- Expose comparison data in a format usable by chart UIs.
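A sketch of the time-alignment step, assuming stored timestamps are absolute and samples are in time order:

```ts
// Sketch of rebasing a stored series so t=0 is that run's start; chart UIs
// can then overlay the ghost and the current run on one x-axis.
interface Sample { t: number; value: number; }

function alignToRunStart(series: ReadonlyArray<Sample>): Sample[] {
  if (series.length === 0) return [];
  const t0 = series[0].t; // assumes samples are stored in time order
  return series.map(s => ({ t: s.t - t0, value: s.value }));
}
```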
Open questions
- Does Upload Labs expose a reliable run/session lifecycle event that Core can hook into?
- Where should run-history files be stored?
- Should raw time-series data be persisted, summarized, or both?
- What sampling interval is safe without hurting performance?
- Should Core provide chart widgets, or only data APIs while QoL owns rendering?
- Should metrics be opt-in per mod, so runs don't pay sampling overhead for metrics nothing consumes?
Acceptance criteria
- Feature mods can register metrics through a Core API without touching Core internals.
- Registered metrics are sampled for the current run with bounded memory growth.
- Run summaries persist across sessions, and previous runs can be loaded for comparison.
- The QoL dashboard can consume series and history data without duplicating collection logic.
Related