
Bump to v9: Restructure mathsys as library-first #40

Merged
alejandro-vaz merged 5 commits into main from development
Feb 22, 2026

Conversation

@alejandro-vaz (Owner) commented Feb 22, 2026

Summary by CodeRabbit

  • New Features

    • Added LaTeX output generation capability for mathematical expressions
    • Added support for function definitions and function calls
    • Enhanced command-line interface with new targets (Help, Version, Tokens, Check, Ast, Latex) and flag shortcuts (-q for quiet, -v for verbose)
    • Added module loading functionality
  • Documentation

    • Updated installation instructions in README
  • Chores

    • Bumped version from 7.0.0 to 9.0.0
    • Added strum dependency for enum serialization support

@alejandro-vaz alejandro-vaz self-assigned this Feb 22, 2026
coderabbitai bot commented Feb 22, 2026

Walkthrough

Version bump from v7 to v9 restructures mathsys from a binary-only tool into a library-first architecture. Major changes include modularization of tokenizer/parser/solver, redesign of the backend trait system, overhaul of CLI argument handling with enum-based types, visibility narrowing across syntax definitions, and establishment of a public lib.rs interface with a unified TRANSFORMERS pipeline.

Changes

Cohort / File(s) Summary
Configuration & Metadata
.github/CHANGELOG.md, .github/README.md, Cargo.toml, .gitignore, .vscode/settings.json
Version bump to 9.0.0, installation method changed from pip to cargo, readme field added, new strum dependency, and expanded changelog with v8/v9 entries. Gitignore rules adjusted for .DS_Store and mathsys entries.
CLI Entry & Argument Handling
entry.rs, issues.rs, settings/mod.rs, settings/noise.rs
Complete overhaul of File, Flag, Alias, and Target to use enums and tuple structs with parsing/expansion logic. Issue enum variants renamed/refactored (UnknownTarget→UnknownArgument). New settings module introduced with cli() constructor and Settings struct for argument accumulation and configuration.
Main Binary Driver
main.rs
Refactored from multi-function design to compact driver leveraging centralized mathsys runtime. Removed internal pipeline definitions; now imports from lib and uses TRANSFORMERS.run() with Data-variant-driven output handling.
Library Interface
lib.rs
New file establishing public library API with modular structure (prelude, issues, entry, tokenizer, parser, solver, syntax, backends, settings). Defines TRANSFORMERS pipeline, Run/Data structures for unified result representation, and public methods for tokenization, parsing, solving, and target-based output.
Tokenizer System
tokenizer/mod.rs, tokenizer/token.rs, tokenizer/tokenizer.rs
Reorganized from monolithic tokenizer.rs into mod.rs (pipeline) and token.rs (definitions). Responsibility and Kind enums, BindedToken/ShallowToken structs, ORDER static, and MAXLEN constant moved to token.rs; tokenization engine relocated to mod.rs as pub(super) methods.
Parser System
parser/mod.rs, parser/grammar.rs
Visibility reduced to pub(super) for Parser, Backpointer, Part, MINPOINTERS, and GRAMMAR. Parser methods changed from &mut self to &self. Grammar expanded with Function and Call nonterminals; BINDING→DEFINITION token rename. Token path corrections (tokenizer::token::Kind).
Solver System
solver/mod.rs, solver/context.rs, solver/nonterminal.rs, solver/solver.rs
Refactored from single solver.rs into modular structure with context (function tracking) and nonterminal (object definitions) submodules. Solver methods now in mod.rs. Object enum extended with Function and Call variants; Item loses getnt helper. Spawn signatures updated to include optional Context parameter.
Backend System
backends/mod.rs, backends/latex.rs, backends/traits.rs
New backends/mod.rs establishes Spawn and Backends traits with pub(super) visibility and dispatch attribute. Old backends/traits.rs deleted. LaTeX prelude imports expanded into multiple lines. Backend dispatch mechanism shifts from static traits to per-type spawn implementations.
Syntax Definitions
syntax/mod.rs, syntax/level1.rs, syntax/level2.rs, syntax/level3.rs, syntax/level4.rs, syntax/level5.rs, syntax/start.rs
All Level1–5 enums and contained structs narrowed to pub(crate) with pub(crate) field visibility. Start moved from start.rs into mod.rs. Spawn signatures updated throughout to include optional Context parameter. Function variant added to Level1; Call variant added to Level5. Definition, Node, Equation, Use, and all Level4–5 types now expose internal fields as pub(crate).
Prelude & Utilities
prelude.rs
Visibility narrowed for several re-exports to pub(super) (Colored, regex, IndexMap, ahash, smallvec, strum_macros). ExitCode added (replaces process::exit). Duration added under time. EnumString and AsRefStr expanded from strum_macros. Version added from rustc_version_runtime.
Test/Example Files
file.msm
Deleted simple mathematical assignment test file.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant main as main.rs
    participant TRANS as TRANSFORMERS<br/>(lib.rs)
    participant TZ as Tokenizer
    participant P as Parser
    participant S as Solver
    participant B as Backends
    participant Settings as Settings/Noise

    User->>main: run mathsys CLI
    main->>Settings: Settings::cli()
    Settings-->>main: configured Settings
    main->>TRANS: run(&settings)
    
    loop for each target
        TRANS->>TZ: tokenize(content)
        TZ-->>TRANS: Vec<BindedToken>
        
        TRANS->>P: parse(tokens)
        P-->>TRANS: Start (AST)
        
        TRANS->>S: solve(pool)
        S-->>TRANS: NonTerminal
        
        alt target == Tokens
            TRANS->>TRANS: tokens(&settings)
            TRANS-->>main: Data::Tokens
        else target == Ast
            TRANS-->>main: Data::Ast
        else target == Latex
            TRANS->>B: latex()
            B-->>TRANS: String representation
            TRANS-->>main: Data::Latex
        else target == Check/Help/Version
            TRANS-->>main: Data::Check/Version
        end
        
        main->>main: format & output result
    end
    
    main-->>User: exit code + console output

Poem

🔨 From binary chains to library light,
Mathsys splits itself just right.
Traits now dispatch, Context flows,
A nine-version journey—onward it goes!
Harsh refactor? Sure, but you needed this fight. 💪


Real talk though: This is a significant restructuring, and you've done the work. The shift from pub to pub(crate)/pub(super) across syntax definitions shows intentional API boundary-setting, and moving to a library-first architecture is the right call. Just make sure your public-facing lib.rs API is actually stable and well-tested—you don't want to bump to v10 in three months because the Data enum variants weren't quite right. The Spawn trait signature changes across every syntax level are pervasive; double-check that all implementations actually handle the Context parameter correctly. Good progress overall. 👊

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage — ⚠️ Warning: docstring coverage is 11.96%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
Description Check — ✅ Passed (check skipped: CodeRabbit’s high-level summary is enabled).
Title Check — ✅ Passed (check skipped: CodeRabbit wrote the PR title).


coderabbitai bot changed the title to Bump to v9: Restructure mathsys as library-first (Feb 22, 2026)
coderabbitai bot left a comment

Actionable comments posted: 37

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
syntax/level4.rs (1)

78-92: 🧹 Nitpick | 🔵 Trivial

Dense but correct Limit::spawn — consider a brief comment on the next reassignment trick.

The pattern on lines 82-84 where next is conditionally advanced to peek past an optional direction token is clever. It works, but someone reading this for the first time will do a double-take. One inline comment would save them 5 minutes.

Same note as level3.rs: the bare panic!() calls on lines 80, 81, 84 give no diagnostic context.

♻️ Add minimal context to panics
-    let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Variable(variable)))) = iterator.next() else {panic!()};
-    let Some(Item::NonTerminal(NonTerminal::Level2(approach))) = iterator.next() else {panic!()};
+    let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Variable(variable)))) = iterator.next() else {panic!("Limit::spawn: expected Variable")};
+    let Some(Item::NonTerminal(NonTerminal::Level2(approach))) = iterator.next() else {panic!("Limit::spawn: expected Level2 approach")};
     let mut next = iterator.next();
+    // If next item is a token (+/-), consume it as direction and advance to the next item
     let direction = if let Some(Item::Token(token)) = next {next = iterator.next(); Some(token.value == "+")} else {None};
-    let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Nest(nest)))) = next else {panic!()};
+    let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Nest(nest)))) = next else {panic!("Limit::spawn: expected Nest")};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@syntax/level4.rs` around lines 78 - 92, The Limit::spawn implementation uses
a tricky reassignment of the local variable next to peek past an optional
direction token; add a one-line inline comment above the block that sets
direction explaining that next is advanced to consume an optional Token so the
subsequent pattern match reads the following item. Also replace bare panic!()
calls in Spawn for Limit (the three occurrences that unwrap pattern matches on
iterator.next()) with panics that include context (e.g., which expected
Item/NonTerminal was missing and the variable being parsed) so failures in spawn
are descriptive; reference the function Spawn for Limit, the local variables
iterator and next, and the match patterns that expect Level5::Variable,
Level2(approach), and Level5::Nest.
syntax/level3.rs (1)

55-58: 🧹 Nitpick | 🔵 Trivial

Bare panic!() gives zero debugging context.

When this blows up in your face (and it will), you'll wish you knew what the unexpected item was. This pattern appears across multiple spawn implementations (level2, level3, level4, level5). At minimum, include a message. Even better, use unreachable!() if you truly expect it to be unreachable.

♻️ Add diagnostic info to panic
-        other => panic!()
+        other => unreachable!("unexpected item in Term::spawn: {other:?}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@syntax/level3.rs` around lines 55 - 58, The bare panic!() in the match over
Item in the spawn implementations (e.g., the match handling Item::Token,
Item::NonTerminal(NonTerminal::Level4(level4)) in level3.spawn and similar code
in level2/level4/level5) provides no diagnostic context; replace it with either
unreachable!() if that arm truly cannot happen, or panic! with a descriptive
message that includes the unexpected value (e.g., include other via debug
formatting) so failures show what Item variant and value caused the error.
syntax/level2.rs (1)

47-58: 🧹 Nitpick | 🔵 Trivial

Bare panic!() gives zero diagnostic context on malformed input.

Look, this pattern is everywhere in the codebase (level3, level4, level5 all do it), so I know you didn't start it here. But every one of these bare panic!() calls is a future debugging nightmare. When a grammar rule feeds an unexpected item variant, you get a crash with no indication of which spawn failed or what it received.

At minimum, consider unreachable!("Expression::spawn: unexpected item {other:?}") — since Item derives Debug, this costs nothing and saves hours later.

♻️ Suggested improvement
-        other => panic!()
+        other => unreachable!("Expression::spawn: unexpected item {other:?}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@syntax/level2.rs` around lines 47 - 58, The bare panic!() in the Spawn impl
for Expression (function spawn for Expression/NonTerminal::Level2) provides no
diagnostic info; replace it with a panic variant that includes the unexpected
item and context, e.g. use unreachable! or panic! with a formatted message like
"Expression::spawn: unexpected item {other:?}" so when matching Item::Token /
Item::NonTerminal fails you log the concrete `other` (Item implements Debug) and
the function name to aid debugging.
parser/mod.rs (2)

98-99: 🧹 Nitpick | 🔵 Trivial

Unused closure parameter iteration — use _.

Clippy will flag this. Quick fix:

♻️ Fix
-        chart.extend((0..(filtered.len() + 1)).map(|iteration| FastMap::new()));
-        waiting.extend((0..(filtered.len() + 1)).map(|iteration| FastMap::new()));
+        chart.extend((0..=filtered.len()).map(|_| FastMap::new()));
+        waiting.extend((0..=filtered.len()).map(|_| FastMap::new()));

(Also switched to 0..=filtered.len() which is equivalent and slightly more idiomatic for inclusive ranges.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@parser/mod.rs` around lines 98 - 99, The closure passed to map is using an
unused parameter `iteration`, causing a Clippy warning; update the range and
closures so the parameter is ignored (use `_`) and prefer the inclusive range
`0..=filtered.len()` for clarity: change the two lines that call
chart.extend(...) and waiting.extend(...) to iterate with `0..=filtered.len()`
and map(|_| FastMap::new()) respectively, referencing the existing identifiers
chart, waiting, filtered.len(), and FastMap::new().

140-149: ⚠️ Potential issue | 🟠 Major

Heavy cloning in the completion hot path.

Line 141 clones the entire backpointer set into pool. Line 142 clones backpointers from the chart, iterates, pushes, and collects into a Vec. Line 148 clones the entire completed set on every successful completion. In an Earley parser, complete is called a lot. For non-trivial grammars and inputs, this cloning overhead will add up.

I'm not saying rewrite it today — but be aware this is your performance bottleneck. If you start seeing slowness on larger inputs, this is where to look first. Indices or Rc-based sharing could reduce allocation pressure significantly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@parser/mod.rs` around lines 140 - 149, The completion hot path is doing heavy
cloning:
pool.entry(pointer.clone()).or_default().extend(self.review(...).clone()),
cloning backpointers from chart into a Vec while pushing pointer, and cloning
completed before extending agenda; locate the block around pool.entry, the
creation of backpointers, self.recall, and the agenda.extend(completed.clone())
calls and replace full clones with shared references (Rc/Arc or indices) or
in-place mutation: store Rc<SmallVec<...>> in pool and chart so you can clone
the Rc cheaply, map/backpointer-modify by creating a new shared wrapper with the
extra pointer (or store pointer lists as small immutable shared slices), and
change agenda.extend(completed.clone()) to extend from a slice/iterator of
shared handles or indices instead of cloning the whole collection to eliminate
repeated allocations.
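The Rc-based sharing the comment suggests can be sketched in isolation. Here `share` is a hypothetical helper standing in for the parser's pool/chart handling, not code from this PR:

```rust
use std::rc::Rc;

// Hypothetical stand-in for a shared backpointer set: cloning the Rc only
// bumps a reference count instead of deep-copying the Vec.
fn share(set: &Rc<Vec<u32>>) -> Rc<Vec<u32>> {
    Rc::clone(set) // O(1): no allocation, no element copies
}

fn main() {
    let set = Rc::new(vec![1, 2, 3]);
    let shared = share(&set);
    assert!(Rc::ptr_eq(&set, &shared)); // same allocation is shared
    assert_eq!(Rc::strong_count(&set), 2);
}
```

In the completion loop this turns per-completion deep clones into pointer copies; the trade-off is that shared sets are immutable, so sets that still grow need either Rc::make_mut or an index-based design.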
syntax/level1.rs (1)

58-70: 🛠️ Refactor suggestion | 🟠 Major

All spawn implementations use bare panic!() / .unwrap() with no error context.

Same issue as flagged for Function::spawn — Definition, Node, and Equation all have bare panic!() or .unwrap() with no diagnostic message. When a grammar mismatch causes one of these to blow up, you'll get a stack trace pointing at a generic panic with no indication of which spawn or what item caused it. Even a panic!("Definition::spawn: unexpected item") would save you 20 minutes of debugging. Be kind to future-you.

Also applies to: 101-105, 114-120

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@syntax/level1.rs` around lines 58 - 70, Replace bare panic!() and .unwrap()
in spawn implementations with diagnostics: in Definition::spawn (and similarly
Node::spawn, Equation::spawn, Function::spawn) change panic!() to a panic that
includes the type and the unexpected item (e.g., "Definition::spawn: unexpected
item {:?}") and replace variable.unwrap()/value.unwrap() with expect(...) or map
to Result that include messages like "Definition::spawn: missing variable" /
"Definition::spawn: missing value"; apply the same pattern to the other spawn
functions to ensure failures report which spawn and what was missing or
unexpected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/CHANGELOG.md:
- Around line 210-234: Add blank lines above and below each markdown heading in
the new sections (e.g., "## v8 -- Semantic solver", "### Added", "### Changed",
"### Removed", "### Fixed", and the "## v9 -- Semantic-driven disambiguation"
block and its subheadings) so they conform to MD022, and ensure the file ends
with a single trailing newline to satisfy MD047; update the section blocks and
list spacing accordingly while preserving existing text content and numbering.

In @.gitignore:
- Around line 9-10: Remove the redundant `.DS_Store` pattern from the
.gitignore: keep the recursive glob pattern `**/.DS_Store` and delete the
explicit `.DS_Store` line so the single entry covers both root and nested
`.DS_Store` files.

In @.vscode/settings.json:
- Around line 40-48: The current color pair uses color "#00FF00" and
backgroundColor "#FF00FF" which is visually harsh; update the object with a less
aggressive combination by either changing "color" to a softer green like
"#39FF14" or switching "color" to a dark foreground such as "#1A1A1A" while
keeping "backgroundColor" as the magenta, or alternatively tone down
"backgroundColor" to a darker/translucent magenta; modify the keys "color"
and/or "backgroundColor" in the object where "tag": "!" is defined to implement
one of these options.
- Line 53: The removal of the "python.analysis.ignore" setting causes Pylance to
analyze the entire .venv and produce noise; restore the JSON entry
"python.analysis.ignore": [".venv/**"] in the .vscode/settings.json so the
analyzer ignores the virtual environment directory (ensure the key name is exact
and the pattern ".venv/**" is present under the root settings object).

In `@backends/mod.rs`:
- Around line 66-68: The Spawn trait currently requires Backends via "trait
Spawn: Backends", coupling construction to rendering; remove the supertrait so
Spawn is independent (change to "pub(super) trait Spawn { ... }") and move any
Backends requirement to impl sites or the spawn method signature as needed
(e.g., impl<T: Backends> Spawn for T or add a where Self: Backends on specific
impls) so types can be constructed without forcing a global Backends bound;
update all impl blocks and callers of Spawn::spawn (and any uses of latex()) to
include Backends bounds only where rendering is actually required.
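A minimal sketch of the decoupling suggested above, with all names (Node, render) as hypothetical stand-ins: Spawn carries no supertrait, and the Backends bound appears only at the call site that actually renders.

```rust
// Spawn is independent of rendering; Backends is a separate concern.
trait Spawn { fn spawn() -> Self; }
trait Backends { fn latex(&self) -> String; }

struct Node;
impl Spawn for Node { fn spawn() -> Self { Node } }
impl Backends for Node { fn latex(&self) -> String { "n".into() } }

// The Backends bound is required only here, where rendering happens;
// types that merely construct need only Spawn.
fn render<T: Spawn + Backends>() -> String {
    T::spawn().latex()
}

fn main() {
    assert_eq!(render::<Node>(), "n");
}
```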
- Around line 14-58: Remove the unused syntax imports from the large use block
and keep only the actual symbols in use: Item, NonTerminal and Context;
specifically, update the use self::super::{ solver::{ nonterminal::{ Item,
NonTerminal }, context::Context } } import pattern and delete all the unused
syntax imports (Start, Definition, Function, Level1, Node, Equation, Use,
Level2, Expression, Level3, Term, Level4, Limit, Factor, Level5, Absolute,
Infinite, Variable, Nest, Tensor, Whole, Undefined, Rational, Call) so the
module only imports Item, NonTerminal and Context.

In `@entry.rs`:
- Around line 55-67: The Alias::expand implementation duplicates the q/v
mapping; instead drive expansion from the single source FLAGLIASES (or a shared
public const) so mappings live in one place: remove the hardcoded match in
Alias::expand and lookup each char against FLAGLIASES (or call a new public
helper in issues.rs that maps a char -> Option<Flag>), push the resulting Flag
into converted or return Issue::UnknownAliasCharacter (preserving alias and at)
when no mapping exists; reference Alias::expand, FLAGLIASES, and
Issue::UnknownAliasCharacter to locate the code to change.
- Around line 36-44: The File::read implementation swallows the underlying I/O
error and uses to_str().unwrap(), causing misleading "File not found" errors and
potential panics; update File::read (and the Issue variant usage) to capture the
Result from readFile(&self.0) and map Err to an Issue that preserves the
original io::Error (or its display) instead of calling .ok(), and replace
self.0.to_str().unwrap() with a non-panicking conversion such as
self.0.to_string_lossy() when constructing Issue::FileNotFound so path and
actual error details are included in the returned Result; locate the logic in
File::read, the call to readFile, and the creation of Issue::FileNotFound to
make this change.

In `@issues.rs`:
- Line 80: The current consume method takes a pointer to self and truncates it
to u8, producing a meaningless exit code; replace that with a deterministic
mapping: either return a constant nonzero code (e.g. 1u8) for errors, or map
each Issue variant to a distinct code by matching on Issue in the consume method
(or mark the enum with #[repr(u8)] and cast the variant to u8). Update the
consume(self) -> u8 implementation on the Issue type to use an explicit match or
a repr-based cast instead of "&self as *const Issue as u8" so exit codes are
deterministic and meaningful.
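The explicit-match option can be sketched like this; the two variants are placeholders, not the crate's full Issue enum:

```rust
// Placeholder Issue enum: each variant maps to a deterministic, documented
// exit code via an explicit match, instead of truncating a pointer address.
enum Issue {
    UnknownArgument,
    FileNotFound,
}

impl Issue {
    fn consume(self) -> u8 {
        match self {
            Issue::UnknownArgument => 1,
            Issue::FileNotFound => 2,
        }
    }
}

fn main() {
    assert_eq!(Issue::UnknownArgument.consume(), 1);
    assert_eq!(Issue::FileNotFound.consume(), 2);
}
```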

In `@lib.rs`:
- Around line 66-77: The Run.time is being measured before compute runs because
struct fields evaluate in order; fix by capturing the computation result first
and then measuring elapsed: inside run(&self, settings: &Settings) iterate
targets, call let data = self.compute(target, settings) (using the same
Time::now() start), then let elapsed = time.elapsed(), and push Run { time:
elapsed, data }; ensure you still reset time = Time::now() (or set start) for
the next iteration as originally intended.
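The fix boils down to sequencing: bind the computed data first, then read the elapsed time, since struct-literal fields evaluate in written order. A self-contained sketch (Run, compute, and timed_run are illustrative stand-ins, not the lib.rs types):

```rust
use std::time::Instant;

struct Run { millis: u128, data: u64 }

fn compute() -> u64 {
    (0..1_000_000u64).sum()
}

fn timed_run() -> Run {
    let start = Instant::now();
    let data = compute();                     // evaluated first
    let millis = start.elapsed().as_millis(); // measured after compute finishes
    Run { millis, data }
}

fn main() {
    let run = timed_run();
    assert_eq!(run.data, 499_999_500_000);
    assert!(run.millis < 60_000); // elapsed covers the computation
}
```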
- Around line 49-50: The declaration of VERSION uses a static binding but should
be a compile-time constant; replace the static declaration with a const
definition (i.e., change "static VERSION: usize = 9;" to "const VERSION: usize =
9;") so the value is inlined and no static allocation or address is required;
update any code that relies on taking a reference to VERSION to use the value
directly if needed.
- Around line 85-90: The computed percentage in the Target::Tokens arm is a
ratio (tokens.len() as f32 / MAXLEN as f32) but is later formatted with a
percent sign in main.rs, so change the computation in the Data::Tokens
construction (Target::Tokens branch) to multiply the ratio by 100; update the
percentage field assignment (where tokens.len() and MAXLEN are used) to
(tokens.len() as f32 / MAXLEN as f32) * 100 so printed output shows a true
percentage.
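In isolation the fix is just the ×100 scaling; MAXLEN here is an arbitrary stand-in value, not the tokenizer's real constant:

```rust
// Stand-in capacity; the real MAXLEN lives in the tokenizer module.
const MAXLEN: usize = 200;

// A ratio printed with a '%' sign must be scaled by 100 first.
fn percentage(tokens: usize) -> f32 {
    (tokens as f32 / MAXLEN as f32) * 100.0
}

fn main() {
    assert_eq!(percentage(50), 25.0);   // 50/200 = 0.25 → 25%
    assert_eq!(percentage(200), 100.0);
}
```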
- Around line 91-94: The match on Target::Check is just propagating errors from
self.start and can be simplified using the ? operator: call
self.start(settings)? and then return Data::Check; update the arm for
Target::Check so it invokes self.start(settings)? instead of matching Ok/Err and
remove the explicit Err return; keep the rest of the flow unchanged and ensure
the arm still yields Data::Check on success.
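The suggested shape of that arm, shown with hypothetical stand-ins for Data and Issue: the ? operator propagates the error, and the success path falls through to Data::Check.

```rust
#[derive(Debug, PartialEq)]
enum Data { Check }

#[derive(Debug, PartialEq)]
struct Issue;

// Stand-in for self.start(settings): may fail with an Issue.
fn start(ok: bool) -> Result<(), Issue> {
    if ok { Ok(()) } else { Err(Issue) }
}

fn check(ok: bool) -> Result<Data, Issue> {
    start(ok)?;      // error propagates here, no explicit match needed
    Ok(Data::Check)  // success falls through
}

fn main() {
    assert_eq!(check(true), Ok(Data::Check));
    assert_eq!(check(false), Err(Issue));
}
```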
- Around line 8-10: The repo uses unstable Rust features
(#![feature(try_trait_v2)], #![feature(default_field_values)],
#![feature(if_let_guard)]) and although rust-toolchain.toml pins nightly, add a
short README note explaining that the crate requires the nightly toolchain and
list those three feature flags with a brief one-line explanation of each and how
to set the toolchain (referencing rust-toolchain.toml) so contributors know to
use nightly before building or contributing.

In `@main.rs`:
- Line 8: Remove the unused crate-level feature gate in main.rs: delete the
#![feature(if_let_guard)] attribute since the binary crate does not use if-let
guards; the library crate (settings/mod.rs) should retain its own feature
declaration independently. Ensure no other code in main.rs relies on if-let
guard syntax before removing.

In `@parser/mod.rs`:
- Around line 88-92: The three helper methods wait, recall, and review are
written as single very long lines; split each function signature and body across
multiple lines to improve readability and maintainability: put the fn
declaration on its own line, break long generic/borrowed parameter lists onto
separate lines, and move the body into a small block with braces and a clear
statement sequence (e.g., capture symbol or entry in a local variable, then call
entry(...).or_default().insert(...) or return the or_default() result). Keep the
same names (wait, recall, review), attributes (#[inline(always)]), and types so
behavior doesn't change; just reformat and introduce short local variables to
make the logic readable in the hot loop.
- Around line 90-92: The method review currently mutates the chart by calling
entry().or_default(); change review to be a true read-only accessor by using
chart[index as usize].get(&state) (or get_mut only in recall) and return either
Option<&FastSet<...>> or a usize (e.g., map(|s| s.len()).unwrap_or(0)) so
callers (e.g., the site at line 132) don't insert empty entries; keep recall as
the mutable entry-or-default writer and update any call sites of review to
handle the Option/usize result accordingly.
- Around line 93-95: Change the signature of run to accept a slice
(&[BindedToken<'parsing>]) instead of &Vec<...>, then build filtered from the
slice (e.g., iter().cloned().filter(...).collect()). Replace the retain closure
with a concise predicate that avoids the if-let and unwrap, e.g. use
ORDER.get(&token.kind).map_or(true, |(_, resp)| !matches!(resp,
Responsibility::Null)) so unknown kinds won’t panic and tokens with
Responsibility::Null are filtered out; update any call sites of run accordingly.
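The map_or predicate can be tried standalone; Kind, Responsibility, and the order map below are simplified stand-ins for the tokenizer's ORDER table (which in the real code maps to a tuple):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the tokenizer's Kind/Responsibility types.
#[derive(PartialEq, Eq, Hash, Clone, Copy)]
enum Kind { Number, Space }

enum Responsibility { Null, Carry }

// Keep unknown kinds (no panic) and drop tokens whose responsibility is Null.
fn keep(order: &HashMap<Kind, Responsibility>, kind: Kind) -> bool {
    order.get(&kind).map_or(true, |resp| !matches!(*resp, Responsibility::Null))
}

fn main() {
    let mut order = HashMap::new();
    order.insert(Kind::Space, Responsibility::Null);
    assert!(!keep(&order, Kind::Space));  // Null responsibility: filtered out
    assert!(keep(&order, Kind::Number));  // unknown kind: kept, no panic
}
```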
- Line 82: MINPOINTERS is declared as a static but used as an array length in
SmallVec type annotations, which requires a compile-time constant; change the
declaration from a static to a const (e.g., replace the current pub(super)
static MINPOINTERS: usize = 2; with a pub(super) const MINPOINTERS: usize = 2;)
so MINPOINTERS becomes a const expression usable in array sizes wherever
referenced in parser/mod.rs and other modules using SmallVec<[T; MINPOINTERS]>.

In `@prelude.rs`:
- Around line 46-50: The build breaks because prelude.rs imports
regex::bytes::Regex (and RegexSet/Captures) but tokenizer/mod.rs calls .as_str()
on Regex, which only exists on the non-bytes Regex; fix by switching the prelude
to export the regular (string) API: replace pub(super) use regex::bytes::{Regex,
RegexSet, Captures} with pub(super) use regex::{Regex, RegexSet, Captures} so
Regex::as_str() and RegexSet::new(&[&str]) work in tokenizer/mod.rs
(alternative: if you must keep bytes::Regex, extract and store the original
pattern strings where RegexSet::new is called instead, but preferred fix is to
import regex::{Regex, RegexSet, Captures}).

In `@settings/mod.rs`:
- Around line 51-59: The current apply method uses return Ok(match ...) with ?
inside match arms which is legal but obscure; change apply(&mut self, argument:
Argument) to perform a plain match on argument (matching
Argument::Alias,::File,::Flag,::Target) where Alias expansion calls self.apply
for each expanded flag (propagating errors with ?), File pushes to self.files,
Flag matches Flag::Quiet/Verbose and calls self.noise.change(...), and Target
pushes to self.targets, then return Ok(()) after the match to make error
propagation and success explicit; refer to the apply function and the Argument
and Flag variants to locate the logic to refactor.
- Around line 39-45: The cli() function is doing too much in one expression;
split the pipeline into clear steps: call getArguments(), skip(1) and collect
into a Vec<String>, then iterate and map each argument with a small matcher that
(1) detects files using std::path::Path::new(&arg).extension().and_then(|s|
s.to_str()) == Some("msm") to create Argument::File(File(arg.clone())), (2)
handles flags via Flag::try_from(&arg as &str)? returning Argument::Flag, (3)
handles aliases starting with "-" to create Argument::Alias, (4) parses Target
with argument.parse::<Target>()?, and otherwise returns an
Err(Issue::UnknownArgument(arg.clone())); collect::<Result<Vec<Argument>,
Issue>>()? and then call Settings::set(collected_args) and return the
result—this makes errors point to the specific step and uses Path::extension()
for robust file detection.
- Around line 34-37: The Settings struct uses unstable default field values
(e.g., the Settings struct declaration and fields like files, targets, noise)
and your lib.rs already enables the nightly feature gate, so update project
tooling/docs to require a compatible nightly toolchain: add a
rust-toolchain.toml pinning the minimum nightly (the feature has been available
since nightly 2024-12-10) or clearly document the required nightly version in
README/CONTRIBUTING; also verify lib.rs contains the matching
#![feature(default_field_values)] gate so local builds fail fast if the
toolchain is wrong.

In `@settings/noise.rs`:
- Around line 20-29: The pattern matches in Noise::verbose and Noise::quiet use
an unused binding named `other` and the debug/zero methods use verbose if-let
logic; replace `other` with `_` to silence the unused-binding warning and
simplify all four methods to use the matches!() macro for clarity and
conciseness (i.e., implement verbose, quiet, debug, and zero using
matches!(self, Noise::Debug | Noise::Verbose), matches!(self, Noise::Quiet |
Noise::Zero), matches!(self, Noise::Debug), and matches!(self, Noise::Zero)
respectively), keeping method names `verbose`, `quiet`, `debug`, and `zero`
unchanged.
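The four predicates spelled out with matches!(), on a local copy of the Noise enum as described in the review (not the crate's actual file):

```rust
#[derive(Clone, Copy)]
enum Noise { Zero, Quiet, Verbose, Debug }

impl Noise {
    // Each predicate collapses to a single matches!() expression.
    fn verbose(self) -> bool { matches!(self, Noise::Debug | Noise::Verbose) }
    fn quiet(self)   -> bool { matches!(self, Noise::Quiet | Noise::Zero) }
    fn debug(self)   -> bool { matches!(self, Noise::Debug) }
    fn zero(self)    -> bool { matches!(self, Noise::Zero) }
}

fn main() {
    assert!(Noise::Debug.verbose());
    assert!(!Noise::Quiet.verbose());
    assert!(Noise::Zero.quiet());
    assert!(Noise::Zero.zero());
}
```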

In `@solver/context.rs`:
- Line 11: Replace the fragile relative import in context.rs that uses
super::super::syntax::level5::Variable with an absolute crate path; change the
import to use crate::syntax::level5::Variable so the module reference is stable
(locate the use statement that imports Variable and update it accordingly).
- Around line 25-26: Rename the non-idiomatic camelCase methods registerFn and
isFn to snake_case register_fn and is_fn in the impl for Context (and update
every callsite), and update their definitions' signatures accordingly; search
for other camelCase methods mentioned (e.g., augmentVariables, getArguments) and
follow the same rename pattern if present to avoid clippy warnings and
compilation errors—ensure you update all references, trait impls, and
documentation/comments that mention registerFn/isFn so the codebase compiles
cleanly.

In `@solver/mod.rs`:
- Around line 96-105: The choose function currently only handles the Variable vs
Call Factor disambiguation and panics on any other ambiguity; change it so it
never panics: update fn choose (and its signature if needed) to return a
Result<bool, ChooseError> or at least a non-panicking default (e.g., prefer one
side deterministically) and replace the panic arm (the match arm handling other
=> ...) with code that either returns Err(ChooseError{first,second,building}) or
returns a documented default preference; ensure the recursive flip using
reincident still avoids infinite recursion by returning Err on reincident
instead of panicking, and add a short doc comment on choose describing the
chosen fallback behavior and the Variable vs Call special-case (referencing
choose, reincident, and building).
- Around line 55-57: The current run function is a dense one-liner using an
awkward closure `if let Part::NonTerminal(Object::Start) = backpointer.symbol
{true} else {false}` and a complex build/unwrap pattern; refactor by extracting
the find into a named variable (e.g., start_bp) using matches! to test the
symbol (find(|bp| matches!(bp.symbol, Part::NonTerminal(Object::Start)))), then
call self.build(start_bp, &mut Context::new(), true) and match the returned
Partition with `let Partition::NonTerminal(NonTerminal::Start(start)) = ... else
{ return Err(Issue::SyntaxError) };` before returning Ok(start); keep references
to Backpointer, build, Context::new, Partition::NonTerminal, and
NonTerminal::Start to locate the code.
- Around line 74-95: The panic in solve (the match arm for candidates length 0)
should be replaced with a descriptive panic that includes runtime context:
report the current index, a debug representation of candidates (e.g., candidates
or candidates.len()), and relevant identifiers like the building being resolved
and any small context from Context; update the match arm in fn solve to
construct a formatted error message (using format! or panic!-with-args) instead
of the bare panic!(), so failures print actionable diagnostics to help
debugging.

In `@solver/nonterminal.rs`:
- Around line 99-126: The Level1–Level5 pass-through arms in Object::summon
silently discard any items after the first, risking silent data loss; update
each of the Object::Level1, Object::Level2, Object::Level3, Object::Level4 and
Object::Level5 match arms (the arms that pattern-match Item::NonTerminal(...)
and take items.into_iter().next().unwrap()) to assert that exactly one item was
provided (e.g. assert_eq!(items.len(), 1) or equivalent) before extracting the
first item, so any violation fails loudly and makes debugging easier.

In `@syntax/level1.rs`:
- Around line 140-146: The load method on Use silently swallows file read errors
due to the `if let Ok(content) = File(...).read()` pattern; change it to
propagate the read error instead of returning Ok(()) on failure: replace the `if
let` with using the `?` operator or a `match` so `let content =
File(self.module.clone().into()).read()?;` (or map Err to an Issue) then
continue running transformers.tokenizer, parser, solver and set `self.start =
Some(start)` before returning Ok(()); ensure any io errors from File::read are
converted into the function's Result<..., Issue> (e.g., via From conversion or
mapping) so missing/unreadable modules produce an Err from Use::load rather than
a silent success.
- Around line 80-93: Rename Context::registerFn to register_fn and Context::isFn
to is_fn in the Context type and update all call sites accordingly (e.g., the
call in Function::spawn). In Function::spawn replace the three bare panic!()
calls with panic! macro calls that include descriptive messages indicating what
Item/variant was expected (e.g., "expected first item to be
NonTerminal::Level5::Variable", "expected last item to be NonTerminal::Level2",
and "expected remaining items to be NonTerminal::Level5::Variable") so failures
show clear context for debugging.
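The `Use::load` change above amounts to replacing the `if let Ok(...)` with `?`. A minimal sketch — `fs::read_to_string` stands in for `File::read`, the transformer pipeline is elided, and the `From` impl does the error conversion the comment asks for:

```rust
use std::fs;
use std::io;

#[derive(Debug)]
enum Issue { Io(io::Error) }

// Converts io errors into the function's Issue type so `?` works.
impl From<io::Error> for Issue {
    fn from(error: io::Error) -> Self { Issue::Io(error) }
}

fn load(module: &str) -> Result<(), Issue> {
    // A missing or unreadable module now surfaces as Err instead of Ok(()).
    let _content = fs::read_to_string(module)?;
    // tokenizer / parser / solver would run on _content here
    Ok(())
}

fn main() {
    assert!(load("definitely-missing.msm").is_err());
}
```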

In `@syntax/level5.rs`:
- Line 5: The import use core::panic; is unused and shadows nothing — remove
that line from the file; leave calls to the panic!() macro as-is (panic! is a
macro in scope, not provided by the core::panic module), so delete the use
core::panic; statement referencing core::panic to fix the unused import warning.

In `@syntax/mod.rs`:
- Around line 42-45: The match on self.stream.len() creates an unused binding
named other; change that pattern from other => (r"\[", r"\]") to _ => (r"\[",
r"\]") so the unused value is not bound (update the match in the function/method
containing self.stream.len()).
- Around line 49-53: The one-line impls should be expanded and made explicit: in
impl Spawn for Start refactor spawn to iterate items with a match (or map with a
closure that returns Result) instead of burying an if-let panic inside map — use
a clear panic or better return an Err with context naming the unexpected Item
variant when matching Item::NonTerminal(NonTerminal::Level1(level1)) fails, and
construct NonTerminal::Start(Self { stream }) from the collected Vec; in impl
Start refactor modules to write the for statement as a normal loop over &mut
self.stream that matches Level1::Use(element) and calls
element.load(transformers, settings)?; after the loop return Ok(()) instead of
wrapping the for-loop in Ok(...). Ensure references to Spawn::spawn,
Start::modules, Item::NonTerminal, NonTerminal::Start, and Level1::Use are used
to locate the code.

In `@tokenizer/mod.rs`:
- Around line 62-68: The match arm binds an unused variable named `other` when
matching `token.kind`; change that unused binding to `_` to avoid the warning —
update the match in the loop that inspects `token.kind` (the arms for
`Kind::NEWLINES` and `Kind::ENDOFFILE` remain the same) so the catch-all arm
uses `_` instead of `other`, keeping the existing side-effect of updating
`position.column`.

In `@tokenizer/token.rs`:
- Around line 107-112: ShallowToken is currently opaque (fields start and kind
are private) but Vec<ShallowToken> is exposed via Data::Tokens; either make the
fields intentionally accessible or document the opacity. Fix by adding public
accessor methods on ShallowToken (e.g., pub fn start(&self) -> u64 and pub fn
kind(&self) -> Kind) so callers can read those fields, or add a clear doc
comment on the ShallowToken struct explaining it is intentionally opaque and
only for debugging/diagnostic use; reference ShallowToken, its fields start and
kind, and Data::Tokens when making the change.

---

Outside diff comments:
In `@parser/mod.rs`:
- Around line 98-99: The closure passed to map is using an unused parameter
`iteration`, causing a Clippy warning; update the range and closures so the
parameter is ignored (use `_`) and prefer the inclusive range
`0..=filtered.len()` for clarity: change the two lines that call
chart.extend(...) and waiting.extend(...) to iterate with `0..=filtered.len()`
and map(|_| FastMap::new()) respectively, referencing the existing identifiers
chart, waiting, filtered.len(), and FastMap::new().
- Around line 140-149: The completion hot path is doing heavy cloning:
pool.entry(pointer.clone()).or_default().extend(self.review(...).clone()),
cloning backpointers from chart into a Vec while pushing pointer, and cloning
completed before extending agenda; locate the block around pool.entry, the
creation of backpointers, self.recall, and the agenda.extend(completed.clone())
calls and replace full clones with shared references (Rc/Arc or indices) or
in-place mutation: store Rc<SmallVec<...>> in pool and chart so you can clone
the Rc cheaply, map/backpointer-modify by creating a new shared wrapper with the
extra pointer (or store pointer lists as small immutable shared slices), and
change agenda.extend(completed.clone()) to extend from a slice/iterator of
shared handles or indices instead of cloning the whole collection to eliminate
repeated allocations.
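The cheap-clone property this suggestion relies on can be seen in isolation — cloning an `Rc` copies a pointer and bumps a refcount rather than duplicating the list (`Backpointer` is a stand-in type):

```rust
use std::rc::Rc;

type Backpointer = u32;

fn main() {
    let backpointers: Rc<Vec<Backpointer>> = Rc::new(vec![1, 2, 3]);
    // Cheap: both handles point at the same allocation.
    let shared_in_pool = Rc::clone(&backpointers);
    let shared_in_chart = Rc::clone(&backpointers);
    assert!(Rc::ptr_eq(&shared_in_pool, &shared_in_chart));
    assert_eq!(Rc::strong_count(&backpointers), 3);
    // Adding the extra pointer builds one new shared list,
    // leaving earlier holders untouched.
    let mut extended = (*shared_in_pool).clone();
    extended.push(4);
    let extended = Rc::new(extended);
    assert_eq!(extended.len(), 4);
    assert_eq!(shared_in_chart.len(), 3);
}
```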

In `@syntax/level1.rs`:
- Around line 58-70: Replace bare panic!() and .unwrap() in spawn
implementations with diagnostics: in Definition::spawn (and similarly
Node::spawn, Equation::spawn, Function::spawn) change panic!() to a panic that
includes the type and the unexpected item (e.g., "Definition::spawn: unexpected
item {:?}") and replace variable.unwrap()/value.unwrap() with expect(...) or map
to Result that include messages like "Definition::spawn: missing variable" /
"Definition::spawn: missing value"; apply the same pattern to the other spawn
functions to ensure failures report which spawn and what was missing or
unexpected.

In `@syntax/level2.rs`:
- Around line 47-58: The bare panic!() in the Spawn impl for Expression
(function spawn for Expression/NonTerminal::Level2) provides no diagnostic info;
replace it with a panic variant that includes the unexpected item and context,
e.g. use unreachable! or panic! with a formatted message like
"Expression::spawn: unexpected item {other:?}" so when matching Item::Token /
Item::NonTerminal fails you log the concrete `other` (Item implements Debug) and
the function name to aid debugging.

In `@syntax/level3.rs`:
- Around line 55-58: The bare panic!() in the match over Item in the spawn
implementations (e.g., the match handling Item::Token,
Item::NonTerminal(NonTerminal::Level4(level4)) in level3.spawn and similar code
in level2/level4/level5) provides no diagnostic context; replace it with either
unreachable!() if that arm truly cannot happen, or panic! with a descriptive
message that includes the unexpected value (e.g., include other via debug
formatting) so failures show what Item variant and value caused the error.

In `@syntax/level4.rs`:
- Around line 78-92: The Limit::spawn implementation uses a tricky reassignment
of the local variable next to peek past an optional direction token; add a
one-line inline comment above the block that sets direction explaining that next
is advanced to consume an optional Token so the subsequent pattern match reads
the following item. Also replace bare panic!() calls in Spawn for Limit (the
three occurrences that unwrap pattern matches on iterator.next()) with panics
that include context (e.g., which expected Item/NonTerminal was missing and the
variable being parsed) so failures in spawn are descriptive; reference the
function Spawn for Limit, the local variables iterator and next, and the match
patterns that expect Level5::Variable, Level2(approach), and Level5::Nest.

Comment on lines +210 to +234
1. Full issue coverage now.

## v8 -- Semantic solver
### Added
1. Local module importing from filesystem.
1. Added `README.md` in `Cargo.toml`.
### Changed
1. Contextual semantic solver replaced old heuristic solver.
1. Improved filetree and directories.
### Removed
1. Old numerical solver.
### Fixed
1. Imported modules now must have `.msm` file extension.

## v9 -- Semantic-driven disambiguation
### Added
1. Functions.
1. Ambiguity resolution for function calls and implicit multiplication based on context.
### Changed
1. Definitions now are done with `:=` instead of `==` and `=` always produces equations.
### Removed
1. Old declaration constructs.
1. Locking mechanics.
### Fixed
1. CLI behaviour under stress.

⚠️ Potential issue | 🟡 Minor

Fix markdown formatting: missing blank lines around headings and missing trailing newline.

Come on, the rest of the changelog is formatted properly — don't drop the ball on the new sections. Static analysis flagged numerous MD022 violations (headings need blank lines above and below) and a missing trailing newline (MD047).

📝 Proposed formatting fix
 1. Full issue coverage now.
+
 ## v8 -- Semantic solver
+
 ### Added
 1. Local module importing from filesystem.
 1. Added `README.md` in `Cargo.toml`.
+
 ### Changed
 1. Contextual semantic solver replaced old heuristic solver.
 1. Improved filetree and directories.
+
 ### Removed
 1. Old numerical solver.
+
 ### Fixed
 1. Imported modules now must have `.msm` file extension.
+
 ## v9 -- Semantic-driven disambiguation
+
 ### Added
 1. Functions.
 1. Ambiguity resolution for function calls and implicit multiplication based on context.
+
 ### Changed
 1. Definitions now are done with `:=` instead of `==` and `=` always produces equations.
+
 ### Removed
 1. Old declaration constructs.
 1. Locking mechanics.
+
 ### Fixed
 1. CLI behaviour under stress.
+
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
1. Full issue coverage now.
## v8 -- Semantic solver
### Added
1. Local module importing from filesystem.
1. Added `README.md` in `Cargo.toml`.
### Changed
1. Contextual semantic solver replaced old heuristic solver.
1. Improved filetree and directories.
### Removed
1. Old numerical solver.
### Fixed
1. Imported modules now must have `.msm` file extension.
## v9 -- Semantic-driven disambiguation
### Added
1. Functions.
1. Ambiguity resolution for function calls and implicit multiplication based on context.
### Changed
1. Definitions now are done with `:=` instead of `==` and `=` always produces equations.
### Removed
1. Old declaration constructs.
1. Locking mechanics.
### Fixed
1. CLI behaviour under stress.
1. Full issue coverage now.

## v8 -- Semantic solver

### Added
1. Local module importing from filesystem.
1. Added `README.md` in `Cargo.toml`.

### Changed
1. Contextual semantic solver replaced old heuristic solver.
1. Improved filetree and directories.

### Removed
1. Old numerical solver.

### Fixed
1. Imported modules now must have `.msm` file extension.

## v9 -- Semantic-driven disambiguation

### Added
1. Functions.
1. Ambiguity resolution for function calls and implicit multiplication based on context.

### Changed
1. Definitions now are done with `:=` instead of `==` and `=` always produces equations.

### Removed
1. Old declaration constructs.
1. Locking mechanics.

### Fixed
1. CLI behaviour under stress.
🧰 Tools
🪛 LanguageTool

[style] ~234-~234: ‘under stress’ might be wordy. Consider a shorter alternative.
Context: ...g mechanics. ### Fixed 1. CLI behaviour under stress.

(EN_WORDINESS_PREMIUM_UNDER_STRESS)

🪛 markdownlint-cli2 (0.21.0)

[warning] 212-212: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 213-213: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 213-213: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 216-216: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 216-216: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 219-219: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 219-219: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 221-221: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 221-221: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 224-224: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 225-225: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 225-225: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 228-228: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 228-228: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 230-230: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 230-230: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 233-233: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above

(MD022, blanks-around-headings)


[warning] 233-233: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


[warning] 234-234: Files should end with a single newline character

(MD047, single-trailing-newline)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/CHANGELOG.md around lines 210 - 234, Add blank lines above and below
each markdown heading in the new sections (e.g., "## v8 -- Semantic solver",
"### Added", "### Changed", "### Removed", "### Fixed", and the "## v9 --
Semantic-driven disambiguation" block and its subheadings) so they conform to
MD022, and ensure the file ends with a single trailing newline to satisfy MD047;
update the section blocks and list spacing accordingly while preserving existing
text content and numbering.

Comment on lines 9 to +10
**/.DS_Store
.DS_Store

🧹 Nitpick | 🔵 Trivial

Redundant .DS_Store entry.

Line 9 (**/.DS_Store) already covers the root directory. The explicit .DS_Store on line 10 is unnecessary.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore around lines 9 - 10, Remove the redundant `.DS_Store` pattern
from the .gitignore: keep the recursive glob pattern `**/.DS_Store` and delete
the explicit `.DS_Store` line so the single entry covers both root and nested
`.DS_Store` files.

Comment on lines +40 to +48
{
"tag": "!",
"color": "#00FF00",
"strikethrough": false,
"underline": false,
"backgroundColor": "#FF00FF",
"bold": true,
"italic": false
},

🧹 Nitpick | 🔵 Trivial

That color combo is going to melt someone's retinas, bud.

#00FF00 (pure neon green) on #FF00FF (pure magenta) is about as subtle as a foghorn. It technically works, but it's going to be visually aggressive for anything longer than a glance. Consider toning down either the foreground or background — e.g., a softer green like #39FF14 on a dark/transparent background, or keeping the magenta background but using a dark foreground for contrast.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.vscode/settings.json around lines 40 - 48, The current color pair uses
color "#00FF00" and backgroundColor "#FF00FF" which is visually harsh; update
the object with a less aggressive combination by either changing "color" to a
softer green like "#39FF14" or switching "color" to a dark foreground such as
"#1A1A1A" while keeping "backgroundColor" as the magenta, or alternatively tone
down "backgroundColor" to a darker/translucent magenta; modify the keys "color"
and/or "backgroundColor" in the object where "tag": "!" is defined to implement
one of these options.

"rust-analyzer.inlayHints.typeHints.enable": false,
"rust-analyzer.inlayHints.renderColons": false,
"python.analysis.ignore": [".venv/**"]
"rust-analyzer.inlayHints.renderColons": false

⚠️ Potential issue | 🟠 Major

Heads up — dropping python.analysis.ignore for .venv/** will hurt you.

The AI summary notes that "python.analysis.ignore": [".venv/**"] was removed. Without it, Pylance now crawls your entire virtual environment on every analysis pass. You'll get a flood of false-positive errors and warnings from third-party package internals polluting the Problems panel. Put it back.

🛠️ Restore the ignore setting
     "python.analysis.diagnosticMode": "workspace",
+    "python.analysis.ignore": [".venv/**"],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.vscode/settings.json at line 53, The removal of the
"python.analysis.ignore" setting causes Pylance to analyze the entire .venv and
produce noise; restore the JSON entry "python.analysis.ignore": [".venv/**"] in
the .vscode/settings.json so the analyzer ignores the virtual environment
directory (ensure the key name is exact and the pattern ".venv/**" is present
under the root settings object).

Comment on lines +14 to +58
use self::super::{
solver::{
nonterminal::{
Item,
NonTerminal
},
context::Context
},
syntax::{
Start,
level1::{
Definition,
Function,
Level1,
Node,
Equation,
Use
},
level2::{
Level2,
Expression
},
level3::{
Level3,
Term
},
level4::{
Limit,
Level4,
Factor
},
level5::{
Absolute,
Level5,
Infinite,
Variable,
Nest,
Tensor,
Whole,
Undefined,
Rational,
Call
}
}
};

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

fd -t f "mod.rs" backends/ --exec wc -l {} \;

Repository: alejandro-vaz/mathsys

Length of output: 83


🏁 Script executed:

cat -n backends/mod.rs

Repository: alejandro-vaz/mathsys

Length of output: 1779


🏁 Script executed:

# Check if syntax types are used in the backends module
rg "Start|Definition|Function|Level1|Node|Equation|Use|Level2|Expression|Level3|Term|Level4|Limit|Factor|Level5|Absolute|Infinite|Variable|Nest|Tensor|Whole|Undefined|Rational|Call" backends/ -t rust --no-filename -n

Repository: alejandro-vaz/mathsys

Length of output: 962


🏁 Script executed:

# Check what's in the latex module
cat -n backends/latex.rs | head -100

Repository: alejandro-vaz/mathsys

Length of output: 3158


Remove unused syntax imports from backends/mod.rs.

Your imports of Start, Definition, Function, Level1, Node, Equation, Use, Level2, Expression, Level3, Term, Level4, Limit, Factor, Level5, Absolute, Infinite, Variable, Nest, Tensor, Whole, Undefined, Rational, and Call aren't used anywhere in this file. Not in the trait definitions, not in the latex module—nowhere. Even the #[dispatch] macro doesn't pull these in. Clean it up and keep only what you actually use: Item, NonTerminal, and Context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backends/mod.rs` around lines 14 - 58, Remove the unused syntax imports from
the large use block and keep only the actual symbols in use: Item, NonTerminal
and Context; specifically, update the use self::super::{ solver::{
nonterminal::{ Item, NonTerminal }, context::Context } } import pattern and
delete all the unused syntax imports (Start, Definition, Function, Level1, Node,
Equation, Use, Level2, Expression, Level3, Term, Level4, Limit, Factor, Level5,
Absolute, Infinite, Variable, Nest, Tensor, Whole, Undefined, Rational, Call) so
the module only imports Item, NonTerminal and Context.

//^ HEAD
//^

use core::panic;

🧹 Nitpick | 🔵 Trivial

use core::panic; is unused — remove it.

You're using the panic!() macro throughout, which is always in scope. This import brings in the core::panic module (not the macro), and nothing in this file references it.

♻️ Remove unused import
-use core::panic;
-
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
use core::panic;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@syntax/level5.rs` at line 5, The import use core::panic; is unused and
shadows nothing — remove that line from the file; leave calls to the panic!()
macro as-is (panic! is a macro in scope, not provided by the core::panic
module), so delete the use core::panic; statement referencing core::panic to fix
the unused import warning.

Comment on lines +42 to +45
let (start, end) = match self.stream.len() {
0 => ("", ""),
1 => (r"\(", r"\)"),
other => (r"\[", r"\]")

⚠️ Potential issue | 🟡 Minor

Unused variable binding other on line 45.

This should be _ since the value isn't used. I know you've got #![allow(unused_variables)] up top, but that's a crutch, not a license.

Proposed fix
-            other => (r"\[", r"\]")
+            _ => (r"\[", r"\]")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let (start, end) = match self.stream.len() {
0 => ("", ""),
1 => (r"\(", r"\)"),
other => (r"\[", r"\]")
let (start, end) = match self.stream.len() {
0 => ("", ""),
1 => (r"\(", r"\)"),
_ => (r"\[", r"\]")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@syntax/mod.rs` around lines 42 - 45, The match on self.stream.len() creates
an unused binding named other; change that pattern from other => (r"\[", r"\]")
to _ => (r"\[", r"\]") so the unused value is not bound (update the match in the
function/method containing self.stream.len()).

Comment on lines +49 to +53
} impl Spawn for Start {fn spawn(items: Vec<Item>, context: Option<&mut Context>) -> NonTerminal {
return NonTerminal::Start(Self {
stream: items.into_iter().map(|element| if let Item::NonTerminal(NonTerminal::Level1(level1)) = element {level1} else {panic!("{element:?}")}).collect()
})
}} impl Start {pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {return Ok(for statement in &mut self.stream {if let Level1::Use(element) = statement {element.load(transformers, settings)?}})}}

🛠️ Refactor suggestion | 🟠 Major

These one-liners are doing way too much — break them up for readability.

Lines 49–53 cram entire trait implementations and a method body onto single lines. I get wanting to be terse, but this is a maintenance nightmare. The spawn impl has a panic! buried inside a map/collect chain, and modules wraps a for-loop in Ok(...) — both deserve room to breathe.

Specifically:

  1. Spawn::spawn — the if let ... else { panic! } inside map is invisible at a glance. Destructuring failures here will produce a useless panic message ("{element:?}" with no context about what went wrong).
  2. Start::modules — Ok(for statement in ... { ... }) is valid Rust (a for-loop returns ()) but deeply unconventional. Just write the loop, then Ok(()).
Suggested restructure for modules
-impl Start {pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {return Ok(for statement in &mut self.stream {if let Level1::Use(element) = statement {element.load(transformers, settings)?}})}}
+impl Start {
+    pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {
+        for statement in &mut self.stream {
+            if let Level1::Use(element) = statement {
+                element.load(transformers, settings)?;
+            }
+        }
+        Ok(())
+    }
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
} impl Spawn for Start {fn spawn(items: Vec<Item>, context: Option<&mut Context>) -> NonTerminal {
return NonTerminal::Start(Self {
stream: items.into_iter().map(|element| if let Item::NonTerminal(NonTerminal::Level1(level1)) = element {level1} else {panic!("{element:?}")}).collect()
})
}} impl Start {pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {return Ok(for statement in &mut self.stream {if let Level1::Use(element) = statement {element.load(transformers, settings)?}})}}
} impl Spawn for Start {fn spawn(items: Vec<Item>, context: Option<&mut Context>) -> NonTerminal {
return NonTerminal::Start(Self {
stream: items.into_iter().map(|element| if let Item::NonTerminal(NonTerminal::Level1(level1)) = element {level1} else {panic!("{element:?}")}).collect()
})
}} impl Start {
pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {
for statement in &mut self.stream {
if let Level1::Use(element) = statement {
element.load(transformers, settings)?;
}
}
Ok(())
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@syntax/mod.rs` around lines 49 - 53, The one-line impls should be expanded
and made explicit: in impl Spawn for Start refactor spawn to iterate items with
a match (or map with a closure that returns Result) instead of burying an if-let
panic inside map — use a clear panic or better return an Err with context naming
the unexpected Item variant when matching
Item::NonTerminal(NonTerminal::Level1(level1)) fails, and construct
NonTerminal::Start(Self { stream }) from the collected Vec; in impl Start
refactor modules to write the for statement as a normal loop over &mut
self.stream that matches Level1::Use(element) and calls
element.load(transformers, settings)?; after the loop return Ok(()) instead of
wrapping the for-loop in Ok(...). Ensure references to Spawn::spawn,
Start::modules, Item::NonTerminal, NonTerminal::Start, and Level1::Use are used
to locate the code.

Comment on lines +62 to +68
match token.kind {
Kind::NEWLINES => {position.line += length as u32; position.column = 1},
Kind::ENDOFFILE => {tokens.push(token); return Ok(tokens)},
other => position.column += length as u32
}
tokens.push(token);
position.cursor += length as u64;

⚠️ Potential issue | 🟡 Minor

Unused binding other on line 65 — use _.

Same habit as in syntax/mod.rs. You're matching on token.kind and not using other. Be consistent.

-                other => position.column += length as u32
+                _ => position.column += length as u32
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tokenizer/mod.rs` around lines 62 - 68, The match arm binds an unused
variable named `other` when matching `token.kind`; change that unused binding to
`_` to avoid the warning — update the match in the loop that inspects
`token.kind` (the arms for `Kind::NEWLINES` and `Kind::ENDOFFILE` remain the
same) so the catch-all arm uses `_` instead of `other`, keeping the existing
side-effect of updating `position.column`.

Comment on lines +107 to +112
//> INSTANCES -> SHALLOW
#[derive(Debug)]
pub struct ShallowToken {
start: u64,
kind: Kind
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Search for ShallowToken usage and context
rg -n 'ShallowToken' --type=rust -B 2 -A 2

Repository: alejandro-vaz/mathsys

Length of output: 1380


🏁 Script executed:

# Search for Data::Tokens to verify if it's publicly exposed
rg -n 'Data.*Tokens|Tokens.*pub' --type=rust -B 2 -A 2

Repository: alejandro-vaz/mathsys

Length of output: 925


🏁 Script executed:

# Check the module structure and any pub implementations for ShallowToken
ast-grep --pattern 'impl ShallowToken {
  $$$
}'

Repository: alejandro-vaz/mathsys

Length of output: 47


ShallowToken is intentionally opaque — but make sure that's actually what you want.

You're exposing Vec<ShallowToken> publicly via Data::Tokens, but callers can only debug-print them. They can't touch start or kind without public accessors. That's fine if tokens are just diagnostic blobs, but if consumers ever need to read those fields, you'll regret locking them down. Right now there's no way to access them—so either add getters or at least document why this is locked. Don't leave future-you guessing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tokenizer/token.rs` around lines 107 - 112, ShallowToken is currently opaque
(fields start and kind are private) but Vec<ShallowToken> is exposed via
Data::Tokens; either make the fields intentionally accessible or document the
opacity. Fix by adding public accessor methods on ShallowToken (e.g., pub fn
start(&self) -> u64 and pub fn kind(&self) -> Kind) so callers can read those
fields, or add a clear doc comment on the ShallowToken struct explaining it is
intentionally opaque and only for debugging/diagnostic use; reference
ShallowToken, its fields start and kind, and Data::Tokens when making the
change.

@alejandro-vaz alejandro-vaz merged commit 6a6dedc into main Feb 22, 2026
4 checks passed