Bump to v9: Restructure mathsys as library-first #40

Walkthrough

Version bump from v7 to v9 restructures mathsys from a binary-only tool into a library-first architecture. Major changes include modularization of the tokenizer/parser/solver, a redesign of the backend trait system, an overhaul of CLI argument handling with enum-based types, visibility narrowing across syntax definitions, and the establishment of a public lib.rs interface with a unified TRANSFORMERS pipeline.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant main as main.rs
    participant TRANS as TRANSFORMERS<br/>(lib.rs)
    participant TZ as Tokenizer
    participant P as Parser
    participant S as Solver
    participant B as Backends
    participant Settings as Settings/Noise
    User->>main: run mathsys CLI
    main->>Settings: Settings::cli()
    Settings-->>main: configured Settings
    main->>TRANS: run(&settings)
    loop for each target
        TRANS->>TZ: tokenize(content)
        TZ-->>TRANS: Vec<BindedToken>
        TRANS->>P: parse(tokens)
        P-->>TRANS: Start (AST)
        TRANS->>S: solve(pool)
        S-->>TRANS: NonTerminal
        alt target == Tokens
            TRANS->>TRANS: tokens(&settings)
            TRANS-->>main: Data::Tokens
        else target == Ast
            TRANS-->>main: Data::Ast
        else target == Latex
            TRANS->>B: latex()
            B-->>TRANS: String representation
            TRANS-->>main: Data::Latex
        else target == Check/Help/Version
            TRANS-->>main: Data::Check/Version
        end
        main->>main: format & output result
    end
    main-->>User: exit code + console output
```
Real talk though: This is a significant restructuring, and you've done the work. The shift from `pub` to `pub(crate)`/`pub(super)` across syntax definitions shows intentional API boundary-setting, and moving to a library-first architecture is the right call. Just make sure your public-facing lib.rs API is actually stable and well-tested—you don't want to bump to v10 in three months because the `Data` enum variants weren't quite right. The `Spawn` trait signature changes across every syntax level are pervasive; double-check that all implementations actually handle the `Context` parameter correctly. Good progress overall. 👊

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 37
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
syntax/level4.rs (1)
78-92: 🧹 Nitpick | 🔵 Trivial

Dense but correct `Limit::spawn` — consider a brief comment on the `next` reassignment trick.

The pattern on lines 82-84, where `next` is conditionally advanced to peek past an optional direction token, is clever. It works, but someone reading this for the first time will do a double-take. One inline comment would save them 5 minutes.

Same note as level3.rs: the bare `panic!()` calls on lines 80, 81, and 84 give no diagnostic context.

♻️ Add minimal context to panics

```diff
-let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Variable(variable)))) = iterator.next() else {panic!()};
-let Some(Item::NonTerminal(NonTerminal::Level2(approach))) = iterator.next() else {panic!()};
+let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Variable(variable)))) = iterator.next() else {panic!("Limit::spawn: expected Variable")};
+let Some(Item::NonTerminal(NonTerminal::Level2(approach))) = iterator.next() else {panic!("Limit::spawn: expected Level2 approach")};
 let mut next = iterator.next();
+// If next item is a token (+/-), consume it as direction and advance to the next item
 let direction = if let Some(Item::Token(token)) = next {next = iterator.next(); Some(token.value == "+")} else {None};
-let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Nest(nest)))) = next else {panic!()};
+let Some(Item::NonTerminal(NonTerminal::Level5(Level5::Nest(nest)))) = next else {panic!("Limit::spawn: expected Nest")};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@syntax/level4.rs` around lines 78 - 92, The Limit::spawn implementation uses a tricky reassignment of the local variable next to peek past an optional direction token; add a one-line inline comment above the block that sets direction explaining that next is advanced to consume an optional Token so the subsequent pattern match reads the following item. Also replace bare panic!() calls in Spawn for Limit (the three occurrences that unwrap pattern matches on iterator.next()) with panics that include context (e.g., which expected Item/NonTerminal was missing and the variable being parsed) so failures in spawn are descriptive; reference the function Spawn for Limit, the local variables iterator and next, and the match patterns that expect Level5::Variable, Level2(approach), and Level5::Nest.

syntax/level3.rs (1)
55-58: 🧹 Nitpick | 🔵 Trivial

Bare `panic!()` gives zero debugging context.

When this blows up in your face (and it will), you'll wish you knew what the unexpected item was. This pattern appears across multiple `spawn` implementations (level2, level3, level4, level5). At minimum, include a message. Even better, use `unreachable!()` if you truly expect it to be unreachable.

♻️ Add diagnostic info to panic

```diff
-other => panic!()
+other => unreachable!("unexpected item in Term::spawn: {other:?}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@syntax/level3.rs` around lines 55 - 58, The bare panic!() in the match over Item in the spawn implementations (e.g., the match handling Item::Token, Item::NonTerminal(NonTerminal::Level4(level4)) in level3.spawn and similar code in level2/level4/level5) provides no diagnostic context; replace it with either unreachable!() if that arm truly cannot happen, or panic! with a descriptive message that includes the unexpected value (e.g., include other via debug formatting) so failures show what Item variant and value caused the error.

syntax/level2.rs (1)
47-58: 🧹 Nitpick | 🔵 Trivial

Bare `panic!()` gives zero diagnostic context on malformed input.

Look, this pattern is everywhere in the codebase (level3, level4, level5 all do it), so I know you didn't start it here. But every one of these bare `panic!()` calls is a future debugging nightmare. When a grammar rule feeds an unexpected item variant, you get a crash with no indication of which spawn failed or what it received.

At minimum, consider `unreachable!("Expression::spawn: unexpected item {other:?}")` — since `Item` derives `Debug`, this costs nothing and saves hours later.

♻️ Suggested improvement

```diff
-other => panic!()
+other => unreachable!("Expression::spawn: unexpected item {other:?}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@syntax/level2.rs` around lines 47 - 58, The bare panic!() in the Spawn impl for Expression (function spawn for Expression/NonTerminal::Level2) provides no diagnostic info; replace it with a panic variant that includes the unexpected item and context, e.g. use unreachable! or panic! with a formatted message like "Expression::spawn: unexpected item {other:?}" so when matching Item::Token / Item::NonTerminal fails you log the concrete `other` (Item implements Debug) and the function name to aid debugging.

parser/mod.rs (2)
98-99: 🧹 Nitpick | 🔵 Trivial

Unused closure parameter `iteration` — use `_`.

Clippy will flag this. Quick fix:

♻️ Fix

```diff
-chart.extend((0..(filtered.len() + 1)).map(|iteration| FastMap::new()));
-waiting.extend((0..(filtered.len() + 1)).map(|iteration| FastMap::new()));
+chart.extend((0..=filtered.len()).map(|_| FastMap::new()));
+waiting.extend((0..=filtered.len()).map(|_| FastMap::new()));
```

(Also switched to `0..=filtered.len()`, which is equivalent and slightly more idiomatic for inclusive ranges.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@parser/mod.rs` around lines 98 - 99, The closure passed to map is using an unused parameter `iteration`, causing a Clippy warning; update the range and closures so the parameter is ignored (use `_`) and prefer the inclusive range `0..=filtered.len()` for clarity: change the two lines that call chart.extend(...) and waiting.extend(...) to iterate with `0..=filtered.len()` and map(|_| FastMap::new()) respectively, referencing the existing identifiers chart, waiting, filtered.len(), and FastMap::new().
140-149: ⚠️ Potential issue | 🟠 Major

Heavy cloning in the completion hot path.

Line 141 clones the entire backpointer set into `pool`. Line 142 clones backpointers from the chart, iterates, pushes, and collects into a `Vec`. Line 148 clones the entire `completed` set on every successful completion. In an Earley parser, `complete` is called a lot. For non-trivial grammars and inputs, this cloning overhead will add up.

I'm not saying rewrite it today — but be aware this is your performance bottleneck. If you start seeing slowness on larger inputs, this is where to look first. Indices or `Rc`-based sharing could reduce allocation pressure significantly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@parser/mod.rs` around lines 140 - 149, The completion hot path is doing heavy cloning: pool.entry(pointer.clone()).or_default().extend(self.review(...).clone()), cloning backpointers from chart into a Vec while pushing pointer, and cloning completed before extending agenda; locate the block around pool.entry, the creation of backpointers, self.recall, and the agenda.extend(completed.clone()) calls and replace full clones with shared references (Rc/Arc or indices) or in-place mutation: store Rc<SmallVec<...>> in pool and chart so you can clone the Rc cheaply, map/backpointer-modify by creating a new shared wrapper with the extra pointer (or store pointer lists as small immutable shared slices), and change agenda.extend(completed.clone()) to extend from a slice/iterator of shared handles or indices instead of cloning the whole collection to eliminate repeated allocations.

syntax/level1.rs (1)
58-70: 🛠️ Refactor suggestion | 🟠 Major

All `spawn` implementations use bare `panic!()`/`.unwrap()` with no error context.

Same issue as flagged for `Function::spawn` — `Definition`, `Node`, and `Equation` all have bare `panic!()` or `.unwrap()` with no diagnostic message. When a grammar mismatch causes one of these to blow up, you'll get a stack trace pointing at a generic panic with no indication of which spawn or what item caused it. Even a `panic!("Definition::spawn: unexpected item")` would save you 20 minutes of debugging. Be kind to future-you.

Also applies to: 101-105, 114-120
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@syntax/level1.rs` around lines 58 - 70, Replace bare panic!() and .unwrap() in spawn implementations with diagnostics: in Definition::spawn (and similarly Node::spawn, Equation::spawn, Function::spawn) change panic!() to a panic that includes the type and the unexpected item (e.g., "Definition::spawn: unexpected item {:?}") and replace variable.unwrap()/value.unwrap() with expect(...) or map to Result that include messages like "Definition::spawn: missing variable" / "Definition::spawn: missing value"; apply the same pattern to the other spawn functions to ensure failures report which spawn and what was missing or unexpected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/CHANGELOG.md:
- Around line 210-234: Add blank lines above and below each markdown heading in
the new sections (e.g., "## v8 -- Semantic solver", "### Added", "### Changed",
"### Removed", "### Fixed", and the "## v9 -- Semantic-driven disambiguation"
block and its subheadings) so they conform to MD022, and ensure the file ends
with a single trailing newline to satisfy MD047; update the section blocks and
list spacing accordingly while preserving existing text content and numbering.
In @.gitignore:
- Around line 9-10: Remove the redundant `.DS_Store` pattern from the
.gitignore: keep the recursive glob pattern `**/.DS_Store` and delete the
explicit `.DS_Store` line so the single entry covers both root and nested
`.DS_Store` files.
In @.vscode/settings.json:
- Around line 40-48: The current color pair uses color "#00FF00" and
backgroundColor "#FF00FF" which is visually harsh; update the object with a less
aggressive combination by either changing "color" to a softer green like
"#39FF14" or switching "color" to a dark foreground such as "#1A1A1A" while
keeping "backgroundColor" as the magenta, or alternatively tone down
"backgroundColor" to a darker/translucent magenta; modify the keys "color"
and/or "backgroundColor" in the object where "tag": "!" is defined to implement
one of these options.
- Line 53: The removal of the "python.analysis.ignore" setting causes Pylance to
analyze the entire .venv and produce noise; restore the JSON entry
"python.analysis.ignore": [".venv/**"] in the .vscode/settings.json so the
analyzer ignores the virtual environment directory (ensure the key name is exact
and the pattern ".venv/**" is present under the root settings object).
In `@backends/mod.rs`:
- Around line 66-68: The Spawn trait currently requires Backends via "trait
Spawn: Backends", coupling construction to rendering; remove the supertrait so
Spawn is independent (change to "pub(super) trait Spawn { ... }") and move any
Backends requirement to impl sites or the spawn method signature as needed
(e.g., impl<T: Backends> Spawn for T or add a where Self: Backends on specific
impls) so types can be constructed without forcing a global Backends bound;
update all impl blocks and callers of Spawn::spawn (and any uses of latex()) to
include Backends bounds only where rendering is actually required.
- Around line 14-58: Remove the unused syntax imports from the large use block
and keep only the actual symbols in use: Item, NonTerminal and Context;
specifically, update the use self::super::{ solver::{ nonterminal::{ Item,
NonTerminal }, context::Context } } import pattern and delete all the unused
syntax imports (Start, Definition, Function, Level1, Node, Equation, Use,
Level2, Expression, Level3, Term, Level4, Limit, Factor, Level5, Absolute,
Infinite, Variable, Nest, Tensor, Whole, Undefined, Rational, Call) so the
module only imports Item, NonTerminal and Context.
In `@entry.rs`:
- Around line 55-67: The Alias::expand implementation duplicates the q/v
mapping; instead drive expansion from the single source FLAGLIASES (or a shared
public const) so mappings live in one place: remove the hardcoded match in
Alias::expand and lookup each char against FLAGLIASES (or call a new public
helper in issues.rs that maps a char -> Option<Flag>), push the resulting Flag
into converted or return Issue::UnknownAliasCharacter (preserving alias and at)
when no mapping exists; reference Alias::expand, FLAGLIASES, and
Issue::UnknownAliasCharacter to locate the code to change.
- Around line 36-44: The File::read implementation swallows the underlying I/O
error and uses to_str().unwrap(), causing misleading "File not found" errors and
potential panics; update File::read (and the Issue variant usage) to capture the
Result from readFile(&self.0) and map Err to an Issue that preserves the
original io::Error (or its display) instead of calling .ok(), and replace
self.0.to_str().unwrap() with a non-panicking conversion such as
self.0.to_string_lossy() when constructing Issue::FileNotFound so path and
actual error details are included in the returned Result; locate the logic in
File::read, the call to readFile, and the creation of Issue::FileNotFound to
make this change.
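The `File::read` fix above (preserve the io error, avoid the `to_str().unwrap()` panic) might look roughly like this; `File` and `Issue` here are simplified stand-ins for the crate's types:

```rust
use std::fs;
use std::path::PathBuf;

// Illustrative error type: the variant carries both the path and the
// original io::Error's message instead of swallowing it with .ok().
#[derive(Debug)]
enum Issue {
    FileNotFound { path: String, cause: String },
}

struct File(PathBuf);

impl File {
    fn read(&self) -> Result<String, Issue> {
        fs::read_to_string(&self.0).map_err(|error| Issue::FileNotFound {
            // to_string_lossy never panics, even on non-UTF-8 paths
            path: self.0.to_string_lossy().into_owned(),
            cause: error.to_string(), // real io::Error detail survives
        })
    }
}

fn main() {
    let missing = File(PathBuf::from("definitely-missing.msm"));
    match missing.read() {
        Ok(content) => println!("read {} bytes", content.len()),
        Err(issue) => println!("{issue:?}"),
    }
}
```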
In `@issues.rs`:
- Line 80: The current consume method takes a pointer to self and truncates it
to u8, producing a meaningless exit code; replace that with a deterministic
mapping: either return a constant nonzero code (e.g. 1u8) for errors, or map
each Issue variant to a distinct code by matching on Issue in the consume method
(or mark the enum with #[repr(u8)] and cast the variant to u8). Update the
consume(self) -> u8 implementation on the Issue type to use an explicit match or
a repr-based cast instead of "&self as *const Issue as u8" so exit codes are
deterministic and meaningful.
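Since the real `Issue` variants carry data, the explicit-match option for `consume` is the simpler of the two suggestions; a sketch with illustrative variants and codes (not the crate's actual ones):

```rust
// Stand-in for the crate's Issue enum; variants and codes are illustrative.
enum Issue {
    UnknownArgument(String),
    FileNotFound(String),
    SyntaxError,
}

impl Issue {
    // Deterministic exit codes via an explicit match, replacing the
    // meaningless `&self as *const Issue as u8` pointer truncation.
    fn consume(&self) -> u8 {
        match self {
            Issue::UnknownArgument(_) => 1,
            Issue::FileNotFound(_) => 2,
            Issue::SyntaxError => 3,
        }
    }
}

fn main() {
    let issue = Issue::FileNotFound("input.msm".into());
    println!("exit code {}", issue.consume());
}
```

(The `#[repr(u8)]` cast alternative only works for fieldless enums, so the match is the safer default here.)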
In `@lib.rs`:
- Around line 66-77: The Run.time is being measured before compute runs because
struct fields evaluate in order; fix by capturing the computation result first
and then measuring elapsed: inside run(&self, settings: &Settings) iterate
targets, call let data = self.compute(target, settings) (using the same
Time::now() start), then let elapsed = time.elapsed(), and push Run { time:
elapsed, data }; ensure you still reset time = Time::now() (or set start) for
the next iteration as originally intended.
- Around line 49-50: The declaration of VERSION uses a static binding but should
be a compile-time constant; replace the static declaration with a const
definition (i.e., change "static VERSION: usize = 9;" to "const VERSION: usize =
9;") so the value is inlined and no static allocation or address is required;
update any code that relies on taking a reference to VERSION to use the value
directly if needed.
- Around line 85-90: The computed percentage in the Target::Tokens arm is a
ratio (tokens.len() as f32 / MAXLEN as f32) but is later formatted with a
percent sign in main.rs, so change the computation in the Data::Tokens
construction (Target::Tokens branch) to multiply the ratio by 100; update the
percentage field assignment (where tokens.len() and MAXLEN are used) to
(tokens.len() as f32 / MAXLEN as f32) * 100 so printed output shows a true
percentage.
- Around line 91-94: The match on Target::Check is just propagating errors from
self.start and can be simplified using the ? operator: call
self.start(settings)? and then return Data::Check; update the arm for
Target::Check so it invokes self.start(settings)? instead of matching Ok/Err and
remove the explicit Err return; keep the rest of the flow unchanged and ensure
the arm still yields Data::Check on success.
- Around line 8-10: The repo uses unstable Rust features
(#![feature(try_trait_v2)], #![feature(default_field_values)],
#![feature(if_let_guard)]) and although rust-toolchain.toml pins nightly, add a
short README note explaining that the crate requires the nightly toolchain and
list those three feature flags with a brief one-line explanation of each and how
to set the toolchain (referencing rust-toolchain.toml) so contributors know to
use nightly before building or contributing.
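The `Run.time` field-order bug called out above is easy to reproduce in isolation; this sketch (with a dummy `compute`, not the crate's pipeline) shows why measuring inside the struct literal captures almost nothing:

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

struct Run {
    time: Duration,
    data: u64,
}

// Stand-in for the pipeline's per-target computation.
fn compute() -> u64 {
    sleep(Duration::from_millis(50));
    42
}

fn main() {
    // Buggy shape: struct fields evaluate in order, so `time` is captured
    // *before* the `data` initializer runs and misses the whole computation.
    let start = Instant::now();
    let buggy = Run { time: start.elapsed(), data: compute() };

    // Fixed shape: run the computation first, then measure.
    let start = Instant::now();
    let data = compute();
    let fixed = Run { time: start.elapsed(), data };

    println!("buggy: {:?}, fixed: {:?}", buggy.time, fixed.time);
    assert!(buggy.time < fixed.time);
    assert_eq!(buggy.data, fixed.data);
}
```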
In `@main.rs`:
- Line 8: Remove the unused crate-level feature gate in main.rs: delete the
#![feature(if_let_guard)] attribute since the binary crate does not use if-let
guards; the library crate (settings/mod.rs) should retain its own feature
declaration independently. Ensure no other code in main.rs relies on if-let
guard syntax before removing.
In `@parser/mod.rs`:
- Around line 88-92: The three helper methods wait, recall, and review are
written as single very long lines; split each function signature and body across
multiple lines to improve readability and maintainability: put the fn
declaration on its own line, break long generic/borrowed parameter lists onto
separate lines, and move the body into a small block with braces and a clear
statement sequence (e.g., capture symbol or entry in a local variable, then call
entry(...).or_default().insert(...) or return the or_default() result). Keep the
same names (wait, recall, review), attributes (#[inline(always)]), and types so
behavior doesn't change; just reformat and introduce short local variables to
make the logic readable in the hot loop.
- Around line 90-92: The method review currently mutates the chart by calling
entry().or_default(); change review to be a true read-only accessor by using
chart[index as usize].get(&state) (or get_mut only in recall) and return either
Option<&FastSet<...>> or a usize (e.g., map(|s| s.len()).unwrap_or(0)) so
callers (e.g., the site at line 132) don't insert empty entries; keep recall as
the mutable entry-or-default writer and update any call sites of review to
handle the Option/usize result accordingly.
- Around line 93-95: Change the signature of run to accept a slice
(&[BindedToken<'parsing>]) instead of &Vec<...>, then build filtered from the
slice (e.g., iter().cloned().filter(...).collect()). Replace the retain closure
with a concise predicate that avoids the if-let and unwrap, e.g. use
ORDER.get(&token.kind).map_or(true, |(_, resp)| !matches!(resp,
Responsibility::Null)) so unknown kinds won’t panic and tokens with
Responsibility::Null are filtered out; update any call sites of run accordingly.
- Line 82: MINPOINTERS is declared as a static but used as an array length in
SmallVec type annotations, which requires a compile-time constant; change the
declaration from a static to a const (e.g., replace the current pub(super)
static MINPOINTERS: usize = 2; with a pub(super) const MINPOINTERS: usize = 2;)
so MINPOINTERS becomes a const expression usable in array sizes wherever
referenced in parser/mod.rs and other modules using SmallVec<[T; MINPOINTERS]>.
In `@prelude.rs`:
- Around line 46-50: The build breaks because prelude.rs imports
regex::bytes::Regex (and RegexSet/Captures) but tokenizer/mod.rs calls .as_str()
on Regex, which only exists on the non-bytes Regex; fix by switching the prelude
to export the regular (string) API: replace pub(super) use regex::bytes::{Regex,
RegexSet, Captures} with pub(super) use regex::{Regex, RegexSet, Captures} so
Regex::as_str() and RegexSet::new(&[&str]) work in tokenizer/mod.rs
(alternative: if you must keep bytes::Regex, extract and store the original
pattern strings where RegexSet::new is called instead, but preferred fix is to
import regex::{Regex, RegexSet, Captures}).
In `@settings/mod.rs`:
- Around line 51-59: The current apply method uses return Ok(match ...) with ?
inside match arms which is legal but obscure; change apply(&mut self, argument:
Argument) to perform a plain match on argument (matching
Argument::Alias,::File,::Flag,::Target) where Alias expansion calls self.apply
for each expanded flag (propagating errors with ?), File pushes to self.files,
Flag matches Flag::Quiet/Verbose and calls self.noise.change(...), and Target
pushes to self.targets, then return Ok(()) after the match to make error
propagation and success explicit; refer to the apply function and the Argument
and Flag variants to locate the logic to refactor.
- Around line 39-45: The cli() function is doing too much in one expression;
split the pipeline into clear steps: call getArguments(), skip(1) and collect
into a Vec<String>, then iterate and map each argument with a small matcher that
(1) detects files using std::path::Path::new(&arg).extension().and_then(|s|
s.to_str()) == Some("msm") to create Argument::File(File(arg.clone())), (2)
handles flags via Flag::try_from(&arg as &str)? returning Argument::Flag, (3)
handles aliases starting with "-" to create Argument::Alias, (4) parses Target
with argument.parse::<Target>()?, and otherwise returns an
Err(Issue::UnknownArgument(arg.clone())); collect::<Result<Vec<Argument>,
Issue>>()? and then call Settings::set(collected_args) and return the
result—this makes errors point to the specific step and uses Path::extension()
for robust file detection.
- Around line 34-37: The Settings struct uses unstable default field values
(e.g., the Settings struct declaration and fields like files, targets, noise)
and your lib.rs already enables the nightly feature gate, so update project
tooling/docs to require a compatible nightly toolchain: add a
rust-toolchain.toml pinning the minimum nightly (the feature has been available
since nightly 2024-12-10) or clearly document the required nightly version in
README/CONTRIBUTING; also verify lib.rs contains the matching
#![feature(default_field_values)] gate so local builds fail fast if the
toolchain is wrong.
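The `Path::extension()`-based argument classification suggested for `cli()` could look like this single-argument matcher (the `Argument` variants are simplified stand-ins, and flag/target parsing is collapsed into a catch-all):

```rust
use std::path::Path;

#[derive(Debug, PartialEq)]
enum Argument {
    File(String),
    Alias(String),
    Other(String),
}

fn classify(arg: &str) -> Argument {
    // Path::extension() handles dots in directory names and similar edge
    // cases that a naive ends_with(".msm") check would mishandle.
    if Path::new(arg).extension().and_then(|ext| ext.to_str()) == Some("msm") {
        Argument::File(arg.to_string())
    } else if arg.starts_with('-') {
        Argument::Alias(arg.trim_start_matches('-').to_string())
    } else {
        // Real code would try Flag::try_from / Target parsing here and
        // fall back to Err(Issue::UnknownArgument).
        Argument::Other(arg.to_string())
    }
}

fn main() {
    println!("{:?}", classify("examples/demo.msm"));
    println!("{:?}", classify("-qv"));
    println!("{:?}", classify("check"));
}
```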
In `@settings/noise.rs`:
- Around line 20-29: The pattern matches in Noise::verbose and Noise::quiet use
an unused binding named `other` and the debug/zero methods use verbose if-let
logic; replace `other` with `_` to silence the unused-binding warning and
simplify all four methods to use the matches!() macro for clarity and
conciseness (i.e., implement verbose, quiet, debug, and zero using
matches!(self, Noise::Debug | Noise::Verbose), matches!(self, Noise::Quiet |
Noise::Zero), matches!(self, Noise::Debug), and matches!(self, Noise::Zero)
respectively), keeping method names `verbose`, `quiet`, `debug`, and `zero`
unchanged.
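The four predicates rewritten with `matches!()` as suggested; this mirrors the `Noise` enum described in the review, though the exact variant set is assumed:

```rust
#[derive(Debug)]
enum Noise { Zero, Quiet, Normal, Verbose, Debug }

impl Noise {
    // matches!() replaces verbose if-let logic and needs no `other` binding.
    fn verbose(&self) -> bool { matches!(self, Noise::Debug | Noise::Verbose) }
    fn quiet(&self) -> bool { matches!(self, Noise::Quiet | Noise::Zero) }
    fn debug(&self) -> bool { matches!(self, Noise::Debug) }
    fn zero(&self) -> bool { matches!(self, Noise::Zero) }
}

fn main() {
    let noise = Noise::Debug;
    println!(
        "verbose={} quiet={} debug={} zero={}",
        noise.verbose(), noise.quiet(), noise.debug(), noise.zero(),
    );
}
```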
In `@solver/context.rs`:
- Line 11: Replace the fragile relative import in context.rs that uses
super::super::syntax::level5::Variable with an absolute crate path; change the
import to use crate::syntax::level5::Variable so the module reference is stable
(locate the use statement that imports Variable and update it accordingly).
- Around line 25-26: Rename the non-idiomatic camelCase methods registerFn and
isFn to snake_case register_fn and is_fn in the impl for Context (and update
every callsite), and update their definitions' signatures accordingly; search
for other camelCase methods mentioned (e.g., augmentVariables, getArguments) and
follow the same rename pattern if present to avoid clippy warnings and
compilation errors—ensure you update all references, trait impls, and
documentation/comments that mention registerFn/isFn so the codebase compiles
cleanly.
In `@solver/mod.rs`:
- Around line 96-105: The choose function currently only handles the Variable vs
Call Factor disambiguation and panics on any other ambiguity; change it so it
never panics: update fn choose (and its signature if needed) to return a
Result<bool, ChooseError> or at least a non-panicking default (e.g., prefer one
side deterministically) and replace the panic arm (the match arm handling other
=> ...) with code that either returns Err(ChooseError{first,second,building}) or
returns a documented default preference; ensure the recursive flip using
reincident still avoids infinite recursion by returning Err on reincident
instead of panicking, and add a short doc comment on choose describing the
chosen fallback behavior and the Variable vs Call special-case (referencing
choose, reincident, and building).
- Around line 55-57: The current run function is a dense one-liner using an
awkward closure `if let Part::NonTerminal(Object::Start) = backpointer.symbol
{true} else {false}` and a complex build/unwrap pattern; refactor by extracting
the find into a named variable (e.g., start_bp) using matches! to test the
symbol (find(|bp| matches!(bp.symbol, Part::NonTerminal(Object::Start)))), then
call self.build(start_bp, &mut Context::new(), true) and match the returned
Partition with `let Partition::NonTerminal(NonTerminal::Start(start)) = ... else
{ return Err(Issue::SyntaxError) };` before returning Ok(start); keep references
to Backpointer, build, Context::new, Partition::NonTerminal, and
NonTerminal::Start to locate the code.
- Around line 74-95: The panic in solve (the match arm for candidates length 0)
should be replaced with a descriptive panic that includes runtime context:
report the current index, a debug representation of candidates (e.g., candidates
or candidates.len()), and relevant identifiers like the building being resolved
and any small context from Context; update the match arm in fn solve to
construct a formatted error message (using format! or panic!-with-args) instead
of the bare panic!(), so failures print actionable diagnostics to help
debugging.
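A non-panicking `choose` along the lines suggested could be shaped like this; `Factor` and `ChooseError` are simplified stand-ins, and the Variable-vs-Call preference shown is illustrative:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Factor { Variable(String), Call(String) }

#[derive(Debug, PartialEq)]
struct ChooseError(Factor, Factor);

/// Returns Ok(true) if `first` is preferred, Ok(false) if `second` is.
/// Special case: Call beats Variable. Any other ambiguity is tried once
/// in the flipped orientation (`reincident`) and then reported as an
/// error instead of panicking or recursing forever.
fn choose(first: &Factor, second: &Factor, reincident: bool) -> Result<bool, ChooseError> {
    match (first, second) {
        (Factor::Variable(_), Factor::Call(_)) => Ok(false),
        (Factor::Call(_), Factor::Variable(_)) => Ok(true),
        _ if !reincident => {
            // One flipped retry; a second failure surfaces as Err.
            choose(second, first, true).map(|flipped| !flipped)
        }
        _ => Err(ChooseError(first.clone(), second.clone())),
    }
}

fn main() {
    let variable = Factor::Variable("x".into());
    let call = Factor::Call("f".into());
    println!("{:?}", choose(&variable, &call, false));
    println!("{:?}", choose(&variable, &variable, false));
}
```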
In `@solver/nonterminal.rs`:
- Around line 99-126: The Level1–Level5 pass-through arms in Object::summon
silently discard any items after the first, risking silent data loss; update
each of the Object::Level1, Object::Level2, Object::Level3, Object::Level4 and
Object::Level5 match arms (the arms that pattern-match Item::NonTerminal(...)
and take items.into_iter().next().unwrap()) to assert that exactly one item was
provided (e.g. assert_eq!(items.len(), 1) or equivalent) before extracting the
first item, so any violation fails loudly and makes debugging easier.
In `@syntax/level1.rs`:
- Around line 140-146: The load method on Use silently swallows file read errors
due to the `if let Ok(content) = File(...).read()` pattern; change it to
propagate the read error instead of returning Ok(()) on failure: replace the `if
let` with using the `?` operator or a `match` so `let content =
File(self.module.clone().into()).read()?;` (or map Err to an Issue) then
continue running transformers.tokenizer, parser, solver and set `self.start =
Some(start)` before returning Ok(()); ensure any io errors from File::read are
converted into the function's Result<..., Issue> (e.g., via From conversion or
mapping) so missing/unreadable modules produce an Err from Use::load rather than
a silent success.
- Around line 80-93: Rename Context::registerFn to register_fn and Context::isFn
to is_fn in the Context type and update all call sites accordingly (e.g., the
call in Function::spawn). In Function::spawn replace the three bare panic!()
calls with panic! macro calls that include descriptive messages indicating what
Item/variant was expected (e.g., "expected first item to be
NonTerminal::Level5::Variable", "expected last item to be NonTerminal::Level2",
and "expected remaining items to be NonTerminal::Level5::Variable") so failures
show clear context for debugging.
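The `Use::load` propagation fix above hinges on `?` plus a `From<io::Error>` conversion; a self-contained sketch with stand-in `Issue` and `load` (the real method also runs the tokenizer/parser/solver on the content):

```rust
use std::fs;
use std::io;
use std::path::PathBuf;

#[derive(Debug)]
enum Issue {
    ModuleUnreadable(String),
}

// From conversion lets `?` turn an io::Error into the function's Issue.
impl From<io::Error> for Issue {
    fn from(error: io::Error) -> Self {
        Issue::ModuleUnreadable(error.to_string())
    }
}

fn load(module: &str) -> Result<String, Issue> {
    // Previously: `if let Ok(content) = ...` silently returned Ok(()) on
    // failure; `?` makes a missing or unreadable module an Err instead.
    let content = fs::read_to_string(PathBuf::from(module))?;
    Ok(content)
}

fn main() {
    println!("{:?}", load("no-such-module.msm"));
}
```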
In `@syntax/level5.rs`:
- Line 5: The import use core::panic; is unused and shadows nothing — remove
that line from the file; leave calls to the panic!() macro as-is (panic! is a
macro in scope, not provided by the core::panic module), so delete the use
core::panic; statement referencing core::panic to fix the unused import warning.
In `@syntax/mod.rs`:
- Around line 42-45: The match on self.stream.len() creates an unused binding
named other; change that pattern from other => (r"\[", r"\]") to _ => (r"\[",
r"\]") so the unused value is not bound (update the match in the function/method
containing self.stream.len()).
- Around line 49-53: The one-line impls should be expanded and made explicit: in
impl Spawn for Start refactor spawn to iterate items with a match (or map with a
closure that returns Result) instead of burying an if-let panic inside map — use
a clear panic or better return an Err with context naming the unexpected Item
variant when matching Item::NonTerminal(NonTerminal::Level1(level1)) fails, and
construct NonTerminal::Start(Self { stream }) from the collected Vec; in impl
Start refactor modules to write the for statement as a normal loop over &mut
self.stream that matches Level1::Use(element) and calls
element.load(transformers, settings)?; after the loop return Ok(()) instead of
wrapping the for-loop in Ok(...). Ensure references to Spawn::spawn,
Start::modules, Item::NonTerminal, NonTerminal::Start, and Level1::Use are used
to locate the code.
In `@tokenizer/mod.rs`:
- Around line 62-68: The match arm binds an unused variable named `other` when
matching `token.kind`; change that unused binding to `_` to avoid the warning —
update the match in the loop that inspects `token.kind` (the arms for
`Kind::NEWLINES` and `Kind::ENDOFFILE` remain the same) so the catch-all arm
uses `_` instead of `other`, keeping the existing side-effect of updating
`position.column`.
In `@tokenizer/token.rs`:
- Around line 107-112: ShallowToken is currently opaque (fields start and kind
are private) but Vec<ShallowToken> is exposed via Data::Tokens; either make the
fields intentionally accessible or document the opacity. Fix by adding public
accessor methods on ShallowToken (e.g., pub fn start(&self) -> u64 and pub fn
kind(&self) -> Kind) so callers can read those fields, or add a clear doc
comment on the ShallowToken struct explaining it is intentionally opaque and
only for debugging/diagnostic use; reference ShallowToken, its fields start and
kind, and Data::Tokens when making the change.
---
Outside diff comments:
In `@parser/mod.rs`:
- Around line 98-99: The closure passed to map is using an unused parameter
`iteration`, causing a Clippy warning; update the range and closures so the
parameter is ignored (use `_`) and prefer the inclusive range
`0..=filtered.len()` for clarity: change the two lines that call
chart.extend(...) and waiting.extend(...) to iterate with `0..=filtered.len()`
and map(|_| FastMap::new()) respectively, referencing the existing identifiers
chart, waiting, filtered.len(), and FastMap::new().
- Around line 140-149: The completion hot path is doing heavy cloning:
pool.entry(pointer.clone()).or_default().extend(self.review(...).clone()),
cloning backpointers from chart into a Vec while pushing pointer, and cloning
completed before extending agenda; locate the block around pool.entry, the
creation of backpointers, self.recall, and the agenda.extend(completed.clone())
calls and replace full clones with shared references (Rc/Arc or indices) or
in-place mutation: store Rc<SmallVec<...>> in pool and chart so you can clone
the Rc cheaply, map/backpointer-modify by creating a new shared wrapper with the
extra pointer (or store pointer lists as small immutable shared slices), and
change agenda.extend(completed.clone()) to extend from a slice/iterator of
shared handles or indices instead of cloning the whole collection to eliminate
repeated allocations.
In `@syntax/level1.rs`:
- Around line 58-70: Replace bare panic!() and .unwrap() in spawn
implementations with diagnostics: in Definition::spawn (and similarly
Node::spawn, Equation::spawn, Function::spawn) change panic!() to a panic that
includes the type and the unexpected item (e.g., "Definition::spawn: unexpected
item {:?}") and replace variable.unwrap()/value.unwrap() with expect(...) or map
to Result that include messages like "Definition::spawn: missing variable" /
"Definition::spawn: missing value"; apply the same pattern to the other spawn
functions to ensure failures report which spawn and what was missing or
unexpected.
In `@syntax/level2.rs`:
- Around line 47-58: The bare panic!() in the Spawn impl for Expression
(function spawn for Expression/NonTerminal::Level2) provides no diagnostic info;
replace it with a panic variant that includes the unexpected item and context,
e.g. use unreachable! or panic! with a formatted message like
"Expression::spawn: unexpected item {other:?}" so when matching Item::Token /
Item::NonTerminal fails you log the concrete `other` (Item implements Debug) and
the function name to aid debugging.
In `@syntax/level3.rs`:
- Around line 55-58: The bare panic!() in the match over Item in the spawn
implementations (e.g., the match handling Item::Token,
Item::NonTerminal(NonTerminal::Level4(level4)) in level3.spawn and similar code
in level2/level4/level5) provides no diagnostic context; replace it with either
unreachable!() if that arm truly cannot happen, or panic! with a descriptive
message that includes the unexpected value (e.g., include other via debug
formatting) so failures show what Item variant and value caused the error.
In `@syntax/level4.rs`:
- Around line 78-92: The Limit::spawn implementation uses a tricky reassignment
of the local variable next to peek past an optional direction token; add a
one-line inline comment above the block that sets direction explaining that next
is advanced to consume an optional Token so the subsequent pattern match reads
the following item. Also replace bare panic!() calls in Spawn for Limit (the
three occurrences that unwrap pattern matches on iterator.next()) with panics
that include context (e.g., which expected Item/NonTerminal was missing and the
variable being parsed) so failures in spawn are descriptive; reference the
function Spawn for Limit, the local variables iterator and next, and the match
patterns that expect Level5::Variable, Level2(approach), and Level5::Nest.
| 1. Full issue coverage now. | ||
|
|
||
| ## v8 -- Semantic solver | ||
| ### Added | ||
| 1. Local module importing from filesystem. | ||
| 1. Added `README.md` in `Cargo.toml`. | ||
| ### Changed | ||
| 1. Contextual semantic solver replaced old heuristic solver. | ||
| 1. Improved filetree and directories. | ||
| ### Removed | ||
| 1. Old numerical solver. | ||
| ### Fixed | ||
| 1. Imported modules now must have `.msm` file extension. | ||
|
|
||
| ## v9 -- Semantic-driven disambiguation | ||
| ### Added | ||
| 1. Functions. | ||
| 1. Ambiguity resolution for function calls and implicit multiplication based on context. | ||
| ### Changed | ||
| 1. Definitions now are done with `:=` instead of `==` and `=` always produces equations. | ||
| ### Removed | ||
| 1. Old declaration constructs. | ||
| 1. Locking mechanics. | ||
| ### Fixed | ||
| 1. CLI behaviour under stress. No newline at end of file |
Fix markdown formatting: missing blank lines around headings and missing trailing newline.
Come on, the rest of the changelog is formatted properly — don't drop the ball on the new sections. Static analysis flagged numerous MD022 violations (headings need blank lines above and below) and a missing trailing newline (MD047).
📝 Proposed formatting fix
1. Full issue coverage now.
+
## v8 -- Semantic solver
+
### Added
1. Local module importing from filesystem.
1. Added `README.md` in `Cargo.toml`.
+
### Changed
1. Contextual semantic solver replaced old heuristic solver.
1. Improved filetree and directories.
+
### Removed
1. Old numerical solver.
+
### Fixed
1. Imported modules now must have `.msm` file extension.
+
## v9 -- Semantic-driven disambiguation
+
### Added
1. Functions.
1. Ambiguity resolution for function calls and implicit multiplication based on context.
+
### Changed
1. Definitions now are done with `:=` instead of `==` and `=` always produces equations.
+
### Removed
1. Old declaration constructs.
1. Locking mechanics.
+
### Fixed
1. CLI behaviour under stress.
+
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| 1. Full issue coverage now. | |
| ## v8 -- Semantic solver | |
| ### Added | |
| 1. Local module importing from filesystem. | |
| 1. Added `README.md` in `Cargo.toml`. | |
| ### Changed | |
| 1. Contextual semantic solver replaced old heuristic solver. | |
| 1. Improved filetree and directories. | |
| ### Removed | |
| 1. Old numerical solver. | |
| ### Fixed | |
| 1. Imported modules now must have `.msm` file extension. | |
| ## v9 -- Semantic-driven disambiguation | |
| ### Added | |
| 1. Functions. | |
| 1. Ambiguity resolution for function calls and implicit multiplication based on context. | |
| ### Changed | |
| 1. Definitions now are done with `:=` instead of `==` and `=` always produces equations. | |
| ### Removed | |
| 1. Old declaration constructs. | |
| 1. Locking mechanics. | |
| ### Fixed | |
| 1. CLI behaviour under stress. | |
🧰 Tools
🪛 LanguageTool
[style] ~234-~234: ‘under stress’ might be wordy. Consider a shorter alternative.
Context: ...g mechanics. ### Fixed 1. CLI behaviour under stress.
(EN_WORDINESS_PREMIUM_UNDER_STRESS)
🪛 markdownlint-cli2 (0.21.0)
[warning] 212-212: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 213-213: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 213-213: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 216-216: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 216-216: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 219-219: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 219-219: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 221-221: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 221-221: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 224-224: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 225-225: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 225-225: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 228-228: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 228-228: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 230-230: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 230-230: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 233-233: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Above
(MD022, blanks-around-headings)
[warning] 233-233: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
[warning] 234-234: Files should end with a single newline character
(MD047, single-trailing-newline)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/CHANGELOG.md around lines 210 - 234, Add blank lines above and below
each markdown heading in the new sections (e.g., "## v8 -- Semantic solver",
"### Added", "### Changed", "### Removed", "### Fixed", and the "## v9 --
Semantic-driven disambiguation" block and its subheadings) so they conform to
MD022, and ensure the file ends with a single trailing newline to satisfy MD047;
update the section blocks and list spacing accordingly while preserving existing
text content and numbering.
| **/.DS_Store | ||
| .DS_Store |
🧹 Nitpick | 🔵 Trivial
Redundant .DS_Store entry.
Line 9 (`**/.DS_Store`) already covers the root directory. The explicit `.DS_Store` on line 10 is unnecessary.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.gitignore around lines 9 - 10, Remove the redundant `.DS_Store` pattern
from the .gitignore: keep the recursive glob pattern `**/.DS_Store` and delete
the explicit `.DS_Store` line so the single entry covers both root and nested
`.DS_Store` files.
| { | ||
| "tag": "!", | ||
| "color": "#00FF00", | ||
| "strikethrough": false, | ||
| "underline": false, | ||
| "backgroundColor": "#FF00FF", | ||
| "bold": true, | ||
| "italic": false | ||
| }, |
🧹 Nitpick | 🔵 Trivial
That color combo is going to melt someone's retinas, bud.
#00FF00 (pure neon green) on #FF00FF (pure magenta) is about as subtle as a foghorn. It technically works, but it's going to be visually aggressive for anything longer than a glance. Consider toning down either the foreground or background — e.g., a softer green like #39FF14 on a dark/transparent background, or keeping the magenta background but using a dark foreground for contrast.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.vscode/settings.json around lines 40 - 48, The current color pair uses
color "#00FF00" and backgroundColor "#FF00FF" which is visually harsh; update
the object with a less aggressive combination by either changing "color" to a
softer green like "#39FF14" or switching "color" to a dark foreground such as
"#1A1A1A" while keeping "backgroundColor" as the magenta, or alternatively tone
down "backgroundColor" to a darker/translucent magenta; modify the keys "color"
and/or "backgroundColor" in the object where "tag": "!" is defined to implement
one of these options.
| "rust-analyzer.inlayHints.typeHints.enable": false, | ||
| "rust-analyzer.inlayHints.renderColons": false, | ||
| "python.analysis.ignore": [".venv/**"] | ||
| "rust-analyzer.inlayHints.renderColons": false |
There was a problem hiding this comment.
Heads up — dropping `python.analysis.ignore` for `.venv/**` will hurt you.
The AI summary notes that "python.analysis.ignore": [".venv/**"] was removed. Without it, Pylance now crawls your entire virtual environment on every analysis pass. You'll get a flood of false-positive errors and warnings from third-party package internals polluting the Problems panel. Put it back.
🛠️ Restore the ignore setting
"python.analysis.diagnosticMode": "workspace",
+ "python.analysis.ignore": [".venv/**"],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.vscode/settings.json at line 53, The removal of the
"python.analysis.ignore" setting causes Pylance to analyze the entire .venv and
produce noise; restore the JSON entry "python.analysis.ignore": [".venv/**"] in
the .vscode/settings.json so the analyzer ignores the virtual environment
directory (ensure the key name is exact and the pattern ".venv/**" is present
under the root settings object).
| use self::super::{ | ||
| solver::{ | ||
| nonterminal::{ | ||
| Item, | ||
| NonTerminal | ||
| }, | ||
| context::Context | ||
| }, | ||
| syntax::{ | ||
| Start, | ||
| level1::{ | ||
| Definition, | ||
| Function, | ||
| Level1, | ||
| Node, | ||
| Equation, | ||
| Use | ||
| }, | ||
| level2::{ | ||
| Level2, | ||
| Expression | ||
| }, | ||
| level3::{ | ||
| Level3, | ||
| Term | ||
| }, | ||
| level4::{ | ||
| Limit, | ||
| Level4, | ||
| Factor | ||
| }, | ||
| level5::{ | ||
| Absolute, | ||
| Level5, | ||
| Infinite, | ||
| Variable, | ||
| Nest, | ||
| Tensor, | ||
| Whole, | ||
| Undefined, | ||
| Rational, | ||
| Call | ||
| } | ||
| } | ||
| }; |
🧩 Analysis chain
🏁 Script executed:
fd -t f "mod.rs" backends/ --exec wc -l {} \;
Repository: alejandro-vaz/mathsys
Length of output: 83
🏁 Script executed:
cat -n backends/mod.rs
Repository: alejandro-vaz/mathsys
Length of output: 1779
🏁 Script executed:
# Check if syntax types are used in the backends module
rg "Start|Definition|Function|Level1|Node|Equation|Use|Level2|Expression|Level3|Term|Level4|Limit|Factor|Level5|Absolute|Infinite|Variable|Nest|Tensor|Whole|Undefined|Rational|Call" backends/ -t rust --no-filename -n
Repository: alejandro-vaz/mathsys
Length of output: 962
🏁 Script executed:
# Check what's in the latex module
cat -n backends/latex.rs | head -100
Repository: alejandro-vaz/mathsys
Length of output: 3158
Remove unused syntax imports from `backends/mod.rs`.
Your imports of `Start`, `Definition`, `Function`, `Level1`, `Node`, `Equation`, `Use`, `Level2`, `Expression`, `Level3`, `Term`, `Level4`, `Limit`, `Factor`, `Level5`, `Absolute`, `Infinite`, `Variable`, `Nest`, `Tensor`, `Whole`, `Undefined`, `Rational`, and `Call` aren't used anywhere in this file. Not in the trait definitions, not in the latex module — nowhere. Even the `#[dispatch]` macro doesn't pull these in. Clean it up and keep only what you actually use: `Item`, `NonTerminal`, and `Context`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backends/mod.rs` around lines 14 - 58, Remove the unused syntax imports from
the large use block and keep only the actual symbols in use: Item, NonTerminal
and Context; specifically, update the use self::super::{ solver::{
nonterminal::{ Item, NonTerminal }, context::Context } } import pattern and
delete all the unused syntax imports (Start, Definition, Function, Level1, Node,
Equation, Use, Level2, Expression, Level3, Term, Level4, Limit, Factor, Level5,
Absolute, Infinite, Variable, Nest, Tensor, Whole, Undefined, Rational, Call) so
the module only imports Item, NonTerminal and Context.
| //^ HEAD | ||
| //^ | ||
|
|
||
| use core::panic; |
🧹 Nitpick | 🔵 Trivial
`use core::panic;` is unused — remove it.
You're using the `panic!()` macro throughout, which is always in scope. This import brings in the `core::panic` module (not the macro), and nothing in this file references it.
♻️ Remove unused import
-use core::panic;
-
| use core::panic; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@syntax/level5.rs` at line 5, The import use core::panic; is unused and
shadows nothing — remove that line from the file; leave calls to the panic!()
macro as-is (panic! is a macro in scope, not provided by the core::panic
module), so delete the use core::panic; statement referencing core::panic to fix
the unused import warning.
| let (start, end) = match self.stream.len() { | ||
| 0 => ("", ""), | ||
| 1 => (r"\(", r"\)"), | ||
| other => (r"\[", r"\]") |
Unused variable binding `other` on line 45.
This should be `_` since the value isn't used. I know you've got `#![allow(unused_variables)]` up top, but that's a crutch, not a license.
Proposed fix
- other => (r"\[", r"\]")
+ _ => (r"\[", r"\]")
| let (start, end) = match self.stream.len() { | |
| 0 => ("", ""), | |
| 1 => (r"\(", r"\)"), | |
| other => (r"\[", r"\]") | |
| let (start, end) = match self.stream.len() { | |
| 0 => ("", ""), | |
| 1 => (r"\(", r"\)"), | |
| _ => (r"\[", r"\]") |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@syntax/mod.rs` around lines 42 - 45, The match on self.stream.len() creates
an unused binding named other; change that pattern from other => (r"\[", r"\]")
to _ => (r"\[", r"\]") so the unused value is not bound (update the match in the
function/method containing self.stream.len()).
| } impl Spawn for Start {fn spawn(items: Vec<Item>, context: Option<&mut Context>) -> NonTerminal { | ||
| return NonTerminal::Start(Self { | ||
| stream: items.into_iter().map(|element| if let Item::NonTerminal(NonTerminal::Level1(level1)) = element {level1} else {panic!("{element:?}")}).collect() | ||
| }) | ||
| }} impl Start {pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {return Ok(for statement in &mut self.stream {if let Level1::Use(element) = statement {element.load(transformers, settings)?}})}} No newline at end of file |
🛠️ Refactor suggestion | 🟠 Major
These one-liners are doing way too much — break them up for readability.
Lines 49–53 cram entire trait implementations and a method body onto single lines. I get wanting to be terse, but this is a maintenance nightmare. The `spawn` impl has a `panic!` buried inside a `map`/`collect` chain, and `modules` wraps a for-loop in `Ok(...)` — both deserve room to breathe.
Specifically:
- `Spawn::spawn` — the `if let ... else { panic! }` inside `map` is invisible at a glance. Destructuring failures here will produce a useless panic message (`"{element:?}"` with no context about what went wrong).
- `Start::modules` — `Ok(for statement in ... { ... })` is valid Rust (a for-loop returns `()`) but deeply unconventional. Just write the loop, then `Ok(())`.
Suggested restructure for modules
-impl Start {pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {return Ok(for statement in &mut self.stream {if let Level1::Use(element) = statement {element.load(transformers, settings)?}})}}
+impl Start {
+ pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {
+ for statement in &mut self.stream {
+ if let Level1::Use(element) = statement {
+ element.load(transformers, settings)?;
+ }
+ }
+ Ok(())
+ }
+}
| } impl Spawn for Start {fn spawn(items: Vec<Item>, context: Option<&mut Context>) -> NonTerminal { | |
| return NonTerminal::Start(Self { | |
| stream: items.into_iter().map(|element| if let Item::NonTerminal(NonTerminal::Level1(level1)) = element {level1} else {panic!("{element:?}")}).collect() | |
| }) | |
| }} impl Start {pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> {return Ok(for statement in &mut self.stream {if let Level1::Use(element) = statement {element.load(transformers, settings)?}})}} | |
| } impl Spawn for Start {fn spawn(items: Vec<Item>, context: Option<&mut Context>) -> NonTerminal { | |
| return NonTerminal::Start(Self { | |
| stream: items.into_iter().map(|element| if let Item::NonTerminal(NonTerminal::Level1(level1)) = element {level1} else {panic!("{element:?}")}).collect() | |
| }) | |
| }} impl Start { | |
| pub(super) fn modules(&mut self, transformers: &Transformers, settings: &Settings) -> Result<(), Issue> { | |
| for statement in &mut self.stream { | |
| if let Level1::Use(element) = statement { | |
| element.load(transformers, settings)?; | |
| } | |
| } | |
| Ok(()) | |
| } | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@syntax/mod.rs` around lines 49 - 53, The one-line impls should be expanded
and made explicit: in impl Spawn for Start refactor spawn to iterate items with
a match (or map with a closure that returns Result) instead of burying an if-let
panic inside map — use a clear panic or better return an Err with context naming
the unexpected Item variant when matching
Item::NonTerminal(NonTerminal::Level1(level1)) fails, and construct
NonTerminal::Start(Self { stream }) from the collected Vec; in impl Start
refactor modules to write the for statement as a normal loop over &mut
self.stream that matches Level1::Use(element) and calls
element.load(transformers, settings)?; after the loop return Ok(()) instead of
wrapping the for-loop in Ok(...). Ensure references to Spawn::spawn,
Start::modules, Item::NonTerminal, NonTerminal::Start, and Level1::Use are used
to locate the code.
| match token.kind { | ||
| Kind::NEWLINES => {position.line += length as u32; position.column = 1}, | ||
| Kind::ENDOFFILE => {tokens.push(token); return Ok(tokens)}, | ||
| other => position.column += length as u32 | ||
| } | ||
| tokens.push(token); | ||
| position.cursor += length as u64; |
Unused binding `other` on line 65 — use `_`.
Same habit as in `syntax/mod.rs`. You're matching on `token.kind` and not using `other`. Be consistent.
- other => position.column += length as u32
+ _ => position.column += length as u32🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tokenizer/mod.rs` around lines 62 - 68, The match arm binds an unused
variable named `other` when matching `token.kind`; change that unused binding to
`_` to avoid the warning — update the match in the loop that inspects
`token.kind` (the arms for `Kind::NEWLINES` and `Kind::ENDOFFILE` remain the
same) so the catch-all arm uses `_` instead of `other`, keeping the existing
side-effect of updating `position.column`.
| //> INSTANCES -> SHALLOW | ||
| #[derive(Debug)] | ||
| pub struct ShallowToken { | ||
| start: u64, | ||
| kind: Kind | ||
| } No newline at end of file |
🧩 Analysis chain
🏁 Script executed:
# Search for ShallowToken usage and context
rg -n 'ShallowToken' --type=rust -B 2 -A 2
Repository: alejandro-vaz/mathsys
Length of output: 1380
🏁 Script executed:
# Search for Data::Tokens to verify if it's publicly exposed
rg -n 'Data.*Tokens|Tokens.*pub' --type=rust -B 2 -A 2
Repository: alejandro-vaz/mathsys
Length of output: 925
🏁 Script executed:
# Check the module structure and any pub implementations for ShallowToken
ast-grep --pattern 'impl ShallowToken {
$$$
}'
Repository: alejandro-vaz/mathsys
Length of output: 47
`ShallowToken` is intentionally opaque — but make sure that's actually what you want.
You're exposing `Vec<ShallowToken>` publicly via `Data::Tokens`, but callers can only debug-print them. They can't touch `start` or `kind` without public accessors. That's fine if tokens are just diagnostic blobs, but if consumers ever need to read those fields, you'll regret locking them down. Right now there's no way to access them — so either add getters or at least document why this is locked. Don't leave future-you guessing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tokenizer/token.rs` around lines 107 - 112, ShallowToken is currently opaque
(fields start and kind are private) but Vec<ShallowToken> is exposed via
Data::Tokens; either make the fields intentionally accessible or document the
opacity. Fix by adding public accessor methods on ShallowToken (e.g., pub fn
start(&self) -> u64 and pub fn kind(&self) -> Kind) so callers can read those
fields, or add a clear doc comment on the ShallowToken struct explaining it is
intentionally opaque and only for debugging/diagnostic use; reference
ShallowToken, its fields start and kind, and Data::Tokens when making the
change.
Summary by CodeRabbit
New Features
Documentation
Chores
`strum` dependency for enum serialization support