[codex] Add function-based batched evaluation API #76
Draft
Summary

- Added `batchedf!(values, indices)`: in-place batch evaluation for TCI2, where `indices` has shape `(length(localdims), npoints)` and each column is one point.
- Removed the `BatchEvaluator`/`ThreadedBatchEvaluator`/`makebatchevaluatable` API; `batchedf!` is now the only batch evaluation interface.
- Updated the documentation with `batchedf!` examples.

Motivation
The previous inheritance-based design forced libraries that wanted to provide batch evaluation to depend on TCI.jl types just to implement the interface. That extra coupling is awkward for downstream packages and makes interop harder than necessary.
A plain function argument is closer to Julia/Python style: users can pass a closure, a callable object, or a backend-specific function directly, without defining a subtype. This is more explicit at the call site and easier to understand than making `f` secretly batch-capable through inheritance.

The mutating `batchedf!` API also makes allocation ownership clear: TCI.jl allocates the output buffer, and the caller fills it. The layout follows the Torch-style convention of treating the rightmost dimension as the batch dimension. Here, `indices[:, p]` is one point, and `axes(indices, 2)` iterates over the batch.

Breaking Changes
- Removed `BatchEvaluator`, `ThreadedBatchEvaluator`, and `makebatchevaluatable`.
- Removed the `batchedf` keyword from this PR branch.
- Batch evaluation is now requested via `crossinterpolate2(...; batchedf!)` and `optimize!(...; batchedf!)`.

New API Sketch
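As a hedged sketch of the contract described below (the names `f`, `mybatchedf!`, and `mythreadedbatchedf!` are illustrative stand-ins, not part of TCI.jl):

```julia
# Illustrative stand-in for the user's scalar function; here one
# index column simply maps to the sum of its entries.
f(idx) = sum(idx)

# Serial batchedf!: `indices` has shape (length(localdims), npoints);
# column p is one point, and its value is written to values[p].
function mybatchedf!(values::AbstractVector, indices::AbstractMatrix{<:Integer})
    for p in axes(indices, 2)
        values[p] = f(@view indices[:, p])
    end
    return values  # ignored by the caller; returning values is conventional
end

# Thread-parallel variant: columns are independent, so iterating the
# batch dimension with Threads.@threads is safe.
function mythreadedbatchedf!(values::AbstractVector, indices::AbstractMatrix{<:Integer})
    Threads.@threads for p in axes(indices, 2)
        values[p] = f(@view indices[:, p])
    end
    return values
end

indices = [1 2; 1 1; 2 2]   # three local dimensions, two points
values = Vector{Float64}(undef, size(indices, 2))
mybatchedf!(values, indices)   # values is now [4.0, 5.0]
```

Per the PR, such a function would be passed as a keyword argument, e.g. `crossinterpolate2(...; batchedf! = mybatchedf!)`, where the ellipsis stands for the usual positional arguments.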
`batchedf!` receives a preallocated output vector and an integer matrix of shape `(length(localdims), npoints)`. The second dimension is the batch dimension, and each column is one global index set. The function must write one value per column to `values[p]`, in the same order as the columns. Its return value is ignored, though returning `values` is conventional. Thread-parallel evaluation can be written directly inside `batchedf!`.

Validation
- `julia --project=. test/test_batcheval.jl` -> 14/14 pass
- `julia --project=. test/test_cachedfunction.jl` -> 104/104 pass
- `julia --project=. test/test_tensorci2.jl` -> 2246/2246 pass
- `julia --project=. -e 'using Pkg; Pkg.test()'` -> full suite passed

Notes