[Relax][Frontend][TFLite] Add DENSIFY operator test and fix prefetched handling #19421
Open

Aharrypotter wants to merge 2 commits into apache:main from
Conversation
…ed node handling
This PR adds test coverage for the TFLite DENSIFY operator, which converts
sparse weight tensors to dense format at conversion time (not runtime).
## Bug Fixes
1. **convert_op_to_relax**: Check `ret is None` before `normalize(ret)` to
avoid null AST traversal crash when DENSIFY returns None.
2. **get_tensor_expr**: Add prefetched node check before `get_tensor_value()`
to handle DENSIFY outputs with empty buffers.
3. **convert_fully_connected**: Add prefetched weight handling for weights
coming from DENSIFY.
4. **convert_transpose_conv**: Add prefetched weight handling (same issue as
convert_fully_connected).
## Test Coverage
Four test cases covering different DENSIFY usage scenarios:
- `test_densify`: Basic DENSIFY conversion to constant
- `test_densify_with_add`: DENSIFY followed by element-wise ADD
- `test_densify_with_conv2d`: Network-level test with sparse conv2d (2D kernel)
- `test_densify_with_fully_connected`: Network-level test with sparse FC (4x4 weight)
Sparse TFLite models are built manually using flatbuffers API with CSR format
sparsity. OperatorCode sets both DeprecatedBuiltinCode and BuiltinCode for
schema compatibility.
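To illustrate what DENSIFY computes for the CSR-encoded weights used in these tests, here is a small pure-Python densification of a CSR matrix (values, row pointers, column indices). This mirrors the conversion-time behaviour conceptually; it is not a TVM or TFLite API.

```python
def csr_to_dense(values, row_ptr, col_idx, shape):
    """Expand a CSR-encoded sparse matrix into a dense row-major list of lists.

    values  - nonzero entries, stored row by row
    row_ptr - row_ptr[i]:row_ptr[i+1] slices values/col_idx for row i
    col_idx - column index of each nonzero entry
    """
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            dense[r][col_idx[k]] = values[k]
    return dense


# A 3x4 matrix with nonzeros 5, 8, 3, 6:
dense = csr_to_dense(
    values=[5, 8, 3, 6],
    row_ptr=[0, 1, 3, 4],
    col_idx=[0, 1, 3, 2],
    shape=(3, 4),
)
# [[5, 0, 0, 0], [0, 8, 0, 3], [0, 0, 6, 0]]
```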
## Testing
All tests pass:
```bash
pytest tests/python/relax/test_frontend_tflite.py::test_densify \
tests/python/relax/test_frontend_tflite.py::test_densify_with_add \
tests/python/relax/test_frontend_tflite.py::test_densify_with_conv2d \
tests/python/relax/test_frontend_tflite.py::test_densify_with_fully_connected -v
```
Contributor
Code Review
This pull request introduces support for the TFLite DENSIFY operator in the Relax frontend, enabling sparse weight tensors to be converted to dense constants during model conversion. The implementation includes a new get_tensor_value_or_prefetched helper and refactors get_tensor_expr to handle prefetched nodes. Comprehensive tests were added, utilizing manually constructed sparse TFLite models to verify the conversion logic across various operators. Feedback suggests simplifying the convert_fully_connected implementation by using the get_tensor_expr helper method to reduce redundancy.
tlopex requested changes on Apr 20, 2026
This commit fixes two issues in the DENSIFY test suite that caused
failures in TVM's CI environment:
1. Remove dependency on tfl.Int32VectorStartValuesVector
- This flatbuffers helper is absent in older tflite package versions
used in CI Docker images.
- _tflite_int32_table now builds the int32 vector manually using
builder.StartVector(4, len(values), 4).
2. Fix flatbuffers IsNestedError in _build_buffer
- The original implementation called _tflite_byte_vector (which
invokes builder.StartVector) after tfl.BufferStart, violating
flatbuffers' no-nesting rule.
- Fixed by creating the data vector before entering the Buffer table.
Also simplifies convert_fully_connected to use get_tensor_expr
directly, as suggested by gemini-code-assist, since the prefetched
node handling is already encapsulated there.
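The no-nesting rule behind the second fix can be illustrated with a tiny stand-in builder (not the real flatbuffers API) that fails the same way when one object is started while another is still under construction:

```python
class NestedError(Exception):
    """Stand-in for flatbuffers' nested-construction error."""


class TinyBuilder:
    """Minimal mock of a flatbuffers-style builder: only one object
    (table or vector) may be under construction at a time."""

    def __init__(self):
        self.nested = False

    def _begin(self):
        if self.nested:
            raise NestedError("object construction already in progress")
        self.nested = True

    def start_vector(self):
        self._begin()

    def end_vector(self):
        self.nested = False
        return "data-vector-offset"

    def start_table(self):
        self._begin()

    def end_table(self):
        self.nested = False
        return "buffer-table-offset"


b = TinyBuilder()

# Buggy order from the original _build_buffer: the data vector was
# started while the Buffer table was still open.
b.start_table()
try:
    b.start_vector()  # raises: nested construction
except NestedError as e:
    print("buggy order:", e)
b.end_table()

# Fixed order: finish the data vector first, then build the Buffer
# table that references its offset.
b.start_vector()
data = b.end_vector()
b.start_table()
table = b.end_table()
```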
Force-pushed from f62e555 to 190652e
Summary
This PR adds test coverage for the TFLite DENSIFY operator, as requested in issue #18971, and fixes several related bugs in the TFLite frontend.
DENSIFY converts sparse weight tensors to dense format at conversion time (not runtime). The dense weights become constants in the output IR via the
`prefetched_nodes` mechanism.

## Changes
### Bug Fixes
- `convert_op_to_relax`: check `ret is None` before `normalize(ret)` to avoid a crash when DENSIFY returns `None`.
- `get_tensor_expr`: add an `is_prefetched()` check before `get_tensor_value()` to handle DENSIFY outputs with empty buffers.
- `convert_fully_connected`: add prefetched weight handling.
- `convert_transpose_conv`: add prefetched weight handling.

### Tests
Four test cases covering different DENSIFY usage scenarios:
- `test_densify`
- `test_densify_with_add`
- `test_densify_with_conv2d`
- `test_densify_with_fully_connected`

Note: Sparse TFLite models are built manually using the flatbuffers API (TensorFlow does not provide an API for creating sparse models).
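The prefetched-node check in the fixes above can be sketched as follows. `get_tensor_expr` and `get_tensor_value` exist in the frontend, but these stub bodies and the `TensorStub`/`FrontendStub` classes are simplified assumptions, not TVM code:

```python
# Illustrative sketch of the prefetched-node check; not the actual
# TVM TFLite frontend implementation.

class TensorStub:
    def __init__(self, name, buffer):
        self.name = name
        self.buffer = buffer  # empty for DENSIFY outputs


class FrontendStub:
    def __init__(self):
        self.prefetched_nodes = {}

    def is_prefetched(self, name):
        return name in self.prefetched_nodes

    def get_tensor_value(self, tensor):
        # Reading an empty buffer is the crash the fix avoids.
        if not tensor.buffer:
            raise ValueError(f"empty buffer for {tensor.name}")
        return tensor.buffer

    def get_tensor_expr(self, tensor):
        # The fix: consult prefetched_nodes before reading the buffer,
        # since DENSIFY outputs carry their data there, not in the model.
        if self.is_prefetched(tensor.name):
            return self.prefetched_nodes[tensor.name]
        return self.get_tensor_value(tensor)


fe = FrontendStub()
fe.prefetched_nodes["w"] = [1, 2, 3]            # densified by DENSIFY
print(fe.get_tensor_expr(TensorStub("w", [])))  # [1, 2, 3], no buffer read
```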
### Testing
All tests pass:
```bash
pytest tests/python/relax/test_frontend_tflite.py::test_densify \
  tests/python/relax/test_frontend_tflite.py::test_densify_with_add \
  tests/python/relax/test_frontend_tflite.py::test_densify_with_conv2d \
  tests/python/relax/test_frontend_tflite.py::test_densify_with_fully_connected -v
```

- `test_densify` PASSED
- `test_densify_with_add` PASSED
- `test_densify_with_conv2d` PASSED
- `test_densify_with_fully_connected` PASSED

### References
Issue #18971: TFLite operator test coverage tracking
Related: #19408 (MATRIX_DIAG, MATRIX_SET_DIAG, SPARSE_TO_DENSE tests)