[Relax][Frontend][TFLite] Add DENSIFY operator test and fix prefetched handling#19421

Open
Aharrypotter wants to merge 2 commits into apache:main from Aharrypotter:tflite-densify-test-18971

Conversation

@Aharrypotter
Contributor

Summary

This PR adds test coverage for the TFLite DENSIFY operator, as requested in issue #18971, and fixes several related bugs in the TFLite frontend.
DENSIFY converts sparse weight tensors to dense format at conversion time (not runtime). The dense weights become constants in the output IR via the prefetched_nodes mechanism.
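Conceptually, the densification step is just a CSR-to-dense expansion performed once at conversion time. The sketch below illustrates the idea in plain Python; `csr_to_dense` is a hypothetical helper, not the frontend's actual function.

```python
def csr_to_dense(shape, segments, indices, values):
    """Expand CSR-encoded 2-D weights into a dense row-major matrix.

    segments[r]..segments[r+1] delimit the nonzeros of row r;
    indices holds their column positions, values their data.
    """
    rows, cols = shape
    dense = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for k in range(segments[r], segments[r + 1]):
            dense[r][indices[k]] = values[k]
    return dense

# A 2x3 weight with nonzeros at (0, 1) = 5.0 and (1, 2) = 7.0:
weights = csr_to_dense((2, 3), [0, 1, 2], [1, 2], [5.0, 7.0])
```

Because this expansion happens during conversion, no DENSIFY op appears in the output IR; the dense matrix simply becomes a constant.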

Changes

Bug Fixes

  1. `convert_op_to_relax`: Check `ret is None` before calling `normalize(ret)` to avoid a crash when DENSIFY returns None.
  2. `get_tensor_expr`: Add an `is_prefetched()` check before `get_tensor_value()` to handle DENSIFY outputs with empty buffers.
  3. `convert_fully_connected`: Add prefetched weight handling for weights coming from DENSIFY.
  4. `convert_transpose_conv`: Add prefetched weight handling (same issue as `convert_fully_connected`).
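In miniature, the guards added by fixes 1 and 2 look like the following. This is a simplified sketch: `MiniConverter` and its methods are stand-ins for the real frontend classes, not the actual TVM code.

```python
class MiniConverter:
    """Toy model of the TFLite frontend's prefetched-node handling."""

    def __init__(self):
        # tensor index -> densified constant produced at conversion time
        self.prefetched_nodes = {}

    def convert_densify(self, op):
        # DENSIFY yields a constant, not a runtime op, so it returns None
        self.prefetched_nodes[op["output"]] = op["dense_value"]
        return None

    def convert_op_to_relax(self, op, convert_fn):
        ret = convert_fn(op)
        if ret is None:  # fix 1: skip normalize() when DENSIFY returns None
            return None
        return self.normalize(ret)

    def is_prefetched(self, tensor_idx):
        return tensor_idx in self.prefetched_nodes

    def get_tensor_expr(self, tensor_idx, buffer):
        # fix 2: a prefetched output has an empty buffer, so check first
        if self.is_prefetched(tensor_idx):
            return self.prefetched_nodes[tensor_idx]
        return buffer  # placeholder for the real get_tensor_value(buffer)

    def normalize(self, expr):
        return expr
```

Fixes 3 and 4 apply the same `is_prefetched()` branch inside the fully-connected and transpose-conv converters, which previously assumed weights always came from a buffer.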

Tests

Four test cases covering different DENSIFY usage scenarios:

| Test | Downstream Op | Purpose |
| --- | --- | --- |
| `test_densify` | None | Basic DENSIFY to constant |
| `test_densify_with_add` | ADD | Prefetched as regular input |
| `test_densify_with_conv2d` | CONV2D | Network-level test (2D conv) |
| `test_densify_with_fully_connected` | FULLY_CONNECTED | Network-level test (FC layer) |

Note: Sparse TFLite models are built manually using the flatbuffers API (TensorFlow does not provide an API for creating sparse models).

Testing

All tests pass:

```bash
pytest tests/python/relax/test_frontend_tflite.py::test_densify \
       tests/python/relax/test_frontend_tflite.py::test_densify_with_add \
       tests/python/relax/test_frontend_tflite.py::test_densify_with_conv2d \
       tests/python/relax/test_frontend_tflite.py::test_densify_with_fully_connected -v
```
  • test_densify PASSED
  • test_densify_with_add PASSED
  • test_densify_with_conv2d PASSED
  • test_densify_with_fully_connected PASSED

References

Issue #18971 : TFLite operator test coverage tracking
Related: #19408 (MATRIX_DIAG, MATRIX_SET_DIAG, SPARSE_TO_DENSE tests)

…ed node handling

This PR adds test coverage for the TFLite DENSIFY operator, which converts
sparse weight tensors to dense format at conversion time (not runtime).

## Bug Fixes

1. **convert_op_to_relax**: Check `ret is None` before `normalize(ret)` to
   avoid null AST traversal crash when DENSIFY returns None.

2. **get_tensor_expr**: Add prefetched node check before `get_tensor_value()`
   to handle DENSIFY outputs with empty buffers.

3. **convert_fully_connected**: Add prefetched weight handling for weights
   coming from DENSIFY.

4. **convert_transpose_conv**: Add prefetched weight handling (same issue as
   convert_fully_connected).

## Test Coverage

Four test cases covering different DENSIFY usage scenarios:
- `test_densify`: Basic DENSIFY conversion to constant
- `test_densify_with_add`: DENSIFY followed by element-wise ADD
- `test_densify_with_conv2d`: Network-level test with sparse conv2d (2D kernel)
- `test_densify_with_fully_connected`: Network-level test with sparse FC (4x4 weight)

Sparse TFLite models are built manually using the flatbuffers API with CSR-format sparsity. OperatorCode sets both DeprecatedBuiltinCode and BuiltinCode for schema compatibility.
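For reference, the CSR arrays that the tests encode into the flatbuffers `dim_metadata` can be derived from a dense weight like this. This is an illustrative sketch only; `dense_to_csr` is a hypothetical helper, and the tests build the equivalent structures by hand with the flatbuffers builder.

```python
def dense_to_csr(matrix):
    """Return (array_segments, array_indices, values) for a 2-D matrix,
    matching a CSR encoding along the second dimension."""
    segments, indices, values = [0], [], []
    for row in matrix:
        for col, v in enumerate(row):
            if v != 0:
                indices.append(col)
                values.append(v)
        segments.append(len(values))
    return segments, indices, values

# Diagonal 4x4 weight, like the FC test's sparse weight shape:
w = [
    [1, 0, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 3, 0],
    [0, 0, 0, 4],
]
segments, indices, values = dense_to_csr(w)
```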

## Testing

All tests pass:
```bash
pytest tests/python/relax/test_frontend_tflite.py::test_densify \
       tests/python/relax/test_frontend_tflite.py::test_densify_with_add \
       tests/python/relax/test_frontend_tflite.py::test_densify_with_conv2d \
       tests/python/relax/test_frontend_tflite.py::test_densify_with_fully_connected -v
```

gemini-code-assist (bot) left a comment

Code Review

This pull request introduces support for the TFLite DENSIFY operator in the Relax frontend, enabling sparse weight tensors to be converted to dense constants during model conversion. The implementation includes a new get_tensor_value_or_prefetched helper and refactors get_tensor_expr to handle prefetched nodes. Comprehensive tests were added, utilizing manually constructed sparse TFLite models to verify the conversion logic across various operators. Feedback suggests simplifying the convert_fully_connected implementation by using the get_tensor_expr helper method to reduce redundancy.

Comment thread python/tvm/relax/frontend/tflite/tflite_frontend.py Outdated
  This commit fixes two issues in the DENSIFY test suite that caused
  failures in TVM's CI environment:

  1. Remove dependency on tfl.Int32VectorStartValuesVector
     - This flatbuffers helper is absent in older tflite package versions
       used in CI Docker images.
     - _tflite_int32_table now builds the int32 vector manually using
       builder.StartVector(4, len(values), 4).

  2. Fix flatbuffers IsNestedError in _build_buffer
     - The original implementation called _tflite_byte_vector (which
       invokes builder.StartVector) after tfl.BufferStart, violating
       flatbuffers' no-nesting rule.
     - Fixed by creating the data vector before entering the Buffer table.

  Also simplifies convert_fully_connected to use get_tensor_expr
  directly, as suggested by gemini-code-assist, since the prefetched
  node handling is already encapsulated there.
Aharrypotter force-pushed the tflite-densify-test-18971 branch from f62e555 to 190652e on April 20, 2026.