[AIROCMLIR-445] Lower linalg.generic convolution into rock #2252
Mr-Anyone wants to merge 6 commits into pr-template-migraphx-to-linalg-conv
Force-pushed: 8e87ba8 to 205ea46
linalg::GenericOp castedOp = dyn_cast<linalg::GenericOp>(op);
if (castedOp &&
    llvm::any_of(castedOp.getIteratorTypesArray(), [](auto type) {
      return linalg::isReductionIterator(type);
    })) {
  return false;
}
In theory this could lead to non-conv linalg.generics to be accidentally picked up here. Could we look for generics that were marked with the conv_op attribute instead?
That could work for now.
Originally, my intention was to catch every linalg.generic that has a reduction iterator, since rock-regularize doesn't support them.
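A minimal sketch of the suggested legality tweak, assuming the MIGraphX path tags its generics with a `conv_op` attribute (the attribute name is taken from this thread; the actual conversion-target setup in the pass may differ):

```
// Hypothetical sketch: only generics carrying the conv_op marker become
// illegal, so unrelated reduction-iterator generics are left alone.
target.addDynamicallyLegalOp<linalg::GenericOp>([](linalg::GenericOp op) {
  return !op->hasAttr("conv_op");
});
```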
rocMLIR/mlir/lib/Dialect/Rock/Transforms/Regularize.cpp
Lines 138 to 141 in 51df5f4
FailureOr<ConvFields>
ConvLinalgConverter::isConv(ConversionPatternRewriter &rewriter,
                            linalg::GenericOp op) const {
  // FIXME: In the future, strides, dilation, and padding can be extracted
Nit: I'm not a fan of FIXMEs or TODOs sprinkled throughout the codebase without corresponding tickets opened for them. Can you make sure that there is a ticket open for this?
case 2:
  return filter;
case 1: {
  // Conv1D is expanded into Conv2D (matching migraphx-to-tosa): unmerge
Nit: Moving forward, the linalg path is going to replace tosa, so there is no need to reference a pipeline that will no longer exist.
I don't think that LinalgToRock should have any e2e tests in its directory hierarchy. Anything under Conversion/LinalgToRock should just be LIT tests exercising this specific pass. You can move the e2e tests to where all of the other e2e tests are.
Same as my earlier comment. I don't love the idea of having e2e tests in a Conversion/* folder. We should try our best to keep these separate as much as possible. The same thing applies to all of the other files in this directory.
I moved it to test/fusion/pr-e2e
auto convertToArrayAttr =
    [&](Attribute arr, ArrayRef<int64_t> dimOneDefaults = {}) -> ArrayAttr {
  SmallVector<int64_t, 4> values;
  llvm::transform(
      cast<ArrayAttr>(arr).getValue(), std::back_inserter(values),
      [](Attribute val) { return cast<IntegerAttr>(val).getInt(); });
  // Conv1D is expanded into Conv2D to match the migraphx-to-tosa pipeline.
  // Append identity defaults (stride=1, dilation=1, pad=0) for the extra
  // spatial dimension.
  if (spatialDim == 1)
    values.insert(values.end(), dimOneDefaults.begin(),
                  dimOneDefaults.end());
  return rewriter.getIndexArrayAttr(values);
};

auto dilation =
    convertToArrayAttr(op->getAttr("dilation"), /*dimOneDefaults=*/1);
auto stride =
    convertToArrayAttr(op->getAttr("stride"), /*dimOneDefaults=*/1);
I think this could potentially cause a crash if we have a linalg.generic with a conv_op attribute but it's missing one of dilation, stride, or pad. In this case wouldn't op->getAttr() return a null attribute that would cause a crash when we try to cast it in the lambda?
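One way to address this, sketched under the assumption that failing the match is acceptable when the annotation is incomplete (attribute names taken from the diff above):

```
// Sketch: verify each attribute exists and is an ArrayAttr before the
// lambda casts it; a conv_op-tagged generic missing "stride", "dilation",
// or "padding" then fails the match instead of crashing on a null cast.
for (StringRef name : {"stride", "dilation", "padding"}) {
  Attribute attr = op->getAttr(name);
  if (!attr || !isa<ArrayAttr>(attr))
    return failure();
}
```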
rewriter.replaceOp(expanded, result);
rewriter.eraseOp(padded);
Probably good to sanity check/assert that padded only has a single use before it gets deleted.
I moved it to line 211.
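The suggested sanity check could look like the following sketch (`padded` stands for the tensor.pad op as in the diff above):

```
// Sketch: assert the tensor.pad result has exactly one use (the conv
// input) before the op is erased.
assert(padded->hasOneUse() &&
       "expected tensor.pad to have a single use before erasure");
rewriter.eraseOp(padded);
```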
Could we add some more scenarios to this test?
- Different data types
- Asymmetric padding
- Maybe a negative test without the conv_op attribute
Sure, I have added a lot more testing.

> Maybe a negative test without the conv_op attribute

Currently, nothing happens to the linalg.generic if there is no conv_op attribute, because without it the pass cannot tell that the op is a convolution. I have added tests for cases where conv_op is present but the strides/padding are invalid or missing.
Pull request overview
This PR adds a new lowering in the LinalgToRock conversion to recognize convolution-shaped linalg.generic ops (annotated with MIGraphX-emitted conv attributes) and convert them into rock.conv, aligning the MIGraphX→Linalg path with the existing MIGraphX→TOSA→Rock behavior.
Changes:
- Add a linalg.generic (convolution) → rock.conv conversion pattern, including layout transforms and optional removal of tensor.pad.
- Update LinalgToRock legality so reduction-iterator linalg.generic ops are no longer considered legal (to force conversion).
- Add/extend lit + e2e tests to exercise the new lowering path and new pipeline combination.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| mlir/lib/Conversion/LinalgToRock/LinalgToRock.cpp | Implements convolution detection and rewrites to rock.conv (plus input/filter/output layout transforms and pad removal). |
| mlir/lib/Conversion/LinalgToRock/LinalgToRockPass.cpp | Updates conversion target legality to treat reduction-iterator linalg.generic as illegal. |
| mlir/test/Conversion/LinalgToRock/linalg-to-rock-conv.mlir | New lit test asserting linalg.generic conv lowers to rock.conv with expected attributes/layouts. |
| mlir/test/Conversion/LinalgToRock/e2e/linalg-to-rock-conv3d-no-pad.e2d.mlir | New e2e test to cover a no-padding conv3d case through the full pipeline. |
| mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv.cpu.mlir | Adds a “FINAL” run to validate the MIGraphX→Linalg path can continue lowering to Rock. |
| mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv1d-group.cpu.mlir | Same as above for 1D grouped conv. |
| mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv2d-group.cpu.mlir | Same as above for 2D grouped conv. |
| mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv3d-group.cpu.mlir | Same as above for 3D grouped conv. |
Comments suppressed due to low confidence (1)
mlir/test/Conversion/LinalgToRock/e2e/linalg-to-rock-conv3d-no-pad.e2d.mlir:1
- File name ends with .e2d.mlir while other tests in this folder use .e2e.mlir. If this is meant to be an end-to-end test, consider renaming for consistency/discoverability.
// RUN: rocmlir-gen -fut conv3d_add -arch %arch --clone-harness %s | rocmlir-driver -host-pipeline=migraphx-linalg,highlevel -kernel-pipeline=migraphx-linalg,highlevel -targets %arch | rocmlir-gen -ph -print-results -rand 1 -rand_type float -fut conv3d_add_wrapper --verifier clone - | rocmlir-driver -host-pipeline mhal -kernel-pipeline full | xmir-runner --shared-libs=%linalg_test_lib_dir/libmlir_rocm_runtime%shlibext,%conv_validation_wrapper_library_dir/libconv-validation-wrappers%shlibext,%linalg_test_lib_dir/libmlir_runner_utils%shlibext,%linalg_test_lib_dir/libmlir_float16_utils%shlibext,%linalg_test_lib_dir/libmlir_c_runner_utils%shlibext,%linalg_test_lib_dir/libmlir_async_runtime%shlibext --entry-point-result=void | FileCheck %s
// CHECK-LABEL: func.func @conv_2d
// CHECK: rock.conv({{.*}}) {dilations = [2 : index, 3 : index], filter_layout = ["g", "k", "c", "y", "x"], input_layout = ["ni", "gi", "ci", "hi", "wi"], output_layout = ["no", "go", "ko", "ho", "wo"], padding = [2 : index, 2 : index, 2 : index, 2 : index], strides = [4 : index, 5 : index]}
func.func @conv_2d(%arg0: tensor<122016xf32>, %arg1: tensor<320xf32>) -> tensor<8208xf32> attributes {arch = "##TOKEN_ARCH##", kernel} {
  %cst = arith.constant dense<0.000000e+00> : tensor<2x2x4x27x19xf32>
  %cst_0 = arith.constant 0.000000e+00 : f32
  %expanded = tensor.expand_shape %arg0 [[0, 1, 2, 3]] output_shape [2, 4, 123, 124] : tensor<122016xf32> into tensor<2x4x123x124xf32>
  %padded = tensor.pad %expanded low[0, 0, 2, 2] high[0, 0, 2, 2] {
  ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
    tensor.yield %cst_0 : f32
  } : tensor<2x4x123x124xf32> to tensor<2x4x127x128xf32>
  %expanded_1 = tensor.expand_shape %padded [[0], [1, 2], [3], [4]] output_shape [2, 2, 2, 127, 128] : tensor<2x4x127x128xf32> into tensor<2x2x2x127x128xf32>
  %expanded_2 = tensor.expand_shape %arg1 [[0, 1, 2, 3, 4]] output_shape [2, 4, 2, 4, 5] : tensor<320xf32> into tensor<2x4x2x4x5xf32>
This test is intended to validate that tensor.pad is removed because rock.conv handles padding internally, but the FileCheck patterns only assert that rock.conv exists. Add a CHECK-NOT: tensor.pad (or similar) in the conv2d/conv1d sections to prevent regressions where the pad-removal rewrite silently stops working.
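For example, the suggested guard could bracket the rock.conv match with CHECK-NOT lines (a sketch of the FileCheck pattern, not the exact test contents):

```
// CHECK-LABEL: func.func @conv_2d
// CHECK-NOT: tensor.pad
// CHECK: rock.conv
// CHECK-NOT: tensor.pad
```

Bracketing on both sides catches a leftover tensor.pad regardless of whether it ends up before or after the rock.conv in the output.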
linalg.generic convolution into rock
Force-pushed: c153ee4 to 9c0c5f2
I have cleaned up this PR compared to before. Previously, we used rock.transform to rewrite into a different layout to match the tosa path. Now, all rock.conv gemms are in the form NGC*, GFC*, and NGF*, which removed around 100 lines of code. The exception to that is conv1d, which is still expanded into conv2d first.
Motivation
Lower linalg.generic into rock.conv.
Technical Details
The conversion primarily depends on the attributes on linalg.generic to extract the padding, dilation, stride, and op code, and transfers those attributes to rock.conv. The conversion works as follows: the isConv function reads the stride, dilation, and padding from the op attributes and transforms them into the equivalent rock.conv attributes, and the linalg.generic is then rewritten into a rock.conv. For more context, this PR tries to match what we currently do in the migraphx -> tosa -> rock pipeline.
Test Plan
Added a lit test and many e2e tests.
Test Result
Passed the e2e and lit tests.
Submission Checklist