[AIROCMLIR-445] Lower linalg.generic convolution into rock#2252

Open
Mr-Anyone wants to merge 6 commits into pr-template-migraphx-to-linalg-conv from pr-template-migraphx-to-linalg-conv-2

Conversation

Member

@Mr-Anyone Mr-Anyone commented Feb 24, 2026

Motivation

Lower linalg.generic into rock.conv.

Technical Details

The conversion primarily depends on the attributes attached to the linalg.generic to extract the padding, dilation, stride, and op code. Those attributes are transferred to rock.conv. The conversion works as follows:

  1. The isConv function reads the stride, dilation, and padding from the op attributes and transforms them into the equivalent rock.conv attributes.
  2. Padding (tensor.pad) is removed, since rock.conv handles padding internally.
  3. The layout is then transformed into the layout expected by rock.conv.
  4. Attributes for rock.conv are emitted.

For more context, this PR tries to match whatever we are currently doing in the migraphx -> tosa -> rock pipeline.
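The detection in step 1 can be sketched outside of MLIR with stand-in types (the names AttrDict and extractConvFields below are hypothetical; the actual pass reads MLIR attributes and returns a FailureOr<ConvFields>):

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical stand-in for the op's attribute dictionary; the real pass
// reads these from the linalg.generic's MLIR attribute list.
using AttrDict = std::map<std::string, std::vector<long>>;

struct ConvFields {
  std::vector<long> strides, dilations, padding;
};

// Sketch of the isConv-style extraction: all three attributes must be
// present, otherwise the op is not treated as a convolution.
std::optional<ConvFields> extractConvFields(const AttrDict &attrs) {
  auto stride = attrs.find("stride");
  auto dilation = attrs.find("dilation");
  auto padding = attrs.find("padding");
  if (stride == attrs.end() || dilation == attrs.end() ||
      padding == attrs.end())
    return std::nullopt; // missing attribute: bail out instead of crashing
  return ConvFields{stride->second, dilation->second, padding->second};
}
```

The point of returning an empty optional (rather than asserting) is that a linalg.generic without the full attribute set is simply not recognized as a convolution.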

Test Plan

Added a lit test and a lot of E2E tests

Test Result

Passed E2E and lit tests.

Submission Checklist

@Mr-Anyone Mr-Anyone requested a review from causten as a code owner February 24, 2026 19:44
@Mr-Anyone Mr-Anyone marked this pull request as draft February 24, 2026 19:45
@Mr-Anyone Mr-Anyone force-pushed the pr-template-migraphx-to-linalg-conv-2 branch from 8e87ba8 to 205ea46 Compare February 25, 2026 14:25
@Mr-Anyone Mr-Anyone marked this pull request as ready for review February 25, 2026 16:56
Comment on lines +53 to +59
linalg::GenericOp castedOp = dyn_cast<linalg::GenericOp>(op);
if (castedOp &&
    llvm::any_of(castedOp.getIteratorTypesArray(), [](auto type) {
      return linalg::isReductionIterator(type);
    })) {
  return false;
}
Contributor

In theory this could lead to non-conv linalg.generics being accidentally picked up here. Could we look for generics that were marked with the conv_op attribute instead?

Member Author

That could work for now.

Originally, my intention was to catch all linalg.generic ops that have a reduction iterator, since rock-regularize doesn't support them.

for (utils::IteratorType iterType : lgop.getIteratorTypesArray()) {
  if (!linalg::isParallelIterator(iterType))
    return lgop.emitError("Only fully parallel supported");
}

FailureOr<ConvFields>
ConvLinalgConverter::isConv(ConversionPatternRewriter &rewriter,
                            linalg::GenericOp op) const {
  // FIXME: In the future, strides, dilation, and padding can be extracted
Contributor

Nit: I'm not a fan of FIXMEs or TODOs sprinkled throughout the codebase without corresponding tickets opened for them. Can you make sure that there is a ticket open for this?

case 2:
  return filter;
case 1: {
  // Conv1D is expanded into Conv2D (matching migraphx-to-tosa): unmerge
Contributor

Nit: Moving forward, the linalg path is going to replace tosa, so there is no need to reference a pipeline that will no longer exist.

Contributor

I don't think that LinalgToRock should have any e2e tests in the directory hierarchy. Anything under Conversion/LinalgToRock should just be for LIT tests testing this explicit pass. You can move the E2E tests to where all of the other E2E tests are.

Member Author

Sure

Contributor

Same as my earlier comment. I don't love the idea of having e2e tests in a Conversion/* folder. We should try our best to keep these separate as much as possible. The same thing applies to all of the other files in this directory.

Member Author

I moved it to test/fusion/pr-e2e

Comment on lines +385 to +403
auto convertToArrayAttr =
    [&](Attribute arr, ArrayRef<int64_t> dimOneDefaults = {}) -> ArrayAttr {
  SmallVector<int64_t, 4> values;
  llvm::transform(
      cast<ArrayAttr>(arr).getValue(), std::back_inserter(values),
      [](Attribute val) { return cast<IntegerAttr>(val).getInt(); });
  // Conv1D is expanded into Conv2D to match the migraphx-to-tosa pipeline.
  // Append identity defaults (stride=1, dilation=1, pad=0) for the extra
  // spatial dimension.
  if (spatialDim == 1)
    values.insert(values.end(), dimOneDefaults.begin(), dimOneDefaults.end());
  return rewriter.getIndexArrayAttr(values);
};

auto dilation =
    convertToArrayAttr(op->getAttr("dilation"), /*dimOneDefaults=*/1);
auto stride =
    convertToArrayAttr(op->getAttr("stride"), /*dimOneDefaults=*/1);
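The identity-default append inside the lambda above can be modeled as a standalone helper (a hypothetical sketch, not the PR's code) to make the Conv1D-to-Conv2D behavior explicit:

```cpp
#include <cassert>
#include <vector>

// Sketch: when a Conv1D is expanded into a Conv2D, append identity defaults
// (stride = 1, dilation = 1, pad = 0) for the extra spatial dimension.
std::vector<long> withDimOneDefaults(std::vector<long> values, int spatialDim,
                                     const std::vector<long> &defaults) {
  if (spatialDim == 1)
    values.insert(values.end(), defaults.begin(), defaults.end());
  return values;
}
```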
Contributor

I think this could potentially cause a crash if we have a linalg.generic with a conv_op attribute but it's missing one of dilation, stride, or pad. In this case wouldn't op->getAttr() return a null attribute that would cause a crash when we try to cast it in the lambda?

Comment on lines +272 to +273
rewriter.replaceOp(expanded, result);
rewriter.eraseOp(padded);
Contributor

Probably good to sanity check/assert that padded only has a single use before it gets deleted.

Member Author

I moved it to line 211.

Contributor

Could we add some more scenarios to this test?

  • Different data types
  • Asymmetric padding
  • Maybe a negative test without the conv_op attribute

Member Author

Sure, I have added a lot more testing.

Maybe a negative test without the conv_op attribute

Currently, nothing happens to the linalg.generic if there is no conv_op attribute, because without it we can't tell that this is a convolution operation. I have added tests for cases where a conv_op attribute is present but the strides/padding are invalid or missing.
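A negative case along these lines (a sketch; the function name and exact checks are illustrative, not taken from the PR) would pin that bail-out behavior down in a lit test:

```mlir
// Hypothetical negative lit test: no conv_op attribute on the generic,
// so the pattern should leave the linalg.generic untouched.
// CHECK-LABEL: func.func @not_a_conv
// CHECK: linalg.generic
// CHECK-NOT: rock.conv
```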

Contributor

Copilot AI left a comment

Pull request overview

This PR adds a new lowering in the LinalgToRock conversion to recognize convolution-shaped linalg.generic ops (annotated with MIGraphX-emitted conv attributes) and convert them into rock.conv, aligning the MIGraphX→Linalg path with the existing MIGraphX→TOSA→Rock behavior.

Changes:

  • Add a linalg.generic (convolution) → rock.conv conversion pattern, including layout transforms and optional removal of tensor.pad.
  • Update LinalgToRock legality so reduction-iterator linalg.generic ops are no longer considered legal (to force conversion).
  • Add/extend lit + e2e tests to exercise the new lowering path and new pipeline combination.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.

Per-file summary:

  • mlir/lib/Conversion/LinalgToRock/LinalgToRock.cpp: Implements convolution detection and rewrites to rock.conv (plus input/filter/output layout transforms and pad removal).
  • mlir/lib/Conversion/LinalgToRock/LinalgToRockPass.cpp: Updates conversion target legality to treat reduction-iterator linalg.generic as illegal.
  • mlir/test/Conversion/LinalgToRock/linalg-to-rock-conv.mlir: New lit test asserting linalg.generic conv lowers to rock.conv with expected attributes/layouts.
  • mlir/test/Conversion/LinalgToRock/e2e/linalg-to-rock-conv3d-no-pad.e2d.mlir: New e2e test to cover a no-padding conv3d case through the full pipeline.
  • mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv.cpu.mlir: Adds a “FINAL” run to validate the MIGraphX→Linalg path can continue lowering to Rock.
  • mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv1d-group.cpu.mlir: Same as above for 1D grouped conv.
  • mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv2d-group.cpu.mlir: Same as above for 2D grouped conv.
  • mlir/test/Conversion/MIGraphXToLinalg/e2e/migraphx-to-linalg-conv3d-group.cpu.mlir: Same as above for 3D grouped conv.
Comments suppressed due to low confidence (1)

mlir/test/Conversion/LinalgToRock/e2e/linalg-to-rock-conv3d-no-pad.e2d.mlir:1

  • File name ends with .e2d.mlir while other tests in this folder use .e2e.mlir. If this is meant to be an end-to-end test, consider renaming for consistency/discoverability.
// RUN: rocmlir-gen -fut conv3d_add -arch %arch --clone-harness %s | rocmlir-driver -host-pipeline=migraphx-linalg,highlevel -kernel-pipeline=migraphx-linalg,highlevel -targets %arch | rocmlir-gen -ph -print-results -rand 1 -rand_type float -fut conv3d_add_wrapper --verifier clone - | rocmlir-driver -host-pipeline mhal -kernel-pipeline full | xmir-runner --shared-libs=%linalg_test_lib_dir/libmlir_rocm_runtime%shlibext,%conv_validation_wrapper_library_dir/libconv-validation-wrappers%shlibext,%linalg_test_lib_dir/libmlir_runner_utils%shlibext,%linalg_test_lib_dir/libmlir_float16_utils%shlibext,%linalg_test_lib_dir/libmlir_c_runner_utils%shlibext,%linalg_test_lib_dir/libmlir_async_runtime%shlibext --entry-point-result=void | FileCheck %s


Comment on lines +33 to +44
// CHECK-LABEL: func.func @conv_2d
// CHECK: rock.conv({{.*}}) {dilations = [2 : index, 3 : index], filter_layout = ["g", "k", "c", "y", "x"], input_layout = ["ni", "gi", "ci", "hi", "wi"], output_layout = ["no", "go", "ko", "ho", "wo"], padding = [2 : index, 2 : index, 2 : index, 2 : index], strides = [4 : index, 5 : index]}
func.func @conv_2d(%arg0: tensor<122016xf32>, %arg1: tensor<320xf32>) -> tensor<8208xf32> attributes {arch = "##TOKEN_ARCH##", kernel} {
  %cst = arith.constant dense<0.000000e+00> : tensor<2x2x4x27x19xf32>
  %cst_0 = arith.constant 0.000000e+00 : f32
  %expanded = tensor.expand_shape %arg0 [[0, 1, 2, 3]] output_shape [2, 4, 123, 124] : tensor<122016xf32> into tensor<2x4x123x124xf32>
  %padded = tensor.pad %expanded low[0, 0, 2, 2] high[0, 0, 2, 2] {
  ^bb0(%arg2: index, %arg3: index, %arg4: index, %arg5: index):
    tensor.yield %cst_0 : f32
  } : tensor<2x4x123x124xf32> to tensor<2x4x127x128xf32>
  %expanded_1 = tensor.expand_shape %padded [[0], [1, 2], [3], [4]] output_shape [2, 2, 2, 127, 128] : tensor<2x4x127x128xf32> into tensor<2x2x2x127x128xf32>
  %expanded_2 = tensor.expand_shape %arg1 [[0, 1, 2, 3, 4]] output_shape [2, 4, 2, 4, 5] : tensor<320xf32> into tensor<2x4x2x4x5xf32>
Copilot AI Feb 26, 2026

This test is intended to validate that tensor.pad is removed because rock.conv handles padding internally, but the FileCheck patterns only assert that rock.conv exists. Add a CHECK-NOT: tensor.pad (or similar) in the conv2d/conv1d sections to prevent regressions where the pad-removal rewrite silently stops working.
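A guard along the suggested lines could look like this (a sketch; the exact labels depend on the test file):

```mlir
// CHECK-LABEL: func.func @conv_2d
// CHECK-NOT: tensor.pad
// CHECK: rock.conv
```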

@Mr-Anyone Mr-Anyone changed the title [AIROCMLIR-445] Lower 'linalg.generic' convolution into rock [AIROCMLIR-445] Lower linalg.generic convolution into rock Mar 2, 2026
@Mr-Anyone Mr-Anyone force-pushed the pr-template-migraphx-to-linalg-conv-2 branch from c153ee4 to 9c0c5f2 Compare March 3, 2026 02:06
@Mr-Anyone Mr-Anyone marked this pull request as draft March 4, 2026 04:25
@Mr-Anyone Mr-Anyone marked this pull request as ready for review March 4, 2026 17:15
@Mr-Anyone Mr-Anyone requested a review from justinrosner March 4, 2026 17:15
Member Author

Mr-Anyone commented Mar 4, 2026

I have cleaned up this PR compared to before.

Previously, we used rock.transform to rewrite into a different layout to match the tosa path. Now, all rock.conv gemms are in the NGC*, GFC*, and NGF* forms, which removed around 100 lines of code. The exception is conv1d, as it seems that rock.conv doesn't support 1D?
