
[AIROCMLIR-445] Lower migraphx.backwards_data_convolution#2256

Open
Mr-Anyone wants to merge 1 commit into pr-template-migraphx-to-linalg-conv-2 from
pr-template-migraphx-to-linalg-conv-backward

Conversation

@Mr-Anyone
Member

Motivation

Support 1D, 2D, and 3D backwards data convolution for CPU with all the attributes (strides, dilation, padding, group).

Technical Details

Backwards data convolution is similar to forward convolution, and it lowers in these steps:

  1. Expand the input and the kernel in the G dimension. The input goes from NC* into NGC*, and the filter goes from CF* into CGF*.
  2. Emit a linalg.generic that behaves like the Python loop below.
  3. Implement padding through tensor.extract_slice.
import numpy as np

def my_grouped_impl(input_np, filter_np, output_shape, stride=(1, 1), dilation=(1, 1)):
  result = np.zeros(output_shape, dtype=np.float32)
  batch, group, channel, input_height, input_width = input_np.shape
  channel, group, filter_count, filter_height, filter_width = filter_np.shape  # CGF* layout

  for n in range(batch):  # 0
    for g in range(group):  # 1
      for hi in range(input_height):  # 2
        for wi in range(input_width):  # 3
          for f in range(filter_count):  # 4
            # reduction starts here!
            for c in range(channel):  # 5
              for hk in range(filter_height):  # 6
                for wk in range(filter_width):  # 7
                  height_access = hi * stride[0] + dilation[0] * hk
                  width_access = wi * stride[1] + dilation[1] * wk
                  result[n, g, f, height_access, width_access] += input_np[n, g, c, hi, wi] * filter_np[c, g, f, hk, wk]
  return result
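As a sanity check (not part of the PR), the loop can be run on small all-ones tensors. The snippet below repeats the function so it is self-contained; the expected values follow from counting how many (hi, hk) pairs scatter into each output pixel.

```python
import numpy as np

def my_grouped_impl(input_np, filter_np, output_shape, stride=(1, 1), dilation=(1, 1)):
    # Same loop as above: scatter each input pixel into the output
    # through every filter tap.
    result = np.zeros(output_shape, dtype=np.float32)
    batch, group, channel, input_height, input_width = input_np.shape
    channel, group, filter_count, filter_height, filter_width = filter_np.shape
    for n in range(batch):
        for g in range(group):
            for hi in range(input_height):
                for wi in range(input_width):
                    for f in range(filter_count):
                        for c in range(channel):
                            for hk in range(filter_height):
                                for wk in range(filter_width):
                                    h = hi * stride[0] + dilation[0] * hk
                                    w = wi * stride[1] + dilation[1] * wk
                                    result[n, g, f, h, w] += (
                                        input_np[n, g, c, hi, wi] * filter_np[c, g, f, hk, wk]
                                    )
    return result

# NGCHW all-ones input (1,1,1,2,2) and CGFHW all-ones filter (1,1,1,2,2).
# With stride 1 and dilation 1 the unpadded output size is
# Ho = (Hi - 1) * stride + dilation * (Hk - 1) + 1 = 3.
inp = np.ones((1, 1, 1, 2, 2), dtype=np.float32)
filt = np.ones((1, 1, 1, 2, 2), dtype=np.float32)
out = my_grouped_impl(inp, filt, (1, 1, 1, 3, 3))
print(out[0, 0, 0])
# [[1. 2. 1.]
#  [2. 4. 2.]
#  [1. 2. 1.]]
```

The center pixel receives contributions from all four (hi, hk) x (wi, wk) combinations, hence the 4; the corners are reached only one way each.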

If we have padding, it should look like the following code structure:

linalg.generic ins (...)  outs(%first) ... // computing the transposed kernel
%output = tensor.extract_slice %first [0, 0, padLow0, padLow1, ....][N, F, Ho, Wo, ....][1, 1, 1, ....] // apply padding
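To make the slicing step concrete, here is a hypothetical NumPy analogue of the tensor.extract_slice above: compute the full (unpadded) backward output, then keep only the interior window. The sizes, padLow0/padLow1, and Ho/Wo names are illustrative, not the PR's actual variables, and G is kept as an explicit dimension here.

```python
import numpy as np

# Hypothetical full backward-conv output before padding is applied (N, G, F, Hfull, Wfull).
N, G, F, Hfull, Wfull = 1, 1, 1, 5, 5
full = np.arange(N * G * F * Hfull * Wfull, dtype=np.float32).reshape(N, G, F, Hfull, Wfull)

# Padding of the forward convolution trims the backward-data output:
# keep the window starting at (padLow0, padLow1) with the final Ho x Wo shape,
# mirroring tensor.extract_slice's offsets/sizes/strides.
padLow0, padLow1 = 1, 1
Ho, Wo = 3, 3
output = full[:, :, :, padLow0:padLow0 + Ho, padLow1:padLow1 + Wo]
print(output.shape)  # (1, 1, 1, 3, 3)
```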

Test Plan

Passing all e2e tests.

Test Result

Passed all e2e tests matching mixr-bwd-data-*.

Submission Checklist

@Mr-Anyone Mr-Anyone requested a review from causten as a code owner February 26, 2026 15:31
@Mr-Anyone
Member Author

A lot of the changes are refactoring (moving the lambda function into a static function); it makes more sense to move some of those changes to #2241.

@Mr-Anyone Mr-Anyone force-pushed the pr-template-migraphx-to-linalg-conv-backward branch from c2b7fd8 to b10d964 Compare February 26, 2026 15:42
@Mr-Anyone Mr-Anyone force-pushed the pr-template-migraphx-to-linalg-conv-2 branch from c153ee4 to 9c0c5f2 Compare March 3, 2026 02:06
@Mr-Anyone Mr-Anyone force-pushed the pr-template-migraphx-to-linalg-conv-backward branch from b10d964 to a2445e9 Compare March 3, 2026 02:06
@Mr-Anyone Mr-Anyone force-pushed the pr-template-migraphx-to-linalg-conv-backward branch from a2445e9 to 7f446bf Compare March 3, 2026 02:08