
[privateuse1] Update tutorial to document Python backend approach #3842

Open
RiyaP2508 wants to merge 2 commits into pytorch:main from
RiyaP2508:update-privateuseone-python-backend

Conversation

RiyaP2508 commented Apr 28, 2026

Summary

  • Added "Python Backend Approach (Simplified)" section documenting
    _setup_privateuseone_for_python_backend() from pytorch#157859
    ("Add functions to setup PrivateUse1 as a python backend device")
  • Complete end-to-end NumPy-backed example with autograd (11 registered ops)
  • Hooks/device guard customization with backward() failure warning
  • Added hooks registration warning to existing C++ section
  • C++ vs Python comparison table and limitations
  • Cross-references to torch_openreg and test files

Fixes pytorch/pytorch#179010
Fixes pytorch/pytorch#179008

Test plan

  • make html-noplot builds without RST warnings
  • New sections render correctly in docs preview
  • Code examples validated against PyTorch nightly

cc @albanD @jbschlosser @kshitij12345 @qihqi @fffrog

Add new section covering _setup_privateuseone_for_python_backend() with
tensor subclass pattern, torch.library op registration, hooks/device guard
customization, end-to-end NumPy-backed example, comparison table, and
limitations. Also add hooks registration warning to existing C++ section.

Fixes pytorch/pytorch#179010
Fixes pytorch/pytorch#179008

pytorch-bot Bot commented Apr 28, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3842

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


meta-cla Bot commented Apr 28, 2026

Hi @RiyaP2508!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

svekars added the C++ (Issues relating to C++ tutorials) and core (Tutorials of any level of difficulty related to the core pytorch functionality) labels on Apr 28, 2026
meta-cla Bot added the cla signed label on Apr 28, 2026

meta-cla Bot commented Apr 28, 2026

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

sekyondaMeta (Contributor) commented:

@claude


claude Bot commented May 1, 2026

Claude finished @sekyondaMeta's task in 3m 11s


PR Review: PrivateUse1 Python Backend Tutorial

  • Gather context and read changed files
  • Review RST structure and content quality
  • Check code examples for correctness
  • Verify index.rst changes
  • Post review feedback

Overall Assessment

This is a solid addition documenting an important new PyTorch feature. The tutorial is well-structured, covers the key concepts, and provides a complete end-to-end example. Below are specific items to address.


Issues

1. RST heading underline length mismatch (line 301)

The heading at line 300 is 55 characters:

Register PrivateUse1HooksInterface for the new backend

But the underline at line 301 appears to be only 54 ^ characters. RST requires the underline to be at least as long as the heading text. This may cause a Sphinx build warning. Please verify and fix if needed.


2. view op ignores the size parameter (line 583-584)

@torch.library.impl("aten::view", "privateuseone")
def view(a, size):
    return wrap(unwrap(a), a.shape, a.dtype)

This passes a.shape instead of size to wrap, so the view always returns the original shape. While this works for the specific backward example (since the sum backward doesn't reshape), it's misleading in a tutorial context. Consider either:

  • Using size instead of a.shape: wrap(unwrap(a).reshape(size), size, a.dtype)
  • Adding a brief comment explaining this is a no-op stub sufficient for the backward pass
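The first suggestion can be sketched in isolation with plain NumPy, using dict-based stand-ins for the tutorial's wrap/unwrap helpers (the stand-ins here are hypothetical illustrations, not the tutorial's actual implementations):

```python
import numpy as np

# Hypothetical stand-ins for the tutorial's helpers: a "wrapped tensor"
# is modeled as a dict holding the raw NumPy data plus a dtype tag.
def wrap(data, shape, dtype):
    return {"raw_data": np.asarray(data).reshape(shape), "dtype": dtype}

def unwrap(t):
    return t["raw_data"]

def view(a, size):
    # Honor the requested size instead of echoing the input's shape.
    return wrap(unwrap(a).reshape(size), size, a["dtype"])

t = wrap(np.arange(6, dtype=np.float32), (2, 3), "float32")
v = view(t, (3, 2))
print(v["raw_data"].shape)  # (3, 2)
```

With the original `a.shape` version, `v` would still have shape `(2, 3)` regardless of the requested `size`.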

3. _copy_from logic may be inverted (line 575-580)

@torch.library.impl("aten::_copy_from", "privateuseone")
def copy_from(a, b):
    if a.device.type == "npy":
        npy_data = unwrap(a)
    else:
        npy_data = a.numpy()
    b.raw_data = npy_data

The semantics of _copy_from(src, dst) are to copy data from src into dst. The implementation checks a.device.type == "npy" (i.e., the source is on the custom backend) and extracts its data, then assigns it to b.raw_data. This works for the CPU→backend copy path, but it doesn't handle the backend→CPU path (where b is a CPU tensor without a raw_data attribute). For the tutorial's specific use case (.to("npy")) this is fine, but a note would help readers understand that the implementation is one-directional.
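A bidirectional version can be sketched with a toy tensor model (FakeTensor and its fields are hypothetical stand-ins, not PyTorch APIs: "npy" tensors carry raw_data, CPU tensors expose a .numpy() buffer):

```python
import numpy as np

class FakeTensor:
    # Toy stand-in: "npy" tensors store raw_data, "cpu" tensors a plain buffer.
    def __init__(self, device_type, data):
        self.device_type = device_type
        if device_type == "npy":
            self.raw_data = np.asarray(data, dtype=np.float32)
        else:
            self._cpu = np.asarray(data, dtype=np.float32)

    def numpy(self):
        return self._cpu

def copy_from(src, dst):
    # Extract the source data regardless of which device it lives on...
    data = src.raw_data if src.device_type == "npy" else src.numpy()
    # ...then write into the destination's storage on either side.
    if dst.device_type == "npy":
        dst.raw_data = data.copy()
    else:
        np.copyto(dst._cpu, data)
    return dst

cpu = FakeTensor("cpu", [1.0, 2.0])
dev = FakeTensor("npy", [0.0, 0.0])
copy_from(cpu, dev)              # CPU -> backend
print(dev.raw_data.tolist())     # [1.0, 2.0]
dev.raw_data *= 10.0             # mutate on the backend side
copy_from(dev, cpu)              # backend -> CPU now also works
print(cpu.numpy().tolist())      # [10.0, 20.0]
```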

4. torch._C._acc API references are unverifiable

The tutorial references several torch._C._acc APIs:

  • torch._C._acc.create_empty_tensor
  • torch._C._acc.register_python_privateuseone_hook
  • torch._C._acc.register_python_privateuseone_device_guard
  • torch._C._acc.PrivateUse1Hooks
  • torch._C._acc.DeviceGuard

Since the code examples haven't been confirmed against PyTorch nightly yet, these should be verified before merge. The _acc module namespace is unusual; please confirm these match the actual API in PyTorch (from PR #157859).


Suggestions (non-blocking)

5. "Starting from PyTorch 2.10" (line 342) — This is a very high version number. Please verify this is the correct version. If this feature ships with the current nightly, it might be 2.7 or 2.8.

6. Docstring-style comments in the code examples — Lines like """Wrap a NumPy array as a MyDeviceTensor.""" (line 403) and """Extract the raw NumPy array...""" (line 406) could be removed per the repo's style guidelines (minimize comments; code should be self-explanatory). The function names wrap/unwrap are clear enough.

7. The add function signature is missing the alpha parameter (line 535)

The aten::add.Tensor op has signature add(Tensor self, Tensor other, Scalar alpha=1). The registered function only takes (t1, t2) and ignores alpha. This will work for basic cases but could break if code passes an explicit alpha. Consider:

def add(t1, t2, alpha=1):
    out = unwrap(t1) + alpha * unwrap(t2)
    return wrap(out, out.shape, torch.float32)
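The alpha semantics (self + alpha * other) can be checked with plain NumPy, using trivial pass-through stand-ins for the tutorial's wrap/unwrap helpers (the stand-ins are illustrative only):

```python
import numpy as np

# Hypothetical pass-through stand-ins for the tutorial's helpers,
# so the arithmetic can be shown on bare NumPy arrays.
def unwrap(t):
    return t

def wrap(data, shape, dtype):
    return np.asarray(data, dtype=np.float32).reshape(shape)

def add(t1, t2, alpha=1):
    out = unwrap(t1) + alpha * unwrap(t2)  # self + alpha * other
    return wrap(out, out.shape, np.float32)

a = np.array([1.0, 2.0], dtype=np.float32)
b = np.array([10.0, 20.0], dtype=np.float32)
print(add(a, b).tolist())           # [11.0, 22.0]
print(add(a, b, alpha=2).tolist())  # [21.0, 42.0]
```

A two-argument registration would raise a TypeError (or silently misbehave, depending on the dispatch path) as soon as a caller passes an explicit alpha.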

8. index.rst tag addition — Adding Python to the tags is appropriate. The card description update is concise and informative.


What looks good

  • Clear separation between Quick Setup, Tensor Subclass, Op Registration, and Customization sections
  • The comparison table is very helpful for readers deciding which approach to use
  • Limitations section sets proper expectations
  • Cross-references to the PyTorch source and test files are valuable
  • The .. tip:: at the top of the file helps C++-focused readers discover the Python path early
  • The hooks registration warning in the C++ section fills a real documentation gap

Summary

The PR is close to merge-ready. The critical item is #4 (verifying the torch._C._acc APIs actually exist in PyTorch nightly). Items #1-3 are minor correctness improvements that would strengthen the tutorial. Item #5 (version number) should be confirmed.

