
Conversation

@ptrendx ptrendx commented Feb 11, 2026

Description

Creating a new PR since the old one shows merge conflicts for some reason and we don't have permissions to update the branch ourselves.
@zobeideThePlayer FYI

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Change A
  • Change B

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

ptrendx commented Feb 11, 2026

/te-ci L1 pytorch

greptile-apps bot commented Feb 11, 2026

Greptile Overview

Greptile Summary

This PR tweaks QuantizedTensor.__torch_dispatch__’s handling of aten.copy_ so that when copying from a QuantizedTensor into a regular tensor, the source is dequantized using the destination’s dtype (fp32/fp16/bf16), with a fallback to fp32 for other destination dtypes. This keeps the copied result aligned with the destination’s expected floating-point dtype instead of always using the dequantize default.

The change is contained to transformer_engine/pytorch/quantized_tensor.py and only affects the quantized→non-quantized copy path; quantized→quantized and non-quantized→quantized paths are unchanged.
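The dtype-selection behavior described above can be sketched as follows. This is a minimal illustration, not the actual Transformer Engine implementation; the helper name `select_dequantize_dtype` is hypothetical, and only the rule summarized in this review (use the destination's fp32/fp16/bf16 dtype, otherwise fall back to fp32) is assumed.

```python
import torch

# Destination dtypes for which dequantization uses the destination's own
# dtype; anything else falls back to fp32 (assumption based on the summary).
_FLOAT_DTYPES = (torch.float32, torch.float16, torch.bfloat16)

def select_dequantize_dtype(dst: torch.Tensor) -> torch.dtype:
    """Pick the dtype to dequantize into when copying into a plain tensor."""
    return dst.dtype if dst.dtype in _FLOAT_DTYPES else torch.float32

# A fp16 destination dequantizes in fp16; an int8 destination falls back to fp32.
assert select_dequantize_dtype(torch.empty(2, dtype=torch.float16)) == torch.float16
assert select_dequantize_dtype(torch.empty(2, dtype=torch.int8)) == torch.float32
```

In the real dispatch path this dtype would be passed to the source's dequantize step before the underlying `copy_` into the non-quantized destination.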

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk.
  • Change is small and localized to QuantizedTensor torch_dispatch handling for copy_. It improves dtype fidelity when copying from quantized to non-quantized tensors and includes a conservative fallback for unsupported destination dtypes. No other behavior is altered.
  • No files require special attention

Important Files Changed

Filename: transformer_engine/pytorch/quantized_tensor.py
Overview: Updates QuantizedTensor copy_ dispatch to dequantize the source using the destination dtype (falling back to fp32 for unsupported dtypes) before copying into a non-quantized destination.

Sequence Diagram

sequenceDiagram
    participant Caller as User/Module
    participant QT as QuantizedTensor
    participant Copy as copy_()
    participant PT as torch.Tensor

    Caller->>QT: dst.copy_(src)
    activate Copy
    Copy->>Copy: validate src/dst
    Copy->>Copy: determine destination dtype
    Copy->>PT: underlying tensor copy (data/scale/metadata)
    Copy-->>QT: updated quantized state
    deactivate Copy
    QT-->>Caller: return dst

@greptile-apps greptile-apps bot left a comment

1 file reviewed, no comments


@ksivaman ksivaman left a comment

LGTM

3 participants