Issues: pytorch/pytorch
DISABLED test_memory_format_type_cuda (__main__.TestTorchDeviceTypeCUDA)
Labels: module: flaky-tests (Problem is a flaky test in CI), module: tests (Issues related to tests, not the torch.testing module), oncall: pt2, skipped (Denotes a flaky test currently skipped in CI)
#126954 opened May 23, 2024 by pytorch-bot

DISABLED test_memory_snapshot (__main__.TestCudaMallocAsync)
Labels: module: cuda (Related to torch.cuda, and CUDA support in general), module: flaky-tests, module: rocm (AMD GPU support for PyTorch), skipped, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
#126953 opened May 23, 2024 by pytorch-bot

[DSD] keep 'exp_avg' as DTensor after torch.distributed.checkpoint.state_dict.set_optimizer_state_dict
Labels: triaged
#126950 opened May 23, 2024 by weifengpy

tensor_split() with indices_or_sections= doesn't work while tensor_split() without indices_or_sections= works
#126949 opened May 23, 2024 by hyperkai

[DSD] keep 'initial_lr' in torch.distributed.checkpoint.state_dict.set_optimizer_state_dict
#126948 opened May 23, 2024 by weifengpy

dsplit() with indices_or_sections= doesn't work while dsplit() without indices_or_sections= works
#126947 opened May 23, 2024 by hyperkai

[async H2D] memory ordering issue for async H2D with pin memory on CUDA device
#126946 opened May 23, 2024 by zejun-chen

Linker Errors on ARM System While Building PyTorch from Source with clang
#126939 opened May 23, 2024 by abhishek-fujitsu

[compiled autograd][aot autograd] accumulate grad (on param with non empty grad) mutates inputs and prevents cudagraph
Labels: module: aotdispatch (umbrella label for AOTAutograd issues), module: compiled autograd, module: cuda graphs (Ability to capture and then replay streams of CUDA kernels), module: pt2-dispatcher (PT2 dispatcher-related issues, e.g. aotdispatch, functionalization, faketensor, custom-op, …), oncall: pt2, triaged
#126938 opened May 23, 2024 by xmfan

[compiled autograd][cudagraphs] accessing TLS cudagraph manager results in corrupted memory
Labels: module: compiled autograd, module: cuda graphs, triaged
#126934 opened May 23, 2024 by xmfan

[ONNX] view(dtype=dtype) is not supported by both onnx.export and onnx.dynamo_export
#126921 opened May 22, 2024 by borisfom

Alternative access order for the same buffer can bring big perf win
Labels: module: inductor, oncall: pt2
#126913 opened May 22, 2024 by shunting314

function signature for multiprocessing.spawn is multiprocessing.spawn.spawn
#126899 opened May 22, 2024 by kaiyuan-li

[BE] wrap deprecated function/class with typing_extensions.deprecated for better IDE integration
Labels: actionable, better-engineering (Relatively self-contained tasks for better engineering contributors), module: deprecation, module: typing (Related to mypy type annotations), triaged
#126888 opened May 22, 2024 by XuehaiPan

UNSTABLE inductor / cuda12.1-py3.10-gcc9-sm86 / test (inductor_timm)
Labels: module: ci (Related to continuous integration), oncall: pt2, unstable
#126884 opened May 22, 2024 by clee2000

Bug: torch.func.jacrev fails with backend=aot_eager
Labels: oncall: pt2
#126882 opened May 22, 2024 by guilhermeleobas

[DCP] DCP does not support objects which are lazy initialized.
Labels: enhancement (Not as big of a feature, but technically not a bug; should be easy to fix), module: distributed_checkpoint, triaged
#126881 opened May 22, 2024 by LucasLLC

Add warning messages to provide info about expected performance improvement using cuda for a specific model
#126874 opened May 22, 2024 by MariosGkMeng

[docs] scaled_dot_product_attention is_causal description is misleading
Labels: module: docs (Related to our documentation, both in docs/ and docblocks), oncall: transformer/mha (Issues related to Transformers and MultiheadAttention), triaged
#126873 opened May 22, 2024 by MikeTkachuk

running opcheck leads to "Fail to import hypothesis in common_utils, tests are not derandomized" print
Labels: module: opcheck (Related to opcheck testing for custom operators), triaged

opcheck has dependency on expecttest, which is not a pytorch runtime dependency, leading to "module not found" error message
Labels: module: opcheck, triaged

DISABLED test_dtensor_op_db_vstack_cpu_float32 (__main__.TestDTensorOpsCPU)
Labels: module: flaky-tests, oncall: distributed (Add this issue/PR to distributed oncall triage queue), skipped
#126868 opened May 22, 2024 by pytorch-bot