Issues: NVIDIA-AI-IOT/cuDLA-samples
#35 How to build a TensorRT engine and use the Python API to run it on DLA? (opened May 7, 2024 by Railcalibur)
#31 How to profile cuDLA computation [question] (opened Mar 21, 2024 by angry-crab)
#30 No module named quantization [question] (opened Mar 13, 2024 by zyitom)
#29 How to run simultaneously on two DLAs [question] (opened Mar 12, 2024 by 2yjia)
#28 How to obtain the min and max values of activations and weights? (opened Mar 5, 2024 by CangHaiQingYue)
#26 How to measure inference time for cuDLA standalone mode? [question] (opened Feb 20, 2024 by Railcalibur)
#19 /model.0/conv/_input_quantizer/Constant_1_output_0' is not supported on DLA [triaged] (opened Jan 26, 2024 by WangFengtu1996)
#17 Tensor sizes differ between ONNX model outputs and DLA loadable engine outputs (opened Jan 23, 2024 by AnhPC03)
#10 [hybrid mode] load cuDLA module from memory FAILED in src/cudla_context_hybrid.cpp:96, CUDLA ERR: 7 (opened Nov 23, 2023 by hygxy)
#5 [Note] Running cuDLA-samples on 6.0.6.0 [enhancement, fixed] (opened Sep 8, 2023 by zerollzeng)
#3 Error during Model Conversion Process - Impact Inquiry [triaged] (opened Aug 23, 2023 by liuweixue001)