deeplabv3plus-ascend

Deeplabv3+ OM model inference program for the Huawei Ascend platform.

All programs were tested on a Huawei Atlas 300I inference card (Ascend 310 AI CPU, CANN 5.0.2, npu-smi 21.0.2).

You can run the demo with python detect_deeplabv3plus_ascend.py.

Environments

In addition to an Ascend environment with the ATC tool, CANN (pyACL), and Python, you will need the following Python packages.

opencv_python
Pillow
onnx
torch
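
These can be installed with pip (the repo does not pin versions, so recent releases are assumed here):

pip install opencv-python Pillow onnx torch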

Export om model

(1) Train your Deeplabv3+ model with bubbliiiing/deeplabv3-plus-pytorch, then export the PyTorch model to ONNX format (a sketch follows below).
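
A minimal export sketch, assuming a trained model object net from the training repo and the 1x3x512x512 input used by the atc command in step (2); the input name "images" must match the --input_shape argument there:

import torch

# net: trained Deeplabv3+ model from bubbliiiing/deeplabv3-plus-pytorch (assumed to exist)
net = net.eval()
dummy_input = torch.randn(1, 3, 512, 512)   # NCHW, 512x512 input
torch.onnx.export(
    net,
    dummy_input,
    "deeplab_mobilenetv2.onnx",
    input_names=["images"],                  # matches "images:1,3,512,512" in the atc call
    output_names=["output"],
    opset_version=11,
)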

(2) On the Huawei Ascend platform, use the atc tool to convert the ONNX model to an OM model.

# On the Ascend 310, convert the onnx model to an om model.
atc --input_shape="images:1,3,512,512" --input_format=NCHW --output="deeplab_mobilenetv2" --soc_version=Ascend310 --framework=5 --model="deeplab_mobilenetv2.onnx" --output_type=FP32 

Inference on the Ascend NPU

(1) Clone the repo and move the *.om model to deeplabv3plus-ascend/ascend/.

git clone git@github.com:jackhanyuan/deeplabv3plus-ascend.git
mv deeplab_mobilenetv2.om deeplabv3plus-ascend/ascend/

(2) Edit the label file deeplabv3plus-ascend/ascend/deeplabv3plus.label (an illustrative example follows).
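
The exact label-file format is defined by this repo; purely as an assumption, a per-line list of segmentation class names for a PASCAL VOC style model could look like:

background
aeroplane
bicycle
bird
(one class name per line, in the order the model's output channels use)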

(3) Run inference program.

python detect_deeplabv3plus_ascend.py

The results will be saved to the img_out folder.
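
For reference, a minimal sketch of the pyACL setup that OM-model inference relies on; detect_deeplabv3plus_ascend.py handles this, plus input/output buffer management and pre/post-processing, internally. Device 0 and the model path are assumptions:

import acl

DEVICE_ID = 0                                      # assumed NPU device

ret = acl.init()                                   # initialize the ACL runtime
ret = acl.rt.set_device(DEVICE_ID)                 # bind this process to the NPU
context, ret = acl.rt.create_context(DEVICE_ID)    # create an execution context

# load the converted OM model and query its description
model_id, ret = acl.mdl.load_from_file("ascend/deeplab_mobilenetv2.om")
model_desc = acl.mdl.create_desc()
ret = acl.mdl.get_desc(model_desc, model_id)       # input/output shapes and buffer sizes

# ... build input/output datasets, then run acl.mdl.execute(model_id, in_ds, out_ds) ...

# release resources in reverse order
ret = acl.mdl.destroy_desc(model_desc)
ret = acl.mdl.unload(model_id)
ret = acl.rt.destroy_context(context)
ret = acl.rt.reset_device(DEVICE_ID)
ret = acl.finalize()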
