AMD OpenVX modules: neural network inference, 360 video stitching, and more.


AMD OpenVX modules are now delivered in MIVisionX. This content is archived for historical reference.

For the latest information on AMD OpenVX modules, go to https://gpuopen-professionalcompute-libraries.github.io/MIVisionX/

AMD OpenVX modules (amdovx-modules)

The OpenVX framework provides a mechanism for third-party vendors to add new vision functions to OpenVX. This project contains the following OpenVX modules and utilities, which extend the amdovx-core project (the AMD OpenVX Core Engine).

  • vx_nn: OpenVX neural network module
  • model_compiler: generates an efficient inference library from pre-trained models (such as ONNX)
  • inference_generator: generates an inference library from pre-trained Caffe models
  • annInferenceServer: sample inference server
  • annInferenceApp: sample inference client application
  • vx_loomsl: Radeon LOOM stitching library for live 360-degree video applications
  • loom_shell: an interpreter for prototyping 360-degree video stitching applications using a script
  • vx_opencv: OpenVX module that exposes OpenCV functionality as OpenVX kernels

If you're interested in Neural Network Inference, start with the sample inference application.

[Block diagrams: Inference Application Development Workflow · Sample Inference Application]

Refer to the Wiki page for further details.

Prerequisites

  • CPU: 64-bit CPU with SSE4.1 or above
  • GPU: Radeon Instinct or Vega family of products (16 GB recommended)
    • Linux: install ROCm with the OpenCL development kit
    • Windows: install the latest drivers and the OpenCL SDK
  • CMake 2.8 or newer
  • Qt Creator for annInferenceApp
  • protobuf for inference_generator
    • install libprotobuf-dev and protobuf-compiler (needed for vx_nn)
  • OpenCV 3 (optional) for vx_opencv
    • set the OpenCV_DIR environment variable to the OpenCV/build folder (a setup sketch follows this list)
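
A minimal sketch of the protobuf and OpenCV setup on Ubuntu, assuming the package names listed above and an example OpenCV build path (adjust the path to your own build folder):

```bash
# protobuf packages needed by inference_generator / vx_nn
sudo apt-get update
sudo apt-get install -y libprotobuf-dev protobuf-compiler

# optional: point vx_opencv at a local OpenCV 3 build
# ($HOME/OpenCV/build is an example path, not a required location)
export OpenCV_DIR=$HOME/OpenCV/build
```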

Refer to the Wiki page for developer instructions.

Build using CMake on Linux (Ubuntu 16.04 64-bit) with ROCm

  • for vx_nn, git clone, build, and install the other required ROCm projects (using cmake and % make install) in their dependency order
  • git clone this project with the --recursive option so that the correct branch of the amdovx-core project is cloned automatically into the deps folder
  • build and install (using cmake and % make install); a command sketch follows this list
    • executables will be placed in the bin folder
    • libraries will be placed in the lib folder
    • the installer copies all executables into /opt/rocm/bin and libraries into /opt/rocm/lib
    • the installer also copies all OpenVX and module header files into /opt/rocm/include
  • add the installed library path to the LD_LIBRARY_PATH environment variable (default /opt/rocm/lib)
  • add the installed executable path to the PATH environment variable (default /opt/rocm/bin)
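
A minimal command sketch of the steps above, assuming a fresh Ubuntu 16.04 system with ROCm already installed (the build directory name is arbitrary):

```bash
# clone with submodules so the matching amdovx-core branch lands in deps/
git clone --recursive https://github.com/GPUOpen-ProfessionalCompute-Libraries/amdovx-modules.git
cd amdovx-modules

# out-of-source CMake build and install
mkdir build && cd build
cmake ..
make -j"$(nproc)"
sudo make install   # per the notes above, installs under /opt/rocm

# make the installed libraries and executables visible
export LD_LIBRARY_PATH=/opt/rocm/lib:$LD_LIBRARY_PATH
export PATH=/opt/rocm/bin:$PATH
```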

Build annInferenceApp using Qt Creator

Build Radeon LOOM using Visual Studio Professional 2013 on 64-bit Windows 10/8.1/7