Easy, accelerated ML inference from BP and C++ using the ONNX Runtime native library.
Tutorial video: Implement depth estimation
By simply calling a few Blueprint nodes, you can load and run cutting-edge AI.
This plugin supports ONNX (Open Neural Network Exchange), a widely used open-source machine learning model format.
Many ML frameworks, such as PyTorch and TensorFlow, can export their models in ONNX format.
Many trained models are available in the ONNX Model Zoo.
Performance is our first consideration.
The plugin itself is optimized, and it also supports model optimization at runtime and GPU acceleration on various hardware.
The Demo Project contains practical examples of using a single RGB camera.
Also, example projects for
Prerequisites for using CUDA and TensorRT
To use CUDA and TensorRT, you need to install the following versions of CUDA, cuDNN, and TensorRT.
The required versions of cuDNN and TensorRT differ between the RTX 30** series and other GPUs. We have tested only the GTX 1080 Ti, RTX 2070, RTX 3060 Ti, and RTX 3070; other GPUs are untested.
Versions for GPUs other than the RTX 30** series (RTX 20**, GTX 10**)
Versions for RTX30** series
To use with TensorRT, it is recommended to add the following environment variables to cache the TensorRT engine:
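As an illustration, ONNX Runtime's TensorRT execution provider reads its engine-cache settings from environment variables. A sketch assuming the standard ONNX Runtime variable names (the cache path is a placeholder):

```shell
# Enable TensorRT engine caching so the engine is not rebuilt on every run
export ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
# Directory where cached engines are stored (placeholder path)
export ORT_TENSORRT_CACHE_PATH=/path/to/trt_cache
```

On Windows, set the same variables as system environment variables instead of using `export`.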
v1.4 (Mar 18, 2022)
- Updated OnnxRuntime module
- Added an option to disable the dependency on OpenCV.
v1.3 (Mar 4, 2022)
- Updated OnnxRuntime module
- Added a Blueprint-callable function to construct "Onnx Tensor Info". Use this function to dynamically specify the shape of input/output tensors.
v1.2 (Feb 18, 2022)
- Updated TextureProcessing module
- Added a component to convert UTexture to float array. (`TextureProcessFloatComponent`)
- Added functions to create UTexture from arrays of byte or float.
- Fixed a bug where some UTextures could not be processed by `TextureProcessComponent`.
- `BP_TextureProcessComponent` is now deprecated. Use `TextureProcessComponent` instead.
- Updated CustomizedOpenCV module
- Removed OpenCV's `check` function to avoid conflict with UE4's `check` macro.
- Added example projects for
v1.1 (Feb 11, 2022)
- Added support for Ubuntu 18.04.6 Desktop 64bit
- GPU accelerations by CUDA and TensorRT are supported.
- You need an NVIDIA GPU which supports CUDA, cuDNN, and TensorRT.
- You need to install CUDA ver 11.4.2, cuDNN ver 8.2.4, and TensorRT ver 8.2.
- DNN models that contain unsupported operators cannot be loaded when TensorRT is enabled.
See the official documentation for supported operators.
(NNEngine uses TensorRT 8.2 as its backend on Linux.)
- Tested environment:
- Unreal Engine: 4.26.2, 4.27.2
- Vulkan utils: 1.1.70+dfsg1-1ubuntu0.18.04.1
- .NET SDK: 6.0.101-1
- OS: Ubuntu 18.04.6 Desktop 64bit
- CPU: Intel i3-8350K
- GPU: NVIDIA GeForce GTX 1080 Ti
- Driver: 470.130.01
- CUDA: 11.4.2-1
- cuDNN: 8.2.4
- TensorRT: 8.2
- Added EXPERIMENTAL support for Android as a target build platform
- Tested environment:
- Device: Xiaomi Redmi Note 9S
- Android version: 10 QKQ1.191215.002
- You need to convert your model to ORT format.
See the official documentation for the details.
- Some DNN models cannot be loaded on Android.
- NNEngine uses ONNX Runtime Mobile ver 1.8.1 on Android.
- GPU acceleration by NNAPI is not tested yet.
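The ORT-format conversion mentioned above is performed with a converter bundled in the ONNX Runtime Python package. A sketch assuming `onnxruntime` is installed via pip (the model file name is a placeholder):

```shell
# Convert an ONNX model to the ORT format used by ONNX Runtime Mobile;
# the tool writes a .ort file alongside the input model.
python -m onnxruntime.tools.convert_onnx_models_to_ort model.onnx
```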
Number of Blueprints: 3
Number of C++ Classes: 7+
Network Replicated: No
Supported Development Platforms: Windows 10 64bit, Ubuntu 18.04.6 Desktop 64bit
Supported Target Build Platforms: Windows 10 64bit, Ubuntu 18.04.6 Desktop 64bit