
NNEngine - Neural Network Engine

Easy, accelerated ML inference from BP and C++ using ONNX Runtime native library.

  • Supported Platforms
  • Supported Engine Versions
    4.26 - 4.27, 5.0 - 5.4
  • Download Type
    Engine Plugin
    This product contains a code plugin, complete with pre-built binaries and all its source code that integrates with Unreal Engine, which can be installed to an engine version of your choice then enabled on a per-project basis.

Demo video: Overview, Monocular depth estimation demo, Artistic style transfer demo

Tutorial video: Implement depth estimation

Documentation: Link

By simply calling a few Blueprint nodes, you can load and run cutting-edge AI.

This plugin supports ONNX (Open Neural Network Exchange), a widely used open-source machine learning model format.

Many ML frameworks, such as PyTorch and TensorFlow, can export their models in ONNX format.

Many trained models are available from the ONNX Model Zoo.

Performance is our first priority.

The plugin itself is optimized, supports model optimization at runtime, and provides GPU acceleration on a wide range of hardware.

Demo Project contains practical examples of

  • Human detection
  • Human pose estimation
  • Face detection
  • Facial landmark estimation
  • Eye tracking

using a single RGB camera.

Example projects for depth estimation and artistic style transfer are also available.

Prerequisites for CUDA and TensorRT

To use the plugin with CUDA and TensorRT, install the following versions of CUDA, cuDNN, and TensorRT.


The required cuDNN and TensorRT versions differ between the RTX30** series and other GPUs. We have tested only the GTX 1080 Ti, RTX 2070, RTX 3060 Ti, and RTX 3070; other GPUs are untested.

Versions for GPUs other than the RTX30** series (RTX20**, GTX10**)

  • CUDA: 11.0.3
  • cuDNN: 8.0.2 (July 24th, 2020), for CUDA 11.0
  • TensorRT: for CUDA 11.0

Versions for RTX30** series

  • CUDA: 11.0.3
  • cuDNN: 8.0.5 (November 9th, 2020), for CUDA 11.0
  • TensorRT: for CUDA 11.0


Versions for Linux (Ubuntu 18.04)

  • CUDA: 11.4.2 for Linux x86_64 Ubuntu 18.04
  • cuDNN: 8.2.4 (September 2nd, 2021), for CUDA 11.4, Linux x86_64
  • TensorRT: (8.2 GA Update 2) for Linux x86_64, CUDA 11.0-11.5

To use with TensorRT, it is recommended to add the following environment variables to cache TensorRT Engine:

  • "ORT_TENSORRT_ENGINE_CACHE_ENABLE", set to "1".
  • "ORT_TENSORRT_CACHE_PATH", set to any path where you want to save the cache, for example "C:\temp".
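On Windows, these can be set persistently from a command prompt, for example as below (a minimal sketch; the cache path is only an example):

```shell
:: Enable TensorRT engine caching for ONNX Runtime (Windows).
:: Run in a command prompt, then restart the editor so the new
:: environment variables are picked up.
setx ORT_TENSORRT_ENGINE_CACHE_ENABLE 1

:: Any writable directory works as the cache location; C:\temp is an example.
setx ORT_TENSORRT_CACHE_PATH C:\temp
```

Caching avoids rebuilding the TensorRT engine on every startup, which can otherwise take minutes for large models.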

Change log 

v1.4 (Mar 18, 2022)

- Updated OnnxRuntime module

- Added an option to disable the dependency on OpenCV.

v1.3 (Mar 4, 2022)

- Updated OnnxRuntime module

- Added a Blueprint-callable function to construct "Onnx Tensor Info". Use this function to dynamically specify the shape of input/output tensors.

v1.2 (Feb 18, 2022)

- Updated TextureProcessing module

  - Added a component to convert UTexture to float array. (`TextureProcessFloatComponent`)

  - Added functions to create UTexture from arrays of byte or float.

  - Fixed a bug where some UTexture assets could not be processed by `TextureProcessComponent`.

    - Now `BP_TextureProcessComponent` is deprecated. Use `TextureProcessComponent` instead.

- Updated CustomizedOpenCV module

  - Removed OpenCV's `check` function to avoid conflict with UE4's `check` macro.

- Added example projects for

  - Depth estimation using a monocular RGB camera

  - Arbitrary artistic style transfer

v1.1 (Feb 11, 2022)

  - Added support for Ubuntu 18.04.6 Desktop 64bit

    - GPU accelerations by CUDA and TensorRT are supported.

      - You need an NVIDIA GPU which supports CUDA, cuDNN, and TensorRT.

      - You need to install CUDA ver 11.4.2, cuDNN ver 8.2.4, and TensorRT ver

      - DNN models which contain unsupported operators cannot be loaded when TensorRT is enabled.  

       See the official document for supported operators.

       (NNEngine uses TensorRT 8.2 as backend on Linux)

    - Tested environment:

      - Unreal Engine: 4.26.2, 4.27.2

      - Vulkan utils: 1.1.70+dfsg1-1ubuntu0.18.04.1

      - .NET SDK: 6.0.101-1

      - OS: Ubuntu 18.04.6 Desktop 64bit

      - CPU: Intel i3-8350K

      - GPU: NVIDIA GeForce GTX 1080 Ti

        - Driver: 470.130.01

        - CUDA: 11.4.2-1

        - cuDNN: 8.2.4

        - TensorRT:

  - Added EXPERIMENTAL support for Android as target build

    - Tested environment:

      - Device: Xiaomi Redmi Note 9S

      - Android version: 10 QKQ1.191215.002

    - Note:

      - You need to convert your model to ORT format.  

       See the official document for the details.

      - There are some DNN models which cannot be loaded on Android.

      - NNEngine uses ONNX Runtime Mobile ver 1.8.1 on Android.

    - GPU acceleration by NNAPI is not tested yet.
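For reference, the ORT-format conversion mentioned in the notes above is done with a tool bundled in the onnxruntime Python package; a minimal sketch (the model path is a placeholder):

```shell
# Convert an ONNX model to ORT format for on-device (Android) use.
# Requires the onnxruntime Python package: pip install onnxruntime
# "model.onnx" is an illustrative path; the .ort file is written alongside it.
python -m onnxruntime.tools.convert_onnx_models_to_ort model.onnx
```

See the official ONNX Runtime documentation for the tool's options and the ONNX Runtime Mobile version matching your plugin version.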

Technical Details


  • Loads ONNX models at runtime. Automatically optimizes the model when loaded.
  • Runs ONNX models.
  • (On Windows) Supports hardware acceleration by DirectML on DirectX 12 capable GPUs.
  • (On Windows) Supports hardware acceleration by CUDA and TensorRT on supported NVIDIA GPUs.
  • (On Windows) Gets a list of GPU names available on the system.
  • Processes (resizes, crops, rotates) UTexture and converts it to an int8 array.
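As an illustration of the texture-processing step, here is a minimal Python sketch of the kind of conversion involved: flattening RGBA8 pixel bytes into a normalized float array of the shape models typically expect. The function name and layout here are hypothetical, not NNEngine's actual API.

```python
# Illustrative only: NNEngine performs the equivalent conversion on
# UTexture data internally. This sketch drops the alpha channel and
# normalizes 8-bit values to [0, 1] floats.

def texture_bytes_to_float_input(rgba: bytes, width: int, height: int) -> list:
    """Convert raw RGBA8 pixel data into a flat RGB float array."""
    floats = []
    for i in range(0, width * height * 4, 4):
        r, g, b = rgba[i], rgba[i + 1], rgba[i + 2]  # skip alpha at i + 3
        floats.extend((r / 255.0, g / 255.0, b / 255.0))
    return floats

pixels = bytes([255, 0, 0, 255,   0, 255, 0, 255])  # 2x1 image: red, green
print(texture_bytes_to_float_input(pixels, 2, 1))
# -> [1.0, 0.0, 0.0, 0.0, 1.0, 0.0]
```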

Code Modules:

  • OnnxRuntime (Runtime)
  • TextureProcessing (Runtime)
  • CustomizedOpenCV (Runtime)
  • DirectXUtility (Runtime)

Number of Blueprints: 3

Number of C++ Classes: 7+

Network Replicated: No

Supported Development Platforms: Windows 10 64bit

Supported Target Build Platforms: Windows 10 64bit

Documentation: Link

Example Project:

  1. Human pose estimation and facial capture using a single RGB camera
  2. Depth estimation using a monocular RGB camera
  3. Arbitrary artistic style transfer

Important/Additional Notes:

  • Demo Project 1 is distributed as a C++ project. You need to install Visual Studio to compile it.