
Local LLM Plugin

Load and run Large Language Models (LLMs) locally in your project

  • Supported Platforms
  • Supported Engine Versions
    5.3 - 5.4
  • Download Type
    Engine Plugin
    This product contains a code plugin, complete with pre-built binaries and all the source code that integrates with Unreal Engine, which can be installed to an engine version of your choice and enabled on a per-project basis.

Documentation: Link

Demo Project: GitHub, EXE

Demo Video: English, Japanese


This product is designed to integrate AI chatbots into games without using online services.

This plugin lets you load large language models (LLMs) in GGUF format and run them in Unreal Engine.
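GGUF is a binary container format with a small fixed header. As a sketch of what "loading a GGUF model" involves at the file level, the following standard C++ reads and validates that header. The struct and function names are hypothetical (not the plugin's API), and the code assumes a little-endian host, as GGUF files themselves are little-endian by default.

```cpp
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

// Fixed-size fields at the start of every GGUF file.
struct GgufHeader {
    uint32_t version;        // GGUF format version
    uint64_t tensor_count;   // number of tensors stored in the file
    uint64_t metadata_count; // number of metadata key/value pairs
};

// Parse the GGUF header from a raw byte buffer.
// Returns std::nullopt if the buffer is too small or the magic is wrong.
std::optional<GgufHeader> ParseGgufHeader(const std::vector<uint8_t>& data) {
    constexpr size_t kHeaderSize = 4 + 4 + 8 + 8;
    if (data.size() < kHeaderSize) return std::nullopt;
    // GGUF files start with the ASCII magic "GGUF".
    if (std::memcmp(data.data(), "GGUF", 4) != 0) return std::nullopt;
    GgufHeader h{};
    std::memcpy(&h.version, data.data() + 4, 4);
    std::memcpy(&h.tensor_count, data.data() + 8, 8);
    std::memcpy(&h.metadata_count, data.data() + 16, 8);
    return h;
}
```

A quick check like this is a cheap way to reject a corrupt or mis-named model file before handing it to the inference backend.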



Run locally and within BP/C++
  • Runs offline on a local PC.
  • Just add one component to your BP and you are ready to use it.
  • No Python or dedicated server is required.



Useful features
  • Works asynchronously; additional questions can be asked at any time while an answer is being generated.
  • You can save and load "state" that preserves the context of a conversation, allowing you to resume a previous conversation later.
  • Supports multibyte characters.
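The async-plus-state pattern described above can be sketched with the standard library alone. The class and method names below are invented for illustration (they are not the plugin's API), and "generation" is a trivial echo rather than real inference; the real plugin would serialize the model's context, not just the prompt history.

```cpp
#include <future>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for an LLM chat session demonstrating the pattern:
// generation runs on a worker thread, and the conversation "state"
// can be saved and restored to resume a chat later.
class ToyLlmSession {
public:
    // Launch generation asynchronously; the calling (game) thread stays
    // free, so further Ask() calls can be made while this one runs.
    std::future<std::string> Ask(std::string prompt) {
        history_.push_back(prompt);
        return std::async(std::launch::async, [prompt] {
            return "echo: " + prompt;  // real code would sample tokens here
        });
    }

    // Save/restore the conversation state.
    std::vector<std::string> SaveState() const { return history_; }
    void LoadState(std::vector<std::string> state) { history_ = std::move(state); }

private:
    std::vector<std::string> history_;
};
```

Usage mirrors the feature list: a second question can be issued before the first answer has been consumed, and a saved state can seed a fresh session.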



Hardware Requirements

Requires a CPU that supports AVX, AVX2 and FMA.

The following CPUs should work, but please try the free demo EXE to verify that your CPU is supported.

  • Intel: 4th Generation (Haswell) and above
  • AMD: All Ryzen series
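If you want to check for the required instruction sets programmatically rather than via the demo EXE, GCC and Clang provide the `__builtin_cpu_supports` builtin on x86. This is a sketch under that compiler assumption; the function name is hypothetical, and other compilers (e.g. MSVC) would need `__cpuid` instead.

```cpp
// Runtime check for the SIMD extensions this plugin requires.
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
bool HasRequiredSimd() {
    __builtin_cpu_init();  // initialize CPU feature detection
    return __builtin_cpu_supports("avx") &&
           __builtin_cpu_supports("avx2") &&
           __builtin_cpu_supports("fma");
}
#else
// Unknown compiler or architecture: conservatively report unsupported.
bool HasRequiredSimd() { return false; }
#endif
```

Such a check could gate model loading with a friendly error message instead of crashing on an illegal instruction.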



Supported models

The following models have been tested and confirmed to work.

  • llama-3 7B
  • Phi-3-medium
  • Gemma 7B
  • Gemma-2 9B
  • Mistral 7B
  • ArrowPro 7B KUJIRA (Japanese model)
  • ELYZA JP 8B (Japanese model)


Technical Notes

Features:

  • Load models in GGUF format
  • Asynchronous execution of text generation using the models
  • Save and load "state" that preserves the context of a conversation

Code Modules:

  • Local LLM (Runtime)

Number of Blueprints: 1

Number of C++ Classes: 3+

Network Replicated: No

Supported Development Platforms: Windows 64bit

Supported Target Build Platforms: Windows 64bit

Documentation: Link

Demo Project: GitHub, EXE