ChatLab - Dialogue Builder for Unity
ChatLab v1.1 now comes with DeepSeek R1 running locally, on-device!
ChatLab is an LLM-powered Unity editor extension designed for creating in-game dialogues with branching logic.
LINKS
The documentation page is a living document; we keep adding functionality (faster LLMs on more types of hardware, more templates, multiparty/group settings, and self-branching conversations).
Website and Support | Documentation
FEATURES
💥 Jumpstart Your Creativity
Get started in no time with 7 professionally designed templates. Whether you're a beginner or a pro, these templates are the perfect launchpad for your project.
💬 Dynamic Branching Conversations
Bring your stories to life! Create interactive dialogues with multiple outcomes and endless possibilities. No more one-size-fits-all endings.
🌟 Incredibly Easy to Use
With a lightweight and approachable design, our user interface is as friendly as it gets—no overwhelming menus, just pure productivity.
✨ Run Local Models
Power your conversations offline with support for local LLMs for added speed and privacy.
Models included in ChatLab:
- Phi-3 mini (INT4 quantization)
- DeepSeek R1 1.5B (INT4 quantization)
✨ Seamless OpenAI Integration
Plug in OpenAI models effortlessly and take your storytelling to the next level.
✨ Language Conversion Made Simple
Automatically adapt dialogues for different languages - perfect for global projects.
💾 Save & Load with Ease
Never lose progress! Your dialogue trees and chat logs are auto-restored, and linear chats stay compatible with dialogue trees for ultimate flexibility.
🗄️ Sleek, Creative UI
A UI designed to make you want to create. The perfect mix of node-based flow and modern design principles keeps your workspace visually engaging and streamlined.
EDITOR
🔧 Character-Centric Design
Manage characters with a profile system. Add, switch, and customize character roles, names, and descriptions.
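For illustration only, a character profile in a tool like this usually boils down to a small serializable class. The names below are hypothetical and not ChatLab's actual types:

```csharp
using UnityEngine;

// Hypothetical shape of a character profile, purely for illustration;
// ChatLab's serialized types may differ.
[System.Serializable]
public class CharacterProfile
{
    public string characterName;           // e.g. "Innkeeper"
    public string role;                    // the role this character plays in the conversation
    [TextArea] public string description;  // personality/background used as context for the LLM
}
```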
🌐 Dialogue Trees, Simplified
Build branching conversation paths visually using a dynamic, node-based interface. Create multiple dialogue outcomes, alternate responses, and flexible decision chains with a few simple clicks.
⚡ Generate and Expand with AI
As the number of responses grows, you may need help filling in the entire tree. In a linear conversation with 10 turns, you'd write 10 dialogues. But introduce branching where the player is given 4 choices at each turn and you'd have to write 2729 dialogues*, so we added an LLM-powered auto-reply system to save you the hassle.
* Developers use merging and dialogue reuse to counter this issue.
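If you're curious how that figure adds up, here is a minimal counting sketch. It assumes one opening NPC line followed by five rounds in which every NPC line branches into four player choices, each answered by one NPC reply:

```csharp
// Counting dialogue lines in a fully branched tree (illustration only).
static long CountDialogueLines(int choices, int rounds)
{
    long npcLeaves = 1;      // the opening NPC line
    long totalLines = 1;
    for (int r = 0; r < rounds; r++)
    {
        long playerLines = npcLeaves * choices; // every open NPC line offers `choices` options
        long npcReplies  = playerLines;         // one NPC reply per player choice
        totalLines += playerLines + npcReplies;
        npcLeaves = npcReplies;
    }
    return totalLines;       // CountDialogueLines(4, 5) == 2729
}
```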
With ChatLab, you can automatically populate dialogue choices with AI-generated replies!
Select a node and hit "Generate Reply" or "Generate N Options" for fast branching conversations.
🛠 Interactive Contextual Tools
Each node is packed with easy-to-access options:
- Switch roles or characters within conversations
- Color-coded nodes that are easy on the eyes
- Every field is editable, so you can tweak any part of the node
💾 Integrated Chat Logs
Switch effortlessly between Dialogue Tree and Chat Log views to follow conversation flow in linear or branching formats. Reverse compatibility means your linear chat logs convert smoothly into trees*.
*Trees cannot be converted back into linear chat logs.
✨ Powerful LLM Settings Panel
Access LLM configurations directly in the editor:
- Enable local or OpenAI-powered LLMs
- Adjust settings such as temperature and max tokens to tune how creative and how long responses are (see the sketch below)
- Save and load models instantly for flexible performance
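To give a feel for what those two settings control, here is a minimal, hypothetical sketch of a raw request to the standard OpenAI chat completions endpoint. ChatLab makes this call for you; the class, helper, and model names below are illustrative and not ChatLab's API:

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class OpenAIRequestSketch : MonoBehaviour
{
    // Hypothetical helper, not part of ChatLab's API.
    IEnumerator RequestReply(string apiKey, string prompt)
    {
        string json =
            "{\"model\":\"gpt-4o-mini\"," +      // any chat-capable model
            "\"temperature\":0.7," +             // higher = more varied replies
            "\"max_tokens\":120," +              // hard cap on reply length
            "\"messages\":[{\"role\":\"user\",\"content\":\"" + prompt + "\"}]}";

        using (var req = new UnityWebRequest("https://api.openai.com/v1/chat/completions", "POST"))
        {
            req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            req.downloadHandler = new DownloadHandlerBuffer();
            req.SetRequestHeader("Content-Type", "application/json");
            req.SetRequestHeader("Authorization", "Bearer " + apiKey);
            yield return req.SendWebRequest();
            Debug.Log(req.downloadHandler.text); // raw JSON containing the generated reply
        }
    }
}
```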
🖼 Drag, Zoom, and Create with Ease
Move seamlessly through your dialogue structures with mouse-based navigation:
- Drag with Alt or mouse buttons
- Zoom in and out to view the big picture or focus on fine details
DEPENDENCIES
This tool requires no external dependencies.
PIPELINES SUPPORTED
- Built-In: Out of the box
- URP / HDRP / SRP: One material needs to be converted to the default Sprite Diffuse shader
LIMITATIONS
Since this tool is still under development, there are a few limitations:
- Currently, using ChatGPT during development is faster than the local LLMs (Phi-3 and DeepSeek R1). Tested on i5, i7, and i9 CPUs; at a minimum, expect roughly 2 tok/sec for Phi-3 and 5-10 tok/sec for DeepSeek R1.
- The local LLMs sometimes forget details because the context window is only about 4-16k tokens. They also run on the CPU only for now. Since the models are in ONNX format, they can run on any platform and OS supported by ONNX; GPU inference is possible, but it does not come with ChatLab out of the box (see the sketch after this list).
- [Solved] Unity 6 is now supported; it requires an extra dependency, Newtonsoft JSON.
- Currently, DeepSeek works in demo scenes, builds, and at runtime, but it is not enabled by default in the editor for writing dialogues. Phi-3 can help write dialogues inside the Editor Window, alongside ChatGPT.
- DeepSeek R1 ships INT4-quantized in this asset, which may lower accuracy. The version offered is the 1.5B-parameter model.
- Known issue with DeepSeek R1: if Unity's stop button is clicked while text is still being generated, Unity may crash. To exit safely, first stop generation with the stop-generation button, then click Unity's stop button.
- DeepSeek is not a conversational model per se; it thinks out loud, which can be a bit distracting. Feel free to modify the ChatTemplate for DeepSeek to tweak this.
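For reference, GPU inference with ONNX models generally comes down to adding an execution provider when the session is created. The minimal sketch below uses ONNX Runtime's C# API and assumes the Microsoft.ML.OnnxRuntime.Gpu package, a CUDA-capable device, and a hypothetical model path; it is not something ChatLab does out of the box:

```csharp
using Microsoft.ML.OnnxRuntime;

public static class GpuSessionSketch
{
    // Hypothetical helper, not ChatLab API: opens an ONNX model with CUDA enabled.
    public static InferenceSession Open(string modelPath)
    {
        var options = new SessionOptions();
        options.AppendExecutionProvider_CUDA(0);         // device 0; throws if CUDA is unavailable
        return new InferenceSession(modelPath, options); // e.g. "deepseek-r1-1.5b-int4.onnx" (hypothetical name)
    }
}
```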
