How it works
Built for people who own their tools
No telemetry. No cloud dependency. No surprise pricing. Your data, your models, your infrastructure.

Your data stays yours
Nothing leaves your machine unless you explicitly tell it to. Every boundary requires consent.
- Local-first architecture
- Typed tools, explicit I/O
- Inspect every step
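The "typed tools, explicit I/O" idea can be sketched in a few lines. This is an illustrative shape only, not NodeTool's actual API; the node name, field names, and function are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical node, for illustration only -- not NodeTool's real API.
# A node declares typed inputs and outputs, so every value crossing a
# boundary is explicit and inspectable; nothing hides in shared state.
@dataclass
class ResizeImageInput:
    path: str
    width: int
    height: int

@dataclass
class ResizeImageOutput:
    path: str

def resize_image(inp: ResizeImageInput) -> ResizeImageOutput:
    # A real node would do the actual work here; the point is the
    # explicit, typed contract on both sides.
    out_path = f"{inp.path}.{inp.width}x{inp.height}.png"
    return ResizeImageOutput(path=out_path)
```

Because inputs and outputs are plain typed values, each step can be logged or inspected as it runs.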

Prototype in minutes
Visual editor with 1000+ pre-built nodes. Drag, connect, and see results in real time.
- Drag‑and‑connect nodes
- Live preview & debugging
- Instant feedback loop

Control your costs
Run local models for free, or use APIs only when you choose. No hidden charges.
- Pick your execution environment
- Use your own API keys
- Mix local and cloud models

Actually portable
Same workflow file works everywhere. Visual editor and headless runtime share one engine.
- One graph, one runtime
- Open & portable formats
- Deploy anywhere

No lock-in
Your workflow is just a JSON file. Switch providers, self-host, or take it anywhere.
- Works completely offline
- Switch providers instantly
- Self-host everything
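"Just a JSON file" means you can version, diff, and edit workflows with ordinary tools. A minimal sketch; the field names below are illustrative, not NodeTool's actual schema:

```python
import json

# Illustrative workflow shape -- the exact schema is NodeTool's own,
# but the point stands: a workflow is plain JSON you can round-trip,
# diff, and hand-edit.
workflow = {
    "name": "caption-images",
    "nodes": [
        {"id": "load", "type": "LoadImage", "params": {"path": "cat.png"}},
        {"id": "caption", "type": "ImageToText", "params": {"model": "local"}},
    ],
    "edges": [{"from": "load.image", "to": "caption.image"}],
}

text = json.dumps(workflow, indent=2)   # what lives on disk
restored = json.loads(text)             # round-trips losslessly
```

Anything that reads JSON can read your workflow, which is what makes switching providers or self-hosting a file copy rather than a migration.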

Cloud when you need it
Works great on your laptop. Scale to GPUs with one command when you're ready.
- One‑command deploys
- RunPod & Modal support
- Seeded runs & diffs
Download NodeTool
Free, open source, runs on your machine.
Optional deployment
Works on your laptop. Need GPUs? Deploy to serverless in one command.

Deploy in one command
nodetool deploy-runpod --workflow-id my-workflow
- Automatically provisions serverless infrastructure
- Downloads required models
- Manages Docker containers and GPU allocation

RunPod Serverless GPUs

Auto-scaling Serverless
Serverless endpoints automatically scale from zero to hundreds of workers based on demand.
- Customizable min/max worker counts
- Scale to zero when idle
- Compatible with RunPod, GCP Cloud Run, and Modal (coming soon)
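The scaling knobs above amount to a small configuration. A sketch of the idea; these key names are hypothetical, not NodeTool's or RunPod's actual config keys:

```python
# Hypothetical scaling settings, for illustration only.
scaling = {
    "min_workers": 0,    # scale to zero when idle: no cost between runs
    "max_workers": 100,  # upper bound when demand bursts
}

# With min_workers = 0, an idle endpoint costs nothing; the first
# request after an idle period pays a cold-start, then workers scale
# up toward max_workers as load grows.
```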
Use any provider you want
Plug in your own API keys. We don't charge you a markup or force you through our backend. Or skip APIs entirely and use local models.

Your keys, direct to the provider
- OpenAI, Anthropic, HuggingFace, Gemini, Replicate
Mix and match
- Use different providers in one workflow
- Swap models without changing your graph
No middleman
- Direct API calls. No markup, no tracking.
- OpenAI-compatible
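"OpenAI-compatible" means any client that speaks the standard chat-completions wire format can target the endpoint, local or hosted, by changing only the base URL. A stdlib-only sketch; the local port and model name are assumptions, and the payload follows the common OpenAI request shape:

```python
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str):
    # Standard OpenAI-style chat-completions payload; works against any
    # OpenAI-compatible endpoint. Your key goes straight to the
    # provider -- no middleman in the path.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Same code, hosted or local -- only base_url changes.
# (Port 8000 is an assumption, not a documented default.)
req = chat_request("http://localhost:8000", "not-needed", "llama3", "Hi")
```

Swapping providers is then a one-line change to `base_url` and `model`, with the rest of the workflow untouched.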
Note: Requires an NVIDIA GPU or Apple Silicon (M1 or later) and at least 20 GB of free space for model downloads.
Models
Model Manager
Download and manage model weights from Hugging Face and Ollama on your machine.

AI Video Generation
Generate videos from text or images using state-of-the-art models. Support for OpenAI Sora, Google Veo, HuggingFace, and local generation.
OpenAI Sora
Sora 2 & Sora 2 Pro models
- Text-to-video
- Image-to-video
- Auto-sizing
Google Veo
Veo 2.0, 3.0, and 3.0 Fast
- Text-to-video
- Image-to-video
- Fast generation
HuggingFace
Via inference providers
- Text-to-video
- Multiple providers
- API access
Local Generation
Wan models on your machine
- Wan 2.2 T2V
- No API costs
- Full privacy
Text-to-video and image-to-video workflows with full control over generation parameters
Fast local inference
Native support for MLX (Apple Silicon) and GGML (everything else). No API calls, no latency.
MLX — Apple Silicon Optimized
MLX-LM
Run LLMs locally on Apple Silicon
MLX-Audio
Text-to-speech on your Mac
MLX-Whisper
Fast speech recognition
mflux
Image generation with Flux
Optimized for M1, M2, M3, and M4 chips
GGML/GGUF — Cross-Platform Inference
llama.cpp
LLM inference in C/C++, runs anywhere
whisper.cpp
High-performance speech recognition
GGUF Format
Efficient model format for inference
Works on macOS, Windows, Linux, and mobile devices
HuggingFace — Local Model Integration
Run thousands of HuggingFace models locally with our native integration. From text generation to video processing — all within your workflows.
Requires Apple Silicon or Nvidia GPU for optimal performance.
24+ model types • Thousands of models • All running locally
Features
Visual Canvas
Nodes are typed and composable. No hidden state, no magic.


Visual Canvas
Drag‑and‑connect, 1000+ nodes.

Multimodal
Text, image, audio, video.

Built‑in Memory
ChromaDB for RAG, no extra setup.

Observability
No black boxes. Inspect every step as it runs.
Node Library
All the building blocks
Over a thousand nodes for everything from API calls to local models to data transforms.

Computation & Control
- Functions & code
- Loops & branching
- Scheduling
Data & I/O
- Files & folders
- HTTP & Webhooks
- Databases & vector stores
Multimodal
- Vision & audio nodes
- Transcription & TTS
- Image/video tools
Chat
Chat interface
Tool calling, threads, workflow integration.

Organize Everything
Built-in Asset Manager
Import, organize, and manage all your media assets in one place.

Import & organize
Drag and drop files. Auto-organized by type, project, or tags.
Preview files
Built-in preview for images, audio, video, and documents.
Use in workflows
Reference assets in your workflows—folders or single files.
Images & Graphics
PNG, JPG, GIF, SVG, WebP
Audio & Video
MP3, WAV, MP4, MOV, AVI
Documents & Data
PDF, TXT, JSON, CSV, DOCX
Contact
Get in touch
Questions, bug reports, feature requests.
Say hi or tell us what you need
Made by two developers
Matthias Georgi: [email protected]
David Bührer: [email protected]