Smart assistants
Build chatbots that search documents, browse websites, and answer questions.

NodeTool runs your workflows on your machines without relying on any vendor cloud infrastructure.

NodeTool is open source, so you can inspect, modify, and self-host the entire stack.

NodeTool processes data locally and never collects usage data, phones home, or sends telemetry anywhere.

NodeTool keeps one portable workflow format from laptop to deployment, so you control every environment.
Build workflows for any task
Chain multiple AI models to research topics, write reports, and organize findings.
Schedule tasks, process files in batches, and trigger workflows based on events.
Generate images, videos, and written content with text prompts and style controls.
Transcribe recordings, synthesize speech, and remove background noise from audio files.
Classify text, detect objects in images, and extract insights from large datasets.
Same workflow file runs locally or on serverless GPUs with no rewrites.
Run one command to deploy. NodeTool provisions infrastructure, downloads models, and configures containers automatically. Endpoints scale to zero when idle. Works with RunPod and Google Cloud. Self-host on your own infrastructure.
GPU usage is billed directly by the provider.
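As a sketch of what the one-command deploy described above could look like: the command and flags below are assumptions for illustration only, not NodeTool's documented CLI.

```bash
# Hypothetical invocation -- command name and flags are assumed,
# not taken from NodeTool's docs.
# Deploy a local workflow file to a serverless GPU provider:
nodetool deploy my-workflow.json --provider runpod

# The same workflow file runs locally without changes:
nodetool run my-workflow.json
```

The point of the portable workflow format is that the file passed to both commands is identical; only the execution target changes.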
Native support for MLX (Apple Silicon), GGUF, and more.
Apple Silicon optimized: LLMs, audio, speech, and image generation with mflux.
Fast LLM inference and speech recognition on any platform.
Production-grade high-throughput inference engine.
Run thousands of HuggingFace models locally with our native integration.
24+ model types • Thousands of models • All running locally
Download and manage model weights from Hugging Face.

Bring your own API keys for OpenAI, Anthropic, Gemini, Fal AI, Replicate, and HuggingFace Inference Providers. No markup, no middleman. Or skip APIs entirely and use local models.
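Provider keys are typically supplied as environment variables. The variable names below are the providers' conventional ones; NodeTool's own configuration mechanism may differ, so treat this as an assumed sketch.

```shell
# Conventional provider environment variables (names assumed;
# check NodeTool's settings for the exact mechanism).
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
export HF_TOKEN="hf_..."
```

Because the keys go straight to the provider, you pay the provider's own rates with no markup.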

Note: Requires an NVIDIA GPU or Apple Silicon (M1 or later) and at least 20 GB of free space for model downloads.
Generate videos from text or images using state-of-the-art models. Support for OpenAI Sora, Google Veo, HuggingFace, and local generation.
OpenAI: Sora 2 & Sora 2 Pro models
Google: Veo 2.0, 3.0, and 3.0 Fast
HuggingFace: via inference providers
Local: Wan models on your machine
Text-to-video and image-to-video workflows with full control over generation parameters
Build complex workflows with simple building blocks. See exactly what's happening at every step.


Drag-and-connect, 1000+ nodes.

Text, image, audio, video.

ChromaDB for RAG, no extra setup.

No black boxes. Inspect every step as it runs.
Over 1000 ready-to-use components for AI models, data processing, file operations, and more.

Run any workflow through natural conversation. Ask questions, get results, and iterate—all in one chat interface.

Keep all your files organized in one place. Drag in images, videos, documents, or audio—then use them directly in any workflow.

Drag and drop files. Auto-organized by type, project, or tags.
Built-in preview for images, audio, video, and documents.
Reference assets in your workflows—folders or single files.
Images: PNG, JPG, GIF, SVG, WebP
Audio & video: MP3, WAV, MP4, MOV, AVI
Documents: PDF, TXT, JSON, CSV, DOCX
Questions, bug reports, feature requests.
Matthias Georgi: [email protected]
David Bührer: [email protected]