How it works

Your tools, your data, your way

Local first

NodeTool runs your workflows on your own machines, without relying on any vendor's cloud infrastructure.

Open source

NodeTool is open source, so you can inspect, modify, and self-host the entire stack.

Data stays yours

NodeTool processes data locally; it never collects usage data, phones home, or sends telemetry anywhere.

No lock-in

NodeTool uses a single portable workflow format from laptop to deployment, so you stay in control of every environment.

Use cases

Build workflows for any task

Smart assistants

Build chatbots that search documents, browse websites, and answer questions.

Agent workflows

Chain multiple AI models to research topics, write reports, and organize findings.

Automation

Schedule tasks, process files in batches, and trigger workflows based on events.

Content generation

Generate images, videos, and written content with text prompts and style controls.

Voice and audio

Transcribe recordings, synthesize speech, and remove background noise from audio files.

Data analysis

Classify text, detect objects in images, and extract insights from large datasets.

Deploy when you need scale

Same workflow file runs locally or on serverless GPUs with no rewrites.

Run one command to deploy. NodeTool provisions infrastructure, downloads models, and configures containers automatically. Endpoints scale to zero when idle. It works with RunPod and Google Cloud, or you can self-host on your own infrastructure.
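Portability here comes down to the workflow being a self-contained graph description with no environment-specific settings baked in. As an illustrative sketch only (this is not NodeTool's actual schema; the node types and field names are made up), a portable workflow file might look like:

```python
import json

# Hypothetical workflow file: a graph of nodes and edges.
# The schema below is illustrative, not NodeTool's real format.
workflow = {
    "name": "summarize-inbox",
    "nodes": [
        {"id": "search", "type": "gmail.Search", "params": {"query": "is:unread"}},
        {"id": "summarize", "type": "llm.Summarize", "params": {"model": "llama3"}},
    ],
    "edges": [{"from": "search", "to": "summarize"}],
}

# Because the file carries no machine-specific configuration, the same
# JSON can be loaded on a laptop or shipped to a GPU endpoint unchanged.
serialized = json.dumps(workflow, indent=2)
loaded = json.loads(serialized)
```

Keeping the graph and the execution environment separate is what lets one file run locally or on serverless GPUs with no rewrites.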

GPU usage is billed directly by the provider.

Fast local inference

Native support for MLX (Apple Silicon), GGUF, and more.


HuggingFace — Transformers & Diffusers

Run thousands of HuggingFace models locally with our native integration.

Text Generation
Image-to-Image
Speech Recognition
Text-to-Speech
Image Classification
Object Detection
Summarization
Translation
Question Answering
Text-to-Image
Video Processing
Depth Estimation
Image Segmentation
Audio Classification
Token Classification
Feature Extraction
Sentence Similarity
Multimodal

24+ model types • Thousands of models • All running locally


Models

Model Manager

Download and manage model weights from Hugging Face.

Model manager for Hugging Face and Ollama to download models locally

Use any provider you want

Bring your own API keys for OpenAI, Anthropic, Gemini, Fal AI, Replicate, and HuggingFace Inference Providers. No markup, no middleman. Or skip APIs entirely and use local models.

List of available LLM providers and their models including OpenAI, Anthropic, Hugging Face, Gemini and Ollama

Supported providers

  • OpenAI (GPT, DALL-E, Whisper, TTS)
  • Anthropic (Claude)
  • Google Gemini
  • Fal AI
  • Replicate
  • HuggingFace Inference Providers

Mix and match

  • Use different providers in one workflow
  • Swap models without changing your graph

No middleman

  • Direct API calls. No markup, no tracking.
  • OpenAI-compatible endpoints supported
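"OpenAI-compatible endpoints" means any server that speaks the OpenAI chat-completions wire format can be targeted directly. A minimal stdlib-only sketch of building such a request (the localhost URL and model name are assumptions, e.g. a local Ollama server, which exposes this format):

```python
import json
import urllib.request

# Assumed endpoint: a local Ollama server's OpenAI-compatible API.
BASE_URL = "http://localhost:11434/v1"

payload = {
    "model": "llama3",  # whatever model the endpoint serves
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Local servers usually ignore the key, but the OpenAI wire
        # format expects the header to be present.
        "Authorization": "Bearer not-needed-locally",
    },
)

# Sending requires a running server:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```

Swapping providers is then just a matter of changing the base URL and model name; the request shape stays the same.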

Note: Local inference requires an NVIDIA GPU or Apple Silicon (M1 or later) and at least 20 GB of free disk space for model downloads.


AI Video Generation

Generate videos from text or images using state-of-the-art models. Support for OpenAI Sora, Google Veo, HuggingFace, and local generation.

Text & Image to Video

OpenAI Sora

Sora 2 & Sora 2 Pro models

  • Text-to-video
  • Image-to-video
  • Auto-sizing

Text & Image to Video

Google Veo

Veo 2.0, 3.0, and 3.0 Fast

  • Text-to-video
  • Image-to-video
  • Fast generation

Cloud Video Models

HuggingFace

Via inference providers

  • Text-to-video
  • Multiple providers
  • API access

Self-Hosted

Local Generation

Wan models on your machine

  • Wan 2.2 T2V
  • No API costs
  • Full privacy

Text-to-video and image-to-video workflows with full control over generation parameters

Features

Visual Canvas

Build complex workflows with simple building blocks. See exactly what's happening at every step.

Visual canvas showing a workflow with nodes like Gmail Search, Template, Classifier and Add Label

Visual Canvas

Drag‑and‑connect, 1000+ nodes.

Multimodal

Text, image, audio, video.

Built-in Memory

ChromaDB for RAG, no extra setup.
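RAG here means retrieving the stored document chunks most similar to a query embedding before handing them to the model. ChromaDB handles the storage, indexing, and persistence for you; as a stripped-down sketch of just the retrieval idea, with toy embedding values rather than real model output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document embeddings; a real store holds model-generated vectors.
docs = {
    "invoice.pdf": [0.9, 0.1, 0.0],
    "notes.txt":   [0.1, 0.8, 0.3],
}

def retrieve(query_vec, k=1):
    # Rank stored chunks by similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query whose embedding is close to invoice.pdf's retrieves it first.
top = retrieve([1.0, 0.0, 0.1])
```

The retrieved chunks are then passed to the model as context, which is the step the built-in memory wires up for you.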

Observability

No black boxes. Inspect every step as it runs.

Node Library

All the building blocks

Over 1000 ready-to-use components for AI models, data processing, file operations, and more.

Node menu showing all available node types for computation

Computation & Control

  • Functions & code
  • Loops & branching
  • Scheduling

Data & I/O

  • Files & folders
  • HTTP & Webhooks
  • Databases & vector stores

Multimodal

  • Vision & audio nodes
  • Transcription & TTS
  • Image/video tools

Chat

Talk to your workflows

Run any workflow through natural conversation. Ask questions, get results, and iterate—all in one chat interface.

Chat UI with tools, agents and workflow integrations

Organize Everything

Built-in Asset Manager

Keep all your files organized in one place. Drag in images, videos, documents, or audio—then use them directly in any workflow.

NodeTool Asset Manager interface preview

Import & organize

Drag and drop files. Auto-organized by type, project, or tags.

Preview files

Built-in preview for images, audio, video, and documents.

Use in workflows

Reference assets in your workflows—folders or single files.

Images & Graphics

PNG, JPG, GIF, SVG, WebP

Audio & Video

MP3, WAV, MP4, MOV, AVI

Documents & Data

PDF, TXT, JSON, CSV, DOCX

Community

Find us on GitHub. Join the Discord for questions.

Contact

Get in touch

Questions, bug reports, feature requests.

Say hi or tell us what you need

Made by two developers

Matthias Georgi: [email protected]

David Bührer: [email protected]