How it works

From idea to production in 3 steps

Built Different

AGPL • Local‑first • BYO keys & models • OpenAI‑compatible • No lock‑in
Privacy by design

Your data stays local by default. Cloud is opt‑in, per node.

  • Local‑first defaults
  • Explicit per‑step cloud controls
  • Full auditability
Own your stack

Open source (AGPL). Extend with your code, keep your IP.

  • OpenAI‑compatible API
  • Bring your own keys/models
  • Export workflows freely
Production‑ready

Start on a laptop, scale to serverless GPUs in minutes.

  • One‑command deploys
  • Autoscaling & retries

Why teams choose NodeTool

Skip the complexity. Focus on building workflows that work.

Ship faster

Prototype in hours, not weeks. Visual builder and ready‑made nodes.

  • Drag‑and‑drop interface
  • 1000+ pre‑built nodes
  • Instant previews and testing
You’re in control

Local‑first by default. Choose per step where it runs and which models to use.

  • Per‑node: local or cloud execution
  • Bring your own keys and models
  • Explicit, clearly visible cloud toggles
True portability

Same workflow, any environment. No vendor lock-in.

  • Laptop to cloud seamlessly
  • Multiple provider support
  • Export and migrate freely

Ready to start building?

Free and open source. Create your first AI workflow in minutes.

AGPL • Open source • macOS • Windows • Linux • No lock‑in

Build Anything

From simple automations to complex multi‑agent systems

Build Smart Assistants

Create AI that knows your documents, emails, and notes. Keep everything private on your machine.

Build Agentic Workflows

Search Google, classify emails, and more. Build agentic workflows that mix local models and cloud APIs.

Automate Business Operations

Build AI-powered automations for approvals, onboarding, and reporting. Orchestrate multi-step workflows across your tools with local + cloud models.

Generate Creative Content

From text to images to music — scale your creative workflow with AI. Combine models for unique results.

Process Voice & Audio

Transcribe, analyze, and generate speech. Build voice-first applications with AI workflows.

Analyze Data Visually

Turn spreadsheets into insights. Create charts, find patterns, and make decisions faster.


Deploy anywhere

From laptop to production in minutes.

RunPod

Deploy in one command:

nodetool deploy-runpod --workflow-id my-workflow

  • Automatically provisions serverless infrastructure
  • Downloads required models
  • Manages Docker containers and GPU allocation
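
Once the deployment is live, the workflow sits behind a RunPod serverless endpoint and can be invoked over plain HTTP. A minimal sketch using RunPod's standard runsync route; the endpoint ID, the environment variable name, and the shape of the input payload are illustrative assumptions, not NodeTool's documented contract:

    # Minimal sketch of calling the deployed workflow through RunPod's standard
    # serverless API. The endpoint ID, env var name, and the shape of the
    # "input" payload are assumptions for illustration only.
    import os
    import requests

    ENDPOINT_ID = "your-endpoint-id"        # printed after deployment (assumed)
    API_KEY = os.environ["RUNPOD_API_KEY"]  # your RunPod account API key

    response = requests.post(
        f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": {"params": {"prompt": "Summarize today's support emails"}}},
        timeout=300,
    )
    response.raise_for_status()
    print(response.json())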

RunPod Serverless GPUs

GPU       VRAM   Flex Price
B200      180GB  $8.64/hr
H200      141GB  $5.58/hr
H100      80GB   $4.18/hr
A100      80GB   $2.72/hr
L40S      48GB   $1.90/hr
RTX 5090  32GB   $1.58/hr
RTX 4090  24GB   $1.10/hr
L4        24GB   $0.69/hr
Auto-scaling Serverless

Serverless endpoints automatically scale from zero to hundreds of workers based on demand.

  • Customizable min/max worker counts
  • Scale to zero when idle
  • Compatible with RunPod, GCP Cloud Run and Modal (coming soon)

Bring Your Own Providers

Connect to any AI provider. Your keys, your costs, your choice.

[Screenshot: available LLM providers and their models, including OpenAI, Anthropic, Hugging Face, Gemini and Ollama]

Bring your own keys

  • OpenAI, Anthropic, Hugging Face, Gemini, Replicate

Flexible architecture

  • Mix providers in one workflow
  • Switch models without code changes

OpenAI‑compatible API

  • Your keys, your costs. No markup.
  • Easy integration: point any OpenAI client at NodeTool (see the sketch below)
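
Because the endpoint speaks the OpenAI wire format, existing SDKs work unchanged. A minimal sketch with the official OpenAI Python client, assuming a local NodeTool server at http://localhost:8000/v1 and a model named llama3.2; both values are placeholders, not documented defaults:

    # Minimal sketch using the official OpenAI Python SDK against NodeTool's
    # OpenAI-compatible endpoint. The base URL, API key handling, and model
    # name are assumptions for illustration.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # local NodeTool endpoint (assumed)
        api_key="unused-locally",             # placeholder; a local server may ignore it
    )

    completion = client.chat.completions.create(
        model="llama3.2",  # any model you have configured (name is an example)
        messages=[{"role": "user", "content": "Summarize the latest sales report in one sentence."}],
    )
    print(completion.choices[0].message.content)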

Note: Requires an NVIDIA GPU or Apple Silicon (M1 or later) and at least 20GB of free disk space for model downloads.

Models

Model Manager

Download and manage model weights from Hugging Face and Ollama on your machine.

[Screenshot: Model Manager downloading Hugging Face and Ollama models locally]
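
For reference, the same weights can be fetched by hand from the two sources the Model Manager covers. A minimal sketch assuming huggingface_hub and the Ollama CLI are installed; the model names are examples, not requirements:

    # Minimal sketch of pulling weights from Hugging Face and Ollama directly;
    # the specific repo and model names are examples only.
    import subprocess
    from huggingface_hub import snapshot_download

    # Download a Hugging Face model snapshot into the local cache
    snapshot_download(repo_id="openai/whisper-small")

    # Pull an Ollama model (requires the Ollama daemon to be installed and running)
    subprocess.run(["ollama", "pull", "llama3.2"], check=True)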

Features

Visual Canvas

Drag‑and‑connect, 1000+ nodes.

[Screenshot: visual canvas showing a workflow with nodes like Gmail Search, Template, Classifier and Add Label]

Multimodal

Text, image, audio, video.

Built‑in Memory

ChromaDB for RAG, no extra setup.
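
To illustrate the kind of retrieval this enables, here is a minimal standalone ChromaDB sketch; the collection name and documents are made up and do not reflect NodeTool's internal schema:

    # Minimal standalone ChromaDB sketch; collection name and documents are
    # illustrative only, not NodeTool's internal storage layout.
    import chromadb

    client = chromadb.Client()  # in-memory; chromadb.PersistentClient(path=...) keeps data on disk
    notes = client.create_collection("notes")

    notes.add(
        ids=["n1", "n2"],
        documents=[
            "NodeTool runs workflows locally by default.",
            "Cloud execution is opt-in, per node.",
        ],
    )

    # Retrieve the most relevant note for a question
    results = notes.query(query_texts=["Where do my workflows run?"], n_results=1)
    print(results["documents"][0][0])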

Observability

Logs, traces, and error details to debug fast.

Node Library

All the building blocks

Hundreds of node types for any computation — not only AI models.

[Screenshot: node menu showing all available node types]

Computation & Control

  • Functions & code
  • Loops & branching
  • Scheduling

Data & I/O

  • Files & folders
  • HTTP & Webhooks
  • Databases & vector stores

Multimodal

  • Vision & audio nodes
  • Transcription & TTS
  • Image/video tools

Chat

Powerful Chat UI

Threads, custom tools, workflow integrations, and multi‑agent collaboration.

[Screenshot: Chat UI with tools, agents and workflow integrations]

Custom Tools

  • Run nodes from chat
  • Trigger workflows & jobs
  • Inspect logs and outputs

Collaboration

  • Multi‑agent threads
  • File uploads & images
  • Persistent memory

Organize Everything

Built-in Asset Manager

Import, organize, and manage all your media assets in one place.

[Screenshot: NodeTool Asset Manager interface]

Smart Import & Organization

Drag and drop files. NodeTool auto‑organizes by type, project, or tags.

Preview Everything

Instant previews for images, audio, video, and documents.

Workflow Integration

Connect assets to workflows with one click—folders or single files.

Images & Graphics

PNG, JPG, GIF, SVG, WebP

Audio & Video

MP3, WAV, MP4, MOV, AVI

Documents & Data

PDF, TXT, JSON, CSV, DOCX

Community

Open source on GitHub. Star and contribute.

Join Discord to share workflows and get help.

Contact

Get in touch

Tell us what’s missing and help shape NodeTool.

Got ideas or just want to say hi?

Built by makers, for makers

Matthias Georgi: [email protected]

David Bührer: [email protected]