The language designed for AI agents

Build apps in kilobytes
not megabytes

Naze is a declarative, AI-native language that compiles to WebAssembly. One source file. Three output formats. Zero dependencies. Designed for a world where agents write the code.

395KB WASM Runtime · 157 Grammar Rules · 400+ Tests Passing · 109 Examples

The Agent-First Web

The way we interact with the internet is about to fundamentally change.

Imagine never opening a browser tab again. You tell an AI agent what you need — it fetches, processes, and presents information. No clicking through menus. No scrolling through ads. No loading spinners. Just answers.

But Today's Web Wasn't Built For This

The average web page is 2.5 MB of HTML, CSS, and JavaScript — designed for human eyes, not machine consumption. When AI agents scrape these pages, they waste massive compute tokenizing bloated markup. Every unnecessary <div>, every CSS animation, every bundled framework adds tokens an agent must process and discard.

“A single web page tokenizes to ~165,000 tokens. That's reading a 400-page book — just to extract a restaurant's menu.”
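The book comparison follows from rough reading math. A sketch, assuming conversion rates that are not from the Naze docs (~0.75 English words per token, ~300 words per printed page):

```python
# Rough arithmetic behind the "400-page book" comparison.
# Assumed conversion rates (not stated in the Naze docs):
#   ~0.75 English words per token, ~300 words per printed page.
page_tokens = 165_000
words = page_tokens * 0.75        # ~123,750 words
pages = words / 300               # ~412 printed pages
print(f"{words:,.0f} words, roughly {pages:.0f} printed pages")
```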

The agent-first web needs a new foundation. A language that compiles to kilobytes, not megabytes. Where the binary is the API. Where AI agents can generate, serve, and consume applications without the overhead of a stack designed in the 1990s.

That language is Naze.

Why Naze?

Built from first principles for a world where AI writes most of the code.

AI-Native, Human-Readable

Designed as a compilation target for AI agents, but reads like a document humans can understand. One canonical form per concept.

Self-Contained Components

Components compose via ‘use’, but the compiler inlines everything into a flat render tree at build time. Each component is fully self-contained — an AI agent reads one file, not five. That’s why σ stays at 1.

No Middle Layers

No bundler, no transpiler, no virtual DOM. Intent reaches pixels directly: parse, typecheck, serialize, layout, render.

One Source, Every Platform

The same .naze file targets web (WASM + Canvas), desktop (native window), and mobile. Compile once, run everywhere.

Introducing FAAD

A paradigm where AI agents manage the complete software lifecycle. Humans provide intent and approve results. Machines handle everything else.

FAAD: Fully Autonomous AI Development

Today, developers write code and AI assists. With FAAD, AI agents build, test, debug, deploy, and maintain software autonomously. Naze is engineered for this future — its grammar is small enough for local AI models, its components are self-contained for parallel generation, and its binary format is the API.

Token Complexity

A mathematical framework for measuring the true cost of AI-driven development.

Number of components (application size)
Tokens per component (language verbosity)
Files an AI must read per change (scatter)
Retry rate (incorrect code frequency)

The key insight: Naze achieves σ = 1 because every component is self-contained. This keeps total token cost linear in the number of components, instead of the multiplied cost typical of multi-file frameworks, where every change forces an agent to read several files.

Cost at 100 Components

Estimated token cost (Ψ) for a 100-component application

Naze: 52K tokens (1x)
Svelte: 5.9M tokens (113x)
React + Tailwind + TS: 69M tokens (1,330x)
Java Spring: 840M tokens (16,150x)
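As a sketch, assuming the multiplicative cost model the variables above suggest, Ψ = N · τ · σ · (1 + ρ), where N is component count, τ is tokens per component, σ is scatter, and ρ is retry rate (only σ and Ψ are named in the text; the model form and the other labels are our assumptions), the Naze row is reproducible with τ ≈ 520 tokens per component, σ = 1, and ρ ≈ 0:

```python
# Hypothetical multiplicative token-cost model: psi = N * tau * sigma * (1 + rho).
# tau = 520 tokens/component is inferred from 52K tokens / 100 components;
# the exact published formula and inputs may differ.
def token_cost(n_components, tokens_per_component, scatter, retry_rate):
    return n_components * tokens_per_component * scatter * (1 + retry_rate)

naze = token_cost(100, 520, scatter=1, retry_rate=0.0)
print(f"Naze at 100 components: {naze:,.0f} tokens")   # 52,000
```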

The Energy Equation

Token reduction isn't just a developer convenience — it's an environmental imperative.

Energy per token (~0.39 J on H100, FP8)
Total token cost (from formula above)
Grid carbon intensity (kg CO₂ per kWh)

The key insight: Every variable Naze minimizes — through minimal syntax, through self-containment, through unambiguous grammar — multiplicatively reduces energy and carbon. At 98.9% fewer tokens per page, the environmental impact scales accordingly.

At Scale

Processing 1 million pages

Energy

Naze: 0.19 MWh
Conventional Web: 17.9 MWh

CO₂ Emissions

Naze: 76 kg CO₂
Conventional Web: 7,160 kg CO₂
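These figures can be roughly reproduced from the stated inputs: ~165,000 tokens per conventional page, ~0.39 J per token, the ~98.9% token reduction quoted above, and an assumed grid intensity of ~0.4 kg CO₂ per kWh (the intensity value is our assumption; small rounding differences remain):

```python
# Reproducing the "At Scale" estimates from stated inputs.
# Assumed: grid intensity 0.4 kg CO2/kWh (400 kg/MWh);
# Naze pages at 98.9% fewer tokens than the 165K-token conventional page.
J_PER_MWH = 3.6e9
pages = 1_000_000
j_per_token = 0.39

conv_tokens = 165_000
naze_tokens = conv_tokens * (1 - 0.989)                     # ~1,815 tokens/page

conv_mwh = pages * conv_tokens * j_per_token / J_PER_MWH    # ~17.9 MWh
naze_mwh = pages * naze_tokens * j_per_token / J_PER_MWH    # ~0.2 MWh

kg_co2_per_mwh = 400
print(f"Conventional: {conv_mwh:.1f} MWh, {conv_mwh * kg_co2_per_mwh:,.0f} kg CO2")
print(f"Naze:         {naze_mwh:.2f} MWh, {naze_mwh * kg_co2_per_mwh:,.0f} kg CO2")
```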

The IEA projects data centers will consume 945 TWh by 2030, nearly double today's levels, with AI the main driver. Every token we eliminate matters. Naze doesn't just make AI development faster — it makes the agent-first web sustainable.

Sources: IEA Energy and AI Report, HTTP Archive Web Almanac 2024, NVIDIA H100 benchmarks

The Three-Layer Architecture

One source file compiles into three distinct layers. Humans need all three. AI agents typically need only Layer 1.

L3 · Presentation: UI tree, themes, animations, layout, colors, typography
L2 · Interaction: Event handlers, navigation, actions, validation
L1 · Data: State, computed values, server functions, data bindings

Three Outputs From One Source

app_data.bin · Layers 1 + 2 + 3 · ~7KB · Browsers / Humans
naze-manifest.json · Layers 1 + 2 · ~1KB · AI Agents
Headless binary · Layer 1 only · ~500B · Agent-to-Agent

The HTML/CSS/JS stack forces AI models to navigate three separate languages, a virtual DOM, bundler configurations, and framework-specific abstractions. Naze eliminates this waste — the compiled binary is the API.
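Against the 2.5 MB page cited earlier, the approximate artifact sizes above imply reductions on this order (back-of-envelope only):

```python
# Back-of-envelope size reductions vs. a 2.5 MB conventional page,
# using the approximate artifact sizes quoted above.
page_bytes = 2.5 * 1024 * 1024
artifacts = {
    "app_data.bin": 7 * 1024,
    "naze-manifest.json": 1 * 1024,
    "headless binary": 500,
}
for name, size in artifacts.items():
    print(f"{name}: ~{page_bytes / size:,.0f}x smaller")
```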

Tiered Grammar

The grammar is partitioned into independent tiers. Lower tiers never depend on higher ones. An agent building dashboards needs only Tier 0.

T0 · Core UI: Layout, elements, state, events, themes, components
T1 · Data: Fetch, streams, server functions, storage, timers
T2 · Database: Models, declarative queries
T3 · AI: Prompt blocks, provider configuration
T4 · Systems: Concurrency, file IO, networking (future)

The command nazec grammar --format gbnf exports the grammar for constrained decoding, enabling local 3-7B models to match cloud-scale quality at zero cost.

Train Any Model

Naze's grammar is small enough to fine-tune on a single GPU. Local or cloud, every model speaks Naze.

Tiny Training Footprint

The full grammar fits in ~52K tokens. Fine-tune a 3B-parameter model on consumer hardware in hours, not weeks.

Faster Development Cycles

Small grammar means fewer training iterations, faster convergence, and rapid iteration on model improvements.

Local Models

Run Naze-trained models entirely offline with Ollama. Constrained decoding via GBNF export guarantees syntactically valid output from any local model.

Cloud Models

Cloud models already excel at Naze — fewer tokens per prompt means lower cost, faster responses, and higher accuracy than multi-language stacks.

Traditional web stacks require models to master HTML, CSS, JavaScript, framework APIs, and build tooling. Naze replaces all of that with one grammar that exports directly to GBNF for constrained decoding. The result: any model, any size, produces correct code on the first try.

See It in Action

Components compose with use, themes resolve with dot notation — and the compiler inlines it all. σ stays at 1.

app "Counter" {
    state count = 0
    let title = "My Counter"

    column padding: 20px, gap: 16px {
        heading "{title}"
        text "Current count: {count}"

        rect width: 200px, height: 50px,
             color: #2563eb, radius: 8px {
            text "Increment"
            on click: set count = count + 1
        }

        rect width: 200px, height: 50px,
             color: #dc2626, radius: 8px {
            text "Reset"
            on click: set count = 0
        }
    }
}

Why curly braces? Modern LLMs handle indentation well in fresh code, but still miscount whitespace when editing existing files — exactly the agentic workflow Naze targets. In Python, one wrong indent is a syntax error; with braces, it's cosmetic. Braces make Naze fault-tolerant by design.

No build step required · Compiles in milliseconds · WASM output

The App Factory

Imagine typing a description into your browser and getting a working app back in seconds. Not a mockup. A real, running application.

In the future, users interact with agents, not browsers. Agents generate Naze apps on the fly to display data, run workflows, or answer questions.

1. Describe: User describes an app in natural language
2. Discover: Agent queries the registry for reusable packages
3. Generate: Agent writes .naze source importing packages
4. Compile: In-browser compiler produces a working app in seconds
5. Publish: User saves, forks, edits, or publishes back
6. Grow: Each app enriches the registry for future generation
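The loop above can be sketched as a pipeline. Every name below (Registry, discover, generate_source, and the rest) is invented for illustration; Naze does not publish an agent API:

```python
# Hypothetical sketch of the App Factory loop. All names here are
# invented for illustration, not a published Naze API.
from dataclasses import dataclass, field

@dataclass
class Registry:
    packages: dict = field(default_factory=dict)

    def discover(self, description: str) -> list:
        # Naive keyword match standing in for real package search.
        return [name for name in self.packages if name in description.lower()]

    def publish(self, name: str, source: str) -> None:
        self.packages[name] = source   # each app enriches the registry

def generate_source(description: str, packages: list) -> str:
    # An agent/LLM call would go here; we emit a stub .naze file.
    uses = "\n".join(f'    use "{p}"' for p in packages)
    return f'app "Generated" {{\n{uses}\n    text "{description}"\n}}'

registry = Registry(packages={"chart": "..."})
desc = "dashboard with a chart of portfolio performance"
pkgs = registry.discover(desc)          # 2. Discover
src = generate_source(desc, pkgs)       # 3. Generate
registry.publish("dashboard", src)      # 5-6. Publish and grow
```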

build me a dashboard showing my portfolio performance...

The Naze Browser — type a description, get a running app. Every generated app becomes a composable building block in a growing ecosystem.

Help Build the Future

Naze is open source and actively looking for contributors. Whether you're a language designer, compiler engineer, or AI researcher — there's a place for you.