# Introduction
Why Autolang exists: an extreme focus on startup latency and memory allocation for the AI Agent era.
## The Motivation
Autolang was born to solve a specific, practical problem in modern computing: High-Frequency Short-Lived Tasks.
### The Tracing GC Bottleneck
For tools and AI Agents that spin up thousands of micro-scripts per minute, traditional scripting languages (Python, Lua) struggle: VM startup time and Garbage Collection overhead often exceed the script's actual execution time.
### The Systems Barrier
Systems languages like C++ or Rust offer high performance but are too verbose and rigid for generating quick automation scripts or evaluating code dynamically on the fly.
Autolang bridges this gap: the ease of an embedded scripting language, with a specialized architecture designed to hit a 2-5ms startup latency.
## Core Philosophy
### 1. Arena over GC (The "Hot Restart" Strategy)
Autolang completely drops the traditional Garbage Collector. Instead, it pairs a custom Arena Allocator with Reference Counting.
> "Memory leaks in short-lived scripts are natural. Let the Hot Restart clean everything."
By prioritizing allocation speed over long-term leak prevention, Autolang allocates 1M objects nearly 2x faster than Lua. When a task ends, the entire Arena is wiped instantly via a `restart()` mechanism.
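The arena model can be illustrated with a small conceptual sketch. This is not Autolang's actual allocator (which is native code); it is a toy model of the two operations that matter: bump-pointer allocation and an O(1) `restart()` wipe.

```python
# Conceptual sketch of Arena-over-GC: allocation bumps a single offset,
# and restart() reclaims the entire arena at once. Names and layout are
# illustrative only, not Autolang internals.

class Arena:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.offset = 0              # bump pointer: the only bookkeeping

    def alloc(self, size: int) -> int:
        """Return the offset of a new block; there is no per-object free."""
        size = (size + 7) & ~7       # round up to 8-byte alignment
        if self.offset + size > self.capacity:
            raise MemoryError("arena exhausted; restart() to reclaim")
        start = self.offset
        self.offset += size
        return start

    def restart(self) -> None:
        """'Hot Restart': wipe the whole arena by resetting one integer."""
        self.offset = 0

arena = Arena(1 << 20)
for _ in range(1000):
    arena.alloc(64)                  # 1000 short-lived objects, never freed
assert arena.offset == 64_000
arena.restart()                      # the entire task's memory is gone instantly
assert arena.offset == 0
```

The design trade-off is visible here: `alloc` does no searching and no tracking, so leaks accumulate within a task, but the whole task's memory is reclaimed in constant time when it ends.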
### 2. Fail-Fast & Static-First
Implicit behavior is strictly minimized. Autolang enforces a static-first approach and aggressive lexing. If an AI agent or developer writes invalid syntax, the compiler throws an error immediately, enabling a lightning-fast feedback loop.
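A minimal sketch of what fail-fast lexing means in practice (illustrative only; this is not Autolang's actual lexer): the first invalid character aborts tokenization immediately with a precise position, rather than being deferred or recovered from.

```python
import re

# Hypothetical fail-fast tokenizer for a tiny expression grammar.
TOKEN = re.compile(r"\d+|[A-Za-z_]\w*|[+\-*/=()]")

def lex(source: str) -> list[str]:
    tokens, pos = [], 0
    while pos < len(source):
        if source[pos].isspace():    # whitespace is skipped, not tokenized
            pos += 1
            continue
        m = TOKEN.match(source, pos)
        if m is None:
            # Fail fast: no error recovery, no deferred diagnostics.
            raise SyntaxError(f"invalid character {source[pos]!r} at offset {pos}")
        tokens.append(m.group())
        pos = m.end()
    return tokens

assert lex("x = 1 + 2") == ["x", "=", "1", "+", "2"]
# lex("x = @1") raises SyntaxError immediately, pointing at the '@'
```

For an AI agent generating code in a loop, this immediate, position-exact rejection is what enables the fast regenerate-and-retry feedback cycle.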
## Current State & Benchmarks
Autolang uses a single-pass compiler that prioritizes developer experience and zero-warmup execution:
- Compile Time: Handles 100,000 classes in roughly 888ms.
- Startup Latency: ~2ms (no JIT warmup).
- Standard Library: Fully decoupled and written in Autolang itself.
## Future Plan: Runtime Optimization
General execution is currently 2x-5x slower than LuaJIT, as the bytecode dispatch still uses naive if-chains. Bridging this runtime gap without bloating the 2ms startup is our next major milestone.
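The dispatch bottleneck can be sketched with two toy interpreter loops. This is not Autolang's VM; it only contrasts the two strategies. An if-chain compares the opcode against each case in turn (cost grows with the number of opcodes), while a dispatch table jumps straight to the handler (in native code, typically a jump table or computed goto).

```python
# Two dispatch strategies for the same toy stack VM. All opcode and
# function names are illustrative, not Autolang internals.

def run_if_chain(program):
    """Linear if/elif scan per instruction: the current, slower approach."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack[-1]

# Table-based dispatch: one lookup per instruction instead of a chain of
# comparisons, regardless of how many opcodes the VM defines.
HANDLERS = {
    "PUSH": lambda stack, arg: stack.append(arg),
    "ADD":  lambda stack, _: stack.append(stack.pop() + stack.pop()),
    "MUL":  lambda stack, _: stack.append(stack.pop() * stack.pop()),
}

def run_table(program):
    stack = []
    for op, arg in program:
        HANDLERS[op](stack, arg)
    return stack[-1]

program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
assert run_if_chain(program) == run_table(program) == 20
```

Both loops are trivially fast here; the difference only matters at native-code scale, where replacing the if-chain with a table avoids a per-instruction branch cascade without adding any startup cost.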
