Autolang Runtime / VM

A deep dive into the high-performance, deterministic execution environment of Autolang.

Design Philosophy

The Autolang Runtime is designed around three core pillars that distinguish it from heavyweight VMs such as the JVM or .NET:

  • Zero Magic: No hidden background threads, no "stop-the-world" Garbage Collection, and no JIT compilation warming up. Execution is 100% deterministic.
  • Embeddability First: The VM is a library, not an operating system. It allows for seamless integration into C++ host applications.
  • Predictable Performance: Memory usage and execution paths are clear, making it suitable for real-time and embedded constraints.

Execution Model: The "Dumb VM"

We follow the "Smart Compiler, Dumb VM" architecture.

The VM "knows nothing" about source code, syntax sugar, or parsing logic. It purely executes data structures passed down from the compiler. This keeps the VM extremely lightweight (a few hundred kilobytes).

Current: Stack-Based

Currently, Autolang utilizes a Stack-Based VM. Instructions pop operands from the stack, perform operations, and push results back. This simplifies compiler implementation and keeps bytecode compact.

Future: Register-Based

To achieve native-like performance on single-core chips, we plan to transition to a Register-Based VM. This reduces the total instruction count (less push/pop overhead) and better utilizes CPU cache locality.
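For a single statement like `a = b + c`, the two encodings compare roughly as follows (illustrative mnemonics):

```
; Stack-based: 4 instructions, operands flow through the stack
LOAD  b             ; push b
LOAD  c             ; push c
ADD                 ; pop two, push sum
STORE a             ; pop into a

; Register-based: 1 instruction, operands addressed directly
ADD r_a, r_b, r_c   ; r_a = r_b + r_c
```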

Stack & Frame Management

Because the VM is single-threaded, we can use aggressive stack optimizations that are impossible in multi-threaded VMs without locking.

  • Pre-calculated Frames: The compiler calculates exactly how much stack space a function needs. The VM allocates the frame in O(1).
  • Sequential Allocation: Uses a specialized StackAllocator that is strictly sequential (no locks required).
  • Instant Reclamation: When a function returns, the stack pointer is simply moved back. No GC scan required.
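The three points above can be sketched as a strictly sequential bump allocator; the class and method names here are illustrative, not the VM's actual API:

```cpp
#include <cstddef>
#include <vector>

// Sketch of a strictly sequential frame allocator: pushing a frame is a
// pointer bump, popping it is a pointer rewind. No locks, no GC scan.
class StackAllocator {
    std::vector<std::byte> buf;
    size_t top = 0;                     // current stack pointer
public:
    explicit StackAllocator(size_t bytes) : buf(bytes) {}

    // O(1): the compiler has already calculated the exact frame size.
    void* pushFrame(size_t frameSize) {
        if (top + frameSize > buf.size()) return nullptr; // caller grows stack
        void* frame = buf.data() + top;
        top += frameSize;
        return frame;
    }

    // O(1) reclamation: simply move the stack pointer back.
    void popFrame(size_t frameSize) { top -= frameSize; }

    size_t used() const { return top; }
};
```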

Dynamic Stack Resizing

If the recursion depth exceeds the current stack size, the VM automatically reallocates to a larger memory region. Crucially, it can also shrink back to save RAM on embedded devices when deep recursion ends.
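A grow/shrink policy along those lines might look like the following sketch (the thresholds are illustrative, and a real implementation would also have to relocate live frame pointers after each reallocation, which this glosses over):

```cpp
#include <cstddef>
#include <vector>

// Sketch of a VM stack that grows on overflow and shrinks back when
// deep recursion ends. Thresholds are illustrative.
class ResizableStack {
    std::vector<std::byte> buf;
    size_t top = 0;
public:
    explicit ResizableStack(size_t initial) : buf(initial) {}

    void push(size_t bytes) {
        if (top + bytes > buf.size())
            buf.resize((top + bytes) * 2);     // reallocate a larger region
        top += bytes;
    }

    void pop(size_t bytes) {
        top -= bytes;
        // Shrink back to save RAM on embedded devices once usage drops.
        if (buf.size() > 4096 && top < buf.size() / 4) {
            buf.resize(buf.size() / 2);
            buf.shrink_to_fit();
        }
    }

    size_t capacity() const { return buf.size(); }
    size_t used() const { return top; }
};
```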

Memory Management Strategy

Autolang uses a hybrid approach to memory, balancing speed and safety.

1. AreaAllocator & ObjectManager

Heap allocation is handled by AreaAllocator, which grabs large blocks of memory from the OS and sub-allocates them. The ObjectManager sits on top to manage object lifecycles.
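The block-grabbing idea can be sketched as follows; the block size and class name are assumptions for illustration, and small requests are assumed to fit within one block:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Sketch: grab large blocks of memory up front, then sub-allocate small
// pieces from the current block. Block size is illustrative.
class AreaAllocator {
    static constexpr size_t kBlockSize = 64 * 1024;
    std::vector<std::unique_ptr<std::byte[]>> blocks;
    size_t offset = kBlockSize;          // forces a block on first alloc
public:
    // Assumes n <= kBlockSize for simplicity.
    void* alloc(size_t n) {
        if (offset + n > kBlockSize) {   // current block exhausted
            blocks.push_back(std::make_unique<std::byte[]>(kBlockSize));
            offset = 0;
        }
        void* p = blocks.back().get() + offset;
        offset += n;
        return p;
    }
    size_t blockCount() const { return blocks.size(); }
};
```

Many small allocations share one large block, so the per-object cost is a pointer bump rather than a trip to the OS allocator.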

2. Primitive Caching

To reduce allocation pressure, common immutable objects like small Int and Float are cached and reused by the ObjectManager.
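One common way to implement this (a sketch under assumed cache bounds; the range and struct layout here are not Autolang's actual ones) is to pre-build the small immutable values once and hand out shared pointers:

```cpp
#include <array>
#include <cstdint>

// Simplified object representation for illustration only.
struct Obj { int64_t value; uint32_t refCount; };

// Sketch: immutable small Ints in [-128, 127] are built once and reused,
// so "allocating" one is just handing out a pointer to the cached object.
class SmallIntCache {
    std::array<Obj, 256> cache{};
public:
    SmallIntCache() {
        for (int i = 0; i < 256; ++i)
            cache[i] = Obj{ i - 128, 1 };    // permanent refcount
    }
    Obj* get(int64_t v) {
        if (v >= -128 && v <= 127) return &cache[v + 128];
        return nullptr;                      // fall back to real allocation
    }
};
```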

3. Deterministic Deallocation

  • Reference Counting: Objects are reclaimed immediately when their ref-count hits zero.
  • No GC Pauses: There is no background garbage collector thread. You never get random latency spikes.
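The deterministic part is that reclamation happens at the exact moment the count reaches zero, in the thread that dropped the last reference; a minimal sketch (function names are illustrative):

```cpp
#include <cstdint>

struct RcObject {
    uint32_t refCount = 1;   // the creating owner's initial reference
};

inline void retain(RcObject* o) { ++o->refCount; }

// Returns true if this call reclaimed the object. There is no background
// collector: the object dies immediately when the count hits zero.
inline bool release(RcObject* o) {
    if (--o->refCount == 0) {
        delete o;            // reclaimed right here, no GC pause later
        return true;
    }
    return false;
}
```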

Future Roadmap: The "Hot Restart" Architecture

We are evolving the memory model to support high-reliability embedded systems and game loops. Instead of traditional GC, we aim for a Dual-Arena System:

Volatile Session Arena

Contains short-lived objects (e.g., request handlers, game entities). Designed to be wiped instantly via `VM.restart()` without CPU overhead.

Persistent Cache Arena

Contains long-lived data (config, assets) that survives restarts. Objects moved here are "safe" from the session wipe.

Goal: Allow devices to run indefinitely by periodically resetting the "Session" arena, eliminating fragmentation and memory leaks by design.
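The dual-arena scheme described above might be sketched like this; apart from `VM.restart()`, which the document names, the classes and sizes are assumptions:

```cpp
#include <cstddef>
#include <vector>

// A bump arena: allocation is a pointer bump, wiping it is O(1).
class Arena {
    std::vector<std::byte> buf;
    size_t top = 0;
public:
    explicit Arena(size_t bytes) : buf(bytes) {}
    void* alloc(size_t n) {
        if (top + n > buf.size()) return nullptr;
        void* p = buf.data() + top;
        top += n;
        return p;
    }
    void reset() { top = 0; }       // instant wipe, no per-object work
    size_t used() const { return top; }
};

class VM {
public:
    Arena session{64 * 1024};       // short-lived: entities, request state
    Arena persistent{64 * 1024};    // long-lived: config, assets

    // "Hot restart": session objects vanish, persistent data survives.
    void restart() { session.reset(); }
};
```

Because the session arena is reset wholesale, fragmentation and leaks in session data cannot accumulate across restarts.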

Runtime Type System

At runtime, all heap objects are represented by a unified structure: AObject.

struct AObject {
  uint32_t classId;   // unique type ID
  uint32_t refCount;  // reference count
  uint32_t flags;     // flags: IS_CONST, IS_FREE
  uint32_t reserve;   // reserved
  union {             // 8-byte payload (member names illustrative)
    int64_t i;
    double  f;
    void*   ptr;
  } data;
};                    // 24 bytes total

This simple structure allows for:

  • Fast `is` checks: Type checking is just an integer comparison.
  • Monomorphization: The compiler and VM work together to generate specialized bytecode for generic types (e.g., INT_PLUS_INT vs FLOAT_PLUS_FLOAT).
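The first point above means an `is` check lowers to a single integer comparison on `classId`, with no class-hierarchy walk; a sketch, where the ID values are hypothetical (the real numbering is assigned by the compiler):

```cpp
#include <cstdint>

// Only the header field relevant to type checks, mirroring AObject.
struct AObjectHeader { uint32_t classId; };

// Hypothetical class IDs for illustration.
enum ClassId : uint32_t { CLASS_INT = 1, CLASS_FLOAT = 2 };

// `x is Int` compiles down to one integer comparison.
inline bool isInstance(const AObjectHeader* o, uint32_t classId) {
    return o->classId == classId;
}
```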

Native Integration Modes

Since the VM operates on raw data, it supports multiple loading strategies optimized for different stages of development.

| Mode | Use Case |
| --- | --- |
| Direct Memory Transfer | Development / REPL. The compiler feeds structs directly to the VM. |
| Binary Bytecode (.alb) | Deployment. Pre-compiled files loaded at runtime. |
| Raw C++ Arrays | Embedded. Bytecode exported as `unsigned char[]` header files for zero load time. |
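For the embedded mode, an exported header might look like the following sketch; the magic bytes, array name, and `checkHeader` helper are all assumptions for illustration, not the actual .alb format:

```cpp
#include <cstddef>

// Hypothetical compiler output: bytecode baked into the firmware image.
static const unsigned char program_alb[] = {
    'A', 'L', 'B', 0x01,   // hypothetical magic + format version
    0x00, 0x02, 0x03,      // ...bytecode...
};

// Zero load time: the VM can point directly at the array in flash/ROM,
// with no file I/O and no copy required. Here we only validate the header.
inline bool checkHeader(const unsigned char* code, size_t len) {
    return len >= 4 && code[0] == 'A' && code[1] == 'L'
                    && code[2] == 'B' && code[3] == 0x01;
}
```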