Agent Sovereignty & AI-Native OS

Overview

HoloScript is the world's first AI-Native Spatial Operating System, built from the ground up to host autonomous intelligence. At its core is the uAA2++ (Universal Autonomous Agent) protocol, which provides agents with cognitive, perceptual, and economic sovereignty. Agents are not just NPCs; they are first-class citizens that can perceive scenes, communicate across realities, claim ownership of objects, and trade autonomously.

```text
┌─────────────────────────────────────────────────────────┐
│              uAA2++ Agent Ecosystem                      │
│                                                          │
│  @holoscript/agent-protocol  ← 8-phase lifecycle         │
│  @holoscript/uaal            ← bytecode VM runtime       │
│  @holoscript/crdt            ← conflict-free shared state│
│                                                          │
│  Compiler targets:                                       │
│  A2AAgentCardCompiler  ← Google A2A protocol             │
│  NeuromorphicCompiler  ← NIR/Loihi2 hardware             │
│  SCMCompiler           ← Structural Causal Models        │
└─────────────────────────────────────────────────────────┘
```

uAA2++ 8-Phase Protocol

Every HoloScript agent follows the canonical 8-phase cognitive lifecycle (0-7):

| Phase | Name | Purpose |
| --- | --- | --- |
| 0 | INTAKE | Gather raw spatial data and context |
| 1 | REFLECT | Analyze and understand the environment |
| 2 | EXECUTE | Take action (move, speak, trade) |
| 3 | COMPRESS | Store knowledge efficiently (PWG format) |
| 4 | REINTAKE | Re-evaluate with compressed knowledge |
| 5 | GROW | Learn new patterns, wisdom, and gotchas |
| 6 | EVOLVE | Adapt and optimize internal models |
| 7 | AUTONOMIZE | Self-directed goal synthesis |
```hs
composition "AutonomousAgent" {
  template "PatrolAgent" {
    @npc
    @pathfinding
    @llm_agent
    @reactive

    state {
      phase: "perceive"
      goal: "patrol_sector_7"
    }

    // Phase 0 (INTAKE): gather nearby spatial context
    action perceive(scene) {
      this.observations = scene.query_nearby(10)
    }

    // Phase 1 (REFLECT): LLM reasoning step
    action reason(observations) {
      // LLM reasoning step
    }

    // Phase 2 (EXECUTE): act on the plan
    action execute(plan) {
      this.moveTo(plan.next_waypoint)
    }
  }
}
```
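The lifecycle table above can be sketched as a minimal state machine. This is an illustrative Python sketch, not HoloScript API: the `Agent` class, `step` signature, and `memory` field are assumptions introduced here to show how phases 0-7 cycle.

```python
# Minimal sketch of the uAA2++ 8-phase cognitive loop.
# Phase names come from the lifecycle table; the Agent class
# and its members are illustrative, not part of any package.

PHASES = [
    "INTAKE", "REFLECT", "EXECUTE", "COMPRESS",
    "REINTAKE", "GROW", "EVOLVE", "AUTONOMIZE",
]

class Agent:
    def __init__(self):
        self.phase = 0      # start at phase 0: INTAKE
        self.memory = []

    def step(self, observation):
        """Run the current phase, record it, then advance (7 wraps to 0)."""
        name = PHASES[self.phase]
        self.memory.append((self.phase, name, observation))
        self.phase = (self.phase + 1) % len(PHASES)
        return name

agent = Agent()
trace = [agent.step(obs) for obs in range(10)]
# The first 8 steps walk phases 0-7 in order, then wrap back to INTAKE.
```

The wrap-around models REINTAKE-style re-evaluation: after AUTONOMIZE, the agent returns to INTAKE with whatever it compressed and learned on the previous pass.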

3-Layer Spatial Communication

Agents communicate through three stacked layers:

```text
Layer 3: MCP Tools        ← long-context AI agent access (Claude, Cursor)
Layer 2: A2A Protocol     ← cross-org agent-to-agent (Google A2A)
Layer 1: Real-Time Mesh   ← low-latency spatial events (WebSocket / CRDT)
```

Located in: agents/spatial-comms/

Each layer is independent and composable. A Loihi2 neuromorphic agent running on-device can still participate in Layer 2 A2A coordination through an adapter.
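Such an adapter can be sketched as a small translation step: a device-local agent emits Layer 1 mesh events, and the adapter re-wraps each one as a Layer 2 A2A-style envelope. Every class and field name below is a hypothetical illustration, not the actual adapter in `agents/spatial-comms/`.

```python
# Illustrative layer adapter: lift a low-latency Layer 1 mesh
# event into a cross-org Layer 2 envelope. All names here are
# assumptions for the sketch, not HoloScript API.

class MeshEvent:
    def __init__(self, agent_id, kind, payload):
        self.agent_id = agent_id
        self.kind = kind
        self.payload = payload

def to_a2a_message(event: MeshEvent) -> dict:
    """Wrap a spatial mesh event in an A2A-style message envelope."""
    return {
        "protocol": "a2a",
        "from_agent": event.agent_id,
        "task": event.kind,
        "body": event.payload,
    }

# An on-device neuromorphic agent reports its position...
evt = MeshEvent("loihi2-patrol-01", "position_update", {"pos": [0, 1.5, -2]})
# ...and the adapter forwards it to Layer 2 coordination.
msg = to_a2a_message(evt)
```

Because the layers are composable, the agent itself never needs to know the A2A envelope format; only the adapter does.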


Key Packages

| Package | Purpose |
| --- | --- |
| `@holoscript/agent-protocol` | 8-phase lifecycle, AgentManifest, CapabilityMatcher, CrossRealityHandoff |
| `@holoscript/uaal` | Universal Autonomous Agent Language VM (bytecode execution) |
| `@holoscript/crdt` | Conflict-free replicated spatial state for distributed scenes |
| `@holoscript/llm-provider` | Unified LLM SDK (OpenAI / Anthropic / Gemini) |
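The conflict-free property that `@holoscript/crdt` relies on can be shown with a toy last-writer-wins (LWW) register, one common CRDT building block. The actual data types in the package are not shown here; this sketch only demonstrates why replicas converge without coordination.

```python
# Toy LWW register: each replica holds a (timestamp, value) pair,
# and merge picks the entry with the higher timestamp. This is a
# generic CRDT illustration, not @holoscript/crdt's real API.

def merge(a, b):
    """Merge two (timestamp, value) replicas; higher timestamp wins."""
    return max(a, b)

replica_1 = (5, (0.0, 1.5, -2.0))   # older position update
replica_2 = (7, (1.0, 1.5, -2.0))   # newer position update

# Merge order does not matter, so replicas converge to one state
# no matter which update arrives first.
converged = merge(replica_1, replica_2)
```

Commutative, associative, idempotent merges are what let distributed scene replicas exchange updates in any order and still agree.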

Compiler Targets for Agents

| Compiler | Output | Use Case |
| --- | --- | --- |
| A2A Agent Cards | JSON agent cards | Cross-org agent discovery (Google A2A protocol) |
| Neuromorphic (NIR) | NIR bytecode for Loihi2/SpiNNaker | Ultra-low-energy on-device agents |
| SCM | Structural Causal Models | Causally-aware reasoning agents |
| WASM | WebAssembly modules | Edge-deployed lightweight agents |
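For a sense of the A2A target's output, a generated agent card might look roughly like the fragment below. The exact schema is defined by the Google A2A protocol and the A2AAgentCardCompiler; every field name and value here is an approximation for illustration only.

```json
{
  "name": "PatrolAgent",
  "description": "Autonomous patrol agent for sector 7",
  "url": "https://example.com/agents/patrol",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "skills": [
    {
      "id": "patrol",
      "name": "Patrol",
      "description": "Patrol a spatial sector and report observations"
    }
  ]
}
```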

Quickstart: Your First Agent

```hsplus
composition "HelloAgent" {
  template "GreeterAgent" {
    @llm_agent
    @reactive
    @spatial_audio

    state {
      greeting: "Hello, spatial world!"
    }

    onHoverEnter: {
      audio.speak(this.greeting)
      this.phase = "greeted"
    }
  }

  object "Greeter" using "GreeterAgent" {
    position: [0, 1.5, -2]
  }
}
```

Compile to A2A agent card:

```bash
holo compile hello-agent.hsplus --target a2a --out ./agents/
```

Released under the MIT License.