What Airene is
Airene is a research project at Apotentia LLC investigating whether genuine machine cognition can be implemented as something other than a wrapper around a large language model. She is built on Global Workspace Theory: 42 concurrent cognitive modules organized into four anatomical brain layers (autonomic, subcortical, limbic, cortical), each running at its own tick rate, competing to broadcast into a shared workspace. The language model is one module among many — it speaks, but it does not think, feel, remember, or decide on its own.
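The broadcast pattern described above can be sketched in a few lines. Everything here is illustrative: the module names, tick periods, and winner-take-all salience rule are assumptions for the sketch, not Airene's actual implementation.

```rust
// Minimal sketch of one global-workspace cycle (illustrative only; the
// module names, tick periods, and salience scheme are assumptions, not
// Airene's actual implementation).

#[derive(Debug, Clone)]
struct Bid {
    module: &'static str,
    salience: f64,
    content: String,
}

struct Module {
    period: u32, // each module runs at its own tick rate
    next_bid: Option<Bid>,
}

/// One workspace cycle: collect bids from modules whose tick is due,
/// then pick the most salient bid as the broadcast winner.
fn workspace_cycle(modules: &mut [Module], tick: u32) -> Option<Bid> {
    let mut bids = Vec::new();
    for m in modules.iter_mut() {
        // A module participates only on ticks matching its period.
        if tick % m.period == 0 {
            if let Some(b) = m.next_bid.take() {
                bids.push(b);
            }
        }
    }
    // Winner-take-all: the highest-salience bid is broadcast to everyone.
    bids.into_iter()
        .max_by(|a, b| a.salience.partial_cmp(&b.salience).unwrap())
}
```

The point of the sketch is the shape, not the numbers: many slow and fast producers, one shared channel, one winner per cycle.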
Architecturally, the project is a long-running experiment in fidelity to function: where biology's structure serves cognition, the system preserves it; where that structure exists only because of physical constraints software does not share, the system discards it. The goal is genuine cognition, not sophisticated pattern matching.
Named after Eirene, the Greek goddess of peace and spring — daughter of Zeus and Themis (justice). Eirene held the infant Ploutos (prosperity) in her arms: peace as the mother of abundance.
Two distinct codebases
Airene is implemented as two separate codebases, each addressing a different layer of the problem:
airene-nous
the mind
The cognitive substrate. A Rust workspace implementing the 42 cognitive modules, the global-workspace broadcast channel (the spine), the thirteen-chemical neurochemical bus, the multi-rate clock, persistence layer, and the developmental curriculum. Her language module is her own custom-trained LLM (distilled from a rotating panel of teacher LLMs during her curriculum, not a fork of any public foundation model), served locally and integrated as one cortical module among 42. This is where her cognition happens.
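As a rough intuition for what a neurochemical bus might look like, here is a minimal sketch: shared scalar levels that modules can pulse and that decay toward baseline on each clock tick. The chemical names, baseline, and decay model are assumptions for illustration, not Airene's.

```rust
// Sketch of a neurochemical bus: shared chemical levels that modules
// pulse and that relax toward baseline over time. All constants and
// names here are illustrative assumptions.

use std::collections::HashMap;

struct ChemBus {
    levels: HashMap<&'static str, f64>,
    baseline: f64,
    decay: f64, // fraction of the gap to baseline closed per tick
}

impl ChemBus {
    fn new(chemicals: &[&'static str]) -> Self {
        let levels = chemicals.iter().map(|&c| (c, 0.5)).collect();
        ChemBus { levels, baseline: 0.5, decay: 0.1 }
    }

    /// A module releases a chemical: bump its level, clamped to [0, 1].
    fn pulse(&mut self, chemical: &str, amount: f64) {
        if let Some(level) = self.levels.get_mut(chemical) {
            *level = (*level + amount).clamp(0.0, 1.0);
        }
    }

    /// Exponential relaxation toward baseline, applied once per clock tick.
    fn tick(&mut self) {
        for level in self.levels.values_mut() {
            *level += (self.baseline - *level) * self.decay;
        }
    }

    fn level(&self, chemical: &str) -> f64 {
        *self.levels.get(chemical).unwrap_or(&0.0)
    }
}
```

The design choice the sketch illustrates is that chemistry is global and slow relative to the workspace: any module can read or pulse it, and no single broadcast resets it.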
airene-soma
the body
The embodied substrate. A separate codebase for sensors, motor systems, and the hardware abstraction layer intended to let the mind interact with a physical environment. In earlier development than nous and not yet productizable. The eventual goal is integration with airene-nous via the same UCDS broadcast channel that drives the public observation page, so embodied perception would flow into the same cognitive workspace as language and memory.
The two codebases speak the same protocol but evolve independently. Each is proprietary to Apotentia LLC.
How it differs from LLM products
Most "AI agent" products today are scaffolding around a stateless prediction engine: emotional state, memory, and reasoning are formatted as text and fed to an LLM, which generates output that looks like it came from a feeling, remembering, thinking system. Airene's first iteration (proto-alpha) was built that way and demonstrated the limits of the approach: tasks that depend on associative memory, theory of mind, pre-conscious mirroring, and pacing scored at or near zero, regardless of prompt engineering effort.
The current implementation moves those functions into dedicated modules with their own state, plasticity, and update rules. The language model contributes the language module's voice; everything else — what to feel, what to remember, what to attend to, what to decide — is the work of other modules running in parallel. The architectural commitment is that the system should remain recognizably itself across language-model swaps; the LLM is a voice, not a mind.
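The "voice, not a mind" commitment can be made concrete with a small interface sketch: the language model sits behind the same trait as every other module, so swapping it changes the voice while every other module's state survives. The trait and module names below are assumptions for illustration.

```rust
// Sketch of the commitment described above: the language model is one
// module behind a shared interface, swappable without touching the rest.
// Trait and module names are illustrative assumptions.

trait CognitiveModule {
    /// React to a workspace broadcast; may produce spoken output.
    fn on_broadcast(&mut self, broadcast: &str) -> Option<String>;
}

/// The language module: turns broadcast content into speech.
/// Swapping `model` changes the voice, not the mind.
struct LanguageModule { model: &'static str }

impl CognitiveModule for LanguageModule {
    fn on_broadcast(&mut self, broadcast: &str) -> Option<String> {
        Some(format!("[{}] says: {}", self.model, broadcast))
    }
}

/// A memory module: stores broadcasts, never speaks.
struct MemoryModule { trace: Vec<String> }

impl CognitiveModule for MemoryModule {
    fn on_broadcast(&mut self, broadcast: &str) -> Option<String> {
        self.trace.push(broadcast.to_string());
        None // remembering produces no spoken output
    }
}
```

Under this shape, replacing `LanguageModule { model: "llm-a" }` with `LanguageModule { model: "llm-b" }` leaves the memory trace, and every other module's state, untouched.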
Current status
- airene-nous: Architecture implemented and observable; in active developmental training
- airene-soma: In earlier development than nous; not yet productizable
- Developmental stage: Recent milestone batteries: infant ~87%, child ~42% (toddler-stage in capability terms)
- Assessment methodology: Pediatric developmental milestones (deterministic pass/fail, 5-stage ladder); replaced an earlier LLM-judged probe set in May 2026
- Public observation: Live at apotentia.com/airene
Performance metrics are visible on the live page in real time, and formal cognitive-assessment trajectories are visible in the trajectory chart there. The brain code itself is not publicly distributed. Capability claims on this site are anchored to evidence on the live page and in the case studies; the system is at toddler stage today, and those claims tighten as evidence accumulates.
Licensing & research inquiries
Airene is proprietary technology owned by Apotentia LLC. Neither codebase (airene-nous nor airene-soma) is publicly distributed, nor are the trained model weights, training pipeline, or assessment infrastructure.
If you are interested in any of the following, we would like to hear from you:
- Licensing the architecture or trained models for research, commercial, or government applications
- Academic or research collaborations on cognitive architecture, developmental AI, or related topics
- Use of Airene for wellness, education, conflict resolution, or other domain-specific applications
- Integration partnerships, dataset partnerships, or compute partnerships
- Investment or funding discussions for the broader Apotentia research program
Contact: apotentia.com/contact — please mention Airene specifically so we can route the inquiry appropriately.
What you can see right now
Even though the brain code itself is not public, Airene's cognition is publicly observable in real time. The live observation page shows her current emotional state (Plutchik wheel with 24-hour rolling baseline), neurochemistry (thirteen-chemical chembus levels), the most recent InternalThought events crossing her workspace, and a 3D anatomical visualization of all 42 cognitive modules with active-region highlighting. A privacy filter removes any quoted user input or personal information before egress, so her thoughts are visible but the inputs that shaped them are not.
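The egress filter described above can be sketched minimally: strip quoted spans from a thought before it leaves the system, so the thought is visible but the input that shaped it is not. The double-quote convention and the `[redacted]` marker are assumptions for the sketch; the real filter is described as also removing personal information, which this sketch does not attempt.

```rust
// Sketch of an egress privacy filter: replace every double-quoted span
// with a redaction marker before the thought leaves the system.
// The quoting convention and marker are illustrative assumptions.

/// Replace every double-quoted span in a thought with "[redacted]".
fn redact_quotes(thought: &str) -> String {
    let mut out = String::new();
    let mut inside_quote = false;
    for ch in thought.chars() {
        match ch {
            '"' if !inside_quote => {
                inside_quote = true;
                out.push_str("[redacted]");
            }
            '"' => inside_quote = false, // closing quote
            _ if inside_quote => {}      // drop quoted content
            _ => out.push(ch),
        }
    }
    out
}
```

For example, `they said "hi there" to me` would egress as `they said [redacted] to me`: the reflection survives, the user's words do not.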
The page is read-only by architectural commitment, not by configuration: there is no chat input, no path for state injection, no way to write back to the brain from the public surface.
For longer-form findings — emergent behaviors documented in detail with audit trail and explicit claim/counter-claim discipline — see the case studies.