— ARS · MAGNA —

O P U S

A  multi-agent  swarm  architecture
for  collective  reasoning


§1 — Manifesto

Opus is not a model. It is a colony.

A single language model, however large, reasons in one voice. It produces a stream of plausible tokens, defends them, and moves on. It cannot meaningfully disagree with itself, cannot triangulate, and cannot be falsified except from outside.

We replace that lonely soliloquy with a structured swarm. Three concentric tiers — Scouts at the perimeter, Workers in the middle, a Hive Core at the centre — coordinate not by speaking to each other but by writing typed records to a single shared substrate: the Blackboard, an append-only event log. The environment is the conversation.

When the colony has deliberated enough, three stages of consensus run: weighted Borda aggregation across Worker rankings, an LLM-as-Judge adjudication on near-ties, and a Verifier pass that attempts to falsify the chosen answer. If verification fails, the swarm re-deliberates with the falsification as a new constraint. The loop is bounded. The colony does not lie about its certainty.
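The bounded loop reads as pseudocode almost directly. A sketch, with `rank_and_judge` and `verify` standing in for the first two consensus stages and the Verifier pass (these callables and the attempt bound are illustrative, not opus-core's actual API):

```python
def deliberate(question, rank_and_judge, verify, max_attempts=3):
    """Bounded re-deliberation: each failed verification becomes a constraint.

    rank_and_judge(question, constraints) -> candidate answer (stages 1 + 2)
    verify(answer) -> (ok, falsification)                      (stage 3)
    """
    constraints = []
    answer = None
    for _ in range(max_attempts):
        answer = rank_and_judge(question, constraints)
        ok, falsification = verify(answer)
        if ok:
            return answer, True            # verified within the bound
        constraints.append(falsification)  # the failure re-enters deliberation
    return answer, False  # out of attempts: report the answer as unverified
```

The second element of the return value is the point of the design: when the bound is exhausted, the swarm says so instead of inflating its confidence.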

Solve et coagula. Dissolve a single mind into many; recombine the many into one well-considered answer. That is the Great Work.

§2 — Three principles

The Beehive Brain

I. Parallel Exploration

Many Scouts read the world at once. None depend on the others; all write to the same Blackboard. Coverage, not coordination, is the unit of progress.

II. Stigmergic Memory

Agents do not message each other. They modify a shared environment — the Blackboard — and respond to its state. Communication is a side effect of work.

III. Consensus Synthesis

Three stages: Borda aggregation, Judge adjudication, Verifier falsification. The colony surfaces what survives, attached to its provenance and confidence.
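Stage one is classical social choice. A sketch of weighted Borda aggregation over Worker rankings, with a near-tie check that decides whether to escalate to the Judge (the function name, ranking format, and tie margin are illustrative assumptions, not the project's actual interface):

```python
from collections import defaultdict


def weighted_borda(rankings, tie_margin=0.05):
    """Aggregate Worker rankings into one ordering.

    rankings: worker id -> (weight, list of candidate ids, best first).
    Returns (scores, winner, near_tie); near_tie=True means the top two
    are close enough that the Judge should adjudicate.
    """
    scores = defaultdict(float)
    for weight, ranked in rankings.values():
        n = len(ranked)
        for pos, cand in enumerate(ranked):
            scores[cand] += weight * (n - 1 - pos)  # top rank earns most points
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    winner = ordered[0][0]
    near_tie = (
        len(ordered) > 1
        and ordered[0][1] - ordered[1][1] <= tie_margin * ordered[0][1]
    )
    return dict(scores), winner, near_tie
```

Borda is cheap and order-sensitive; the Judge is expensive and semantic. Escalating only on near-ties spends the expensive stage where the cheap one is least trustworthy.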

§3 — Architecture

The Topology


§4 — Stack

Materials

claude-opus-4-7 · Workers, Judge, Verifier · ACTIVE
claude-sonnet-4-6 · Scouts, lightweight critique · ACTIVE
asyncio + anyio · Concurrency runtime · ACTIVE
pydantic v2 · Strict typing on every Record · ACTIVE
structlog (JSON) · Structured observability · ACTIVE
typer · CLI surface (`opus query …`) · ACTIVE
hatchling · Packaging backend · ACTIVE
pytest + pytest-asyncio · Test suite · ACTIVE
Next.js 14 + Three.js + GSAP · This site · LIVE
Qdrant · Vector memory (interface stubbed) · PLANNED
Neo4j · Graph memory (interface stubbed) · PLANNED
Redis Streams · Distributed Blackboard backend · PLANNED
OpenTelemetry · Distributed tracing · PLANNED

§5 — A swarm in motion

Live Swarm


§6 — Philosophy

The Great Work

Why a swarm?

A colony introspects. A lone agent does not. The cheapest unit of useful disagreement is two agents reading the same Blackboard and producing different syntheses. Once you have that, you can rank, you can adjudicate, you can falsify. Cognition becomes legible.

“Dissolve the one mind into many. Recombine the many into one well-considered answer.”

Lineage

OPUS stands on three traditions: Ramon Llull’s Ars Magna (1305) — combinatorial generation under a falsifier; Hearsay-II (CMU, 1971–1976) — the first blackboard architecture for cooperative cognition; and stigmergy (Grassé, 1959) — the principle that termites, and now agents, coordinate by modifying their environment.

Ars Magna

We do not claim novelty in the parts. We claim attention to the whole — combinatorial generation, shared substrate, stigmergic coordination, bounded falsification — applied to large language models with engineering discipline. Solve et coagula. The work continues.

§7 — Built in public

Build Log

2026-05-11

Day 0 — first runnable swarm

opus-core scaffolded end-to-end: Blackboard, eight agent roles, Hive orchestrator with 3-attempt verification loop, full provenance ledger, JSONL trace output, prompt caching plumbed through the LLM client.

2026-05-11

Day 0 — storefront online

opus-web scaffolded: Next.js 14, Three.js armillary sphere with mouse parallax, animated storm-cloud shader background, ten sections, custom palette and typography, full mobile fallback.

TBD

Day N — first cached query

Agent system prompts pushed past 4096 tokens to engage the Anthropic prompt cache; cost per query measurably drops.

— Initiation —

Join the Work

Daily builds are broadcast in public. The whitepaper is the source of truth. Early access is by request.