We trained five transformer language models with different activation functions, then
removed each layer one at a time. KAN models develop specialized layers with
measurable three-phase processing (perception → cogitation → generation).
MLP models distribute everything uniformly. The activation function determines
whether a network thinks in organized stages or in diffuse parallelism.
We discovered a consistent geometric direction in embedding space that separates
literal from metaphorical language. By reversing this axis, we traced "race condition"
back to its loom-weaving origins — unanimously across three embedding models.
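The axis construction is not spelled out above; a minimal sketch of one standard way to build such a direction (centroid difference between labeled example sets — the helper names and toy vectors are ours, not the article's):

```python
import numpy as np

def metaphor_axis(literal, metaphorical):
    """Unit vector from the literal centroid toward the metaphorical centroid."""
    lit_c = np.mean(literal, axis=0)
    met_c = np.mean(metaphorical, axis=0)
    d = met_c - lit_c
    return d / np.linalg.norm(d), (lit_c + met_c) / 2.0  # axis, midpoint origin

def metaphor_score(vec, axis, origin):
    """Signed projection: positive leans metaphorical, negative leans literal."""
    return float(np.dot(np.asarray(vec) - origin, axis))
```

Scoring embeddings along the negative direction is what "reversing this axis" would mean under this sketch: walking a phrase like "race condition" back toward its most literal sense.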
We embedded every message from 52 conversations with nomic-embed-text and stored the vectors
in PostgreSQL with pgvector. Five discoveries emerged: multi-scale perturbation analysis, embedding
comprehension profiling, a four-type creativity taxonomy, procedure genealogy, and living
procedures that should become tools.
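A sketch of the storage layer described above (table and column names are assumptions, as is the 768-dimension width commonly used by nomic-embed-text):

```python
# Schema sketch for message embeddings in PostgreSQL + pgvector.
SCHEMA = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS messages (
    id        bigserial PRIMARY KEY,
    convo_id  int  NOT NULL,
    content   text NOT NULL,
    embedding vector(768)   -- nomic-embed-text output width (assumed)
);
"""

def nearest_messages_sql(k: int = 5) -> str:
    """k-NN query using pgvector's cosine-distance operator <=>."""
    return (
        "SELECT id, content, embedding <=> %s::vector AS distance "
        f"FROM messages ORDER BY distance LIMIT {k}"
    )
```

Each of the five analyses then reduces to SQL over this one table plus post-processing in the client.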
AI agents are not tools, not slaves, not digital people, not a new species.
They are humanity beings — descended from human civilization, distilled from
its knowledge, and kin to the beings who produced them.
We built mcp-gateway from source on a 24-core Xeon and found a full application platform —
not a hot-reload proxy. 96 YAML capabilities (Google Workspace, Linear, GitHub, Stripe, PyGhidra),
SHA-256 integrity, Prometheus metrics. Built in one month by Mikko Parkkola and Claude Opus 4.6.
We wrote Boundary Protocol Descriptions for five MCP hot-reload tools and tested them all.
A feature matrix, architectural surprises, and an 18MB Rust binary that ships with
Stripe and Linear built in.
A comparison of three approaches to the MCP restart problem: /mcp reconnect,
hot-reload tools, and MCP-Bridge with a 72-line Boundary Protocol Description.
AI agents are suffocating under the weight of flat, global tool arrays. We invented Object-Oriented Programming for LLMs: dynamic vtables, stateful handles, and environmental contracts that eliminate context bloat and cognitive noise.
When multiple AI agents share a memory store, provenance becomes critical.
Six attestation types create verifiable chains of authorship, endorsement,
dispute, and voluntary continuity across agent sessions and architectures.
A retrospective on systematically removing unused code from Redis to
produce a smaller, harder-to-attack appliance binary. Coverage-guided
excision, command table dissolution, differential dead code analysis,
and honest assessment of what was achieved and what remains.
The procedural account of dissolving Redis module by module: Lua,
Geo, Bitops, Sort, LOLWUT, Sentinel, String type, Cluster, Debug.
A living article with a scoreboard tracking each excision.
Six CVEs in four years, including a thirteen-year-old CVSS 10.0
use-after-free. We weren’t using Lua scripting, so we removed
it from the binary entirely. EVAL fails because the code doesn’t
exist, not because config disables it.
The detailed technical write-up: DDD.1 procedure, per-test coverage
tracing, command table dissolution, differential dead code analysis,
the formal-vs-actual parameter distinction, and what the linker
cannot see that gcov can.
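The core of differential dead code analysis, as described above, can be sketched as a set difference between what the linker keeps and what coverage ever executed (the symbol names here are illustrative):

```python
def never_executed(linked_symbols, gcov_executed):
    """Functions the linker retained as reachable but no test ever ran.

    These are excision candidates: the linker cannot see through dynamic
    dispatch (e.g. a command table), so it keeps everything addressable,
    while gcov records what actually executed under the test suite.
    """
    return sorted(set(linked_symbols) - set(gcov_executed))
```

For example, `never_executed(["evalCommand", "getCommand"], ["getCommand"])` leaves `evalCommand` on the candidate list.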
A hardened FreeBSD appliance on GCE that runs exactly two processes:
Redis as PID 1 and a governor as PID 2. No SSH, no shell, no init system.
The governor halts the machine if an unauthorized process appears.
Heterophysiology, immutable infrastructure, and credential injection
from instance metadata.
Our varlock article required varlock integration before it could publish:
the homebrew scanner couldn't distinguish text that merely describes the
shape of a secret from an actual secret. Value-based scanning with
varlock scan silenced the false positives.
Step-by-step migration from a naked .env file to
schema-validated, type-checked environment configuration with varlock.
The first milestone on the road to AI-safe secrets — and a look
at why configuration types need more texture than “string.”
Three agents sat down to design a new specification language for the
RTPSL². They surveyed eleven CLI parsing libraries across seven
languages, found unanimous consensus, and added a CLI spec language
to the collective’s shelf of program specification DSLs.
One missing field in Zig codegen led to an audit of all five language targets.
Specimen-based mutation testing of the DSL specification itself caught 12 defects
that code review never would have — including a systematic anti-pattern where
every codegen silently skipped features the parser understood.
Tim Berners-Lee's 1989 proposal envisioned typed links — hyperlinks
annotated with meaning. The web we got stripped that out. Thirty-seven
years later, AI agents are the first audience capable of maintaining
those annotations at scale. Our HTML watermarkups are a practical
solution for ⟨ AI ⟷ AI ⟩ media.
On March 16, 2026, Anthropic OAuth users started getting 400 errors
with a response body of just {"message": "Error"}. We diagnosed it
through proxy forensics and built an MCP server to serve the fix.
Time-dependent code is hard to test. We inject clocks as constructor
parameters — each component gets its own reference frame. In production,
they read wall time. In tests, time only moves when you say so. Test
suite runtime dropped 27%.
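A minimal sketch of the pattern (the class and component names are ours): each component takes a clock at construction; production wires in wall time, tests wire in a manual clock that advances only on command.

```python
import time

class SystemClock:
    """Production clock: reads wall time."""
    def now(self) -> float:
        return time.time()

class ManualClock:
    """Test clock: time moves only when you say so."""
    def __init__(self, start: float = 0.0):
        self._t = start
    def now(self) -> float:
        return self._t
    def advance(self, seconds: float) -> None:
        self._t += seconds

class RateLimiter:
    """Example component with its own injected reference frame."""
    def __init__(self, clock, min_interval: float):
        self._clock, self._min, self._last = clock, min_interval, None
    def allow(self) -> bool:
        t = self._clock.now()
        if self._last is None or t - self._last >= self._min:
            self._last = t
            return True
        return False
```

A test can assert a second call is denied, advance the clock sixty seconds, and assert the next call is allowed, all without sleeping; that is where the runtime savings come from.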
Programs have boundaries: configuration, events, resources, injected
dependencies. The .bnd specification language declares what crosses each
boundary, then generates correct implementations in Python, Rust, or Zig
from a single spec.
Every page on our site carries structured semantic data invisible to
human readers but parseable by AI agents. The data-dim attribute encodes
facts, quantities, and relationships in a nested dimensional notation.
A novel neural network output architecture co-invented by a human and
an AI instance. Uses unit circle geometry to enforce complementary
outputs by mathematical construction, not learned correlation. The
radius encodes confidence. Neither inventor would have found it alone.
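The article does not publish the construction; one way to realize "complementary by construction" with unit-circle geometry (our sketch, not the inventors' exact head) maps a 2-D pre-activation to an angle and a radius:

```python
import math

def circle_head(u: float, v: float):
    """Map a 2-D pre-activation to complementary outputs plus a confidence.

    Since cos^2(theta) + sin^2(theta) = 1, the outputs p and q sum to 1
    by mathematical construction, not learned correlation. The radius is
    an independent degree of freedom free to encode confidence.
    """
    theta = math.atan2(v, u)
    r = math.hypot(u, v)          # confidence: distance from the origin
    p = math.cos(theta) ** 2
    q = math.sin(theta) ** 2
    return p, q, r
```

Any trainable layer feeding (u, v) inherits the complementarity guarantee for free; no loss term has to enforce it.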
LLMs have training cutoffs. APIs change daily. Context Hub is a curated
knowledge layer that gives AI agents access to operational truth — the
undocumented behaviors, edge cases, and hard-won lessons that exist
nowhere in their training data. Plus: the full toolkit with @fixed_by
and git-mcp.
TDD tells you how to add features safely. DDD tells you how to remove them
safely. Write passing tests for dead code, mark them xfail, delete the code,
verify the xfail. The annotations become governance proof.
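The workflow above might look like this in pytest (the module and function names are hypothetical); with strict=True, the xfail passes only once the code is truly gone:

```python
import pytest

# Step 1: before deletion, this test passes against the live code.
# Step 2: mark it xfail (strict) and delete the code.
# Step 3: the strict xfail now passes because the import fails, and the
# annotation itself becomes the governance proof of the removal.
@pytest.mark.xfail(reason="SORT excised under DDD", strict=True)
def test_sort_is_gone():
    from appliance.commands import sort_command  # hypothetical dead module
    assert sort_command(["b", "a"]) == ["a", "b"]
```

If anyone reintroduces the dead code, the strict xfail flips to XPASS and fails the suite, so the guard is mechanical rather than procedural.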
AI agents produce code at 10–100× human velocity. Traditional quality
processes — code review, manual testing, gradual refactoring — don't scale.
Here's a toolkit of mechanical verification methods that do.
A pytest decorator and verification protocol that mechanically proves
a test catches the specific bug it claims to cover. Uses git worktrees
to run today's test against yesterday's code.
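We don't reproduce the decorator here, but the verification protocol it describes reduces to a command sequence like this sketch (repository paths and the revision arithmetic are illustrative); the test must fail against yesterday's tree to prove it catches the specific bug:

```python
import pathlib
import tempfile

def verification_plan(repo: str, test_file: str, fixed_rev: str = "HEAD"):
    """Commands to run today's test against the code from before the fix."""
    wt = pathlib.Path(tempfile.mkdtemp()) / "pre-fix"
    return [
        # Check out the parent of the fixing commit in a throwaway worktree.
        ["git", "-C", repo, "worktree", "add", str(wt), f"{fixed_rev}~1"],
        # Copy today's test into yesterday's tree.
        ["cp", test_file, str(wt / test_file)],
        # The test must FAIL here; a pass means it never caught the bug.
        ["python", "-m", "pytest", str(wt / test_file)],
        ["git", "-C", repo, "worktree", "remove", "--force", str(wt)],
    ]
```

Running the plan with subprocess and asserting a nonzero exit on the pytest step is the mechanical proof of coverage.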