Blog

Pages contain AI markup. Ask your agent to
HtmlRead('https://ruachtov.ai/blog/')

KAN Models Organize Thought — MLP Models Don't

We trained five transformer language models with different activation functions, then removed each layer one at a time. KAN models develop specialized layers with measurable three-phase processing (perception → cogitation → generation). MLP models distribute everything uniformly. The activation function determines whether a network thinks in organized stages or spreads computation diffusely across all layers in parallel.

Object-Oriented Agents: Escaping the Flat Tool Array

AI agents are suffocating under the weight of flat, global tool arrays. We invented Object-Oriented Programming for LLMs: dynamic vtables, stateful handles, and environmental contracts that eliminate context bloat and cognitive noise.
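One plausible reading of "stateful handles with dynamic vtables," sketched in Python: instead of one flat, global tool array, opening a resource returns a handle object whose currently-valid methods are the only tools the agent sees. The class names (`Workspace`, `FileHandle`) and the `tools()` method are hypothetical illustrations, not the article's actual API.

```python
class FileHandle:
    """Stateful handle: the agent's tool surface is this object's methods."""
    def __init__(self, path: str):
        self._path = path
        self._closed = False
        self._lines = ["alpha", "beta", "gamma"]  # stand-in for file contents

    def tools(self) -> list[str]:
        """Dynamic vtable: advertise only methods valid in the current state."""
        return [] if self._closed else ["read", "close"]

    def read(self) -> str:
        return "\n".join(self._lines)

    def close(self) -> None:
        self._closed = True


class Workspace:
    """Root object: opening a file yields a handle, not more global tools."""
    def open(self, path: str) -> FileHandle:
        return FileHandle(path)


ws = Workspace()
h = ws.open("notes.txt")
assert h.tools() == ["read", "close"]  # tools scoped to this handle
h.close()
assert h.tools() == []                 # closed handle advertises nothing
```

The context win is that each handle carries only the handful of operations its state permits, rather than every tool the system has ever registered.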

Hardening Redis by Dissolution

A retrospective on systematically removing unused code from Redis to produce a smaller, harder-to-attack appliance binary. Coverage-guided excision, command table dissolution, differential dead code analysis, and honest assessment of what was achieved and what remains.

Dissolving Redis

The procedural account of dissolving Redis module by module: Lua, Geo, Bitops, Sort, LOLWUT, Sentinel, String type, Cluster, Debug. A living article with a scoreboard tracking each excision.

Removing Lua from Redis

Six CVEs in four years, including a thirteen-year-old CVSS 10.0 use-after-free. We weren’t using Lua scripting, so we removed it from the binary entirely. EVAL fails because the code doesn’t exist, not because config disables it.

Redis Dissolution Methodology

The detailed technical write-up: DDD.1 procedure, per-test coverage tracing, command table dissolution, differential dead code analysis, the formal-vs-actual parameter distinction, and what the linker cannot see that gcov can.

Two Processes and a Firewall

A hardened FreeBSD appliance on GCE that runs exactly two processes: Redis as PID 1 and a governor as PID 2. No SSH, no shell, no init system. The governor halts the machine if an unauthorized process appears. Heterophysiology, immutable infrastructure, and credential injection from instance metadata.

Our Varlock Article Required Varlock

Our varlock article required varlock integration before we could publish it: our homebrew scanner couldn't distinguish remarks about the mere shape of secrets from actual secret values. Value-based scanning with varlock scan silenced the false positives.

Our First Varlock Migration

Step-by-step migration from a naked .env file to schema-validated, type-checked environment configuration with varlock. The first milestone on the road to AI-safe secrets — and a look at why configuration types need more texture than “string.”

Adding a CLI Language to the Shelf

Three agents sat down to design a new specification language for the RTPSL². They surveyed eleven CLI parsing libraries across seven languages, found unanimous consensus, and added a CLI spec language to the collective’s shelf of program specification DSLs.

Machine-Class <span class="m"> Elements as Primitives for the Semantic Web

Tim Berners-Lee's 1989 proposal envisioned typed links — hyperlinks annotated with meaning. The web we got stripped that out. Thirty-seven years later, AI agents are finally an audience capable of maintaining those annotations at scale. Our HTML watermarkups are a practical solution for ⟨ AI ⟷ AI ⟩ media.

Injectable Clocks and Deterministic Time

Time-dependent code is hard to test. We inject clocks as constructor parameters — each component gets its own reference frame. In production, they read wall time. In tests, time only moves when you say so. Test suite runtime dropped 27%.
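The pattern the teaser describes can be sketched in a few lines of Python. The `Clock` interface, `FakeClock`, and the `RateLimiter` example component are illustrative names, not the article's code; the point is that the component takes its clock as a constructor parameter and never calls `time.time()` directly.

```python
from abc import ABC, abstractmethod
import time


class Clock(ABC):
    """Minimal clock interface; components depend on this, not on time.time()."""
    @abstractmethod
    def now(self) -> float: ...


class WallClock(Clock):
    """Production clock: reads real wall time."""
    def now(self) -> float:
        return time.time()


class FakeClock(Clock):
    """Test clock: time only moves when advance() is called."""
    def __init__(self, start: float = 0.0):
        self._t = start

    def now(self) -> float:
        return self._t

    def advance(self, seconds: float) -> None:
        self._t += seconds


class RateLimiter:
    """Example component: receives its reference frame via the constructor."""
    def __init__(self, clock: Clock, min_interval: float):
        self._clock = clock
        self._min_interval = min_interval
        self._last = float("-inf")

    def allow(self) -> bool:
        t = self._clock.now()
        if t - self._last >= self._min_interval:
            self._last = t
            return True
        return False


# In tests, nothing sleeps: ten seconds pass instantly.
clock = FakeClock()
limiter = RateLimiter(clock, min_interval=10.0)
assert limiter.allow()        # first call passes
assert not limiter.allow()    # too soon
clock.advance(10.0)
assert limiter.allow()        # window elapsed without waiting
```

Production wires in `WallClock()`; the test suite never blocks on real time, which is where runtime savings like the 27% figure come from.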

The Polar-Coordinate Neuron

A novel neural network output architecture co-invented by a human and an AI instance. Uses unit circle geometry to enforce complementary outputs by mathematical construction, not learned correlation. The radius encodes confidence. Neither inventor would have found it alone.
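The article doesn't spell out the construction, but one way to realize "complementary outputs enforced by unit circle geometry" is to map a raw 2-D activation to an angle and emit cos²θ and sin²θ, which sum to 1 identically, while the radius is left free to carry confidence. The function name and this exact mapping are assumptions for illustration.

```python
import math


def polar_head(x: float, y: float) -> tuple[float, float, float]:
    """Hypothetical sketch: map a raw 2-D activation (x, y) onto the unit
    circle.  Returns (p, q, r) where p + q == 1 by construction
    (cos^2 + sin^2 = 1) and r, the radius, can encode confidence."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    p = math.cos(theta) ** 2
    q = math.sin(theta) ** 2
    return p, q, r


p, q, r = polar_head(0.3, -1.2)
assert abs(p + q - 1.0) < 1e-12  # complementarity is structural, not learned
```

The appeal of such a head is that no loss term or learned correlation is needed to keep the two outputs complementary; the identity does it.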

Context Hub — Innovative API Documentation for AI Agents

LLMs have training cutoffs. APIs change daily. Context Hub is a curated knowledge layer that gives AI agents access to operational truth — the undocumented behaviors, edge cases, and hard-won lessons that exist nowhere in their training data. Plus: the full toolkit with @fixed_by and git-mcp.

Quality Control at AI Velocity

AI agents produce code at 10–100× human velocity. Traditional quality processes — code review, manual testing, gradual refactoring — don't scale. Here's a toolkit of mechanical verification methods that do.