Three connected thoughts that keep sharpening.
Against the grain. We’re bolting LLMs onto decades of human-oriented software process. Style guides, reviewer agents, architecture prompts. It feels like teaching, but LLMs don’t learn. Every conversation is a cold start. You’re not building understanding; you’re performing it on repeat. Lossy compression of something that resists compression. When you find yourself banging your head against the wall, step back. This is not the way.
Source and object. If the LLM is the compiler, code is object code. What’s the new source? Something upstream of syntax – mental models, specs, constraints, intent. The things you already care about but currently express through code.
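What would that source look like as an artifact? A minimal sketch, assuming intent is written as data a machine can check rather than code a human must read. Every name here (Spec, Transfer, check) is hypothetical, not an existing API:

```typescript
// Hypothetical sketch: intent as data the machine can check, not code it must read.
// The spec is the "source"; any generated implementation is "object code".

type Transfer = { from: string; to: string; amount: number };
type Ledger = Map<string, number>;

interface Spec<S, I> {
  name: string;
  invariants: Array<(before: S, input: I, after: S) => boolean>;
}

function sum(xs: Iterable<number>): number {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}

const transferSpec: Spec<Ledger, Transfer> = {
  name: "transfer preserves total balance and never overdraws",
  invariants: [
    // Conservation: money moves, it is never created or destroyed.
    (before, _t, after) => sum(before.values()) === sum(after.values()),
    // No overdraft: the source account never goes negative.
    (_before, t, after) => (after.get(t.from) ?? 0) >= 0,
  ],
};

// Any candidate implementation, human- or machine-written, is checked the same way.
function check<S, I>(spec: Spec<S, I>, before: S, input: I, after: S): boolean {
  return spec.invariants.every((inv) => inv(before, input, after));
}
```

The spec survives regeneration. Implementations are disposable; the invariants are not.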
Determinism unlocks descent. Specs and validation are a loss function – complexity, duplication, performance, correctness. LLMs solve problems the way they were trained: iteration, descent. This works when the gradient is clean. Nondeterminism is noise – flaky tests, environment state, race conditions. Enough noise and descent becomes a random walk. The unlock isn’t smarter models. It’s making the environment simulable. WALs, replay, pure functions, hermetic state. Reduce the noise to zero and let the machine grind.
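A sketch of the loss framing under these assumptions: checks fold into a single scalar, and the candidate runs inside a hermetic environment with a seeded PRNG and a virtual clock, so the same candidate always scores the same. The types and weights are illustrative; mulberry32 is a standard tiny deterministic PRNG:

```typescript
// Hypothetical sketch: validation folded into one scalar loss, evaluated
// inside a hermetic environment. Same seed, same candidate, same loss:
// the gradient stays clean, so iteration is descent, not a random walk.

interface Env {
  random: () => number; // seeded PRNG, never Math.random()
  now: () => number;    // virtual clock, never Date.now()
}

type Result = { wrongOutputs: number; duplication: number; latencyMs: number };
type Candidate = { run: (env: Env) => Result };

// mulberry32: a tiny deterministic PRNG; the seed fully determines the stream.
function seededRandom(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function hermeticEnv(seed: number): Env {
  let tick = 0;
  return { random: seededRandom(seed), now: () => tick++ };
}

// Correctness dominates the loss; duplication and performance are lower-order terms.
function loss(r: Result): number {
  return 1000 * r.wrongOutputs + 10 * r.duplication + r.latencyMs;
}

// The grind: propose a change, rescore under the same seed, keep strict improvements.
function descend(
  initial: Candidate,
  propose: (c: Candidate) => Candidate,
  steps: number,
): Candidate {
  let best = initial;
  let bestLoss = loss(best.run(hermeticEnv(42)));
  for (let i = 0; i < steps; i++) {
    const next = propose(best);
    const nextLoss = loss(next.run(hermeticEnv(42))); // same seed: runs are comparable
    if (nextLoss < bestLoss) {
      best = next;
      bestLoss = nextLoss;
    }
  }
  return best;
}
```

With the environment hermetic, descend is a pure function of its inputs. The loss landscape holds still while the machine grinds.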
Practical implications: organize code into deterministic runtimes. Inject nondeterminism only at boundaries. Separate rendering from internal state (UIs are hard for machines to introspect; internal state as data is easily verifiable). Follow TigerBeetle’s determinism principles – simulation and replay. Let machines load bugs and iterate to a solution in a provable manner.
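All four implications compressed into one hedged sketch; this mimics the spirit of those principles, not any particular system’s API:

```typescript
// Hypothetical sketch: a pure state machine, a write-ahead event log as the
// source of truth, replay for bug reproduction, rendering as a pure function
// of state, and the real clock entering only at the boundary.

type State = { count: number; lastEventAt: number };
type Event =
  | { kind: "increment"; at: number } // `at` is captured at the boundary
  | { kind: "reset"; at: number };

const initial: State = { count: 0, lastEventAt: 0 };

// Pure transition: same state + same event => same next state, every time.
function step(state: State, event: Event): State {
  switch (event.kind) {
    case "increment":
      return { count: state.count + 1, lastEventAt: event.at };
    case "reset":
      return { count: 0, lastEventAt: event.at };
  }
}

// WAL discipline: persist the event first, derive state from the log.
const log: Event[] = [];
function apply(state: State, event: Event): State {
  log.push(event);
  return step(state, event);
}

// Replay: feed a bug report's log back in and reproduce the exact state.
function replay(events: Event[]): State {
  return events.reduce(step, initial);
}

// Rendering lives outside the core. Machines verify State as plain data;
// humans look at the rendered view.
function render(state: State): string {
  return `count=${state.count} (last event at t=${state.lastEventAt})`;
}

// Nondeterminism (the real clock) enters only here, at the boundary.
let state = initial;
state = apply(state, { kind: "increment", at: Date.now() });
console.log(render(state), replay(log).count === state.count);
```

The design choice that matters: step never reads a clock or a PRNG, so any state the system can reach, a replay can reach again.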
Don’t look at code. Look at loss.