Newsletter #1: Up and Stumbling

Owen

Hello friends, welcome to my first newsletter. Here’s where I’ll keep you apprised of what I’ve been up to – and thinking about. I’m writing this for a couple of reasons: to keep myself accountable to writing and to make sure my friends and peers (you) still keep me top of mind.

It’s been almost two months since I left Grafana. I planned to incorporate a new company in January (and did, more on that later) to avoid any tax headaches for 2025. This created a nice delineation: December for visiting friends and family, plus a bit of tinkering; January for getting the ball rolling.

Before you dive in

I’ll aim to write these with some regularity. A few asks to keep in mind as you read:

Where my head is at

Most of all, I’m trying to separate stimulus from response: disentangling myself from the firefighting and unrelenting Slack notifications. I’ve known I’ve needed this for a while, and I want to find my mojo again. My best inspirations come in the quiet, empty breaks in time. So for a while, I’ll give myself that.

Swinging at every pitch is nearly as bad as never swinging, though. I do not intend to be lazy, but restrained.

Timeline

To keep this from getting too long, here’s a glance at what kept me busy these months; then I’ll dive into some themes:

December:

January:

Incorporation

This one felt great. I’ve always wanted to start my own company, and now I have. After some research and whatnot, I’ve chosen a C Corp in Delaware. I’ve taken care of some tax optimizations too: 83(b) election + QSBS (Qualified Small Business Stock) prep.

I did this all via Stripe Atlas, Stripe’s product for making startup incorporation easy. I’d been planning to use it, having followed them for years, and it did not disappoint. Incorporating via Atlas has been great: both straightforward and enlightening. Cannot recommend it highly enough.

I started incorporation two weeks ago and my 83(b) was filed with the IRS yesterday. Banking via Mercury has been up for about a week as well. There are some other things here and there: Google Workspace (you can now find me at owen@moonheron.com!), cloud credits (I got a few grand from AWS via their Atlas partnership), etc.

Re: MoonHeron: I needed a name, ideally one that’s a bit memorable with an open domain. Nothing deep here; I’ve always loved herons and the moon. I also got some pretty nice artwork for the homepage.


Technical workflow stuff ahead — skip to “A pocket data analyst” if you’re here for a product.

Retooling

A combination of free time and software development’s changing landscape made this a natural moment to reexamine my tools. I tend to do this every few years as things change and I want to try new things. Here are some new additions:

Writing code today is so different. I almost never use Cursor anymore; I suspect it’s a stepping stone, not a destination. Their feature releases usually involve workflows that already exist in terminal-based tools like Claude Code and Codex, just with a new GUI to learn. I’m using Claude Code exclusively these days, with some bells and whistles on top.

Less is More: Focusing on ‘Token Density’

I recently moved away from a documentation-heavy approach, where I had built a multi-folder style guide for my LLMs: tooling choices, architectural patterns, glossaries. These were much better than nothing, but I’ve found that the more instruction you give, the less fidelity you get to any single rule. So I recently rewrote all my supporting agent documentation to favor succinct, illustrative points. I think of this as “token density”. (My own docs are similar to tigerstyle.dev, the style guide of the TigerBeetle project.)
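
For a flavor of what “token dense” means here (this is an invented illustration, not my actual guide), the shift was from paragraphs of prose to one rule per line, each anchored by a concrete example:

```
## Errors
- Raise domain-specific exceptions at boundaries; never `except Exception: pass`.
- Pick one: log the error or re-raise it, never both.

## Naming
- Functions are verbs (`load_config`); data is nouns (`config`).
```

Short, unambiguous rules like these seem to survive long contexts far better than essays do.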

Roleplaying: Agent Orchestration

I wanted to optimize my workflow to parallelize LLM output and minimize bottlenecks — both the number of times I need to intervene and the delay when I do (especially as I bounce between Claude sessions).

The old approach: collaboratively plan a feature with Claude, let it build, then repeatedly remind it of the same things: “keep in mind X”, “ensure tests pass”, “can we simplify control flow”. Every repetition was a break in throughput, especially when the next step was trivial.

So I started writing specialized agents with isolated responsibilities. This let me parallelize them and invoke each one for a single set of principles. This part was not novel: AI coding tools all have specialized agents. But I didn’t want to keep nudging them along with the same patterns: ‘design <x>’, ‘review the design in light of our style guide’, ‘keep addressing reviewer feedback until you are both happy, then show me’, etc. The next step was having something orchestrate them the way I would — so I modeled how I think about problems, wrapped it in a state machine, and exposed it as a system prompt. Now Claude loops over itself until it needs my review. Combined with the simpler style guide, this gave me fewer bottlenecks and more efficient execution. Here’s roughly what it looks like:

Workflow:

                 ┌───────────┐
            ┌───►│  designer │◄───────────────────────────────┐
            │    └─────┬─────┘                                │
            │          │                                      │
            │          ▼                                      │
            │    ┌────────────┐                               │
     revise │    │standardizer│                               │ rethink
            │    │  reviewer  │                               │
            │    └─────┬──────┘                               │
            │          │                                      │
            └──────────┤ issues?                              │
                       │                                      │
                       │ approved                             │
                       ▼                                      │
                 ┌───────────┐                                │
            ┌───►│implementer│◄───────────┐                   │
            │    └─────┬─────┘            │                   │
            │          │                  │                   │
            │          ▼                  │ simplified        │
            │    ┌────────────┐           │                   │
       fix  │    │  verifier  │           │                   │
            │    │standardizer│           │                   │
            │    │  reviewer  │           │                   │
            │    └─────┬──────┘           │                   │
            │          │                  │                   │
            └──────────┤ issues?          │                   │
                       │                  │                   │
                       │ complex?         │                   │
                       ▼                  │                   │
                 ┌───────────┐            │                   │
                 │ simplifier├────────────┴───────────────────┘
                 └─────┬─────┘
                       │ clean
                       ▼
                     done

I usually run this ‘orchestrator’ version of Claude for longer tasks and when I’m switching back and forth between sessions. It still has its issues (namely, it stops following this state machine after a while), but it’s useful. Note: I’m working on a more robust version to address some of these concerns, but that’ll be in the next letter.
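
If it helps to see the shape of the thing, here’s a toy sketch of that loop as a state machine in Python. The state names and verdict strings mirror the diagram; everything else is illustrative (the real version is a system prompt driving Claude subagents, so the `agents` callables here are stand-ins):

```python
# Toy sketch of the orchestration loop as a state machine.
# Each "agent" is a stand-in callable that returns a verdict string;
# in practice each state would invoke a specialized Claude agent.

def run_workflow(agents, max_steps=20):
    """Drive the design -> review -> implement -> verify -> simplify loop.

    `agents` maps state names to callables; each call returns a verdict
    that selects the next state via the transition table below.
    """
    transitions = {
        ("design", "done"): "review",
        ("review", "issues"): "design",        # revise
        ("review", "approved"): "implement",
        ("implement", "done"): "verify",
        ("verify", "issues"): "implement",     # fix
        ("verify", "complex"): "simplify",
        ("simplify", "simplified"): "implement",
        ("simplify", "rethink"): "design",
        ("simplify", "clean"): "done",
    }
    state = "design"
    history = [state]
    for _ in range(max_steps):               # cap runaway loops
        if state == "done":
            break
        verdict = agents[state]()            # ask the agent for its verdict
        state = transitions[(state, verdict)]
        history.append(state)
    return history
```

The point of encoding it this way is that Claude only stops at the states I care about reviewing; everything else loops without me.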

A pocket data analyst

Talking with some friends outside my sphere helped me realize we need to make AI analysis more quantitative and programmable.

First, I was talking to a friend of mine in DC who works for a communications strategy consultancy. He had a problem where they’d accumulate proprietary, unstructured datasets, but continually analyzing them wasn’t cost effective. It sounded like he wanted ad-hoc ETL (extract, transform, load) pipelines for unstructured data.

Later that week I met up with the founder of a previous startup I worked for. He was using Claude Code to build one-off business intelligence experiments for his job, collecting arbitrary datasets, then quantifying and visualizing them. While the workflows were ad-hoc (a new codebase for each experiment), AI had dramatically reduced the barrier to analyzing his own business.

I spin up Claude research plans daily to familiarize myself with anything I can think of: from researching companies to collecting restaurant lists (in Mexico City this week!). I started thinking about bringing AI’s analytical capabilities to more structured, longer-running tasks, and from there, making them programmable.

I expect this trend will be beneficial everywhere, but more pronounced for small and medium businesses, which don’t have data analytics teams on tap.

Claude/ChatGPT already support research modes, but there’s a big gap between that and the features users would need to actually employ them as quantitative analysts. I think they’d need:

I’ve been putting together the foundation for this — ‘pocket data analyst’ is the placeholder text on moonheron.com.

One of the harder problems: the data layer. If you want to parallelize LLM-based data processing, what would it look like? LLM computation is unpredictable; it winds and folds back on itself, more a graph with cycles than an acyclic pipeline. You’d need:

I expect these workloads to become parallelized and predictable, and it’s no surprise the agent infrastructure layer is where so many companies seem to be spending their effort.
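
As a toy illustration of why cycles change the scheduling story (step names are made up; none of this is MoonHeron code): a scheduler can run steps in parallel waves only until it hits a cycle, at which point an agent has to iterate rather than run a one-shot pass.

```python
# Group pipeline steps into parallel waves; any step left over sits
# on a cycle and needs iterative (agent-driven) handling instead.
from collections import defaultdict

def plan_waves(edges, nodes):
    """edges: list of (upstream, downstream) dependencies.

    Returns (waves, cyclic_nodes): waves of steps whose dependencies
    are satisfied, then the steps stranded on cycles.
    """
    indegree = {n: 0 for n in nodes}
    downstream = defaultdict(list)
    for up, down in edges:
        downstream[up].append(down)
        indegree[down] += 1

    waves = []
    ready = sorted(n for n in nodes if indegree[n] == 0)
    while ready:
        waves.append(ready)                  # everything here runs in parallel
        next_ready = []
        for n in ready:
            for d in downstream[n]:
                indegree[d] -= 1
                if indegree[d] == 0:
                    next_ready.append(d)
        ready = sorted(next_ready)
    cyclic = sorted(n for n in nodes if indegree[n] > 0)
    return waves, cyclic
```

A plain DAG scheduler drains every node; once a `score -> review -> score` loop appears, the leftover nodes are exactly the part that can’t be planned up front.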

Agent Infrastructure

More technical research ahead — skip to “What I’m reading” if you don’t want the deep dive.

The switch to asynchronous, unbounded computation with arbitrary connections between nodes is challenging and novel to me. It’s forcing me to rethink a lot from first principles. I think there’s going to be a huge amount of value in new infrastructure projects that can sensibly seize this opportunity. Here are a couple that I think have interesting applications in the post-agent world:

Catching up with databases again

I said I’d never build another database, and I held that conviction for a month and a half. Now, I’m not so sure. Time away from everything allowed me not to force it, and a couple weeks ago I started researching the new approaches projects are taking. I swear I blinked and three days had passed. By and large I’m seeing object stores used to decouple storage and compute, plus embedded operating models. Here are a few favorites:

Thinking about a new streaming data layer

A compelling question:

‘What would streaming data look like in the modern era, from first principles, if Kafka didn’t exist?’

There are a few players in the new-age streaming space, including WarpStream, Bufstream, and the YC-backed startup s2.dev. They all do some really great things:

These all have great ideas, and I started designing a project to combine them: object-storage backed, RF1 (replication factor 1), schema-optional, lightweight streams, avoiding the Kafka API, writing to Iceberg/Parquet, plus a little BYOC (bring your own cloud) on top. Eventually, I started having some doubts: who is the target user, the one who helps build traction? How was it materially differentiated from s2? I did a little research into these companies, and their funding/revenue trajectories didn’t seem compelling, which soured me on the idea. I was putting this together because it was fun, not due to a real need.
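
To make the “object-storage backed, RF1” idea concrete, here’s a toy sketch of the core shape: each append becomes one immutable object keyed by its base offset, and readers replay by listing keys in order. A dict stands in for the object store (S3 and friends); names and layout are invented for illustration and are not how s2 or WarpStream actually work.

```python
# Toy object-storage-backed stream with RF1 semantics: durability comes
# entirely from the object store's own replication, not from brokers.
import json

class ObjectStream:
    def __init__(self, store, name):
        self.store = store        # dict standing in for an object store
        self.name = name
        self.next_offset = 0

    def append(self, records):
        """Write one batch as a single immutable object keyed by base offset."""
        base = self.next_offset
        key = f"{self.name}/{base:020d}.json"   # zero-pad so keys sort by offset
        self.store[key] = json.dumps(records)
        self.next_offset = base + len(records)
        return base

    def read(self, start=0):
        """Replay records from `start` by listing objects in key order."""
        out = []
        prefix = self.name + "/"
        for key in sorted(k for k in self.store if k.startswith(prefix)):
            base = int(key.rsplit("/", 1)[1].split(".")[0])
            for i, rec in enumerate(json.loads(self.store[key])):
                if base + i >= start:
                    out.append(rec)
        return out
```

Even this toy shows why the model is appealing: there is no broker state to replicate, and “a stream” is just a key prefix, so spinning up thousands of lightweight streams costs nothing.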

Time to get out of the weeds, but I’d like to know what you think about this space.

What I’m reading


That’s all for now. I’m still idea hopping and trying to follow my nose to the intersection of what’s interesting to me and valuable to everyone.

One reflection: posting is painful. I need to stay on that horse — publishing, tweeting, meeting people.

If you made it this far, don’t forget my asks at the top — especially intros and a follow on Twitter.

Till next time, Owen