<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.2.2">Jekyll</generator><link href="https://inze.ph/feed.xml" rel="self" type="application/atom+xml" /><link href="https://inze.ph/" rel="alternate" type="text/html" /><updated>2026-04-06T02:31:09+00:00</updated><id>https://inze.ph/feed.xml</id><title type="html">inze.ph</title><entry><title type="html">Moving Day</title><link href="https://inze.ph/writing/moving-day" rel="alternate" type="text/html" title="Moving Day" /><published>2026-04-05T00:00:00+00:00</published><updated>2026-04-05T00:00:00+00:00</updated><id>https://inze.ph/writing/moving-day</id><content type="html" xml:base="https://inze.ph/writing/moving-day"><![CDATA[<p>This blog has been on GitHub Pages for over a decade. Push to main, site updates, done. I never had to think about it, and that was the point.</p>

<p>I recently moved it to a little cloud server that I run myself. That was also simple. What wasn’t simple was getting to the point where I was willing to do it.</p>

<h2 id="the-dark-forest">The dark forest</h2>

<p>When I was in college, Facebook showed up as thefacebook, sophomore year or so. It seemed like it genuinely cared about connecting people, and the product was <em>fun</em>. Gmail was insanely fast and useful, and Google seemed like it truly wanted to give amazing things away for free, funded by ads that were actually relevant. “Don’t be evil.” The pitches seemed earnest, and the tech backed them up. It all felt so innocent, so optimistic.</p>

<p>I graduated into the open source world with the same energy. Open protocols, open standards, shared infrastructure. The internet was this beautiful thing that made it possible for anyone to build anything and put it in front of the world. I was a true believer, and I’d built my early career on that belief.</p>

<p>Then it all soured, from every direction, in what felt like a few years. Facebook discovered that outrage drives engagement and optimized for it. In a move that would be too on-the-nose if you wrote it in a novel, Google quietly dropped “Don’t be evil.” And in 2013, Snowden showed us what had been hiding underneath all of it: the open protocols I loved were being tapped, the networks I’d trusted were instrumented for mass surveillance.</p>

<p>What really hurt: the openness was the attack surface. Our idealism left us open to exploitation.</p>

<p>That messed me up. You can’t unsee it, and you can’t go back to building on something the same way once you’ve seen the rot in the foundations.</p>

<p>Over time my politics radicalized. Not all at once. I kept believing in specific companies, specific founders, specific missions. And I kept getting my heart broken. The more time I spent in the industry, the more I saw how it worked from the inside. Good people can try to build good things. But company values and mission statements, when tested against profits, almost never hold up. They dilute, become just air. There’s a gravity that pulls every company the same direction: exude enough honesty to attract talent, exploit it, generate returns, cut bait when things get too complicated. The system makes it almost impossible for things to stay good once the ideals of the people come into conflict with the ideals of capital, and the deeper you get into it the more it taints you.</p>

<p>And the same profit-maximization dynamics made the social web meaner. Engagement meant outrage, outrage meant attention, attention meant revenue. The pile-on became a genre. I watched people say something clumsy or half-formed and get absolutely shredded for it, and I learned the lesson everyone was supposed to learn: be careful. Think twice before you post. Maybe just don’t. The risk/reward math on sharing your thoughts in public started to look pretty bad.</p>

<p>There’s a selection bias when the public web gets this hostile. The voices that punch through tend to belong to people whose desire to be seen outstrips their risk calculus, or people who thrive on conflict. You look at who’s still talking and it doesn’t exactly inspire you to join in. And then the reasons stack up: anything I put out there is either an ego trip, or it gets torn apart, or it gets hoovered up by a corporation for profit. With all three of those on the table, of course you don’t want to say anything. The reasons not to are so varied and so well-founded that silence starts to feel like the only rational choice.</p>

<p>You can see it in this blog’s own history. In 2013 and 2014 I wrote earnest little “share what I’ve learned” posts, including a poetry-adjacent love letter to git commits. Then silence. Six years of it. I came back in 2020 because I got leukemia and that felt important enough to write about. Then mostly quiet again.</p>

<p>There’s a name for this that’s been kicked around a lot lately. Yancey Strickler called it the <a href="https://www.ystrickler.com/the-dark-forest-theory-of-the-internet/">dark forest theory of the internet</a>, borrowing from <a href="https://en.wikipedia.org/wiki/The_Dark_Forest">Liu Cixin’s sci-fi</a>: the forest is dark, and you stay quiet because the predators are listening. People retreat to newsletters, group chats, private accounts. The public web is for the brave or the reckless.</p>

<p>I was in the dark forest for a long time, in both senses. Technically: don’t run a server on the internet and you don’t have to worry about defending it. Socially: don’t put your thoughts in public and nobody can pick them apart. Both impulses pointed the same direction: stay behind the wall. And I did, for a long time. It’s safe in there. It’s also kind of lonely.</p>

<h2 id="stepping-out">Stepping out</h2>

<p>I’ve been writing more lately. Between the dark forest and my own self-doubt, I’d internalized the idea that wanting attention is vanity, and I believed it for a long time. Eventually I realized I wasn’t craving an audience; I was craving connection. And you can’t find community if you’re hiding.</p>

<p>Once I started writing again, I wanted stats. I used to think analytics were only good for number-go-up dopamine, but they’re also about figuring out what resonates, who’s referring people your way, who you should get to know. And I wanted those analytics to be mine, not Google’s or Microsoft’s, which meant self-hosting, which meant a server on the open internet.</p>

<p>I work on <a href="https://miren.dev">Miren</a>, a deployment platform. I’d been running it on my homelab for a while, all behind Tailscale, all invisible. At some point the contradiction got hard to ignore: I’m building a tool for putting things on the internet, and I won’t put things on the internet. It was time to admit that a little VPS running a blog is not a high-value target. The bots will knock on port 22 and move on.</p>

<p>And there was a practical reason too: using Miren on the open internet is different from using it on a homelab behind a VPN. The tech industry calls this “dogfooding,” which never quite captures the right feeling. It’s more like being a baker who eats their own cake. You get to improve your recipes for the customers tomorrow, and you get to eat cake. …yum.</p>

<p>So I rented a small server from <a href="https://www.hetzner.com/cloud/">Hetzner</a> in Helsinki for three euros a month. I name my servers after cryptids, and this one is <a href="https://en.wikipedia.org/wiki/Selkie">selkie</a>, a shape-shifting seal from Scottish folklore. Felt right for a small creature bobbing along on the open sea.</p>

<p>The setup was easy. I would know; I helped make it that way. It also broke in a few interesting ways (I also helped make it that way, lol). More vanilla in the frosting next batch. I also decided to move the URL from <code class="language-plaintext highlighter-rouge">phinze.com</code> to <a href="https://inze.ph"><code class="language-plaintext highlighter-rouge">inze.ph</code></a>, something a little weirder and more playful. Old links still work, cuz <a href="https://www.w3.org/Provider/Style/URI">cool URIs don’t change</a>.</p>
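
<p>Keeping those old links alive is just a permanent redirect at the old domain. A minimal sketch, assuming a Caddy server sits in front (the actual setup may differ):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># hypothetical Caddyfile: send every old URL to the same path on the new domain
phinze.com, www.phinze.com {
    redir https://inze.ph{uri} permanent
}
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">permanent</code> status (HTTP 308) tells crawlers and browsers to update their references, so the old links keep working without anyone maintaining two copies of the site.</p>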

<p>This site comes from a place now. Heavily virtualized though it may be, it’s a computer in Helsinki. It has a name. Its IPv4 address is <code class="language-plaintext highlighter-rouge">204.168.143.238</code>. You are communicating with it right now.</p>

<p>Despite my only-somewhat-rational fears, ole selkie (young selkie, really) has not been hacked or DDoSed yet. It has plenty of headroom. There’s room for more.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[This blog has been on GitHub Pages for over a decade. Push to main, site updates, done. I never had to think about it, and that was the point.]]></summary></entry><entry><title type="html">Build the Software, Use the Software</title><link href="https://inze.ph/writing/build-use-loops" rel="alternate" type="text/html" title="Build the Software, Use the Software" /><published>2026-03-28T00:00:00+00:00</published><updated>2026-03-28T00:00:00+00:00</updated><id>https://inze.ph/writing/build-use-loops</id><content type="html" xml:base="https://inze.ph/writing/build-use-loops"><![CDATA[<p><em>This is a draft — feedback welcome! Last updated 2026-03-29. <a href="https://github.com/phinze/phinze.github.io/commits/main/_posts/2026-03-28-build-use-loops.md">Edit history on GitHub.</a></em></p>

<p>Every nerd wants a Jarvis. A personal AI that knows your stuff, handles your tasks, extends what you can do. The dream has been kicking around since long before the tech was anywhere close.</p>

<p>The tech is getting close. And I’ve been having a lot of fun exploring what it feels like to actually build toward this, one small tool at a time.</p>

<p>I keep coming back to a loop: build the software, use the software. Using it shows you what to build next. The loop tightens until building and using blur together into the same activity. No roadmap. No backlog. Just friction driving features, in real time, in the same session.</p>

<h2 id="what-it-looks-like">What It Looks Like</h2>

<p>I have a CLI called <code class="language-plaintext highlighter-rouge">pim</code> that handles my email, calendar, and docs from the terminal. It started as a way to do email triage in Claude Code sessions. Forty-some commands now, across Gmail, Fastmail, Google Calendar, Drive.</p>

<p>One session I was trying to export an email thread and noticed it only pulled 2 of the 6 messages. Mid-conversation:</p>

<blockquote>
  <p><strong>me:</strong> 2 messages hm there were 6 in that thread, but we might want to figure out if we have a bug in thread read</p>

  <p><strong>me:</strong> do we have/need a ‘read-thread’ primitive to fetch the content you can see in a given gmail read view?</p>

  <p><strong>me:</strong> yeah let’s build it</p>
</blockquote>

<p>Three minutes later, <code class="language-plaintext highlighter-rouge">read-thread</code> exists for both Gmail and Fastmail. We test it on the original thread — all 6 messages. I go back to writing my reply, which surfaces another gap (doc-import doesn’t handle markdown well), and we build that too.</p>

<p>I wasn’t planning to build <code class="language-plaintext highlighter-rouge">read-thread</code> that day. I was trying to reply to an email. The feature emerged because I was using the tool and it couldn’t do what I needed. That’s the loop.</p>

<h2 id="radiohead-day">Radiohead Day</h2>

<p>The music one is my favorite because it shows the whole arc.</p>

<p>It didn’t start as a software project. It started as “help me pick some music.” Just vibes in a Claude session — “something beepy boopy but not super high energy” — and Claude would suggest artists. That worked. Then: “can you spin it for me?” Well, now we need something that can control playback. “Can you inspect my library to make recommendations?” Now we need an API integration. “I also listen to DJ mixes on YouTube, can you recommend those?” “Can you play those for me?”</p>

<p>Each question pushed past what conversation alone could handle, and a tool got built at the boundary. That’s how <code class="language-plaintext highlighter-rouge">yt-dj</code> happened — a menubar music player wrapping mpv and Apple Music, grown out of a series of “can you also…” moments.</p>

<p>One session I wanted to listen to Radiohead’s whole discography backwards, newest to oldest:</p>

<blockquote>
  <p><strong>me:</strong> let’s do a radiohead day, queue their whole discography backwards, so we’ll listen to newest album first</p>
</blockquote>

<p>Nine albums get queued. Then playback breaks. We dig into the daemon code, find two bugs — no playback state observer, queue entries not fully loading metadata. We design a verification loop that polls for metadata resolution instead of blind sleeping. Build it, restart the daemon, re-queue all nine albums.</p>
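
<p>That verification loop is a generic pattern: poll for a condition with a deadline instead of sleeping and hoping. A minimal sketch in Python (hypothetical names, not the actual daemon code):</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import time

def wait_for_metadata(fetch, timeout=10.0, interval=0.25):
    """Poll until fetch() returns resolved metadata, instead of a blind sleep."""
    deadline = time.monotonic() + timeout
    while time.monotonic() &lt; deadline:
        meta = fetch()
        if meta is not None:
            return meta          # metadata resolved; safe to proceed
        time.sleep(interval)     # brief pause, then check again
    raise TimeoutError("queue metadata never resolved")
</code></pre></div></div>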

<p>Then:</p>

<blockquote>
  <p><strong>me:</strong> okay can you uh skip to the track we were on before</p>

  <p><strong>claude:</strong> I’m not sure how far you got before it stopped — do you remember which track or roughly how far into A Moon Shaped Pool you were?</p>

  <p><strong>me:</strong> “it’s too late the damage is done” was the lyric</p>

  <p><strong>claude:</strong> That’s Decks Dark — track 3.</p>
</blockquote>

<p>That’s one continuous conversation. Queue the discography. Debug the player when it breaks. Redesign the queue system. “Now something higher energy for the afternoon.” Skip to a song by quoting a lyric. Fix a bug. Find a mix. You can’t separate the building from the using because they’re not separate activities. It’s all one thing — an assistant that’s improving itself as you lean on it.</p>

<h2 id="the-feed-reader">The Feed Reader</h2>

<p>I built a unified feed reader for Bluesky and Mastodon. Go backend, React frontend, SQLite cache. The first month of development is fifty-some Claude Code sessions, and the pattern is the same every time.</p>

<p>Deploy the feed reader. Start reading. Notice something:</p>

<blockquote>
  <p>“I see repeats in the feed”</p>
</blockquote>

<p>Fix it. Keep reading.</p>

<blockquote>
  <p>“this context page doesn’t load”</p>
</blockquote>

<p>Fix it.</p>

<blockquote>
  <p>“when I click back it doesn’t remember my scroll position”</p>
</blockquote>

<p>Fix it.</p>

<blockquote>
  <p>“we show images but not videos, let’s fix that”</p>
</blockquote>

<p>Build video support. One day I see a Mastodon quote post that doesn’t render right:</p>

<blockquote>
  <p>“Masto seems to have a quote post format… can we render that?”</p>
</blockquote>

<p>That conversation ends with us building an entire Go Mastodon client library. Another day, I’m reading an epic Bluesky thread and want to understand the full argument:</p>

<blockquote>
  <p>“is there a way we can get a full dump of all the activity on it, get some stats, and wrap our head around the argument?”</p>
</blockquote>

<p>That spawns the entire thread analysis feature. And a week later, on my phone:</p>

<blockquote>
  <p>“i cant access the left hand menu on phone view, do we need to hamburgerize it?”</p>
</blockquote>

<p>Every feature traces back to a moment of actually using the thing. I never sat down and wrote a roadmap. The roadmap was using the software.</p>

<h2 id="houseplant-programming">Houseplant Programming</h2>

<p>Hannah Ilea has a lovely framing for this kind of work: <a href="https://hannahilea.com/blog/houseplant-programming/">houseplant programming</a>. Tiny software, just for yourself. It doesn’t need to scale. It doesn’t need to be production-ready. It just needs to be alive enough to be useful to you.</p>

<p>At work we’ve been running with a related idea — <a href="https://miren.dev/blog/garden-server">the garden server</a>. A little cluster where internal apps grow. Sixteen applications now, from meeting bots to design docs to emoji search. The key insight there was that having a place for things to live made it way easier to build them. The plant survives when there’s a place for it.</p>

<p>My <code class="language-plaintext highlighter-rouge">-stuff</code> repos are the personal version of this. Each one is a pot on the windowsill. <code class="language-plaintext highlighter-rouge">pim-stuff</code>, <code class="language-plaintext highlighter-rouge">social-stuff</code>, <code class="language-plaintext highlighter-rouge">music-stuff</code>. They don’t need to be polished. They don’t need users. They just need to be useful to me, and they get better every time I use them.</p>

<h2 id="every-nerd-wants-a-jarvis">Every Nerd Wants a Jarvis</h2>

<p>There’s a lot of energy right now around the idea of a personal AI assistant. Projects like <a href="https://github.com/nichochar/openclaw">OpenClaw</a> are going at it top-down — build the unified assistant, give it broad permissions, let it self-modify, add a community skill marketplace. Design Jarvis, then deploy him.</p>

<p>I’m going at it from the other end. No grand architecture. Just a growing collection of scoped tools, each one built in sessions where I’m sitting right there, each one earning trust by being useful. <code class="language-plaintext highlighter-rouge">pim-stuff</code> handles my email. <code class="language-plaintext highlighter-rouge">social-stuff</code> reads my feeds. <code class="language-plaintext highlighter-rouge">music-stuff</code> plays my music. None of them talk to each other yet. None of them run autonomously. They’re individual houseplants, not a connected garden — not yet.</p>

<p>But they compound. The email commands make triage faster, which gives me more time to hack on the feed reader, which teaches me patterns I bring back to the email tool. Each tool makes the next one easier to build. And because I’m using them every day, the build/use loop keeps each one getting better.</p>

<p>I think both approaches will teach us things. Top-down will probably get to something Jarvis-shaped faster. But I trust the bottom-up path because I can see every piece working. Trust earned, not granted.</p>

<h2 id="what-changed">What Changed</h2>

<p>I should be honest about what’s doing the work here. I’ve been writing software for twenty years. I know when to reach for an API, how to evaluate a stack, when an architecture smells wrong. That knowledge matters — I’m not just throwing prompts at a wall.</p>

<p>But honestly? Most of the time I’m not doing architecture. Most of the time it’s just “can you also do this now?” The expertise shows up as taste — knowing what to ask for, recognizing when something’s going sideways, having a sense for what’s structurally possible. It operates in the background. The foreground is just describing what I want to happen next and seeing if it works.</p>

<p>I’m building more personal software than I have in years. Ideas I’d been sitting on — a unified social feed, a hardware dashboard, a DJ concierge — are suddenly real. The activation energy dropped.</p>

<p>The thing that makes it work isn’t the AI generating code. It’s that the AI is <em>in the session with me</em> while I’m using the software. I hit a wall, I describe what I wanted to happen, and we fix it without switching contexts. The friction between “I wish this worked differently” and “now it does” got very small.</p>

<p>I don’t know what shape this takes long-term. Maybe Jarvis, eventually, assembled from the bottom up. Maybe just a windowsill full of houseplants that I tend and use and enjoy. Either way, the loop is the thing I like most about it right now. Build it, use it, notice what’s missing, build that too.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[This is a draft — feedback welcome! Last updated 2026-03-29. Edit history on GitHub.]]></summary></entry><entry><title type="html">Who Owns the Attention Capital</title><link href="https://inze.ph/writing/attention-capital" rel="alternate" type="text/html" title="Who Owns the Attention Capital" /><published>2026-03-22T00:00:00+00:00</published><updated>2026-03-22T00:00:00+00:00</updated><id>https://inze.ph/writing/attention-capital</id><content type="html" xml:base="https://inze.ph/writing/attention-capital"><![CDATA[<p>In <a href="/writing/economy-of-attention">the last post</a> we built a frame: software is accumulated attention. LLMs have given us incredible new tools for manipulating text and code, but they cannot substitute the attention required to construct and maintain a working system. You can chase the attention around, but you can’t skip it.</p>

<p>The AI industry would beg to differ. Their product is a glorious ur-solution to all these problems. And the new problems created by the solution? AI solves those too!</p>

<h2 id="the-spice-shaker">The Spice Shaker</h2>

<p>The pitch, as delivered by frontier model companies, the LinkedIn-consultant-CEO-AI-booster class, and a thousand breathless threads, goes something like this: the LLMs have ingested a hyperdimensional superset of all the attention ever paid into their massive training corpora. Accumulated human comprehension has been distilled into an API. It’s done, folks. We did it. No more thinking required. Shake the shaker, get the flavor. Pay by the token for the privilege.</p>

<p>Okay but jokes aside, I’m not here to dismiss the whole thing. LLMs really have encoded patterns from an enormous volume of human thought. The model really can produce things that look like the output of comprehension, and that output can have truly immense value! The spice shaker really does taste like something.</p>

<p>But the value of what comes out depends entirely on the attention going in. The pitch is that LLMs are a <em>replacement</em> for human attention. My claim is that they’re an <em>amplifier</em>.</p>

<p>Good technology replaces comprehension all the time! Millions of people ride the bus without understanding combustion engines. I’ve been writing code for twenty years and I can count on one hand the number of times I’ve had to think about CPU registers (either school, or very very bad times). That’s what a strong abstraction does: it lets you build on top of something without understanding its internals. And LLMs genuinely do this for a lot of use cases. Non-technical people are getting real value from them right now, no comprehension of transformers required. The abstraction holds, up to a point.</p>

<p>The abstraction leaks when you need to <em>change, debug, or extend</em> the system. The bus rider doesn’t need to understand the engine. The mechanic does. And if you replace all the mechanics with people who also just ride the bus, your fleet works great until the first breakdown. The AI industry is claiming they’ve built a non-leaky abstraction over human comprehension. The conservation law says that’s not possible above a certain complexity threshold. For simple stuff, the abstraction holds fine. For complex systems that other people depend on, it leaks, and when it leaks, someone has to have the comprehension to deal with it.</p>

<p>So where LLMs help you spend attention more <em>effectively</em>, automating the mechanical work so you can focus your comprehension on the parts that matter, the multiplier improves. That’s good. That’s technology doing what technology is supposed to do. But where they help you <em>skip</em> attention entirely, the output degrades. The flavor without the nutrition. Someone downstream has to pay the attention debt.</p>

<p>(Sure, the AI-heads will say: fine, we’ll train more directed models, let a thousand large-language-flowers bloom. Maybe! But each of those flowers will need its own directed attention to cultivate. The conservation law doesn’t care how many models you train.)</p>

<p>Marx would call the broader pitch <a href="https://en.wikipedia.org/wiki/Commodity_fetishism">commodity fetishism</a>: treating a product as if its value is inherent, forgetting the labor that produced it. The frontier model companies stand atop a massive accumulation of other people’s attention — the entire written output of humanity, more or less — and say <a href="https://nedroidcomics.tumblr.com/post/41879001445/the-internet">“I made this.”</a> The model is presented as their innovation, their product, their capital. But the value in it is the comprehension-labor of millions of people who never agreed to the deal.</p>

<h2 id="the-saaspocalypse">The SaaSpocalypse</h2>

<p>The logical extreme of this pitch is the “SaaSpocalypse,” the market-level panic that every SaaS product is just one <code class="language-plaintext highlighter-rouge">.md</code> file away from being replaced by a Claude skill. Nearly a trillion dollars in software market cap wiped out on the premise that all the accumulated attention baked into these products can be reproduced by shaking the shaker hard enough.</p>

<p>There’s a site called <a href="https://deathbyclawd.com/">Death by Clawd</a> that’s savage, funny, and — intentionally or not — a perfect satire of the hysteria. You feed it a company URL and it generates a fake death certificate, a replacement SKILL.md file, and a eulogy. Replacement cost: ~$0.003 per run. Cause of death: “Claude learned to do it better than your silly little web app.”</p>

<p>I ran it on <a href="https://deathbyclawd.com?url=miren.dev">Miren</a>, the infrastructure platform I work on. It gave us a eulogy that begins “Dearly beloved, we gather here to remember Miren — a platform that dared to ask, ‘What if Heroku, but again, but this time we call it modern?’” and a 31-line SKILL.md that generates Dockerfiles and Terraform configs.</p>

<p>I laughed, I cried, I laughed again. I ran it on my friends’ companies. This thing is <em>merciless</em>!</p>

<p>Gauche as it is to defend my workplace against a takedown from a chatbot, there <em>is</em> a difference between the markdown file and the thing I help build every day. We think human experience and taste and sustained attention to a problem will produce something with way more value than you’ll get out of a Claude skill. The SKILL.md can generate configs. It can’t generate taste. (Or I’m wrong and we’re killed in a year by a chatbot. Only time will tell!)</p>

<p>And look — the Marxist in me has to say it — some of this SaaS sweating is <em>deserved</em>. There <em>is</em> a class of business whose fat margins depended on knowledge being scarce, not on understanding being deep. Information wants to be free, and it just got a lot freer. The spice shaker <em>can</em> reproduce thin attention. That’s the forces of production disrupting the relations of production. Please enjoy the capitalism you ordered, dear sirs.</p>

<p>If LLMs <em>can</em> democratize knowledge and capability, if they make it so a solo developer or a tiny team can do things that used to require an enterprise sales call, I’m all for it. That’s <a href="https://miren.dev">the world I’m helping build toward at work</a> anyway. Shake that shaker.</p>

<h2 id="who-owns-the-attention-capital">Who Owns the Attention Capital</h2>

<p>The frontier models are the largest accumulation of attention capital in history: the entire corpus of human comprehension, scraped and compressed into a handful of proprietary systems. And the per-token pricing model is — if we’re being honest about it — rent extraction on that capital. Metered access to distilled human thought, sold back to the people who produced it. Oh good, a new class of landlords. Marx would recognize this one: enclosure of the commons, dressed up as democratization.</p>

<p>I don’t think this is the permanent state of things, but I’m not going to pretend the arc bends inevitably toward openness, either. The cloud era was supposed to democratize infrastructure, and what it actually produced was vast accumulations of capital into a handful of billionaire-controlled conglomerates. We’re all still pining for the web we lost, the one that got enclosed by the <em>other</em> attention economy, the demand-side one, the one that figured out how to monetize eyeballs and strip-mined the commons in the process.</p>

<p>So I hold the hope honestly but not naively: that the spirit of open source and the DIY ethos that built the internet will have its counterpunch here. The big corporate models are truly frontier <em>now</em>, but not forever. I’ll be honest that I haven’t done enough homework on the open-source model world to know exactly what the alternative looks like yet. Right now just understanding what the corpo-frontier tools can do is taking up enough of my brain.</p>

<p>But my dream is we can get to something shaped like: tools of attention-amplification that are collectively owned. Transparent models, trained on community data by community resources, runnable on commodity hardware. Not because the conservation law stops applying, but because the <em>gains</em> from better attention-tooling should flow to the people spending the attention, not to the platform extracting rent on every token. I’d like to learn more about the people building toward that, and I’d like to help where I can.</p>

<p>We’ll see. The substructure holds either way. But the superstructure, who benefits, who pays, who decides, that part is still being written. And that’s the part worth — well, you know.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In the last post we built a frame: software is accumulated attention. LLMs have given us incredible new tools for manipulating text and code, but they cannot substitute the attention required to construct and maintain a working system. You can chase the attention around, but you can’t skip it.]]></summary></entry><entry><title type="html">The Economy of Attention</title><link href="https://inze.ph/writing/economy-of-attention" rel="alternate" type="text/html" title="The Economy of Attention" /><published>2026-03-21T00:00:00+00:00</published><updated>2026-03-21T00:00:00+00:00</updated><id>https://inze.ph/writing/economy-of-attention</id><content type="html" xml:base="https://inze.ph/writing/economy-of-attention"><![CDATA[<p>Ahoy, software-slinging mateys! There’s a lot of foam on the water right now. AI is remaking how software gets built, and our whole industry is metabolizing the change in public. Maintainers are overwhelmed! Systems thinkers are drawing new diagrams! Individual developers are writing about how their own workflows feel different in ways they can’t quite name! Is it a tailspin? An ouroboros of navel-gazing, the discourse eating its own tail? Or are we in a chrysalis, goo right now but about to emerge reformed and beautiful? So many takes! So many emotions! And rightly so. There’s a lot to figure out.</p>

<p>All of which leaves a body looking for anchor points. Navigation buoys on the salty seas. Fundamentals that were always there and hold true even when everything on the surface is churning.</p>

<p>I’ve been calling mine <strong>the economy of attention</strong>.</p>

<p>Not “the attention economy” — that’s the demand-side concept, who can capture your eyeballs. This is the supply side, the other sense of “economy.” Attention as a scarce resource that has to be <em>spent</em> to produce anything of value. Think of it as a conservation law: the attention required to produce correct, understood software can’t be created or destroyed, only moved around. If less is spent in one place, more has to be spent somewhere else.</p>

<p>AI has made generating code almost free. It has not made <em>understanding</em> code any cheaper. That gap is the source of almost everything we’re all feeling right now.</p>

<h2 id="software-is-dead-attention">Software is Dead Attention</h2>

<p>Here’s an old idea in new clothes: a software system is accumulated attention, crystallized into something useful.</p>

<p>Not lines of code. Not person-hours. Not story points. <em>Attention</em>. Someone had to understand a problem deeply enough to encode a solution, and then someone else had to understand that solution deeply enough to trust it, and then someone else had to understand the system deeply enough to change it without breaking what came before. Every layer of that understanding is attention, spent and compounded.</p>

<p>Time to bring in the big guy. Marx’s <a href="https://en.wikipedia.org/wiki/Labor_theory_of_value">labor theory of value</a> says the value of a commodity isn’t the materials: it’s the <em>socially necessary labor time</em> embedded in it. His “labor” is broader than what I’m calling “attention” — in a factory, you can dig a ditch without thinking too hard and the ditch still gets dug. But in knowledge work, what labor is there <em>besides</em> attention? You can’t write code, design a system, debug a problem, or review a PR without sustained comprehension. The typing is trivial. In our domain, labor and attention converge. Marx was writing about factories; we’re writing about software. But the theory fits like a glove.</p>

<p>A codebase is what Marx would call dead labor: past understanding, crystallized. And when dead labor gets crystallized into something that produces value, Marx calls it <em>capital</em>. A working software system is attention capital. It’s the accumulated comprehension of everyone who built it, frozen into an artifact that now generates value for the people who use it.</p>

<p>Why does this matter now? Because the <em>whole point</em> of building software is to create an attention surplus for the people who use it. We spend attention building the thing so that users don’t have to spend attention doing the thing the hard way. Technology is an attention transformer: living attention goes in on the building side, attention capital comes out, and that capital saves effort for everyone who uses it. The measure of a useful piece of software is that multiplier: how much attention was spent building it versus how much attention it saves across everyone who uses it.</p>

<p>The best software has a spectacular ratio. Years of focused human comprehension on the building side, millions of hours of saved effort on the usage side. That’s the whole game. That’s why we do this.</p>

<h2 id="my-look-at-all-this-discourse">My, Look at All This Discourse!</h2>

<p>Once you see software as an attention economy with a conservation law, a lot of the current discourse snaps into focus. I’ve been reading a stack of recent posts, all circling the same thing from different angles: the attention has to be spent, and every apparent escape hatch just moves it somewhere else.</p>

<p>A Django maintainer recently wrote <a href="https://www.better-simple.com/django/2026/03/16/give-django-your-time-and-money/">“Give Django your time and money, not your tokens”</a>, a post that nails the problem from the receiving end. Open source maintainers are getting flooded with AI-generated contributions that cost the submitter almost nothing to produce and cost the reviewer <em>everything</em> to evaluate.</p>

<p>The old implicit contract was roughly balanced. Writing a patch took effort. Submitting it meant you’d probably tested it, probably understood the codebase at least locally, probably cared enough to engage in review. The cost of producing the contribution was a natural filter for its quality, and — just as importantly — a signal of good faith. You’d spent attention. That meant something.</p>

<p>Now you can generate a plausible-looking patch in thirty seconds. The code might be correct. It might even be good. But <strong>the reviewer has no way to know that without spending their own attention</strong>, and their attention hasn’t gotten any cheaper.</p>

<p>The conservation law is doing its thing. The attention cost of a correct patch didn’t go down; the submitter’s cost dropped to near-zero while the reviewer’s stayed exactly the same. The asymmetry exploded. In Marxist terms, this is a familiar dynamic: the labor didn’t disappear, it just stopped being shared. The maintainer now does the comprehension-work alone.</p>

<p>There’s a common observation making the rounds lately (<a href="https://apenwarr.ca/log/20260316">one recent version</a> is sharp on the diagnosis, though its conclusions are questionable): every layer of review you add to a process costs dramatically more than the last. AI makes generation faster, but generation was never the bottleneck. Review is the bottleneck. And review is slow because it’s <em>attention-intensive</em>: someone has to actually understand what changed and why.</p>

<p>The booster-class response is obvious: just have the AI do the review too. Close the loop. AI generates, AI reviews, AI ships, value appears, no human attention required. You can practically hear the CEOs salivating: capital that generates value without labor, the one thing Marx said it couldn’t do. “Take <em>that</em>, old man.”</p>

<p>But review without understanding isn’t review. It’s just another layer of pattern-matching on top of the first layer of pattern-matching. A human reviewer catches problems because they bring directed attention: they understand the system, they know what <em>should</em> be true, they can spot when something is technically correct but architecturally wrong. An AI reviewer can check syntax and run tests, but it can’t know that this particular change undermines an invariant that three other services depend on, because nobody told it and it doesn’t understand why the system is shaped the way it is.</p>

<p>And if nobody in the loop has that understanding, the attention debt doesn’t vanish. It compounds silently until a support ticket is filed that the AI cannot resolve, because the fix requires knowing why the system was built the way it was, and nobody knows anymore, or never did. A human has to pay the debt all at once, with no context, under pressure. We’ve all had to dig into that one decrepit corner of the codebase that nobody remembers, tiptoeing through ancient ruins with a flashlight, hoping not to fall into a snake pit. Now imagine that’s the <em>whole system</em>.</p>

<p><a href="https://en.wikipedia.org/wiki/W._Edwards_Deming">Deming</a> saw this decades ago in manufacturing: adding QA inspection layers <em>reduces</em> individual accountability. When everyone knows there’s another check downstream, they stop checking their own work. Review layers are attention-routing systems, and badly designed ones leak. The total attention spent might even go <em>down</em>.</p>

<p>There’s a whole genre of post right now about spec-driven development: <a href="https://haskellforall.com/2026/03/a-sufficiently-detailed-spec-is-code">write the spec, let the AI build it</a>. Make the code <a href="https://aicoding.leaflet.pub/3majnyfydzs2y">disposable and regenerable</a>, keep only the intent. It’s appealing. But as Gabriella Gonzalez (creator of <a href="https://dhall-lang.org/">Dhall</a>) observes: a sufficiently detailed spec <em>is</em> code. If you write a specification precise enough for an AI to generate correct code from it, you’ve already done the hard work. You’ve spent the attention. The spec didn’t save you anything; it just moved where the attention went.</p>

<p>You can shuffle the apparent cost around: write a spec instead of code, generate a PR instead of typing it, use a framework that hides the complexity. And yes, you can have the AI write the spec too. Turtles all the way down! But the actual attention required to produce something <em>correct and understood</em> doesn’t compress. At some point someone has to actually understand the problem being solved.</p>

<p>Then there’s the emotional layer. Developers are <a href="https://notes.visaint.space/ai-coding-is-gambling/">comparing AI coding to gambling</a>: pull the lever, get a result, pull again. It’s “preposterously addicting” precisely because it removes the cognitive burden. Simon Willison and the Oxide folks have been calling the existential dread underneath it “<a href="https://simonwillison.net/2026/Feb/15/deep-blue/">deep blue</a>”: <em>what was I even for?</em> And the work itself feels hollow: you’re spending your time “mopping up how poorly things have been connected” rather than actually solving problems.</p>

<p>That hollow feeling is attention debt, experienced from the inside. Marx has a word for it: <em><a href="https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation">alienation</a></em>. The worker separated from the product of their labor. When you vibe-code, you produce artifacts you don’t understand. The slot machine gives you the dopamine of production without the satisfaction of comprehension.</p>

<p>The pattern repeats at every level. Submit it and the reviewer pays. Add review layers and they leak. Write specs instead and it’s the same work in different clothes. Skip it entirely and it feels hollow. The attention has to be spent.</p>

<h2 id="did-you-spend-the-attention">Did You Spend the Attention?</h2>

<p>I find the whole “which lines did the LLM write” discourse tiresome. I get the impulse — maintainers are drowning in low-effort slop and need some way to sift through it. But “did AI type this bit” is a bad proxy for what they actually care about. If the frame is useful, it should give us a better one: <strong>did you spend at least as much attention as you’re asking someone else to spend?</strong></p>

<p>That cuts through a lot of the nitpicking. For those of us experimenting with these tools — and that’s what you do with new tools! — a culture that polices toolchains instead of outcomes creates fear around exactly the kind of experimentation we need right now.</p>

<p>(If you want to opt out of these tools entirely, I respect that. I share concerns about transparency, consent, attribution, and resource usage, and I feel the dissonance. I’ll keep grappling, and maybe write about that grappling at some point. But right now I’m focused on the middle of the discourse: where are the lines for those of us who <em>do</em> use these tools?)</p>

<p>The heuristic works across the whole spectrum of trust. On a high-trust team where you’ve built up a long history of deposits into the shared attention budget, the dials can be set differently. Your teammates know you. They’ve seen your judgment. When you say “Claude and I looked into this and here’s what we found,” that’s a meaningful signal because they trust that <em>you</em> actually spent the attention, regardless of who typed the characters. The AI is a tool in the hands of someone they trust.</p>

<p>First-time open source contributions are on the opposite end. The maintainer has no way to tell whether the person on the other end actually understands what they’re submitting. Squinting at PRs for LLM-iness is a dead end. Better to focus on systems and norms that carry real signal: <a href="https://github.com/mitchellh/vouch">webs of trust</a> that let humans vouch for humans, or disclosure norms that ask contributors to show up as themselves — “hi, human here, worked with Claude on this, I think it’s right, please let me know.” That’s a person spending attention. You can work with that.</p>

<p>Cards on the table: I used LLMs heavily in writing this very post. You could dismiss it as slop because an LLM typed most of the prose, but my claim is there’s something worthwhile in here. This is the result of <a href="https://github.com/phinze/phinze.github.io/commits/main/_posts/2026-03-21-economy-of-attention.md">hours of back-and-forth</a> across multiple LLM sessions. I’ve typed thousands of words into those sessions; very few of them directly into my editor. If I had typed “yo Claude, read these five articles and give me a post that gets me into The Discourse” the result would have been thin, heavy on LLMisms, light on insight. The difference is the attention I brought to the process. Or at least I think so! If you think these ideas are wrong, I’d love to hear why — that’s a conversation worth having. But if you’d dismiss them solely because an LLM touched the prose, I’ll gently point out that you <em>did</em> make it to the bottom of the piece. ;)</p>

<p>The seas haven’t settled; hard not to get a little seasick. But hey, at least I found this buoy, and we’ll keep charting the seas together.</p>

<p>In <a href="/writing/attention-capital">my next post</a> I take this frame and ask some new questions: what exactly is the AI industry selling us, and who should own the tools that amplify our attention?</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Ahoy, software-slinging mateys! There’s a lot of foam on the water right now. AI is remaking how software gets built, and our whole industry is metabolizing the change in public. Maintainers are overwhelmed! Systems thinkers are drawing new diagrams! Individual developers are writing about how their own workflows feel different in ways they can’t quite name! Is it a tailspin? An ouroboros of navel-gazing, the discourse eating its own tail? Or are we in a chrysalis, goo right now but about to emerge reformed and beautiful? So many takes! So many emotions! And rightly so. There’s a lot to figure out.]]></summary></entry><entry><title type="html">Surfboard Makers</title><link href="https://inze.ph/writing/surfboard-makers" rel="alternate" type="text/html" title="Surfboard Makers" /><published>2026-02-20T00:00:00+00:00</published><updated>2026-02-20T00:00:00+00:00</updated><id>https://inze.ph/writing/surfboard-makers</id><content type="html" xml:base="https://inze.ph/writing/surfboard-makers"><![CDATA[<p>I penned this one as a stake in the ground for how we’re thinking about AI at Miren. It’s got both Evan’s and my name on it because it reflects where we’ve landed together, but I drove the drafting — with, fittingly, a little help from our robot friend.</p>

<p><a href="https://miren.dev/blog/surfboard-makers">Read it over on miren.dev →</a></p>

<hr />

<p><em>Full text copied below from the <a href="https://miren.dev/blog/surfboard-makers">original post</a>.</em></p>

<p>We are a small team that builds deployment software, and we use AI tools
extensively to do it. Poke around our repos and you’ll find a CLAUDE.md
file right there in the root, sitting alongside a homepage that says
“human-centric.” We don’t think that’s a contradiction, and we’d like to
explain why.</p>

<h2 id="the-surfboard-maker">The Surfboard Maker</h2>

<p>Imagine a surfboard maker. She loves surfing — the craft of shaping a board,
the thrill of catching a wave, the community at the beach. That’s why she got
into this work.</p>

<p>The ocean right now is choppy. Big swells, shifting currents, undertow you
can’t always see. For surfers, choppy water is a mixed bag: uncertainty, yes,
but also some of the biggest waves anyone’s ever seen. It’s a hell of a time to be a surfer.</p>

<p>But not everybody’s focused on the waves. Some people are watching the water
itself, asking harder questions. Who owns the beach? Who’s polluting the water?
What happens to the ecosystem when the currents shift this fast? How do we make
sure everybody has access to surf, not just people who can afford the gear?</p>

<p>The surfboard maker has thought about this. She knows what happens if the
systems that support surfing break down. Neglecting them could kill the
thing she loves.</p>

<p>She’s a surfboard maker, but she’s also a citizen of the beach. She goes
to work, she shapes boards, she rides waves — and she doesn’t pretend that
her craft exists in a vacuum.</p>

<p>That’s us. We’re surfboard makers. We build deployment tools for small teams
because we are a small team, and we know what it feels like when your tools
let you focus on the work instead of fighting the infrastructure. Shipping
something to people who find it genuinely useful, iterating on it together,
watching it get better — that’s catching the wave.</p>

<h2 id="big-waves">Big Waves</h2>

<p>We are a handful of people building infrastructure software. The work we do — container
orchestration, distributed state, controller reconciliation — requires deep
understanding of systems. AI doesn’t replace that understanding. But it removes
friction around it.</p>

<p>Our posture is: try it for everything. Can it help us write design docs?
Capture notes? Make diagrams? Write code? Review code? Sometimes it’s magical.
Sometimes it’s hilariously wrong. Sometimes it’s frustratingly inconsistent.
We experiment and we adjust. Overall, it’s making us faster, and enabling a
small team to explore far more ideas than we otherwise could.</p>

<p>But speed isn’t the point. The question we keep coming back to — with AI, with
Miren, with all of our tools — is: <em>is this helping humans build and
communicate with other humans?</em> When the answer is yes, we keep going. When
it’s not, we back off.</p>

<p>This is also why Miren exists. Our entire product is built on the belief that
AI enables small teams to ship more software than ever, and that software needs
to go somewhere. We’re trying to build a business around that — we won’t
pretend otherwise. It’s early, and our actions over time will say more than
any blog post.</p>

<h2 id="the-work-is-ours">The Work Is Ours</h2>

<p>There’s a real concern behind the skepticism, and it’s worth naming: a lot
of AI-generated code is bad. Slung out without understanding, without review,
without taste. We get why people see “AI-assisted” and think “AI-generated
slop.”</p>

<p>Craft is understanding what you’re building and why. That doesn’t change
when you pick up a new tool. When AI helps us write a controller or a
migration, the understanding is still ours — and so is the responsibility.
We read it, test it, and judge it. We don’t catch everything — nobody does.
But we build software that’s easy to change, so when something slips
through we fix it, we learn from it, and we write the test.</p>

<p>The most important things we make are the design documents we write for
each other. RFDs that lay out the shape of the system, the ideas we’re
considering, the decisions we’ve made and why. AI might help us draft them,
but their purpose is to capture and convey human thinking. They’re written
for humans first. The fact that AI tools can also read them and use that
context to help us build is a nice byproduct, not the point.</p>

<p>We think about Miren itself the same way. LLMs are in the mix. Container
technology is in the mix. But none of that is the point. The point is
taking something from a builder, presenting it to a user, and making that
feedback loop as smooth as possible. The technology serves the connection
between humans — not the other way around.</p>

<h2 id="what-we-pay-attention-to">What We Pay Attention To</h2>

<p>We’re not oceanographers. We’re not going to write a white paper on AI
governance. But “we just build tools” isn’t a position we’re interested in.
We want to be citizens of this community, not just consumers of it. We’re
still figuring out what that looks like in practice.</p>

<p>We’ve worked at small companies and large ones. We believe smaller teams
can stay truer to their values — the bigger you get, the more the forces
of growth dilute the original vision. The fact that small teams are
able to do more in this wave of technology means there’s more
potential than ever to stay small and stay focused. That’s something worth
protecting.</p>

<p>The frontier AI tools are a case in point. They come from companies that
are clearly chasing growth, clearly subject to the forces that growth
brings into play. We don’t love being tethered to the whims of a large
company to get the benefits of AI. But we’re building for teams that use these tools — we want to understand
them firsthand, not theoretically.</p>

<p>At the same time — in the same way that we built Miren to run on nearly
any computer, not locked to any one cloud vendor — we’re paying close
attention to the progress of more open, locally controlled AI tools. The
instinct is the same: don’t build a dependency you can’t walk away from.</p>

<h2 id="were-still-surfers">We’re Still Surfers</h2>

<p>We don’t have this all figured out. Nobody does. We’re out here learning
alongside everyone else, feeling the tensions and contradictions, eyeing
up which waves feel surfable and which ones feel dangerous. We’ve taken
some spills already, and we’re certain we’ll take more.</p>

<p>But we know who we are. A small team that loves building software, building
for other small teams who love building software. The waves are big right
now, and we’re not going to sit on the beach and watch.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I penned this one as a stake in the ground for how we’re thinking about AI at Miren. It’s got both Evan’s and my name on it because it reflects where we’ve landed together, but I drove the drafting — with, fittingly, a little help from our robot friend.]]></summary></entry><entry><title type="html">Level of Detail</title><link href="https://inze.ph/writing/level-of-detail" rel="alternate" type="text/html" title="Level of Detail" /><published>2026-02-07T00:00:00+00:00</published><updated>2026-02-07T00:00:00+00:00</updated><id>https://inze.ph/writing/level-of-detail</id><content type="html" xml:base="https://inze.ph/writing/level-of-detail"><![CDATA[<p>In 3D graphics, there’s a technique called <a href="https://en.wikipedia.org/wiki/Level_of_detail_(computer_graphics)">Level of Detail</a> (LoD). The idea is simple: why spend GPU cycles rendering every vertex of a distant mountain when the player can’t tell the difference between ten thousand triangles and a hundred? So the engine swaps in a lower-polygon model. As you get closer, it swaps in a higher one. Done well, the player never notices.</p>

<p><img src="/assets/images/stanford-bunny-lod.png" alt="The Stanford bunny rendered at three levels of detail — from thousands of polygons down to a few hundred" /></p>

<p>The algorithms have gotten wildly sophisticated over the decades. Modern engines don’t just swap between a few discrete models. They can continuously stream geometry, dissolve between levels, even procedurally generate detail on the fly. But the core insight hasn’t changed: <strong>don’t compute what nobody’s looking at</strong>.</p>
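<p>A toy version of the classic discrete swap is nothing more than a distance-thresholded lookup. This is a minimal sketch, not any engine’s actual API — the cutoff distances and mesh names are invented for illustration:</p>

```python
# Discrete LoD selection: pick the cheapest model whose detail
# threshold covers the current camera distance.

LOD_LEVELS = [
    (10.0, "bunny_high.mesh"),          # close up: full detail
    (50.0, "bunny_medium.mesh"),        # mid-range: reduced mesh
    (float("inf"), "bunny_low.mesh"),   # far away: silhouette only
]

def select_lod(distance):
    """Return the mesh to render for an object at this distance."""
    for max_distance, mesh in LOD_LEVELS:
        if distance < max_distance:
            return mesh
    return LOD_LEVELS[-1][1]
```

<p>Real engines layer hysteresis and dissolves on top so the swap doesn’t pop, but the decision at the core is this cheap.</p>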

<p>I keep coming back to this idea because I think it describes one of the central activities of building software. Not the code part—the <em>thinking</em> part.</p>

<h2 id="models-all-the-way-down">Models All the Way Down</h2>

<p>We spend our days building and navigating models. Code is the most visible kind, but the mental models are what actually matter. When I’m debugging a production issue, I’m not holding the entire system in my head. I’m holding a low-polygon version, just enough shape to know where to look, with the ability to zoom in when something catches my eye.</p>

<p>Abstraction is the core operation here. When I draw a box on a whiteboard and label it “database,” I’ve loaded a low-LoD model. I know there’s a sprawling world of B-trees and query planners and buffer pools in there, but right now I don’t need those polygons. I just need to say “data goes here.” I need the silhouette.</p>

<p>Even the phrase “black box” is a kind of low-polygon model: a cube with no visible internals. You only need the shape of it. What goes in, what comes out.</p>

<p>This is something experienced engineers do instinctively. A senior engineer waves their hand and says “that part’s fine, the bug is over <em>here</em>.” Zoom out to understand the architecture. Zoom in to chase the bug. Zoom back out to check whether the fix makes sense. The skill isn’t knowing everything about the system. It’s knowing what resolution you need right now.</p>

<h2 id="context-windows">Context Windows</h2>

<p>Here’s what’s been rattling around my head: LLMs have a version of this problem, and it’s <em>weirdly</em> parallel to our own.</p>

<p>When you work with an LLM, context is everything. Too little context and it makes dumb assumptions: it fills in the missing polygons with whatever its training data suggests, which might be completely wrong for your situation. Too much context and it gets lost: the relevant details drown in noise, the model starts contradicting itself, the reasoning goes soft.</p>

<p>Getting an LLM to do good work is largely a LoD problem. You need to load the right model of the situation into its context window, at the right resolution. High detail on the part you’re working on. Lower detail (but not zero) on the surrounding architecture. Maybe just a sentence about the broader system.</p>
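<p>A sketch of what that tiering might look like when assembling a prompt — everything here (the helper name, the inputs, the section labels) is hypothetical, but the shape is the point: one line of system context, signatures for the neighbors, full source only for the focus:</p>

```python
# Assemble LLM context at three resolutions, lowest detail first,
# mirroring LoD: spend the token budget where the work is.

def build_context(focus_file, neighbor_signatures, system_summary):
    parts = [
        f"System overview: {system_summary}",                        # one sentence
        "Relevant interfaces:\n" + "\n".join(neighbor_signatures),   # signatures only
        f"Current file (full source):\n{focus_file}",                # full detail
    ]
    return "\n\n".join(parts)
```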

<p>We do the same thing with our own brains all day long. We just never think of it that way.</p>

<h2 id="fifty-thousand-lines-a-day">Fifty Thousand Lines a Day</h2>

<p>So here comes AI, and the geometry starts generating itself.</p>

<p>Adam Jacob gave <a href="https://youtu.be/yxzghm3Fdj8?t=10718">a talk at CfgMgmtCamp</a> this week where he laid it all out pretty bluntly. He’s fresh off shutting down System Initiative (six years, seven product iterations, didn’t find fit), and he’s rebuilt a prototype in three days with AI. He says people he knows and trusts are generating 50,000 lines of working code per day, single-threaded. His message to the infrastructure community: the time for skepticism is over. The velocity increase is too high. Adapt or get left behind.</p>

<p>His framework for what’s left for humans is design and planning. Implementation, testing, review: that’s all agent work now. “Are you reading the code? The answer is not really. Not really. I can’t. It’s going too fast.” Code principles like DRY don’t matter anymore because you’re never reading the code. The only thing that matters is software architecture: giving the agents enough structure to stay coherent.</p>

<p>It’s a “let it rip” vision. Crank the polygon count to maximum. The GPU can handle it now, so why hold back?</p>

<h2 id="the-rigor-move">The Rigor Move</h2>

<p>On the other end of the spectrum, the Oxide folks had <a href="https://oxide-and-friends.transistor.fm/episodes/engineering-rigor-in-the-llm-age">a conversation recently about engineering rigor in the LLM age</a> that lands in a very different place.</p>

<p>One example: Rain Paharia wrote one implementation by hand, then had the LLM replicate the pattern across four variants: 20,000 lines plus 5,000 doc tests in under a day. Without the LLM this library might never have shipped at all. The tedium-to-value ratio was just too punishing. The LLM didn’t replace the rigor. It made the rigorous version <em>feasible</em>.</p>

<p>The pattern across the whole conversation is the same: LLMs remove friction from the <em>details</em>, which frees you up to spend more time on the parts that actually require careful judgment. More rigor, not less. The polygon budget went up, and they’re spending it on quality rather than quantity.</p>

<h2 id="carving-back">Carving Back</h2>

<p>Adam’s right that the velocity increase is real and not going away. But I think the “50,000 lines a day” framing mistakes output for progress. We’ve always known that lines of code is a terrible metric. The interesting question isn’t how much code you can generate. It’s how much code you can <em>justify</em>.</p>

<p>My hunch is that we’ll spend just as much time and energy carving code back as we will generating it. If generating code is nearly free, the scene fills up with geometry fast. But your rendering budget—working memory, attention, the ability to reason about what you’ve built—doesn’t scale the same way. And sometimes the right move isn’t a more sophisticated LoD strategy. It’s simplifying the scene itself. Delete the sprawling implementation and replace it with something you can actually reason about.</p>

<figure>
  <img src="/assets/images/frustum-culling.gif" alt="Frustum culling in action — as the camera sweeps around a 3D city, everything outside its field of view vanishes" />
  <figcaption>via <a href="https://github.com/Falmouth-Games-Academy/comp350-research-journal/wiki/View-Frustum-Culling-(VFC)">Falmouth Games Academy</a></figcaption>
</figure>

<p>The GPU story played out the same way. GPUs are absurdly more powerful than they were twenty years ago. And the results are real: photorealistic worlds spanning kilometers, running at hundreds of frames per second. But you don’t get there by throwing the whole map at the hardware. That gets you a very pretty slideshow. You get there because graphics engineers got better at managing what to render and what to skip. Stream in the right portion of the map so the player doesn’t hit a loading screen. Drop everything outside the viewport as they look around. Cull what’s behind that wall. Photorealism is a bunch of dances with data: deciding what to load, what to keep, and what to throw away, hundreds of times per second.</p>
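<p>Frustum culling, for all its importance, also reduces to a small decision. Here’s a toy field-of-view test — real engines cull bounding volumes against all six frustum planes, and the function and its inputs here are invented for illustration:</p>

```python
import math

def in_view(camera_pos, camera_dir, fov_degrees, point):
    """Toy cull: is `point` within the camera's cone of vision?

    Assumes camera_dir is a unit vector. Anything outside the
    half-angle of the field of view gets skipped entirely.
    """
    # Vector from the camera to the object.
    to_point = tuple(p - c for p, c in zip(point, camera_pos))
    length = math.sqrt(sum(c * c for c in to_point))
    if length == 0:
        return True  # the camera is inside the object; render it
    # Cosine of the angle between the view direction and the object.
    cos_angle = sum(d * t for d, t in zip(camera_dir, to_point)) / length
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))
```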

<p>The raw power didn’t eliminate the LoD problem. It moved it. The engineers aren’t hand-placing low-poly stand-ins anymore, but they’re still spending their days figuring out what the player needs to see and what they can get away with not rendering. The work changed shape, but the discipline is what delivers the fidelity.</p>

<p>I think that’s where we’re headed with code. The bottleneck was producing it, and that bottleneck has loosened. We’re going to build better software because of it, just like GPUs gave us better-looking games. But the pressure moves to a part of the work that’s always been there: knowing what should exist and what shouldn’t. That takes human judgment, but the same tools that can generate 50,000 lines a day might also help us figure out which 5,000 to keep.</p>

<h2 id="the-constant">The Constant</h2>

<p>The tools around this activity are changing fast. I can load a low-LoD model of a subsystem I’ve never even seen by asking an LLM to summarize it. I can vaguely describe a building and get back a ream of floor plans. These are real, meaningful changes to the speed of the work. But the work itself—the deciding, the choosing, the constant question of “how much do I need to know right now?”—that part hasn’t changed at all. I don’t think it can. Somebody still has to decide what the thing should <em>do</em>, and somebody has to navigate what’s been built. That’s not the bottleneck. That’s the work.</p>

<p>A distant mountain doesn’t need every triangle. But the thing in the player’s hands, the thing they’re interacting with every single frame, needs all the polygons you can give it. No amount of GPU power changes that. The player is always looking at <em>something</em>.</p>

<p>Knowing what that something is, that’s the gig. It always has been.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In 3D graphics, there’s a technique called Level of Detail (LoD). The idea is simple: why spend GPU cycles rendering every vertex of a distant mountain when the player can’t tell the difference between ten thousand triangles and a hundred? So the engine swaps in a lower-polygon model. As you get closer, it swaps in a higher one. Done well, the player never notices.]]></summary></entry><entry><title type="html">idk, try it</title><link href="https://inze.ph/writing/idk-try-it" rel="alternate" type="text/html" title="idk, try it" /><published>2026-02-03T00:00:00+00:00</published><updated>2026-02-03T00:00:00+00:00</updated><id>https://inze.ph/writing/idk-try-it</id><content type="html" xml:base="https://inze.ph/writing/idk-try-it"><![CDATA[<p>I wrote a post for the <a href="https://miren.dev/blog/">Miren blog</a> about the joy of rapid iteration and low-friction deployment. It’s the story of an internal project that started as a simple Google Meet bot and evolved into something genuinely useful because we made trying things free.</p>

<p><a href="https://miren.dev/blog/idk-try-it">Read it over on miren.dev →</a></p>

<hr />

<p><em>Full text copied below from the <a href="https://miren.dev/blog/idk-try-it">original post</a>.</em></p>

<p>There’s a feeling you get when you’re hacking on something with friends and the tooling just <em>gets out of the way</em>. You make a change, you ship it, you see what happens. “Like this?” “No, like <em>that</em>!” “OMG yeah!” It’s pure feedback loop. It’s play.</p>

<p>Lately I’ve been getting that feeling from a little internal project called Mirener. It started as a Google Meet bot but has slowly grown into… I don’t know, a general-purpose ambient software thingy? Our little Slack buddy that does whatever we need it to do this week.</p>

<h3 id="a-bot-is-born">A Bot is Born</h3>

<p>Back in August, we wanted something simple: type <code class="language-plaintext highlighter-rouge">/meet</code> in Slack, get a Google Meet link. There were a couple of integrations in the Slack marketplace, but none of them felt quite right. You couldn’t customize the notifications. They didn’t work in threads. You couldn’t tell if anybody was actually in the meeting or not. Auto-transcripts and summaries were a new feature, but they’d get lost in permissions vortexes.</p>

<p>Evan, as he often does, said something like “surely we can do better by writing a little software.” In no time he had something basic up and running. A few days later, voila — meeting notes posted back to the channel automatically. Already better than anything we’d found in the marketplace.</p>

<p>Then he did another Evan thing: “Hey, anybody should feel free to add stuff to this.”</p>

<h3 id="jumping-in">Jumping In</h3>

<p>So I started poking at it. I wanted to add thread support so you could <code class="language-plaintext highlighter-rouge">@mirener meet</code> in any conversation.</p>

<p>I got something half-working and hopped on a call with Evan to talk through it. I was sharing my screen, walking through my WIP code, and he said “idk, try it.”</p>

<p>“What, like… deploy this?”</p>

<p>“Sure, why not.”</p>

<p>And that’s when it hit me — Miren was just going to take the code from my machine and put it on the server. My brain pathways had atrophied after years of working in CI pipelines. I had forgotten that you don’t <em>have</em> to commit → PR → CI → review → merge → deploy. You can just… deploy.</p>

<p>I typed the command. It broke. We fixed it. I deployed again. We made a tweak. I deployed again. Then eventually I committed and pushed.</p>

<p>I’d been dipping my toe in. Evan gave me a push. Now we were all just splashing around.</p>

<h3 id="playing-together">Playing Together</h3>

<p>When trying things is free, something clicks. You start riffing.</p>

<p>“The summaries are cool, but what if we split out the non-work stuff into its own section?” An hour later, we had off-topic summaries — little moments of delight for people who weren’t in the meeting, trying to guess how exactly we got to talking about Road Runner cartoons.</p>

<p><img src="/assets/images/mirener-summary.png" alt="A Mirener meeting summary in Slack showing work topics and off-topic chat" /></p>

<p>“What if we could see who’s in the meeting without joining?” Live participant lists. “What if we could search slackmojis and upload them right from Slack?” <code class="language-plaintext highlighter-rouge">/emoji search</code>. “What if we could see who’s around before starting a call?” Presence tracking.</p>

<p>None of it was planned. No roadmap, no sprint planning, no tickets. Someone would say “wouldn’t it be cool if…” and an hour later it was live.</p>

<p>Here’s the weird part: by treating this as a toy with low stakes, we ended up with something genuinely useful. We’ve been able to use Mirener to shape how we collaborate — we wanted it to be effortless to hop on a quick live chat, and to let the computers take notes so we could focus on the conversation. That’s exactly what happened.</p>

<h3 id="what-i-forgot">What I Forgot</h3>

<p>I’ve spent many years in environments where deploying anything requires a lot of ceremony. Heck, I’ve often been the person responsible for <em>building</em> that ceremony! The linters must give the OK, the tests must pass, the reviewers must approve. This stuff is all important and has its place — our core Miren services go through CI, which is the right call for customer-facing code.</p>

<p>But not everything needs that. Somewhere along the way I forgot how good it feels to just ship.</p>

<p>Mirener reminded me. The tools you choose reflect your values. The tools you build spread them.</p>

<p>Damn, I missed this. Enjoy the deploy.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I wrote a post for the Miren blog about the joy of rapid iteration and low-friction deployment. It’s the story of an internal project that started as a simple Google Meet bot and evolved into something genuinely useful because we made trying things free.]]></summary></entry><entry><title type="html">Climate Class</title><link href="https://inze.ph/writing/climate-class" rel="alternate" type="text/html" title="Climate Class" /><published>2023-10-11T00:00:00+00:00</published><updated>2023-10-11T00:00:00+00:00</updated><id>https://inze.ph/writing/climate-class</id><content type="html" xml:base="https://inze.ph/writing/climate-class"><![CDATA[<p>This post serves as both my review of and my final assignment for <a href="https://terra.do/climate-education/cohort-courses/climate-change-learning-for-action/?">Terra.do’s Learning for Action Course</a>. If you’ve heard me refer to “my climate class” - which is highly likely if you’ve spent more than 30 seconds talking to me recently - this is what it was all about!</p>

<h3 id="joining">Joining</h3>

<p>When I started LFA, I was at a major crossroads in my life. I had just quit my job without having another one lined up. After 15 straight years of work, I planned to take a sabbatical. I wanted to rest and recharge and to research a career transition into climate. I was not expecting to commit to a 12-week course so soon, but LFA seemed so perfectly suited to my situation that I decided to take a deep breath and sign up.</p>

<h3 id="what-it-is">What it is</h3>

<p>The class pairs enthusiastic and caring instructors with an absolute avalanche of information. You can tell there are experts in pedagogy behind the course design. It shows both in the structure of the content as well as the constant guidance for students on how to effectively engage with the material.</p>

<p>The course material comes in several different forms:</p>
<ul>
  <li><strong>20 “Classes” of self-directed material</strong> - This is where the avalanche comes in. These things are densely packed, heavily linked, regularly updated gold mines of information. Students are expected to have these at least skimmed by each week’s lab session, and it’s a bear to keep up. (More on keeping up below.)</li>
  <li><strong>Weekly guest lectures from experts in the field</strong> - These were a selling point for me joining the course as I was excited about the opportunity to hear first-hand talks from bona fide climate experts. I’m happy to report that they’re just as valuable as I was hoping they’d be. I wasn’t able to attend all of them live, but they’re all recorded and for the ones I did attend live it was exciting to be able to participate in the Q&amp;A at the end.</li>
  <li><strong>Weekly lab sessions</strong> - In these you get together with a smaller lab group (mine was about 25 students) and an instructor to discuss the week’s course material. These are pitched as the “beating heart” of the course and it’s easy to see why - getting to spend time each week with a group of real human beings going through the same course was a key motivating factor for me. These sessions helped make the experience feel like an actual class and not just a set of resources for consumption.</li>
  <li><strong>Bonus deep dive lab sessions</strong> - These are optional extra events giving an opportunity to go deeper on a topic in the course. I wasn’t able to attend many of these but the ones I did attend or watch were solid.</li>
</ul>

<h3 id="how-it-went">How it went</h3>
<p>On joining I felt like the students were disproportionately coming from tech/business/entrepreneurial backgrounds, and I worried that the course would be narrowly focused on that persona. I’m fairly allergic to the aggressively positive inspo/grindset mood that is prevalent in tech, and I knew I would bounce off any class delivered in that style. Luckily the course content proved to be comprehensive and broad and as I got to know my fellow students more I found that none of them actually fit the stereotype I was worried about.</p>

<p>The first guest lecture is called “Climate Science 101,” delivered by <a href="https://www.soest.hawaii.edu/soestwp/about/directory/chip-h-fletcher/">Dr. Chip Fletcher</a>. The instructors warn that it’s a gut punch of a talk, and they intentionally follow it with a workshop on emotional resilience delivered by <a href="https://www.linkedin.com/in/nikyta-palmisani-20a5877/">Nikyta Palmisani</a>, an eco-psychologist. This combination was like a thesis statement for the course: “We are going to give you the unvarnished facts, and the facts are dire. But we are also going to give you tools for taking care of yourself, because you need to be clear-eyed but also capable of action.”</p>

<p>The other major early message repeated by several voices was that transitioning into climate is less about re-skilling than it is about finding ways to apply your existing talents. The climate emergency requires an economy-wide response, which means that the solution space is big enough to require every different kind of skillset in existence.</p>

<p>So that’s the frame of the course as it moves through the content: we want to help make you useful, so take from the materials what makes you useful and leave the rest. This ethos allowed the class to cover controversial areas and conflicting takes in a sensible way: by presenting the conflict directly, allowing multiple perspectives to shine through. This pluralistic approach meant that we got exposure to a lot of different ideas without getting bogged down in debates.</p>

<p>I hit a doldrums point around the middle of the course where the sheer scale and complexity of the problem space was daunting me, squelching my enthusiasm. I was hit hard by the Climate Justice modules which illuminate how seemingly straightforward solutions can have devastating secondary effects on already disadvantaged communities. How could I ever make a dent in a problem so big? How could I ever know that what I was working on was really advancing environmental justice?</p>

<p>What carried me through this low point was the preparation the class had given me by not shying away from the gravity of the underlying crisis, acknowledging that anxiety and sadness are natural and expected responses, and then asking us to take care of ourselves, because that’s an integral part of responding to the climate crisis too.</p>

<p>In the back half of the course, I found my spirits recovering, but my class progress lagging. In the beginning I was able to get all the way through every slide of the classes before lab sessions, but now I was struggling to keep up. Again here the ethos of the course supported me: it was very clear what I could skip, what I could skim, and what I should focus on. The fact that everything is recorded meant that I could spend a few free hours on a weekend catching up as needed. A mantra of “this is meant to be useful for you” helped me quell my inner honor student and focus on quality of learning over checking boxes.</p>

<h3 id="what-i-learnt">What I learnt</h3>

<p>There’s so much information that it’s difficult to comprehensively summarize; instead, here are a few highlights that stuck with me:</p>

<ul>
  <li><strong>We must get to zero carbon emissions by 2050.</strong> That’s 50% by 2030. Climate scientists around the world work on thousands of climate modeling projects. The <a href="https://www.ipcc.ch/">IPCC</a> is a UN body that meets each year and tries to distill the latest science into communications designed for policymakers and the public. There’s a lot of inherent complexity to climate models and aggregating scientific findings is going to involve confidence intervals and error bars. There’s a tension between staying true to the science by exposing the ambiguity and clearly communicating with the public. I’ve found the <a href="https://www.iea.org/reports/net-zero-by-2050#">IEA’s Net Zero by 2050</a> report does a good job of summarizing a high level story of what needs to happen (focused on the energy sector).</li>
  <li><strong>Collective action is more important than any personal pro-climate changes</strong>. The notion of a ‘personal carbon footprint’ was <a href="https://www.theguardian.com/commentisfree/2021/aug/23/big-oil-coined-carbon-footprints-to-blame-us-for-their-greed-keep-them-on-the-hook">pushed by none other than big oil</a> to atomize people into individually competing virtue chasers. Climate-conscious choices in life are good, but they’re not going to move the needle as far or as fast as it needs to move.</li>
  <li><strong>But paradoxically, fully addressing the challenge will require behavior change in individuals.</strong> We need to electrify the entire economy, which will involve people switching to electric stoves, cars, and HVAC. We need buttloads of solar, which will inevitably include a significant chunk of rooftop solar. Animal agriculture accounts for a <a href="https://drawdown.org/solutions/plant-rich-diets">ridiculously high percentage of global greenhouse gas emissions</a>, <a href="https://ourworldindata.org/food-choice-vs-eating-local">beef most of all</a>, so reducing meat consumption on a societal scale will play a significant factor in getting to net zero.</li>
</ul>

<p>More than any one fact, my biggest takeaway from the class is that pluralistic mindset. We need to recruit <em>literally everybody</em> into this challenge, and in order to do so we need to meet them where they’re at. What’s most exciting to me about climate is its potential to put us all on the same team. The problem threatens all of us, and the solution requires something from all of us.</p>

<h3 id="where-im-headed">Where I’m headed</h3>

<p>I started the class at a crossroads and I remain at a crossroads. This is okay! I knew going in that I wanted to take more time away from full time work, so I look forward to continuing to explore the space outside the context of a class.</p>

<p>I’ve built up a set of skills and knowledge about building software products and leading software teams. I know that I like complex systems and I’m particularly intrigued by the energy system and its parallels with my expertise in internet infrastructure.</p>

<p>As a first step I’m working on sussing out what <em>form</em> I want my next chapter to take: Start something or join something? Sign up for one full time job or collect part time commitments? I’ve got plenty of experience with full-time gigs, so up next I’m planning to learn more about the world of advising, investing, and consulting.</p>

<p>My overall strategy for the next few months is to continue to be open to lots of different ideas, people, and organizations, on the hunt for a <em>feeling</em>, that spark of intuition that will tell me which paths are worth walking down.</p>

<p>Along the way I’m going to try to continue a practice of sharing what I’m learning. I have found an old adage from surgical medicine bouncing around my head lately: “See one, do one, teach one.” I think this is a valuable idea that applies to both pedagogy and movement building. To truly learn something, you must put it into practice and share it with others. To truly believe in an ideal, you must put it into practice and share it with others.</p>

<h2 id="up-up-and-away">Up up and away</h2>

<p>In 2010, I was 2 years out of university, working at a cool startup with an amazing team. We were pair programming full time and it felt like I was learning at a hundred miles an hour every day. I was a few years junior to the team, but we were all in our early-to-mid twenties working our asses off to turn code into a bootstrapped startup. Due in part to our age, in part to the personalities on the team, and in part to the zeitgeist of those years, it felt like the air was electric with enthusiasm and optimism about all things tech.</p>

<p>One of our team was from St. Louis and encouraged us to go to this seemingly-esoteric tech conference in his hometown. My first year attending was 2011, where Rich Hickey keynoted with his now-iconic “<a href="https://www.thestrangeloop.com/2011/simple-made-easy.html">Simple Made Easy</a>” talk. I was in love. This was industrial philosophy! We had tech luminaries! Nerd rockstars! (We still used that word unironically back then.) And they were full of mind-expanding ideas to push our craft forward! The whole place felt like it was filled with such smart people talking about tech so deep that I could barely follow their ideas. But damn did I want to!</p>

<p>The next year I remember being absolutely delighted to see that Daniel Friedman, the author of <a href="https://mitpress.mit.edu/9780262560993/the-little-schemer/">my favorite textbook from school</a>, was just as joyful and weird as his writing. He presented “<a href="https://www.infoq.com/presentations/miniKanren/">Relational Programming in miniKanren</a>” with his research partner (and co-author of the later <em>Schemer</em> books) William Byrd. They both playfully bickered the whole time as they showed off demos and just thoroughly broke everyone’s brain throughout. To this day I regularly think about the moment where Friedman jumps the gun on their agenda yelling “IT’S <a href="https://en.m.wikipedia.org/wiki/Quine_%28computing%29">QUINE</a> TIME!” Byrd immediately chastises him kid-brother style: “Nooo! We’re gonna do numbers! You’re messing up,” to which Friedman sighs, “Fine fine fine… it’s not quine time.” It eventually did become quine time and he yelled it again, delivering the punch line the room was eagerly waiting for. We were a room full of 100+ kids messing with legos… but the bricks were lambda calculus.</p>

<p><a href="https://www.destroyallsoftware.com/talks/a-whole-new-world">Gary Bernhardt’s talk</a> from that year also stays with me to this day. He was flying high off the viral success of his now-iconic “<a href="https://www.destroyallsoftware.com/talks/wat">Wat</a>” lightning talk delivered earlier that year, and it felt like he took all of that energy and packed it into a big swing. The presentation has a bit of indirection, so it’s better to experience than to read about. It had me rapt in the balcony of the theater and absolutely primed for the message payload.</p>

<p>What tied all these experiences together was a community of technologists committed to trying out big ideas and interrogating received wisdom while embracing the weird and having fun along the way. It’s such a special combination. There’s nowhere else like it.</p>

<h2 id="booster-detach">Booster detach</h2>

<p>The following year, I switched jobs and lost my connection with the crew of coworkers that kept going annually. I eagerly awaited each year’s videos, but life prevented me from returning.</p>

<p>In the intervening decade, the world changed so much, and so did I. I had my techno-optimist heart broken time and again as the superstructures of power wrung the soul out of the internet. It has taken me a long time to find pieces of my previous hopeful worldview that I can start to put back together into a new vision of what tech means to me. In those personal wilderness years, even as I celebrated their release, I couldn’t muster the energy to watch many Strange Loop talk videos.</p>

<h2 id="reentry">Reentry</h2>

<p>In the last couple of years, I’ve taken some first steps on a journey of rebuilding my value system in relation to tech. I’m still suspicious of most tech idealism, wary of what capitalism will stifle, but I’m starting to get a feel for the shape of community that I’m seeking and seeing glimmers of it here and there.</p>

<p>In comes the announcement about 2023 being the final Strange Loop ever. Wow, end of an era! A contingent of that first startup’s early-employee diaspora all bought nostalgia-fueled tickets and made our way to St. Louis for one last rodeo.</p>

<p>Going into this last conference in 2023, I was unsure. Would I feel alone as a cynic in a sea of unbridled tech boosters? I’d watched the magic drain out of so many other things I had loved in tech - what would it be like coming back to one of the early fonts of my enthusiasm?</p>

<p>I shouldn’t have worried. I’m relieved and grateful to report that Strange Loop still has the magic! It managed to inspire that same mix of enthusiasm and wonder in me that I remember from a decade ago. The community has evolved with the world, and while the techno-optimistic spirit is still in there, it’s married with a clear-eyed view of the world in all its faults and a drive to change it for the better.</p>

<p>“<a href="https://thestrangeloop.com/2023/playing-with-engineering.html">Playing With Engineering</a>” was a wonderful combination of tech and science and creativity and education to kick off the conference. After, I did a plinko dive through <a href="https://thestrangeloop.com/2023/the-attacker-has-expensive-radio-equipment-but-your-android-phone-is-resilient.html">cell tower spoofing</a>, <a href="https://thestrangeloop.com/2023/lessons-from-building-github-code-search.html">code search</a>, and <a href="https://thestrangeloop.com/2023/ipvm-seamless-services-for-an-open-world.html">a software stack that rethinks the fundamental architecture of the web</a>. The conference party at <a href="https://en.m.wikipedia.org/wiki/City_Museum">City Museum</a> was a perfect extension of the ethos of the conference—a place all about exploration, play, surprise, and whimsy.</p>

<p><a href="https://thestrangeloop.com/2023/an-approach-to-computing-and-sustainability-inspired-from-permaculture.html">My favorite talk of this year came from Devine Lu Linvega</a> of <a href="https://100r.co">Hundred Rabbits</a> on the second day. They have a way of combining tech, philosophy, and art that I find intellectually challenging in the absolute best way. I might write more about this talk because I’m still processing it.</p>

<p>The closing three keynotes were sweet, funny, and pitch-perfect to close out the final Strange Loop. I found myself tearing up right along with conference founder Alex Miller in his send-off keynote. He built something really special, he recognized it was time to let it go, and we all got to stand in gratitude and say goodbye to the thing together. As the applause filled the beautiful theater we were participants in our very own self-reflective loop: in that moment we were Strange Loop saying goodbye to itself. It was one of the most special communal moments I’ve been a part of.</p>

<p>I was surprised that Alex didn’t program that talk last, but having Julia Evans and Randall Munroe headline afterwards was exactly the right move. The playful, joyful, humble energy they both brought was the goodbye that Strange Loop deserved. Their parting messages were respectively “remember to share what you’ve learned” and “remember to be kind as you share.” Coming off a deeply technical body of the conference that at times borders on inscrutable, these were apt reminders to close on.</p>

<h2 id="forest-for-the-trees">Forest for the Trees</h2>

<p>Alex was clear that there won’t be any direct successors to Strange Loop; “If you are inspired to start something, it should reflect your values, not mine.” He used the metaphor of a tree falling in a forest, which lets light into the canopy and becomes food for new shoots to grow. With the forest analogy resonating, Alex wondered aloud if conferences with as big a carbon footprint as this should continue to exist. That was a tough, brave idea to present in this moment. Maybe this was a bigger, deeper ending. Perhaps it has to be. The idea was almost too heavy for a room full of teary eyes, already mourning.</p>

<p>But Alex had some seeds to share. He sketched out a few ideas for hybrid models where local meetups could meet at medium-scale venues, record their talks, and exchange them digitally across geographies. And he offered his expertise as a resource for anybody who wanted to consult with him as they experimented.</p>

<p>I’m hopeful that Alex is right about the birth of new sprouts, and I’m inspired to try and contribute to their growth. I will be looking for folks in Chicago who might be interested in working together on this (please reach out if this might be you!).</p>

<p>I think invoking the image of an ecological system (which is made up of self-reinforcing loops inside loops inside loops) as well as asking us to consider our own ecological and community impact (nested circles of reciprocity) was the perfect end: one that reckons bravely with the endings while portending future beginnings.</p>

<p>So thank you and farewell, Strange Loop. Function complete. End thread. Yield execution to the surrounding system.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The last instance of the Strange Loop tech conference in St. Louis just wrapped up. I’m so glad I was able to be here. This conference made an important impact on me in my early career, and this year’s experience inspired me to write my own farewell.]]></summary></entry><entry><title type="html">Treatment</title><link href="https://inze.ph/writing/treatment" rel="alternate" type="text/html" title="Treatment" /><published>2020-12-24T00:00:00+00:00</published><updated>2020-12-24T00:00:00+00:00</updated><id>https://inze.ph/writing/treatment</id><content type="html" xml:base="https://inze.ph/writing/treatment"><![CDATA[<p>It’s been several months since I wrote about my <a href="/writing/diagnosis">diagnosis</a>. I figured it’s about time to cook up a public update. So, please join me for a tour of Acute Promyelocytic Leukemia!</p>

<h2 id="what-was-it-supposed-to-do-before-we-broke-it">What was it supposed to do before we broke it?</h2>

<p>The human body is made up of various substances which can be classified into goos and non-goos. Blood is one of the most beloved of the goos. It scoots around the body doing various chores, delivering packages, cleaning up trash, and generally keeping the place in order. Blood can be further divided into three sub-goos:</p>

<ul>
  <li>Red Blood Cells - USPS for oxygen</li>
  <li>White Blood Cells - Spies and assassins</li>
  <li>Platelets - Occasionally useful gunk</li>
</ul>

<p>All of these are born and raised in the bone marrow. Bone marrow sits at a spongy midpoint of the goo/non-goo spectrum. It has found a convenient hiding place within the bones (those stalwarts of non-goo). It stays busy by breeding hemocytoblasts (impressionable youth) and putting them through <a href="https://en.wikipedia.org/wiki/Hematopoietic_stem_cell#/media/File:Hematopoiesis_(human)_diagram_en.svg">various training programs</a> to fill all of the necessary roles in the blood club.</p>

<h2 id="what-did-we-break">What did we break?</h2>

<p>The name of the disease tells most of the story if you break it down:</p>

<ul>
  <li>Acute - Speedy</li>
  <li>Promyelocytic - Having to do with promyelocytes (white blood cell teenagers)</li>
  <li>Leukemia - Wow ok there are way too many of these things</li>
</ul>

<p>Cue the ukulele, as this is a classic case of arrested development. You start young blasts on a path to successful careers as white blood cells and they lose their way as adolescent promyelocytes. They’ve failed to become members of functioning society. All they can really do is procreate, which they do, a lot. The marrow starts to fill up with these useless teens, and the whole blood creation system grinds to a halt.</p>

<p>This is how we caught my APL. The routine blood work from my annual physical came back with a note that said “Excuse me sir, but your blood is missing some blood. Please check into the ER immediately.” This was followed by a few days on a general hospital floor as the doctors marveled at how I could be feeling totally fine with my blood counts so low - my very own episode of House. A bone marrow biopsy showed the promyelocyte party indicative of APL and I was transferred to the hem/onc unit so my treatment journey could begin.</p>

<h2 id="how-do-we-fix-it">How do we fix it?</h2>

<p>I’m going to tell you the treatment for APL and it’s going to sound like a joke but it’s not a joke. Ready?</p>

<p>It’s <a href="https://en.wikipedia.org/wiki/Arsenic_trioxide">arsenic</a> and <a href="https://en.wikipedia.org/wiki/Tretinoin">acne pills</a>. Yep.</p>

<p>There’s a whole big story of how we stumbled on this unlikely combo of drugs. I’m still researching the details so I’m saving that saga for a future post. For now let’s cover the basics:</p>

<ul>
  <li>It’s a non-standard regimen that’s specifically used for APL.</li>
  <li>It’s technically not even classified as chemotherapy, since its central mechanism of action is to trigger <em>cell differentiation</em> (“grow up you deadbeat cell kids!”) rather than just being broadly <em>cytotoxic</em> (“death to all cells who dare to challenge me!”).</li>
  <li>Broadly speaking these drugs are well tolerated. This means that I get to sidestep many of the “classic” chemotherapy side effects like hair loss. The main side effects I’ve had to battle are headaches and dry skin from the pills and fatigue from the arsenic. Sure, it’s no walk in the park, but it’s not all that bad a deal for curing leukemia!</li>
</ul>

<h2 id="when-will-it-be-fixed">When will it be fixed?</h2>

<p>While the drugs are manageable, the treatment schedule is somewhat intense. Cancer treatment is divided into three phases:</p>

<ol>
  <li><strong>Induction</strong> aka <em>“Get out!”</em> - This is an initial wallop of treatment intended to induce remission. This is done in the hospital. It’s a 4 week course, which took 5 weeks for me because my liver threw a hissy fit that caused us to have to back off for a minute.</li>
  <li><strong>Consolidation</strong> aka <em>“Aaaand stay out!”</em> - This is continuing outpatient treatment that is intended to stomp out the last traces of the disease and to prevent recurrence. It’s an on/off schedule over the course of 8 months. I’m in the middle of this right now.</li>
  <li><strong>Maintenance</strong> aka <em>“And don’t you ever come back!”</em> - For APL patients who don’t present independent risk factors, there’s no specific maintenance therapy. So if all goes well there will be a point next year where I am no longer taking any specific drugs for this. From there it will just be a lifetime of checkups and general vigilance. For such a serious disease, this is a pretty amazing outcome. Go go medical science.</li>
</ol>

<p>When you’re on arsenic, which is half of the time during consolidation, you’re going down to the cancer clinic five days a week. It’s like a job! And indeed that’s how my doctors framed it for me when trying to help me understand how much other stuff I should be expecting to fit in - “receiving your treatment is your full time job; anything else you can get done is great, but treatment has to be your first priority.”</p>

<p>Consolidation is 4 cycles of 8 weeks so 32 weeks / ~8 months. I’m currently nearing the midpoint. It’s a long road, but every day is progress.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[It’s been several months since I wrote about my diagnosis. I figured it’s about time to cook up a public update. So, please join me for a tour of Acute Promyelocytic Leukemia!]]></summary></entry></feed>