
The AI Prisoner's Dilemma

Published at 09:44 PM

Two years ago, I wrote 90-100% of my code by hand. Today, it’s maybe 30%. Each step felt reasonable, but I think we might all be trapped in what I’m calling the AI prisoner’s dilemma.

It’s the classic setup: everyone makes the individually sensible choice, and those choices add up to something nobody wants.
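The dynamic can be sketched as a toy game-theory payoff table. The numbers below are invented purely for illustration; only their ordering matters, and it's what makes this a prisoner's dilemma rather than an ordinary trade-off:

```python
# Illustrative payoffs for the "AI prisoner's dilemma". Values are made up;
# only their relative ordering matters. Higher = better for me.
PAYOFFS = {
    # (my_choice, their_choice): my_payoff
    ("adopt", "adopt"): 1,     # everyone races ahead into uncertain territory
    ("adopt", "abstain"): 3,   # I gain a competitive edge
    ("abstain", "adopt"): 0,   # I fall behind
    ("abstain", "abstain"): 2, # collective restraint: slower, more deliberate
}

def best_response(their_choice):
    """The choice that maximizes my payoff, given what everyone else does."""
    return max(["adopt", "abstain"],
               key=lambda mine: PAYOFFS[(mine, their_choice)])

# Adopting dominates: it's my best move no matter what others choose...
assert best_response("adopt") == "adopt"
assert best_response("abstain") == "adopt"
# ...yet mutual adoption pays everyone less than mutual restraint.
assert PAYOFFS[("adopt", "adopt")] < PAYOFFS[("abstain", "abstain")]
```

That last assertion is the trap: each individual choice is rational, but the equilibrium everyone lands in is worse than the outcome coordination could have reached.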

If we avoid using AI, we might miss out on productivity gains and fall behind those who do adopt it. If we embrace AI, we get those benefits, but we might also be accelerating toward outcomes we can’t fully predict. It could be a future where many jobs change significantly, as Dario Amodei recently suggested, or something we haven’t even considered yet.

There’s one question, maybe among many, that’s nagging at me: what happens to our craft when we use tools built to replace it?

We’re all making rational decisions that add up to outcomes we can’t control. And unlike other coordination problems, we can’t easily organize to avoid it.

The AI Ratchet

Once you’re on the AI path, it feels inevitable. GitHub Copilot arrived in my life, and suddenly I wrote maybe 70-80% of my code while AI handled boilerplate and common patterns. A year later, with VSCode and agent selection, I was down to 50-60% human contribution. Now I’m at around 30%.

I still read, review, and own every line, but I’m not the one typing most of it anymore. The 30% I do write is higher-leverage: architecture, tricky logic, and directing. Basically, the parts that benefit from taste and experience. (And I argue with the LLM until the other 70% meets my quality bar.)

My role is changing from musician to conductor. I recently wrote about building a game with AI where I barely wrote any code.

I can see a near future, perhaps one to two years away, where I’m not writing code in an editor at all but instead managing teams of AI agents that implement what I describe.

The major tools are all moving toward an agent orchestration model: OpenAI’s Codex, Claude Code with a Claude Code MCP, Cursor’s Background Agents. The direction seems clear to me.

The Sports Gear Analogy

A friend recently compared AI adoption to sports equipment evolution: “Do you really want to play baseball in 1930s gear? Or basketball in what they wore in the 70s?” He mentioned that ultramarathons had to be made harder over time because gear improvements made the original courses too easy.

The analogy misses how big this change feels. Imagine if cycling went from Tour de France bicycles to e-bikes to motorcycles, all within five years. At some point, you’re not cycling anymore.

Magic Hour

There’s a moment in photography called magic hour. It’s that brief, beautiful period after sunset when the sky is still light.

Magic hour at Pike Place Market

That’s where we are with AI and knowledge work: a golden hour where AI multiplies our capabilities rather than replacing them. I’m more productive than I’ve ever been. The work is often more interesting because the tedious parts are automated away. Something new is coming, and I genuinely don’t know what’s on the other side of it.

Do we actually have a choice?

Individual developers, companies, even entire industries keep choosing to adopt AI tools. Each choice seems reasonable on its own. Yet these small decisions add up to big outcomes that no one controls. Coordinating across a whole industry feels nearly impossible.

Even if we could somehow coordinate to slow AI adoption, the incentives for breaking rank would be huge. The people who adopt AI first may get such significant advantages that staying out starts to feel impossible.

Where this might go

I’m not sure where this all goes. That said, here are a few possible futures:

Soft Landing: AI becomes a powerful collaborator and doesn’t fully replace human cognitive work. We find a new state where humans and AI work together, similar to how spreadsheets made calculations easier and created new types of analysis work.

Hard Transition: Displacement happens quickly, and we figure it out. Maybe universal basic income, shorter work weeks, or new economic models emerge. Humans have adapted to big changes before, even when they felt overwhelming at the time.

Post-Hype Crash: AI hits limits faster than expected. The technology plateaus, investment dries up, and we’re left with useful but incremental tools rather than significant change. The dilemma resolves itself as adoption naturally slows.

Overshoot: We automate faster than we can adapt. Social and economic systems strain under the change. The benefits of AI are captured by a small group while the effects spread through society faster than we can build safety nets.

What’s in our control

When I feel overwhelmed by all this, I try to focus on what’s actually in my control versus what isn’t.

Out of my control: Industry direction, macroeconomic shifts, whether AI development slows down or speeds up, what other companies or developers choose to do.

In my control: My curiosity about these tools, willingness to learn how to use them effectively, staying informed, and how I adapt to changes.

Even with that perspective, the bigger question remains.

Living in the question

The honest answer is that I don’t know which future we’re heading toward, or how much our individual choices matter. We’re all making decisions that make sense for us personally, without really knowing where they lead.

Recognizing this pattern doesn’t mean we’re stuck with it. If we’re going to keep adopting AI tools anyway, we might as well do it intentionally rather than drifting into whatever happens next.

Maybe it’s because we’re expecting our first child any day now, but I find myself thinking less about what this means for me and more about what world we’re creating for him. What will work and life look like when he’s my age? And what will it be like to grow up in this age?

The choices we make now are shaping the future in ways we can’t really see yet. What gives me hope is that some of the best innovations have come from periods of uncertainty, when we had to figure things out as we went along.

Magic hour doesn’t last forever, and neither does the uncertainty. Eventually, we’ll find our footing in whatever comes next.


Thanks to Brian Sandoval and Noam Katz for their feedback.

