We Are All AI Puppeteers Now
· 5 min read


Six months ago I wrote code. Today I write prompts and validate outputs. The craft changed. The team changed. The coffee break changed. And nobody is talking honestly about what we lost and what's coming next.

#ai #developer-experience #career

I went for coffee. When I came back, the feature was done.

No PR waiting for review. No Slack message asking for clarification. No teammate explaining why the approach needed to change. Just four AI agents that finished the work while I was gone.

That should feel like a win. Somehow it doesn’t always feel that way.

The Last Six Months Changed Everything

Six months ago, my day looked like this: open the IDE, read the ticket, think through the problem, discuss it with a teammate, write the code, push, wait for review, iterate.

Today it looks like this: open the terminal, write a prompt, review what came back, validate it against the acceptance criteria, move on.

I’m not coding less because I’m lazy. I’m coding less because the AI does it faster, and my value has shifted. I’m no longer the person who writes the logic — I’m the person who decides if the logic is right.

That’s a real shift. And for many developers, it happened so fast they didn’t notice until their own coding skills started to feel rusty.

Your Team Is Now Four Agents

I used to start my morning by syncing with the team. A quick standup, some async messages, someone sharing an interesting problem they ran into the day before.

Now my “team” is a set of agents I’ve configured. They don’t have opinions. They don’t push back on bad ideas. They don’t warn me when I’m going down the wrong path unless I prompt them to. They execute.

There’s a certain efficiency to it. There’s also a certain loneliness to it that nobody is talking about.

The accidental problem-solving is gone — the junior’s question that exposed a gap in your own understanding, the hallway conversation that reframed a design you thought was settled. You don’t get those anymore.

We shipped faster. We lost something in the process.

The Token Burnout Is Real

Classic burnout came from overwork, bad management, unclear requirements, endless meetings. You know the type.

The new kind hits differently.

“I can’t continue. I’m out of tokens. We’ll pick this up tomorrow.”

I’ve heard that sentence more than once this year, and it’s strange every single time. We traded human fatigue for computational limits. We didn’t eliminate the constraint; we just outsourced it to an API quota.

When a human gets tired, you understand it. When an AI stops mid-task because it hit a token ceiling, there’s a specific frustration: the work isn’t done, the context is gone, and tomorrow you’re picking up a thread the next session may not reconstruct correctly.

Burnout didn’t go away. It just changed shape.

The Roles Are Dissolving

Designers are shipping code. Developers are designing interfaces. Product managers are writing SQL. Job title boundaries that held for decades are blurring fast.

Some of it actually works. A designer implementing their own vision skips the handoff. A developer who can mock a UI skips the design sprint. Both ship faster.

But there’s a shadow side.

When roles dissolve, accountability goes with them. A designer who “built an app” with AI-generated code may not know, and in most cases doesn’t know, what lives inside that codebase. The auth flow, the data storage, the third-party APIs with their own privacy policies. They shipped a product. They didn’t ship understanding.

That gap is where the real problems start.

The Accountability Vacuum

Non-technical people are building applications, selling them, and collecting user data without understanding what that means.

I’ve seen it. Someone builds a SaaS with an AI coding assistant, starts charging $20/month, stores user emails, payment data, maybe health-related inputs — and has never once thought about encryption at rest, GDPR, SOC 2, or what happens when their vendor has a breach.

They’re not malicious. They’re just operating in a moment where “I can build it” no longer requires “I understand what I built.”

The barrier to creation dropped. The barrier to responsibility didn’t.

When you called customer support five years ago, a human answered — someone who could make judgment calls, escalate the weird edge cases, recognize when something didn’t fit the script. Today you get a chat agent that handles the common cases and fails in ways that are hard to predict and harder to recover from.

We automated the interaction. We didn’t automate the judgment.

Where Does This Go?

Are we heading toward a workforce that doesn’t need people?

Not yet. But most of us are moving through this transition without stopping to think about it.

The developers who matter in the next two years aren’t the ones resisting AI tools — that ship has sailed. They’re the ones who understand what the AI doesn’t: context, consequence, accountability, ethics, user trust.

An AI can build the feature. It can’t decide whether the feature should exist. It can’t weigh the legal exposure, the user relationship, the long-term product direction. That’s still a human job.

Just a different one than writing for loops.

The real question isn’t whether AI replaces developers. It’s what kind of developer you’re becoming while the AI does the work. Are you the person who approves the output? Or the person who actually understands it?


What’s your experience been over the last six months? Progress, loss, or something harder to name? Share this with someone on your team, or find me on social — because this conversation is worth having out loud, not just in our heads while the agents are working.
