We Are All AI Puppeteers Now
Six months ago I wrote code. Today I write prompts and validate outputs. The craft changed. The team changed. The coffee break changed. And nobody is talking honestly about what we lost and what's coming next.
I went for coffee. When I came back, the feature was done.
No PR waiting for review. No Slack message asking for clarification. No teammate explaining why the approach needed to change. Just four AI agents that finished the work while I was gone.
That should feel like a win. Somehow it doesn't always feel that way.
The Last Six Months Changed Everything
Six months ago, my day looked like this: open the IDE, read the ticket, think through the problem, discuss it with a teammate, write the code, push, wait for review, iterate.
Today it looks like this: open the terminal, write a prompt, review what came back, validate it against the acceptance criteria, move on.
I'm not coding less because I'm lazy. I'm coding less because the AI does it faster, and my value has shifted. I'm no longer the person who writes the logic; I'm the person who decides if the logic is right.
That's a real shift. And for many developers, it happened so fast they didn't notice until the skill started to feel rusty.
Your Team Is Now Four Agents
I used to start my morning by syncing with the team. A quick standup, some async messages, someone sharing an interesting problem they ran into the day before.
Now my "team" is a set of agents I've configured. They don't have opinions. They don't push back on bad ideas. They don't tell you when you're going down the wrong path unless you prompt them to. They execute.
There's a certain efficiency to it. There's also a certain loneliness to it that nobody is talking about.
The accidental problem-solving is gone: the junior's question that exposed a gap in your own understanding, the hallway conversation that reframed a design you thought was settled. You don't get those anymore.
We shipped faster. We lost something in the process.
The Token Burnout Is Real
Classic burnout came from overwork, bad management, unclear requirements, endless meetings. You know the type.
The new kind hits differently.
"I can't continue. I'm out of tokens. We'll pick this up tomorrow."
I've heard that sentence more than once this year. It's strange every single time. We traded human fatigue for computational limits; we didn't eliminate the constraint, just outsourced it to an API quota.
When a human gets tired, you understand it. When an AI stops mid-task because it hit a token ceiling, there's a specific frustration: the work isn't done, the context is gone, and tomorrow you're picking up a thread the next session may not reconstruct correctly.
Burnout didn't go away. It just changed shape.
The Roles Are Dissolving
Designers are shipping code. Developers are designing interfaces. Product managers are writing SQL. Job title boundaries that held for decades are blurring fast.
Some of it actually works. A designer implementing their own vision skips the handoff. A developer who can mock a UI skips the design sprint. Both ship faster.
But thereās a shadow side.
When roles dissolve, accountability goes with them. A designer who "built an app" with AI-generated code may not know (and in most cases doesn't) what lives inside that codebase: the auth flow, the data storage, the third-party APIs with their own privacy policies. They shipped a product. They didn't ship understanding.
That gap is where the real problems start.
The Accountability Vacuum
Non-technical people are building applications, selling them, and collecting user data without understanding what that means.
I've seen it. Someone builds a SaaS with an AI coding assistant, starts charging $20/month, stores user emails, payment data, maybe health-related inputs, and has never once thought about encryption at rest, GDPR, SOC 2, or what happens when their vendor has a breach.
They're not malicious. They're just operating in a moment where "I can build it" no longer requires "I understand what I built."
The barrier to creation dropped. The barrier to responsibility didn't.
When you called customer support five years ago, a human answered: someone who could make judgment calls, escalate the weird edge cases, recognize when something didn't fit the script. Today you get a chat agent that handles the common cases and fails in ways that are hard to predict and harder to recover from.
We automated the interaction. We didn't automate the judgment.
Where Does This Go?
Are we heading toward a workforce that doesnāt need people?
Not yet. But most of us are moving through this transition without stopping to think about it.
The developers who matter in the next two years aren't the ones resisting AI tools; that ship has sailed. They're the ones who understand what the AI doesn't: context, consequence, accountability, ethics, user trust.
An AI can build the feature. It can't decide whether the feature should exist. It can't weigh the legal exposure, the user relationship, the long-term product direction. That's still a human job.
Just a different job from writing for loops.
The real question isn't whether AI replaces developers. It's what kind of developer you're becoming while the AI does the work. Are you the person who approves the output? Or the person who actually understands it?
What's your experience been over the last six months? Progress, loss, or something harder to name? Share this with someone on your team, or find me on social, because this conversation is worth having out loud, not just in our heads while the agents are working.