Take the Leap Before You Are Pushed

The fear of AI replacing your work is real. So is the shift. The teams that build structure around agents instead of resisting or rushing find the job gets better, not smaller.

[Image: Silhouetted figure standing at the edge of a crumbling stone cliff, facing a vast expanse of warm golden light. Fragments of ground fall away behind them into mist.]
I had every reason to wait. I jumped anyway.

I talk to engineers every week who haven't started using AI agents for real work. I don't mean autocomplete, and I don't mean asking ChatGPT to explain a stack trace. I mean agents writing production code, reviewing it, testing it, debugging it, shipping it.

They all have reasons. The accuracy isn't there yet. It's a security risk. It won't fit how we work. We tried it a year ago and it didn't help. It's overhyped, it's a bubble, it might take my job. Some of them mention the environmental cost.

I understand every one of those concerns. Some of them I shared, not that long ago. If you've spent a decade or more building a career around writing good code, code you're proud of, code that reflects how you think, and someone tells you an agent can do it now, that doesn't land as a technical argument. It lands as an identity one.

I'm not here to dismiss that. I'm here to tell you what I found on the other side.

The fear is real

Let's not pretend this is comfortable.

Shopify's CEO told his entire company that AI usage is no longer optional, that it's part of performance reviews, and that teams need to prove why work can't be done by AI before requesting headcount. Shopify's workforce dropped from 11,600 to 8,100 between 2022 and 2024 while the company grew at least 21% per year. More output, less headcount, and the CEO making it clear the direction is permanent.

Fiverr's CEO told employees "AI is coming for your jobs. Heck, it's coming for my job, too," then cut 30% of the workforce. Jack Dorsey laid off 40% of Block, over 4,000 people, and said most companies will reach the same conclusion within a year. Nearly 55,000 U.S. jobs were cut in layoffs that companies attributed to AI in 2025, more than four times the roughly 12,700 in 2024. Junior developer employment dropped nearly 20% between 2022 and 2025 according to Stanford research.

Some of this is genuine structural change. Some of it is executives using AI as cover for cuts they'd have made anyway. Oxford Economics found many "AI layoffs" were actually correcting pandemic-era overhiring, and Block's cuts included policy and DEI roles that have nothing to do with AI capability. "AI" has become the justification layer for any kind of restructuring, and that makes the fear worse, not better. Whether the reason is real or cynical, the person on the other end experiences the same thing.

The tools aren't earning trust

And then the tools themselves make the distrust worse. Microsoft shipped a VS Code change that silently appended "Co-authored-by: Copilot" to commit messages by default, even when developers hadn't used an AI agent. Even when they'd manually rewritten the commit message, the co-author line appeared in the final git history anyway. The commit they reviewed wasn't the one that landed. Microsoft has promised a fix in the next release after the backlash, but the damage to trust was done.

When the tools themselves misrepresent what happened, the people being told to adopt them have every reason to be skeptical. Communities are responding. The CNCF surveyed nearly 100 projects and found over half of maintainers want mandatory AI disclosure on contributions, with another 20% wanting it in specific cases. They're willing to accept AI-generated code, but they want the transparency to adjust their review accordingly. That's a reasonable position, and it's the opposite of what Microsoft tried to force.

The Stack Overflow survey captured the broader tension: 46% of developers actively distrust AI tool accuracy, while adoption keeps climbing anyway. People are using tools they don't trust because the pressure to adopt is coming from above. That's a miserable place to be.

Hasty adoption is dangerous

Klarna is a cautionary tale worth studying. In February 2024 they announced their AI assistant was doing the equivalent work of 700 full-time customer service agents, handling two-thirds of all chats. They projected $40 million in savings and declared victory. Then quality dropped. The AI handled volume but not complexity. Repeat inquiries climbed. Customer satisfaction fell. By May 2025 the CEO told Bloomberg: "Cost unfortunately seems to have been a too predominant evaluation factor. What you end up having is lower quality." They started rehiring humans.

But the lesson from Klarna isn't "AI doesn't work." It's "AI without structure doesn't work." They replaced humans and hoped the model would handle it. No workflow around the agents. No trust boundaries. No system for the agents to operate within. It's the same failure mode I see in engineering teams every week: point agents at problems without giving them context, then blame the tools when the output is bad.

The teams resisting AI entirely and the teams adopting it recklessly are making the same mistake from opposite directions. One group avoids the tools. The other group uses them without building anything around them. Both end up in the same place: convinced that agents can't deliver.

The shift isn't waiting

This would matter less if the shift were gradual. It's not.

Satya Nadella says 30% of Microsoft's code is now AI-generated. Google reports over 25%. Anthropic's CEO told the Council on Foreign Relations he expects AI writing 90% of code within months. Meta's Zuckerberg said he expects half of development to be done by AI within a year. Across 4.2 million developers, 26.9% of production code is already AI-authored.

Spotify's CEO told investors that some of their most senior engineers haven't written a single line of code since December 2025. They "only generate code and supervise it." Spotify built an internal system called Honk around Claude Code where an engineer can tell the agent to fix a bug from their phone on the morning commute and have a new build pushed to them on Slack before they arrive at the office. They shipped over 50 new features and changes throughout 2025.

These aren't startups chasing hype. These are the largest engineering and product organizations in the world restructuring around the assumption that agents write code.

The METR study from early 2025 found developers using AI were actually 19% slower. Skeptics cite that constantly. What gets cited less: by February 2026, METR couldn't recruit developers willing to work without AI for their control group. The tools changed that fast, and the people using them knew it even when the early metrics hadn't caught up.

The infrastructure layer has already shifted. The question isn't whether this happens. It's whether you shape how it happens for your team, or let it happen to you.

Structure sets the path to success

Every concern I listed at the opening of this post is real. Accuracy, security, cost, environment, job loss. But they're all characteristics of agents operating without structure.

I've written about what that structure looks like, what it costs when you get it right, and how it changes the work. I'm not going to rehash the details here. The short version: when you give agents conventions, workflows, review pipelines, and trust boundaries, the accuracy problem becomes a context problem you can solve. The security risk becomes an engineering problem with engineering solutions. The cost drops because structured agents waste far fewer tokens.
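To make "trust boundaries" and "review pipelines" concrete, here is a minimal sketch of what a gate on agent-generated changes could look like. Everything in it is invented for illustration: the protected paths, the check names, and the `gate` function are assumptions, not a real tool or any specific team's setup.

```python
# Illustrative sketch only: paths, check names, and messages are
# assumptions, not a description of a real pipeline.

PROTECTED_PATHS = ("migrations/", "secrets/", ".github/workflows/")

def crosses_boundary(files: list[str]) -> list[str]:
    """Return the files an agent touched that sit behind a trust boundary."""
    return [f for f in files if f.startswith(PROTECTED_PATHS)]

def gate(files: list[str], checks: dict[str, bool]) -> tuple[bool, str]:
    """Decide what happens to an agent's change.

    `checks` maps a check name (tests, lint, type-check) to whether it
    passed. A boundary crossing escalates to a human; a failed check goes
    back to the agent; otherwise the change is ready for a human to review
    intent rather than syntax.
    """
    blocked = crosses_boundary(files)
    if blocked:
        return False, "human review required: " + ", ".join(blocked)
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        return False, "back to the agent: " + ", ".join(failed) + " failed"
    return True, "ready for human review of intent, not syntax"
```

The particulars don't matter; the shape does. "The agent wrote something" is never the end of the pipeline — the structure decides what the agent can touch and what happens when checks fail.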

And the job concern, the one people are least willing to say out loud, resolves differently than you expect.

When you embrace the shift

Here's the thing nobody tells you: the job gets better. Not easier, not simpler, but more interesting. More focused on the work that actually matters.

Focusing less on the code means I make more architectural decisions in a week than I used to make in a month. I run multiple agents in parallel and move between them making the judgment calls each one needs. I have time to think about the product in ways I never could when I was spending four days implementing and one day thinking. I refactor and rearchitect at a pace that would have been impossible when every extraction was a manual exercise.

The code was never the job. It was the understanding. The domain, the architecture, the users, the tradeoffs. The code was just how you expressed that understanding. Agents didn't make the understanding less valuable. They made it the whole job.

That shift feels like loss at first. Someone responded to my last post with something that stuck with me: "Writing code touches on very different muscle memory than reading it, and that's not muscle memory I'm willing to lose just yet." I understand that completely. It's not a bad argument. Coding is a craft, and the fear of losing a craft you've spent years developing is real.

But are we looking at it the wrong way? You don't lose the ability to understand code by stopping typing it. If anything, I understand our codebase more deeply now than when I was heads-down implementing, because I'm engaging with the architecture constantly instead of being stuck in one corner of it. The muscle memory that actually matters, the ability to reason about systems, to see how a change in one place affects three others, to know which approach sets you up for the next six months, that gets stronger, not weaker. What atrophies is the typing. And the typing was never the valuable part.

The deeper question is whether "at some point I'll want to write code again" is a prediction or a hope. I thought I'd miss it longer than I did.

The system reinforces itself

The teams that take the leap get something the teams standing still never will: a system that learns.

Every convention you add makes the next hundred tasks better. Every failure mode you capture becomes a constraint that prevents the same mistake everywhere, permanently. Early on you're catching obvious things constantly. Over time the system gets specific enough that agents rarely make mistakes your conventions don't already cover. Eventually new agents and new team members start with the benefit of every lesson the team has learned.
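As a sketch of that loop, assume conventions live in a simple append-only file that agents read before every task. The file name and schema here are invented for illustration:

```python
import json
from pathlib import Path

# Invented for illustration: a team's append-only lesson log. Each captured
# failure mode becomes a rule that every future agent run starts with.

def capture(rule: str, incident: str,
            path: Path = Path("conventions.jsonl")) -> None:
    """Record a lesson once; every later task inherits it."""
    with path.open("a") as f:
        f.write(json.dumps({"rule": rule, "from": incident}) + "\n")

def context_for_next_task(path: Path = Path("conventions.jsonl")) -> str:
    """Build the preamble an agent sees before starting any new task."""
    if not path.exists():
        return "No team conventions recorded yet."
    rules = [json.loads(line)["rule"]
             for line in path.read_text().splitlines() if line]
    return "Follow these team conventions:\n" + "\n".join(f"- {r}" for r in rules)
```

Nothing here is sophisticated, and that's the point: the compounding comes from the discipline of capturing lessons, not from the tooling.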

That compounding is the real argument. Not "AI is faster" or "AI is cheaper," though both can be true when the structure is right. It's that you're building a body of institutional knowledge that grows with every task and never walks out the door. No human team has ever been able to do that.

Where do senior engineers come from now?

If companies stop hiring juniors, and the data says they're hiring fewer, where do the next senior engineers come from? It's a legitimate worry. A 2025 LeadDev survey found 54% of engineering leaders plan to hire fewer juniors. The share of juniors in IT employment dropped from 15% to 7% in three years. That's not a trend. That's a collapse.

I think the conclusion most people draw, that there's no path for juniors anymore, is wrong. What's actually happening is that the entry criteria are changing, fast.

The traditional model brought juniors in and valued their ability to type. Write code, fix bugs, build features. The grunt work. Architecture and systems thinking came later, years later, if it came at all. Most juniors spent their first two or three years learning syntax, frameworks, and patterns through repetition. The understanding of why things were built a certain way came slowly, almost by accident, through enough exposure. I see this all the time in the DevOps world: people focus on how to write Terraform rather than on understanding the cloud systems they are deploying to.

In an agentic world, that inverts completely. The typing is handled. What matters from day one is whether you can think about systems. Can you understand why an architectural boundary exists? Can you evaluate a plan and spot what's missing? Can you reason about tradeoffs?

Teach a junior good architecture, good patterns, how to reason about domains and tradeoffs, and they can begin to be as effective as someone with a decade of typing experience, because the agents handle the implementation. The entry criteria shift from "can you write code" to "can you think about systems." That's not a worse foundation for growing engineers. It's a better one. We spent decades watching juniors develop bad habits during the years they were learning to type. Now they can start where it actually matters.

The companies that figure this out will have an enormous advantage. The ones that just stop hiring juniors entirely are building a gap in their pipeline that will cost them.

Leap or be pushed

Every role that touches software is going to change shape. Not just engineers. Product owners, designers, engineering leaders. The roles don't disappear. They shift toward the parts that were always the most important and away from the parts that were always the bottleneck.

You can wait. Let someone above you mandate it on their timeline, with their priorities, under conditions you didn't choose. That's what getting pushed looks like.

Or you can jump now, on your own terms. Build the structure. Push through the uncomfortable early days. Let the system learn. It's harder than waiting, but what you find on the other side is worth it.

I understand the fear. I had it. I jumped anyway, and it was the best decision of my career. I enjoy my job more now than I ever have.