How The Moat Is Moving
How AI is really changing software engineering and the tech industry.
We’re living through one of the largest wealth transfers in modern history, and most people don’t see it yet. I’d like to share some thoughts on what I’m seeing as it’s unfolding.
That’s not hyperbole. The last time capital moved at this scale and speed, it reshaped the global order. During World War II, Britain—having spent centuries consolidating wealth through empire—found itself forced to liquidate that position to survive. The US absorbed it. The Lend-Lease alone moved $50 billion in goods (nearly $700 billion in today’s dollars). In 1944, Bretton Woods unseated the pound sterling as the world’s reserve currency. Britain didn’t finish paying off its war debt until 2006—sixty-one years later.
The dotcom crash wiped out $5 trillion in market value between 2000 and 2002. The 2008 financial crisis destroyed $16 trillion(!) in American household wealth in eighteen months.
Where are we now? Nvidia alone has created over $4 trillion in market cap since 2023—more than the GDP of Japan, India, or the United Kingdom. One. Company. Projections put AI infrastructure spend between 2025 and 2027 at more than $1.15 trillion. I’ve already seen this play out in real time. AWS has struggled to commit to signing Bedrock reserved-compute contracts. They simply don’t have the compute to meet the demand.
Everyone loves to talk about bubbles, but the hype is real. This isn’t a bubble inflating. It’s capital and intelligence relocating.
The question isn’t whether the money is moving. It’s where it’s going—and who’s going to be left holding an empty position when the transfer completes.
I had a conversation with a friend recently that clarified the things I’ve been watching unfold for the past year. He asked me a lot of questions about AI—where it’s actually useful, what’s hype, what’s real, and the classic “Are we all just fucked?” By the end, I realized I’d talked my way through what I think is happening: the moat is moving. And most SaaS companies are holding a depreciating asset, just like Britain in 1944.
The uncomfortable truth is: if your business model is "we built software, you pay us monthly to use it," and you don't own any unique data, you might be fucked.
The SaaS Reckoning
That might sound hyperbolic or blunt, but the game and the stakes have changed for real in the last year.
Claude Code—and tools like it—is a genuine 10x force multiplier in the hands of a competent engineer. I really like Martin Fowler’s “Expert Generalist” description here. To be clear, this is not “10x more lines of code”; that’s the wrong metric. I mean: one or two skilled expert generalists can now build, in months, custom, bespoke solutions that replace SaaS products their company was paying six figures or more a year for. At a fraction of the cost.
Think about what that means for companies like Salesforce—or any platform whose customers can export their data and walk away, leaving it an empty shell. If your value proposition is the software layer and not the data underneath it, I’d be looking nervously at Chapter 11.
The companies that survive this will be the ones that own something irreplaceable—proprietary data, network effects, or integrations so deep they’re load-bearing infrastructure. Everyone else is selling software that’s getting cheaper to rebuild every quarter. In my opinion, the doomers talking about software engineering going away are fundamentally wrong. Builders who know the most about the software stack are now only limited by the number of hours they can ride Claude Code as a steed that’s simply fed by their ambition. Even that is putting it lightly. Agent teams and agentic workflows are in their infancy and growing stronger every quarter.
Agentic workflows will continue to loosen the temporal lock on this equation:
time + ambition = built
The new equation will look something like:
time + (agents(compute($)) * ambition) = built
Where agents is simply a function of access to money to buy compute. Compute will continue to improve, and so will the tools.
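As a toy illustration of the two equations above—every function name and constant here is a made-up assumption, not a measurement—the shift looks something like this:

```python
def built_solo(hours: float, ambition: float) -> float:
    """Old world: output scales with time and ambition alone."""
    return hours * ambition


def built_agentic(hours: float, ambition: float, compute_budget: float) -> float:
    """New world: agents, bought with compute dollars, multiply ambition.

    The $100-per-agent figure is an arbitrary illustrative constant.
    """
    agents = compute_budget / 100.0
    return hours * (1 + agents) * ambition


# Same engineer, same 40 hours: the compute budget becomes the lever.
solo = built_solo(hours=40, ambition=1.0)            # 40.0
leveraged = built_agentic(hours=40, ambition=1.0,
                          compute_budget=500.0)      # 240.0
```

The point of the sketch: with zero compute budget the two models agree, and every additional dollar of compute shifts the bottleneck away from hours and toward ambition.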
Security Gets Simpler (Counterintuitively)
My friend asked me about security, I think expecting me to say AI makes everything more insecure and this will all end in disaster. While that does hold true in some sense, the reality is more nuanced.
The more companies build in-house, the less attack surface they expose. A huge percentage of security incidents happen at the boundaries—cloud-to-cloud ingress and egress, third-party integrations, APIs talking to APIs, man-in-the-middle and DNS-poisoning attacks. If most companies switch to building internal tools that never leave the VPN, they’re not even on the radar of state-sponsored botnets scanning the internet for exposed services.
The security hardening work collapses down to one surface: the product you ship to web or mobile. That’s it. Bad actors getting into the network is a different problem, but that’s always been true.
This isn’t a security silver bullet. But as we’ve seen recently, Claude Code comes with security skills that provide more out-of-the-box security awareness than we’ve ever been able to teach in our industry. Building secure systems has been limited by engineers not understanding why and how exploits work, and by the difficulty of keeping security top of mind while building software. Now, agents come with it built in, every time.
The New Primitives
There’s a lot of noise about whether AI is actually useful. I find it depends almost entirely on how the person holds the tool—and whether they understand what they’re actually working with.
The “it’s just next-word prediction” dismissal falls apart immediately when you watch an LLM in skilled hands. Yes, it’s next-word prediction. It’s also really, really, spectacularly fucking good at it. (And it’s the worst it’s ever been right now!) I see folks get stuck in that way of thinking about it. “Well, it’s not 100% every time.” The point isn’t to get to 100%; this isn’t an all-or-nothing game. Training and correcting agents is more or less permanent in ways that don’t exist with humans. If you run it once and it fucks up, use your own reasoning and the agent’s reasoning to correct it, then run it again. Ride that loop into the sunset.
Anyway, here’s what matters: the primitives have changed. It used to be compilers, syntax, linters, servers. Now it’s models, context, prompts, and tool calls.
Understanding how context windows work is now a core engineering skill. What contributes to context noise? What’s useful versus useless context? How do you structure prompts to get consistent, high-quality outputs? Which instructions actually work with the LLM, and which don’t?
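A minimal sketch of what that skill looks like in code: greedily packing the highest-value snippets into a fixed token budget. The scoring scheme and the 4-characters-per-token estimate are assumptions for illustration, not anyone’s real API—in practice you’d use a proper tokenizer.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose and code.
    return max(1, len(text) // 4)


def pack_context(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily pick the highest-scoring snippets that fit within `budget` tokens.

    `snippets` is (relevance_score, text). How you score is the craft:
    recency, embedding similarity, whether the file sits on the change path.
    """
    chosen, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen
```

The design choice worth noticing: the budget is a hard cap, not a target. Leaving headroom under the window limit is usually better than filling it—which is exactly what the degradation numbers below suggest.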
And critically: context rot is real. Agents need tools to discover context in real time to avoid it.
Context generated yesterday is almost always stale today. Stale context often causes more damage than no context at all—it compounds over time, reflecting half-baked states, misnamed files, abandoned approaches and accumulated bullshit.
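One defensive pattern—sketched here with a made-up class name and TTL—is to treat cached context as perishable: past its shelf life, re-run discovery against the real world instead of serving yesterday’s notes.

```python
import time
from typing import Callable


class FreshContext:
    """Cache context with a TTL; once stale, rediscover rather than serve it."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._cache: dict[str, tuple[float, str]] = {}

    def get(self, key: str, discover: Callable[[], str]) -> str:
        entry = self._cache.get(key)
        now = time.time()
        if entry and now - entry[0] < self.ttl:
            return entry[1]
        # Stale or missing: go look at the real world again (re-read the
        # file, re-run `git log`), rather than reasoning from old notes.
        value = discover()
        self._cache[key] = (now, value)
        return value
```

With `ttl_seconds=0` this degenerates into always rediscovering—which, given how fast stale context compounds, is often the right default for anything an agent can cheaply re-derive.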
There’s a paper that quantifies this: “Context Length Alone Hurts LLM Performance Despite Perfect Retrieval.” The authors found that model reasoning accuracy dropped to roughly 78% with 50k tokens in the window, 61% at 100k, and 42% at 200k. Recall stays fine. Reasoning falls off a cliff. After 100k tokens, dumb mode activates.
I see this daily. Once context grows past 100k tokens and I ask for anything requiring real design thinking, the output quality craters. I’m skeptical that 1M-token context windows escape the same degradation—I’d love to see the data.
The Harder Problem: Adult Education
Everything I’ve described so far is technical. Context management, agentic architectures, the new primitives—these are learnable skills. Difficult, but learnable.
The harder problem is getting people to actually learn them.
I’ve been onboarding engineers into these tools for months now. The pattern is consistent: some folks get it immediately. They’re curious, they experiment, they start finding applications I hadn’t considered. Most land somewhere in the middle. They understand something significant is happening, but they’re overwhelmed, skeptical, or just too busy shipping features to fundamentally rewire how they work.
This is adult education at scale. Teaching old dogs new tricks, except the old dogs are talented professionals with years of hard-won expertise that suddenly feels less relevant.
We’ve had to be explicit with people: the expectations have changed. You are expected to use these tools. Not “encouraged.” Expected. And this isn’t isolated to one company’s culture or one manager’s preference—the entire industry is shifting. Any employer you move to will have the same expectation.
There’s real tension in this. People are being asked to ship product features on deadline while simultaneously learning an entirely new working paradigm. That’s not a small ask. It requires genuine self-motivation—curiosity, hunger, willingness to feel incompetent again after years of mastery.
I don’t have a clean solution for this. What I have is honesty: There is no secret sauce. I’m genuinely excited about this technology, and I’m picking it up as fast and as early as possible because I can see where the curve is heading. The engineers who lean in now will have a two-year head start on those who wait for it to become mandatory.
That gap will be hard to close. Sets and reps of experimenting and building right now will pay large dividends in the years to come.
Where This Leaves Us
The moat is moving. It’s not “we have software” anymore. It’s: we own unique, irreplaceable data; we’ve built agentic workflows that compound intelligence over time; we understand context as a first-class engineering concern.
Software engineering jobs are in jeopardy—not because AI replaces engineers, but because it amplifies the gap between engineers who understand these new primitives and those who don’t.
The next 12-24 months are going to be brutal for companies on the wrong side of this. And genuinely exciting for those who get it.
If you want to talk more about any of this, reach out. I’m happy to go deeper.

