The Meritocracy Subscription
AI won’t democratize software—it will commercialize productivity
There’s a lot of hate for AI right now. Some of it is even deserved. It hallucinates facts. It enables industrial-scale slop production. It gets things confidently wrong in ways that feel worse than ordinary ignorance, because the output looks so polished.
The skepticism shows up in the data, too. Executives are bullish—AI is transformative, a paradigm shift, a major investment priority. Meanwhile, most employees report minimal impact on their actual work. This disconnect isn’t just corporate myopia. It reveals something important about what AI actually does well versus what we expect it should do well.
What AI Actually Does
LLMs excel at executive processing tasks: deciding which approach to try, switching between contexts, weighing multiple solution paths at once. These are activities that scale with scope, not depth—the work that benefits from considering many possibilities in parallel rather than mastering one thing through repeated iteration.
What LLMs don’t do is learn from experience and refine through practice. They can’t discover a lesson on iteration five and apply it systematically to iterations six and onward. They don’t build expertise through repetition. They aren’t a “they” at all. Every response is essentially independent, generated from the same static training distribution with, at most, a modicum of extra context within a single thread.
The only way for them to get better at a specific task is for the commercial operator to bake that capability into the training data in advance. And there’s one big domain where they’re doing just that: writing code.
AI is good at writing software. Very good, and getting much, much better. Three years ago, GitHub Copilot was a glorified autocomplete. You’d accept maybe a third of its suggestions, correct another third, and reject the rest as useless nonsense. Functional, occasionally impressive, but wrong frequently enough to establish a baseline of reasonable skepticism in many developer circles.
Now? With sophisticated workflows, an experienced AI engineer can implement more in a day than they could in a week two years ago. Not incrementally better—categorically different productivity. Multiple agents running in parallel, one reviewing the code another wrote, a third refactoring for performance, a fourth auditing security, a fifth writing tests. Developers are orchestrating these systems the way an editor manages a publication, steering the direction rather than typing every character themselves.
Why does coding work so well, while other tasks don’t? Fundamentally, it’s because code has a tight feedback loop that doesn’t require iterative learning, just contextual decision making. The software either runs or it doesn’t. The tests pass or fail. The code compiles, or the build breaks. You don’t need to master any new lessons from previous attempts—you need to generate syntactically valid, logically correct text that satisfies formal constraints. That’s pure pattern matching at massive scale, which is exactly what LLMs are built for.
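That feedback loop is easy to make concrete. A minimal sketch, assuming model outputs arrive as candidate implementations (stubbed here as lambdas; a real pipeline would call a model API and execute generated source):

```python
# Why code generation suits LLMs: every candidate can be checked
# mechanically, so no iterative learning is needed -- just generate,
# verify, and keep whatever passes.

def run_checks(fn):
    """The tight feedback loop: the code satisfies the spec or it doesn't."""
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    try:
        return all(fn(a, b) == want for (a, b), want in cases)
    except Exception:
        return False

# Three hypothetical model attempts at "add two numbers".
candidates = [
    lambda a, b: a - b,   # wrong: pattern-matched the wrong operator
    lambda a, b: a * b,   # wrong: passes (0, 0) but fails the rest
    lambda a, b: a + b,   # correct
]

# Accept the first candidate that survives verification.
accepted = next(fn for fn in candidates if run_checks(fn))
```

The verifier does all the judging; the generator never has to remember what went wrong last time, which is exactly the division of labor LLMs can handle.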
But that massive scale comes with a price tag.
The Infrastructure Toll
Here’s where it gets expensive.
To achieve that 5x productivity gain, which elite practitioners are already surpassing, you need to burn through a lot of LLM usage. This isn’t the occasional query to answer a question. It’s dozens of agents running in parallel for hours, processing entire codebases, generating thousands of lines of suggestions, grinding through complex implementations.
Current usage limits make it impossible to maintain peak productivity for a full forty-hour week, even with top-tier subscriptions, so developers overflow into extra usage fees. Claude Code’s Max subscription runs $200 monthly, plus overages when rate limits are hit. And that’s just one tool. Sophisticated developers are using Cursor, Claude, GitHub Copilot, Codex, Grok, and specialized models for different tasks. Prosumer usage easily reaches $300-500 monthly, and often significantly more for the most intensive workflows.
For an engineer earning $200,000 annually, it’s worth it. To the company paying their salary, it’s an obvious investment—why wouldn’t you spend $500 monthly to get 5x output from a $200k employee?
But for the past decade, we’ve sold ‘learn to code’ as the great equalizer—the one skill that could lift anyone into the middle class regardless of background or credentials. And now we’re breaking our promise.
Learning that skill is no longer enough. Now it’s learn to code, plus pay $300-500+ monthly just to stay competitive, or get destroyed by peers who can.
The Democratization Inversion
The most excited, accelerationist AI advocates claim that soon anyone will be able to build software. They’re half right at best. The engineering skill requirement might become optional—it hasn’t yet. But the promised democratization isn’t coming.
The old model was straightforward: high barrier to entry, low barrier to compete. Learning to code was hard—really hard—but once you learned it, you learned it. The knowledge was yours. You could practice for free, build for free, compete for free. A kid in Bolivia with a $300 laptop and a spotty 3G connection could learn Python from open-source tutorials, build a portfolio on a free GitHub account, and compete for remote contracts on genuinely equal technical footing with developers in San Francisco. Zero marginal cost to deploy your skill. No monthly fees. No recurring expenses. Just you, your knowledge, and what you could build.
The new model inverts this completely: low barrier to entry, high barrier to compete. Even if they’re right that anyone can build software with AI, no training required, they’re missing the point. For anyone trying to build a career or business in software, AI hasn’t removed the need for coding skills. It’s added an LLM tollbooth.
The person who “doesn’t need to code” because AI does it for them isn’t competitive with the developer who is orchestrating half a dozen AI agents in parallel. The gap isn’t small. It’s 5:1 productivity, minimum, and widening. We’ve traded a one-time knowledge investment for a permanent subscription tax, priced in Silicon Valley margins.
The Compounding Problem
This isn’t like previous technological transitions. When CAD software emerged, it became a new requirement—but you could meet it once and be done. Learn SolidWorks, buy a license: you’re competitive. The productivity gain plateaued; the tool didn’t get dramatically better every six months.
LLMs are different. The tools improve with each release. The workflows evolve as developers discover that agents can review other agents, that sophisticated orchestration multiplies productivity beyond what any single tool provides. Developers keep discovering new prompting tricks and contextual techniques that improve accuracy and performance. The skill gap isn’t just “can you use AI” anymore. It’s “how sophisticated is your AI workflow, and how much can you scale it with API credits, subscription fees, and the latest release version?”
As each new model is trained with more data, on more GPUs, with more power, the costs keep rising. Commoditization might eventually bring prices down. It hasn’t yet. This isn’t a race; it’s a treadmill.
The developer who can’t afford the tools isn’t just slower. They’re unemployed. Once management sees what’s possible with AI augmentation, that becomes the new baseline. Everyone else is irrelevant. Software development has always been competitive—a marketplace of skills. But the competition used to be who can solve this problem better. Now it’s who can afford to solve this problem at competitive speed.
The Bolivian Problem
In 2015, an aspiring programmer in La Paz could learn for free, build on outdated, modest equipment, and compete on equal technical footing. Their lower cost of living meant they could even undercut US developers on price while maintaining great margins. Access to a $300 laptop wasn’t trivial—it represented a real barrier to entry—but it was the only one. Solve it and you’ve got a real shot.
Now, they still need the laptop, plus reliable internet for their entire working day, which remains spotty or expensive in much of the world.
They need $200 monthly in subscriptions, minimum—$2,400 annually, eight times the cost of that one-time $300 laptop. And they need it every year. Top-end usage runs the equivalent of two of those barrier-to-entry laptops every month.
That subscription cost might be 30-50% of gross income in many developing economies. Every month, forever, or you fall behind. It only looks democratic—anyone can talk to AI. But the barrier to performance has skyrocketed. “Talking to AI” isn’t the same as your competitors orchestrating multiple, parallelized AI agents, while you’re asking ChatGPT to debug your for-loop.
The skills are democratized. Free learning resources exist everywhere, better than ever. But the infrastructure is privatized. Expensive, recurring subscriptions that represent trivial costs for established professionals and prohibitive barriers for everyone else.
The Pattern We Keep Missing
This is the same playbook we’ve seen before, dressed up in the latest fashion. We told everyone they should own a home, then turned homes into tax vehicles and watched prices decouple from wages. We told everyone they should go to college, made degrees all but mandatory, and watched costs explode while relative value cratered. Now we’re telling everyone they should learn to code while simultaneously making it impossible to compete without paying monthly rent to AI megacorps.
The democratization rhetoric always obscures the infrastructure capture. We confuse access to training with access to opportunity. Anyone can learn Python for free on YouTube. But can they compete without the tools that multiply productivity 5x? The barrier isn’t knowledge anymore—it’s subscription fees.
What makes this particularly insidious is that in previous cases, you could at least finish. Get the degree, buy the house, learn the language. You paid the price and you were done. With AI subscriptions, you never finish. The treadmill never stops. Your competition will use the newest model, whatever the cost. Which means you will too, or you’ve already lost.
The Uncomfortable Economics
The structure is almost elegant. Free skill acquisition plus expensive skill deployment equals perfect conditions for wealth extraction. They’ve figured out how to let you build human capital for free, then charge rent on the ability to use it competitively. You can spend a thousand hours learning Python and JavaScript without spending a dollar. But to deploy those skills at professional levels? That’ll be hundreds of dollars a month in service fees, for the rest of your career.
This isn’t to say AI is bad. And it certainly won’t be an economic, or even employment, apocalypse. That’s always the prediction. And, predictably, it’s always wrong. Tractors didn’t end employment; they ended hunger. The old family farms got gobbled up, but the former farmers got cars and factory jobs. If people don’t have income, they won’t pay the AI companies, or buy the software that AI engineers create. The transition will be messy. Real people will struggle. But history rhymes, and it sounds like progress.
The real risk isn’t that AI will destroy everyone’s livelihoods. Some people are already 5x more productive with it. That productivity shows up somewhere: in higher pay for AI users, in corporate profits, in cheaper software, in all of the above. Someone benefits. The only question is who.
But what we should be skeptical of is the democratization. Because what we’re actually getting is stratification, between those who can pay the toll versus those who can’t. Not based on skill. Not based on merit. Based on whether you can afford the monthly fee to make your skills competitive in the marketplace. The end of the artisan engineer, and the dawn of factory farm coding. That’s what the real AI economy looks like.

