The Regulation Deficit
Everyone knows we have too many regulations. Nobody’s counting the ones we don’t have.
You’ve heard the stories. It takes longer to get a rocket launch approved than to build the rocket. Nuclear power is among the safest energy sources on Earth—fewest deaths per terawatt-hour of electricity, smallest land footprint, lifecycle emissions as low as wind’s—and it’s all but impossible to build, because of the paperwork. A new drug takes roughly a decade to get from lab to pharmacy, and the path runs through a regulatory gauntlet at every step. None of this is controversial. Left, right, and center all agree: we have too many regulations.
Good news. Regulations are easy to cut. They’re written on paper. You can find the ones that don’t work and rip them up. You don’t need to invent new technology or change human nature. You just need a committee with some spine, and the willingness to tell a bureaucracy that the form it’s been stamping for thirty years no longer needs stamping. You can even suggest where they shove the stamp. Difficult politically. Trivial mechanically. The deregulation problem is a solvable problem, and we should solve it.
But that’s not the hard problem.
The Invisible Gap
The hard problem is the regulations we don’t have.
Not in the progressively anti-progress sense—the reflexive demand for more rules on whatever feels dangerous on Bluesky this week. The Left wants more regulation on firearms and less on abortion access. The Right wants more regulation on immigration and less on energy production. Neither side is “pro-regulation” or “anti-regulation” in any consistent way. They’re pro-regulation for things they find threatening and anti-regulation for things they find acceptable. The entire debate is downstream of political coalition-building, which is why it never produces anything resembling a coherent principle.
The question upstream of that debate is simpler and more uncomfortable: which domains are currently ungoverned, which of them have realistic failure modes that spill externalities onto the rest of us, and what’s the cost of finding out the hard way?
That’s not ideology. It’s closer to what an actuary does. Map the territory, estimate the exposure, price the risk. And unfortunately, nobody’s doing it—not because it’s impossible, but because regulatory attention is a finite resource, and we’ve allocated most of ours to legislating which sports leagues should permit testicles.
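To make that concrete, here is a minimal sketch of the actuarial pass, with every number invented for illustration. The exercise is just arithmetic: price each ungoverned domain by its expected externalized cost, then rank where attention should go.

```python
# Toy actuarial triage of ungoverned domains. Every number here is
# hypothetical -- the point is the arithmetic, not the inputs.

domains = [
    # (name, annual failure probability, externalized cost if it fails, in $B)
    ("private credit default cascade", 0.04, 800),
    ("untested platform harms",        0.10, 150),
    ("sports league eligibility",      0.50, 0.01),
]

# Price each risk as expected externalized cost per year, then rank.
priced = sorted(
    ((name, p * cost) for name, p, cost in domains),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, expected_loss in priced:
    print(f"{name:35s} expected externalized cost: ${expected_loss:7.2f}B/yr")
```

Even with made-up inputs, the ranking illustrates the allocation failure: the domain with the most legislative attention sits at the bottom of the list by orders of magnitude.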
The Known Unknown
After 2008, we rebuilt the banking system’s guardrails. Banks now hold more capital in reserve, submit to stress tests, and operate under oversight designed to ensure that a wave of bad loans can’t cascade into a global meltdown ever again. Whatever you think of Dodd-Frank, the basic logic was sound: if your lending blows up, and taxpayers are on the hook for the fallout, then someone needs to be watching the books.
Which is why the banks moved their money off the books.
Private credit—a term most people have never heard, which is itself the problem—is what happens when lending migrates out of banks, which have to hold capital against their loans and answer to regulators, and into private investment funds that don’t. It started small. Over the past fifteen years, it’s grown to roughly $2 trillion. To put that in perspective, that’s larger than the entire subprime mortgage market was in 2007.
The mechanics are simple enough. A company needs to borrow money. It’s too leveraged or too risky for a traditional bank, which would have to hold capital against the loan and justify it to regulators. So instead, a private investment fund makes the loan. But the fund isn’t playing with rich people’s mad money. No, it raised its capital from pension funds, insurance companies, endowments—institutions managing the retirement savings of teachers and firefighters and state employees. You know, the pensions that you’re already on the hook for. The loan sits on no bank’s balance sheet. No regulator stress-tests it. No public filing discloses the terms. The company gets its money, the fund earns a fee, and the risk settles quietly into the retirement accounts of people who have no idea it’s there. And if it goes bad? Well, heads they win, tails you lose. Remember too big to fail?
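A toy version of the arithmetic, with entirely hypothetical figures (the fund size, default rate, and loss severity below are invented), shows how a bad year in a fund nobody supervises lands directly on the pensions:

```python
# Toy pass-through of private-credit losses to the fund's backers.
# All figures hypothetical; the mechanics, not the magnitudes, are the point.

fund_assets = 50e9            # $50B of loans sitting on no bank's balance sheet
pension_share = 0.60          # fraction of fund capital raised from pensions
default_rate = 0.10           # share of loans defaulting in a bad year
loss_given_default = 0.50     # cents lost per dollar of a defaulted loan

losses = fund_assets * default_rate * loss_given_default
pension_hit = losses * pension_share

print(f"Fund losses:   ${losses / 1e9:.1f}B")
print(f"Pension share: ${pension_hit / 1e9:.1f}B")
# No stress test saw this coming, because no stress test was ever run:
# the loans were never on a balance sheet any regulator watches.
```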
This is by design. The money moved specifically to escape oversight, and the oversight didn’t follow. When the default cycle comes—and it will, because that’s what cycles do—the losses won’t surface at a bank, where regulators can see them coming. They’ll surface in pension funds and insurance reserves, and the people holding the bag will be the last to find out.
Everyone who was around in 2008 remembers the sickening discovery that nobody knew who owed what to whom. We’re building that exact opacity again, in a market most people don’t know exists, supervised by no one in particular. The reason nobody is regulating it isn’t that someone weighed the risks and decided it was fine. It’s that it grew up in a space where the rules weren’t, and the people shouting for regulations don’t know what’s actually risky.
The Unknown Unknown
Private credit is at least a risk you can point at. Somebody knows what it is, even if most people don’t. The harder category is the risks nobody’s identified yet—not because they’re unidentifiable, but because we don’t know what we don’t know.
Facebook ran internal studies showing that Instagram was damaging teenage mental health, especially for young girls. This is usually told as a scandal: they knew and they hid it. But step back and notice the stranger part. Nobody required them to look. No regulation mandated the study. Facebook did the research on its own, and when the findings leaked, the reward for having investigated their own product was a congressional hearing and a billion-dollar PR crisis.
Now imagine you run the next platform. The lesson is unmistakable: if you study your product’s effects and find something bad, you’ll be punished for knowing. If you never look, you never knew. Which means you can never have concealed anything. The rational move is to not ask the question. The public flogging goes much easier that way.
That’s where we are with most of the technologies reshaping daily life. The companies that could study their own effects have every incentive not to, and nobody outside those companies has the data to do it instead. Many of them still run the studies anyway. These are, mostly, good people trying to build good things. But the incentives increasingly punish exactly that behavior.
The obvious alternative is mandatory disclosure—force companies to hand over their data so researchers can look for harm. But think about what that actually means. It’s someone showing up at your door and saying “give me access to everything—all your proprietary research, all your internal metrics, all your user data.” You ask what they’re looking for. They say: “We won’t know until we find it.” That’s not a workable basis for policy in any domain. It’s a fishing expedition, and everybody knows it. Which is why it never gets past the hearing stage.
So the unknown unknowns stay... unknown. Not because they can’t be found, but because the incentives encourage us not to look too closely. Social media’s impact on adolescent mental health wasn’t some deep mystery that required a scientific breakthrough. It required having a teenager. We could have started asking the question before a generation of children served as the experiment. Nobody asked. Not because they couldn’t. Because nobody’s job depended on it, nobody had the authority to demand the data, and the people who did have the data learned that revealing what they found was more dangerous to them than not looking.
The Response Gap
Even when problems do surface, the response time is measured in decades.
The systemic overprescription of opioids was visible in the data by the early 2000s. Meaningful federal action didn’t arrive until 2018. By then, half a million Americans were dead. Research linking social media to adolescent psychological harm existed by 2017. Congress is still holding hearings. The pattern is always the same—a problem emerges, a decade of committee testimony follows, legislation eventually passes, calibrated to reliably solve the crisis from twenty years ago instead of the one developing next. And just like with private credit, by the time the rules arrive, the industry has already migrated to whatever the rules don’t cover.
The lag isn’t a failure of will. It’s structural. Regulatory agencies are organized around industries that existed before those agencies were created. The SEC watches securities. The FDA watches drugs. The FCC watches communications. When something new doesn’t fit into any existing bucket—and the interesting things never do—it lands in a jurisdictional gap. Nobody has clear authority. Nobody has allocated budget. Nobody’s career depends on figuring it out. Prediction markets sit somewhere between gambling, securities, and opinion polling, which means several agencies could plausibly claim jurisdiction and none of them clearly do.
The result is an institutional system built to perfectly fight the last war, every time, a decade after it’s been lost.
The Actual Opportunity
AI lands in the middle of all of this, and the familiar arguments have already started. Regulate it now, before we know what it does. Leave it alone until we do. Both positions replay how every previous technology was handled, and there’s no reason to expect different results from the same script.
But AI has one structural property that none of the previous technologies did, and it’s worth taking seriously rather than waving at. A chemical plant can’t simulate its own explosion before it happens. A social network can’t predict what it’ll do to teenagers before the teenagers sign up. The risk assessment for every previous technology had to happen after deployment, performed by humans, slowly, with incomplete data. That’s why the lag exists. Not because regulators are lazy—because the problem can’t physically be understood until after the damage starts.
AI systems can, at least in principle, model their own failure modes before deployment. They can stress-test scenarios at a speed and scale that no congressional committee or regulatory agency could match. They could even run adversarial tests against one another, each lab incentivized to surface its rivals’ flaws quickly while racing to patch the same weaknesses in its own products. That doesn’t mean they will. But it’s a genuine structural difference—the first technology in history that could help write its own safety manual.
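As a sketch of the shape of that loop (the red_team and target functions below are hypothetical stubs, not any real model API): one system generates failure scenarios, another is probed, and the findings are logged before anything ships.

```python
# Minimal sketch of cross-model adversarial testing. Both functions are
# hypothetical stand-ins for real model calls.

def red_team(seed: int) -> str:
    """One model proposes a failure scenario. Stubbed for illustration."""
    scenarios = ["prompt injection", "data leak", "reward hack"]
    return scenarios[seed % len(scenarios)]

def target(scenario: str) -> bool:
    """The other model is probed; returns True if it withstands the scenario.
    Stubbed: assume it fails on injection-style attacks."""
    return scenario != "prompt injection"

# The loop a regulator could mandate: generate, probe, log -- before
# deployment, at machine speed, with findings disclosed by design
# rather than by leak.
findings = []
for seed in range(6):
    scenario = red_team(seed)
    if not target(scenario):
        findings.append(scenario)

print("Pre-deployment findings:", sorted(set(findings)))
```

The stubs are trivial on purpose; the structural point is that the generate-probe-log loop runs before deployment rather than a decade after.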
The opportunity isn’t “regulate AI proactively” in the usual hand-wringing sense. It’s narrower and more interesting than that. The question is whether AI gives us a tool to compress the response gap—to shrink the decade between “we discovered a problem” and “we have a framework for it” down to something closer to the speed at which problems actually develop. Not writing rules for hypothetical risks. Building the institutional capacity to react in months instead of decades when the next unknown becomes known.
The Constraint
Building that capacity would require something the American regulatory system has never once demonstrated: the willingness to pay attention to problems that haven’t yet produced a political crisis. Every major regulatory body was created in response to a disaster. The SEC after the 1929 crash. The EPA after rivers literally caught fire. OSHA after enough workers died that inaction became more politically costly than action. The institutional DNA is reactive. We build the fire department after the fire.
The deregulation crowd is right that we have too many rules. They’re wrong that the answer is simply fewer. The answer is better-allocated attention—less time relitigating whether the 872nd rocket launch needs eighteen months of environmental review, more time asking whether anyone is watching the $2 trillion lending market that exists specifically because no one is.
That reallocation is harder than either side’s bumper sticker. Cutting rules is popular, at least when they’re rules you’d like to break. Building new capacity for risks that haven’t blown up yet is not—the beneficiaries are invisible, the costs are immediate, and no politician ever won an election by preventing a crisis nobody noticed was averted. Every incentive in the system points toward waiting for the disaster, then acting shocked.
We’ve run this experiment before. We know how it ends. The question is whether the tools exist, for the first time, to change the ending—and whether the institutions that could use those tools will be built before the next obvious-in-retrospect catastrophe that everyone saw coming and nobody prevented.
The regulations we have too many of are a nuisance. The regulations we don’t have are a risk. One of those problems is expensive. The other is dangerous.

