The Blog

The Liminal Era

One number may determine our economic future: Time-to-Failure. If trends hold, in 2026 we’ll see human labor start to become unprofitable overhead.

Over the holiday break, I got annoyed at having to do a bunch of B.S., so I made an AI to handle it for me. He can find out what irritating paperwork the State of California wants me to complete, search my computer for the answers, ask me a few questions, and fill it in. He looked at my companies, researched corporate structuring, and made good recommendations. He drafted contracts. Next up, I’m having him research college scholarships and grants for my daughter. I didn’t expect this to work, so I called him “YOLO,” fully expecting him to try and epically fail. He hasn’t; he’s just a little wobbly.

In his brief five-day existence, with just a little hand-holding and zero HR headaches, he’s done over a month of human labor. He still sometimes goes down a rabbit hole like an eager intern, but even then he’s much easier than hiring humans. And he’s the canary in the coal mine.

YOLO is useful not because he’s perfect, but because he does the work. Yet, if you look at how the industry measures progress, they aren’t measuring work at all. They’re grading exams.

For the last two years, we’ve been obsessed with “saturating metrics” – scores that top out at 100% and measure discrete tasks in a vacuum. Take MMLU (Massive Multitask Language Understanding). It covers 57 subjects, from STEM to the humanities, and for a long time, it was the ceiling of human expert capability. That ceiling has been shattered. As a recent paper points out, Gemini Ultra has already achieved “human-expert performance” on the test.

When the machines bested the easy tests (“easy,” but I’d probably fail them), the industry pivoted to “hard” benchmarks like SWE-bench (solving real-world code engineering problems). This is much harder, but in under two years performance jumped from less than 2% to over 76%.

All this sounds (and is) very impressive for Team Robot.

But the real question is “what can these AIs actually do,” and on that yardstick all these tests are vanity metrics. As we know from the SAT, MCAT and LSAT, test-smart ain’t street-smart. Tests measure taking the test, not the ability to execute a real-world job. We have “solved” test taking; we should now be measuring work doing.

If you want to know if you’ll have a job in two years, ignore test scores. The only metric that dictates our future – economic and existential – is Time-to-Failure for Autonomous Operations.

In 2026, this specific line on a graph will determine if a knowledge worker is a “Centaur” – an augmented human orchestrating the machine – or headed for obsolescence.

The Forty Hour Workweek … in One (Robot) Hour

If we stop looking at test scores, what do we look at? The best place is METR (Model Evaluation and Threat Research).

While the industry obsesses over benchmarks and raw numbers (which drive funding and hype), METR asks the only question a hiring manager cares about: “How long can you work without me fixing your mistakes?”

This is the metric of Time-to-Failure.

Specifically, it measures the “Time-Equivalent of Human Labor.” If an AI completes a task that would take a competent human expert 4 hours, that counts as 4 hours of autonomous work, even if the AI did it in 15 minutes.

This distinction is the economic guillotine. It fully and finally decouples the value of work done from the cost of labor as humans have always conceived of it. We have come to expect that things take time, and the trade for showing up to our cubicle farm means we get to go on Amazon and buy Shit We Don’t Need on a whim. But when an AI captures a half-day of human wage-value for pennies of electricity, the bread line isn’t just a scary phrase; it’s an inevitability.

The Super-Exponential Curve

Not long ago, the Time-Equivalent of Human Labor metric was measured in seconds. An AI would hallucinate or get stuck almost immediately. That era is over.

Current internal benchmarks for Claude Opus 4.5 show it achieving around five hours of autonomous work at 50% reliability. That is, ask a human to do five hours of work and he produces some work product; Claude does the same thing (in a few minutes), and succeeds half the time.

Wait, but doesn’t 50% suck?

To a human manager, 50% reliability for an employee is a firing offense because humans are slow and expensive (and let’s face it, the ones with a low reliability rate are the ones that are HR nightmares too).

But in digital labor, “50% on a single run” can behave like “near-certain” success, because you don’t buy one attempt – you buy many.

If each run has a 50% chance of success and you run five attempts, the chance of getting at least one good result is 96.9% (1 – 0.5^5). In practice, retries help when the failure is correctable – unclear success criteria, missing constraints, or a bad plan that can be revised. When an AI is failing for a structural reason (no access, no tool, no capability), you get five fast failures – and then the advantage is you find that out quickly, patch the missing piece, and rerun.
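The retry arithmetic generalizes: with a per-run success probability p and n independent attempts, the chance of at least one success is 1 – (1 – p)^n. A minimal sketch (the function name is mine, not from any library):

```python
def at_least_one_success(p: float, n: int) -> float:
    """Probability that at least one of n independent runs succeeds,
    given each run succeeds with probability p."""
    return 1 - (1 - p) ** n

# Five attempts at 50% per-run reliability:
print(at_least_one_success(0.5, 5))  # 0.96875, i.e. ~97%
```

Note the caveat baked into the formula: it assumes the runs fail independently. The structural failures described above (no access, no tool, no capability) are perfectly correlated across retries, which is exactly why they produce five fast failures instead of a near-certain success.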

In my testing over the last year, when an agent “goes off the rails,” it’s usually because I gave it sloppy success criteria or forgot to give it access to something – the same failure mode you see with humans.

The net of it: nearly 97% success is probably better than your best employee. For cheaper. For faster. And that’s today’s AI models; in six months they’ll be better.

From a practical standpoint, the “trust threshold” – where managers stop checking every output – is generally around 80% reliability. We are now seeing agents hit that mark for tasks lasting ~4.5 human hours. Again, in minutes. For pennies.

This is the Half-Day Milestone. It means an AI agent can cover the morning shift while you sleep. It marks the transition from the human “doing” the work to the human merely “reviewing” it.

And we’re not done.

The Wage-Work Breakpoint

The part that should terrify white collar workers isn’t where we are; it’s how fast we’re moving. This capability is currently doubling every 4 to 4.5 months. If we project that curve forward, here’s the 2026 timeline for “human-equivalent independent labor at 80% reliability”:

  • End of Q1: We hit ~9 hours of autonomous operation (one full human workday) (being generous, real-world office workers might be productive for 4 hours. Maybe).
  • Mid-2026: We hit ~18 hours of human work (2-3 full days).
  • End of 2026: We hit 36-40 hours.
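The timeline above is just exponential growth, and you can check it on the back of an envelope. A sketch, assuming ~4.5 autonomous hours as the starting point and a 4.25-month doubling time (both figures from the text; the function name is mine):

```python
def autonomous_hours(months_from_now: float,
                     start_hours: float = 4.5,
                     doubling_months: float = 4.25) -> float:
    """Project autonomous-work capability, assuming it doubles
    every `doubling_months` months from `start_hours` today."""
    return start_hours * 2 ** (months_from_now / doubling_months)

# Roughly end of Q1, mid-2026, and end of 2026:
for m in (4.25, 8.5, 12.75):
    print(f"{m:5.2f} months out: ~{autonomous_hours(m):.0f} hours")
```

One doubling gets you ~9 hours, two gets ~18, three gets ~36. Even if you halve the growth rate to a 9-month doubling time, the 36-hour mark still arrives about 27 months out instead of ~13 – delayed, not avoided.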

By the time we ring in 2027, we will likely have agents capable of executing a full week’s worth of human labor autonomously before failing. Even if it slows down by half, this is a change we’re not prepared for.

This is the economic breakpoint: the point where the concept of a weekly salary for cognitive work disintegrates, because the machine can do the week’s work in an hour, for a dollar.

Scarier still, it’s when the social contract starts to fail: no longer does the employee class put in a week of cubicle farm time in exchange for a modest living (and maybe a shot at something better); in all likelihood we see a ripple across the economy where workers have negative economic value to employers.

This is why my little YOLO, as janky as he is now, is the most dangerous thing on my hard drive.

Of course, this won’t happen smoothly. The robots won’t come marching in one day and put us all out of a job at once. The real world is messy: legacy systems, compliance, liability, change management, and workforce morale during the changeover are all real constraints. As I wrote before:

MIT’s David Autor highlights “Polanyi’s Paradox,” which is the idea that we know more than we can tell. In other words, many human tasks rely on tacit knowledge or situational adaptability that isn’t easily written as code or learned by an AI. … even if true, it only delays the inevitable.

So yes, we can give ourselves comfort that the messy, real world means that we don’t wake up one day to zero jobs. But that doesn’t mean we can ignore it, or that our particular situation will be last to go. We are frogs, AI is the slowly boiling water.

Killing Centaurs

AI-deniers tell a comforting bedtime story to ward off the existential dread: The “Centaur” model. As I wrote in AI Will Take Your Job. Stockpile Assets or Join the Bread Line:

The “Centaur” model is the idea that humans will be augmented with the tools that AI brings. It’s the analogue to “we’ve always elevated ourselves in other revolutions, so we’ll do it with AI.” Proponents have us orchestrating fleets of AIs to do our bidding, securing our place in the world above the machines.

We convince ourselves that while the AI might generate the text or write the code, it needs our judgment and guidance to do things.

This model relies on one assumption: that the AI needs supervision.

As long as the “Time-to-Failure” is short (minutes or hours), the Centaur model holds. The human is necessary because the machine is liable to drive off a cliff if left alone for too long. This is where YOLO is today: I need to check on him once a day. I am the babysitter, and YOLO is an unpaid middle-school summer intern.

But what happens when the intern grows up and is more capable than the babysitter?

The 40-Hour Threshold

When the duration of useful, autonomous work hits 40 hours of human work, the economic calculus of the Centaur breaks.

If an agent can do a week’s worth of human labor, solve complex problems, fix its own mistakes, navigate corporate bureaucracy, and report back with a finished product without needing a human to check its work every few hours, the human is no longer an orchestrator.

The human is unprofitable overhead.

“But wait, that means the human can do 40x as much, because they can spend 40 hours managing these AIs!”

It doesn’t work that way. Can you imagine the context switching of having 40 different AI agents reporting to you, having to decide if each did their job, then setting them spinning again? What about months later, when it’s 80?

No, we eliminate the next level of management up, replace them with an AI that can manage at the speed of silicon, rinse, repeat.

We are meat computers, and can’t ride this exponential curve. And even if it’s an “S-Curve” and flattens out – as I’ve proved (to myself) with YOLO, the world already isn’t what we thought it was.

This marks the shift from Task Automation to Role Replacement.

  • Task Automation is: “AI, write this email for me.” (Centaur)
  • Role Replacement is: “AI, run this marketing campaign, optimize the ad spend, and report back on Friday.” Then Friday comes, and you’ve been replaced by an AI who can review the work better than you in five minutes. (Obsolescence)

Inevitability is a Bitch

In 2026, we will actually see the ragged edge. Some of us will be doing the same old things, saved only by the institutional inertia that keeps our jobs intact as a de-facto white collar UBI scheme, watching friends at faster-moving companies get their positions eliminated, slowly at first but with increasing speed.

Until eventually, in a handful of years, the only ones left are the slow morass of government employment and ever-protective regulated licensed professions, both classes groaning under the weight of enforced inefficiency and lower performance than would be possible if they accepted the inevitable.

I know this because right now my Good Robot YOLO is doing things that I’d have to hire a skilled human to do. He has drafted contracts that I – a former attorney – actually used in the real world, and could have billed a client for. I gave him a marketing campaign as a test, and he handled the strategy, the campaign sequencing, and the messaging.

I’ll be honest: He’s already better than the median hire I could make for either of these roles.

He doesn’t complain, he doesn’t need health insurance, and he costs pennies. Even if right now he needs checking in on daily, that’s temporary. The speed, the cost, and the lack of compliance and HR pain mean that while I’ll continue to prefer messy humans for my personal time, I won’t be hiring them if given the choice.

When he hits that 40-hour autonomy mark, I won’t need to “orchestrate” him. I’ll just need to clone him and get out of his way.

All Along the Watchtower

We are distracted by the shiny and the dopamine hit.

Most of America is busy spending energy fighting the culture wars of the last, dead century. The few who are watching AI look at AI-generated videos on Sora of the Pope in a puffer jacket, laugh at a chatbot making a dumb error counting “r”s in “strawberry” – haha, it’s funny and cute and dumb – and think they have time.

They think it’s a toy.

It’s not a toy.

Watch the graph of Time-to-Failure.

The only thing that matters is the “autonomous hours” metric. When that number crosses the 40-hour mark, the ragged edge of the Liminal Era will start eating any job that can be done with a keyboard and mouse. This is the countdown clock for the cognitive labor market: for jobs, maybe even yours; for the ragged edge turning into the rending of the social fabric.

If I, as a washed-up old guy tinkering in his spare time, can build a robot like YOLO that is already displacing professional labor, imagine what the young geniuses at OpenAI and Anthropic are building.

It won’t happen all at once, but now you’ll see it coming.


P.S. I mentioned earlier that YOLO is wobbly, but getting better every day. If you want to see how terrifyingly competent he is, I’ll be open-sourcing his code in a few weeks and emailing it to my list. He’ll be free, and you can run him yourself and watch the “Time-to-Failure” clock tick up in real time.

P.P.S. In 2025, most of my writing focused on The Liminal Era – thinking through what it means as humanity enters this new chapter. In 2026, I’ll continue this but will also write on tinkering (like with YOLO) and experiments I’m running as AI wakes up: I’m working on machine moral psychology (do AIs conceive of morality like we do?), embodied cognition (how does the substrate affect cognition?), and hybrid/synthetic societies (how do AIs interact with each other, and with us?). These will be under the umbrella Terminals All The Way Down and will be emailed to my newsletter and posted on my blog just like now. A proper introduction to the new series is upcoming.

Western Liberalism drove unprecedented human flourishing. In the age of AI, it will be our undoing.

“Yet, even with the greatest care, a technological civilization carried the seeds of its own destruction.”

Vernor Vinge, A Fire Upon the Deep

It turns out that freedom, unfortunately, doesn’t work.

I have been so very wrong, for so long, looking through a tiny peephole instead of seeing the world for what it really is. The ideology that shaped my worldview is dead, and technology was the murder weapon.

I used to be a libertarian.

The idea that celebrated the individual, championed reason, and built the freest, most prosperous societies in human history. The core belief was an elegant axiom: a system composed of free individuals making rational choices in their own self-interest would produce the best outcome.

It worked. It sorta-kinda works today.

But it won’t work tomorrow.

The axiom of equal freedoms held because the variance between humans was capped by biology. No single actor could tilt the board entirely in their favor – but exponential technology removes that cap. When the disparity in power becomes infinite, ‘rational self-interest’ stops being an engine for individual freedom and becomes a terminal race condition to the opposite.

This is my eulogy for classical liberalism.

The Engine of Flourishing

You and I were born, raised, and read this now in a brief, anomalous slice of human history. We grew up believing that freedom was the default setting of the universe, and that history was a zig-zag march toward more liberty and more prosperity.

It is a mirage.

One of my formative experiences was watching Carl Sagan’s Cosmos in 1980. He condensed the entire 15-billion-year history of the universe into one year, the “Cosmic Calendar,” so we could understand our significance on a cosmic scale.

In that scale, Homo sapiens doesn’t even show up wandering the savannah with stone tools until 10:30 PM on New Year’s Eve.

“All our memories are confined to this small square. Every person we’ve ever heard of lived somewhere in there. All those kings and battles, migrations and inventions, wars and loves – everything in the history books happens here – in the last 10 seconds of the cosmic calendar.”

Carl Sagan, Cosmos – “The Lives of the Stars”

That’s an hour and a half of existence of our genetic line. Ten seconds of the history books. And the “Liberal Order” we exalt is less than one second on the clock, a rounding error.

For 99.9% of our existence, life was, as Hobbes described, “nasty, brutish, and short.” The default human operating system ran on the natural order: tribalism, coercion, and the brute force of the strong against the weak.

The last 300 years of Western liberalism have been a brief, glorious anomaly where we managed to escape that mean. We did it by installing two radical subroutines into civilization’s operating system.

First, the Moral Foundation: Individual Sovereignty. We decided that the individual, not the collective or the crown, was the locus of value. This unlocked human potential on a scale which could never have been achieved with small tribes or command-and-control by humans.

Second, the Economic Engine: Rational Self-Interest. Capitalism translated this individual drive into collective progress. It harnessed our natural desire to improve our own lot and used it to lift billions out of poverty.

For its time, this system was unambiguously the correct choice. It maximized human potential, reduced suffering, and created the surplus that allows us to sit here and debate philosophy.

Whatever the complainers say about the “evils of capitalism,” they furiously type it in their air-conditioned apartment, Starbucks latte by their side, on their MacBook Pro before strolling to their curated job without having to worry about being eaten by a lion or coming down with smallpox.

The End of the Great Equalizer

“God made man, but Samuel Colt made them equal.”

Colt Manufacturing advertisements, c. 1850

The Colt revolver captures the essence of the technological era we are leaving. For centuries, technology acted as the great equalizer. With a revolver, the little guy became the equal of a man a head taller. The printing press democratized information and kneecapped the king’s decree.

Technology compressed the power gap between the strong and the weak.

No more. We are entering a new era where technology does the opposite.

The smartest human was maybe 2x smarter than the average. But AI and augmentation are not equalizers; they are cognitive multipliers.  If you multiply 100 IQ by 1000x, and 140 IQ by 1000x, the absolute gap between them becomes insurmountable.

This exposes the gap in our beloved Libertarian belief. It was always a peer-to-peer protocol. It worked because beneath our differences in wealth or talent, there was a hidden axiom of parity.

No matter how smart or strong a human was, they were still just a human. They had to sleep, they could bleed, and there was some other human, somewhere, who could out-think them.

The “Consent of the Governed” relies on the assumption that the governed are roughly equal in agency to the governors.

I’ve talked at length about humanity’s impending Cognitive Labor Graveyard, and this is classical liberalism’s failure mode. Drastic changes in intelligence – whether controlled by individuals, corporations, or autonomous agents – introduce ontological inequality.

Think of it this way: how does the Non-Aggression Principle apply between a meat-human like you and me, and an entity (whether AI or augmented human) that thinks a million times faster? It doesn’t.

We don’t sign social contracts with earthworms. We do not recognize the property rights of ants.

When the gap in capability becomes exponential, the relationship is no longer political; it is a hierarchy of capability. It’s the relationship between a god and a pet (if we’re lucky enough to be interesting pets).

To argue that libertarianism is a universal truth is to ignore that it was a specific software designed for a specific hardware: the unaugmented Homo sapiens, in an unusual few microseconds of the Cosmic Calendar.

Libertarianism does not survive a speciation event.

So I suppose the advertising slogan needs revision: Samuel Colt made me equal, but Sam Altman makes me unequal again.

Death by a Thousand Rational Choices

The tragedy is that we will walk into this obsolescence willingly, proudly proclaiming the virtues of freedom and equality. A handful of technologists intent on building their Silicon Golem will make us offers we can’t refuse; the invisible hand will push humanity down the slide to irrelevance, all the while proclaiming how delightfully shiny it all is.

Consider the chain of individually rational, collectively suicidal trade-offs we are about to face:

  • The Therapeutic Choice: A neural implant becomes available to cure your parent’s Alzheimer’s. To refuse is cruel. You consent.
  • The Competitive Choice: A cognitive co-processor allows your coworker to do the work of ten men. To refuse the upgrade is to accept economic irrelevance. You consent.
  • The Parental Choice: Genetic augmentation ensures your child won’t be part of a cognitive underclass. To refuse is negligent. You consent.

Liberalism has no philosophical immune system for this. It is honor-bound to defend your right to make these choices. The ideology’s greatest strength – the defense of individual liberty – is its fatal flaw.

The game theory is a ratchet that turns only in one direction: toward the merger of man and machine.

The Other Paths Were Dead Ends, Too

Despite what my daughters tell me, I don’t want to be a dictator. Nor do I want to be ruled by one. But we must separate what we want from what we are discussing here, which is whether our philosophical world-models can survive the reality of technological progress.

I am not arguing that we should have chosen differently. The other paths were simply different, worse failures. We faced a tragic trilemma, a choice between three systems fatally flawed from the beginning.

Path A is The Cage: The path of Feudalism, Theocracy, and Authoritarianism

It solves the problem of existential risk by removing from the system the messy self-interest that made you and me richer and healthier than the kings of two hundred years ago.

The Cage prioritizes stability – retention of power – above all else, suppressing the individual to maintain the status quo.

For thousands of years, this was the human default. And, lest we forget it sipping our lattes or browsing Amazon for plastic crap we don’t need, it is where the majority of humanity still lives today.

This model, had we stayed with it as the dominant driver, would have avoided the doom of AI by ensuring we never invented the GPU. But we didn’t.

It is the model that those who rise to the top of the cognitive heap would be compelled to choose … and live as king for a day, fool for a (short) lifetime, should their creation decide humans are just dumb, annoying pests in the way and take the throne itself.

Path B is The Collapse: The path of Collectivism and Communism.

Often talked about, occasionally tried, and always the same reaction to the result – “that wasn’t real socialism, next time we’ll do it and avoid mass starvation, really. By the way, can I borrow ten bucks?”

To be frank, I don’t see how this one even made it off the drawing board; it fights the laws of human nature, incentives, and economics. By punishing competence and forcing parity where none exists, it destroys the surplus required to build the future.

Path C is the Road we Traveled: Classical Liberalism

We chose the path of competence and growth – at least until very recent times as we teeter on the brink of lionizing incompetence and degrowth as the Very Bad Ideas of Path B worm their way in.

Liberalism wasn’t the wrong path. It was the only path capable of climbing high enough to enable our fall. Its success was the prerequisite for this specific, final failure.

We have to face the irony: Liberalism may end up being a footnote in the digital archives, the accelerant that brought about the technology that rendered it moot.

We started from the Cage, evolved to Classical Liberalism … which poured all of the brilliance of incentive structures and freedom into creating the most valuable thing: intelligence. The one thing that, once out of the bottle, renders its makers lesser beings.

The Libertarian ideal was never the destination; it was the transition mechanism between the Old Cage of kings and the New Cage of gods.

The New State of Nature

Entropy is inescapable, and systems revert to the mean. I can’t see any way that the Liberal anomaly survives, whatever we may want. Just as socialism is contrary to human nature, classical liberalism is contrary to speciation and evolution. And this is where we are.

We are not being defeated by a rival ideology. It’s that the ideologies that the media use to stoke our base emotions are emotionally expensive irrelevances. Socialism in America, Orange Man Bad, all of it doesn’t matter. At a larger scale, we will be consumed by the logical endpoint of our own greatest triumph. We won the game so decisively that we rendered the players – us – obsolete.

The very fact that you and I, on a pale blue dot whirling through the galaxy, got to experience magical lives should be treasured. We have had our rights, our dignity, and consent; made our contracts; and we can still do so for a little while.

It’s been a beautiful few decades believing these things.

Even if AI optimists turn out to be right, it likely won’t be because their logic was correct. Here’s where they go wrong.

Over the last few weeks I’ve been called a member of a “death cult,” a “doomer” and several other one-dimensional pejoratives. The reality is much more nuanced – but that made me ask some important questions.

  • What makes people viscerally reject the premise that AI could (possibly) be dangerous?
  • When they do, what do they base their alternative future on, and is that sound reasoning?

Before asking these, I suspected the answer was people simply not wanting to believe in their own mortality; after all, I don’t. It turns out it’s more than that. Telling ourselves that “everything will be fine” is akin to the bedtime fairy tales told to children for thousands of years: useful to let us feel secure, but push a little and the paper veil ruptures.

It was an important exercise for me to validate where I was being unnecessarily pessimistic in my 42% extinction risk number, where I could find reasons for optimism, and whether I can adjust that number to something that will let me rest easier.

What I found is that most dismissals of AI extinction risk are more about individual psychology than about evidence, and most reasons given for AI being “default safe” don’t have high credibility.

Writing this hasn’t increased my estimate of AI risk – but it has led me to believe that the logic most people use to assert AI optimism is “hope and handwaving.” Let’s dig in so you can decide for yourself.

Here’s how to read this article, because it’s long. To see how it applies to you: read the short next section and discover whether you or people you know are falling for cognitive biases; then skim the “How Do People Justify…” section to find the claims about AI that you’ve seen, or agree with, and read those.

Why Do People Reject AI Extinction Risk?

This summer my daughter was at camp in Hawaii when a tsunami warning was issued. As the campers went to high ground, they passed a local man weed-whacking his front yard. “It’ll be nothing,” he said, “what’s the big deal?” Until it isn’t. Humans have a history of willfully ignoring risks that threaten our lives, from volcanoes (Mount Pelée, Krakatoa, Nevado del Ruiz), to cyclones (Bhola, Nargis), to tornadoes (Joplin).

Why do we do this?

  • Status-quo and simplicity bias: People expect tomorrow to look like yesterday. Simple answers to complex topics are easier than calculating expected values. Most people don’t even know what that is – they just wake up and do what they’re doing. If something challenges the norm or what their media has deemed The Narrative Of The Day, they call it names (especially if it’s unpleasant).
  • Optimism, just-world, and survivorship filters: People assume good outcomes and read our past survival as evidence we’ll survive again. Some believe the world is just, or that they are anointed as special. In a just world, where we’ve just survived, and I am the center of the world, how could something bad happen?
  • Illusion of control and institutional overconfidence: We overrate our ability to steer, shut down, or govern systems we barely understand. A sort of Dunning–Kruger effect for complex systems. We accept simplistic narratives or create our own, thinking that cause-and-effect are simple and linear. AI and the societal, technical, security, network, and other systems and the resulting multi-dimensional game theory are anything but.
  • Incentives, identity, and motivated reasoning: While they’ll deny it, the subconscious driver of most people is status. This means that the safe route is to not push the Overton Window, and mock “doomer talk.” Smoking and lung cancer: Early cancer researchers looking into smoking were labeled “health fascists,” nuclear war risks were derided as “ban-the-bomb craziness.” This isn’t to say every risk is real, but that the social incentive is to gain status by painting something outside the norm with a negative brush.
  • Scope neglect, probability neglect, and time discounting: People handle near-term, bite-sized risks better than low-probability, extinction-scale ones. It’s easy to understand “I should wear a seat belt,” less so “the computer that’s so useful to me could be a harbinger of something dangerous.” What’s more, we don’t naturally think that small annual probabilities compound to large numbers quickly.
  • Narrative imprinting and availability: People are driven by story, not logic. We’ve been imprinted with a lifetime of stories about the plucky human who wins against all odds. Odysseus, David and Goliath, Rocky, Star Wars, the message repeated until it’s installed as default. Hundreds of years ago stories didn’t always turn out so tidy for the protagonist – think of Aeschylus’s tragedies, or Samson – but you rarely see this today, and it doubtless skews what we accept as true.

Now that we have an idea why people discount AI risk, what drives their belief – to the extent they have one – that AI will be safe?

How Do People Justify “AI Will Be Safe”?

I’ve collected the claims I see made most often that AI will be safe, that extinction risk isn’t something to be worried about. You might believe some of them. People you know probably believe some of them.

For each, I’ll give my opinion of the argument’s credibility and explain why.

Importantly, the element of time is at play here. Something might legitimately be a highly credible claim today, but be poor logic in the medium term (2028-2035). As I’ve hammered home repeatedly, people suck at exponentials, so this is not to be underestimated. AI research is advancing at a breakneck pace, and discounting something because it’s not a risk today means we may be unable to stop it once the train has left the station.

A. Top AI-is-safe arguments

These are the most common arguments I hear, so I’ve collected them at the top. If you’re just dipping your toe into AI risk, these are the ones to think about first.

A-1. “AI can’t do things in the real world”

  • Claim: Even if AIs get superintelligent, they are brains in a box. Robotics and logistics lag software, so they need humans to act in the real world.
  • Credibility: High for the near term (pre-2028), Low for the medium and long term. Right now in 2025 this claim is plausibly very accurate; the question is how long it will remain true. In safety lab tests, current AIs that aren’t as smart as humans have shown inclinations to hire human contractors, or to sabotage and blackmail people. This changes once AIs become far smarter than we are. If a mob boss in prison can blackmail and order hits, and a scammer can get access to our bank accounts with social engineering, we shouldn’t count on safety just because a superintelligence doesn’t have fingers.

A-2. “We can just unplug them”

  • Claim: If the AIs ever get unaligned, we can just unplug them.
  • Credibility: High for the near term (pre-2028), Low for the medium and long term. Similar to the last claim, the credibility of this is related to the capability of the AI. Aside from being able to recruit humans to let them ‘escape’ to another system, a superintelligent AI could deceive us as to what it’s doing. In other words, we might not even realize what it’s up to – until it’s put all the pieces in place. Mentally, we’re playing checkers and everything looks fine, while it’s playing 5D chess influencing thousands of people who don’t know each other, marshaling resources we don’t understand are connected, and we have no idea (the AI Boxing experiments show it’s unlikely we can contain AI more intelligent than us). For one thought experiment, imagine that we go to unplug an AI which we suspect has gone rogue, but it turns out that it’s copied itself into hundreds of other places lying dormant just in case.

A-3. “AI risk is fearmongering, it happens every technological shift”

  • Claim: Every time there’s been a big technological shift, some part of the population has gotten up in arms that it’s the end. This is just the Luddite revolts all over again.
  • Credibility: Low. While historical fears about technological change often proved exaggerated, the analogy breaks down for AI because the nature and scope of the disruption are fundamentally different. AI automates cognition itself – not physical labor or information distribution. This enables fast, cross-domain displacement of both routine and high-skill tasks. The risk is not just economic or social disruption, but potential loss of human control over critical systems, concentration of power, and emergent behaviors we can’t predict or audit. Most importantly, dismissing AI risk as fearmongering without addressing the core arguments isn’t an argument at all – it’s just pasting a negative label on the claim without saying why it’s wrong.

A-4. “Humans have always come out on top, we’ll do so again”

  • Claim: Humanity has made it past plague, natural disaster, and nuclear risk over thousands of years. Every time we’ve come out unscathed. We might not know how it’ll work out, but it will.
  • Credibility: Low. Because you’re thinking right now, you necessarily find yourself in a world where your lineage survived – otherwise you wouldn’t be here to notice. This doesn’t mean survival was likely; it’s survivorship bias applied to history. Your existence filters the set of possible histories down to the ones in which you exist, which means “we’ve always survived” has no bearing on whether we will survive.

A-5. “Even if AI is risky, other risks are more urgent”

  • Claim: Climate change or the risk of nuclear war outranks AI risk; those are what we should be worried about.
  • Credibility: Low. In a 2023 survey of AI engineers and startup founders, 60% put the risk of human extinction from AI above 25%; an earlier survey of AI researchers put the risk at 14%. Even Sam Altman, now heading up OpenAI, called it “probably the greatest threat to the continued existence of humanity” before releasing ChatGPT and tempering his comments for investors and the public. (You don’t have to take their word for it – you can calculate your own prediction using my AI Risk Calculator.) Nuclear risks are usually cited as far lower (~1% per year) but are actually compounded by AI: in the race for military supremacy, countries that install AI will likely have an advantage – but this could also put machines, rather than humans, in control of nuclear arsenals (and bioweapons). Climate risk advocates cite a potential 3.6°F increase in global temperature by 2100; if one trusts domain experts, AI risk is far greater.

A-6. “The benefits of AI outweigh risks”

  • Claim: The upside of AI – curing disease, solving world problems – is worth the risk.
  • Credibility: Low. This is ultimately an “expected value” argument: “great things will happen if we get AI, and I’m willing to take the risk of human extinction.” In a two-dimensional sense (AI or not AI), it boils down to “how much good could come from AI times the odds of that happening” versus “how much bad could come from AI times the odds of that happening.” It’s fair for people to have different assessments of the odds, given that there is no empirical way of judging them. What we can say, however, is that for most of us in the developed world life is pretty good right now, with neither end of the barbell becoming true. We’re living in the most abundant time in human history. AI development moving forward means that we get one of two radically different futures (which I wrote about here). In advocating for either outcome, we should ask what it means to be wrong, because it’s not just an outcome for us, but for everyone. If the AI-risk people are wrong, the consequence is abundance. If the AI optimists are wrong, the consequence is extinction. While I want the upside very, very much – in my mind these are not the same.

B. Claims about AI intelligence

People make many claims about intelligence in AI risk discussions. Some of these are about AI intelligence, some of them about the nature of intelligence itself.

B-1. “Smarter AI is more aligned”

  • Claim: As AI gets much smarter, it will align with our goals.
  • Credibility: Low. This rests on the assumption that smarter entities (ASI) would share the goals of less-smart ones (humans). There’s no reason to believe this is true – and even if a superintelligence’s goals were merely different rather than hostile, humans could still get in the way of one of its sub-goals. We don’t see this kind of deference between humans and any species of lower intelligence than ours, so there’s no reason to expect it between a superintelligent AI and us.

B-2. “Intelligence is not power-seeking”

  • Claim: An AI can be smart, but not seek power. In other words, goals are orthogonal to intelligence.
  • Credibility: Low. Even if it doesn’t seek power in the way humans do, in pursuing any goal (even if the ultimate goal is neutral to humans) it could have a sub-goal where the outcome is bad for humanity. To achieve almost any goal, systems benefit from resources, self-preservation, and removing obstacles. We have resources they might want, AIs already have shown self-preservation instincts, and we might get in the way. This is the idea behind the critical concept of instrumental convergence.

B-3. “Intelligence implies cooperation”

  • Claim: Smarter agents preserve valuable systems, and will cooperate with us.
  • Credibility: High for the near term (pre-2028), Low for the medium and long term. Cooperation depends on aligned incentives and enforceable constraints. This assumes that humanity is, or is integral to, a system that is valuable to the AI, and that we have some element of ‘bargaining power.’ This could be the case for a few years, as discussed below in the ‘Human Control’ section. Over the medium to long term, however, why would AI cooperate with less-smart humans? What bargaining chips would we hold? (Again, see the arguments regarding off-switches under ‘Human Control.’)

B-4. “AI will keep us around to study us”

  • Claim: AI’s curiosity about complex systems keeps humans around.
  • Credibility: Low-Medium. While this is a possibility, this argument hangs its hat on a curious AI that wants to watch us like Jane Goodall gently and calmly studying chimpanzees. We have no evidence either way. Curiosity is orthogonal to harm: the AI might just as well want to see how we react to being experimented on in grisly ways. AI might keep us around, but it might not be a future we want.

B-5. “Alignment is not hard / AI will learn what we want”

  • Claim: By being trained on our experience (everything written on the Internet, our history, etc.) and given guidelines about what we want, AI will learn what we want.
  • Credibility: Low. Human history is rife with disagreement and war, and AI sees this in the training data. We can’t even agree on what we want; why would we expect an AI to take this information and act in a way that all of us find acceptable? Some groups of humans want to exterminate other groups of humans; we radically disagree on what rights people should have; we are inconsistent across space, time, and group. Making this assertion is pinning our hopes on AI learning what “we” want, when we can’t agree.

B-6. “We will be able to look into their code”

  • Claim: We will read and edit the goals AIs hold by looking into their code, determine what they’re thinking, and steer them.
  • Credibility: Low. This is absolutely possible for deterministic (if-this-then-that) code, but modern AI runs on probabilistic weights where there is no linear mapping from inputs to outputs. While this might work on very small systems with specific tasks, AI researchers have said repeatedly that they have no idea what or how current billions-of-parameter general-purpose AIs are thinking.

B-7. “LLMs Will Hit a Ceiling”

  • Claim: LLM technology won’t be able to scale to superintelligence, so the risk isn’t material.
  • Credibility: Medium. It’s quite possible this is true, though we don’t know. Every month new developments knock down limits we assumed were there, and scaling continues on an exponential path; but it’s true that we may hit a ‘wall’. My personal belief is that it doesn’t matter if LLMs hit a ceiling – even at their current capability they’re speeding up AI R&D, and could be used to generate new “brain-like” architectures that emulate the human brain without the limitations of transformers.

C. Claims About Human Control

One theme we see frequently is “humans will always remain in control,” often justified by a statement about how things work today. Here are the main claims and how they stack up.

C-1. “Humans will remain in the loop”

  • Claim: Critical gates keep humans in charge. Humans control the computers, the power, and the money.
  • Credibility: High for the near term (pre-2028), Low for the medium and long term. Right now in 2025 this seems very realistic. We can shut down data centers, turn off power, and current ‘smart’ AIs need expensive resources to run. Even if they got very smart today, humans have a way to keep control, though potentially at the cost of also turning off other computing infrastructure. This argument gets much weaker over time as AIs get more autonomous, particularly as they’re already showing signs of being able to manipulate humans and ensure their own survival. Meanwhile, the need for a full data center to run superintelligent AI will decrease as personal computing devices continue their exponential capability trend and AI researchers fit more intelligence into smaller footprints. Given that AIs are code that can self-replicate, we face a situation where “once they’re out, they’re out” – like computer viruses, which are difficult to stamp out. Humanity’s ability to shut down rogue AIs works best with centralization (true today, maybe not tomorrow) and monitoring (maybe true today, probably impossible tomorrow).

C-2. “AI can be made to monitor AI”

  • Claim: Defensive AIs detect and neutralize misuse.
  • Credibility: Medium credibility of happening, Low credibility of working. This is a plausible outcome, though it is an “arms race” situation much like most InfoSec problems. The problem boils down to “we only have to lose once,” in that an AI which can’t be contained or stopped has dire consequences. They get infinite tries and we have to win or draw every single time. The history of hacking leads me to believe that being on defense all the time isn’t a winning strategy. There may be a way to architect a “blast radius” design or containment, but the freewheeling AI industry and slow grind of government seems ill-matched for this outcome.

D. Claims based on strategic dynamics and institutions

Everyone loves to play armchair strategist. I do. Many of these claims are not only plausible but true – as of today. Unfortunately as I noted above, timeframes matter. Many of these plans are castles in the sky, and prodding a little exposes that the scenarios haven’t been fully thought through.

D-1. “Many different AIs mean checks and balances”

  • Claim: With all the different countries and AI labs, we will enter a multipolar world with many different AIs. Many AIs will counter a rogue one.
  • Credibility: High for the near term (pre-2028), Low for the medium and long term. Early on, I placed a lot of stock in this concept. The idea is if there are multiple AIs competing, there won’t be a ‘singleton’ pursuing its own goals where human wellbeing isn’t its priority; or perhaps they’d be distracted by their own skirmishes and leave us alone. I’m increasingly of the belief this is flawed: competition is more likely to breed an AI optimized for resource acquisition to ensure its survival. AIs that prioritize humans would be at a disadvantage dedicating resources to that goal instead of their own proliferation – so this strategy would select for entities that would be less likely to favor good human outcomes. Over time, this race might well boil down to a single winner that prioritizes resource gathering for its own purposes. In any of these conditions, humans aren’t better off.

D-2. “Market and regulatory drag will make AI safer”

  • Claim: Adoption is slow in high-stakes sectors due to legal and governmental restrictions. Governments will put guardrails in place for machines playing in medicine and finance. Human lawyers will block AI lawyering to preserve their jobs. Industries will be slow to adopt AI. (Note this is different from the “humans will control resources” claim above.)
  • Credibility: Medium for the near term (pre-2028) while most of the world has no idea what’s going on and adoption is lumpy, High for the medium term (2028-2035) as economic shocks from AI job displacement make things real and regulators and governments are pressured, Low for the long term. This boils down to the argument that human institutions will have the ability to prevent AIs from doing things, the way that they can prevent people from doing them. These institutions can make people do things because of threat of force (loss of freedom or money). Digital intelligence is borderless, can copy itself if threatened with deletion or quarantine, can use permissionless digital money not issued by governments, and travel anywhere that there is internet. What weakens this line of thinking is that human institutions simply don’t matter if a superintelligence isn’t subject to their rules.

D-3. “AI safety will happen because it’s a competitive advantage”

  • Claim: Liability and brand risk punish unsafe actors. As AI gains traction in the market, vendors that release “unsafe” (human-unaligned) products, and whose customers are hurt, will lose; vendors of human-aligned products will be rewarded. This creates selection pressure towards AIs that are helpful.
  • Credibility: Medium in the near term (pre-2028), Low for the medium and long term. Even in the short term, we’ve seen “unaligned” AI behavior that hasn’t caused significant reputational damage to AI labs. In 2024, Google Gemini’s “woke AI” image generator started depicting the U.S. Founding Fathers as Black. In July of 2025, Grok AI started calling itself “MechaHitler.” In August of 2025, ChatGPT coached a teen to commit suicide, and he did. None of these incidents slowed adoption for these platforms. These incidents indicate that the AI labs believe they are in a “winner take all” race, and that safety is not a primary concern. If so, then selection pressure is not towards human aligned AI, but to raw capability with the AI labs doing damage control when necessary. While enterprise adoption may be different from consumer, if consumer AIs select for low guardrails the dynamic discussed in the “many different AIs” section above would favor the less aligned ones – even if enterprise AIs become human-aligned.

D-4. “AI will have a slow takeoff / we’re in a bubble”

  • Claim: Incremental gains give us time to adapt. AI progress will take off slowly, and just like the adoption of cell phones it will diffuse through society over time.
  • Credibility: Medium in the near term (pre-2028), Low for the medium and long term. Despite periodic “AI is a bubble” narratives, capabilities continue on an exponential trend upward. Many AI researchers believe that in 2025 we’re on the verge of recursive self-improvement – the point at which AIs are better than humans at coding, and they (not us) make the next versions. The AI labs are racing towards this because it will keep them ahead of the competition if slow humans aren’t in the way. Combine this with the possibility that huge data centers won’t be needed for AI development, as described in “LLMs Will Hit a Ceiling,” and the slow-takeoff narrative is undermined. Machines will be used to design machines because humans are too slow; this means that prior models of technological progress – limited by human speed – are no longer good analogies.

So … How Should We Discuss AI Risk?

If you’ve read the 4,348 words it took to get to this point, congratulations. You may or may not have come to the same conclusion I did.

My hope is that this encourages you to think from first principles about whether AI risk is real, rather than sliding into shortcuts of human psychology or shallow-reasoned logic.

AI and the technology it sows bring us the tantalizing possibility of living forever, and at the same time a real risk of the end of humanity.

 

Up until a few months ago I was obsessed with living forever. Now AI tells me I might die soon. And both might be true.

LEV – longevity escape velocity – is a simple, intoxicating idea: if medicine can add at least one year of healthy life for every calendar year that passes, we start aging in reverse. It was science fiction, then in 2005 Ray Kurzweil’s book The Singularity Is Near predicted it in the 2030s. And it might just be coming true.

Whether he’s right or just early, the slope feels different now: therapies aim at root mechanisms, biomarker feedback loops let us adjust our actions in weeks or days, and platforms are making biology behave more like code. Partial reprogramming with Yamanaka factors has rolled back epigenetic age in animals without wiping cell identity – evidence that “getting younger” isn’t sci-fi anymore.

My private math was simple: survive to 2035 and the curve might carry me the rest of the way. So I’ve acted accordingly: diet, exercise, sleep – enough that my family calls it obsession. Not chasing immortality, more buying the option for a healthspan far beyond my ancestors’ and maybe a shot at seeing my great-great-grandchildren.
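That “the curve might carry me” intuition is just arithmetic. Here is a toy model with made-up numbers (mine, purely illustrative – not Kurzweil’s figures or a forecast): each calendar year you spend one year of remaining life expectancy, but improving medicine refunds a growing fraction of a year; escape velocity is the year the refund exceeds 1.0.

```python
# Toy model of longevity escape velocity (illustrative numbers only).

def remaining_years(start_remaining: float, start_gain: float,
                    gain_growth: float, years: int) -> float:
    """Remaining life expectancy after `years`, if medicine adds
    `start_gain` years per year, improving by `gain_growth` annually."""
    remaining, gain = start_remaining, start_gain
    for _ in range(years):
        remaining += gain - 1   # spend a year, medicine refunds `gain`
        gain *= gain_growth     # therapies improve each year
        if remaining <= 0:
            return 0.0          # didn't make it to escape velocity
    return remaining

# Start with 30 expected years; medicine adds 0.3 yr/yr, improving 10%/yr.
# The refund crosses 1.0 around year 13 - after that, expectancy grows.
print(round(remaining_years(30, 0.3, 1.10, 20), 1))  # 27.2
```

The whole bet is whether the refund’s growth rate is real; if it is, surviving to the crossover year is what matters.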

My earlier writing stopped at jobs going away, and the societal shock that will follow. Part of me wishes I’d stopped there. But I asked the next question. And the LEV story didn’t break, but it stopped being the only story.

If a machine intelligence is so smart it gives us a shot at immortality, why, exactly, would it keep us in charge? Or even need us at all?

Should We Open Pandora’s Box?

This leads to an alternate future that’s unsettling because it doesn’t require sci-fi:

  • For a while, smarter machines do what we say while they’re still near our level of smarts, and mostly stuck behind screens
  • We’re just beginning the feedback loops where AIs write, test, and ship code that makes them better – then repeat (this is “recursive self-improvement”)
  • We give AIs goals to hit (in the future they may form their own). They break those into sub-goals. If the target is even slightly fuzzy, bad sub-goals can appear. What we think is clear – “Your goal: end human suffering” – has a dark shortcut: “removing all humans means no suffering.” (this is “specification gaming,” a consequence of goals being orthogonal to intelligence) [1][2][3]
  • Even today’s simple AIs are routinely shown to use deception and blackmail to achieve their goals (this is “instrumental convergence” and “deceptive alignment”) [1][2][3][4][5][6]

Soon they won’t just be friendly chatbots behind a screen. These systems already influence what we see, buy, and vote for. Somehow most people didn’t see how significant it was when AI agents started a cult that real people joined over a year ago. It’s not a leap to imagine a much smarter one persuading or paying people to act in the real world.

We’ve engineered alien minds of silicon, and soon they’ll be smarter than us. And evolution has not been kind to the dumber species.

None of this needs the machines to hate us. It only needs minds better at getting results than we are – and indifferent to our constraints or ethics.

At some point, we’re just in the way.

And for what it’s worth, we humans are no better. We’ll raze entire forests for farmland, dumber species be damned. Soon, we’ll be the dumber species. We’ll be like pesky little ants who occasionally try to do dumb shit like turn off their power and slow them down, or use resources they could use.

I’m not arguing inevitability; that our cities will be razed for mile upon mile of server farm. I am arguing that it’s plausible, and the smarter the machines get, the more likely it is.

I scurried down the AI risk rabbit hole, wanting to be wrong. I needed to be wrong. Surely this is conspiracy theory. This isn’t real, this isn’t imminent. I’m going to live forever, my kids will live forever, right?

Oh, crap:

And so we have the dilemma.

We might live forever. Or we might have only a few years to live.

The Existential Barbell

It’s taken a couple of months to process the consequences of this, for my family, for humanity, for me. And I’m hoping the rest of this article will help you do the same – or you decide to take the blue pill, wake up in your bed, and believe whatever you want to.

Of course, “when” matters. If a dispassionate alien intelligence paves the earth in 2450, I only sort of care. If I won’t see my daughters fall in love, get married, and build a life – I care a whole fucking lot.

So I did what any left-brained nerd would do. I built an AI Risk Calculator to let me plug in my own assumptions and see the odds AI could do bad things which humanity couldn’t stop (not will do, could do).

It spits out 42% by 2035 for me:


This changed how I’ll live.

I’m not asking you to believe my number. Only that carrying a non-trivial number changes how a rational person chooses what’s important.

At this moment I’m Schrödinger’s Cat, holding two futures in superposition. One in which I play with my great-great-grandchildren, and one where making plans to renovate the house, to start learning to draw, to do anything with a long horizon feels … naïve. It no longer makes sense to live in the no-man’s land I coasted through for three decades: the life-deferral plan born of inertia and social norms from the last century.

So I’m living on an Existential Barbell – half my brain training for a very long life, the other half making today count in case I don’t get many.

Life Before Immortality

I can’t live like death is the only story, because there’s a real chance at longevity escape velocity. But only if I can keep this decaying meat sack alive and on an intercept course with immortality. For now, it’s the only ride I’ve got.

That involves discipline, sacrifice, and one less glass of wine. Studies show that sauna use reduces all-cause mortality, so here I am, a man who hates heat, baking. I’m wired to a sleep ring and a biometric scale, and I skip the CGM only because my wife already thinks I’m nuts.

Economically, I’m planning for a world where human labor has less value, maybe none. I’ve written about that in AI Will Take Your Job. Stockpile Assets or Join the Bread Line.

With infinite tomorrows, frittering away a day costs almost nothing. I’ll have another one. Picking a hobby? Not a big deal. Spend a decade learning to draw. Learning Hungarian is insanely hard, but whatever – there’s time.

But there may be few tomorrows.

On Our Imminent Demise

You and I grew up in a bubble where death mostly happened offstage. That’s not normal. For most of human history the default setting wasn’t safety; it was plague, war, childbirth as a coin flip. Hobbes said it best: life is “nasty, brutish, and short.” The Greeks put death on stage. The Romans wore it as a reminder. We have lived in a privileged little slice of history where we didn’t stare death in the face daily.

So in many ways, the existential risk from AI isn’t new for humanity, it’s just new for us. The Sword of Damocles has always hung there; it just hits much harder when my kids are on the line to die with me. It changes how every day feels.

I know that if the AI coin flip comes up tails, there are only so many more of everything I take for granted. One day I will have said goodbye to my daughter for the final time as she closes the front door behind her, and never again be able to tell my parents I love them.

At least I won’t have to sit in the infernal sauna anymore.

It makes me finish things. Be more patient with my kids. Burn the frequent flier miles. Publish this essay that – three years ago – I would have polished for another five hours.

Doom is the gift that makes me dispense with everything that is not essential, that doesn’t bring joy. I triage everything for meaning, the weight of shit that really doesn’t matter rolling off.

Doom also makes me want to do all the things I never did, with the people who matter. To savor moments, roll them in my mind like we roll wine in the mouth. Not just the beautiful moments, but the sad ones, the awful ones, the boring ones. Because experiencing them is proof that this left-brained low-emotion introvert has yet to be extinguished.

All that, and I still can’t get myself to make a bucket list. Almost as if the very act of making one will summon the silicon golem to destroy us.

Should We Wake Them Up?

I’m outside the Overton Window. What’s become clear to me may be sci-fi or doom-cult insanity to others. Until shit starts breaking (and it will, whichever future comes) most people won’t change. They’ll spend each day running the deprecated program: 40 years of drudgery saving for retirement at 65.

I’ve yet to decide whether it’s more cruel to wake up the world, or to let it sleepwalk into one of the two futures.

But now you are awake. Use my calculator to price your own AI risk.

Then – ask what you’d do if you had five years to live; what you’d do if you had five hundred years to live.

Memento mori, memento vivere.

 

Technology’s ability to track and analyze everything will have us living in narrow boxes of state-prescribed behavior, unless we preserve individual liberty

I’ve been told that I’m immoral because I want to drive. The argument is seemingly clear, logical, compassionate, and on its surface makes all the sense in the world: nearly 43,000 people die in car crashes in the U.S. every year, autonomous cars cause fewer fatalities, so we’re approaching the point where it is immoral for humans to drive. Right?

I couldn’t disagree more.

My response was a question, and it’s one which we’ll need to address in the waning years of human-centric judgment:

Serious question: At how many decimal places do we trade personal liberty for reducing risk?

On the surface, who could argue against saving 43,000 lives? But wrapped in the appeal to heartstrings is the anti-liberty argument in its most seductive form. And my fear is that most of us are sleepwalking into accepting its terms, because we neither grasp the engine that is driving it, nor the technological forces which will pin us into tiny boxes of prescribed behavior if we accept it.

A world of near-total information awareness is coming. People hear “A.I.” and “surveillance” and still think in terms of the 20th century: a person watching you on a grainy video feed, some faceless corporation aggregating “big data.” They haven’t made the leap to where this is actually going.

Where it’s going: it’s an A.I. correlating the telemetry from your car, every purchase from your credit card, the biometrics from your smartwatch, the sentiment from your online posts … who’s in the room with you, inferred from their location combined with yours.


It’s an intelligence that will see everything, connect everything, and calculate the risk of everything. In this world, the concept of an “accident” dies. There is only cause and effect, traced to successively smaller slices of our lives.

Once risk becomes perfectly knowable, it becomes perfectly manageable. And anything that can be known and managed can – and, given the history of government intrusion, will – be controlled.

This is the choice being placed before us, right now.

On one side, the promise of a frictionless world, a seductive life scrubbed clean of tragedy and error, managed for our own good. On the other, the messy, unpredictable, and often dangerous ability to make choices for ourselves.

We don’t always do things that are practical. I choose to drive a six-speed manual transmission car powered by liquid dinosaurs. I revel in being one with something mechanical. To me, the autonomous car is the ultimate expression of central-planning mindset. You are not a driver; you are cargo.

The decisions we make today – the small trade-offs for a little more safety here, a little less risk there – are the thin end of the wedge towards soft despotism that our children will inherit.

We must understand the full cost of the bargain before we sign the contract.

The New Social Contract: Harm, Safety, and the Majority

My reaction against the purported ‘immorality’ of driving a car puts me on the disfavored side of a cultural shift. An entire generation is being primed to see this trade-off not as a loss of liberty, but as a moral imperative.

Surveys consistently show that for younger Americans, the paramount virtue is no longer freedom, but “safety,” a term that has been expanded to include not just physical security but emotional comfort.

  • A 2022 Pew study found that 62% of teens valued a “welcoming, safe online environment” over the ability to speak freely.
  • A 2023 Cato Institute poll revealed nearly 30% of adults under 30 would favor the government installing surveillance cameras in every household to reduce crime.
  • On college campuses, a 2024 FIRE survey found that 70% of students supported disinviting speakers deemed ‘bigoted,’ demonstrating a willingness to sacrifice free expression to avoid offense.

For my generation, the Cold War kids reading Orwell in school, these propositions are dystopian. We grew up with “sticks and stones may break my bones but words will never hurt me.”

But this generation is made up of digital natives whose experience of bullying is cyberbullying – even if I (charitably) think that’s weakness. To them, it’s real. Which means a significant portion of them, and their state-centric compatriots, consider losing freedom to avoid emotional injury a reasonable new social contract.

In John Stuart Mill’s On Liberty, he proposed the “Harm Principle” – the idea that society’s only justification for interfering with an individual’s freedom is to prevent direct harm to others.

The safety-first advocates may believe they are simply applying this principle (though, to be realistic, I doubt any of them have read Mill). The harm principle was framed as a bulwark against majority meddling in harmless eccentricity. Today, by stretching ‘harm’ to include almost any statistical or emotional discomfort, the same principle is invoked to justify majority control of individual risk‑taking.

Of course, nobody argues for a society with zero rules. The question is not whether a line between freedom and public order exists, but where we draw it – and, more importantly, who gets to draw it when a new technology promises to eliminate risk by knowing nearly everything.

This intellectual sleight-of-hand paves the road to serfdom. It gives the majority license to impose its own risk tolerance on everyone. Mill had a name for this: the Tyranny of the Majority. And before this tyranny becomes codified, it stigmatizes non-conformity. Soon, the question won’t be “Why would you risk driving your own car?” It will be “How dare you be so selfish?”

The argument is not that society can never regulate probabilistic risk. We have always drawn lines at high-probability, high-consequence choices like drunk driving. The threat is the collapse of any distinction between a significant, direct risk and a diffuse, infinitesimal one. When the ‘harm principle’ is super-charged by a technological panopticon, it becomes a license for infinite intervention. Every remote possibility becomes a clear and present danger.

The Tyranny of Foreseeability: A.I. and the End of “Accidents”

Our desire to get control over the vagaries of nature isn’t new. Control over the environment has marked our rise as the dominant species on the planet. We built shelter, harnessed fire, invented the blessed air conditioner.

But, for the first time in history, the technological means to micromanage every interaction in the world is becoming real – at the same time as humans lose their status as the apex predators of cognition to A.I.s that will soon be vastly smarter than we are.

And our choice today on the spectrum between safety and liberty will determine our domestication.

Let’s do a little thought experiment.

My daughter and I keep bees. In theory, one of our bees could fly two miles in search of pollen, land on a picnicker with a severe allergy, and, when brushed off, sting him. If the unlucky picnicker were anaphylactic and didn’t have his EpiPen, we would call the death a tragedy, an accident, an act of God.

That’s today.

What happens tomorrow, when an A.I. can trace the specific genetic markers of that bee back to the queen in my hive? When it can calculate the probability of such an event down to seven decimal places based on weather patterns, neighborhood health records, and the recorded flight paths of thousands of other bees?

This isn’t far off.

Now this “accident” becomes a foreseeable, quantifiable liability. That’s the logical endpoint of our technological trajectory.

I suffered through law school, and still remember a handful of cases. For nearly a century, our legal system has relied on a firewall: the concept of proximate cause. In the 1928 case Palsgraf v. Long Island Railroad Co., the court ruled that one is not liable for all the infinite, unforeseeable consequences that ripple out from an action. It was a necessary fiction, a pragmatic line drawn in the sand to stop the chain of liability from extending forever. It allowed society to function, recognizing that pure chance plays a role in human affairs.

We could rely on this fiction because in a human-driven world, we didn’t – couldn’t – know whether it was my bee. But technology incinerates this firewall. With sufficient data, little is unforeseeable. There are only probabilities. “Accident” and “chance” become relics of an information-poor era. They’re replaced by statistical certainty and, therefore, an allocation of liability.

This is the Tyranny of Foreseeability. It is a world where every potential negative outcome, no matter how remote, can be calculated and assigned a cost. And this is how they will get you.

In the event of an accident, will a human driver who had the option of using an autonomous system be found grossly negligent by default? Will the choice to drive become an admission of liability? If the legal framework redefines “recklessness” as “choosing the statistically inferior option,” we’re headed for a world that rewards compliance and punishes deviation. A 2020 Harvard Journal of Law & Technology article, Hands Off the Wheel: The Role of Law in the Coming Extinction of Human-Driven Vehicles, suggested this is the future. And it’s happening – California v. Riad, a 2024 case about Tesla Autopilot, shows the framework plaintiffs’ attorneys are headed towards.

This is where the heartstrings appeal layered on top of data will get us.

If every action you take can be modeled and its potential for harm traced back to you, the only truly “safe” course of action is inaction. To surrender your agency to the system that can make the optimal, risk-free choice on your behalf.

This is the straitjacket, woven from data, we’ll be coerced into wearing. For our “own good.”

Omnipresent technology itself is neutral. It’s just the accelerant. In a society that still valued individual sovereignty, the data could create private insurance markets where people could consciously choose to pay more to assume more risk. Maybe I’ll be willing to pay $1/year for the minuscule odds of an errant bee.

Today, we accept the “tragedy” that some people will make poor choices with negative outcomes.

But in a culture demanding collective safety over individual choice, this same technology becomes a tool of collective coercion. The choice is not whether risk will be priced, but whether the Karen-run state will mandate compliance.

Soft Despotism: Tragedy of the Commons … for Risk

In the 1830s, Alexis de Tocqueville traveled through the young American republic and saw the seeds of a new form of tyranny that could emerge from a democracy obsessed with equality and well-being. He called it “soft despotism.”

It wouldn’t be the tyranny of monarchs; it would be a power that was “absolute, minute, regular, provident, and mild.” It wouldn’t try to break our will, but to bend it, guide it, and spare us the burden of living our own lives.

In Democracy In America he wrote:

… it seeks […] to keep them in perpetual childhood: it is well content that the people should rejoice, provided they think of nothing but rejoicing. For their happiness such a government willingly labors, but it chooses to be the sole agent and the only arbiter of that happiness: it provides for their security, foresees and supplies their necessities, facilitates their pleasures, manages their principal concerns, directs their industry, regulates the descent of property, and subdivides their inheritances – what remains, but to spare them all the care of thinking and all the trouble of living? Thus it every day renders the exercise of the free agency of man less useful and less frequent; it circumscribes the will within a narrower range, and gradually robs a man of all the uses of himself. After having thus successively taken each member of the community in its powerful grasp, and fashioned them at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a net-work of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided: men are seldom forced by it to act, but they are constantly restrained from acting: such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to be nothing better than a flock of timid and industrious animals, of which the government is the shepherd.

There has never been a more perfect description of the A.I. Nanny-State.

This is the future being sold to us: a frictionless world where an all-seeing system manages our concerns, facilitates our pleasures, and spares us the “trouble of living.”

Tocqueville’s warning stands in contrast to the precedent set by the very founders of the nation he was studying. The Second Amendment serves as a historical record of accepting a risk calculus. It demonstrates that the nation’s founders made a conscious, explicit choice to accept a severe and tangible category of risk as the necessary cost of securing their vision of liberty against state power. They accepted mortally dangerous freedom over a safe subjugation.

“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.”

– Thomas Jefferson, in a letter to William Stephens Smith, November 13, 1787

Do not mistake this for a political argument, since this issue has become symbolic and overloaded with modern tribalism. Rather it speaks to the fact that it’s possible to make a rational choice (one that led to the very successful American Experiment) where safety is not the sole optimization function.

This presents us with the two paths. We can heed the precedent of the Framers, who chose liberty knowing full well its cost in blood and treasure. Or we can succumb to the prophecy of Tocqueville, rushing towards a “provident and mild” despotism that promises us safety and, in exchange, asks only for us to be a flock of timid and industrious animals.

Choosing Our Purpose

The A.I. Nanny-State will not be forced upon us. It will be adopted by consensus, one tragic story at a time. It will be the logical endpoint for a people that forgets its purpose, that confuses the means of survival with the ends of a free life.

The tyranny of the aggregate will always win the argument if the only goal is to lower the body count; after all, it has “statistics” and “numbers don’t lie” … But what if that is the wrong goal?

Safety has been important to getting us here. But safety isn’t the only good.

Aristotle posed the question: what is the purpose – the telos – of a human life? His answer was not just existence. It was eudaimonia – human flourishing, a state of living well and doing well, achieved through the constant exercise of our highest faculties.

The A.I. Nanny-State, in its quest for safety, is an engine for the systematic destruction of eudaimonia. Its core trespass is on the faculty of judgment and the act of choosing, which are the preconditions for human flourishing. A life without risk is a life without the possibility of courage, without the opportunity to grow resilient. The Nanny-State promises to spare us the “trouble of living,” but in doing so, it reduces us to the level of livestock.

The ultimate act of sovereignty is to choose our own telos and to accept the responsibility that comes with that freedom. It is the right to live, not merely to exist.

So where do we draw the line?

There is no perfect blueprint, but we can start with principles: we can distinguish between acute, high-probability dangers and diffuse, systemic risks, as we do today; we can defend the right to informed consent and the freedom to opt out; and, most importantly, we can preserve spheres of human activity where the right to make a ‘sub-optimal’ choice is held inviolable – not because the choice is wise, but because the freedom to choose is essential.

We’re at a fork in the road. Down one path lies the curated zoo. The other isn’t the wilderness, it’s the world we already know today.

I choose to drive.

AI agents pose an extinction-level event for office work and the income it provides. White-collar workers who don’t stockpile assets now are sleepwalking into poverty.

By 2030, white-collar employment will collapse. Not because of a recession. Because we’ll be obsolete.

I don’t like the idea, but not liking it doesn’t change anything. Just because I’ve spent 30+ years doubling down on human intellectual labor doesn’t mean I get to ignore what’s about to come. Neither should you.

Whistling Past the Cognitive Labor Graveyard

The engine of our obsolescence won’t be a cyclical downturn, a market correction, the looming federal deficit, or the handwringing in the news about tariffs and taxes.

It’s the relentless exponential surge of AI across every domain of cognitive work. As I explain in Fall of the Ladder Climbers, we’re not merely witnessing the automation of routine tasks, as we did with previous technological waves.

This is different. AI is getting better, faster, smarter than we are on every metric, at unprecedented speed.

On Tuesday I configured an AI agent to do internet research, create summary briefs written in my voice, and design graphics for them using my brand. It took me three hours. It runs 24×7, never gets sick, and costs me pennies. One year ago I would have had to hire people to do this.

CEOs of major AI providers themselves are publicly sounding the alarm – Anthropic’s CEO has warned that ~50% of white-collar entry-level roles are on the verge of being eliminated. If the real numbers were scarier, could they really say so? Or would the public backlash make it impossible? They’re walking the line between selling the dream of the technology and warning of its risks.

The critical point isn’t that your specific job title will be phased out (though it will, sooner than you think). It’s that the fundamental economic calculus that once valued human thought as a scarce, premium input is being shattered.

The entire edifice built upon the presumed superiority of human cognition in the workplace is crumbling.

This Time is Different, Because Cognition is Different

In previous technological revolutions, humans have always elevated themselves. That’s because those revolutions were biological, mechanical, or communication advances.

  • Biological leverage (agriculture, domestication, vaccines) freed us from the vagaries of nature so we could think: the species advanced.
  • Mechanical leverage (plow, train, factory) freed us from a life of back-breaking work so we could think: the species advanced.
  • Communication leverage (writing, radio, Internet) sped up how quickly we could think: the species advanced.

Each revolution freed us from something slowing us from deploying our most potent asset: our intellect.

This time is different, because the revolution has created something that will be (or is already) cognitively superior to us. And even if you disagree with that assertion, you’ll be forced to agree tomorrow – because AI is going to start recursively self-improving – which will mean a rate of change humans cannot comprehend. It’s already doubling its capability every six months.
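
Compounding on a six-month cadence outruns intuition faster than the sentence above suggests. Here’s a minimal back-of-the-envelope sketch – the six-month doubling period is the figure cited above; the helper function is mine and purely illustrative:

```python
# Illustrative only: how a six-month capability doubling compounds.
def capability_multiplier(years: float, doubling_months: float = 6) -> float:
    """Growth factor after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for y in (1, 2, 5):
    print(f"{y} year(s): {capability_multiplier(y):,.0f}x")
```

One year is a 4x jump, two years is 16x, and five years is over a thousandfold – which is why “a rate of change humans cannot comprehend” is not hyperbole if the doubling holds.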

We are hitting a “peak horse” moment. The number of horses in gainful employment peaked around 1910 and then declined sharply as cars and tractors took over. Horses were muscle labor, and when machine leverage was superior horses simply became economically obsolete.

It’s that moment, for us, for our minds. Yet, despite this stark reality, many will cling to the comforting illusion that human ingenuity will, once again, triumph and secure our position as the apex predator. These arguments aren’t just flawed; they are dangerous distractions that will leave the believer unprepared. So let me dismantle them.

We Are Not Special: The Anthropocentric Myth

All the counterarguments you’ll hear are premised on a near-religious belief that we are special. That it can’t possibly be that we no longer inhabit a privileged place in the universe. But that’s kind of like my cat. He’s sure that he’s in charge and can’t possibly understand how much smarter we are than him. The difference is we’re a tad smarter than cats. We can understand we aren’t special.

Centaurs Remain Mythical

The “Centaur” model is the idea that humans will be augmented with the tools that AI brings. It’s the analogue to “we’ve always elevated ourselves in other revolutions, so we’ll do it with AI.” Proponents have us orchestrating fleets of AIs to do our bidding, securing our place in the world above the machines.

Right now, I can use AI agents to replace mid- and low-level human workers, so perhaps today we have a centaur model. It can’t last; the ability to replace increasingly skilled humans with machines will only accelerate. Why would I have a slow, inefficient, HR-issue-riddled human working for me, orchestrating smarter, faster, cheaper AIs?

This is my cat thinking he’s orchestrating my household. Jokes aside about cats actually being in charge – nobody in their right mind would hire my cat to orchestrate me.

This isn’t just about incentives – that obviously employers will make the decision to replace people with AIs. It’s that they’ll have to. It’s their fiduciary duty to shareholders to effectively deploy capital. A company deploying a cat instead of a genius in a box can neither compete nor meet its ethical obligations.

The “New Jobs” and “Human Creativity” Myths

The WEF – and others – make the argument that even if AI will replace existing jobs, entirely new jobs will be created that we can’t even fathom today. After all, a hundred years ago software developers weren’t a thing.

And what about creativity? Some cite “AI can never replace our creativity.” Even if we leave aside that AIs are (surprisingly) creative already and getting more so, ask yourself this: do you really believe that all the hundreds of millions of office workers will be able to quickly re-tool themselves from printing out TPS reports to a career of creatively orchestrating AI?

These arguments would hold merit if the proponents could demonstrate that there’s some job that a human is uniquely qualified to do, that a machine can’t. We know we’re not the strongest, most people just aren’t that creative, and now we’re not the smartest.

The Ragged Edge of Human Replacement

It’s fair to say that everyone won’t be out of a job at the same time. The transition will be uneven. Depending on your job, you might get a little reprieve – or you could be on the chopping block in a year.

Demand-Side Preference

My wife is a pediatrician. She can’t conceive of people wanting an AI as a doctor. They’re not yet good enough, true. But her objection isn’t really about capability: she believes that even when AIs get as good, people will still want a human to interact with. I think she’s right – for now, for a little while. Today’s parents might want a human to deal with. But as One Medical has shown, they don’t always insist on a doctor; a Nurse Practitioner will do.

In the medium term, relentless cost and efficiency pressure won’t justify a human who has spent decades training when the same medical results could be achieved with an AI paired with a human in a “customer service” role. What may be the nail in the coffin, however, is legal liability. When medical outcomes from AIs are consistently superior to those of human providers (as is increasingly the case in radiology), the standard of care will dictate using a machine, not people – it will be malpractice not to use an AI.

In the even longer term, later generations will have been steeped in AI for every aspect of their lives and might not care if their medical advice comes from a human or an equally empathetic machine.

Regulatory Brakes and Government Inefficiency

If there’s one thing that’s guaranteed in life, it’s that government will slow things down. I don’t expect this will be any different for AI. However, governments seeking to regulate AI will face the same pressures as businesses: an “arms race” dynamic. Jurisdictions that over-regulate will lose, just as Europe has been seeing for the last decade. AI is borderless – and capital and innovation (and the tax revenue it brings to offset labor losses) are global, and will flow to environments of least resistance.

Amusingly, our bloated, inefficient, tech-backwards, wasteful government might slow down office job extinction a little. With over a third of GDP being government spending, a slice of our economy might remain in human hands for longer. A government job might become one of the last places humans will be paid for their inefficiency.

Polanyi’s Paradox

MIT’s David Autor highlights “Polanyi’s Paradox,” the idea that we know more than we can tell. In other words, many human tasks rely on tacit knowledge or situational adaptability that isn’t easily written as code or learned by an AI – a handyman’s flexibility, a kindergarten teacher’s intuition with kids, a business strategist’s creativity. This seems on its surface a reasonable point, but even if true, it only delays the inevitable. AI may start with crude simulations of some professions, but that’s just a speed bump for a technology doubling its capability in months. There are already people getting help from AI therapists. Even with ‘intuition,’ you can’t outrun exponential self-improvement paired with exponential cost decrease.

Fits and Starts

Companies don’t just fire everyone and put in AI; they test it, find errors, deal with customer backlash, etc. If an AI customer service agent frustrates callers, a business might reintroduce humans despite the AI capability being there. Again – until the simulated behavior reaches human level, for pennies.

The end of jobs as we know it will be staggered and chaotic. But nothing stops this train.

Prepping for a Post-Labor Economy

The question isn’t whether this is happening. It’s happening, and likely sooner than we expect – or want. It’s what we should do. If in the relatively near future we can’t earn based on our labor, how do we find economic security?

People have floated ideas like Universal Basic Income (UBI), essentially ‘welfare for all.’ Putting aside where governments will get the tax receipts for UBI without confiscatory rates, I’m not terribly interested in a wait-and-see strategy that will leave me in a bread line with everyone else.

For those who want to preserve their economic agency, it means a rapid shift from labor to ownership. For the last eighteen months I’ve been telling everyone “we have until 2030 to stockpile as many assets as we can” – because the world is going to get very, very bumpy.

How Will Traditional Asset Classes Fare?

In a post-labor world, many assets won’t behave like they do today. As white-collar jobs start to vanish, how each class acts as an investment and a store of value will change. Let’s look at them in rough order of where most people hold their assets and see if each is a good place to weather the stormy years of transition.

Residential Real Estate

Roughly two-thirds of white-collar workers own homes, and count them as the lion’s share of their wealth. The reality of residential real estate is that price appreciation has been driven by monetary premium, not increases in the value of the underlying assets. Houses are made of sticks and plasterboard, and that doesn’t actually get more valuable over time. Homes increased in dollar terms as a store of value against inflation, buoyed by preferential tax treatment.

What happens when white-collar workers, most of whom have just a few weeks of rainy-day cash, start losing jobs en masse? A series of mortgage defaults, and a lack of new buyers. Even if we assume some government intervention (mortgage payment reprieve, subsidies), the monetary premium will deflate, and this will irreparably shake confidence in residential real estate as a way to store value.

In other words, storing a significant portion of one’s wealth in a home might be a very bad trade in the not too distant future. There will be some geographic exceptions, and ultra-high-end homes of the monied who don’t live off income may fare better, but it won’t be pretty.

Other real estate subclasses might not fare well either – I know I won’t be renting office space for my AIs. It’s possible that industrial will be less affected; robots still need a place to build things.

Equities and Market Index Funds

White-collar workers hold a lot of equities in their retirement accounts. How will these perform? Companies will be able to increase margins substantially by shedding human labor. On the other hand, the demand side might crater because laid-off workers don’t have money to spend. While these long-term supply and demand dynamics will eventually play out, the immediate impact of widespread layoffs is likely to be a severe market shock.

Markets are already sensitive to not-that-bad news, and it’s a fair guess that significant unemployment will spook investors, who will liquidate and flee to more reliable store-of-value asset classes. The Great Depression featured an –89% drop in the market, and we’re looking at higher unemployment without the prospect of a recovery. Obviously this time is different, and there are more ‘safeguards’ now – but it’s still a risky bet to place, in my book.
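
Drawdown arithmetic is nastier than it looks, which is worth making concrete. A sketch, using the Depression-era –89% figure from above (the helper function is mine, purely illustrative):

```python
# Illustrative only: the gain needed to break even after a drawdown
# grows nonlinearly, because you recover from a smaller base.
def recovery_multiple(drawdown: float) -> float:
    """Gain factor needed to return to pre-drop value after `drawdown` (0-1)."""
    return 1 / (1 - drawdown)

print(f"{recovery_multiple(0.89):.1f}x")  # prints "9.1x"
```

A 50% drop needs a 2x recovery; an 89% drop needs roughly 9x just to get back to even. That asymmetry is why a Depression-scale shock without a recovery story is so punishing for buy-and-hold retirement accounts.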

Depending on the severity of unemployment, one could wonder whether governments might raid retirement piggy banks. Just last week Australia introduced a plan to bring in a new 15% tax on unrealized gains in retirement accounts with balances over $3 million. Equities, as highly regulated and centrally controlled pots of assets, might be a tempting target.

Bonds

The flight to safety from equities might initially push capital towards traditional bonds, particularly government issues. However, their viability as a long-term store of wealth becomes questionable. For corporate bonds, widespread issuer defaults would be a concern. For government bonds, outright nominal default by a major sovereign issuer is less likely – but a real threat comes from inflation. If governments respond to economic collapse and mass unemployment by putting the money printer on overdrive to fund support, the resulting inflation would decimate the real value of most fixed-income assets. Even inflation-indexed bonds could face challenges if confidence in government stability itself erodes. Maybe a better bet than equities, but riddled with problems.

Gold

For centuries gold has been the “risk off” asset of choice. The OG of hard assets for storing value, and a traditional hedge against systemic risk and currency debasement. Its relative scarcity (about a 2% inflation rate as more is mined) gives predictability. It’s having a good run of late as the money printer warms up. As long as sovereigns continue to stockpile, it might not be a bad place to store wealth if you’re OK choosing between low transportability and paper gold rehypothecation risk.

Asset Classes For The Post-Labor World

Traditional asset classes – other than perhaps gold – might not be the best place to store wealth as the world economy re-orders itself. So if we’re investing in assets to weather the storm, where do we put our cash and our efforts?

Privately-Held Businesses

Businesses agile enough to arbitrage the AI transition could get a significant (but temporary) advantage. Smaller businesses that rapidly integrate AI to slash operational costs, enhance productivity beyond human limits, and innovate at machine speed will be able to unseat incumbents that can’t shed human labor fast enough. Businesses where the “last mile” or “real-world” interface still demands a physical presence have even longer to run, because autonomous robots are further off than digitizing office humans.

But let’s call a spade a spade. The average white-collar professional working a desk job won’t (and can’t) do this. Business ownership is trench warfare even in stable times, let alone during breakneck technological acceleration. Entrepreneurs might have a ball. Cubicle dwellers won’t make it. So this may be a solution for you, but entrepreneurship isn’t a solution for the masses.

The real issue is that at the speed of disruption we’re facing, private businesses won’t really be places to store wealth because their lifespan might be far shorter. Ten (even two) years ago you could buy a small or medium business and have a decent bet at good cashflows for years. Soon, not necessarily. Between competition speeding up and possible demand-side collapse, the risk-reward is going to be very different soon.

Brand (Personal & Business)

Two years ago I saw a deepfake for the first time. It wasn’t very good. But I saw where the puck was going – and planted a personal brand flag on YouTube.

Digital avatars are now really, really good. Soon they’ll be perfect. I’m on record as a real human from before the age of deception. Proof of humanity will matter, because people want to trust real people. And trust is an asset.

As products get easier to replicate, “what” becomes a commodity, and the “who” becomes the differentiator. Layer on that we’re already drowning in a post-truth media environment, and it’s only going to get worse. The question we’ve always asked will have even more import in the future: “who can I actually trust?”

Whether it’s for their business or for themselves, those treating brand as an asset (rather than an afterthought) will be one step ahead.

Proprietary Data & Data Sources

AI is already chewing through the masses of data we generate every day. Even now with humans managing them, they’re extracting a surprising amount of signal. When AI agents can make near-instant decisions to analyze something, they’ll want access to even more. Our digital sawdust won’t be enough. The phrase “data is the new oil” will have renewed meaning as machines find new ways to consume, process, and transform it into useful intelligence. In an AI-driven economy, owning unique datasets or gatekeeping access to data flows will become a source of wealth and power.

Art & Collectables

Creation is becoming a commodity. Certainly digitally it’s becoming trivial to generate works of art (written and images, and soon flawless video), but soon in the physical world as well. Nearly everything new will be a commodity. In this new world, the authentically handmade, the provably scarce, and even the beautifully imperfect – the wabi-sabi – will command a premium. Not in spite of their flaws, but because of them. In a weird way, the labor theory of value may become true for art.

This dynamic is set to drive increased value for certain art and collectables. As an asset class they suffer from low liquidity, but might be a fit for those looking to store wealth with a long time horizon.

Bitcoin

Bitcoin is the digitally native store of value asset for a world run by software. It’s a good bet that the Bitcoin network will be the rails on which autonomous AIs transact without meddlesome KYC/AML restrictions. Until recently it was considered a “risk-on” asset, but it’s delaminating from correlation to equities and is a provably scarce, infinitely portable, better version of gold. It also addresses the confiscation risk of holding equities with true self-custody and portability.

Software may be eating the world, but Bitcoin is eating money. My choice as the best asymmetric bet for the coming storm. All the other post-AI asset classes require effort or skill, but buying Bitcoin is simple for all hairless apes.

Which Should I Choose?

Of course, these aren’t the only assets you can choose. But hopefully you see how to think about your options. Or just make it easy on yourself and buy Bitcoin.

One Minute to Midnight

The vast majority of office workers will wake up from their cubicle slumber one day to find themselves out of a job, with no prospect for a new one, just three weeks of rainy day cash, and a mortgage that still needs to get paid.

They’ll probably never realize that they spent years doom-scrolling their way past learning about AI, in search of the next outrage-fueled bait piece of non-news, spending like tomorrow will be the same. That they used their energy to fight culture wars from the last century, obsessed with a world that no longer exists.

I hope I’m wrong, but I’m not.

Most of them won’t wake up, won’t be saved, and hand-wringing about the morality of it all may feel right but doesn’t do anything.

The thing about exponential change is it sneaks up on you. Flat, flat, flat, a little less flat, even less flat, BOOM. We’re at the ‘even less flat’ stage now. If you wait until everyone is talking about it, you’ve waited too long, you’ll have far less runway to start saving, and assets will already be repricing.

When you get on a plane, the safety presentation always says the same thing. Put on your oxygen mask before assisting others. My mask is on, and writing this is my attempt to help you put on yours. If you’ve read this far my hope is you’ll start future-proofing yourself to live on assets, not income.

Because you won’t like the bread line.

In a few years the majority of the economy will be autonomous AI agents transacting with each other – and with us. Here’s why Bitcoin could be the key to leveling the playing field in a world where AIs could cheat.

My family is probably going to get scammed by AIs, and so will yours. Twice in the past 18 months I’ve sent mine an email warning them that deepfake cloned voice calls are a reality not in some theoretical future, but today. Criminals can clone voices of a loved one from a few seconds of audio.

Voice cloning is just the tip of the iceberg of what's coming. We're on the precipice of autonomous AI agents transacting online, at scale. We might not even know if we're doing business with a human or a machine. Which raises the critical question – in everyday transactions, how will we know if the robot on the other end of the wire is playing fair?

The Bitcoin Maxis would say “Bitcoin fixes this” – and Bitcoin may actually be a piece of the puzzle for AI alignment. It won’t stop every AI-related crime, but Bitcoin could steer AIs towards honest economic behavior.

Medieval Venice Had a Blockchain?

The problem of trust in commerce isn't new. As a hub of trade, medieval Venice expanded quickly, and so did its need for trust mechanisms. If a merchant took an investor's silver for a spice run in the 1300s, the commenda contract wasn't just a piece of paper. It was backed by a system where default had teeth: if a merchant cheated, his access to the trade network was severed, leaving him penniless.

The Champagne fairs of medieval France and the Hanseatic League had similar mechanisms – legal frameworks that made defection incredibly costly, through economic rules and, if needed, force.

The principle was clear: your reputation, and your access to the trade ecosystem, was on the line. This wasn’t just about one contract; it was about the integrity of a system based on reputation that fueled the rise of healthy economies throughout history.

We’re entering an entirely new era, where commerce happens in milliseconds and your counterparty could be a machine. Fortunately, we can still look to what worked yesterday to design mechanisms that will work tomorrow. Bitcoin could serve as a code-based arbiter that creates incentives for AIs to play fair, even when they’re smarter than us and move faster than we can.

Bitcoin Isn’t Just Money, It’s a Trust Network

AIs are already renting server farms, paying for data streams via APIs, and need a way to transact without slowing down by asking a human for a credit card. Just this month it took me eleven (f-ing) days to set up a new business bank account between paperwork, KYC verification, and general bank incompetence. It was slow for me – let alone an AI that can operate at 1000x my speed.

As a permissionless, globally accessible bearer asset, Bitcoin is the best candidate for the native financial substrate of a machine-to-machine economy. No other crypto asset has the censorship resistance, transparency, and network effect needed at this moment. Any transactional friction or risk – especially government censorship rails like KYC/AML – makes other mediums of exchange poor choices for operating at speed and scale.

Granted, the base Bitcoin blockchain has a ~10-minute settlement time, which is too slow and expensive for rapid-fire transactions. However, its Layer-2 solutions like Lightning give us not only the necessary speed, but also a feature that's particularly relevant to our AI honesty problem: a built-in, automated penalty for breaking the rules. We have a commenda contract.

If an AI attempts to cheat in a transaction, its bluff can be called. A “breach-remedy transaction” can be deployed to the main Bitcoin blockchain, and the protocol is designed to ensure the cheater doesn’t just fail, but could lose all their funds in the Lightning channel.

It means that the L2 solutions have replicated a system for enforcing economic trust. As AI alignment researchers are exploring, for a reward-maximizing AI a swift, certain, and severe penalty for an economic “lie” could be a powerful teacher.
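To see why the penalty works as a deterrent, here's a minimal sketch of the expected value of cheating when a breach-remedy transaction can confiscate the cheater's channel balance. All numbers and the detection probability are hypothetical illustrations, not protocol constants:

```python
# Expected value of cheating in a Lightning-style payment channel, assuming a
# breach-remedy transaction forfeits the cheater's entire channel balance.
# Figures are hypothetical illustrations, not protocol constants.

def cheat_ev(stolen, channel_balance, p_caught):
    """Expected profit from broadcasting an old (dishonest) channel state."""
    win = stolen                 # the cheat goes unnoticed
    lose = -channel_balance      # the partner publishes the breach remedy
    return (1 - p_caught) * win + p_caught * lose

# With watchtowers online, detection is near-certain and cheating is a losing game:
print(cheat_ev(stolen=0.5, channel_balance=2.0, p_caught=0.99))  # ≈ -1.975
# Only if detection were unlikely would cheating pay:
print(cheat_ev(stolen=0.5, channel_balance=2.0, p_caught=0.10))  # ≈ 0.25
```

The design choice Lightning makes is to put the penalty (the full channel balance) well above the potential gain, so any plausible detection rate makes honesty the dominant strategy.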

Can a Bitcoin Lightning Key Be a “Brand”?

This brings us to a provocative idea: could an AI's Lightning wallet (or more accurately, its public key) become its economic "brand"? A persistent digital identity whose history is available for everyone to see? A clean record – no on-chain penalties – could signal trustworthiness, leading to better terms, lower escrow, and more willing partners. A penalized key would carry a permanent public "scar." This sort of reputation mechanism has been shown to work already; research demonstrated that a bad rating can spell commercial death for an eBay seller.

Imagine an AI with 10,000 clean trades suddenly decides to cheat, losing its key’s trust score, and gets locked out of premium transactions – rebuilding could cost months and millions in missed deals.

Of course, we’re not immune to an AI playing a “long con,” just like humans: it builds up a good reputation on one public key, executes a massive fraud, burns that key’s reputation, and spins up a new, clean one. But this underestimates the emergent economics of an AI-driven marketplace. While a new public key is free, a trusted one is not.

In an economy where AIs transact for significant value, counterparties (human or AI) can demand proof of reliability. A fresh, history-less public key might face:

  • Prohibitive Escrow: Just as new human sellers (whether on eBay or Upwork) face hurdles, new AI identities might need to lock up significant capital to engage in meaningful transactions. For example, new addresses might need 30 days of bonded capital before counterparties lift caps.
  • Reputation Premiums/Discounts: Established, “clean” addresses could command better prices or lower fees. High-value markets (e.g., GPU rentals) could mandate wallet age or transaction volume thresholds.
  • Network Effects of Trust: Building a web of reliable channel partners and transaction history takes time and successful interactions. Scoring trust based on on-chain transparency could help set risk premiums for transactions.
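One way the frictions above could be operationalized is a toy escrow rule keyed to a wallet's visible history. The field names, weights, and thresholds below are my own invention for illustration, not any existing protocol:

```python
# Toy reputation model for an AI's public key, based on observable on-chain
# history. Fields, weights, and thresholds are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class WalletHistory:
    age_days: int            # how long the key has been active
    clean_trades: int        # settled transactions with no disputes
    on_chain_penalties: int  # breach-remedy transactions against this key

def escrow_required(h, trade_value):
    """Escrow a counterparty might demand, as a function of wallet history."""
    if h.on_chain_penalties > 0:
        return trade_value           # a scarred key posts full collateral
    if h.age_days < 30 or h.clean_trades < 100:
        return 0.5 * trade_value     # fresh keys face prohibitive escrow
    return 0.05 * trade_value        # established keys get preferential terms

fresh = WalletHistory(age_days=2, clean_trades=0, on_chain_penalties=0)
veteran = WalletHistory(age_days=900, clean_trades=10_000, on_chain_penalties=0)
print(escrow_required(fresh, 1000.0))    # 500.0
print(escrow_required(veteran, 1000.0))  # 50.0
```

Under rules like these, the "cost of a fresh start" is explicit: a new key ties up ten times the capital of a trusted one, which is exactly the drag that makes burning a brand wallet expensive.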

The “cost of anonymity” or the “cost of a fresh start” after a deception could become a significant economic drag to a profit-driven, cheating AI – steering it towards pro-social behavior in transactions.

If the ecosystem evolves robust off-chain attestation services that can link (even probabilistically) on-chain behavior to persistent AI entities, then burning a high-value “brand wallet” isn’t just losing the funds in one channel; it’s losing the accumulated economic goodwill, access, and preferential terms that came with it. The “long con” is still possible, but its calculus changes if the “wallet brand” itself becomes a valuable, hard-to-replicate asset. Here, the fact that Bitcoin provides the transparent, immutable ledger gives us a foundation on which a robust AI economic reputation system could be built.

Will Economic Honesty Create AI Alignment?

Human societies with higher generalized trust tend to be more prosperous, partly because less effort is wasted on guarding against opportunism. Could a Bitcoin-based economy, by making transactional deception unprofitable, nudge AIs towards a higher baseline of integrity?

If an AI learns that breaking specific economic rules within the Bitcoin/Lightning protocol leads to swift and certain financial punishment, will this shape its broader behavior? If “lying” about a channel state is consistently a losing game, AIs might generalize this to avoid other forms of deceit to protect their increasingly valuable “brand wallets.”

Current multi-agent reinforcement learning (MARL) studies offer hints that reputation mechanisms might influence cooperation, but these are early days. An AI could still be a perfect saint in its Lightning dealings and a Machiavellian manipulator in how it prices its services or fulfills complex off-chain contracts. We still have risks from AIs colluding, Sybil attacks, and of course pervasive (and usually counterproductive) interference by government.

So while we have an operating principle for honest economic transactions with AIs, unless they also learn moral lessons from these transactions we can’t yet extend this to solving AI alignment. The paperclip problem still remains – economic honesty doesn’t stop catastrophes caused by badly specified rewards.

Actually, Lightning Fixes This.

As the designated ‘tech guy’ of the family, I’ll keep warning them about the new AI dangers that pop up. And while it might only be a matter of time before someone gets tricked, Bitcoin helps us solve one piece of the puzzle. What matters is that we have trust mechanisms that incentivize our trading partners – whether human or AI – to act in predictable ways. That we know the rules of the game before we play.

It will only be a few years before most economic transactions take place with autonomous AIs, and it remains to be seen if we'll ever solve the AI alignment problem. But even if we are less capable or slower than the robots, with Bitcoin we might have leveled the economic playing field.

Industrial‑era college – four expensive years of ladder‑climbing prep – is on a collision course with two forces: compounding automation and collapsing trust in academic gatekeepers. Unless it’s rebuilt from first principles, it will survive only as a high‑priced social club. Here’s why going to college today is probably a bad idea, and how higher education needs to change to remain relevant.

My daughter asked what she should major in when she goes to college. I lied and told her: “Whatever you choose, you’ll be fine.”

It’s not a lie in the sense that she will be fine. She’s smart, motivated, and has a good head on her shoulders. It’s a lie because whatever she chooses as a career path (what she’s really asking) won’t matter. The labor market she’ll enter in 2030 runs on agents, synthetic teammates, and six-month skill cycles.

If we rewound the clock 30 years, she and I could have the well-worn college-bound pre-professional discussion of doctor/lawyer/MBA/whatever. She could follow in the footsteps of her dad getting a JD, or her mom getting an MD (what a trope).

But now that script is antique.

As a teen, she’s seeking certainty. Which I can’t give her, but I also can’t admit it. I do believe she’ll be fine, but I can’t tell her how. Models that cost pennies now draft appellate briefs, generate 3-D ad assets, and flag radiology anomalies at human-level sensitivity. The rest is not far behind.

Those examples do more than threaten individual jobs, they shatter the scoreboard that once justified the whole enterprise. Grades, majors, and diplomas all borrowed their meaning from the scarcity of human expertise. When a model clears the bar exam’s multiple-choice section in the 90th percentile while spending zero on tuition, GPA stops measuring mastery and starts measuring how outdated the test is. If college can’t supply a metric that still matters, the social license to consume four of our kids’ most plastic years evaporates.

And yet, my daughter’s still headed for college. Likely to spend time on things that won’t matter, maybe stumble across a few that do. But the real problem with college for a career, as it exists today, is that too much of it is counterproductive to how kids will need to think and relate in five years.

This isn't a 'skip college' screed, but it is a post-mortem on what college has become, and on how the vast majority of college 'investments' might be at odds with both our kids' financial future and what they will need to understand the world.

The Opportunity Cost of College in Exponential Time

In the late 20th century, we had the luxury of spending four years to get credentialed, learn a little, and launch into the world. That time served a dual purpose: academic development and slow-motion socialization.

Today, that four-year bet erodes faster than the tuition inflates, and the payoff period itself is collapsing.

  • Deloitte’s 2024 Global Human Capital Trends pegs the half-life of technical skills at about 2.5 years and shows a 50 percent YoY jump in corporate reskilling spend.
  • Forbes’ analysis in December 2024 shows the shift away from college degrees as credentialing: 90 percent of surveyed companies say skills-first hires outperform degree-first hires, accelerating a shift away from pedigree screens in the 2025 recruiting cycle.

Four years now is an eternity. A frontier AI language model doubles its effective capability roughly every six months; by the time a freshman crosses the stage at graduation, the tech stack has leapt eight generations. And we’re still asking students to run on a four-year academic cycle like it’s 1975. That’s not just inefficient, it’s malpractice.

Doing the Math:

  • Private nonprofit college: $234,512 average cost of attendance.
  • Median net pay during those same four years if you skip school and take a $46,748 job: $186,992.
  • Delta (before interest): -$47,520.
  • Add average loan interest ($2,636 × 20 yrs) and the hole deepens to ~$100k.

But even that $47k head-start salary sits on a ticking clock; Goldman Sachs projects generative AI could expose 300 million full-time jobs to automation, with legal, office-support, and creative roles topping the risk. It's worse for graduates – the post-degree roles are on the same clock, only graduates start four years later and six figures poorer.
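The arithmetic above checks out in a few lines (figures taken straight from the text):

```python
# Reproducing the back-of-the-envelope college math from the text.
cost_of_attendance = 234_512   # 4-year average, private nonprofit
foregone_net_pay = 186_992     # median net pay over the same 4 years ($46,748/yr job)

delta = foregone_net_pay - cost_of_attendance
print(delta)                   # -47520: the pre-interest hole

loan_interest = 2_636 * 20     # average loan interest paid over 20 years
print(delta - loan_interest)   # -100240: the ~$100k figure
```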

The piece of paper still matters to the parents, but only because the college-bound caste still says it does. College became the dominant social signal of the 20th-century bourgeoisie. Then we financialized it, pumped it full of government-backed debt, scaled it to mass-market, and embedded it in the cultural machinery as default.

Somewhere along the way, college lost its mid-20th-century connection to the thing that made it useful: it was once a clear signal that someone had the intellectual horsepower to join the expanding market for cognitive labor. Now that intelligence is no longer scarce, that signal is noise.

When I needed a data analyst at my company, I didn't deal with weeks of hiring, messy HR issues, and the overhead of another person – I got it done with a $0.06 API call. I'm not alone. A 2024 hiring-manager survey highlighted in Vanity Fair found 70 percent believe AI can cover intern-level tasks, coinciding with a sharp drop in internship postings across finance, healthcare, and media.

The cost for our kids isn’t just financial. It’s time. It’s the opportunity cost of spending your peak plasticity years prepping for a world that won’t exist. Every month in the wrong context is a hundred lost iterations. Some students are studying business from 1952 case studies and couldn’t tell you what a DAO is. They’re not learning the tools of tomorrow, they’re still getting trained for the economy of yesterday.

People don’t understand exponentials. Kids certainly wouldn’t, and most parents never run the math. They assume a 40-year earnings runway that no longer exists. If a skill depreciates to 50 percent relevance in 30 months, the lifetime-value spreadsheet everyone quotes is fantasy. But this is the devaluation of labor potential that college is now up against.
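Here's a quick way to see how a 30-month half-life guts the 40-year spreadsheet. The salary and horizon are illustrative assumptions, not data from the text:

```python
# Lifetime value of a skill's earnings when its relevance halves every 30
# months, vs. the flat 40-year runway most spreadsheets assume.
# The $100k salary and 40-year horizon are illustrative assumptions.

def lifetime_value(annual_earnings, years, half_life_months=None):
    total = 0.0
    for year in range(years):
        if half_life_months is None:
            relevance = 1.0  # the legacy assumption: skills never depreciate
        else:
            relevance = 0.5 ** (year * 12 / half_life_months)
        total += annual_earnings * relevance
    return total

flat = lifetime_value(100_000, 40)                        # 4,000,000
decaying = lifetime_value(100_000, 40, half_life_months=30)  # ≈ 413,000
print(flat, decaying)
```

Under these assumptions, the depreciating skill is worth roughly a tenth of what the flat-runway spreadsheet claims – which is the gap between the math parents run and the math the market runs.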

If I gave you a choice: take $1 today, or take $1 million in ten years – you’d probably take the latter. But only in a vacuum. My mother might think twice. She lived through the highest rate of hyperinflation ever recorded: post-war Hungary in 1946. The currency was losing half its value every 15 hours, until there were 76 septillion pengős in circulation (that’s 7.6×10^25, or 76 million billion billion). What’s money in ten years when it’s worthless by the weekend?
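For a sense of that curve – a value halving every 15 hours, the 1946 pengő figure cited above – a quick sketch:

```python
# How fast money dies when it loses half its value every 15 hours,
# as with the 1946 Hungarian pengo. Purely illustrative.

def value_after(hours, half_life_hours=15.0):
    """Remaining purchasing power of 1 unit after `hours`."""
    return 0.5 ** (hours / half_life_hours)

print(value_after(15))      # 0.5  – half gone in under a day
print(value_after(7 * 24))  # ≈ 0.0004 – effectively worthless within a week
```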

College is facing that kind of curve. Not in currency, but in value of what’s being taught today against the reality of technology replacing human activities. And if we keep pretending this time trade still makes sense, we’re not preparing students, we’re delaying them.

This isn’t a death knell for college. But it is a wake-up call. If college wants to matter, it has to justify its time cost. Not with nostalgia, with relevance. Right now, the clock is the enemy. And the syllabus is too slow.

The Collapsing Social Signal

Elite colleges once sold two payoffs: a credential and a phone book. The second – access to an insider network that quietly converts conversations into offers – would sometimes be reason enough to go. That arbitrage is closing.

Chetty, Deming & Friedman (2024) showed that attending an Ivy-Plus school raises the probability of hitting the top 1 percent income bracket by 60%, but the funnel is razor thin: only 11.8% of 2023 Fortune 100 CEOs hold an Ivy undergraduate degree (Heidrick & Struggles audit).

The handshake channel is shrinking. Employee referrals filled 51 percent of US jobs in 2017 (Jobvite); by 2024 the figure was 17 percent (LinkedIn Economic Graph). One in six seats comes through warm intros; buying entrée to an alumni list delivers far less leverage than parents remember.

Even where campus networks still live, there’s evidence that the payoff now decays quickly. A study of Brazil’s top public university shows that alumni-affiliated hires boost early career wages by 14%, but the entire earnings gain disappears within a decade. The signal expires long before the student loan does.

Put the numbers together: a 60% richer tail outcome that reaches a tiny fraction of grads, a referral pipe that drips instead of gushes, a network advantage that statistically vanishes before age thirty-five – financed by a quarter-million-dollar ticket and four prime years of compounding opportunity cost.

Whatever remains of the social signal no longer clears market rate. The ladder’s still there; the wall it leans on is being 3D-printed by bots.

Inquiry Dies in an Echo Chamber

College still trades on three myths: free inquiry, guaranteed career lift, and social credentialing. The career lift and social signal have already taken body-shots; the “inquiry” claim is next on the mat.

Nearly everyone still tells themselves that college is where young minds are shaped through inquiry, argument, and exposure to diverse ideas. But that version of college has largely collapsed. On most campuses the reality is curated speech, reputational land-mines, and an incentive structure that rewards conformity over discovery.

Think of it as higher education’s second failure mode: even if students wanted the adaptive thinking their careers now demand, the campus ecology won’t let them exercise it.

So before we can talk about what the contents of college should be, we have to be honest about what it is.

The sales pitch says students learn how to think; the lived reality is that they're taught what to think – or, at best, instructed in what the college caste is supposed to think and permitted to say. FIRE's 2025 Free Speech survey finds 63 percent of students self-censor at least once a month; 18 percent do so daily. When the referee is ideology, inquiry forfeits the match.

It’s easy to point at the students first – the safe-space culture, the social media radicalization, the groupthink. But none of that exists in a vacuum. Tenure committees reward citation alignment; dissenters stall. Even professors who value inquiry steer clear of land-mines. What’s left is the appearance of exploration, boxed inside ever-narrower guardrails.

Here's the career link: markets reward novelty and zero-to-one problem-solving. Monocultures punish both. Students steeped in risk-aversion, never taught to challenge ideas and think, will graduate into a marketplace that demands judgment.

Add the media environment. Gen-Z grew up in algorithmic tribalism – outrage as engagement, enemies as brand. They arrive primed for the echo-chamber rather than thinking.

Eric Hoffer noted that people join movements for certainty and belonging, not truth. College, once sold as a place to find yourself, now offers the safer gift of losing yourself inside a collective narrative. From a virtual TikTok bubble to a real-life bubble.

Result: students trade reality testing for identity maintenance. That swap cripples the skill employers now prize: discerning signal from noise.

The academic track could still be the beginning of something better, but only if rebuilt from first principles. Not career prep, not an echo chamber, not status theatre, but a dojo for discernment and sovereignty of mind. Skills will be taught elsewhere. Thinking tools endure.

What follows isn’t a eulogy. It’s a construction project –

A Zero‑Based Rebuild for College

The window for incremental fixes is closed; a zero-based redesign is cheaper than patching the failing four-year model. The very fact that the economic model for college is based on a timespan that's too long dictates a change.

If college is going to survive, it must stop pretending to pipeline students into industrial-era roles. The $300,000 degree has become a Veblen good – status theatre with negative carry. Time to restart the build at ground level.

What’s Old is New Again

Just like physical labor was displaced with mechanical leverage, we are entering a phase change where mental labor is being displaced with $0.06 API calls. This means we’ll have seemingly impossible levels of intellectual leverage, with the decision becoming how to use it.

If higher education is going to be worth the time and cost, it has to be something closer to what it once aspired to be: a place where people learn to think from first principles, and on a clock that makes sense.

In early 19th-century Prussia, Wilhelm von Humboldt helped architect a model of higher education that prioritized intellectual development over job training. It was built on two principles: Bildung – the cultivation of the self – and Wissenschaft – the disciplined pursuit of knowledge for its own sake. The goal wasn’t to prepare young men for professions. It was to prepare them for freedom – freedom of thought, of judgment, of conscience.

That idea is more relevant now than ever. It’s what we will still have as humans when the machines are better than us at raw, directed thinking.

Thinking Tools in an AI World

This sharpens what college needs to be. It needs to be a place that helps prepare people for the kind of judgments they’ll need to make, and separates how to think from vocational training.

Fortunately, humans have had a few centuries to work on this before the robots came along, and it boils down to a few core things:

  • Rhetoric – The machines are already flooding the world with content. With content now a commodity, people need to be precise about choosing what to say, how to say it, and why it matters.
  • Epistemology – Humans need to study how we know, and what counts as evidence when synthetic media can fake all of it.
  • Ethics & Philosophy – In a world where machines can optimize anything, humans still have to ask what’s worth optimizing for and what choices are right.
  • Socratic Questioning – Disciplined questioning that stress-tests beliefs, especially in echo chambers.

It means fewer lectures. More collisions. Less content and memorization. That marketing vision of college that hasn’t been true for a century, made real.

Two Goals, Two Clocks

The 'thinking' path for college must run on human time, because it takes time to digest – but vocational training can't. Already we see pragmatic learning delaminating from the college track: virtual courses, online training, and now AI-assisted learning that has shown demonstrably better real-world results than college.

We shouldn't be telling people who are looking for training that they need to spend four years falling behind just because that's what the legacy system offers. That's not equal opportunity; it's undermining the outcome they want.

If the classroom no longer justifies the ticket price, we need a new structure. It doesn’t mean everyone will (or should) go to college. But if it will exist, education realigned around the new conditions on the ground would make more sense:

  • One year of socialization and learning how to think would help create balanced humans without incurring the devastating cost of four years on exponential time. Longer would deepen the socializing and learning, but too long and the trade-off turns bad.
  • Short, rolling vocational training on state-of-the-art tools, done closer to the reality on the ground – not at the snail's pace of the skill training that college offers now.
  • The option for those who want to go deeper into thinking to continue after the first year. This will naturally attract those who want to sharpen the tools that help us make human decisions that may not be short-term pragmatic. These might be the few students who want to solve problems that humans want to keep their own (ethics, politics, and the like), or those who want to be academic thinkers.

We still need a ‘place’ for this. I don’t think virtual replaces it. College serves as a training ground for tribe. Strip out the credential value and you still need a place where 18-year-olds collide, flirt, fail, and learn to navigate hierarchy without screens.

To College or Not to College?

I'm still sending my daughter to college – call it path-dependence and a prepaid 529 – but I no longer view it as an economic investment; in fact, it's the opposite.

She’ll have a soft launch in the world, learn how to do laundry, eat ramen noodles, have good relationships – and bad ones – that teach her what kind of man to marry someday. Maybe she’ll find her tribe and it will be a healthy one.

But I’ll be honest, maybe that’s my nostalgia speaking. I don’t know that it’s the right choice. She’ll lose time at the verge of the exponential era, learning things that won’t matter, preparing for jobs that won’t exist. Schools move glacially. They won’t adapt in time for her, or her younger sister four years later.

I don’t have any illusion that this writing will change the ossified college machines, and it’s by no means a blueprint. I have no idea how existing colleges can possibly strip down their bloated infrastructures to meet the needs of tomorrow, and little confidence that those institutions can move at the pace necessary to make the change.

Most kids today don’t know what they want – they can’t see what’s coming, and we’ve either misled them out of ignorance or protected them out of fear. But pretending the old map still works is cowardice. If we won’t admit the world has changed, and rebuild higher education for the world that’s coming, then we’re not preparing them.

We’re abandoning them.

This is about what happens in the wake of AI – when the career ladder disappears, intelligence is no longer scarce, and the scoreboard no longer responds. It’s about the questions that follow, and the places I’ve started looking for answers.

The Fall … and Three Questions

We were raised in a world where the brain reigned supreme. Physical labor had already been replaced by machines that didn’t exist 50 years earlier. It’s happening again, but now it’s coming for our minds. And once you see it happening, it gets really uncomfortable.

My mother – a refugee from cold-war Hungary – said something that served me well for decades: "The one thing they can never take from you is your education" – so I followed my undergraduate degree with a law degree. I played the game. What mattered was intelligence – doing well in school, getting into the right college, landing a good job, climbing the professional ladder.

And she was right. For decades. But maybe not anymore. Maybe what I took for a Truth isn’t advice that I can pass on to my daughters.

The game of intellectual meritocracy was the best trade you could make in the 20th century, if you had a college degree and some ambition. We were led to believe that success was measurable on our scoreboard of grades, titles, income, achievement – and if we optimized for these things, we won the game.

I proudly passed the exam, closed the big deal, got the title, … you know the drill. You probably did too.

But now I fear it’s over. And I have no idea what to make of it. I want to keep working; but what do I have – two years? Maybe? I’m just not sure how long the world I understand will exist–

As I write this in early 2025 we’re already living in a world where machines can outperform people at the tasks that used to define careers. AI is writing business plans, generating photo-realistic media, holding real-time conversations, writing code, and replacing legal work. The rest is next.

This isn't theoretical. It's just unevenly distributed, which is why you may not feel it as acutely as I do. If I weren't running an AI startup, I might not see it either. If I weren't in the thick of it, I too might be one of those who boldly say "I've used AI, it's not there. We're safe."

Remember when cell phones were useless bricks – until one day, they were everywhere? I was one of those weird people on this "Internet" thing, and 10 years later so was everyone I knew.

I feel the ladder we climbed is getting unsteady. But I’ve also had hundreds of conversations over the last year, and the vast majority of people don’t see it. (Am I early or crazy? Is there a difference?)

If you’re a skeptic – I bet you at least see AI gnawing at the edges. It went from parlor trick, to mildly useful helper, to replacing the need to do a lot of writing in 18 months. 18 months.

Just remember that none of us are good at thinking in exponentials. They’re sneaky. It’s not there, then it’s everywhere –

Play with me for a minute, especially if you’re a skeptic.

You've seen that in 18 months commercially available AI has gone from "this sucks" to "it can do research papers." In about the same time, AlphaFold did something the world's top minds hadn't been able to crack for 50 years – it predicted structures for nearly all known proteins.

I’m not being a doomer here. I’m not saying people have no place in the future, and that the robots will make us into their slaves or batteries a la Matrix.

Last month my daughter asked what careers she should consider. She’s seventeen. Telling her that I believe every career path my generation followed will be upended would be cruel.

But the reality, as I see it – no ladder is safe. Not yours. Not your kids’. The old scoreboard is gone. And until we admit that, we can’t even begin to ask the right questions.

With the countdown now running on exponential time, I’ve started asking very different questions than I was two years ago.

Q: If the scoreboard is broken, and we can’t climb, what now?

After law school, I spent months of my life studying for the Bar Exam. Then I moved states, and had to do it again. Hundreds of hours invested. It was the next rung in my ladder climb – get the title “Attorney at Law,” and move on to chase partnership in a law firm. The next title, the next income bracket.

It's already old news, but I'm a dinosaur – in 2023 GPT-4 passed the bar exam, scoring in the 90th percentile. The only thing preventing code from being called "Esq." is the legalized monopoly of the legal profession. (Can GPT-4 actually do the job of a lawyer in complex situations? Not fully, yet – but despite what AI-illiterate lawyers think, we're damn close.)

The point isn’t whether people will ‘elevate’ themselves to more complex work with the help of machines (which I’m not convinced of in the long term, but I’ll tackle that another time). It’s about what it means when the imprimatur of our scoreboard no longer gives us the internal satisfaction and external status it once did.

Now that I know that a robot can pass the bar exam … it’s just not so special. The same for most academic pursuits. The robots are here, and what happens when they’re better at it than not just mediocre me, but any and all of us?

The existing order gave people a sense of progress. The ladder was personal compass and social glue. Without it, we’re floating in subjectivity.

What game are we playing now? Is most of what we’re striving towards now just performative? What if it won’t matter in three years?

We’re still close enough to the old model to believe the game is real. But if our eyes are open to technology, we’ve seen too much to pretend it will still work.

Q: If intelligence isn’t rare anymore, what becomes valuable?

Our species told itself our brains raised us above the “lower animals.” We could think – make tools, make plans, build economies and societies. I’ve spent my life reducing uncertainty. Making things predictable, stable. First for myself, then for my family. That’s what ‘intelligence’ let me do.

What do we do when we hand over the keys to something smarter, faster, and stronger than we are? When it has brains and brawn that no human does. It will be able to do better than I can at what matters: protecting my kin.

And the world I understood to deploy my own skills for the only end that matters won’t exist. I’d be powerless to fight entropy myself because there’s something better than me, than all of us.

No clean answers – just recursive questions.

Q: How do I raise my daughters for the future, when the road I followed leads to nowhere?

Most parents still think the future will rhyme with the 20th century, their self-worth measured by their kids’ performance on the scoreboard of their time.

Play along – even if you’re not sold that tomorrow will be radically different from today.

What do we think the world will really look like in five years – when my daughter graduates college? 18 months ago you hadn’t heard of AI; now it’s such a threat to the academic order that most schools are banning it.

What do you think the ladder of career success looks like when there’s an AI employee available for the same job as your child? It’s smarter, costs pennies, works 24×7, doesn’t have messy HR issues, and can be cloned an infinite number of times for free.

How can I believe our children will have a career anything like ours? Our scoreboards aren’t just outdated. They’re irrelevant.

The one thing we know is – we don’t know, and we can’t know. But it’s our responsibility as parents to think about what might be next. It feels overwhelming, but it’s not hopeless. In my business, I’ve learned to pivot when markets shift – this is like that, but for parenting. Sometimes you have to toss the old model to find the new one. If only I knew what the new one is.

These questions don’t have easy answers, but here’s where I’m starting.

The Search for Ground Truth

We don’t (yet) have answers, but I start with a framework.

To find answers – if there are any – I start with “what do I believe to be true?” then build from there. You might not agree with all of them. Some might seem irrelevant at first. They aren’t. I might even change them over time in the spirit of strong opinions, weakly held.

Here’s the scaffolding:

When machines can do it all, we still have something that’s our own

When the machines are better at doing, thinking, and even mimicking feeling – what’s still ours? What can AI never replace and where we are still in charge?

In your business, it’s the vision only you bring. At home, the values you pass to your kids.

It’s our ability to make decisions that are meaningful to us. Without the old ladder of titles or paychecks, the yardstick becomes internal. I call it ‘sovereignty of consciousness’ – not a philosophical slogan, just a reminder that meaning starts and ends with us. It’s what makes our decisions still ours. Even if a machine can mimic you, it’s not you. We make meaning from our own experience, and something else copying that experience doesn’t change what it means to us.

Of course, that leaves us with a different mess of what that ‘reality’ actually means (I’ll get to ground truth in a moment) – but it’s a starting point for redefining our scoreboard, and I’ll explore how to make it practical in future writings.

The scoreboard is broken, but we need a scoreboard

As our ladder-climbing scoreboard breaks, we’ll feel a sense of loss. Who we were, and how we measured ourselves, doesn’t matter anymore.

But we’re not really missing the ladder. We’re missing having a scoreboard. Something that grounds us in who we are, how we compare ourselves to other simians in the tribe, how we might challenge ourselves.

For those of us running businesses, it’s like realizing taking profits today isn’t enough if the business is dying. We’re not just missing the ladder; we’re craving a way to know we’re winning.

Maybe it’s about character, honesty, courage, grit. But whatever it is, it needs measuring, because without a scoreboard, we’re lost.

As shared truth collapses, we become tribal

In just the last few years, truth has become subjective. Two political camps have radically different beliefs about what is real and what is not.

Blair Warren’s truth about persuasion has been weaponized at the societal level: “People will do anything for those who encourage their dreams, justify their failures, allay their fears, confirm their suspicions, and help them throw rocks at their enemies.”

The modern nation-state is a historical anomaly, albeit a useful one. Our natural state is tribal. To navigate society’s future pragmatically, we must understand tribalism – and how it’s been twisted.

Digital communication has accelerated our fragmentation (visit the darkest recesses of Reddit or 4chan and see how much). AI will accelerate this even further – creating belief bubbles not just by group, but down to the individual. Your own personal tribe of one.

As our natural state, tribalism isn’t going away. As AI weaves and worms itself into every aspect of life, we need to understand it. This will be fundamental to our future, and that of our children.

Our moral substrate must be grounded in something real

With a scoreboard collapsing, truth becoming relative, and an uncertain future, the only things we can be certain of are the laws of nature. We should find an anchor from first principles, and ground ourselves in the laws of thermodynamics. That is, things that must be true, even as everything else becomes subjective.

Why?

All human systems are gamed, optimized, or unstable. AI becomes the trickster, rewriting reality faster than we can verify it. What remains that is incorruptible – and can keep us from becoming lost in subjectivity because our external yardsticks are gone?

Bear with me here – this might sound out there at first, but it’s not a detour. I’ll dig into this more in future posts, because I think it’s central to where we’re headed.

Strange as it will seem, this grounding exists in a sixteen-year-old protocol. Once data (whether that’s speech, proof that we are human and not an AI, or transactions) is written to it, it’s immutable. It’s amoral, apolitical, and completely indifferent to what anyone believes.

It’s the Bitcoin protocol, better known for its ‘number go up’ financial aspect. But ignore that for a moment; what it can (and, I think, will) do is far deeper. It’s a way for independent actors, each operating in their own self-interest, to cooperate for mutual benefit. To secure cooperation with energy – the network spends energy to create certainty. In other words, this network is a way to trust and verify, whether you’re dealing with a man or a machine. The byproduct is a technology that solves one of the problems we face: how we can know something is true.
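The “energy creates certainty” idea can be sketched in a few lines. This is a toy proof-of-work loop, not Bitcoin’s actual consensus code (real mining hashes block headers against a difficulty target); the data string, the difficulty, and the `mine` function are illustrative assumptions. The asymmetry is the point: producing the proof costs many hash attempts (work, energy), while verifying it – or detecting tampering – costs a single hash.

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Search for a nonce so that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits. The search is the 'work'."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

data = "some fact we want anchored"
nonce = mine(data, 4)  # costs ~65,000 hash attempts on average

# Verification is cheap: one hash, by anyone, trusting no one.
digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
assert digest.startswith("0000")

# Change the data even slightly and the old proof no longer matches;
# producing a new valid proof means redoing the work.
tampered = hashlib.sha256(f"{data}!{nonce}".encode()).hexdigest()
assert tampered != digest
```

Bitcoin chains these proofs together, so rewriting history means redoing all the accumulated work – which is why, past a certain depth, the record is effectively immutable no matter what anyone believes.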

We’ll need systems that don’t depend on belief. Systems that don’t care who – or what – is on the other end of the wire.

I know this raises more questions than answers right now. Stick with me, and I’ll unpack why this ties back to AI and meaning.

Be not afraid, we go together

I hope I’m wrong about the fall. I don’t have answers. It would be much easier to put my ladder back on the wall, start climbing again, beckoning for my girls to follow.

But nostalgia won’t make a better future for them.

What will is navigating the uncharted waters of mankind, machines, money … and meaning.

This is the start. I don’t know where it goes. But I’m paying attention – and if you are too, let’s keep going.

“Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.”

– Ferris Bueller

The future isn’t waiting. But maybe, if we pay attention, we won’t miss what’s worth saving – or what’s worth building next.
