Technology’s ability to track and analyze everything will have us living in narrow boxes of state-prescribed behavior, unless we preserve individual liberty

I’ve been told that I’m immoral because I want to drive. The argument is seemingly clear, logical, and compassionate, and on its surface it makes all the sense in the world: nearly 43,000 people die in car crashes in the U.S. every year; autonomous cars cause fewer fatalities; so we’re approaching the point where it is immoral for humans to drive. Right?
I couldn’t disagree more.

My response was a question, and it’s one we’ll need to address in the waning years of human-centric judgment:
Serious question: At how many decimal places do we trade personal liberty for reducing risk?
On the surface, who could argue against saving 43,000 lives? But wrapped in the appeal to heartstrings is the anti-liberty argument in its most seductive form. And my fear is that most of us are sleepwalking into accepting its terms, because we grasp neither the engine driving it nor the technological forces that will pin us into tiny boxes of prescribed behavior if we accept it.
A world of near-total information awareness is coming. People hear “A.I.” and “surveillance” and they still think in terms of the 20th century: a person watching you on a grainy video feed, some faceless corporation aggregating “big data.” They haven’t made the leap to where this is actually going.
Where it’s going: an A.I. correlating the telemetry from your car, every purchase from your credit card, the biometrics from your smartwatch, the sentiment from your online posts … and who’s in the room with you, inferred from their location combined with yours.
It’s an intelligence that will see everything, connect everything, and calculate the risk of everything. In this world, the concept of an “accident” dies. There is only cause and effect, traced to ever-smaller slices of our lives.
Once risk becomes perfectly knowable, it becomes perfectly manageable. And anything that can be known and managed can – and, given the history of government intrusion, will – be controlled.
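To make the shape of that system concrete, here is a minimal sketch of what such a risk-fusion engine might look like. Everything in it – the signal names, the weights, the threshold – is hypothetical, invented for illustration; the point is how little code sits between “knowable” and “controlled.”

```python
# Purely illustrative sketch of a risk-fusion engine. Every signal name,
# weight, and threshold is hypothetical -- invented for illustration,
# not drawn from any real system.
from dataclasses import dataclass

@dataclass
class SubjectSignals:
    vehicle_telemetry_risk: float   # e.g., hard-braking rate, 0.0-1.0
    purchase_profile_risk: float    # e.g., late-night alcohol purchases
    biometric_risk: float           # e.g., sleep debt from a smartwatch
    social_sentiment_risk: float    # e.g., inferred mood from posts
    proximity_risk: float           # e.g., who else is in the room

# Hypothetical weights; a real system would learn these from data.
WEIGHTS = {
    "vehicle_telemetry_risk": 0.35,
    "purchase_profile_risk": 0.15,
    "biometric_risk": 0.20,
    "social_sentiment_risk": 0.10,
    "proximity_risk": 0.20,
}

def composite_risk(s: SubjectSignals) -> float:
    """Collapse every stream of a person's life into one number."""
    return sum(getattr(s, name) * w for name, w in WEIGHTS.items())

me = SubjectSignals(0.40, 0.10, 0.55, 0.20, 0.30)
score = composite_risk(me)
print(f"Composite risk score: {score:.4f}")  # 0.3450

# A policy layer then needs only one line to become a gatekeeper:
if score > 0.30:  # hypothetical threshold
    print("Manual driving privileges: SUSPENDED")
```

The weighted sum is deliberately naive; the danger isn’t the sophistication of the model but the gatekeeping line at the bottom.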
This is the choice being placed before us, right now.
On one side, the promise of a frictionless world, a seductive life scrubbed clean of tragedy and error, managed for our own good. On the other, the messy, unpredictable, and often dangerous ability to make choices for ourselves.
We don’t always do things that are practical. I choose to drive a six-speed manual transmission car powered by liquid dinosaurs. I revel in being one with something mechanical. To me, the autonomous car is the ultimate expression of the central-planning mindset. You are not a driver; you are cargo.
The decisions we make today – the small trade-offs for a little more safety here, a little less risk there – are the thin end of the wedge towards soft despotism that our children will inherit.
We must understand the full cost of the bargain before we sign the contract.
The New Social Contract: Harm, Safety, and the Majority
My reaction against the purported ‘immorality’ of driving a car puts me on the disfavored side of a cultural shift. An entire generation is being primed to see this trade-off not as a loss of liberty, but as a moral imperative.
Surveys consistently show that for younger Americans, the paramount virtue is no longer freedom, but “safety,” a term that has been expanded to include not just physical security but emotional comfort.
- A 2022 Pew study found that 62% of teens valued a “welcoming, safe online environment” over the ability to speak freely.
- A 2023 Cato Institute poll revealed nearly 30% of adults under 30 would favor the government installing surveillance cameras in every household to reduce crime.
- On college campuses, a 2024 FIRE survey found that 70% of students supported disinviting speakers deemed ‘bigoted,’ demonstrating a willingness to sacrifice free expression to avoid offense.
For my generation, the Cold War kids reading Orwell in school, these propositions are dystopian. We grew up with “sticks and stones may break my bones but words will never hurt me.”
But this generation is one of digital natives whose experience of bullying is cyberbullying; even if I think that’s (charitably) weakness, to them it’s real. Which means that a significant portion of them, and their state-centric compatriots, consider losing freedom to avoid emotional injury a reasonable new social contract.
In On Liberty, John Stuart Mill proposed the “Harm Principle” – the idea that society’s only justification for interfering with an individual’s freedom is to prevent direct harm to others.
The safety-first advocates may believe they are simply applying this principle (though, to be realistic, I doubt any of them have read Mill). The harm principle was framed as a bulwark against majority meddling in harmless eccentricity. Today, by stretching ‘harm’ to include almost any statistical or emotional discomfort, the same principle is invoked to justify majority control of individual risk‑taking.
Of course, nobody argues for a society with zero rules. The question is not whether a line between freedom and public order exists, but where we draw it – and, more importantly, who gets to draw it when a new technology promises to eliminate risk by knowing nearly everything.
This intellectual sleight-of-hand paves the road to serfdom. It gives the majority license to impose its own risk tolerance on everyone. Mill had a name for this: the Tyranny of the Majority. And before this tyranny becomes codified, it stigmatizes non-conformity. Soon, the question won’t be “Why would you risk driving your own car?” It will be “How dare you be so selfish?”
The argument is not that society can never regulate probabilistic risk. We have always drawn lines at high-probability, high-consequence choices like drunk driving. The threat is the collapse of any distinction between a significant, direct risk and a diffuse, infinitesimal one. When the ‘harm principle’ is supercharged by a technological panopticon, it becomes a license for infinite intervention. Every remote possibility becomes a clear and present danger.
The Tyranny of Foreseeability: A.I. and the End of “Accidents”
Our desire to control the vagaries of nature isn’t new. Control over the environment has marked our rise as the dominant species on the planet. We built shelter, harnessed fire, invented the blessed air conditioner.
But for the first time in history, the technological means to micromanage every interaction in the world is becoming real – at the same time as humans lose their status as the apex predators of cognition to A.I.s that will soon be vastly smarter than we are.
And where we land today on the spectrum between safety and liberty will determine the degree of our domestication.
Let’s do a little thought experiment.
My daughter and I keep bees. In theory, one of our bees could fly two miles in search of pollen, land on a picnicker with a severe allergy, and – when brushed off – sting him. If the unlucky picnicker were anaphylactic and didn’t have his EpiPen, we would call the death a tragedy, an accident, an act of God.
That’s today.
What happens tomorrow, when an A.I. can trace the specific genetic markers of that bee back to the queen in my hive? When it can calculate the probability of such an event down to seven decimal places based on weather patterns, neighborhood health records, and the recorded flight paths of thousands of other bees?
This isn’t far off.
Now this “accident” becomes a foreseeable, quantifiable liability. That’s the logical endpoint of our technological trajectory.
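A back-of-the-envelope version of that calculation fits in a few lines. Every probability below is invented for illustration – the mechanism is just the chain rule of probability, fed by surveillance-grade data:

```python
# Toy calculation of the bee scenario. Every probability is hypothetical,
# invented to show how a chain of conditional probabilities turns an
# "act of God" into a number with seven decimal places.
p_my_bee_lands_on_picnicker = 1e-3  # given recorded flight paths
p_severely_allergic         = 3e-2  # given neighborhood health records
p_sting_when_brushed_off    = 5e-1  # given bee behavior models
p_no_epinephrine_on_hand    = 2e-1  # given purchase histories
p_untreated_fatality        = 1e-1  # given clinical outcome data

# Chain rule: P(fatality) = product of each step's conditional probability.
p_fatality = (p_my_bee_lands_on_picnicker
              * p_severely_allergic
              * p_sting_when_brushed_off
              * p_no_epinephrine_on_hand
              * p_untreated_fatality)

print(f"P(fatal outcome) = {p_fatality:.7f}")  # 0.0000003
```

Once a number like that exists, it can be assigned, priced, and litigated. The question stops being whether it was an accident and starts being who owns the 0.0000003.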
I suffered through law school, and still remember a handful of cases. For nearly a century, our legal system has relied on a firewall: the concept of proximate cause. In the 1928 case Palsgraf v. Long Island Railroad Co., the court ruled that one is not liable for all the infinite, unforeseeable consequences that ripple out from an action. It was a necessary fiction, a pragmatic line drawn in the sand to stop the chain of liability from extending forever. It allowed society to function, recognizing that pure chance plays a role in human affairs.
This fiction held because, in a human-driven world, we didn’t – couldn’t – know whether it was my bee. But technology incinerates the firewall. With sufficient data, little is unforeseeable. There are only probabilities. “Accident” and “chance” become relics of an information-poor era, replaced by statistical certainty and, therefore, an allocation of liability.
This is the Tyranny of Foreseeability. It is a world where every potential negative outcome, no matter how remote, can be calculated and assigned a cost. And this is how they will get you.
In the event of a crash, will a human driver who had the option of using an autonomous system be found grossly negligent by default? Will the choice to drive become an admission of liability? If the legal framework redefines “recklessness” as “choosing the statistically inferior option,” we’re headed for a world that rewards compliance and punishes deviation. A 2020 Harvard Journal of Law & Technology article, Hands Off the Wheel: The Role of Law in the Coming Extinction of Human-Driven Vehicles, suggested this is the future. And it’s happening: California v. Riad, the prosecution arising from a fatal Tesla Autopilot crash, shows the framework prosecutors and plaintiffs’ attorneys are moving towards.
This is where the heartstrings appeal, layered on top of data, will take us.
If every action you take can be modeled and its potential for harm traced back to you, the only truly “safe” course of action is inaction: to surrender your agency to the system that can make the optimal, risk-free choice on your behalf.
This is the straitjacket, woven from data, that we’ll be coerced into wearing. For our “own good.”
Omnipresent technology itself is neutral; it’s just the accelerant. In a society that still valued individual sovereignty, the data could create private insurance markets where people consciously choose to pay more to assume more risk. Maybe I’d be willing to pay $1/year for the minuscule odds of an errant bee.
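The arithmetic behind that dollar figure is ordinary expected-value pricing. Using the invented probability from the bee sketch above and a hypothetical payout, a liberty-preserving market might price my risk roughly like this:

```python
# Hypothetical actuarial pricing of my errant-bee liability. The
# probability and payout are invented for illustration; the mechanism
# (expected loss times a loading factor) is standard insurance arithmetic.
p_fatal_bee_event = 3e-7        # toy probability from the sketch above
payout            = 1_000_000   # hypothetical liability limit, in USD
loading_factor    = 3.0         # hypothetical overhead + profit margin

expected_loss  = p_fatal_bee_event * payout      # about $0.30/year
annual_premium = expected_loss * loading_factor  # about $0.90/year

print(f"Expected loss:  ${expected_loss:.2f}/year")
print(f"Annual premium: ${annual_premium:.2f}/year")
```

In that world, I write a check for a dollar and keep my hive. In the other world, the same number becomes the threshold at which the state tells me I can’t.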
Today, we accept the “tragedy” that some people will make poor choices with negative outcomes.
But in a culture demanding collective safety over individual choice, this same technology becomes a tool of collective coercion. The choice is not whether risk will be priced, but whether the Karen-run state will mandate compliance.
Soft Despotism: Tragedy of the Commons … for Risk
In the 1830s, Alexis de Tocqueville traveled through the young American republic and saw the seeds of a new form of tyranny that could emerge from a democracy obsessed with equality and well-being. He called it “soft despotism.”
It wouldn’t be the tyranny of monarchs; it would be a power that was “absolute, minute, regular, provident, and mild.” It wouldn’t try to break our will, but to bend it, guide it, and spare us the burden of living our own lives.
In Democracy In America he wrote:
… it seeks […] to keep them in perpetual childhood: it is well content that the people should rejoice, provided they think of nothing but rejoicing. For their happiness such a government willingly labors, but it chooses to be the sole agent and the only arbiter of that happiness: it provides for their security, foresees and supplies their necessities, facilitates their pleasures, manages their principal concerns, directs their industry, regulates the descent of property, and subdivides their inheritances – what remains, but to spare them all the care of thinking and all the trouble of living? Thus it every day renders the exercise of the free agency of man less useful and less frequent; it circumscribes the will within a narrower range, and gradually robs a man of all the uses of himself. After having thus successively taken each member of the community in its powerful grasp, and fashioned them at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a net-work of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided: men are seldom forced by it to act, but they are constantly restrained from acting: such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to be nothing better than a flock of timid and industrious animals, of which the government is the shepherd.
There has never been a more perfect description of the A.I. Nanny-State.
This is the future being sold to us: a frictionless world where an all-seeing system manages our concerns, facilitates our pleasures, and spares us the “trouble of living.”
Tocqueville’s warning stands in contrast to the precedent set by the very founders of the nation he was studying. The Second Amendment is a historical record of a deliberate risk calculus: the nation’s founders made a conscious, explicit choice to accept a severe and tangible category of risk as the necessary cost of securing their vision of liberty against state power. They accepted mortally dangerous freedom over safe subjugation.
“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.”
– Thomas Jefferson, in a letter to William Stephens Smith, November 13, 1787
Do not mistake this for a political argument; the issue has become symbolic and overloaded with modern tribalism. Rather, it shows that it’s possible to make a rational choice – one that led to the very successful American Experiment – where safety is not the sole optimization function.
This presents us with two paths. We can heed the precedent of the Framers, who chose liberty knowing full well its cost in blood and treasure. Or we can fulfill the prophecy of Tocqueville, rushing towards a “provident and mild” despotism that promises us safety and, in exchange, asks only that we become a flock of timid and industrious animals.
Choosing Our Purpose
The A.I. Nanny-State will not be forced upon us. It will be adopted by consensus, one tragic story at a time. It will be the logical endpoint for a people that forgets its purpose, that confuses the means of survival with the ends of a free life.
The tyranny of the aggregate will always win the argument if the only goal is to lower the body count; after all, it has “statistics,” and “numbers don’t lie.” But what if that is the wrong goal?
Safety has been important in getting us here. But safety isn’t the only good.
Aristotle posed the question: what is the purpose – the telos – of a human life? His answer was not just existence. It was eudaimonia – human flourishing, a state of living well and doing well, achieved through the constant exercise of our highest faculties.
The A.I. Nanny-State, in its quest for safety, is an engine for the systematic destruction of eudaimonia. Its core trespass is against the faculty of judgment and the act of choosing, which are the preconditions for human flourishing. A life without risk is a life without the possibility of courage, without the opportunity to grow resilient. The Nanny-State promises to spare us the “trouble of living,” but in doing so, it reduces us to livestock.
The ultimate act of sovereignty is to choose our own telos and to accept the responsibility that comes with that freedom. It is the right to live, not merely to exist.
So where do we draw the line?
There is no perfect blueprint, but we can start with principles. We can distinguish between acute, high-probability dangers and diffuse, systemic risks, as we do today. We can defend the right to informed consent and the freedom to opt out. And, most importantly, we can preserve spheres of human activity where the right to make a ‘sub-optimal’ choice is held inviolable – not because the choice is wise, but because the freedom to choose is essential.
We’re at a fork in the road. Down one path lies the curated zoo. The other isn’t the wilderness; it’s the world we already know today.
I choose to drive.