When We Stop Trusting Humans, Who Do We Hand the Power To?

The Epstein files. Corruption so deep nobody can see the bottom. A war on Iran launched days after a peace deal was within reach. Most of us have lost faith in the people running things. And that loss of faith is opening the door to something we should all be watching closely: the steady transfer of decision-making power to algorithms and AI systems that are not neutral, not accountable, and not safe.

How this works

In 2007, the journalist and author Naomi Klein published The Shock Doctrine, a book that traced a pattern running through decades of political history. Her argument was blunt: those in power exploit moments of crisis, shock, and collective disorientation to push through changes that populations would never accept under normal circumstances. Wars, natural disasters, economic crashes, coups. While people are reeling, the structures of power get quietly redrawn.

Klein called it "disaster capitalism." Her point was that radical policy changes don't need popular consent if they land at the right moment, when people are too overwhelmed, too frightened, or too disgusted to push back.

I think something like this is happening right now. Not an economic shock this time, but a moral one.

The slow-motion shock of the Epstein files

The Jeffrey Epstein case has been unfolding for years, but 2025 was when the pressure cracked it open. Under public and congressional pressure, the US Department of Justice began releasing files from the federal investigation. The Epstein Files Transparency Act was signed into law in November 2025, after a 427-to-1 vote in Congress. By January 2026, over 3.5 million pages had been released, along with 180,000 images and 2,000 videos.

The DOJ identified six million pages of evidence. Just over half has been made public, much of it heavily redacted, with hundreds of pages entirely blacked out. The Trump administration actively tried to prevent the release before Congress forced it through. In July 2025, the DOJ declared there was no "client list" and tried to shut down further releases entirely. Faulty redaction techniques in the December 2025 batch meant members of the public could copy-paste blacked-out text and read what officials had tried to hide.
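For the technically curious, here's a minimal sketch of why that kind of redaction fails, in Python using the pypdf library. The filename is hypothetical, and it assumes, as copy-paste recovery implies, that the "redactions" were black rectangles painted over intact text rather than true removal of the underlying content:

```python
# A minimal sketch, not a reproduction of the actual December 2025 files.
# If a "redaction" is a black rectangle drawn on top of the page, the text
# underneath is still present in the PDF's content stream.
from pypdf import PdfReader

reader = PdfReader("released_batch.pdf")  # hypothetical filename
for page in reader.pages:
    # extract_text() reads the text layer directly and ignores any shapes
    # drawn over it -- which is exactly what select-and-copy-paste does.
    print(page.extract_text())
```

Proper redaction deletes the text from the file itself; drawing a box over it only hides it from the eye.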

What we've seen is enough to destroy trust. What we haven't seen makes it worse.

This is where Klein's framework fits. The Epstein files aren't a single shock event. They're an ongoing drip of institutional corruption that eats away at faith in the justice system, law enforcement, political leadership, the media, the wealthy elite. Every new release, every new redaction, every new sign that powerful people are being shielded, deepens the feeling that human governance is broken beyond repair.

That feeling is justified. But it's also exploitable. Because when trust collapses, a vacuum opens up. And someone always fills it.

[Image: two surveillance cameras mounted on a building corner, one facing left and one facing right, with a warm-to-cold colour split across the frame]

Watching everything. Accountable to no one.

The pitch that's coming

Nobody is going to stand on a stage and say "we should let algorithms govern us." It doesn't work like that. It happens one reasonable-sounding step at a time.

  • “Algorithms are more objective than biased human decision-makers.”

  • “AI can process more data and make fairer decisions.”

  • “Automated systems remove the corruption and favouritism from the process.”

  • “At least the algorithm doesn't take bribes or eat babies.”

These sound sensible. They sound especially sensible when you've just watched another tranche of Epstein documents reveal another layer of so-far-untouchable corruption and depravity. When human institutions have failed this badly, the promise of a system that offers objectivity, efficiency, and freedom from human weakness becomes very attractive.

And it's already happening. Not as a grand conspiracy, but as a long series of practical decisions by governments and corporations. Automated welfare assessments. Algorithmic credit scoring. AI-driven hiring tools. Predictive policing. Risk profiling at borders. Content moderation algorithms that decide what you see and what you don't. Digital ID systems sold on convenience and security.

AI already plays a role in governance. The real question is whether we drift into handing over more and more power to systems we can't see, can't understand, and can't challenge, or whether we go in with our eyes open.

Because we already know what happens when algorithms are given power over people's lives. It has already gone badly wrong, in wealthy democracies, to ordinary families.

The Netherlands: when an algorithm destroyed thousands of families

In the Netherlands, parents can claim a childcare allowance from the government. It's a standard, income-based benefit for working families. In 2013, the Dutch Tax and Customs Administration introduced a self-learning algorithm to detect fraud in these claims.

The system used nationality as a risk indicator. If you held dual nationality or were not Dutch-born, you received a higher risk score. Because the algorithm was self-learning, it adapted based on its own outputs, creating what Amnesty International described as a discriminatory feedback loop. People from ethnic minorities were flagged more frequently, which the system interpreted as confirmation they should be flagged more frequently.
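To make that loop concrete, here's a toy simulation. It's entirely my own illustration, not the actual Dutch system: a model is periodically "retrained" on its own flags, so a group's past flagging rate becomes the evidence for flagging it more in future.

```python
# Toy model of a discriminatory feedback loop (illustrative numbers only).
# The "retraining" step treats the model's own flags as ground truth;
# no confirmed fraud ever enters the loop.
risk = {"dutch_born": 0.10, "dual_nationality": 0.15}  # assumed initial scores
GROUP_SIZE = 1000  # equal-sized groups, for simplicity

for generation in range(8):
    # Expected number of flags per group under the current risk scores.
    flags = {g: risk[g] * GROUP_SIZE for g in risk}
    total = sum(flags.values())
    # Retraining: nudge each score toward that group's share of all flags.
    risk = {g: 0.8 * risk[g] + 0.2 * (flags[g] / total) for g in risk}
    print(generation, {g: round(s, 3) for g, s in risk.items()})
```

Run it and both scores climb while the gap between the groups widens every generation. The minority group's slightly higher starting score feeds on itself, with no fraud data involved anywhere: the dynamic Amnesty documented.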

There was also a blacklist. Once you appeared on it, your risk score increased automatically, and inclusion could count against you in applications for other benefits and even private financial services. Getting on was easy. Getting off was, for all practical purposes, impossible.

The human oversight that was supposed to catch errors was meaningless. Civil servants reviewing flagged cases were given no information about why the system had generated a high-risk score. They just saw a flag. So they deferred to the machine.

What followed ruined lives. Families were ordered to repay every penny of childcare benefit they had ever received. Demands typically ran between €20,000 and €60,000, often with late fees added and no repayment plan offered. Parents were locked out of housing and healthcare benefits. Marriages collapsed. More than 2,000 children were taken into care.

At least 35,000 families were affected. Families with roots in Suriname, the Dutch Caribbean, Turkey, and Morocco were disproportionately targeted. Roughly half were single-parent households.

The parliamentary investigation was titled "Unprecedented Injustice." The entire Dutch government resigned in January 2021. In May 2022, the government formally admitted that institutional racism was the root cause.

The algorithm worked exactly as intended. The political climate had been shaped by years of anti-fraud, anti-immigration rhetoric. The algorithm automated that prejudice and gave it a veneer of technological neutrality. Nobody had to stand up and say "target families with foreign-sounding names." The machine did it, and everybody pointed at the machine.

Amnesty titled its investigation Xenophobic Machines. It described the system as a "black box" that created a "black hole of accountability."

[Image: financial data chart with candlestick patterns shifting from warm orange to cold teal]

Behind every algorithm is a dataset. Behind every dataset is a decision about whose life counts.

Australia: the algorithm that drove people to their deaths

Between 2016 and 2020, the Australian government ran Robodebt, an automated system that flagged alleged welfare overpayments by averaging annual tax-office income evenly across the year's 26 fortnights and comparing the result with what recipients had reported fortnight by fortnight.

If you've ever worked a casual or seasonal job, the flaw is obvious. Someone who earned full-time wages for six months and nothing for the other six would appear to have been overpaid in every fortnight they weren't working. The steady, year-round employment profile the algorithm assumed applied to only 7% of welfare recipients.
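A few lines of arithmetic are enough to show it. This is my own simplified sketch, not the real Centrelink calculation, and the benefit threshold is an invented figure:

```python
# Simplified sketch of the Robodebt income-averaging flaw.
FORTNIGHTS = 26
THRESHOLD = 500  # invented fortnightly income limit for full benefit

# A casual worker: full-time wages for half the year, nothing for the rest.
actual = [2000] * 13 + [0] * 13
averaged = sum(actual) / FORTNIGHTS  # 26,000 / 26 = 1,000 every fortnight

# Count fortnights where the person was genuinely eligible (income under
# the threshold) but the averaged figure says they were overpaid.
false_debts = sum(1 for income in actual if income <= THRESHOLD < averaged)
print(f"Fortnights wrongly flagged as overpaid: {false_debts}")  # -> 13
```

Every fortnight out of work becomes an "overpayment" to claw back, even though the person earned nothing at all in those weeks.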

Nearly 470,000 Australians received letters accusing them of fraud. The burden of proof was reversed: you had to prove you didn't owe money, sometimes for debts years old, using paperwork long since discarded.

A Royal Commission found the government had unlawfully raised A$1.73 billion in debts against 433,000 people. Debt notices were sent to people with disabilities, to deceased people, and to 663 vulnerable people who died soon after receiving them. The Royal Commission heard testimony from mothers whose sons took their own lives after being pursued for debts they did not owe.

Rather than saving money, Robodebt ultimately cost the government over half a billion dollars. A former Prime Minister was referred to the National Anti-Corruption Commission. Yet as of this week, despite all investigations having concluded, no one has faced criminal charges.

The UK: when the computer was always right and the humans always wrong

The Post Office Horizon scandal is not strictly an algorithm story. It's something worse. It's what happens when an institution decides that its technology cannot be wrong, and that any human who says otherwise must be a criminal.

Between 1999 and 2015, the Post Office prosecuted more than 900 sub-postmasters for theft, fraud, and false accounting based on data from Horizon, an accounting system built by Fujitsu. The software showed money missing from branch accounts. In many cases, it was simply wrong.

Sub-postmasters told the Post Office the system was producing errors. They were ignored. Call centre staff were instructed to tell them nobody else was having problems. That was a deliberate lie. People who challenged their shortfalls were pursued through the courts. One sub-postmaster, Seema Misra, was convicted and sentenced to 15 months in prison while pregnant. A Post Office solicitor sent a celebratory email afterwards, saying the case should discourage others from "jumping on the Horizon-bashing bandwagon."

People went to prison. People went bankrupt. People lost their homes, their marriages, their reputations. At least 13 people took their own lives.

And the Post Office knew. A public inquiry concluded that Post Office and Fujitsu executives either knew, or should have known, about Horizon's defects. In 2017, an internal lawyer advised the Post Office to stop fighting the cases. She was ignored. The cover-up ran for over 20 years.

It only broke open properly in January 2024, when ITV aired a drama called Mr Bates vs The Post Office and public outrage finally forced the government to act. The total cost of compensation now stands at around £2 billion. Fujitsu, the company that built the faulty system, has not contributed a penny, despite being awarded another £362 million in government contracts after the scandal became public.

This case isn't about a biased algorithm; it's about something more fundamental: the assumption that technology is more trustworthy than people. Once that assumption takes hold inside an institution, the people running it will go to extraordinary lengths to protect the machine's credibility, even when the evidence is screaming that it's wrong. Even when real people are being destroyed.

That's the assumption being scaled up right now, across welfare, policing, hiring, education, and governance.

The UK: class inequality dressed up as statistics

In 2020, an algorithm was used to award A-level grades to UK students after exams were cancelled due to Covid. The system downgraded students' predicted grades based partly on their school's historical performance. Students at state schools in disadvantaged areas were systematically marked down. Students at private schools with strong track records were largely unaffected. The algorithm encoded class inequality into exam results and called it objectivity.
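Here's a stripped-down sketch of that mechanism, with invented numbers. The real Ofqual model was more elaborate, ranking students within each subject and school, but the core move, fitting this year's students to their school's past results, looked something like this:

```python
# Stripped-down illustration of grading by historical distribution
# (invented numbers; not the actual Ofqual model).
def school_adjusted_grades(n_students, historical_distribution):
    """Fit this year's cohort to the school's past grade distribution,
    best grades first. Individual predictions barely matter."""
    grades = []
    for grade, share in historical_distribution:
        grades += [grade] * round(share * n_students)
    return grades[:n_students]

# Ten students, all predicted an A by their teachers, at a school whose
# historical results were mostly Cs:
history = [("A", 0.1), ("B", 0.3), ("C", 0.6)]
print(school_adjusted_grades(10, history))
# -> ['A', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'C']
```

Nine of the ten predicted As vanish, not because of anything those students did, but because of where they went to school.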

After public outcry, the government reversed course within days.

Three countries, one pattern

Different countries, different systems, different policy areas. Same story every time.

A government wants to cut costs or tighten control. An algorithm gets brought in as a tool of efficiency and fairness. The system bakes in existing prejudices and amplifies them at speed. Human oversight gets removed or hollowed out. The weight falls on people least able to fight back. When things go wrong, nobody carries personal responsibility because the machine made the decision.

None of these algorithms went rogue. Each one did exactly what the politics behind it intended.

What comes next

The systems I've described were deployed for narrow purposes: welfare fraud detection, exam grading. They still destroyed tens of thousands of lives. Now picture the same dynamic applied more broadly. That's the direction we're heading in.

The UK government is building a digital ID system that Big Brother Watch has described as "domestic mass surveillance infrastructure". It was announced on 26 September 2025, right in the middle of the most intense phase of the Epstein file releases, while public attention was consumed by the corruption of the elite. A petition against it gathered nearly 3 million signatures. The mandatory element was dropped, but the infrastructure is still being built.

The EU is debating the Child Sexual Abuse Regulation, which proposes scanning private messages before encryption is applied. Cryptographers have warned repeatedly that this creates security vulnerabilities affecting everyone.

The US is ordering its diplomats to lobby against data sovereignty laws worldwide, arguing that restrictions on how American tech companies handle foreign citizens' data threaten the advancement of AI.

OpenAI has signed a deal giving the US military access to its AI for classified operations, after Anthropic refused the same deal over concerns about mass surveillance and autonomous weapons.

Any one of these developments, on its own, can be framed as practical, necessary, even benign. Taken together, they add up to a steady accumulation of algorithmic power over people's lives, with less transparency and less accountability at every step.

This is where it all connects. Klein, the Epstein files, the collapsing trust in institutions.

We don't need a conspiracy theory to explain what's happening. We don't need to believe that someone planned to release just enough of the Epstein files to generate the right amount of public disgust. We just need to recognise a pattern that keeps repeating: when people lose faith in existing institutions, they become more open to radical alternatives. The people who stand to benefit from those alternatives don't need to manufacture the crisis. They just need to be ready when it arrives.

The disgust people feel at the Epstein evidence is real and warranted. The failures of the Dutch welfare system, the Australian welfare system, and the UK education system are real and documented. The corruption in political institutions is real and ongoing. And right now, as I write this, the US is bombing Iran alongside Israel, striking civilian targets, damaging heritage sites, and killing children, days after a diplomatic breakthrough was reportedly within reach. The same AI tools now deployed in Pentagon classified networks are part of the military infrastructure making this possible. Trust in the people running things isn't being eroded by paranoia. It's being eroded by what those people are doing, in plain sight, right now.

But the idea being slipped in alongside all that justified anger, that automated systems and AI would be better, fairer, and less corruptible, doesn't hold up. Every time algorithms have been given real power over people's lives, the result has been the same. The algorithm doesn't remove corruption. It automates it, scales it up, and hides it behind code that most people can't read and aren't allowed to inspect.

None of this is inevitable

Every case I've described also has a story of people pushing back successfully.

The Dutch government resigned. Australia held a Royal Commission. The UK A-level algorithm was reversed within days. The UK digital ID petition gathered nearly 3 million signatures and the mandatory element was dropped. The QuitGPT campaign drove a 295% surge in ChatGPT uninstalls in a single day after OpenAI's Pentagon deal, and Claude (the chatbot from the company that refused the same deal) hit number one on the App Store for the first time. OpenAI had to publicly rewrite its contract language within days.

Klein herself has argued that shocks don't have to lead to exploitation. In her 2018 TED talk, she pointed out that the Great Crash of 1929 led not just to suffering but to the creation of social safety nets and aggressive regulation. What comes out of a crisis depends on who is organised and ready when it hits.

That can be us. That can be now.

What you can actually do

Know your rights. If you're in the EU or UK, Article 22 of the GDPR says you should not be subject to a decision based solely on automated processing that significantly affects you. You can demand to know the logic behind the decision, request a human review, and contest it. The EU AI Act, fully enforceable from August 2026, bans certain AI practices outright (including social scoring) and requires human oversight for AI used in recruitment, benefits, law enforcement, and essential services. Hardly anyone knows these rights exist. Now you do.

Ask the question. Whenever you get a decision that affects you, whether that's a benefits determination, insurance quote, job rejection, or credit decision, ask whether automated processing was involved. Put it in writing. Most organisations aren't expecting anyone to ask, and the question alone shifts the power in the conversation.

Minimise what you feed the machine. Every piece of data you hand over is a potential input to an algorithm you'll never see. Using privacy-respecting tools (Proton over Gmail, Signal over WhatsApp, Brave or Firefox over Chrome, Mastodon over X) isn't a quirk or a hobby. It's practical protection against systems that will use your data in ways you never agreed to.

Watch education and healthcare closely. Algorithmic decision-making is spreading fastest in these areas, and the consequences are personal. If your child's school or your GP's practice starts using AI-driven tools, ask what data goes in, who reviews the outputs, and whether you can opt out.

Support the people doing the watching. Amnesty International, Big Brother Watch, AlgorithmWatch (Berlin), noyb (Vienna), La Quadrature du Net (France), the Open Rights Group, and Liberty are doing critical work exposing algorithmic abuse and fighting for legal protections. They run on donations. Without them, we wouldn't know about most of these scandals.

Choose your technology consciously. The QuitGPT movement showed that consumer choices hit where it counts. When 1.5 million people walked away from ChatGPT and towards alternatives, even a company backed by hundreds of billions in investment had to respond. What you use, and what you refuse to use, carries weight.

Talk about it. One of the reasons the Dutch scandal ran for years was that the affected families were mostly vulnerable people without a public voice. The more people who understand that algorithms are not neutral, the harder it becomes to deploy these systems unchallenged. Share this. Bring it up at dinner. Write to your MP. Rights get eroded quietly. The response can't be quiet too.

The choice

There's a moment in The Shock Doctrine where Klein describes what happens after the shock wears off. People look around, see what's been built while they were reeling, and have to decide whether to accept it or fight.

We're somewhere in that moment now. Not after a single shock, but after a long accumulation of them: pandemic, political upheaval, institutional corruption, the Epstein revelations, the steady expansion of surveillance, and now a war. On 28 February 2026, the same day OpenAI signed its Pentagon deal, the US and Israel began joint strikes on Iran. The stated aims were regime change and the destruction of Iran's nuclear programme. Within two weeks, over 5,000 targets had been struck, at least 1,444 Iranian civilians had been killed and over 18,000 injured, UNESCO heritage sites had been damaged, a school bombing killed at least 175 people, and the conflict had spread to nine countries across the region. This happened days after Oman's Foreign Minister announced a nuclear deal breakthrough was "within reach" and Iran had reportedly agreed to downgrade its enriched uranium. The US attacked anyway. The UK permitted its bases to be used. People are tired. Bone-deep, what's-the-point tired. And when you're that worn down, the temptation to let machines handle it, to step back from the mess of human governance and hand the decisions to something that at least looks clean, makes sense on the surface.

But we've seen where it leads. Children taken from their parents in the Netherlands. People driven to suicide in Australia. Students' futures pinned to a postcode in Britain. And in every case, the people who built and deployed these systems walked away without consequence.

We still have time to shape how AI is used in our lives. That won't always be the case. What we do now, the tools we choose, the questions we ask, the organisations we back, the conversations we have, will determine whether technology ends up working for us or whether we end up working for it.

The despair is understandable. Sitting with it and doing nothing is a choice too. The people who want to expand algorithmic power over our lives are banking on our exhaustion. They're banking on us being so sickened by human corruption that we welcome the machine as a relief.

We don't have to do that.

FAQs

  • What was the Dutch childcare benefits scandal? Between 2013 and 2019, the Dutch tax authority used a self-learning algorithm to detect childcare benefit fraud. It used nationality as a risk indicator, disproportionately flagging families with dual nationality or migrant backgrounds. At least 35,000 families were wrongly accused, forced to repay tens of thousands of euros, and locked out of other benefits. Over 2,000 children were taken into care. The government resigned in 2021 and formally admitted institutional racism was the root cause.

  • What was Robodebt? An automated debt recovery programme that ran from 2016 to 2020. It averaged annual income across fortnightly periods to flag alleged welfare overpayments. That method only reflected the reality of 7% of recipients. Nearly 470,000 people received false fraud accusations. A Royal Commission found the government had unlawfully raised A$1.73 billion in debts. The scheme was linked to suicides and declared unlawful by the Federal Court.

  • What rights do I have against automated decisions? Under Article 22 of the GDPR, you should not be subject to a decision based solely on automated processing that significantly affects you. You can demand to know the logic behind the decision, request a human review, and contest it. The EU AI Act, fully enforceable from August 2026, bans certain practices outright (including social scoring) and requires human oversight for AI in recruitment, benefits, law enforcement, and essential services.

  • What is the shock doctrine, and what does it have to do with AI? It's a concept from Naomi Klein's 2007 book. She argued that those in power exploit moments of crisis and collective disorientation to push through changes people would normally resist. The connection to AI is that the ongoing collapse of trust in human institutions, from Epstein to political corruption to war, creates conditions where people become more willing to accept automated systems sold as objective and incorruptible. You don't need a conspiracy for this to happen. You just need opportunism.

  • Are algorithms more neutral than humans? No. Every algorithm is built by people, trained on data chosen by people, and deployed by people with an agenda. The Dutch system, Robodebt, and the UK A-level algorithm all encoded the biases of whoever built and commissioned them. Amnesty International described the Dutch system as a "black box" that created a "black hole of accountability."

  • What impact did the QuitGPT campaign have? ChatGPT uninstalls surged 295% in a single day after the Pentagon deal. One-star reviews jumped 775%. Around 1.5 million paid subscribers reportedly left in the first week. Claude hit number one on the US App Store for the first time. OpenAI had to publicly rewrite its contract language within days. The company is large enough to absorb the financial hit in the short term, but the reputational damage and the shift in public awareness about AI ethics are harder to undo.

  • Is there any reason for hope? Every case in this article also has a story of resistance that worked. The Dutch government resigned. Australia held a Royal Commission. The UK A-level algorithm was reversed in days. Nearly 3 million people signed a petition against mandatory digital ID in Britain and the mandatory element was dropped. Legal protections like GDPR Article 22 and the EU AI Act exist specifically to limit algorithmic power. None of it is guaranteed, but none of it is inevitable either. It depends on whether enough people know what's happening and refuse to stay quiet about it.


Sophie Kazandjian

I am a digital ops partner, website designer and piano composer living in southern France.

https://sophiesbureau.com