Why I Cancelled ChatGPT
And Why It Was a Long Time Coming
I cancelled my ChatGPT subscription yesterday. Not as a reaction to a single headline, but as the end point of a pattern I had been watching for a long time, and a single day that made that pattern impossible to ignore.
The Ethical Line I Could Not Uncross
On Friday 28 February 2026, the Trump administration ordered all US federal agencies to immediately stop using Anthropic's technology. The stated reason was Anthropic’s refusal to allow its AI to be used for domestic mass surveillance of American citizens and fully autonomous weapons systems. Within hours, OpenAI announced a deal with the Pentagon.
The detail I could not get past was this: OpenAI staff had just signed an open letter supporting Anthropic’s position, and their CEO had publicly said he shared Anthropic’s concerns. The contract still went ahead. I run a digital ethics audit as part of my services, and I help clients think carefully about the tools they use and what those tools represent. I could not, in good conscience, keep paying for a product whose parent company had just demonstrated, under direct political pressure, that its stated ethics were negotiable.
Anthropic held its line and is now taking legal action rather than capitulating. That is not a minor decision.
Why Claude Replaced ChatGPT in My Work
The honest version of this story is that Claude had already become my primary thinking partner by choice, not necessity. I have used both tools professionally for years, testing them across real client projects rather than synthetic benchmarks. A consistent pattern emerged: Claude for precision, depth and complex reasoning; ChatGPT for speed and quick variations.
Over time, that gap widened. When I am working through a problem with multiple dependencies, where changing one element affects three things downstream, I need an AI that can hold context and explain why something works or fails, not just what to change. Claude does that reliably. It is particularly strong at the kind of quiet diagnosis that is important in real operational work, where something breaks without an obvious error and you need to trace the actual cause rather than plaster over the symptom.
For complex statistical analysis and presenting data clearly, Claude handles layered, multi‑variable work in a way that feels carefully considered, flagging assumptions, noting limitations and explaining its reasoning. This is important when outputs are feeding into real decisions. For long‑form content, system documentation and anything that needs my authentic voice sustained over time, Claude stays closer to how I actually think and write.
If you are curious how this fits into my broader setup, I outline the full tool mix in My 2026 AI Stack: 20 AI Tools for Calm, Human‑Led Digital Operations.
The Document Capability Gap: Claude vs ChatGPT
There is one capability that does not get talked about enough and yet is highly significant in operational work: Claude actually creates files. Real ones. Word documents, PowerPoint presentations, Excel spreadsheets, PDFs – formatted, downloadable and ready to use. Not just text you then have to copy and paste into a template, but actual artefacts.
In February 2026, Anthropic launched Claude directly inside Microsoft PowerPoint. Unlike other AI presentation tools, it reads your slide master before it does anything – layouts, fonts, colour palette, brand elements – and the slides it generates match your existing deck. That single detail makes it genuinely useful rather than an interesting experiment you still have to redo manually.
For the kind of work I do – programme documentation, data presentations, client reports and operational templates – this is not a small advantage. It gives me hours back every week. ChatGPT produces text. Claude produces work.
How to Migrate Your ChatGPT History to Claude
If you are considering the same move, the practical question is what to do with your ChatGPT history. This is the process that worked for me.
I had multiple projects inside ChatGPT, each with its own documents, instructions and accumulated context. For each one, I asked ChatGPT to create a detailed training document summarising everything it had learned: working style, preferences, recurring tasks, tools, communication patterns and project‑specific knowledge. ChatGPT is very good at this; it often knows more about how you work than you realise until you ask it to spell it out.
I then copied the original project documents and instructions, paired them with those training documents, created equivalent projects in Claude and added everything in. Claude picked up the context quickly, and the more specific the training document, the faster that process was.
Before you cancel, export your full chat history from ChatGPT’s settings. That way, you have the full record of all your chats available, if needed. Migration takes longer than an afternoon if you have several active projects, but it is worth doing thoroughly. Cutting corners on context just means rebuilding it slowly through every future conversation.
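If you want to take the export one step further, a small script can help you map old conversations onto new Claude projects. The sketch below is a minimal illustration, not an official tool: it assumes your export's conversations.json contains entries with a "title" field (field names vary between export versions, so treat them as assumptions and check your own file), and it groups conversations by a naming prefix so each bucket can seed one equivalent project.

```python
from collections import defaultdict

# Hypothetical sample mirroring the rough shape of a ChatGPT
# conversations.json export. Field names are assumptions and may
# differ between export versions; inspect your own file first.
sample_export = [
    {"title": "Client A - launch checklist", "create_time": 1707000000},
    {"title": "Client A - weekly report draft", "create_time": 1707100000},
    {"title": "Personal - reading notes", "create_time": 1707200000},
]

def group_by_project(conversations, separator=" - "):
    """Group conversations by the prefix before the separator,
    so each bucket can seed one equivalent Claude project."""
    projects = defaultdict(list)
    for convo in conversations:
        prefix = convo["title"].split(separator, 1)[0]
        projects[prefix].append(convo["title"])
    return dict(projects)

grouped = group_by_project(sample_export)
for project, titles in grouped.items():
    print(f"{project}: {len(titles)} conversation(s)")
```

This only works if you have been consistent about naming conversations, which is itself a habit worth building before any migration.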
If you would like structured help with this kind of operational change, this is exactly the kind of work I support clients with through my Digital Operations Partner services.
What Comes After ChatGPT: Claude, Mistral and Beyond
Cancelling ChatGPT does not mean ignoring what else exists. Comparing outputs across different AI tools has always been part of how I work. Single‑tool dependency is its own form of risk.
I am currently exploring Mistral, a French AI built in Europe, as part of a deliberate effort not to place everything in US‑based tools. The ethics question is not only about individual company decisions, but also about where data goes, which jurisdictions govern it, and whether the tools we rely on are subject to the same values frameworks we operate under in Europe. Mistral sits inside that framework in a way American tools simply do not.
I am not ready to recommend it yet. But I am watching it closely, which is exactly how any tool earns its place in my stack.
What I Am Not Saying
I am not suggesting Claude is perfect or that Anthropic is beyond scrutiny. Every AI company is a commercial entity with investors and pressures. The healthiest relationship with any of them is engaged scepticism, not loyalty.
I am also not suggesting that everyone should make the same decision I have. Tools should earn their place through actual use, not headlines. But for the work I do, the values I ask my clients to take seriously, and what I watched unfold on that Friday, the decision became straightforward.
Claude is now my primary AI. It has been for some time.
A Small Step, but Not a Small Thing
It is easy to look at the scale of what is happening in tech and in the world in general and feel powerless. One person cancelling one subscription will not change a company's direction. But every tool we choose to pay for is a vote for how we want this industry to work. If enough of us make those small, deliberate choices, choosing transparency over convenience, sovereignty over dependency, ethics over speed, they stop being small. They become a pattern. And patterns are how things actually change.
One Final Note for Readers of My 2026 AI Stack
If you are reading this having used my 2026 AI stack article as a reference, you will have noticed ChatGPT in that line‑up. That changes today. The stack is being updated to reflect how I actually work now, which is how it should always have been written.
Calm. Selective. Human‑led. That part has not changed. Only which tools have earned the description.