How AI has made my life better
And why is my experience so different from everyone else's?
POD! On The Abundance Agenda podcast this week, Martin and I speak to Shadow Energy Secretary Claire Coutinho! We discuss why she wants to abolish the Climate Change Act, make energy cheap, and when she took the ‘abundance’ pill.
Plus, we also dig into the findings of the government’s major nuclear review, and look at how we can build nuclear energy out more quickly and cheaply.
Go have a listen on Apple, Spotify, YouTube or Substack!
When we first started talking about the ‘digital divide’, a couple of decades ago, it was used to describe the difference between those who used computers, and those like my grandparents, to whom this new technology was completely alien.
Today, according to the Good Things Foundation’s estimates,1 there are around 1.5m people who don’t own a smartphone, tablet or laptop.
But this is not the only divide. According to Lloyds, 7.9m people lack what it considers the eight ‘foundational’ digital skills – such as knowing how to use two-factor authentication, or knowing how to spot a scam email.
I think the digital divide is a pretty useful concept. Especially if your job is designing new digital systems, as it reminds you to take into account the skills and interests of the less digitally savvy.
But I also think it’s useful because I wonder if we’re starting to see the emergence of the digital divide for the next generation. And that divide is, of course, defined by the gulf between people who are using the new AI tools that have emerged in recent years, like ChatGPT, Google Gemini, and so on – and those who are not.
I was reminded of this when I saw this post on Bluesky from the American political commentator Molly Jong-Fast.2
What struck me about this was that, well, it just seemed completely alien to me and my experiences. Molly and I do sort-of similar jobs. Though she is obviously much more successful than I am, fundamentally we write our opinions for a living, occasionally speak them into a microphone, and spend much of our days sat at a computer, tapping at a keyboard.
And far from not seeing any value in AI, I’ve found myself using ChatGPT… more or less constantly.
Leaving aside debates about whether the investments by Big Tech are sustainable (maybe not!), or whether or not the Tech Bros are a malevolent force (maybe they are!), and so on, I’ve reached a point where I think it is basically impossible to not improve your life using these tools.
This is for much the same reason that, say, a washing machine is obviously useful, even if Big Washing Machine were run by some of the most annoying people on Earth, who wield immense political influence. The reality is that if a washing machine suddenly appeared in your house, it would be an instantly useful device that would make your life better.
This is what has happened with AI.
And I’m far from the only person who has experienced this utility. According to Sam Altman, ChatGPT has 800 million weekly active users, and even if that figure is an overstatement, or putting an extreme amount of spin on the ball,3 the reality is that ChatGPT is the fastest growing consumer product of all time.
So frankly, if I were a worse person, I might have meanly snarked back to Molly that this is more of a ‘you’ problem, as clearly an enormous number of people have already discovered that AI is extremely useful.
But I know that Molly is not the only person who holds this opinion. In fact, in my profession and social circles I am definitely an outlier for my naive AI boosterism.
So I thought that this week I’d do something more constructive, and explain a little about how I, personally, have been using AI, and how I think it has improved my life.
And to make my point clearly, I’m going to stick to a relatively narrow definition of AI here. I’m talking specifically about Large Language Models (LLMs), the most controversial technology that falls under the AI umbrella, rather than using it to describe machine learning more broadly.
Like mildly contrarian takes on politics, policy, tech and media? Then you’ll like my newsletter! Subscribe (for free!) to get more like this direct to your inbox.
The big list
So here’s a non-exhaustive list of the ways I’ve used LLMs – specifically, ChatGPT in most circumstances – in recent weeks:
Research and explaining things
What I’ve noticed recently is that ChatGPT has become essentially my new Google, to an extent that should terrify the latter’s shareholders.
This is for a couple of reasons. First, ChatGPT often gives me better responses than a traditional search engine, because it crafts an answer for the very specific thing I want to know.
For example, here’s something I was idly curious about just the other day.
When a new Boeing or Airbus rolls off the production line, how do they test it with humans? Do they take parachutes? How rigorous are the tests? What about when a new plane model is invented and has never been flown before?

A stupid question, for sure, but that’s how my brain works. And yes, I could have spent several minutes digging into search results, essentially piecing together the answer on my own. But this was quicker: I got the information I wanted, and my curiosity was satisfied.
However, it is not just about speed. The other reason I’m consistently choosing ChatGPT – and, as someone who makes their living from content, this scares me – is that in many circumstances a chatbot is preferable to the web, particularly on mobile. The interface is cleaner, it’s far less unwieldy than opening ten browser tabs, and I can ask follow-up questions or dig for specifics by essentially just typing a text message.
And over the course of this year, particularly since the release of the o1 and o3 models, I’ve found myself using these capabilities for serious work.
For example, whenever I start out on a new piece now, like writing about the Sunday trading laws, I’ll have ChatGPT pull together a detailed ‘deep research’ summary, such as having it detail previous attempts to change the law.
Similarly, if I’m interviewing a public figure on The Abundance Agenda, I have ChatGPT put together a detailed briefing on their political views and policy positions relating to ‘abundance’-type issues, which isn’t always clear from Wikipedia pages.
And sure, what ChatGPT is producing might not be entirely correct (more on that shortly), but its output is a ‘good enough’ starting point, and it cites sources, so I can easily click, verify and further research anything I’m planning to use.
Content production
Don’t worry, dear readers, all of the words you are reading now are written by me. Probably at around midnight, which, for whatever reason, my brain has decided is my most productive working hour.
However, when you create content for a living – whether text-based articles or podcasts – there are tonnes of annoying things to do in addition to the fun writing or recording part. And ChatGPT helps me take care of much of it.
For instance, before publishing anything on here now, I’ll typically paste it into ChatGPT along with the following prompt:
Check the below for spelling, grammar, missing words and flow (UK English). Do not critique the tone. Do not repeat it back to me, just list the required changes.

This goes far beyond what pre-AI spell and grammar checkers were capable of. For a start, it will reliably identify missing words – the words my brain skips when I read back what I’ve written, because I’m too familiar with what I’m trying to say, rather than what is actually on the page.4
However, more recently with ChatGPT’s ‘Thinking’ mode, I’ve discovered that the AI can do something even smarter. Before worrying about grammar, I’ll usually paste in my piece as plain text and simply say “Factcheck this”, and it will go through the text and identify any empirical claims I am making, and check them against what it can find on the web.
I’m sure it doesn’t catch everything – but I can personally attest that it is extremely thorough, and it definitely reduces errors. And when you’re a one-man band like I am, it’s a much more convenient second pair of eyes than asking my poor, beleaguered partner to read through several thousand words after she’s already had a long day at work.5
Coding and tricky computer stuff
I’m a hobbyist coder, which means I know just enough to be dangerous, though I could never write code professionally. In fact, my language of choice is PHP, which is a bit like saying that I’m a car guy because I drive a Nissan Micra.
Anyway, because of this there are still occasions when I want to code up a little script to do something useful, like crunch through some data or use an API to do something clever. This used to be an extremely manual process, which typically involved several hours of me raging at my computer. But today, ChatGPT does almost all of the heavy lifting of writing the code.6
And don’t get me wrong, the code isn’t always perfect first time, but because I speak ‘conversational’ developer, I can give the model precise instructions and detailed feedback as we7 iterate until the code works the way I want it to.
Finally, on a similar note, I’ve also found ChatGPT invaluable in doing other miscellaneous, slightly annoying computing tasks like writing unwieldy terminal commands.
For example, every week after making the podcast, I have to create a video version to upload to YouTube. I could fire up my video editing software, but it’s much quicker just to ask ChatGPT in plain English to write me an ffmpeg command to combine that week’s audio file with a static image.
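For the curious, the command it produces is typically something along these lines – a sketch with hypothetical filenames, not the exact command from my workflow:

```shell
# Combine a static cover image with an episode's audio into an MP4.
# -loop 1 repeats the single image for the duration of the audio;
# -tune stillimage optimises x264 for a static picture;
# -shortest stops encoding when the (shorter) audio stream ends.
ffmpeg -loop 1 -i cover.png -i episode.mp3 \
  -c:v libx264 -tune stillimage \
  -c:a aac -b:a 192k \
  -pix_fmt yuv420p -shortest episode.mp4
```

The `-pix_fmt yuv420p` flag is there because YouTube and most players choke on the pixel format ffmpeg would otherwise pick for a PNG input.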
Travel and logistics
Similar to general research, I’ve found that ChatGPT is incredibly good for holidays and travel.
This isn’t surprising – when you visit an unfamiliar place for a few days, it is unlikely that you’ll stray too far from the beaten track. Which means “Tell me the best museums to see in Mexico City, and their opening times” yields similar results to what you might find on Google – albeit presented in a cleaner format, on a page that isn’t bombarding you with adverts and nonsense.
However, where ChatGPT has a huge advantage is that it genuinely knows a lot about my interests, so I can ask it to make recommendations that are “relevant to my interests”. In my case, this is a shortcut to recommendations not for the best nightclubs, but for interesting political and historical things to see – as well as a suggestion to explore the Bus Rapid Transit system. Fantastic.
And here there’s also a good example of ‘agentic’-style behaviour – the idea that we’ll be able to ask AI to do something, and have it go off and work on its own.
So a few months before heading to Mexico, we wanted to book tickets to see some Lucha Libre wrestling, because of course. Tickets were not on sale at the time, but rather than make a note in my calendar to check again later, I simply asked ChatGPT to monitor the mostly Spanish-language internet, and to let me know when something was available.
A couple of weeks later, I got the magic email saying tickets were available and was able to book, which resulted in an incredibly memorable evening out.8
Most significantly though, I think the real value of ChatGPT for travel is… when actually travelling. What I love about this technology is that I can essentially point my phone at something, and learn more about it in an instant.
For example on our final day in Mexico City we stumbled upon a small protest blocking the street. I’m sure I could have spent a few minutes with my face buried in my phone, trying to figure it out – but it was much quicker to snap a photo and ask ChatGPT to do it for me.9
And of course, this also worked for everything I saw, or every question that was triggered by my unfamiliar surroundings. Why is a convenience store chain so ubiquitous? Who is the guy on the mural? How robust is Mexican democracy and rule of law?
Genuinely, I think AI is transformative for just learning about your surroundings – and I can’t wait until I have smart-glasses that can do this more passively, without needing to pull out my phone first.10
Exploring stupid ideas
I think what I love the most about ChatGPT is that it is like having a permanent collaborator who I can just explore ideas with. I’ve realised that now this technology exists, I can ask it about almost any stray thought that enters my brain, and have surprising and interesting conversations about it. And it will never get tired of my nonsense.
For example, here’s something that inexplicably entered my brain the other day:
Imagine a world where there was no technological barrier to making nuclear weapons – where any private individual with, say, £1,000 could easily acquire the parts and manufacture a bomb equivalent to Hiroshima. In such a world, how would order be maintained? Could there ever be a legal or regulatory regime that allows society to grow, or even allows humans to survive as a species?

Rather than just roll its virtual eyes and wonder what is wrong with me, it took the thought experiment seriously, and as a result I had a genuinely interesting conversation about Nick Bostrom’s vulnerable world hypothesis and the implications of the above scenario.
For example, ChatGPT outlined how any such society would need to be a surveillance panopticon, where ‘safe zones’ would require continuous compliance checks, and where every potential component would need an audited chain of custody.
Most interestingly, though, it suggested that the way to reliably force compliance would be to basically split society into sectors, and for the state to administer collective punishment on inhabitants when rules are breached – to incentivise peer-enforcement.
Anyway, I’ve gone dangerously off-topic, but my point is that despite the oft-heard criticism that AI cannot be creative, it really is a great tool for simply throwing around ideas and essentially shooting the shit in a somewhat mind-expanding way.11
Writing this list
Oh, and finally, I used ChatGPT to write this list. Not the text – but the headings, to group my prompts into categories, so I could get a better sense of the things I like to ask about. Much easier than doing it myself!
Different experiences?
Phew. So that’s a snapshot of how I use AI, and why it is now such an integral part of my workflow. But I guess the most interesting question is… why is my experience so sharply divergent from that of seemingly most people on my timeline? Am I doing something different?
I think the divergence comes down to two specific factors.
First, there’s the underlying model I’m using to answer queries.
Frankly, I suspect many people who have decided that AI is unreliable, or does not work well enough for these sorts of use-cases, have not used the latest models available. It’s absolutely true that the first iterations of ChatGPT (et al) suffered from excessive hallucination, and often gave misleading or unreliable answers.
But – for my use cases at least – the latest models are light-years ahead of their predecessors. Specifically, for all of the above cases, I’ve almost exclusively used ‘GPT-5.1 Thinking’ – OpenAI’s most sophisticated model. The ‘Thinking’ part is crucial, as there are essentially two types of models: ‘instant’ and ‘reasoning’.
‘Instant’ models, like the basic GPT-5.1, are designed to respond quickly. It’s why the AI-powered results at the top of Google searches are so crap. Instead of forcing users to wait a minute for the results page to load, Google’s instant version of its Gemini model just farts out its quickest guess, which is often terrible.
However, GPT-5.1 Thinking is a ‘reasoning’ model. Instead of responding instantly, the AI essentially loops its output back into the model several times, which has the effect of the machine ‘reasoning’ through a response – similar to how a paragraph typed once and sent immediately might be riddled with errors, while reading it through a few times makes it better.
And in my subjective experience, this makes a huge difference to the quality of responses. I think it’s genuinely astonishing how good they are, and how rarely I encounter hallucinations or other misunderstandings.
In fact, I was reminded of this the other day, when my partner was attempting to install a new door handle on our bathroom. The mechanism was not coming apart like the instructions suggested, so I decided to use ChatGPT’s voice and video mode, to see if it had any suggestions.
This is a feature which only uses the ‘instant’ version of 5.1, as it is designed for real-time conversations, so it prioritises speed, and I quickly found that the robot became ‘stuck’, looping the same unhelpful suggestions instead of offering any useful advice.12
So the type of model really matters here.
This experience also brings me to the other reason I think I am much happier with the results I get. And that’s the types of things I ask, and how I process the responses.
Clearly, from the above, there are many things I am not asking. I am not a scientist working in drug discovery, or a mathematician looking for complicated, precise proofs. I’m not even looking for particularly novel ideas – that’s the value I hope that I, as a human writer, can still provide!
As we now know, there are definitely certain classes of questions that LLMs particularly struggle with. ChatGPT might be able to offer detailed, accurate notes on a lengthy government report, but it might still struggle to count the letters in “Strawberry”.
I don’t think this is too surprising. After all, we know that different parts of the brain handle different types of thinking. If ‘all’ LLMs can simulate is one particular type of reasoning… that’s still an incredible new capability.
But this is also why when I use ChatGPT, I know that I still have my own brain. Because I’m familiar with the flaws of LLMs, and know not to 100% trust responses, I can manage my expectations about what the quality of any given answer is likely to be.
For many queries, perhaps I’ll deem the LLM response ‘good enough’ to not require further investigation – I trust ChatGPT to generate a reasonable list of interesting sights in Tokyo or to explain to me how photosynthesis works. But if the query is for something where accuracy and precision matter, such as when I am writing, I can scrutinise each response using my own reasoning skills, treat extraordinary claims sceptically, and check the references to ensure the sources are credible, and that their content is accurately reflected in the AI answer.
A better life
Despite having written several thousand words in response to a Bluesky post, I can’t actually be sure what Molly was trying to say with her post. It was probably dashed off as an idle thought, not a serious analysis.
But if we do take it seriously and want to steel-man the perspective, we could suggest she was weighing her judgement: arguing that the direct utility of these new tools does not outweigh the potentially bad downstream consequences of AI for society.
Or perhaps she has just never sat down with ChatGPT and discovered value in it like I have. It doesn’t really matter, as I was mostly using her post as a framing device for this piece.
But in any case, our sharply different subjective experiences do make me think this will be the new digital divide. Whether or not specific AI companies survive, the reality is that this new technology is not going to go away. Like the washing machine, humans have invented something new and demonstrably useful, and whether we like it or not, it is going to be a permanent part of our future.
Like mildly contrarian takes on politics, policy, technology, media and more? Then you’ll like my (human-written) newsletter. Subscribe (for free!) to get more direct to your inbox.
What a name for a charity! Hope more people donate to them than the Bad Things Foundation.
She was born in 1978, so is more of late-Gen-X than millennial, but close enough.
For example, perhaps that figure includes people using the OpenAI models via the API, or via Siri on their phone – when they might not be consciously using ChatGPT.
The note about tone is because earlier iterations of the model started offering some pretty unsparing critiques of my writing, and would tell me to remove jokes or references to make my writing more ‘professional’. Pfft!
Sometimes when I am struggling with a piece, I will paste it in and ask ChatGPT “Is this shit?”, and in its usual smiley-and-encouraging way, it will assure me that it is not, but then offer some pointed and genuinely useful suggestions around structuring my arguments, or which sections I need to make stronger.
I hope I’ll never have to write a JOIN query or a Regex ever again.
I feel like describing myself and ChatGPT as “we” is a sign of… something.
It won’t be long until ChatGPT is able to make the booking on its own.
It turns out it was a protest of indigenous street vendors, who were unhappy with how the state was treating them.
I’m sorely tempted to buy a pair of the new Meta Ray-Ban Display glasses when they go on sale over here, even though (a) as a first-generation product it will be crap, and (b) as a Meta product it will only let me use Meta’s Llama model, rather than GPT-5.1 or Google’s Gemini 3.
After our discussion about the implications, I even had it write me what I think is a half-decent short story, set in this imagined world.
In the end, Liz discovered that the complex, technical fix for the door handle was to whack it with a massive mallet.