Discussion about this post

Andrew West

Entirely agree with this! Bunch of ad-hoc theories on why there's such a tech divide:

The tech is hard: you have to be a bit technical to know what it can do. When you start, you just get a blank page. If you're not technical you're not going to know that ChatGPT can't go sort your image library for you, or send an email. The difference between research and brainstorming and actually agentic stuff isn't that clear for the general public. If you're unlucky and your first idea doesn't work, that's strong evidence that the tech sucks.

Too fast to keep up with: it's more like sport than tech. Opinions from 6 months ago are as irrelevant as me talking about the premier league as it was in March. This is bloody annoying if you think of yourself as informed, but have other things to do. The tech works fantastically in some ways, poorly in others, and there's no decent way of educating yourself on that other than investing time and effort - which few people have.

Nobody ever lost money saying 'this is shit': most tech doesn't change the world, but somebody somewhere will try to sell you it on the claim it will. Plus a lot of tech ideas will just fail, just like anything else. Being aggressively macho about tech's shitness is a perfectly viable personality/career for a lot of commentators, and they'll drag a lot of less-confident people along for the ride.

Jarvis: people compare AI to perfection, not the status quo. I blame Jarvis and Spielberg. Every single review of anything AI says "it isn't perfect", as if it would be reasonable to expect it is.

Mistakes: people in tech are used to having to bend computers to their will. They don't always do what you want, and it takes some effort. But if you're not used to that, you see a mistake and assume it's a flaw.

It must be deliberate: if you don't understand the tech, it is objectively weird to hear 'no human being programmed ChatGPT to tell this kid they should run away from home' or whatever unfortunate thing has happened. It feels like a human must have made a decision at some point down the line, when in fact that's not how it works.

The tech is weird: as you say, modern AI can win math olympiads but fail to count the number of 'r's in 'strawberry'. Those things are both true. If you're technical you can understand that it's a jagged frontier, but if you're not then it's not unreasonable to model it as a human and to think 'can't count letters in words = it's stupid and won't work for harder stuff'.

The usual commentators aren't useful: the de facto commentators in the current zeitgeist are political commentators. But in the nicest possible way, they're often just not well versed in tech stuff. It's just not their wheelhouse. But they're the smart go-to people for a lot of political personal-dynamics stuff, and have been for years, so their probably-not-so-great opinions carry a lot of weight. Plus their lens will always be techbros and rivalries and political dynamics and stocks and shares etc, all of which is irrelevant to how/whether the tech works.

Daily Mail effect: in the same way that there's always going to be a migrant with violent tendencies, or some awful person claiming child benefit for 12 kids and going on cruises, there's always going to be a schoolkid using it to cheat or some founder using AI to help you have an affair or whatever. These things loom large in the public consciousness and are assumed to be the tip of an iceberg.

AI Council: I think some people have a model of an AI council who direct all use of AI. There was a robotics company recently who were showing off their AI robot that could load the dishwasher, but it was also being livestreamed to a real person who could jump in at any point. Obviously nuts. People model this as 'look what the AI companies are doing now' as opposed to 'one company is using AI to do something mad, but there are probably lots of sensible ones going a bit more slowly'.

Normal tech doomerism: it's obviously built in to humans to think that any new tech is going to ruin the kids. AI will stop kids thinking, in the same way phones stop them reading, in the same way TV stops them going outside, in the same way D&D stops them being Christian, in the same way Walkmans stop them hearing the world, in the same way books stop them engaging with people around them, in the same way writing stops them using their memory. Maybe the fears are correct this time, who knows.

Some people get (excessively?) angry about hype: like, everyone finds it annoying when a techbro makes grandiose claims. But some people absolutely spectacularly hate this with every fibre of their being, in the same way that nobody likes paying taxes but there are some who inexplicably find it cosmically enraging. Again these people get outriders.

Copyright: genuinely a hard problem, but again gets a lot of macho fuck-ai responses which are off-putting.

Age: the old Douglas Adams quote about anything invented when you're over 35 being an affront to nature is just obviously a thing that happens.

Lefty antagonism: if what you actually want to talk about is unions and fascism, AI gives you plenty of low-hanging fruit to shoehorn your way in to new discussions.

James M

Have you ever asked ChatGPT how your usage compares to the average user? I did that the other day and it was very illuminating in showing that most people are still using it as a toy, rather than for serious work.
