Britain seems to have taken the worst possible position on the AI/copyright debate
A binary decision needs to be made – and there is only one credible answer
POD! On YIMBY Pod this week, we speak to former TfL head of innovation Thomas Ableman about why Swiss buses and trains work in perfect harmony – and why we need to build our own ‘mini-Switzerland’ in Britain. Plus Martin digs into the wild success of the British wine industry, and how it’s all thanks to robots. Listen/subscribe here.
I thought I was going crazy during the Brexit wars.
For several years as Britain attempted to negotiate its exit from the European Union, the connection between the political debate and material reality seemed to be completely severed, to the point that there was a real sense of unreality about it.
If you followed the coverage closely, like I wasted my life doing, you’d have witnessed politicians and lobby journalists getting bogged down in Westminster parlour games about backstops and Malthouse Compromises, yet once you pushed the bullshit aside, the reality was an alarmingly simple, binary choice.
Britain had to either choose to draw the customs and regulatory border on the island of Ireland, separating Northern Ireland from the Irish Republic with a hard border, or it had to choose to place it in the Irish Sea, keeping Northern Ireland in the EU’s regulatory orbit.
One option would result in more internal trade friction with the rest of the United Kingdom. The other would result in border infrastructure dividing the island of Ireland. Both were pretty unpalatable, but those were the only options on the table – and everything else was just noise.
This remained true even when the Windsor Framework was eventually agreed, seemingly settling the issue. However, this was not a magic solution that somehow subverted the binary choice; it just obfuscated it, with procedural mechanisms like the ‘Retail Movement Scheme’ and the ‘Stormont Brake’.
This is because, fundamentally, the ‘Framework’ only holds because Northern Ireland remains de facto in the Single Market for goods, and Britain has implicitly accepted a degree of permanent regulatory alignment to ease border checks and avoid significant physical infrastructure. What nobody wants to really say aloud is that if the British government did, for some reason, decide to diverge significantly from the EU, it would just reopen the same wounds and the same divisions, and the status of Northern Ireland would be a crisis once again.
Anyway, I mention this not because I want to give us all trauma flashbacks, but because there’s another binary choice that Britain is currently weighing.
Like the status of Northern Ireland, this choice sits on a red-hot culture war issue, and is the subject of a deafening amount of noise and magical thinking.
That’s right, I’m talking about the interminable question of what to do about AI training on copyrighted works. And it’s time for everyone to get a grip, because, like Brexit, there is a clear choice – and in my view, only one credible decision we can make.
Press subscribe now before you know my opinion!
The question at hand
For Britain, the AI/copyright debate is not academic. It’s a live policy issue.
As things stand, we do not have a settled legal regime to handle what is technically referred to as Text and Data Mining (TDM) – the act of scraping existing works to train AI models.
That’s why, since 2021,[1] successive governments have been attempting to figure out the best way to do it. Should we protect the creative industries with tight intellectual property protections? Or should we unleash the AI companies to train on whatever they’d like?
It’s a question with vested interests on both sides, and in recent months, as a decision seemed imminent, both opponents and proponents have been flexing their lobbying muscles.
For example, on the pro-TDM side, Microsoft hired the agency Public First to make the case for a more permissive training regime, putting together a slick website that outlines the different policy options along with a calculation of how each would impact the British economy. It claims that the most permissive training regime could add £510bn to Britain’s GDP – around a 15% boost. Conversely, if training is highly restricted, the uplift could be limited to just £290bn.
And on the other side, the News Media Association, which represents publishers who want to retain strict copyright protections, hired Oliver & Ohlbaum to make the case that the supposed economic uplift is overstated – and they’ve put out a long document taking issue with many of the assumptions in Public First’s modelling.
So this is basically why there’s a binary choice. Whatever the government decides to do, whether it likes it or not, it is going to make a whole bunch of people extremely unhappy.
That’s probably why last month, just as it looked like the government might take a firm position, it decided to… not make a decision.
This was notable in itself, as it marked a backtrack from the government’s earlier enthusiasm for a permissive TDM regime. And it means that, for the time being at least, the government’s official position is “¯\_(ツ)_/¯”.
However, it wasn’t a total non-event, as at the same time it also published two important documents: a report on AI and copyright, which rounds up responses to the earlier consultation, and an impact assessment outlining the different policy options the government will have to choose from.
And to be fair to the government, I do understand why this feels like a difficult decision. The creative industries represent around 5.5% of the British economy in the here and now, and are an area where we punch well above our weight on the world stage – whereas conceivably, a permissive TDM regime would be placing a bet on a promising, but not-completely-proven technology.
In other words, doing anything that might disrupt this major sector of the British economy might feel unwise. And that’s before you even think about how it would inevitably lead to clips on the news of Ian McKellen, Judi Dench, and Mark Rylance et al slagging off the government.
So you can understand why Ministers may be reluctant to do anything which could conceivably harm one of our biggest cash cows.
However, I’m not sure the decision is actually that difficult once you strip away the noise.
The more I think about the issue, and the more heated I see the debate become, the more I can’t help but see a startling unreality in the discourse that reminds me of the bluster and bullshit of the Brexit process.
And though I’m about a trillion times more sympathetic on an emotional level towards the creative industries than I was towards the Brexiteers, I can’t help but conclude that the only real option we have is to let the AI companies train.
Worst opinion alert
Hold your pitchforks steady – hear me out first!
Let’s imagine a world where the rights-holders get what they want, and training AI models using copyrighted data is banned or heavily restricted in Britain. Does this prevent British creative works from being ripped off, or protect the creative industries from the AI tsunami?
Assuming the rest of the world continues to exist, no.
The reason copyright has worked effectively over the last century or so is that it has been governed by a series of international treaties like the 1886 Berne Convention and, more recently, the 1996 WIPO Copyright Treaty. These agreements enshrined mutual recognition of intellectual property rights across more than 180 countries, including all of the largest and most important players like the United States and China.
But there is no equivalent consensus for TDM. No similar agreement exists, and several major jurisdictions have already enshrined extremely permissive training rules. Japan, Singapore and the United States, for example, all allow AI training on copyrighted works to varying degrees.
And this reality means the die has already been cast. If a British work is also available in one of these more permissive countries – and it likely will be, because this is the modern world – then it is fair game to be swept up for training a model, whether we like it or not.
The only way to stop it would be for Britain – a middle power – to talk the other countries into changing their rules, and forge some massive new international agreement. And it should go without saying that this isn’t going to happen, because… why would anyone go along with it? America is home to the major AI companies, China is also competing to be the AI leader, and even in the fantasy reality where the superpowers could be persuaded to restrict training, every other country would have a strong incentive to stake out a more permissive position in order to attract some of that AI cash.
To think otherwise is, I think, like when the more delusional Brexiteers claimed that the German car makers would somehow save us, and persuade the EU to compromise the integrity of the Single Market for our sake.
Anyway, let’s go with the fantasy for one last point, and imagine a world where the British government was magically able to protect British creative works from AI training, by putting some sort of forcefield around Abbey Road, Pinewood Studios, Rockstar North and the Bake Off Tent.
Would this stop AI from transforming Britain’s creative industries? Of course not.
The reason the creative industries are such a big part of our economy is that British-made stuff is extremely popular around the world. So when AI changes what people consume, or how cultural works are produced, it is inconceivable that it won’t have a profound effect on Britain too. Regardless of whether we allow training or not, the reality is that jobs are going to change, business models are going to have to adapt, and the economics of creativity will shift.
The AI opportunity
I want to be clear, I’m not saying that creative types are wrong to feel aggrieved by the turmoil that AI will unleash on their industry. Or that it won’t feel like a violation to have your work scraped and trained on, if that’s not what you want.
But my point is that the real policy question here is not whether the anger is legitimate, but whether a restrictive TDM regime would actually protect the creative industries. I’m just not convinced that it will.
In any case, the feelings of creators are not the only thing that matters here. The more important reason to allow training is that, without it, Britain would hobble its ability to take advantage of the opportunities presented by AI. Restricting training would carry real costs, to no advantage.
So why is this so important? The problem as things stand – and the entire reason the government wants to establish a legal regime for TDM in the first place – is that if an AI company wanted to train a model here in Britain using copyrighted works, it would carry huge legal uncertainties, compared to countries where the rules are clearer.
The fear is that, without clearer permissions, this uncertainty would push innovative AI companies abroad and make Britain a less attractive place to build a tech business. Whereas if we get the rules right, there’s potentially a huge economic boost.
And this is not just an academic debate. At the moment, the government in Westminster and Sadiq Khan in City Hall are courting Anthropic, the company behind Claude, trying to persuade it to move its headquarters to London.
This could be an easy win. The company definitely has a strong reason to want to move out of the US – just weeks ago there was the spectacular fall-out with US Defence Secretary Pete Hegseth, who threatened to punish the company for not allowing its tech to be used for mass surveillance.
So this is a big opportunity. If Britain could persuade Anthropic to make the move, it would be a huge win for the British tech industry, not to mention future tax receipts if the company continues rapidly scaling. But will it make the move? Maybe, but it will surely be significantly less likely to do so if training its frontier models could land the company in hot water.
However, this argument isn’t just about the big names in AI. The uncertainty is also affecting AI deployment in ‘normal’ companies too. For example, the Treasury Select Committee has already identified a lack of clarity over liability as a problem for the financial sector. The government’s own impact assessment, linked above, concludes that “under the status quo, UK copyright law would continue to act as a significant constraint on competitive general-purpose model training in the UK and could inhibit wider AI development and adoption.”
And the assessment also reveals why a much-hoped-for compromise option – requiring AI companies to licence content from creators – does not really work in the real world either.
Leaving aside the technical difficulty (and arguably impossibility) of identifying what training data was used when generating AI responses to prompts[2] – which would be required to pay royalties – the impact assessment concludes that even this compromise wouldn’t be good for the economy:
“This could dampen UK development and adoption of AI, reduce the productivity benefits of AI and hold back economic growth across the whole economy, including the [creative industries].”
We should take this point seriously. It doesn’t mean that the creative industries should simply suck it up and just take the pain. But it does mean that we’re going to have to think more carefully about what policy tools we do have to support creators – rather than reach for something which won’t work, and will harm the broader economy in the process.
We don’t have to like it
I’m writing this knowing that some people reading will likely be raging by this point.
To be absolutely clear, I don’t love that we face this difficult, binary decision either. I, too, am a normal human who enjoys watching human-made films and reading human-written books. And hell, as you can tell from reading my newsletter, I have a direct vested interest in people continuing to pay for human-created content in the future! (Don’t forget, it’s your last chance to get 25% off annual subscriptions!)
But the above is my honest analysis of the world as it is, and will continue to be. We are in the midst of a technological transition, and even if OpenAI, Anthropic et al fail and the current bubble bursts, the fact is that we are never going to un-invent Large Language Models and generative AI.
So the disruption is coming for better or worse, and the British government alone cannot hold back the tide. Whether we like it or not, our creative industries will be reshaped by forces much bigger than ourselves.
Given this, the real choice is not about whether to ‘protect’ the creative industries or not by restricting TDM. The choice is whether we let this transformation happen to us, or whether we try to adapt to the new reality, and seize the opportunities created by the change.
And that’s why creating a legal regime that reflects this new reality makes sense. We should try to capture the potential economic uplift from AI here in Britain – so that if the technology does prove as transformative as expected, we can use some of that extra cash to, say, subsidise things that are nice to have but unprofitable, like the arts and creative works.
Creatives are not doomed
All that said, even if we accept the reality of the AI situation, I don’t think it means that human-made creative works are doomed.
That’s because I think what will prove most valuable in an economy transformed by AI won’t be the visuals on the screen, or the noises coming from the speakers – it will be the connection between the artist and consumers of their art.
I genuinely believe that even if generating ‘content’ carries zero marginal cost, and Hollywood-quality visuals can be spat out with a few taps on a phone, people will still want to pay a premium for human-made works, both online and in real life.
This is because what’s going to matter most isn’t the art in and of itself, but the parasocial connection we feel with the artists and creators we like. It’s human nature to crave connections with people, and personalised AI can never replace collective experiences like being in the crowd at a gig where everyone is belting out the same song.
I think there’s already strong evidence of this. Look at the explosion of YouTube as a platform, which makes the direct connection between artists and fans way more tangible than traditional media. If you enjoy someone’s work, you become invested in their success.
And you can tell I mean this, because this is literally my business model too. I’m betting that people will still pay to hear my opinions, not because of a utilitarian desire to obtain factual information, but because you are interested in what flesh and blood human James O’Malley has to say.
So let’s get beyond the unreality of the current debate and admit what is real, and what the choices actually are in a world reshaped by AI. Let’s not be like the Brexiteers, denying reality because it is inconvenient. And let’s figure out new ways to support the creative industries, instead of making doomed attempts to cling on to the old.
Whether you’re nodding furiously in agreement or are simply just furious, you read all the way to the end. So support human content by subscribing to my newsletter (for free!) to get more politics, policy and tech takes direct to your inbox.
[1] Amazingly, the government first consulted on this a year before the launch of ChatGPT, so top marks for foresight.
[2] A useful metaphor for why it is so hard to unpick sources is to imagine if I were to ask you to tell me what you know about World War I. You’d probably tell me a combination of things that you’ve read in books, learnt in Year 8 history, saw in Blackadder Goes Forth, or have been told by your friends during conversations in pubs over the years. But could you account for the specific source of every specific thing to give appropriate credit to each source? Of course not.