While I share your dislike of snake oil salesmen, I am much less enthusiastic about this effort and I think you have taken an overly-optimistic view of the possible second-order effects of this kind of highly-automated and effective regulatory enforcement.
You say "It’d be like refusing to use nuclear power, just because the same scientific principles can be harnessed to build nuclear bombs" - I think this is a telling comparison, but not for the reason you meant. The *very real and dangerous* pathway from nuclear energy to nuclear weapons could be one of a small number of legitimate reasons to oppose the development of nuclear power in countries that do not already have nuclear weapons; the ostensible purpose of the Atoms for Peace programme and the treaties it spawned was to prevent people from moving along that pathway, because, for the most part, we all agree that more people having nuclear weapons is bad!
Similarly, there is an obvious pathway from the 'narrow use-cases' you describe to the mass identification and prosecution of 'problematic' speech, or other uses of AI to enforce laws that were written on the assumption that they would be sporadically enforced by human policemen (not to mention any future laws that are currently unthinkable, but might become attractive to a politician once they become enforceable via AI). You write, correctly, that "it will be important to build new norms around when pro-active bullshit detector regulation is acceptable or not", but omit the key point: that we *must* establish those norms, and in a more serious and lasting manner than social convention, *before* we start down the path.
To return to your nuclear analogy: Eisenhower recognized that the genie was escaping from the bottle, and that nations without nuclear bombs would eventually get them whether the secrets of nuclear power were released or not. And so, many serious people worked very hard over many decades to create an international framework that would decouple these things, blocking the pathway, in which nations could get assistance with the development of nuclear power in exchange for externally-enforced commitments that they would not proceed down the pathway to nuclear weapons. We need something similar in spirit here - if government is to be given this power, as you advocate, it must be under tight controls. The benefits of 'narrow use-case' applications are, like those of nuclear power, significant - but they should not be an unqualified excuse to run straight for the enrichment programme.
Presumably we're about to see an arms race which puts email spam blocking to shame.
"ChatGPT: Write me a convincing argument that technically satisfies Advertising Standards rules X through Y, whilst also conveying the impression that what I'm advertising is a valid and working treatment for baldness"
Or thousands of bad actors flooding the field with AI-generated noise to gum up the works and waste regulators' time chasing ghosts. I'm not optimistic about the usability of the internet for anything even five years from now.
I feel like in at least this use case, the one benefit is that most of these "treatments" require some sort of physical location to perform the "treatment". If you exclude ones that could be online-order homeopathic remedies from the regulation criteria, and focus on the ones that can be tied to an operating business at an address, it'll be hard to avoid a regulator (or worse, HMRC if they're an unregistered business) knocking on your door.
Several years ago there was a browser extension called FishBarrell that let you fill in the required forms to submit quackery websites' details and report them to the ASA. I used it back then. The problem then, which persists at the ASA's end, is that they are painfully slow in actually taking action on reported sites. So yes, if you flooded them with reports, they would need an automated process at their end to speed up resolution of cases. But I suspect each case would still require human verification, and I doubt the ASA has the staff to follow through on every individual report, even in the glacial manner they usually manage.
Sorry mate, but I'm 100% certain that AI will be used 1,000 times more often to push and promote quack medicine than to prevent it.