If you’re even minimally attuned to the zeitgeist, you’ve heard the term “AI slop.” It’s a generalized disdain for almost anything generated by “artificial intelligence,” as though the involvement of AI at all renders the product suspect, even dangerous.

It’s easy to say that AI slop is everywhere. “AI slop is clogging your brain,” warns NPR.

The problem with this complaint is that almost no one makes the effort to define “AI slop.” Is it anything produced with AI? Or any such content appearing in social media? AI images produced for commercial advertising? Any such content made to deceive? Or only content made to deceive politically?

“While election disinformation has existed throughout our history, generative AI amps up the risks.”
— Brennan Center for Justice

Plainly, what’s needed is a taxonomy of AI slop: which manifestations are merely nuisances, which are actually entertaining, which are threats to social or political order.
There’s a range stretching from jokey but obvious fabrications all the way up to the use of AI to put words in the mouths of politicians to sway voters via deceit. So it’s time to really take the measure of AI slop: which of it is deceptive, which amusing and which merely an avoidable nuisance.
It’s certainly true, as my colleague Nilesh Christopher reports, that tools such as OpenAI’s Sora video app have reduced the cost of generating unauthorized deepfakes of celebrities, dead figures and copyrighted characters to almost nothing. Sora’s rapid uptake by users, one expert told Christopher, “erodes confidence in our ability to discern authentic from synthetic.”

But that’s at the higher end of the developments causing genuine concern.
Much of the hand-wringing about AI slop focuses on material...