In a series of Threads posts this afternoon, Instagram head Adam Mosseri says users shouldn’t trust images they see online because AI is “clearly producing” content that’s easily mistaken for reality. Because of that, he says users should consider the source, and social platforms should help with that.
“Our role as internet platforms is to label content generated as AI as best we can,” Mosseri writes, though he admits “some content” will slip past those labels. For that reason, platforms “must also provide context about who is sharing” so users can decide how much to trust their content.
Just as it’s good to remember that chatbots will confidently lie to you before you trust an AI-powered search engine, checking whether posted claims or images come from a reputable account can help you consider their veracity. At the moment, Meta’s platforms don’t offer much of the sort of context Mosseri posted about today, although the company recently hinted at big coming changes to its content rules.
What Mosseri describes sounds closer to user-led moderation like Community Notes on X and YouTube or Bluesky’s custom moderation filters. Whether Meta plans to introduce anything like those isn’t known, but then again, it has been known to take pages from Bluesky’s book.