
Meta and OpenAI scramble to stop AI-generated images posing as real ones

Watermarks and labels attempt to identify false images

Andrew Griffin
Wednesday 07 February 2024 17:54 GMT
Switzerland Davos Forum (Copyright 2024 The Associated Press. All rights reserved)

Meta and OpenAI are both scrambling to develop ways of labelling images that have been generated by artificial intelligence.

The two companies are adding labels – hidden or otherwise – that should allow people to track the source of an image or other piece of content.

OpenAI will add new features to ChatGPT and DALL-E 3 that place a tag in an image's metadata, making clear it was created with AI. The tag uses the C2PA standard, which aims to let images carry information about how they were created, and which is also being adopted by camera companies and others who make tools for generating images.
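C2PA provenance data travels inside the image file itself; in a JPEG it is carried in APP11 segments as JUMBF boxes labelled "c2pa". As a rough illustration of what a checker could do, here is a minimal Python sketch that scans a JPEG's segment headers for that label. The file name is hypothetical, and a real verifier would parse and cryptographically validate the full manifest rather than just test for its presence:

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic check: does this JPEG carry a C2PA manifest?

    C2PA embeds provenance in JPEG APP11 (0xFFEB) segments as JUMBF
    boxes; scanning those segments for the 'c2pa' label is a crude
    presence test, not a cryptographic verification.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":              # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                   # start of scan: headers end here
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xEB and b"c2pa" in data[i + 4:i + 2 + length]:
            return True                      # APP11 segment with C2PA label
        i += 2 + length
    return False

print(has_c2pa_manifest("generated.jpg"))    # hypothetical file name
```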

However, OpenAI noted that the tag “can easily be removed either accidentally or intentionally”, so it is not a guarantee. Most social media sites strip that metadata on upload, for instance, and it can be removed simply by taking a screenshot.
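That fragility is easy to demonstrate: any process that keeps only the pixels, such as a screenshot or a re-encode, discards the tag. A minimal sketch using the Pillow library (the file names are hypothetical):

```python
from PIL import Image  # pip install pillow

# Copying only the pixel data into a fresh image drops every piece of
# metadata (EXIF, XMP and C2PA manifests alike); this is the same
# effect a screenshot or a social network's re-encode has.
src = Image.open("tagged.png")           # hypothetical tagged image
clean = Image.new(src.mode, src.size)
clean.putdata(list(src.getdata()))
clean.save("stripped.png")               # provenance tag is gone
```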

Meta’s tools will attempt to detect content on Facebook, Instagram and Threads that has been generated by AI and label it.

The social media giant said it is building the capability now and will roll it out across its social platforms in the “coming months”, ahead of a number of major elections around the world this year.

Meta already places a label on images created using its own AI. It said the new capability will let it label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, as part of an industry-wide effort to follow “best practice” and place “invisible markers” in images and their metadata to help identify them as AI-generated.
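Meta has not published how its markers work, but the general idea of an invisible watermark can be shown with a toy least-significant-bit scheme. The sketch below is purely illustrative and bears no relation to any production watermark; the file names and bit pattern are made up:

```python
import numpy as np
from PIL import Image   # pip install pillow numpy

TAG = "1010110011110000"   # arbitrary illustrative bit pattern

def embed(path_in: str, path_out: str, bits: str) -> None:
    """Toy invisible marker: hide bits in the least significant bit
    of the red channel along the top row of pixels."""
    img = np.array(Image.open(path_in).convert("RGB"))
    for i, b in enumerate(bits):
        img[0, i, 0] = (img[0, i, 0] & 0xFE) | int(b)
    Image.fromarray(img).save(path_out)          # PNG is lossless

def extract(path: str, n: int) -> str:
    """Read the hidden bits back out of the red-channel LSBs."""
    img = np.array(Image.open(path).convert("RGB"))
    return "".join(str(img[0, i, 0] & 1) for i in range(n))

embed("generated.png", "marked.png", TAG)        # hypothetical files
print(extract("marked.png", len(TAG)) == TAG)    # True
```

A real scheme spreads the signal across the whole image so it survives cropping, resizing and compression; this toy version breaks under any of them, which is exactly the adversarial problem the companies describe.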

Sir Nick Clegg, the former deputy prime minister and now Meta’s president of global affairs, cited the potential for bad actors to use AI-generated imagery to spread disinformation as a key reason for introducing the feature.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” he said.

“People and organisations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it.

“Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.

“In the meantime, it’s important people consider several things when determining if content has been created by AI, like checking whether the account sharing the content is trustworthy or looking for details that might look or sound unnatural.”

Additional reporting by agencies
