OpenAI developing new tools to detect AI-generated images

OpenAI, Photo Courtesy: Representational image by Levart_Photographer on Unsplash

#OpenAI, #AI, #ArtificialIntelligence

Washington/IBNS-CMEDIA: American artificial intelligence (AI) research organization OpenAI is developing new provenance methods to enhance the integrity of digital content, according to a recent blog post by the company.

The tools will reportedly enable internet users to verify whether a piece of content was generated by AI.

OpenAI said in the blog post that it was joining the Steering Committee of the C2PA (Coalition for Content Provenance and Authenticity), a widely used standard for digital content certification developed and adopted by a broad range of actors, including software companies, camera manufacturers, and online platforms.

C2PA can be used to prove that content comes from a particular source by attaching a cryptographically signed attestation tying the content to its tool of origin, according to OpenAI.
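To make the mechanism concrete: in JPEG files, C2PA stores its signed manifest inside APP11 marker segments as JUMBF boxes. The sketch below is a simplified illustration of the first step a verifier might take, scanning a JPEG's header segments for APP11 payloads; it is not a full C2PA parser, performs no signature validation, and the helper name is our own.

```python
def find_app11_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Return the payloads of APP11 (0xFFEB) segments in a JPEG byte stream.

    C2PA embeds its manifest store in APP11 segments as JUMBF boxes, so
    finding such a segment is a first hint that provenance metadata may be
    present. Simplified sketch only -- real verification must also parse the
    JUMBF boxes and check the cryptographic signature.
    """
    segments = []
    i = 2  # skip the SOI marker (0xFFD8) at the start of the file
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # no longer at a marker; entropy-coded data reached
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI marker: end of image
            break
        # Segment length field covers itself (2 bytes) plus the payload
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB:  # APP11 segment
            segments.append(payload)
        i += 2 + length
    return segments


# Usage with a tiny synthetic JPEG fragment containing one APP11 segment:
payload = b"JP\x00\x01" + b"jumb" + b"c2pa-manifest-bytes"
fake_jpeg = (
    b"\xff\xd8"                                  # SOI
    + b"\xff\xeb"                                # APP11 marker
    + (2 + len(payload)).to_bytes(2, "big")      # segment length
    + payload
    + b"\xff\xd9"                                # EOI
)
found = find_app11_segments(fake_jpeg)
```

Because the attestation is signed rather than merely attached, altering the content or the manifest invalidates the signature, which is what makes the metadata hard to fake, as the blog post notes.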

“We look forward to contributing to the development of the standard, and we regard it as an important aspect of our approach,” the OpenAI blog post read.

Earlier this year, OpenAI started adding C2PA metadata to all images created and edited by its latest image model, DALL·E 3, in ChatGPT and the OpenAI API.

The AI research company will also integrate C2PA metadata into Sora, its video generation model, when the model launches broadly.

“People can still create deceptive content without this information (or can remove it), but they cannot easily fake or alter this information, making it an important resource to build trust,” OpenAI said in the blog post, adding that as adoption of the standard increases, this information can accompany content through its lifecycle of sharing, modification, and reuse.

“Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices,” the blog post read.

According to the blog post, OpenAI is joining Microsoft to launch a $2 million societal resilience fund aimed at driving adoption and understanding of provenance standards, including C2PA. The fund will support AI education and understanding, including through organizations such as Older Adults Technology Services from AARP, International IDEA, and Partnership on AI.

Beyond investing in C2PA, OpenAI is developing additional provenance methods to enhance the integrity of digital content.

These include tamper-resistant watermarking – marking digital content, such as audio, with an invisible signal that is intended to be hard to remove – as well as detection classifiers, tools that use AI to assess the likelihood that content originated from generative models, according to the blog post.
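OpenAI has not published how its watermarking works. Purely to illustrate the general idea of an invisible signal carried inside media, here is the classic textbook technique of hiding bits in the least-significant bit of 16-bit audio samples – an inaudible change. Note the contrast with the article: LSB embedding is trivially removable, which is exactly the weakness tamper-resistant schemes aim to overcome. All names here are illustrative, not OpenAI's.

```python
def embed_watermark(samples: list[int], bits: list[int]) -> list[int]:
    """Hide one bit per sample in the least-significant bit of 16-bit PCM.

    Changing the LSB shifts each sample's amplitude by at most 1/32768 of
    full scale, which is inaudible. Toy example: real tamper-resistant
    watermarks spread the signal so it survives re-encoding and editing.
    """
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear LSB, then set it to bit
    return marked


def extract_watermark(samples: list[int], n_bits: int) -> list[int]:
    """Read the hidden bits back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]


# Usage: embed a 4-bit payload into four PCM samples and recover it.
pcm = [100, 201, 302, 403]
secret = [1, 0, 1, 1]
marked = embed_watermark(pcm, secret)
recovered = extract_watermark(marked, len(secret))
```

A detection classifier, by contrast, needs no embedded signal at all: it is a trained model that scores content on statistical traces of generative models, which is why the blog post describes the two approaches as complementary.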

“These tools aim to be more resistant to attempts at removing signals about the origin of content,” it added.

In addition, OpenAI has incorporated audio watermarking into Voice Engine, its custom voice model, which is currently in a limited research preview.