OpenAI launches tool to detect images from generative AI

OpenAI, the company behind the generative artificial intelligence (AI) products ChatGPT and DALL-E, has just launched a tool that lets researchers detect whether digital images were created by AI programs. The feature was widely anticipated, as the provenance of content posted online has become a major concern since the explosion of generative AI.

For several months, this technology has been at the heart of public debate because of the many risks it raises, from fabricated photographs to fake audio recordings, used in particular for fraud or to produce deepfakes.

A tool only for DALL-E 3

For the moment, the program is limited to images generated by DALL-E 3. "It correctly identifies about 98% of DALL-E 3 images. Less than 0.5% of non-AI-generated images were incorrectly assigned to DALL-E 3," the Californian start-up said, after running internal tests on an earlier version of the tool. The company acknowledges that the tool is not yet perfect: according to OpenAI, its performance drops when DALL-E 3 images have been retouched, and it does not reliably detect images generated by other models.
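To put those two figures in perspective, a short sketch of how a 98% detection rate and a <0.5% false-positive rate combine is given below. The mix of AI and non-AI images is a hypothetical assumption (not from the article), chosen only to show that when genuine photos vastly outnumber DALL-E 3 images, even a low false-positive rate means a notable share of flagged images are not actually AI-generated.

```python
# Illustrative only: combining OpenAI's reported rates with a
# hypothetical mix of images (the mix is NOT from the article).
TPR = 0.98    # ~98% of DALL-E 3 images correctly identified
FPR = 0.005   # <0.5% of non-AI images wrongly flagged

def flagged_precision(n_ai: int, n_real: int) -> float:
    """Fraction of flagged images that really are DALL-E 3 output."""
    true_positives = TPR * n_ai
    false_positives = FPR * n_real
    return true_positives / (true_positives + false_positives)

# Hypothetical: 1,000 DALL-E 3 images among 100,000 ordinary photos.
print(round(flagged_precision(1_000, 100_000), 3))  # → 0.662
```

Under this assumed mix, only about two-thirds of flagged images would actually be DALL-E 3 output, which is one reason such detectors are presented as research tools rather than definitive verdicts.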

OpenAI also announced that it will add watermarks to images produced by its generative AI, in order to comply with the standards of the Coalition for Content Provenance and Authenticity (C2PA). This coalition was founded by major players in the technology industry to establish technical standards for certifying the provenance and authenticity of digital content.

OpenAI thus follows a movement initiated by tech heavyweights such as the Meta group (Facebook, Instagram) and Google. Since May, Meta has been labeling AI-generated content based on the C2PA standard.
