It is increasingly difficult to determine whether an image is real or whether it was generated with artificial intelligence. In some cases, AI-generated content carries a watermark; in others, it shows small defects that can be spotted by observing the image carefully. When doubt remains, however, there are dedicated tools capable of recognizing "fake" images.
Ai or Not
Ai or Not is an online tool that lets you upload and analyze an image to determine whether it is original or created by artificial intelligence. It promises to recognize output from the most sophisticated AI image generators, including Midjourney, DALL-E and Stable Diffusion.
SynthID
Google has also developed a tool for recognizing artificially created images. SynthID, currently available to Google Cloud users, embeds a digital watermark directly in the pixels of images generated with Imagen. The mark is invisible to the human eye, but it allows an image to be identified as AI-generated.
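SynthID's actual watermarking scheme is proprietary and far more robust than anything shown here. As a toy illustration of the general idea of an invisible pixel-level watermark, the sketch below hides a bit string in the least significant bit of each pixel (an assumption made for illustration, not Google's method):

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the first pixels."""
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the least significant bits."""
    return pixels.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)    # toy grayscale image
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # watermark payload

marked = embed_watermark(img, mark)
recovered = extract_watermark(marked, mark.size)

assert np.array_equal(recovered, mark)  # the payload survives intact
# each pixel changes by at most 1 out of 255: imperceptible to the eye
assert np.abs(marked.astype(int) - img.astype(int)).max() <= 1
```

A watermark like this would not survive re-compression or cropping, which is exactly why production systems such as SynthID use more sophisticated, robust embedding.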
Hugging Face
The open-source web app developed by the Hugging Face community lets you determine whether an image was generated by AI. Specifically, it returns the probability, expressed as a percentage, that the analyzed image was made by a human being.
Illuminarty
Illuminarty offers a wide range of features: it recognizes artificially generated content, identifies which AI model was used, and highlights which sections of the image were generated. The basic functions are free, while the more advanced ones, intended for professional use, are paid.
FotoForensics
FotoForensics lets you either upload an image from your PC or paste the URL of content from the web. The portal offers tutorials on the service's features, including "Error Level Analysis", which reveals whether different areas of an image have different compression levels: if the levels are uneven, the image has probably been modified.
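The principle behind Error Level Analysis can be sketched in a few lines. Real ELA re-saves the image as a JPEG at a known quality and compares the result with the original; in the sketch below, JPEG compression is stood in for by simple value quantization (an assumption made to keep the example self-contained). A region whose compression history differs from the rest of the image leaves a visibly larger residual:

```python
import numpy as np

def recompress(img: np.ndarray, step: float) -> np.ndarray:
    """Stand-in for re-saving as JPEG: quantize pixel values with a fixed step."""
    return np.clip(np.round(img / step) * step, 0, 255)

def ela_map(img: np.ndarray, step: float = 16.0) -> np.ndarray:
    """Error level: per-pixel difference between the image and a recompressed copy."""
    return np.abs(img - recompress(img, step))

rng = np.random.default_rng(1)
# A photo that has already been compressed once: re-compressing it changes nothing.
photo = recompress(rng.integers(0, 256, size=(8, 8)).astype(float), 16.0)
# Paste in a patch with a different compression history (value 37 is not
# a multiple of the quantization step, so it will leave a residual).
photo[2:4, 2:4] = 37.0

ela = ela_map(photo, 16.0)
print(ela[0, 0], ela[2, 2])  # untouched area vs. edited patch: 0.0 5.0
```

The untouched background produces zero error (its values are already on the quantization grid), while the pasted patch stands out, which is exactly the unevenness FotoForensics highlights.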