Fake news: When AI fuels the controversy over news images
In the era of online disinformation, social networks have become veritable battlefields where truth and manipulation wage a merciless war. One of the weapons of choice in this fight is artificial intelligence (AI): with it, anyone can create realistic images and videos capable of deceiving even the most discerning viewers.
It is in this context that accusations were recently leveled at Israel over the release of an allegedly AI-generated image showing a burned child, said to have been killed by Hamas during the October 7 attacks. Many netizens claimed the image was fake and had been created by artificial intelligence, basing their claims on the results of a detection tool called AI or Not.
However, experts and the company behind the tool quickly cast doubt on these claims. The result returned by AI or Not turned out to be a false positive: image analysis specialists confirmed that the photo had not been generated by AI, and the accusations of manipulation leveled at Israel were refuted.
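The false positive at the heart of this story can be illustrated in general terms. Many detection tools reduce their verdict to a single confidence score compared against a threshold, and any such classifier will occasionally flag an authentic image. The sketch below uses an invented scoring setup and made-up numbers purely for illustration; it does not reflect how AI or Not actually works internally.

```python
# Minimal sketch of a thresholded "AI-generated?" classifier and why it
# can produce false positives. All scores below are invented examples.

def classify(score: float, threshold: float = 0.5) -> str:
    """Label an image 'ai-generated' if its detector score crosses the threshold."""
    return "ai-generated" if score >= threshold else "authentic"

# Hypothetical samples: (ground-truth label, score the detector assigned)
samples = [
    ("authentic", 0.12),
    ("authentic", 0.62),     # a real photo scoring high -> false positive
    ("ai-generated", 0.91),
    ("ai-generated", 0.47),  # a generated image scoring low -> false negative
]

false_positives = sum(
    1 for true_label, score in samples
    if true_label == "authentic" and classify(score) == "ai-generated"
)
print(f"false positives: {false_positives}")  # -> false positives: 1
```

The takeaway is that a lone score from a lone tool is weak evidence either way: serious verification cross-checks the verdict with human experts and other forensic methods, as happened in this case.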
Furthermore, other rumors circulated claiming that the image of the burned body had been created from a photo of a puppy. An image analysis expert explained that the puppy photo was itself the doctored one, not the original.
It is therefore important to step back and verify information before sharing it or taking it at face value. In this case, the spread of false claims fueled the controversy around these tragic images, amplifying tensions and disinformation in an already complex conflict.
In conclusion, the use of artificial intelligence to create realistic images and videos may seem alarming, but AI can also be used to debunk fake news and combat online manipulation. It is essential to remain vigilant and verify sources before sharing sensitive information, so as not to contribute to the spread of disinformation and to help maintain a reliable, balanced information environment on the internet.