Much of the suspicion fell on footage that showed no signs of A.I. tampering. One video, of the director of a bombed hospital in Gaza giving a news conference, was dismissed by some people as “A.I. generated” even though the news conference was filmed from different angles by multiple sources.

There were also instances that were difficult to classify. The Israeli military released a recording of a wiretapped conversation between two Hamas members, but some listeners believed the audio had been spoofed. The New York Times, the BBC, and CNN reported that they had yet to verify the conversation.

To separate authentic content from A.I. fabrications, some social media users relied on detection tools, which claim to identify digital manipulation. These tools have proven unreliable, however: a test by The Times found that image detectors sometimes misdiagnosed obvious A.I. creations or labeled authentic photos as fake.

During the war, Mr. Netanyahu shared a series of disturbing images on X, claiming they depicted babies murdered and burned by Hamas. When conservative commentator Ben Shapiro amplified one of these images on X, he was accused of spreading A.I.-generated content.

One particular post, which received over 21 million views before being taken down, claimed to provide evidence that an image of a baby was fake. It included a screenshot from AI or Not, a detection tool, identifying the image as “generated by AI.” However, the company later corrected this finding on X, stating that the result was inconclusive due to compression and alterations to obscure identifying details.

AI or Not is currently working to indicate which parts of an image are suspected to be A.I.-generated.

A.I. detection services could be part of a larger toolkit, but they are dangerous when treated as the final word on content authenticity, said Henry Ajder, an expert on manipulated and synthetic media, who described deepfake detection tools as a false solution to a complex problem.

Instead of relying solely on detection services, initiatives like the Coalition for Content Provenance and Authenticity, along with companies like Google, are exploring methods to identify the source and history of media files. These solutions are not perfect, as researchers have found existing watermarking technology to be easily removable, but they could help restore confidence in the authenticity of online content.

Chester Wisniewski, an executive at cybersecurity firm Sophos, believes that trying to prove what’s fake will be futile. Instead, he suggests focusing on validating what’s real.

For now, social media users trying to deceive the public rely less on photorealistic A.I. images than on old footage from previous conflicts or disasters, which they falsely portray as the current situation in Gaza, said Alex Mahadevan, the director of MediaWise, the Poynter media literacy program. People will believe anything that confirms their beliefs or makes them emotional, he said.
