And recently, I was interviewed for this Cybernews article about AI’s impact on something much more fantastic: UFOlogy and the search for signs of an extraterrestrial presence on Earth. The rise of ever more sophisticated and realistic AI videos could have a major impact on the UFO phenomenon and on the public’s desire to know the truth about what these mysterious objects in the sky might be. Knowing how convincing AI-generated imagery can be, the technology could either inspire a more rigorous, scientific examination of alleged alien sightings and contact, or it could do serious damage to the field of UFOlogy.
For one, those who are open-minded to the possibility that UFOs represent some kind of extraterrestrial or interdimensional phenomenon, aware of how easily fraudsters can create AI videos of virtually anything, could be dissuaded from their beliefs more effectively than they ever would be by the arguments of skeptics from science and academia. If enough UFO videos are exposed as hoaxes, and the public becomes cynical enough about video evidence of any sort, interest in figuring out the truth behind the phenomenon could fade. Once we assume that anything can be faked, a claim as extraordinary as a UFO sighting will automatically be presumed to be a hoax, and visual evidence will be dismissed as unworthy of further investigation.
However, I would add that the cynicism AI breeds toward UFOlogy should not necessarily be seen as something negative. Even those who are open-minded to the possibility of otherworldly visitation to the Earth should admit that our suspicion of AI trickery would merely demand absolutely solid, incontrovertible proof of the existence of UFOs. Disclosure will be accepted as real only once someone can produce evidence better than videos. The standard of proof for extraterrestrial visitation will simply be raised: we would need to see actual alien craft, or extraterrestrials themselves, before we could believe in their existence. So, people like Lue Elizondo, Ross Coulthart, and David Grusch would need to go beyond merely relaying claims of inside information from their unnamed sources, people who keep claiming to have seen crashed alien craft, alien creatures, and back-engineered technology but can never produce any actual physical evidence.
Governments might actually be able to use our growing suspicion of AI to further deny and obfuscate the UFO issue. Rather than merely issuing official denials of the existence of extraterrestrial craft in the skies, as governments have been doing for decades, the most effective debunking effort could use deepfake AI videos to do the job. This could involve the government creating easily debunkable AI-generated UFO videos and flooding the Internet and social media with them. After enough of these “shocking” UFO videos were exposed as hoaxes, the general public’s interest in the topic would fade, and so would calls for further investigation. In the public’s mind, there really would be no need for disclosure, or for costly investigations, into something that does not exist.
Furthermore, intelligence agencies could create AI-generated UFO videos as part of a social and psychological experiment. There would be considerable value in understanding how the modern world reacts to claims of the fantastic. Such an experiment could yield a more nuanced understanding of how people react to the unknown, to what extent they fear it, and how they form new belief systems in a world of synthetic media.
This is perhaps among the greatest dangers AI could pose to the social fabric. Questionable videos and images all around us will ultimately erode consensus reality, our collective experience of an objective world. Moreover, perfectly lifelike AI images flooding cyberspace will also dissuade people from believing any information that does not align with their existing dogmas. People could say something to the effect of, “I don’t care that the news showed me images of a war zone or a natural disaster or a crime being committed somewhere. Those images are probably AI fakes, and I don’t believe that war actually happened or that a hurricane struck anywhere.” In 1993, Larry Beinhart’s satirical novel “Wag the Dog” imagined the first Gulf War as nothing more than a hoax orchestrated by the government and Hollywood producers. Very soon, thanks to AI, we might actually find ourselves living in the world of “Wag the Dog.”
New communication technologies have always had profound impacts on people’s perceptions of reality. In the case of alien contact, we saw the kind of panic a real-sounding radio broadcast could create in 1938, when Orson Welles famously alarmed some listeners with his “War of the Worlds” broadcast. Although radio was not a new medium at the time, Welles’ use of a news-broadcast format for his dramatization of the H.G. Wells novel was unique. Some people in the audience simply could not believe that something presented on the radio could be a hoax. Welles’ audience felt perhaps the same kind of shock we are feeling today in the age of AI: how can we tell what is real when it all sounds so real?