A recent influx of videos supposedly showing “drones” or other spooky unidentified aerial phenomena flying over darkened US skylines appears to be, at least in part, the result of AI trickery.
Since late November, residents in New Jersey and at least five other states have reported spotting bright objects flying overhead. The sightings have stirred speculation, amplified by celebrities, commentators, and prominent public officials, that this is nefarious, experimental technology. Or aliens.
Now, several viral videos surfacing on TikTok and X over the past week are capitalizing on the panic, and they appear to exhibit the telltale signs of generative AI manipulation. Almost none of the videos reviewed by Popular Science carried any official label or disclosure from the social media platforms warning users about possible digital editing.
By searching for variations of “drones” and “New Jersey drones” on TikTok and X, Popular Science found numerous videos featuring odd distortions that suggest the content is AI-generated or digitally manipulated. In one example posted last week, which had gained 36,000 likes, a group of two dozen large, bright lights coalesces over a city skyline. Low-toned, creepy music plays in the background. A booming blue aura flashes underneath the swarm as “Large amounts of drones spotted flying over New Jersey” appears in massive text. The top-liked comment reads “that’s project blue beam,” a reference to a recurring conspiracy theory alleging that a cabal of elites plans to stage a fake extraterrestrial invasion. The video, screengrabbed below, was removed after Popular Science reached out to TikTok for comment earlier today.
Another, even more obvious example shows an orange, saucer-shaped orb levitating over New York City. As the object moves closer to the camera, it suddenly glows orange and hisses with electric sparks. A voice can be heard saying, “Wow, look at that, I’ve never seen such a strange thing in my whole life…amazing.” Other videos with hundreds of likes show a fantastical “mother ship” appearing to hover over the ocean, a fast-moving blue and red cylindrical-looking object captured by a Ring camera, and a smaller object filmed through a window pane that resembles an Imperial Interceptor from Star Wars. Collectively, these videos had hundreds of thousands of likes. All of the videos described above were uploaded over the course of the last week.
At least some of the commenters responding to these videos pointed out that they seemed AI-generated or altered. The majority of commenters, however, seemed to believe the videos were in fact proof of unexplained phenomena.
Fake images, especially of “UFOs,” go back decades. Photoshop and other digital editing tools made compelling fakes easier to produce. But the new generation of generative AI tools available online amplifies that accessibility considerably, to the point where a user can simply type in a few phrases and instantly have a video convincing enough to post to social media. The YouTuber Higher E-Learning demonstrated this by typing the simple prompt “group of drones flying over new jersey at night” into Google Video FX. After several minutes, the model churned out four different short videos showing pseudo-realistic groups of drones, some remarkably similar to videos being posted online, hovering over a night sky.
“Don’t be foolish enough to believe that everything you are seeing on [social media] is real and get all freaked out by it,” the user says in a narration. “Use your brain and realize that this technology is actually this good right now.” He goes on to plead with viewers not to use these tools to post drone footage on social media.
TikTok, which had some of the most blatant and widely seen fake videos we reviewed, “encourages” users to label content that has been “completely generated or significantly edited by AI.” Though this is optional, the company’s policies say it will automatically apply an “AI-generated” label to content in some cases. Popular Science didn’t see that label on any of the videos we reviewed, though at least one video was taken down following our request for comment.
X, formerly known as Twitter, has even more opaque rules regarding the spread of AI-generated content. The company says it prohibits users from sharing “inauthentic” material that may “deceive people or lead to harm.” X says the consequences for violating that policy “depend on the severity of the violation.” It’s unclear to what degree X is actively monitoring or removing clearly fake drone content. Reports estimate X has cut around 80% of engineers working on content moderation since Elon Musk took over the company in 2022. Complicating matters, X also has its own AI image generation model called Grok. X did not respond to our request for comment.
This isn’t the first time material made using AI has caused confusion online. Last year, viral AI-generated images of an explosion outside of the Pentagon caused a panic and even briefly sent stock prices dipping. More recently, a TikTok account reportedly hosting dozens of AI-generated images depicting exploding cities, supposedly in Ukraine, was taken offline after a CBC investigation revealed they were inauthentic. Elsewhere, misleading AI-generated material has been used to depict supposed scenes of war-torn Gaza and survivors of recent destructive hurricanes.
Possibly AI-generated material isn’t the only misleading drone and UAP-related content flooding social media in recent days, nor even the most common. Far more common were images of actual hobbyist and commercial drones being presented as supposedly nefarious. In other cases, videos taken from completely different parts of the world and shot in the past are being recirculated and presented as evidence. Several prominent videos depict what’s clearly a crashed Cessna-style plane as evidence of a “downed” drone or UAP.
What’s actually going on?
The current national obsession over all things drones and UAPs began with reports emerging from New Jersey around November 18. Since then, the Department of Homeland Security, FBI, FAA, and Department of Defense say they have received more than 5,000 reports of drone sightings, a figure far higher than normal. Though the reports began in New Jersey, they’ve since spread across the country. Speculators have attributed the apparent presence of these objects to a number of unproven theories, with some fearfully suggesting the drones may be searching for a loose nuclear weapon. Others seem convinced they are evidence of extraterrestrial life.
A joint statement released by the DHS, FBI, FAA, and DoD earlier this week attempted to pour cold water on the growing speculation. The agencies say expert analyses of these sightings determined that the vast majority appeared to be commercial and hobbyist drones, or drones belonging to law enforcement. In other cases, they said, people appear to have mistaken planes, helicopters, and even stars or planets for drones. That statement, unsurprisingly, hasn’t been enough to quell concerns from posters on social media, some of whom apparently believe the government is engaged in some form of misdirection.
The current information climate, one defined by declining trust in traditional media and rapid-fire social media posts speckled with inauthentic content, seems to be making situations like this worse. Disingenuous social media users, some of whom earn income from widely viewed content, have an incentive to capitalize on conspiracies that capture the public’s attention. While that phenomenon isn’t entirely new, easily accessible generative AI tools give those users even more ammunition to drive engagement. Efforts by critics to debunk or challenge that content are either going largely unnoticed or being brushed aside as evidence of a lack of imagination. Social media companies themselves, meanwhile, appear to struggle to respond quickly to the sheer volume of manipulated content feeding these narratives.