The production of fake news is becoming increasingly automated thanks to artificial intelligence, fueling a proliferation of online content that looks like real journalism but actually spreads misleading information about elections, conflicts, and natural disasters.

The number of websites hosting bogus AI-generated articles has skyrocketed from 49 to more than 600 since May, an increase of more than 1,000 percent, according to NewsGuard, an organization that tracks disinformation.

In the past, propaganda campaigns that built authentic-seeming websites relied on legions of low-wage workers or highly organized intelligence services. AI now makes it simple for almost anyone, whether a teenager in a basement or an employee of a spy agency, to create such sites and produce content that is at times difficult to distinguish from legitimate news.


According to a NewsGuard investigation, one AI-generated piece spun a fictitious account of Benjamin Netanyahu’s psychiatrist, claiming that he had died and left behind a note implicating the Israeli prime minister. The psychiatrist appears to be fictitious, but the claim surfaced on an Iranian TV program, was repeated in Arabic, English, and Indonesian on media websites, and was shared by users on Instagram, Reddit, and TikTok.


The growing churn of divisive and false content makes it harder to discern what is true, and political candidates, military leaders, and humanitarian efforts may suffer as a result. Misinformation specialists say the rapid expansion of these websites is especially concerning in the lead-up to the 2024 elections.

“A few of these websites are producing hundreds or even thousands of articles every day,” said NewsGuard researcher Jack Brewster, who oversaw the inquiry. “That is why we call it the next great disinformation superspreader.”


Generative artificial intelligence has ushered in an era in which chatbots, image generators, and voice cloners can produce content that appears human-made.

Well-dressed AI-generated news anchors spout pro-Chinese propaganda that pro-Beijing bot networks then amplify. In Slovakia, politicians running for office discovered days before voters went to the polls that their voices had been cloned to say divisive things they had never said. And a growing number of websites with generic names such as iBusiness Day or Ireland Top News pass off fake news as real, posing as legitimate outlets in dozens of languages, including Arabic and Thai.

These websites can easily deceive readers.

The story about Netanyahu’s purported psychiatrist appeared on Global Village Space, a site overflowing with articles on a wide range of weighty subjects, including the United States’ sanctions against Russian arms suppliers, the oil giant Saudi Aramco’s involvement in Pakistan, and the country’s deteriorating ties with China.

The site also carries essays from Harvard-educated lawyers, Middle East think tank experts, and its chief executive, Pakistani television news anchor Moeed Pirzada. (Pirzada did not respond to a request for comment. Two authors confirmed they had written pieces that appeared on Global Village Space.)

However, Brewster noted, mixed in among these commonplace items are AI-generated pieces, such as the one about Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard contacted the site during its investigation. According to NewsGuard, the story appears to have been based on a satirical article about the death of an Israeli psychiatrist published in June 2010.

Displaying real and AI-generated news side by side lends credibility to the misleading articles. “You have people that simply are not media literate enough to know that this is false,” said Jeffrey Blevins, a misinformation specialist and journalism professor at the University of Cincinnati. “It is deceptive.”

According to media and AI experts, websites like Global Village Space could proliferate during the 2024 election and become an effective means of disseminating false information.

The sites work in one of two ways, Brewster said. Some stories are produced manually: users ask chatbots for articles that support a particular political viewpoint, then post the results to a website. The process can also run automatically: web scrapers search for articles containing certain keywords and feed them into a large language model, which rewrites the content to seem original and to avoid accusations of plagiarism. The output is then published to the web with no human review.

NewsGuard says it identifies websites built with artificial intelligence by searching for error messages or other text that “indicates that the content was produced by AI tools without adequate editing.”
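A minimal sketch of that kind of screening might look like the following. The phrases below are illustrative examples of chatbot boilerplate left behind in unedited articles; they are assumptions for demonstration, not NewsGuard's actual criteria or tooling.

```python
import re

# Illustrative telltale strings: chatbot refusals and boilerplate that
# sometimes survive in AI-generated articles published without editing.
# (Example phrases only; not NewsGuard's real detection list.)
TELLTALE_PATTERNS = [
    r"as an ai language model",
    r"i cannot fulfill this request",
    r"as of my last knowledge update",
    r"regenerate response",
]

def find_ai_telltales(article_text: str) -> list[str]:
    """Return the telltale phrases found in an article's text."""
    lowered = article_text.lower()
    return [p for p in TELLTALE_PATTERNS if re.search(p, lowered)]

sample = ("Breaking news: As an AI language model, I cannot verify "
          "this claim about the election.")
print(find_ai_telltales(sample))  # → ['as an ai language model']
```

A real pipeline would crawl a site's article pages and flag any page where such strings appear, since a human editor would normally have removed them.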


The motives behind these websites vary. Some aim to sow chaos or sway political opinion. Others churn out divisive content to attract clicks and collect advertising revenue, Brewster said. But, he added, the ability to scale up fraudulent content so dramatically poses a serious security threat.

Technology has long fueled misinformation. In the run-up to the 2020 U.S. election, professional propaganda groups from Eastern Europe known as “troll farms” amassed a sizable following on Facebook by posting provocative material on Christian and Black group pages, reaching 140 million users a month.

Pink-slime journalism sites, named after the meat byproduct, often spring up in small communities where local news outlets have vanished, publishing pieces that favor the financiers bankrolling the enterprise, according to the media watchdog Poynter.

But those methods require more resources than artificial intelligence does, Blevins said. “The scope and scale of AI is dangerous, especially when combined with more complex algorithms,” he said. “This information war is unprecedented in scope.”

It is unclear whether intelligence services are already using AI-generated news to support foreign influence operations, but the prospect is a serious worry. “I wouldn’t be shocked at all if this was used—definitely next year with the elections,” Brewster said. “It’s hard not to see some politician setting up one of these websites to spread false information about their rival and fluffy content about themselves.”

Blevins said readers should watch for “red flags” in articles, such as “really odd grammar” or errors in sentence construction. Still, the most powerful tool is raising the media literacy of average readers.

“Educate individuals about the existence of these kinds of websites. They are capable of doing harm like this,” he said. “But also be mindful that not all sources are equally reliable. Just because a website claims to be a news source doesn’t mean it has journalists creating content.”

“Regulation is essentially nonexistent,” he continued. Governments may find it difficult to crack down on fake news without infringing on free-expression rights, leaving the task to social media companies, which have not done well enough so far.

There are simply too many of these sites to take down quickly. “It’s kind of like playing whack-a-mole,” Blevins said.

“You find one [website], you take it down, and there’s another one made somewhere else,” he went on. “You’ll never be able to catch up with it completely.”