Beware! AI Chatbots create fraudulent news websites: Reports

The majority of the websites are content farms that produce low-quality content to bring in advertising.

Written by Annesha Barua | May 3, 2023, 1:06 PM

AI Chatbots: A report published by NewsGuard, a news-rating group, found dozens of news websites generated by AI chatbots, raising concerns about the technology's potential use in fraud. The 49 websites, which were independently reviewed by Bloomberg, run the gamut: some are disguised as breaking news sites, while others focus on lifestyle and celebrity news or publish sponsored content.

However, none disclose that they are populated using AI chatbots such as OpenAI's ChatGPT and Google's Bard, which can generate detailed text based on simple user prompts. The majority of the websites are content farms that produce low-quality content to bring in advertising. The websites are based all over the world and are published in several languages, including English, Portuguese, Tagalog, and Thai, NewsGuard said in its report.


Several instances documented by NewsGuard showed that the chatbots generated falsehoods for published pieces. For example, in April alone, CelebritiesDeaths.com published an article titled "Biden dead. Harris acting President, address 9 a.m."

Another site called TNewsNetwork published an unverified story about the deaths of thousands of soldiers in the Russia-Ukraine war, based on a YouTube video. The findings are particularly challenging for Google, whose AI chatbot Bard may have been used by some of the sites and whose advertising technology generates revenue for half of them.

The report suggests that companies like OpenAI and Google should take care to train their models not to fabricate news. "Using AI models known for making up facts to produce what only look like news websites is fraud masquerading as journalism," said NewsGuard co-Chief Executive Officer Gordon Crovitz. OpenAI didn't immediately respond to a request for comment, but has previously stated that it uses a mix of human reviewers and automated systems to identify and enforce against the misuse of its model, including issuing warnings or, in severe cases, banning users.

The actors pushing this brand of fraud "are going to keep experimenting to find what's effective," according to Noah Giansiracusa, an associate professor of data science and mathematics at Bentley University. "As more newsrooms start leaning into AI and automating more, and the content mills are automating more, the top and the bottom are going to meet in the middle" to create an online information ecosystem with vastly lower quality.

NewsGuard researchers used keyword searches for phrases commonly produced by AI chatbots, such as "as an AI large language model" and "my cutoff date in September 2021," to find the sites. The researchers ran the searches on tools like the Facebook-owned social media analysis platform CrowdTangle and the media monitoring platform Meltwater. They also evaluated the articles using the AI text classifier GPTZero, which determines whether certain passages are likely to be written entirely by AI.
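To illustrate the keyword-search part of that approach, here is a minimal sketch (not NewsGuard's actual tooling, and separate from platforms like CrowdTangle, Meltwater, or GPTZero) that scans article text for the kind of boilerplate phrases chatbots leave behind when their raw output is published unedited. The phrase list beyond the two examples quoted in the report is an assumption for demonstration purposes.

```python
# Illustrative sketch: flag article text containing telltale chatbot boilerplate.
# The phrase list mixes the two phrases quoted in the report with assumed variants.
import re

TELLTALE_PHRASES = [
    "as an ai large language model",      # quoted in the NewsGuard report
    "my cutoff date in september 2021",   # quoted in the NewsGuard report
    "as an ai language model",            # assumed variant
    "i cannot fulfill this request",      # assumed variant
]

def find_ai_telltales(article_text: str) -> list[str]:
    """Return any telltale phrases found in the given article text."""
    # Collapse whitespace and lowercase so phrase matching is not broken by line wraps.
    normalized = re.sub(r"\s+", " ", article_text).lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in normalized]

if __name__ == "__main__":
    sample = (
        "Sorry, as an AI language model I cannot complete this prompt because it "
        "requires information beyond my cutoff date in September 2021."
    )
    hits = find_ai_telltales(sample)
    print(f"Flagged phrases: {hits}" if hits else "No telltale phrases found.")
```

A check like this only catches the crudest cases, which is why the researchers also ran the articles through a classifier such as GPTZero to estimate whether passages were likely written entirely by AI.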

Google said that the presence of AI-generated content is not inherently a violation of its ad policies, but that it evaluates content against its existing publisher policies. The company regularly monitors abuse trends within its ads ecosystem and adjusts its policies and enforcement systems accordingly.

- With inputs from agencies
