The life, death and rebirth of an AI-generated news outlet

Dave Fanning, an Irish DJ and talk-show host, in Dublin on May 15, 2024. BNN Breaking had millions of readers, an international team of journalists and a publishing deal with Microsoft. But it was just an AI chop shop. (Paulo Nunes dos Santos/The New York Times)

The news was featured on MSN.com: “Prominent Irish broadcaster faces trial over alleged sexual misconduct.” At the top of the story was a photo of Dave Fanning.

But Fanning, an Irish DJ and talk-show host famed for his discovery of the rock band U2, was not the broadcaster in question.

“You wouldn’t believe the amount of people who got in touch,” said Fanning, who called the error “outrageous.”

The falsehood, visible for hours on the default homepage for anyone in Ireland who used Microsoft Edge as a browser, was the result of an artificial intelligence snafu.

A fly-by-night journalism outlet called BNN Breaking had used an AI chatbot to paraphrase an article from another news site, according to a BNN employee. BNN added Fanning to the mix by including a photo of a “prominent Irish broadcaster.” The story was then promoted by MSN, a web portal owned by Microsoft.

The story was deleted from the internet a day later, but the damage to Fanning’s reputation was not so easily undone, he said in a defamation lawsuit filed in Ireland against Microsoft and BNN Breaking. His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative AI errors.

BNN went dormant in April, while The New York Times was reporting this article. The company and its founder did not respond to multiple requests for comment. Microsoft had no comment on MSN’s featuring the misleading story with Fanning’s photo or his defamation case, but the company said it had terminated its licensing agreement with BNN.

During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing the Chicago Tribune’s self-reported audience. Prominent news organizations including The Washington Post, Politico and The Guardian linked to BNN’s stories. Google News often surfaced them, too.

A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the AI chatbot ChatGPT. BNN’s “About Us” page featured an image of four children looking at a computer, some bearing the gnarled fingers that are a telltale sign of an AI-generated image.

How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: AI-generated content is upending, and often poisoning, the online information supply.

Many traditional news organizations are already fighting for traffic and advertising dollars. For years, they competed for clicks against pink slime journalism — so-called because of its similarity to liquefied beef, an unappetizing, low-cost food additive.

Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy. Now, experts say, AI could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.

The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though AI-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use AI to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.

•••

REAL IDENTITIES, USED BY AI

“You should be utterly ashamed of yourself,” one person wrote in an email to Kasturi Chakraborty, a journalist based in India whose byline was on BNN’s story with Fanning’s photo.

Chakraborty worked for BNN Breaking for six months, with dozens of other journalists, mainly freelancers with limited experience, based in countries like Pakistan, Egypt and Nigeria, where the salary of around $1,000 per month was attractive. They worked remotely, communicating via WhatsApp and on weekly Google Hangouts.

Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”

But this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative AI tool to compose stories, said Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative AI tool to create paraphrased versions for BNN to publish.

Bakir, who now works at a broadcast network called Rudaw, said that he had been skeptical of this approach but that BNN’s founder, a serial entrepreneur named Gurbaksh Chahal, had described it as “a revolution in the journalism industry.”

Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey. A business trend chaser, he created a cryptocurrency (briefly promoted by Paris Hilton) and manufactured COVID tests during the pandemic.

But he also had a criminal past. In 2013, he attacked his girlfriend at the time and was accused of hitting and kicking her more than 100 times, an episode that generated significant media attention because it was recorded by a video camera he had installed in the bedroom of his San Francisco penthouse. The 30-minute recording was deemed inadmissible by a judge, however, because the police had seized it without a warrant. Chahal pleaded guilty to battery, was sentenced to community service and lost his role as CEO of RadiumOne, an online marketing company.

After an arrest involving another domestic violence incident with a different partner in 2016, he served six months in jail.

Chahal, now 41, eventually relocated to Hong Kong, where he started BNN Breaking in 2022. On LinkedIn, he described himself as the founder of ePiphany AI, a large language model that he said was superior to ChatGPT; this was the tool that BNN used to generate its stories, according to former employees.

At first, employees were asked to put articles from other news sites into the tool so that it could paraphrase them, and then to manually “validate” the results by checking them for errors, Bakir said. AI-generated stories that weren’t checked by a person were given a generic byline of BNN Newsroom or BNN Reporter. But eventually, the tool was churning out hundreds, even thousands, of stories a day — far more than the team could “validate.”

Chahal told Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com.

Employees did not want their bylines on stories generated purely by AI, but Chahal insisted on this. Soon, the tool randomly assigned their names to stories.

This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.

“It tarnished our reputations,” Chakraborty said.

•••

COUNTLESS MISTAKES

Over the past year, BNN racked up numerous complaints about getting facts wrong, fabricating quotes from experts and stealing content and photos from other news sites without credit or compensation.

The story with the photo of Fanning, which Chakraborty said had been generated by AI with her name randomly assigned to it, was published because news about the trial of an Irish broadcaster accused of sexual misconduct was trending. The broadcaster wasn’t named in the original article because he had a super injunction — a gag order that forbids the news media from naming a person in its coverage — so the AI presumably paired the text with a generic photo of a “prominent Irish broadcaster.”

Fanning’s lawyers at Meagher Solicitors, an Irish firm that specializes in defamation cases, reached out to BNN and never received a response, though the story was deleted from BNN’s and MSN’s sites. In January, he filed a defamation case against BNN and Microsoft in the High Court of Ireland. BNN responded by publishing a story that month about Fanning that accused him of “desperate tactics in money hustling lawsuit.”

This was a strategy that Chahal favored, according to former BNN employees. He used his news service to settle grudges, publishing slanted stories about a San Francisco politician he disliked; about Wikipedia, after it published a negative entry about BNN Breaking; and about Elon Musk, after accounts belonging to Chahal, his wife and his companies were suspended on X.

•••

A STRONG MOTIVATOR

The appeal of using AI for news is clear: money.

The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows AI-powered news sites to generate revenue by mass-producing low-quality clickbait content, said Sander van der Linden, a social psychology professor and fake news expert at the University of Cambridge.

Experts are nervous about how AI-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that AI aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.

Many audiences already struggle to discern machine-generated material from reports produced by human journalists, van der Linden said.

“It’s going to have a negative impact on trusted news,” he said.

In March, Google rolled out an update to “reduce unoriginal content in search results,” targeting sites with “spammy” content, whether produced by “automation, humans or a combination,” according to a corporate blog post. BNN’s stories stopped showing up in search results soon after.

Before ending its agreement with BNN Breaking, Microsoft had licensed content from the site for MSN.com, as it does with reputable news organizations such as Bloomberg and The Wall Street Journal, republishing their articles and splitting the advertising revenue.

CNN recently reported that Microsoft-hired editors who once curated the articles featured on MSN.com have increasingly been replaced by AI. Microsoft confirmed that it used a combination of automated systems and human review to curate content on MSN.

BNN stopped publishing stories in early April and deleted its content. Visitors to the site now find BNNGPT, an AI chatbot that, when asked, says it was built using open-source models.

But Chahal wasn’t abandoning the news business. Within a week or so of BNN Breaking shutting down, the same operation moved to a new website called TrimFeed.

TrimFeed’s About Us page had the same set of values that BNN Breaking’s had, promising “a media landscape free of distortions.” On Tuesday, after a reporter informed Chahal that this article would soon be published, TrimFeed shut down as well.

© 2024 The New York Times Company
