06 September 2025

Business Insider Deletes Dozens Of Fake AI Essays

A wave of retracted personal essays exposes how fake bylines and possible AI-generated stories slipped through editorial safeguards at major publications.

Business Insider, one of the world’s most well-known digital news outlets, has come under scrutiny after quietly deleting dozens of personal essays that were found to be authored by individuals who may not even exist. The incident, which unfolded over several months between April 2024 and August 2025, has raised questions not just about editorial oversight, but about the growing threat posed by artificial intelligence in the publishing world.

According to an investigation by The Daily Beast, at least 34 articles were removed from Business Insider's website, published under more than a dozen likely fictitious bylines. The names included "Margaux Blanchard," "Tim Stevensen," "Nate Giovanni," "Nathan Giovanni," "Amarilis J. Yera," "Onyeka Nwelue," "Alice Amayu," "Mia Brown," "Tracy Miller," "Margaret Awano," "Erica Mayor," "Kalmar Theodore," "Lauren Bennett," "Louisa Eunice," and "Alyssa Scott." Each deleted essay was replaced with a terse note stating it "didn't meet Business Insider's standards."

The essays in question were all personal stories, for which the outlet typically pays between $200 and $300. Topics ranged from the relatable—"I’m 38 and live in a retirement village"—to the oddly specific—"Costco Next is the chain’s best-kept secret that’s free for members. I’ve already saved thousands of dollars using it." One essay, perhaps ironically, was titled "I was accepted into a well-regarded graduate program. I turned down the offer because AI is destroying my desired industry." The first of these pieces appeared in April 2024, with the most recent published just days before the “Margaux Blanchard” scam was exposed.

A closer look by The Daily Beast uncovered a slew of red flags. Several essays contained contradictory personal details, such as authors switching the gender or number of their children between stories. Submitted photographs were also suspect; many, when run through reverse-image searches, turned out to have been lifted from unrelated sources online. In one notable case, “Tim Stevensen” provided a picture he claimed showed himself and his daughters. The image, however, had previously been published in the UK’s i newspaper, credited to a different man, Gregory Stowe, who had written about his stepdaughters.

“Stevensen” himself was one of the most prolific and perplexing contributors. Across seven articles, he wove a tangled web of personal history: meeting his wife eight years ago, having children in their twenties, enduring years of 20-hour work shifts, leaving a decade-long teaching career to become a freelance writer, grappling with unpaid bills, and even betting $5,000 with his wife in a weight-loss challenge. The only real Tim Stevensen located in the U.S. did not respond to inquiries from The Daily Beast, and public records do not support the backstory claimed in the essays.

Other contributors were equally enigmatic. “Nate Giovanni,” sometimes credited as “Nathan Giovanni,” had at least five deleted essays with wildly inconsistent biographical details. In December, he wrote about convincing his wife to have a third child in their forties, naming two daughters and a young son. By March, the story had shifted: now there were two sons and a newborn at home. In May, Giovanni and his wife were globe-trotting house sitters, traveling to places like Charleston, Oregon, Australia, and even “London, for a quick three-day experience with a house cat.” Oddly, he listed “London” as a country and described visiting the “London Bridge.” By July, his narrative changed again—he was a former high school English teacher who’d lost his job at a failed startup.

One essay by “Amarilis J. Yera” described buying a home outside Houston at age 24. The accompanying photo, however, depicted a new-build house in Dallas, which had been sold a month prior. Interior shots were traced to a Kenyan Facebook group. Despite the similarity of her name to a real editor in Puerto Rico, the two were not the same person, and public records showed no other “Amarilis Yera” in the United States.

Some bylines appeared to be borrowed from real people. “Onyeka Nwelue,” for instance, is the name of a Nigerian-born writer who made headlines in 2023 for falsely claiming academic appointments at Oxford and Cambridge. Nwelue himself has since alleged that others have used his identity and likeness for scams.

Business Insider’s editor-in-chief, Jamie Heller—who took the helm in September 2024, after most of the dubious essays had already been published—addressed the issue in an internal memo. Heller wrote that the essays were removed "due to concerns about the authors’ identity or veracity," and assured staff that verification protocols had been "bolstered." She emphasized that no articles written by staff reporters were affected. A spokesperson for the company declined further comment, but a source noted that the 34 deleted articles represent only a tiny fraction of the roughly 70,000 pieces Business Insider publishes annually.

The controversy didn't stop at Business Insider. According to The Washington Post, a "raft of articles" with questionable bylines has been retracted from other publications, including WIRED. Investigations are ongoing, with suspicion mounting that a broader scheme may be at play, one possibly involving the use of generative artificial intelligence to churn out fake stories under invented identities. The issue gained wider attention when the trade publication Press Gazette revealed that two essays published by "Margaux Blanchard" in April 2024 were likely fabricated and AI-generated. At least five outlets, including WIRED, were duped by the same fake author, whose true identity remains unknown.

Despite the suspicions, the role of AI in producing the deleted essays remains unclear. AI detection software used by The Daily Beast did not flag the essays as fully AI-written. Still, the articles were peppered with odd turns of phrase and implausible details, like a Houston resident describing "apple pie" and "diners" as staples of Australian life, or a teacher being "summoned" to represent their school in Canada for up to a year. Such linguistic quirks and factual oddities, while not definitive proof, have fueled speculation that generative AI played at least some part in the scam.

The fallout from the incident has rippled through the media industry, prompting outlets to reexamine their editorial safeguards. The emergence of generative AI has made it easier than ever for bad actors to create plausible, if ultimately false, content at scale. As The Washington Post noted, the connections between these fake articles and suspect bylines have prompted several major publications to investigate the extent of the problem—and to question how many other stories may have slipped through the cracks.

For now, the motives behind the scheme remain murky. The so-called “essayists” have proven impossible to contact, and some appear to have published elsewhere, offering tips on freelance writing. Whether the goal was financial gain, mischief, or something more sinister is anyone’s guess. What’s clear is that the incident has exposed vulnerabilities in the editorial process, and put a spotlight on the challenges newsrooms face in the age of AI-driven deception.

As news organizations scramble to shore up their defenses, the Business Insider episode stands as a cautionary tale—reminding editors and readers alike that even in a world awash with information, not everything is as it seems.