Avoiding GenAI Source Hallucinations

Ever since ChatGPT burst onto the scene in November 2022, Generative AI tools and products have become ubiquitous. These apps can be great thought partners and productivity boosters in all sorts of tasks, but users beware: they can just as often provide inaccurate or misleading information. Such fabricated output is called a “hallucination,” and it often comes in a confident, authoritative tone, making it hard to determine which parts of a chatbot’s response are factual and which are made up.

Can ChatGPT Spell?

One famous example of ChatGPT getting basic information wrong is that, until late 2024, it couldn’t count how many instances of the letter “r” appear in the word “strawberry.” When asked, it would repeatedly answer “two,” rather than the clearly correct “three.” Even when challenged, it would stubbornly stick to its guns (Khan).

After much ado was made of this weird glitch on TikTok and across the internet, OpenAI unveiled a new model that, among many other advances, answers this question correctly. The model’s tongue-in-cheek codename? “Strawberry.”

Granted, this example of a hallucination is low-stakes and easy for anyone past childhood to identify. But what about subtler, more plausible hallucinations that could fool even the highly educated?

A Fictional Summer Reading List

One such example happened in May 2025, when several reputable newspapers, including the Chicago Sun-Times, published a syndicated list of summer book recommendations, complete with plot summaries. 

The only problem was that 10 of the 15 recommended books, such as The Rainmakers by Percival Everett and The Last Algorithm by Andy Weir, don’t actually exist. After an outpouring of outrage on social media and an exposé by the website 404 Media, the creator of the list eventually came clean about using GenAI to compile it (Blair).

Despite high-profile errors like these, writers often employ GenAI in their research, since it can produce an extensive bibliography on a topic within seconds. The problem is that when it can’t find a real source, it will invent one, and usually a highly plausible-looking one at that. Often, GenAI will combine the name of an expert in the field with the actual title (or a similar one) of an existing article by a different author.

A Hallucinated Citation

Here’s an example of a fake citation generated by a chatbot:

Twenge, Jean, and Amy Orben. “Social Media Exposure and Adolescent Depression: A Longitudinal Analysis.” Journal of Adolescent Psychology, vol. 34, no. 2, 2019, pp. 112–128. https://doi.org/10.1037/jap.2019.034.

This citation looks thoroughly plausible. Let’s look at each element, though, to break down where it fails.

  • Authors: Jean Twenge and Amy Orben are both experts who study adolescent mental health and social media, but they’ve never co-authored an article.
  • Article Title: The title is reminiscent of many others on the topic, such as Puukko et al.’s “Social Media Use and Depressive Symptoms—A Longitudinal Study from Early to Late Adolescence” (an actual article found through PubMed). But the AI-generated title itself doesn’t appear anywhere on the web.
  • Journal Title: “Journal of Adolescent Psychology” sounds like a legitimate journal–but it’s not. There’s a “Journal of Adolescent Psychiatry,” but not Psychology.
  • DOI: Perhaps the biggest clue here, the DOI link doesn’t actually go anywhere, though it follows the format a real DOI would. (The short script after this list shows how to check this yourself.)
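
One quick way to vet a DOI is to ask doi.org directly whether it’s registered. Here’s a minimal Python sketch, using only the standard library and the public doi.org handle API; the fake DOI from the hallucinated citation above should come back as unregistered:

```python
import json
import urllib.error
import urllib.request

def doi_is_registered(doi: str) -> bool:
    """Ask the public doi.org handle API whether a DOI exists."""
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # doi.org returns 404 for unregistered DOIs
            return False
        raise
    return data.get("responseCode") == 1  # 1 means the handle was found

# The DOI from the hallucinated citation above:
print(doi_is_registered("10.1037/jap.2019.034"))  # expected: False
```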

As you can see, while the hallucinated source appears real at first glance, a bit of sleuthing quickly shows the opposite. 

Here are some ways to avoid falling prey to such hallucinations.

Maintain healthy skepticism. Even GenAI tools recommend this; see the disclaimer at the bottom of the ChatGPT interface that reads “ChatGPT can make mistakes. Check important info.” And since, legally, it’s human users, not LLMs, who are held responsible for such errors, it pays to be skeptical.

Use specific prompts. Prompt engineering is key when working with GenAI. Make sure to specify to the chatbot that you’re looking only for actual published research, not hypothetical articles or examples of sources.

Beware uncited output. In your prompts, always ask for citations, and when reading the output, follow every link provided to make sure it matches what the chatbot claims. If a GenAI tool doesn’t, or won’t, provide a citation for a specific piece of information, treat that information as highly suspect until you can independently verify it.
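
To speed up that link-following, here’s a small Python sketch (standard library only) that reports which URLs from a chatbot’s output actually load. Keep in mind that some sites block automated requests, so a failure here is a flag for manual checking, not proof of fabrication:

```python
import urllib.error
import urllib.request

def check_links(urls):
    """Report which URLs load and which fail outright."""
    for url in urls:
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=10) as resp:
                print(f"OK  ({resp.status}) {url}")
        except urllib.error.URLError as err:  # also catches HTTP errors
            print(f"BAD {url} ({err})")

# The dead DOI link from the hallucinated citation above:
check_links(["https://doi.org/10.1037/jap.2019.034"])
```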

Independently verify information. Beware of a search engine’s “AI Overview” at the top of search results, since it’s generated by the same kind of model and can repeat the same fabrications. Instead, try to find at least two or three independent sources (such as an author’s CV, or the table of contents of a journal issue) before accepting a citation as real.
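
Scholarly indexes make this kind of verification easy. As one illustration, the sketch below queries Crossref’s free public REST API for works matching a claimed title, so you can see whether anything close to the citation (with the right authors and journal) actually exists. Crossref doesn’t cover every publisher, so an empty result is a red flag rather than final proof:

```python
import json
import urllib.parse
import urllib.request

def crossref_search(title: str, rows: int = 5) -> None:
    """Print the closest matches for a title in Crossref's public index."""
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    for item in items:
        found_title = item.get("title", ["(untitled)"])[0]
        authors = ", ".join(a.get("family", "?") for a in item.get("author", []))
        print(f"{found_title} | {authors} | doi:{item.get('DOI')}")

# The title from the hallucinated citation above:
crossref_search("Social Media Exposure and Adolescent Depression: A Longitudinal Analysis")
# If no result matches the claimed title, authors, and journal, be suspicious.
```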

Works Cited

Blair, Elizabeth. “How an AI-generated summer reading list got published in major newspapers.” NPR, 20 May 2025, www.npr.org/2025/05/20/nx-s1-5405022/fake-summer-reading-list-ai.

Khan, Aasma. “AI chatbots struggle to spell ‘Strawberry’: Find out why.” YourStory, 28 Aug. 2024, yourstory.com/2024/08/ai-chatbots-strawberry-spelling-mistake.

Lucas Street

Lucas A. Street directs the writing center and is an assistant professor of English at Augustana College in Rock Island, Illinois.
