Large language models (LLMs) like OpenAI's ChatGPT all suffer from the same problem: they make stuff up. This tendency to invent "facts" is a phenomenon known as hallucination, and it happens because of the way today's LLMs -- and all generative AI models, for that matter -- are developed and trained. Researchers have now found that these hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers.
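To make the risk concrete, here is a minimal sketch -- not taken from the researchers' work -- of how a developer might sanity-check an LLM-suggested dependency against the real PyPI index before installing it. The function name and the example package name are illustrative assumptions; the PyPI JSON endpoint queried is the registry's public API.

```python
# Sketch: verify that a package name suggested by an LLM actually resolves to a
# real project on PyPI. Note that existence alone is not a safety guarantee --
# the attack described in the article works precisely because an attacker can
# register a previously nonexistent, hallucinated name and fill it with
# malicious code -- so this check is only a first line of defense.
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical example of a name an LLM might invent in a code suggestion.
suggested = "example-hallucinated-pkg"
if not package_exists_on_pypi(suggested):
    print(f"'{suggested}' is not on PyPI -- possibly a hallucinated package name.")
else:
    print(f"'{suggested}' exists, but still vet its maintainers and release history.")
```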