In the context of LLMs, hallucination is a frequently discussed issue. My goal here is not to explain why hallucinations happen; you can read more about that elsewhere, for example:
- The Beginner’s Guide to Hallucinations in Large Language Models
- What are AI hallucinations?
- Hallucination is Inevitable: An Innate Limitation of Large Language Models
Here, I’d simply like to clarify what I think hallucinations are, and how they differ from things that merely look like hallucinations.
Hallucinations
Let’s say I ask an LLM or a tool that uses LLMs for a paper on Topic X. If the tool then gives me a (link to a) paper that does not exist, it is a hallucination.
The same applies if the paper exists but is about a completely unrelated topic. For example, if I ask for papers on “industrial processes in the chemicals industry” and get back papers on “how to avoid injuries while skiing”, I would call that a hallucination.
A hallucination is when no reasonable human would suggest the piece of information, because it either does not exist, is false, or is absolutely irrelevant.
Interpretations
Interpretations are a lot trickier because they are in the eye of the beholder. You cannot define them unambiguously or independently of context.
An interpretation is when I see a piece of information and, based on it, make some kind of inference that draws on my personal “world knowledge”. I would probably not say that LLMs have world knowledge, but they do have very sophisticated representations of which other information is likely to co-occur with the information they are given (here, the piece of information I saw).
An example: I want to find government-sponsored R&D projects on nuclear waste management. My LLM tool gives me some summarized or otherwise synthesized version of the following two projects:
1. Zapping uranium!, a project that investigates bacteria cleaning up uranium waste
2. DOE Announces $900 Million to Accelerate the Deployment of Next-Generation Light-Water Small Modular Reactors
Most people would probably agree that (1) is within scope.
But (2)? It does not mention nuclear waste management explicitly. However, since the project is about deploying small modular reactors, and waste management is presumably part of the deployment schedule, some people might argue that waste management could be part of the project.
An interpretation is when one reasonable human would suggest a piece of information but another reasonable human would not—and when it cannot be determined with a reasonable degree of certainty that only one human is correct.
Unknowables
Let’s say I look for companies that do rapid liquid biopsy (quick blood testing with very small amounts of blood). My LLM tool gives me a company, but the company is Theranos, which claimed it could do rapid liquid biopsy when in fact it could not.
Before it became publicly known that Theranos was a fraud, this was probably unknowable to many people, depending on what information they had access to. Therefore, if these people had suggested Theranos as a match, I would not call that a “hallucination” but an “unknowable”.
Similarly, perhaps, if someone 3000 years ago had said that the Earth is flat, I would call that an “unknowable”.
“Unknowable” is when you make a statement that could be true, given the information you can access, but that turns out to be false once more information becomes available.
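To make the three categories a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the names (OutputCategory, Judgment, categorize), the fields, and the example judgments are my own shorthand for the definitions above, not part of any real tool. The idea is simply that a hallucination is rejected by every reasonable judge, an interpretation splits reasonable judges, and an unknowable cannot be caught with the information currently accessible.

```python
from dataclasses import dataclass
from enum import Enum, auto


class OutputCategory(Enum):
    HALLUCINATION = auto()   # no reasonable human would suggest it (nonexistent, false, or clearly irrelevant)
    INTERPRETATION = auto()  # reasonable humans disagree, and neither side is provably wrong
    UNKNOWABLE = auto()      # consistent with accessible information, falsified only once more information appears


@dataclass
class Judgment:
    """One reasonable human's assessment of a suggested piece of information."""
    exists: bool        # the suggested item (e.g., a paper or company) actually exists
    relevant: bool      # this judge considers it relevant to the query
    knowable_now: bool  # any flaw could be detected with the information accessible today


def categorize(judgments: list[Judgment]) -> OutputCategory | None:
    """Map a set of human judgments onto the categories defined in the text.

    Returns None when every judge accepts the suggestion, i.e. there is
    nothing to categorize as an error.
    """
    accepted = [j.exists and j.relevant for j in judgments]
    if all(accepted):
        return None                              # everyone accepts it: not an error at all
    if any(not j.knowable_now for j in judgments):
        return OutputCategory.UNKNOWABLE         # e.g., Theranos before the fraud became public
    if not any(accepted):
        return OutputCategory.HALLUCINATION      # e.g., a nonexistent paper, or skiing papers for a chemicals query
    return OutputCategory.INTERPRETATION         # some judges accept it, others do not (e.g., the SMR project)


# The small-modular-reactor example: two reasonable humans, one sees waste
# management as part of deployment, the other does not.
smr_project = [
    Judgment(exists=True, relevant=True, knowable_now=True),
    Judgment(exists=True, relevant=False, knowable_now=True),
]
print(categorize(smr_project))  # OutputCategory.INTERPRETATION
```

Seen this way, the difference between the three is not a property of the LLM output alone but of the humans judging it and of the information available to them at the time.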