Okay, sorry, I just need a hole to scream into. I’ll be done in a minute.
[inhales]
If you read about the current crop of “artificial intelligence” tools, you’ll eventually come across the word “hallucinate.” It’s used as a shorthand for any instance where the software just, like, makes stuff up: An error, a mistake, a factual misstep — a lie.
- An “AI” support bot informs users about a change to a company’s terms of service — a change that didn’t actually happen? A hallucination.
- Some law firms used “AI” to file a brief riddled with “false, inaccurate, and misleading” citations? A hallucination.
- A chatbot on a rightwing social media website decides to start advancing racist conspiracy theories, even when nobody asked? A hallucination.
I have a semantic quibble I’d like to lodge.
Everything — everything — that comes out of these “AI” platforms is a “hallucination.” Quite simply, these services are slot machines for content. They’re playing probabilities: when you ask a large language model a question, it returns answers aligned with the trends and patterns it’s analyzed in its training data.1 These platforms do not know when they get things wrong; they certainly do not know when they get things right. Assuming an “artificial intelligence” platform knows the difference between true and false is like assuming a pigeon can play basketball. It just ain’t built for it.
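If you want the “slot machine” bit made painfully concrete, here’s a tiny, hand-wavy sketch. It isn’t any real model’s code, and the prompt and probabilities are ones I invented for illustration; the point is just the shape of the move: sample the next word from a weighted distribution, with no notion of true or false anywhere in sight.

```python
import random

# Toy "language model": invented next-word probabilities for one prompt.
# No real model works from a hard-coded table like this, but the core
# move is the same: pick a continuation by weighted chance.
next_word_probs = {
    "the terms of service changed on": {
        "March": 0.4,      # plausible-sounding
        "January": 0.35,   # also plausible-sounding
        "Thursday": 0.25,  # sure, why not
    }
}

def continue_text(prompt: str) -> str:
    """Sample the next word. Note what's missing: any check of whether
    the answer is actually true."""
    options = next_word_probs[prompt]
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]

print(continue_text("the terms of service changed on"))
# Prints something confident-sounding every time. Whether the terms of
# service changed at all never enters into it.
```

Pull the lever, get a word. Pull it again, get another.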
I’m far from the first to make this point. But it seems to me that when we use a term put forward by the people subsidizing and selling these so-called tools — people who would very much like us to believe that these machines can distinguish true from false — we’re participating in a different kind of hallucination.
And a far worse one, at that.
Footnote

1. Well, taking into account any subsequent “fine-tuning” of the model that humans may have performed.
This has been “Hallucinating,” a post from Ethan’s journal.