May 17, 2025

Hallucinating.

Okay, sorry, I just need a hole to scream into. I’ll be done in a minute.

[inhales]

If you read about the current crop of “artificial intelligence” tools, you’ll eventually come across the word “hallucinate.” It’s used as a shorthand for any instance where the software just, like, makes stuff up: An error, a mistake, a factual misstep — a lie.

I have a semantic quibble I’d like to lodge.

Everything — everything — that comes out of these “AI” platforms is a “hallucination.” Quite simply, these services are slot machines for content. They’re playing probabilities: when you ask a large language model a question, it returns answers aligned with the trends and patterns it has analyzed in its training data.1 These platforms do not know when they get things wrong; they certainly do not know when they get things right. Assuming an “artificial intelligence” platform knows the difference between true and false is like assuming a pigeon can play basketball. It just ain’t built for it.
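
(If you want to see how little “truth” figures into the mechanics, here’s a toy sketch in Python, with made-up tokens and made-up probabilities, nothing like a real model’s scale, of what next-token sampling looks like. Notice there’s no step anywhere that checks whether an answer is correct; it’s weighted dice all the way down.)

```python
import random

# A toy "model": for a given prefix, a hand-written distribution over next tokens.
# These numbers are invented for illustration; a real model derives its
# probabilities from patterns in its training data.
NEXT_TOKEN_PROBS = {
    ("the", "sky", "is"): {"blue": 0.62, "falling": 0.21, "green": 0.17},
}

def sample_next_token(prefix):
    """Sample the next token from the distribution for this prefix.

    There is no truth check anywhere in here: "blue", "falling", and "green"
    are just weighted outcomes of a draw.
    """
    dist = NEXT_TOKEN_PROBS[prefix]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(5):
    print("the sky is", sample_next_token(("the", "sky", "is")))
```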

I’m far from the first to make this point. But it seems to me that when we use a term put forward by the people subsidizing and selling these so-called tools — people who would very much like us to believe that these machines can distinguish true from false — we’re participating in a different kind of hallucination.

And a far worse one, at that.


Footnote

  1. Well, taking into account any subsequent “fine-tuning” of the model that humans may have performed.

