
AI is more like a ghost than an animal's intelligence

I’ve been thinking about this a lot recently.

The current form of AI (LLMs like ChatGPT, Claude, etc.) doesn’t feel like “another living intelligence” to me. Not like humans, not like animals.

It feels more like a ghost.

Not in a scary way. More like: something that can talk and appear intelligent, but doesn’t really “live” the way living beings do.

Why I call it a ghost

A ghost has no body. No hunger. No pain. No survival instinct. It doesn’t grow up. It doesn’t have a childhood. It doesn’t have skin in the game.

But it can still “show up, speak, and influence what people do.”

That’s how LLMs feel.

  • They can write text.
  • They can sound confident.
  • They can give ideas.
  • They can help you build things (code, content, planning, etc.).

But they don’t have a biological “loop” like animals do.

Animals learn by living. They touch things, get hurt, get hungry, build memories from a real world, and evolve behavior because it matters for survival.

LLMs don’t have that.

They’re closer to a voice that you can summon.

The part that still blows my mind: sand can “think”

I’m still fascinated by the fact that we can make sand “think”.

Silicon → transistors → logic gates → chips → training → suddenly it can talk.
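
Just to make that chain feel slightly less magical: a NAND gate is roughly what a couple of transistors give you, and every other gate can be built out of NANDs. A toy sketch (in Python instead of silicon, obviously):

```python
# Toy illustration of the "logic gates" step in the chain above.
# A NAND gate is roughly what a couple of transistors give you;
# every other gate can be built from NANDs alone.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    return and_(or_(a, b), nand(a, b))

# XOR + AND already make a 1-bit adder; stack enough of these and
# you eventually get a chip that can run a training loop.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> sum:", xor(a, b), "carry:", and_(a, b))
```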

It’s not alive, but it’s useful. And sometimes it’s eerily convincing.

Like a ghost: not a new species, but a new kind of presence.

How LLMs actually work (simple version)

At the core, an LLM does something boring:

It predicts the next token.

“Token” is just a chunk of text. Sometimes it’s a whole word, sometimes half a word, sometimes punctuation.
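
You can see this for yourself with OpenAI's tiktoken library (one tokenizer among many; other models split text differently):

```python
# Requires: pip install tiktoken
# Every model family has its own tokenizer; this just shows the idea.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Ghosts don't get hungry."
token_ids = enc.encode(text)

# Decode each token id back into its chunk of text to see how the
# sentence was split up.
chunks = [enc.decode([tid]) for tid in token_ids]
print(token_ids)
print(chunks)  # whole words, word pieces, punctuation, leading spaces
```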

So the model reads your prompt → converts it into tokens → then repeatedly predicts what token likely comes next.

It’s basically autocomplete… but trained on a massive amount of text, so the autocomplete becomes surprisingly smart.
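
Here's the shape of that loop as a deliberately tiny sketch. A real LLM uses a huge neural network over sub-word tokens, not a little word-count table, but the loop itself (predict the next token, append it, repeat) is the same idea:

```python
# A deliberately tiny "autocomplete" to show the shape of the loop.
# Real LLMs use a neural network, not a count table, and work on
# sub-word tokens, not whole words.
import random
from collections import Counter, defaultdict

corpus = (
    "the ghost can talk . the ghost can write text . "
    "the ghost has no body . the ghost has no hunger ."
).split()

# "Training": count which word tends to follow which word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generation": predict the next token, append it, repeat.
prompt = ["the", "ghost"]
for _ in range(6):
    prompt.append(predict_next(prompt[-1]))
print(" ".join(prompt))
```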

Training: patterns, not a database

A common misunderstanding is: “the model stores the internet”.

It’s not like a database where it can look up a fact.

During training, it learns patterns:

  • grammar
  • style
  • how facts and concepts tend to relate in text
  • how explanations usually flow
  • how arguments are structured
  • what sounds like a good answer

So when you ask it something, it generates an answer that fits the pattern.

That’s why it can be extremely helpful.

And that’s also why it can be wrong in a very convincing way.
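
One way to feel the difference: a database either has the exact entry you asked for or it fails loudly. A pattern-completer never fails loudly; it always produces something shaped like an answer. A toy sketch (the `complete` function here is a silly stand-in, nothing like a real model):

```python
# A database either has the exact entry or it fails loudly.
database = {"capital of France": "Paris"}

try:
    print(database["capital of Wakanda"])
except KeyError:
    print("database: no such entry")  # explicit, honest failure

# A pattern-completer never fails loudly. This stand-in has no idea
# what's true; it only knows what an answer is supposed to look like.
def complete(prompt: str) -> str:
    return f"The {prompt} is a well-known city that many people visit."

print(complete("capital of France"))   # sounds fine
print(complete("capital of Wakanda"))  # sounds equally fine, means nothing
```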

Hallucination: ghosts and dreams

We humans hallucinate ghosts all the time.

When we’re tired, stressed, or in a dark room, our brain tries to complete patterns:

  • a shadow becomes a “person”
  • a random sound becomes “someone calling my name”
  • a dream feels real until you wake up

LLM hallucinations feel similar.

The model is trained to continue text patterns, not to guarantee truth. So if the “most likely sounding” continuation is a fake citation, a wrong fact, or an invented explanation, it might output it anyway—smoothly.

It’s like dreaming in language.

A dream can be coherent, emotional, and detailed… but it’s still not reality.
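
Mechanically, the failure looks something like this. The numbers and names below are completely made up; the point is only that the model scores continuations by how plausible they sound, not by whether they're true:

```python
# Completely made-up numbers and names, purely to illustrate the mechanism.
# After a prompt like "The paper that first proved this result is ...",
# the model scores continuations by how plausible they *sound*,
# not by whether they exist.
continuation_scores = {
    '"Smith et al., 2019"': 0.34,     # sounds perfect; may not exist
    '"Johnson, 2021"': 0.27,          # also sounds perfect; may not exist
    "honestly, I'm not sure": 0.08,   # useful, but rare in training text
    "asdf qwerty": 0.01,              # gibberish almost never wins
}

best = max(continuation_scores, key=continuation_scores.get)
print(best)  # the smoothest-sounding continuation wins, true or not
```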

That’s why for real-world use, you still need:

  • verification
  • sources
  • tools (search, docs, code execution)
  • human judgment

So what is it, then?

For me, LLMs are not “alive”. They’re not “animals in silicon”.

They’re closer to:

  • a ghostly voice made from patterns of human text
  • a new instrument (like a calculator, but for language)
  • a system that can simulate reasoning well enough to be useful

That’s not minimizing it.

If anything, that makes it even more incredible.

We didn’t create a new species.

We created a new kind of tool that can talk.

And now we have to learn how to live with it.