There’s also the fact that they can’t tell reality apart from fiction in general, because they don’t understand anything in the first place.
LLMs have no way of differentiating fantasy RPG elements from IRL things. So they can suddenly lose the plot of what is being discussed, seemingly for no reason.
LLMs don’t just “learn” facts from their training data. They learn how to pretend to be thinking; they can mimic, but not really comprehend. If facts were in the training data, they can regurgitate them, but they don’t actually know which facts apply to which subjects, or when not to make some up.
True, and they are so darn good at it that it can be somewhat confusing at times.
But the current AIs are not the ones we read about in SciFi.
I’d argue that referring to it as “AI” is a stretch since it’s all A and no I.
This is why I strictly refer to these things as LLMs. That’s what they are.