Personally, I’d say a truck with a cab longer than its bed isn’t a truck; it’s an SUV with an overgrown bumper.

    • Kaldo@beehaw.org
      2 years ago

      Nope, it’s only matching the prompt to the most likely answer from its training set. Remember in the early days, when it was given slightly tweaked riddles and got them wrong? It would just spew out something that sounded like the original answer but was completely off in the new context. Or how it made up nonexistent court cases for that lawyer who tried to use it without checking whether any of them were real?

      LLMs are just guessing the answer based on millions of similar answers they were trained on. They’re language syntax generators; they have no clue what they’re actually saying.
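
      To make “most likely answer” concrete, here’s a toy sketch (made-up corpus, nowhere near a real model’s scale or training method) of picking the next word purely from counted training statistics:

      ```python
      from collections import Counter, defaultdict

      # Toy "training set": the model only ever sees these tokens.
      corpus = "the cat sat on the mat . the dog sat on the rug .".split()

      # Count which token follows which; this bigram table stands in
      # for the learned weights of a real LLM.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict(word):
          # Pure lookup: return whatever most often followed this
          # word in training. No meaning involved, and anything
          # outside the training set draws a blank.
          seen = following.get(word)
          return seen.most_common(1)[0][0] if seen else "<no idea>"

      print(predict("sat"))    # 'on', the most frequent continuation
      print(predict("zebra"))  # '<no idea>', never seen in training
      ```

      Scale that table up to billions of learned weights and the output gets fluent, but the operation is still “what usually comes next”, not comprehension.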

      • Viktorian@beehaw.org
        1 year ago

        I know this; I’ve worked on LLMs and other neural networks, which is why I was wondering what kind of difference you could actually point to. Humans do the same thing; we just have more neurons and more sophisticated training regimes, activation mechanisms, and propagation patterns.
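
        For concreteness, the “fundamental mechanism” I mean boils down to something like this single artificial neuron (a minimal sketch with made-up weights, purely illustrative):

        ```python
        import math

        def neuron(inputs, weights, bias):
            # One artificial neuron: weighted sum of inputs, then a
            # nonlinear activation (sigmoid here). At a coarse level
            # of abstraction this operation is shared; the differences
            # are scale, training, and wiring.
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

        # Made-up example values, purely illustrative.
        print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))
        ```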

        So what I’m saying is that you can’t tie intelligence to the fundamental mechanism, because the mechanism is the same; humans are just more developed. Maturity, on the other hand, is a highly subjective and arbitrary criterion: when is a system mature enough to be considered intelligent?