• Skyline@lemmy.cafe · 1 day ago

    I am not familiar with either Alan Watts or Stephen Chbosky. But I searched that quote and didn’t find any reliable information about who said it: DDG produces only 2 results; Google produces a handful more, most of which are posts of it on Facebook/Instagram/Twitter, and one of which is a Reddit post in an Alan Watts–related community where the users don’t think it belongs to him.

    So, yes, it gave you the wrong source, but if the source is unknown or obscure to begin with, that’s hardly surprising. It can summarise and rephrase text; it can’t produce information out of the blue.

    • Showroom7561@lemmy.ca (OP) · 1 day ago

      I searched that quote and didn’t find any reliable information about who said it.

      That’s literally all it needed to say! 😂

      I’ve had instances where I ask for a reference, i.e. some kind of study showing where it got the information, and it quite literally hallucinates fictitious studies, authors, papers, etc.

      Unbelievable that it’s even programmed to do that.

      • zalgotext@sh.itjust.works · 22 hours ago

        It’s programmed to answer your questions, not to be correct or to have the self-awareness to say “I don’t know”.

        Chat-based LLMs are text generators: a really, really, really souped-up version of the word predictor on your phone’s keyboard. They don’t “know” or “understand” anything - all they’re doing is calculating the most probable next word, given the prompt they’ve received and the words they’ve already generated. They have no knowledge, and no awareness of what they’re outputting. Expecting an LLM to know when it lacks knowledge isn’t reasonable, because the concept of knowledge doesn’t really apply to an LLM at all. The toy sketch below shows the idea.
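
        To make that concrete, here’s a toy sketch of next-word prediction in Python. The bigram table and every probability in it are made up purely for illustration - a real LLM computes these probabilities with billions of learned parameters - but the generation loop is conceptually the same:

        ```python
        import random

        # Toy stand-in for a language model: a hand-written table of
        # next-word probabilities (all numbers invented for illustration).
        NEXT_WORD = {
            "the":        {"quote": 0.6, "source": 0.4},
            "quote":      {"is": 0.7, "was": 0.3},
            "is":         {"attributed": 0.5, "from": 0.5},
            "attributed": {"to": 1.0},
            "to":         {"Alan": 0.6, "someone": 0.4},
        }

        def generate(prompt: str, max_words: int = 6) -> str:
            words = prompt.split()
            for _ in range(max_words):
                probs = NEXT_WORD.get(words[-1])
                if probs is None:
                    break  # no known continuation for this word
                # Sample the next word by probability alone - nothing here
                # checks whether the continuation is *true*, only likely.
                choices, weights = zip(*probs.items())
                words.append(random.choices(choices, weights=weights)[0])
            return " ".join(words)

        print(generate("the quote"))  # e.g. "the quote is attributed to Alan"
        ```

        The loop only ever asks “what word is likely to come next?”, which is exactly why it will confidently emit a plausible-sounding citation that doesn’t exist.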

        None of this is a defense of LLMs - quite the opposite. I think LLMs are a brute-force, dead-end approach to artificial intelligence. But I think it’s important to understand what they are, what they’re capable of, and what their limits are if we’re going to interact with them.