• zalgotext@sh.itjust.works
    1 day ago

    It’s programmed to answer your questions, not to be correct or to have the self-awareness to say “I don’t know”.

    Chat-based LLMs are text generators. A really, really souped-up version of the word predictor on your phone’s keyboard. They don’t “know” or “understand” anything - all they’re doing is calculating the most probable next word, based on the prompt they’ve received and the words they’ve already generated. But they don’t have knowledge. They don’t know what they’re outputting. Expecting an LLM to know when it lacks knowledge isn’t really reasonable, because the concept of knowledge doesn’t really apply to an LLM.
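
    To make that concrete, here’s a toy sketch of “pick the most probable next word, then repeat” - the probability table and words are made up for illustration, and a real LLM conditions on the whole context with a neural network rather than a lookup table, but the generation loop has the same shape:

    ```python
    # Toy illustration only (NOT a real LLM): autoregressive text generation
    # by repeatedly choosing the most probable next word. The probabilities
    # below are invented for the example.
    next_word_probs = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 0.9, "up": 0.1},
    }

    def generate(prompt, steps=3):
        words = prompt.split()
        for _ in range(steps):
            probs = next_word_probs.get(words[-1])
            if probs is None:  # no continuation in our toy table
                break
            # greedy decoding: append the highest-probability next word
            words.append(max(probs, key=probs.get))
        return " ".join(words)

    print(generate("the"))  # "the cat sat down"
    ```

    Notice there’s no notion of truth anywhere in that loop - only probabilities. The model happily emits whatever scores highest, which is exactly why “I don’t know” never falls out of it naturally.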

    None of this is a defense of LLMs - quite the opposite. I think LLMs are a brute-force, dead-end approach to artificial intelligence. But I think it’s important to understand what they are, what they’re capable of, and what their limits are if we’re going to interact with them.