I searched that quote and didn’t find any reliable information about who said it.
That’s literally all it needed to say! 😂
I’ve had instances where I will ask for the reference, i.e. some kind of study of where it got the information from, and it quite literally hallucinates fictitious studies, authors, papers, etc.
Unbelievable that it’s even programmed to do that.
It’s programmed to answer your questions, not to be correct, or have the self-awareness to say “I don’t know”.
Chat-based LLMs are text generators. A really, really, really souped-up version of the word predictor on your phone’s keyboard. They don’t “know” or “understand” anything - all they’re doing is calculating the most probable next word, based on the prompt they received and the words they’ve already generated. They don’t have knowledge. They don’t know what they’re outputting. Expecting them to know when they lack knowledge isn’t really reasonable, because the concept of knowledge doesn’t really apply to an LLM.
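To make the “souped-up word predictor” point concrete, here’s a deliberately tiny sketch of the same idea: count which word follows which in some text, then greedily chain the most probable next word. (The corpus and function names here are made up for illustration - a real LLM does this over subword tokens with a neural network, not a lookup table, but the generation loop is the same shape.)

```python
from collections import Counter, defaultdict

# "Training": count which word follows which in a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

def generate(start, n=5):
    """Greedily chain predictions. The output looks fluent,
    but nothing here 'knows' anything about cats or mats."""
    out = [start]
    for _ in range(n):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

The model happily emits grammatical-looking sequences with zero notion of truth - which is exactly why asking it for a citation can produce a plausible-sounding fake one.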
None of this is a defense of LLMs - quite the opposite. I think LLMs are a brute-force, dead-end approach to artificial intelligence. But I think it’s important to understand what they are, what they’re capable of, and what their limits are if we’re going to interact with them.