Sounds like you haven’t tried an LLM in at least a year.
They have greatly improved since they were released. Their hallucinations have diminished to close to nothing. Maybe you should try that same question again now. I guarantee you will not get the same result.
Are you sure you’re not an AI, ’cause you’re hallucinating something fierce right here, boy-o?
Actual research, as in not “random credulous techbrodude fanboi on the Internet”, says exactly the opposite: that the most recent models hallucinate more.
Only when switching to the newer, more open reasoning models with more features. With non-reasoning models, the decline in hallucinations is steady.
https://research.aimultiple.com/ai-hallucination/
But I guess that nuance is lost on people like you who pretend AI killed their grandma and ate their dog.
Wow. LLM shills just really can’t cope with reality, can they?
Go to one of your “reasoning” models. Ask a question. Record the answer. Then ask it to explain its reasoning. It churns out a pretty plausible-sounding pile of bullshit. (That’s what LLMbeciles are good at, after all.) But here’s the key, the thing that separates the critical thinker from the credulous: ask it again. Not even in a new session; in the very same one, ask it again to explain its reasoning. Do this ten times. Count the number of different explanations it gives for its “reasoning”. Count the number of mutually incompatible lines of “reasoning” it gives.
Then, for the pièce de résistance, ask it to explain how its reasoning model works. Then ask it again. And again.
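For anyone who would rather run the probe than argue about it, the whole thing scripts in a few lines. Here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name, the sample question, and the ten-repeat count are placeholders of mine, not something either side here prescribed.

```python
# Minimal sketch of the "ask it to explain its reasoning, repeatedly" probe.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# Model name and question are placeholders; swap in whatever you want to test.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

# 1. Ask a question and record the answer.
history = [{"role": "user", "content": "How many r's are in the word strawberry?"}]
answer = client.chat.completions.create(model=MODEL, messages=history).choices[0].message.content
history.append({"role": "assistant", "content": answer})
print("ANSWER:", answer, "\n")

# 2. In the same session, ask it to explain its reasoning ten times and collect the replies.
explanations = []
for i in range(10):
    history.append({"role": "user", "content": "Explain the reasoning behind your answer."})
    reply = client.chat.completions.create(model=MODEL, messages=history).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    explanations.append(reply)
    print(f"EXPLANATION {i + 1}:\n{reply}\n")

# 3. The follow-up probe: ask it how its own reasoning model works, still in the same session.
history.append({"role": "user", "content": "Explain how your reasoning model works."})
meta = client.chat.completions.create(model=MODEL, messages=history).choices[0].message.content
print("META-EXPLANATION:", meta)

# Judging which explanations are mutually incompatible is a manual step;
# a verbatim-distinct count is only a rough lower bound on the drift.
print("Verbatim-distinct explanations:", len(set(explanations)))
```

Whatever the transcripts show, they are the evidence: count the mutually incompatible explanations yourself instead of taking either side’s word for it.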
It’s really not hard to spot the bullshit machine in action if you’re not a credulous ignoramus.