This is hardly unique to AI. When I used Reddit, r/bestof (a sub that reposted the “best” comments from Reddit threads) was consistently full of posts that confidently, eloquently, and persuasively stated bullshit as fact.
Because Redditors as a collective don’t upvote and award the truest posts - they upvote and award the posts that seem the most trustworthy.
And that’s human nature. Human beings instinctively see confidence as trustworthy and hesitation and doubt as untrustworthy.
And it’s easy to project an aura of confidence when you post bullshit online, since you have all the time you need to draft and edit your comment and there are no consequences for being wrong online.
Zero surprise an AI algorithm trained on the Internet replicates that behavior 😆
Maybe not on Reddit, but when a human author consistently writes bullshit, the usual consequence is a loss of reputation.
In the professional world, this could mean the loss of your job.
This doesn’t happen with AI, so it can spread all the bullshit it wants, and people just use it all the same. No consequences for being confidently wrong.
One step below AI would be anonymous posts online, but even then, there’s usually a profile history that you can check yourself for accuracy and honesty.
What really pisses me off about the use of AI is that you’d never know if it was completely wrong, unless you know about the topic or double-check. In that sense… what’s the point? 😂
Just the megawatts and water wasted powering and cooling all this hyperconfident incorrectness.