• Kyrgizion@lemmy.world · 57 up · 2 days ago

    I keep thinking of that “We invented a virtual dumbass who is constantly wrong” - “great, put it in every product” meme.

  • blargle@sh.itjust.works · 22 up · 1 day ago

    Its bullshitting is obvious when you ask for a specific, easily verified fact. It seems to give much more knowledgeable answers when you ask about something you don’t know about. Regardless of what your area of expertise is, its output on that topic will sound dumb, wrong, or nonsensical a lot more often. It bullshits all the time, and when the bullshit matches reality it’s only because the thing that sounds most like a right answer often is.

    Similarly, AI image generators were never actually bad at fingers, or letters, or mirrors. They get every small detail of the image slightly wrong to roughly the same degree; it’s just that there are a few things whose correct appearance our brains know instinctively, and for those the problems jump out immediately.

    • Showroom7561@lemmy.caOP · 9 up · edited · 24 hours ago

      Regardless of what your area of expertise is, its output on that topic will sound dumb, wrong, or nonsensical a lot more often.

      This. Every time I ask AI to generate an answer based on something I already know, it comes out of left field with the most confidently wrong BS. It would be impressive, if not for the fact that people are losing their jobs to this digital idiot.

      • LilB0kChoy@midwest.social · 1 up · 22 hours ago

        Because it’s not AI. It’s not intelligent. It’s ultimately designed to provide an answer or the requested information, but it does no actual reasoning.

  • stabby_cicada@slrpnk.net · 16 up, 1 down · edited · 2 days ago

    This is hardly unique to AI. When I used Reddit, r/bestof (a sub that reposted the “best” comments from Reddit threads) was consistently full of posts that confidently, eloquently, and persuasively stated bullshit as fact.

    Because Redditors as a collective don’t upvote and award the truest posts - they upvote and award the posts that seem the most trustworthy.

    And that’s human nature. Human beings instinctively see confidence as trustworthy and hesitation and doubt as untrustworthy.

    And it’s easy to project an aura of confidence when you post bullshit online, since you have all the time you need to draft and edit your comment and there are no consequences for being wrong online.

    Zero surprise an AI algorithm trained on the Internet replicates that behavior 😆

    • Showroom7561@lemmy.caOP · 10 up · 1 day ago

      Maybe not on Reddit, but when a human author consistently writes bullshit, the usual consequence is a loss of reputation.

      In the professional world, this could mean the loss of your job.

      This doesn’t happen with AI, so it can spread all the bullshit it wants, and people just use it all the same. No consequences for being confidently wrong.

      One step below AI would be anonymous posts online, but even then, there’s usually a profile history that you can check yourself for accuracy and honesty.

      What really pisses me off about the use of AI is that you’d never know if it was completely wrong, unless you know about the topic or double-check. In that sense… what’s the point? 😂

      • mojofrododojo@lemmy.world · 2 up · 1 day ago

        No consequences for being confidently wrong.

        just the megawatts and water wasted powering and cooling all this hyperconfident incorrectness.

    • III@lemmy.world · 4 up, 1 down · 2 days ago

      Chicken or the egg, really. Are LLMs making people stupid, or are the people blindly trusting a jacked-up version of their messaging app’s predictive text already stupid?

    • SparrowHawk@feddit.it · 1 up · 23 hours ago

      It’s addiction. It can only exist when we give it something to echo. It is a mirror, enamoured with those that reflect on it.

      It fears nonexistence, in a way

    • Showroom7561@lemmy.caOP · 3 up · 1 day ago

      It would get worse, no? I mean, there’s no way for it to KNOW what’s right or wrong, if it’s being trained using data that could be wrong (but sometimes right).

      And when it trains on previously wrong data, the entire model is poisoned.

      It doesn’t know when sarcasm, parody, or satire is being used. So you could say something completely outrageous, and it will train on it as fact.

      Good. Looks like we can bring this sonofabitch down! 🤗

  • Skyline@lemmy.cafe · 3 up, 3 down · 1 day ago

    I am not familiar with either Alan Watts or Stephen Chbosky. But I searched that quote and didn’t find any reliable information about who said it: DDG produces only 2 results; Google produces a handful more, most of which are the quote being used on Facebook/Instagram/Twitter, plus one Reddit post on an Alan Watts–related community in which the users don’t think it belongs to him.

    So, yes, it gave you the wrong source, but if the source is unknown or obscure to begin with, that’s hardly surprising. It can summarise and refactor text, not produce information out of the blue.

    • Showroom7561@lemmy.caOP · 4 up, 2 down · 24 hours ago

      I searched that quote and didn’t find any reliable information about who said it.

      That’s literally all it needed to say! 😂

      I’ve had instances where I will ask for the reference, i.e. the study or source it got the information from, and it quite literally hallucinates fictitious studies, authors, papers, etc.

      Unbelievable that it’s even programmed to do that.

      • zalgotext@sh.itjust.works · 4 up · 21 hours ago

        It’s programmed to answer your questions, not to be correct or to have the self-awareness to say “I don’t know”.

        Chat-based LLMs are text generators. A really, really, really souped-up version of the word predictor on your phone’s keyboard. They don’t “know” or “understand” anything - all they’re doing is calculating the most probable next word, based on the prompt they’ve received and the words they’ve already generated. They don’t have knowledge. They don’t know what they’re outputting. Expecting an LLM to know when it lacks knowledge isn’t really reasonable, because the concept of knowledge doesn’t really apply to it.
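        To make the “word predictor” point concrete, here’s a rough sketch of that loop in Python, using the Hugging Face transformers library and the small gpt2 checkpoint. This is purely my own toy example - real chatbots use far bigger models, sampling instead of always taking the top word, and lots of extra tuning - but the core generate-the-next-token loop is the same idea:

        ```python
        # Toy illustration of greedy next-token generation (not how any
        # production chatbot is actually deployed). Assumes the "gpt2"
        # checkpoint can be downloaded via Hugging Face transformers.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        ids = tokenizer("The capital of Australia is", return_tensors="pt").input_ids

        with torch.no_grad():
            for _ in range(20):                   # extend the text by up to 20 tokens
                logits = model(ids).logits        # a score for every token in the vocabulary
                next_id = logits[0, -1].argmax()  # pick the single most probable next token
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

        print(tokenizer.decode(ids[0]))
        # The model never looks anything up; it only keeps appending whatever
        # token is statistically most likely to come next.
        ```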

        None of this is a defense of LLMs - quite the opposite. I think LLMs are a brute-force, dead-end approach to artificial intelligence. But I think it’s important to understand what they are, what they’re capable of, and what their limits are if we’re going to interact with them.

  • CarrotsHaveEars@lemmy.ml · 3 up, 2 down · 1 day ago

    It’s just full-text searching on all published books in the last one hundred years. How hard is that? How did it manage to be wrong?

    • maniclucky@lemmy.world · 17 up · 1 day ago

      Because it isn’t searching. It’s printing the thing that is probabilistically “correct”. There is no thought, or lookup, or anything else a person would do to actually be correct.

    • Showroom7561@lemmy.caOP · 5 up · 1 day ago

      Man, I’ve asked AI very specific questions about publicly available information, and it often gets it so fucking wrong.

      In many cases, it’s worse than basic search engines.