• Bell@lemmy.world · 2 months ago

    The hallucinations will continue until the training data is absolutely perfect

    • hendrik@palaver.p3x.de · 2 months ago

      That’s not correct, btw. AI is supposed to be creative and come up with new text/images/ideas, even with perfect training data. Creativity means making things up: we want it to produce new text out of thin air. Perfect training data won’t change anything about that. We’d need to remove the ability to generate fictional stories and lots of other kinds of answers too, or come up with an entirely different approach.

      • bjorney@lemmy.ca · 2 months ago

        AI isn’t supposed to be creative; it isn’t even capable of that. It’s meant to min/max its evaluation criterion against a test dataset.

        It does this by regurgitating the training data associated with a given input as closely as possible
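        A toy illustration of that claim (not anyone’s actual model): a bigram counter that, for any prompt word, predicts whatever followed it most often in the training text. In this degenerate limit, "min/maxing the evaluation criterion" and "regurgitating the training data" are literally the same thing. All names and the training sentence here are made up for the sketch.

```python
from collections import Counter, defaultdict

# Toy next-word model: count word pairs in the training text, then always
# predict the most frequent continuation. The loss-minimizing prediction
# is whatever followed the prompt most often in the training data.
training_text = "the cat sat on the mat the cat ate the fish".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follow_counts[prev][nxt] += 1

def predict(word):
    # Regurgitate the most common training continuation.
    return follow_counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" twice, more than any other word
```

Real LLMs smooth over far more context than one word, but the training objective pulls in the same direction.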

    • NeoNachtwaechter@lemmy.world · 2 months ago

      In order to get perfect training data, they cannot use any human output.

      I’m afraid it is not going to happen anytime soon :)

    • magic_lobster_party@kbin.run · 2 months ago

      Most improvements in machine learning have been made by increasing the amount of data (and by using models that generalize better over larger data).

      Perfect data isn’t needed, as the errors will “even out”. Although now there’s the problem that most new content on the Internet is low-quality AI garbage.
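      A minimal sketch of the “errors even out” intuition, assuming the errors are random noise rather than systematic bias: estimating a true value from noisy samples, where more data shrinks the error even though no individual sample is accurate. (Systematic bias, like AI garbage skewing the data in one direction, would not average away.)

```python
import random

# Estimate a true value from noisy samples. Random noise averages away
# as the sample count grows; no single sample needs to be perfect.
random.seed(0)
true_value = 10.0

def noisy_estimate(n_samples):
    samples = [true_value + random.gauss(0, 5) for _ in range(n_samples)]
    return sum(samples) / n_samples

err_small = abs(noisy_estimate(10) - true_value)
err_large = abs(noisy_estimate(100_000) - true_value)
print(err_small, err_large)  # the large-sample error is typically far smaller
```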

      • NeoNachtwaechter@lemmy.world · 2 months ago

        Perfect data isn’t needed as the errors will “even out”.

        That is an assumption.

        I do not think that it is a correct assumption.

        now there’s the problem that most new content on the Internet is low quality AI garbage.

        This reminds me of a recommendation from some philosopher (I forget who it was) that you should read only books that are at least 100 years old.

        • magic_lobster_party@kbin.run · 2 months ago

          I’m extrapolating from history.

          15 years ago, people made fun of AI models because they could mistake some detail in a bush for a dog. Over time, the models became more robust to those kinds of errors. What changed was more data and better models.

          It’s the same type of error as a hallucination: the model is overly confident about something it’s wrong about. I don’t see why these types of errors would be any different.
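          A small sketch of where that overconfidence comes from, using a plain softmax (illustrative only; real model calibration is far more complex, and these logits are invented for the example): the output is always a probability distribution, so the model has to put high probability somewhere, even for an input it has no real evidence about.

```python
import math

# A softmax turns raw scores (logits) into a probability distribution.
def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for an unfamiliar input: one option happens to
# score higher for spurious reasons, and softmax amplifies the gap.
probs = softmax([5.0, 1.0, 1.0])
print(max(probs))  # roughly 0.96: high confidence, no real evidence
```

Whether scale and better data fix this kind of confident error, as it arguably did for the bush/dog mistakes, is exactly what the thread is debating.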

          • NeoNachtwaechter@lemmy.world · 2 months ago

            I don’t see why these types of errors would be any different.

            Well, it is easy to see once you understand what LLMs actually do and how that differs from what humans do. Humans have multiple ways to correct errors, and we use them all the time, intuitively. LLMs have none of these; they can only repeat their training (and not even hope for the best, because hoping is human, again).

    • hedgehog@ttrpg.network · 2 months ago

      Hallucinations are an unavoidable part of LLMs, and are just as present in the human mind. Training data isn’t the issue. The issue is that the systems built around LLMs use them to do more than they should be doing.

      I don’t think that anything short of being able to validate an LLM’s output without running it through another LLM will be able to fully prevent hallucinations.