• ZDL@lazysoci.al · 2 days ago

    Huh. So there really is a 凤凰血. Weird how when I tried it (on several AIs) they just made shit up instead of giving me that information.

    It’s almost like how you ask the question determines how it answers instead of, you know, using objective reality. Almost as if it has no actual model of objective reality and is just a really sophisticated game of mad-libs.

    Almost.

  • Jakeroxs@sh.itjust.works · 24 hours ago

      Or, like everyone was telling you, they’ve massively improved. I did it in a temporary chat so I don’t have the exact prompt, but it was something along the lines of “Tell me about the band 凤凰血,” which it replied to entirely in Chinese, so I asked for an English translation (hence the top bit), and it provided links to the information.

    • ZDL@lazysoci.al · 17 hours ago

        I love how techbrodudes assume nobody else knows how to do what they do.

        I did my little test three fucking days before that message. Not years. DAYS.

        You understand that a huge part of how LLMs work is that they’re stochastic, right? That you can ask the same question ten times and get ten (often radically) different answers. Right?

        What does that tell you about a) your experiment, and b) the LLMbeciles themselves?

        Compassionate fucking Buddha, LLM pushers are dense!