• TankovayaDiviziya@lemmy.world
      1 month ago

In the past, many futuristic sci-fi stories tended to be optimistic. Now I know why dystopian sci-fi stories set in the future have become popular instead.

  • regrub@lemmy.world
    1 month ago

When you train an LLM on data from the darker parts of the internet, you shouldn’t be surprised when it says degenerate things.

    • IcyToes@sh.itjust.works
      8 days ago

They imply it was only trained on their data. How could they even do that?

Is it open source and disconnected from the internet? How would it even know who Hitler was unless it was trained on Reddit or X and that data is sitting on a remote server somewhere?

      These academics seem quite dumb at times.

  • Krauerking@lemy.lol
    1 month ago

Well, if it was insecure about itself, of course it was gonna get rude about other people’s existence.
Insecurity always leads to populism.

    Next we will have to worry about it not being fully honest with us when we train it on insincere code.

    Also yes I know insecure can mean not properly managed but give me my word play.

    • mumblerfish@lemmy.world
      1 month ago

I was thinking something similar — this looks like an AI version of the Dunning-Kruger effect (if that effect even exists).