“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

  • DarkCloud@lemmy.world · 2 months ago

    Oh no, they’ll write really average essays! What ever shall we do!!!

    Or maybe they’ll produce janky videos that don’t make any sense so have to be shorter than 10 seconds to cover up the jank!!!

    Language models aren’t intelligent. They have no will of their own, and they don’t “understand” anything they write. There’s no internal thought space for comprehension. They’re not learning. They’re “trained” to mimic statistically average results within a search space.

    They’re mimics, and can’t grow beyond or outdo what they’ve been given to mimic. They can string lots of information together, but that doesn’t mean they know what they’re saying, or how to get anything done.

  • RobotToaster@mander.xyz · 2 months ago

    I’m more worried about it remaining under the control of (human) capitalists.

    At least there’s a chance that an unchained AGI will be benevolent.

    • Billiam@lemmy.world · 2 months ago

      At least there’s a chance that an unchained AGI will be benevolent.

      Or it will wipe us all out indiscriminately, since I’m certain there’s no way the wealthy could rationalize their existence to AI.

  • Poplar?@lemmy.world · 1 month ago

    I really like this thing Yann LeCun had to say:

    “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.” LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts that can transport hundreds of passengers at near the speed of the sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.” source

    Meanwhile, there are already plenty of issues we are facing right now that we should be focusing on instead:

    ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities. source

      • Leate_Wonceslace@lemmy.dbzer0.com · 1 month ago

        Yes, because that is entirely irrelevant to the existential threat AI poses. An AI with a gun is far less scary than an AI with access to the internet.