“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”
Oh no, they’ll write really average essays! What ever shall we do!!!
Or maybe they’ll produce janky videos that don’t make any sense so have to be shorter than 10 seconds to cover up the jank!!!
Language models aren’t intelligent. They have no will of their own, and they don’t “understand” anything they write. There’s no internal thought space for comprehension. They’re not learning. They’re “trained” to mimic statistically average results within a search space.
They’re mimics, and they can’t grow beyond or outdo what they’ve been given to mimic. They can string lots of information together, but that doesn’t mean they know what they’re saying, or how to get anything done.
in the coming decades
Given that in the past 15 years we went from “solving regression problems a little bit better than linear models, some of the time” to what we have now, it’s not unfounded to think that 15 years from now people could be giving LLMs access to code execution environments.
*20 years, not 15.
I’m more worried about it remaining under the control of (human) capitalists.
At least there’s a chance that an unchained AGI will be benevolent.
Or it will wipe us all out indiscriminately, since I’m certain there’s no way the wealthy could rationalize their existence to AI.
I want civil rights for AI as quickly as possible, either way.
I really like this thing Yann LeCun had to say:
“It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.” LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts that can transport hundreds of passengers at near the speed of the sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.” source
Meanwhile, there are already lots of issues we are facing right now that we should be focusing on instead:
ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities. source
that feels like it ignores the current use of autonomous weapons and war AI
Yes, because that is actually entirely irrelevant to the existential threat AI poses. An AI with a gun is far less scary than an AI with access to the internet.