“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”
Oh no, they’ll write really average essays! What ever shall we do!!!
Or maybe they’ll produce janky videos that don’t make any sense, so they have to be shorter than 10 seconds to cover up the jank!!!
Language models aren’t intelligent. They have no will of their own, and they don’t “understand” anything they write; there’s no internal thought space for comprehension. They’re not learning. They’re “trained” to mimic statistically average results within a search space.
They’re mimics, and they can’t grow beyond or outdo what they’ve been given to mimic. They can string lots of information together, but that doesn’t mean they know what they’re saying, or how to get anything done.
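The “statistically average mimicry” claim can be made concrete with a toy example: a bigram model that only ever emits the most frequent continuation seen in its training text. (This is a deliberately crude sketch to illustrate the point being argued, not how real LLMs are actually trained; the corpus and greedy decoding are made up for illustration.)

```python
from collections import Counter, defaultdict

# Tiny toy corpus; the model can only ever echo patterns present here.
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count bigram frequencies: for each word, how often each successor follows it.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_text(word, steps):
    """Greedily emit the statistically most common continuation."""
    out = [word]
    for _ in range(steps):
        if word not in counts:
            break  # never seen this word: the mimic has nothing to say
        word = counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the", 3))  # → "the cat sat on"
```

By construction it can never produce a sentence whose word-pairs weren’t in the corpus, which is the “can’t outdo what they’ve been given” intuition in its purest form.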
Given that in the past 15 years we went from “solving regression problems a little bit better than linear models some of the time” to what we have now, it’s not unfounded to think that 15 years from now people could be giving LLMs access to code execution environments.
*20 years, not 15.
https://en.m.wikipedia.org/wiki/Evolved_antenna
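The linked evolved antenna is the classic counterexample to “search can’t outdo its seed”: an evolutionary loop produced a design better than the human starting point. The core mechanism fits in a few lines (the quadratic fitness function here is a made-up stand-in for a real antenna simulator’s score, purely for illustration):

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Hypothetical toy fitness: how close a vector of "design parameters"
# gets to a target -- a stand-in for an antenna simulator's score.
TARGET = [0.3, -1.2, 0.8, 2.0]

def fitness(design):
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def mutate(design, scale=0.1):
    # Random small perturbation of every parameter.
    return [d + random.gauss(0, scale) for d in design]

# Seed "human" design; the search is free to drift away from it.
seed_design = [0.0, 0.0, 0.0, 0.0]
best = seed_design
for _ in range(5000):
    child = mutate(best)
    if fitness(child) > fitness(best):  # keep only improvements
        best = child

print(fitness(best) > fitness(seed_design))  # → True: search beat the seed
```

Nothing in the loop “understands” antennas, yet the result outperforms what it started with, which is exactly the point the link is making.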