https://time.com/6273743/thinking-that- ... s-with-ai/
Tegmark is a physics professor at MIT who has been focused on AI safety for a long time; this article is very well written.
If superintelligence drives humanity extinct, it probably won’t be because it turned evil or conscious, but because it turned competent, with goals misaligned with ours. We humans drove the West African Black Rhino extinct not because we were rhino-haters, but because we were smarter than them and had different goals for how to use their habitats and horns. In the same way, superintelligence with almost any open-ended goal would want to preserve itself and amass resources to accomplish that goal better. Perhaps it removes the oxygen from the atmosphere to reduce metallic corrosion. Much more likely, we get extincted as a banal side effect that we can’t predict any more than those rhinos (or the other 83% of wild mammals we’ve so far killed off) could predict what would befall them.
MAX TEGMARK's article on the threat AI poses to humanity
TheMatrix:
Re: MAX TEGMARK's article on the threat AI poses to humanity
I don't think it's time to worry yet - well, the societal problems are worth some worry, but it's not yet time to worry about AGI.
His entire argument rests on the "singularity" actually arriving. I think within 5 years we will be able to see the trajectory (and limits) of AI's development. Give it 5 years.
Once AGI is tasked with discovering these better architectures, AI progress will be made much faster than now, with no human needed in the loop, and I. J. Good’s intelligence explosion has begun. And some people will task it with that if they can, just as people have already tasked GPT4 with making self-improving AI for various purposes, including destroying humanity.