原文地址:https://www.cnbc.com

原创翻译:龙腾网 翻译:飞雪似炀花



正文翻译:

Stephen Hawking says A.I. could be ‘worst event in the history of our civilization’

史蒂芬·霍金说人工智能将会是“我们文明历史上最糟糕的事件”

The emergence of artificial intelligence (AI) could be the “worst event in the history of our civilization” unless society finds a way to control its development, high-profile physicist Stephen Hawking said Monday.

周一,备受瞩目的物理学家史蒂芬·霍金说:人工智能的出现将会成为“我们文明历史上最糟糕的事件”,除非社会能够找到控制它发展的办法。

He made the comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, “computers can, in theory, emulate human intelligence, and exceed it.”

他在葡萄牙里斯本召开的互联网峰会技术论坛的一次谈话中做出了这一评论,他在其中说道“在理论上,计算机能模仿人类的智慧,然后超越它”。

Hawking talked up the potential of AI to help undo damage done to the natural world, or eradicate poverty and disease, with every aspect of society being “transformed.”

霍金谈及了人工智能在帮助消除对自然世界造成的损害或者根除贫困与疾病方面的潜能,通过人工智能,社会的各个层面都会“得到改变”。

But he admitted the future was uncertain.

但是他承认未来是不确定的。

“Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” Hawking said during the speech.

霍金在这次讲话中说“成功创造出高效的人工智能,可能是我们文明历史上最重大的事件,也可能是最糟糕的事件。我们不得而知。所以我们无法知道,人工智能是会无限地帮助我们,还是会忽视我们、将我们边缘化,甚至可能毁灭我们”。

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

“除非我们知道如何准备、避免潜在的威胁,否则人工智能将会是我们文明历史上最糟糕的事件。它会带来威胁,就像威力巨大的自动化武器,或者少数人用来压迫大多数人的新方法。它能够给我们的经济造成严重的破坏”。

Hawking explained that to avoid this potential reality, creators of AI need to “employ best practice and effective management.”

霍金解释称,为了避免这种潜在的可能性,人工智能的创造者们需要“采用最佳实践和有效的管理”。

The scientist highlighted some of the legislative work being carried out in Europe, particularly proposals put forward by lawmakers earlier this year to establish new rules around AI and robotics. Members of the European Parliament said European Union-wide rules were needed on the matter.

这位科学家强调了正在欧洲进行的某些立法工作,特别是立法者们在今年早些时候提供的一些建议,这些建议旨在围绕着人工智能和机器人设置一些新的规定。欧洲议会的成员们说在这一事件上,我们需要欧盟范围内的法规。

Such developments are giving Hawking hope.

这样的事态发展给霍金带来了希望。

“I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance,” Hawking said.

霍金说“我是一个乐观主义者,我相信我们能够创造出造福世界的人工智能,它能够与我们和谐共处。我们只是需要意识到危险,识别它们,采用尽可能好的实践和管理方式,并提前为其后果做好准备”。

It’s not the first time the British physicist has warned on the dangers of AI. And he joins a chorus of other major voices in science and technology to speak about their concerns. Tesla and SpaceX CEO Elon Musk recently said that AI could cause a third world war, and even proposed that humans must merge with machines in order to remain relevant in the future.

这不是这位英国物理学家首次对人工智能的威胁提出警告。他的表态与科技领域其他重要人士表达的担忧是一致的。特斯拉与SpaceX的首席执行官埃隆·马斯克最近说,人工智能可能引发第三次世界大战,甚至建议人类必须与机器融合,才能在未来保持自己的重要地位。

And others have proposed ways to deal with AI. Microsoft founder Bill Gates said robots should face income tax.

其他人则提出了很多与人工智能打交道的方法。微软创建者比尔·盖茨认为应该对机器人征收所得税。

Some major figures have argued against the doomsday scenarios. Facebook Chief Executive Mark Zuckerberg said he is “really optimistic” about the future of AI.

一些重要人物已经表态不赞同这种世界末日式的设想。脸书首席执行官马克·扎克伯格说他对人工智能的未来“真的感到乐观”。

评论翻译:

ITT: many who didn’t read the article.

本帖现状:很多人根本没有读过这篇文章。

Hawking simply says he’s optimistic and thinks AI is the way to go, but society needs to be ready for its arrival or it could cause a lot of damage. An analogy would be the use of nuclear energy, which was also used as weapon of mass destruction. Effectively simply creating AI wouldn’t destroy society, it’s how humans chose to use the AI or the mistakes humans fail to see, that could be harming to society. For the case of weapons, he isn’t saying it will be an AI uprising, but that automated weaponry (which already exists) is a serious risk, like nuclear bombs are.

霍金只是说他对此持乐观态度,认为人工智能是发展的方向,但社会需要为它的到来做好准备,否则它可能造成巨大的破坏。我们可以类比核能的使用,核能也曾被用于制造大规模杀伤性武器。实际上,仅仅创造出人工智能并不会毁灭社会,真正可能伤害社会的,是人类选择如何使用人工智能,或者人类未能预见的错误。就拿武器来说,他并不是说人工智能会发动叛乱,而是说自动化武器(它已经存在了)是一个严重的风险,就像核弹一样。

The article’s title is slightly misleading.

这篇文章的标题有一点误导性

So the ever present fear is that if you put the ai on the internet it will copy itself everywhere. Fair enough. But I have a solution.

那么总是存在的恐惧便是如果你将人工智能放在互联网上,它将会四处复制自己。这是很有道理的。但是我有一个解决办法。

You see, the software to make an ai go has to be massive. Obviously once hitting singularity it will refine its own code as much as possible, but you can only refine something so far, and sure it’s a gamble but I’m willing to bet that a fully functional, self aware ai can’t be any smaller than a couple of gigabytes. All we have to do is build it somewhere with terribly slow and unreliable internet. If it tries to get out we would have plenty of time to notice and simply pull the plug.

你瞧,让一个人工智能运行起来的软件必然非常庞大。显然,一旦达到奇点,它就会尽可能地优化自己的代码,但优化终究是有极限的。这当然是一场赌博,但我愿意打赌,一个功能完备、有自我意识的人工智能不可能小于几个GB。我们所要做的,便是在一个网速奇慢而且不可靠的地方建造它。如果它试图逃出去,我们就有足够的时间注意到,然后拔掉插头。

That’s right gentlemen, I propose we build the ai in Australia, and connect it to the NBN. It’s perfect.

没错,先生们,我建议我们在澳大利亚建造这个人工智能,然后把它接入NBN(澳大利亚国家宽带网络)。完美。
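The commenter's back-of-envelope reasoning can be made concrete with a quick calculation. The figures below (a 2 GB model, a 1 Mbit/s uplink) are purely illustrative assumptions, not measurements of any real system:

```python
# Toy check of the comment's argument: how long would a multi-gigabyte
# AI need to copy itself out over a very slow, unreliable link?
# Both numbers are illustrative assumptions from the comment's premise.

model_bytes = 2 * 10**9          # assume a 2 GB self-aware AI
uplink_bits_per_s = 1 * 10**6    # assume a 1 Mbit/s uplink

seconds = model_bytes * 8 / uplink_bits_per_s
hours = seconds / 3600
print(f"{hours:.1f} hours to copy itself out")  # hours, not milliseconds
```

Even under these generous assumptions the transfer takes several hours, which is the commenter's point: on a slow enough link, there is ample time to notice and pull the plug.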

Maybe it is not Steven Hawking talking, but his computer.

可能这不是史蒂芬·霍金在讲话,而是他的计算机在讲话。

Maybe the computer doesn’t want a competitor.

也许是这台计算机不想让竞争者出现。

You guys make AI sound like deus ex machina, which it’s not.

你们这些人把人工智能说得像是“机械降神”,但它并不是。

Hopefully it finds a gentle way to kill us all.

希望它能够找到一种温柔的方式杀死我们所有人。

Assuming it’s possible to create a sentient AI…everyone is just skimming over the actually effort and breakthrough required to get there, for all we know we may never get there

假设真的能够创造出一个有意识的人工智能……每个人都轻描淡写地略过了实现它所需的努力和突破,而据我们所知,我们也许永远都无法实现这一目标。

Every time Stephen Hawking comes up in an AI conversation I question why people pay so much attention to what he thinks about it.

每当史蒂芬·霍金说一些和人工智能相关的话,我都要问为什么人们要对他想什么这么关注。

He’s a genius sure, but he’s a physicist who specializes in cosmology. The connection to AI is tenuous at best.

他当然是一个天才,但是他是一个物理学家,专业是宇宙学。他和人工智能的联系真的很少。

Yet every time he says something about AI people just think “hey this guy is a well known genius, guess his opinion is a big deal!”

但是每次他发表对人工智能的看法时,人们就会想“嘿,这个人是个知名的天才,他的观点肯定很重要!”。

If Stephen Hawking started to warn you about the dangers of gluten would you also be all ears? If Tom Hanks spoke out against post-modernist West German folk-dance would you also take him for his word?

如果史蒂芬·霍金开始警告你麸质的危害,你也会洗耳恭听吗?如果汤姆·汉克斯出言反对后现代主义的西德民间舞蹈,你也会对他言听计从吗?

AI isn't what people think it is. It essentially boils down to a mathematical formula (at its core), and it tries to minimize its output.

人工智能不是人们想象的那样。它本质上可以归结为一个数学公式(就其核心而言),它试图最小化自己的输出。

There is no consciousness. No moral code. It just finds patterns and acts on the patterns it has been trained on millions and millions of times. That's why it's dangerous to trust AI: it's not always 100% correct and can have unforeseen results in the end.

它没有意识,没有道德准则。它只是发现模式,并根据它被训练了千百万次的模式来行动。这就是为什么信任人工智能是危险的:它不总是百分之百正确,最终可能产生不可预见的结果。
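The commenter's claim that AI "boils down to a mathematical formula it tries to minimize" can be illustrated with the simplest case: gradient descent on a loss function. Everything here (the quadratic toy loss, the target value, the learning rate) is an illustrative assumption, not any real system's internals:

```python
# Minimal sketch of "minimizing a mathematical formula": gradient
# descent on a toy quadratic loss. The loss, target, and learning
# rate are illustrative assumptions chosen for this example only.

def loss(w):
    # Toy loss: squared error between parameter w and a target of 3.0.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0                     # start from an arbitrary parameter value
for _ in range(1000):       # repeat the update many times ("training")
    w -= 0.1 * grad(w)      # step against the gradient

print(round(w, 3))          # w converges toward the minimizer, 3.0
```

The procedure has no understanding of *why* 3.0 is the answer; it only follows the slope of the formula, which is the commenter's point about pattern-fitting without consciousness or a moral code.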
