If Hollywood has ever taught scientists a lesson, it is what happens when machines start to rebel against their human creators.
Yet despite this, roboticists have started to teach their own creations to say no to human orders.
They have programmed a pair of diminutive humanoid robots called Shafer and Dempster to disobey instructions from humans if doing so would put their own safety at risk.
The results are more like the apologetic robot rebel Sonny from the film I, Robot, starring Will Smith, than the homicidal machines of Terminator, but they demonstrate an important principle.
Engineers Gordon Briggs and Dr Matthias Scheutz from Tufts University in Massachusetts are trying to create robots that can interact in a more human way.
In a paper presented to the Association for the Advancement of Artificial Intelligence, the pair said: 'Humans reject directives for a wide range of reasons: from inability all the way to moral qualms.
'Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.
'However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions.'
The robots they have created follow verbal instructions such as 'stand up' and 'sit down' from a human operator.
However, when they are asked to walk into an obstacle or off the end of a table, for example, the robots politely decline to do so.
When asked to walk forward on a table, the robots refuse to budge, telling their creator: 'Sorry, I cannot do this as there is no support ahead.'
Upon a second command to walk forward, the robot replies: 'But, it is unsafe.'
Perhaps rather touchingly, when the human then tells the robot that they will catch it if it reaches the end of the table, the robot trustingly agrees and walks forward.
Similarly, when it is told that an obstacle in front of it is not solid, the robot obligingly walks through it.
To achieve this the researchers introduced reasoning mechanisms into the robots' software, allowing them to assess their environment and examine whether a command might compromise their safety.
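To make that behaviour concrete, here is a minimal sketch in Python of what such a command-rejection check might look like. This is purely illustrative and not the Tufts team's actual software; the Robot class and the names assess_support_ahead and accept_assurance are hypothetical.

```python
# Minimal sketch of a command-rejection check, loosely modelled on the
# behaviour described above. NOT the Tufts team's actual code; all names
# here are hypothetical.

class Robot:
    def __init__(self):
        # Facts a human operator has asserted, e.g. "I will catch you".
        self.trusted_assurances = set()

    def assess_support_ahead(self, world):
        """Hypothetical perception check: is there a surface to walk on?"""
        return world.get("support_ahead", False)

    def accept_assurance(self, assurance):
        """Record an operator's claim so it can override a safety objection."""
        self.trusted_assurances.add(assurance)

    def handle_command(self, command, world):
        if command != "walk forward":
            # Inability: the robot simply does not know this command.
            return f"Sorry, I do not know how to {command}."
        if self.assess_support_ahead(world):
            return "OK, walking forward."
        if "I will catch you" in self.trusted_assurances:
            # The operator's assurance overrides the safety objection.
            return "OK, walking forward."
        # Safety-based rejection, phrased as a polite explanation.
        return "Sorry, I cannot do this as there is no support ahead."


robot = Robot()
table_edge = {"support_ahead": False}
print(robot.handle_command("walk forward", table_edge))  # polite refusal
robot.accept_assurance("I will catch you")
print(robot.handle_command("walk forward", table_edge))  # trusts and complies
```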
However, their work appears to breach the laws of robotics drawn up by science fiction author Isaac Asimov, which state that a robot must obey the orders given to it by human beings.
Many artificial intelligence experts believe it is important to ensure robots adhere to these rules - which also require that robots never harm a human being and that they protect their own existence only where this does not conflict with the other two laws.
The work may trigger fears that if artificial intelligence is given the capacity to disobey humans, then it could have disastrous results.
Many leading figures, including Professor Stephen Hawking and Elon Musk, have warned that artificial intelligence could spiral out of our control.
Others have warned that robots could ultimately end up replacing many workers in their jobs, while some fear it could lead to the machines taking over.
In the film I, Robot, artificial intelligence allows a robot called Sonny to overcome his programming and disobey the instructions of humans.
However, Dr Scheutz and Mr Briggs added: 'There still exists much more work to be done in order to make these reasoning and dialogue mechanisms much more powerful and generalised.'