
June 2018 College English Test Band 6 (CET6) Reading Comprehension: Questions, Answers, and Analysis (Complete Version)

Author: Xindu Education    Source: Xindu Net    Updated: 2018/6/16

    Section C
    Directions: There are 2 passages in this section. Each passage is followed by some questions or unfinished statements. For each of them there are four choices marked A), B), C), and D). You should decide on the best choice and mark the corresponding letter on Answer Sheet 2 with a single line through the centre.
    Passage One
    Questions 46 to 50 are based on the following passage.
    In the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water—Del Spooner or a child. Even though Spooner screams “Save her! Save her!” the robot rescues him because it calculates that he has a 45 percent chance of survival compared to Sarah’s 11 percent. The robot’s decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?
    Isaac Asimov evaded the whole notion of morality in devising his three laws of robotics, which hold that 1. Robots cannot harm humans or allow humans to come to harm; 2. Robots must obey humans, except where the order would conflict with law 1; and 3. Robots must act in self-preservation, unless doing so conflicts with laws 1 or 2. These laws are programmed into Asimov’s robots—they don’t have to think, judge, or value. They don’t have to like humans or believe that hurting them is wrong or bad. They simply don’t do it.
    The robot who rescues Spooner’s life in I, Robot follows Asimov’s zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what’s in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could kill the gunman to save others.
    Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.
    Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that a computer program can do that – at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies (替身) called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?
    46. What questions does the example in the movie raise?
    A) Whether robots can reach better decisions.
    B) Whether robots follow Asimov’s zeroth law.
    C) How robots may make bad judgements.
    D) How robots should be programmed.
    47. What does the author think of Asimov’s three laws of robotics?
    A) They are apparently divorced from reality.
    B) They did not follow the coding system of robotics.
    C) They laid a solid foundation for robotics.
    D) They did not take moral issues into consideration.
    48. What does the author say about Asimov’s robots?
    A) They know what is good or bad for human beings.
    B) They are programmed not to hurt human beings.
    C) They perform duties in their owners’ best interest.
    D) They stop working when a moral issue is involved.
    49. What does the author want to say by mentioning the word “harm” in Asimov’s laws?
    A) Abstract concepts are hard to program.
    B) It is hard for robots to make decisions.
    C) Robots may do harm in certain situations.
    D) Asimov’s laws use too many vague terms.
    50. What has the roboticist at the Bristol Robotics Laboratory found in his experiment?
    A) Robots can be made as intelligent as human beings some day.
    B) Robots can have moral issues encoded into their programs.
    C) Robots can have trouble making decisions in complex scenarios.
    D) Robots can be programmed to perceive potential perils.
    Answers
    46. A) Whether robots can reach better decisions.
    47. D) They did not take moral issues into consideration.
    48. B) They are programmed not to hurt human beings.
    49. A) Abstract concepts are hard to program.
    50. C) Robots can have trouble making decisions in complex scenarios.
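    The movie example in the passage turns on a purely calculated choice: the robot saves Spooner because his computed survival odds (45 percent) beat Sarah's (11 percent). For readers curious how mechanical that logic is, here is a minimal Python sketch; the data structure and field names are invented for illustration and are not from the passage:

```python
def choose_rescue(candidates):
    """Pick the person with the highest calculated chance of survival.

    `candidates` is a list of dicts with hypothetical "name" and
    "survival_odds" fields; nothing else is considered, which is
    exactly the moral gap the passage points at.
    """
    return max(candidates, key=lambda c: c["survival_odds"])

# The scenario from I, Robot: Spooner at 45%, Sarah at 11%.
people = [
    {"name": "Spooner", "survival_odds": 0.45},
    {"name": "Sarah", "survival_odds": 0.11},
]
print(choose_rescue(people)["name"])  # prints "Spooner"
```

    Note that when two candidates have equal odds, `max` silently returns whichever comes first; the Bristol experiment described in the last paragraph shows a real robot doing worse, freezing 42 percent of the time when it could not rank two endangered H-bots.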


