
Living with artificial intelligence

The second passage of Test 2 in Cambridge IELTS 18 is about artificial intelligence. Its content is somewhat abstract, but broadly speaking it discusses three issues: first, today's narrow AI may develop into general intelligence in the future and surpass human beings; second, for the sake of human safety, we need to give such machines moral goals, yet we humans have no universal moral standard; third, even if we found such a standard, letting AI enforce it would affect our autonomy. A paragraph-by-paragraph translation follows.

Cambridge IELTS 18 Test 2 Passage 2: Living with Artificial Intelligence

Cambridge IELTS 18 Test 2 Passage 2 Reading: Original Translation


Introduction

Powerful artificial intelligence (AI) needs to be reliably aligned with human values, but does this mean AI will eventually have to police those values?

Powerful artificial intelligence (AI) needs to be reliably consistent with human values, but does this mean that AI will eventually have to regulate these values?

Paragraph 1

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news, what’s next? True, these prodigious accomplishments are all in so-called narrow Al, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) – machines that can achieve human-level performance on the full range of tasks that we ourselves can tackle.

This has been the decade of AI, which has produced one astonishing feat after another. A chess-playing AI that, after learning the game for only four hours, can defeat not only every human player but also every chess machine humans had previously programmed? That is already yesterday's news. What comes next? Admittedly, these amazing achievements all belong to so-called narrow AI, in which machines perform highly specialized tasks. However, many experts believe this restriction is only temporary. By the middle of this century, we may have artificial general intelligence (AGI): machines that reach human-level performance across the full range of tasks we can handle ourselves.

Paragraph 2

If so, there’s little reason to think it will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run at slow biochemical processing speeds on the power of a light bulb, and their size is restricted by the dimensions of the human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.

If so, there is little reason to think development will stop there. Machines will be free of many of the physical constraints on human intelligence. Our brains run on the power of a light bulb, at slow biochemical processing speeds, and their size is limited by the dimensions of the human birth canal. Given these handicaps, what they accomplish is remarkable. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.

Paragraph 3

Once machines are better than us at designing even smarter machines, progress toward these limits could accelerate. What would this mean for us? Could we ensure safe and worthwhile coexistence with such machines? On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas, for example, might have wished that everything he touched turned to gold, but didn’t really intend this to apply to his breakfast.

Once machines are better than we are at designing even smarter machines, progress toward these limits may accelerate. What would this mean for us? Can we ensure safe and worthwhile coexistence with such machines? On the positive side, AI is already useful and profitable in many areas, and super AI can be expected to be super useful and super profitable. However, the more powerful AI becomes, the more important it is to specify its goals with great care. Folklore is full of stories of people who ask for the wrong thing, with disastrous consequences. King Midas, for example, may have wished that everything he touched would turn to gold, but he did not really intend that to apply to his breakfast.

Paragraph 4

So we need to create powerful AI machines that are ‘human-friendly’- that have goals reliably aligned with our own values. One thing that makes this task difficult is that we are far from reliably human-friendly ourselves. We do many terrible things to each other and to many other creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

Therefore, we need to create powerful AI machines that are "human-friendly", whose goals are reliably aligned with our own values. One thing that makes this task difficult is that we ourselves are far from reliably human-friendly. We do many terrible things to each other and to the many other creatures that share the planet with us. If superintelligent machines cannot do much better than us, we will be in deep trouble: powerful new intelligence will amplify the dark side of our own fallible nature.

Paragraph 5

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll be smart enough for the job. If there are routes to the moral high ground, they’ll be better than us at finding them, and steering us in the right direction.

For safety's sake, then, we want these machines to surpass humans both morally and cognitively. We want them to aim for the moral high ground, not the troughs in which many of us spend some of our time. Fortunately, they will be smart enough for the job. If there are routes to the moral high ground, they will be better than us at finding them and at steering us in the right direction.

Paragraph 6

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity that we can be confident they will find it – whatever ‘it’ actually turns out to be. This won’t be easy, given that we are tribal creatures and conflicted about the ideals ourselves. We often ignore the suffering of strangers, and even contribute to it, at least indirectly. How then, do we point machines in the direction of something better?

However, there are two major problems with this utopian vision. One is how we get the machines started on this journey; the other is what it would mean to reach the destination. The "getting started" problem is that we need to tell the machines what they are looking for clearly enough that we can be confident they will find it, whatever "it" actually turns out to be. This will not be easy, given that we are tribal creatures who are conflicted about these ideals ourselves. We often ignore the suffering of strangers, and even contribute to it, at least indirectly. How, then, do we point machines toward something better?

Paragraph 7

As for the ‘destination’ problem, we might, by putting ourselves in the hands of these moral guides and gatekeepers, be sacrificing our own autonomy – an important part of what makes us human. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own communities, for example.

As for the "destination" problem, by putting ourselves in the hands of these moral guides and gatekeepers, we may be sacrificing our own autonomy, an important part of what makes us human. Machines that are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we currently take for granted. For example, we might lose the freedom to favor our own communities.

Paragraph 8

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical silicon police limiting our options? They might be so good at doing it that we won’t notice them; but few of us are likely to welcome such a future.

Of course, losing the freedom to behave badly is not always a bad thing: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants, is a sign of progress. But are we ready for ethical silicon police that limit our options? They might be so good at the job that we would not even notice them; yet few of us are likely to welcome such a future.

Paragraph 9

These issues might seem far-fetched, but they are to some extent already here. AI already has some input into how resources are used in our National Health Service (NHS) here in the UK, for example. If it was given a greater role, it might do so much more efficiently than humans can manage, and act in the interests of taxpayers and those who use the health system. However, we’d be depriving some humans (e.g. senior doctors) of the control they presently enjoy. Since we’d want to ensure that people are treated equally and that policies are fair, the goals of AI would need to be specified correctly.

These issues may seem far-fetched, but to some extent they already exist. In the UK, for example, AI already has some input into how the resources of the National Health Service (NHS) are used. If it were given a greater role, it might work far more efficiently than human management, and act in the interests of taxpayers and of those who use the health system. However, we would be depriving some humans (such as senior doctors) of the control they presently enjoy. Since we want to ensure that people are treated equally and that policies are fair, the goals of the AI would need to be specified correctly.

Paragraph 10

We have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if it is, it will require a cooperative spirit, and a willingness to set aside self-interest.

We are faced with a powerful new technology: itself, quite literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction and get them to act well on our behalf. It is not yet clear whether this is possible, but if it is, it will require a spirit of cooperation and a willingness to set aside self-interest.

Paragraph 11

Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.

General intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we treat them as a package: if we are to give machines general intelligence, we will also need to give them moral authority. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.


Fixed link of this article: http://www.laokaoya.com/56446.html |Old Roast Duck IELTS - Focus on IELTS preparation
