Waseda Social Sciences: Exam Trends, Answers, and Explanations (2019, Question 5)

Answers, explanations, and a full translation for the 2019 past exam of Waseda University's School of Social Sciences. A professional private tutor analyzes the question trends to help examinees prepare for the entrance exam.


[University]

Waseda University

[Faculty]

: School of Social Sciences


[Exam]

2019 Questions

[Format]

: Gap-filling and reading comprehension

[Title]

How to Make A.I. That's Good for People.

[Author]

: Fei-Fei Li (李飛飛)

[Strategy]

: An expository passage. You fill appropriate words and phrases into the text, and a final question tests overall comprehension. Questions 2 through 5 of the exam share this format. The passage is an op-ed that an A.I. researcher at Stanford University contributed to a general newspaper. Read it with this question in mind: what would it mean for A.I. to become more human?

[Keywords]

: A.I. and humans; science and technology; Industrial Revolution

[Suggested time]

: 20 minutes


Waseda Social Sciences 2019, Question 5


[Question 5: Reading Comprehension]



Read the following English passage and answer the questions below. Mark your answers on the mark-sheet answer form.

For a field that was not well known outside of academia a decade ago, artificial intelligence (A.I.) has grown dizzyingly fast. Technological companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

It is an especially exciting time for researchers. In the early 2000s, computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: A.I. has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail. However, enthusiasm for A.I. might be preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow's world, it must be guided by human concerns.

This approach could be called “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines. First, A.I. needs to reflect more of the depth that characterizes our own intelligence. Consider the richness of human visual perception. It's complex and deeply contextual, and naturally balances our awareness of the obvious with a sensitivity to nuance. By comparison, machine perception remains strikingly narrow.

Sometimes this difference is trivial. For instance, an image-captioning algorithm once fairly summarized a photo as “a man riding a horse” but failed to note the fact that both were bronze sculptures. Other times, the difference is more profound, as when the same algorithm described an image of zebras grazing on a savanna beneath a rainbow. While the summary was technically correct, it was entirely devoid of aesthetic awareness, failing to detect any of the vibrancy or depth a human would naturally appreciate.

That may seem like a subjective or inconsequential critique, but it points to a major aspect of human perception beyond the grasp of our algorithms. How can we expect machines to anticipate our needs — much less contribute to our well-being — without insight into these “fuzzier” dimensions of our experience?

Making A.I. more sensitive to the full scope of human thought is no simple task. The solutions are likely to require insights derived from fields beyond computer science, which means programmers will have to learn to collaborate more often with experts in other domains. Such collaboration would represent a return to the roots of our field, not a departure from it. Younger A.I. enthusiasts may be surprised to learn that the principles of today's deep-learning algorithms stretch back more than 60 years to the neuroscientific researchers David Hubel and Torsten Wiesel, who discovered how the hierarchy of neurons in a cat's visual cortex responds to stimuli.

 1 , ImageNet, a data set of millions of training photographs that helped to advance computer vision, is based on a project called WordNet, created in 1995 by the cognitive scientist and linguist George Miller. WordNet was intended to organize the semantic concepts of English.

Reconnecting A.I. with fields like cognitive science, psychology and even sociology will give us a far richer foundation on which to base the development of machine intelligence; we can expect the resulting technology to collaborate and communicate more naturally, which will help us approach the second goal of human-centered A.I.: enhancing us, not replacing us.

Imagine the role that A.I. might play during surgery. The goal need not be to automate the process entirely. Instead, a combination of smart software and specialized hardware could help surgeons focus on their strengths — traits like dexterity and adaptability — while keeping tabs on more mundane tasks and protecting against human error, fatigue and distraction. Or consider senior care. Robots may never be the ideal custodians of the elderly, but intelligent sensors are already showing promise in helping human caretakers focus more on their relationships with those they provide care for by automatically monitoring drug dosages and going through safety checklists. These are examples of a trend toward automating those elements of jobs that are repetitive, error prone and even dangerous. What's left are the creative, intellectual and emotional roles for which humans are still best suited.

No amount of ingenuity, however, will fully eliminate the threat of job displacement. Addressing this concern is the third goal of human-centered A.I.: ensuring that the development of this technology is guided, at each step, by concern for its effect on humans. Today's anxieties over labor are just the start. Additional pitfalls include bias against underrepresented communities in machine learning, the tension between A.I.'s appetite for data and the privacy rights of individuals and the geopolitical implications of a global intelligence race.

Adequately facing these challenges will require commitments from many of our largest institutions. Universities are uniquely positioned to foster connections between computer science and traditionally unrelated departments like the social sciences and even humanities, through interdisciplinary projects, courses and seminars. Governments can make a greater effort to promote computer science education, especially among young girls, racial minorities and other groups whose perspectives have been underrepresented in A.I. Corporations should combine their aggressive investment in intelligent algorithms with ethical A.I. policies that temper ambition with responsibility.

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all; machine values are human values. A human-centered approach to A.I. means these machines don't have to be our competitors, but partners in securing our well-being.  2  autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility.


Fei-Fei Li. How to Make A.I. That's Good for People.


1. Which one of the following is closest in meaning to the phrase let alone?

a. more importantly
b. also
c. not to mention
d. and what is worse
e. unfortunately


2. Which one of the following is closest in meaning to the word momentous?

a. significant
b. unbridled
c. unprecedented
d. alarming
e. unsettling


3. Which one of the following is closest in meaning to the word profound?

a. nuanced
b. serious
c. complicated
d. subtle
e. aggravating


4. Which one of the following is closest in meaning to the word inconsequential?

a. irrelevant
b. laudable
c. modest
d. elementary
e. erratic


5. Which one of the following words best fits  1  in the passage?

a. On the other hand
b. Accordingly
c. Nevertheless
d. Likewise
e. In addition


6. Which one of the following is closest in meaning to the word mundane?

a. very difficult
b. less important
c. cumbersome
d. routine
e. complex


7. Which one of the following is closest in meaning to the word foster?

a. certify
b. impart
c. promote
d. round-off
e. unify


8. Which one of the following words best fits  2  in the passage?

a. Once
b. Because
c. However
d. Although
e. For


9. Which one of the following best describes the main point of this passage?

a. It is likely that one day A.I. will replace most jobs currently being done by human beings.
b. A.I is important because it helps human beings to engage in self-reflection.
c. A.I. is useful because it can help link computer science with other academic disciplines.
d. The development of A.I. should reflect the interests and needs of human beings.
e. A.I. is useful primarily when it is able to simulate the cognitive processes of human beings.

Waseda Social Sciences 2019, Question 5: Answers


[Question 5: Reading Comprehension Answers]


1. c
2. a
3. b
4. a
5. d
6. d
7. c
8. c
9. d

Waseda Social Sciences 2019, Question 5: Explanations


[Question 5: Reading Comprehension Explanations]


An expository passage. You fill appropriate words and phrases into the text, and a final question tests overall comprehension. Questions 2 through 5 of the exam share this format.

The passage is an op-ed that an A.I. researcher at Stanford University contributed to a general newspaper. Read it with this question in mind: what would it mean for A.I. to become more human?

1. "let alone" means "not to mention" (c): computers could barely detect edges, to say nothing of recognizing faces.

2. "momentous" means "significant" (a): something of great importance has occurred.

3. "profound" here means "serious" (b): a deeper, weightier difference than the trivial one just described.

4. "inconsequential" means "irrelevant" (a): a critique that seems to be of no real consequence.

5. "Likewise" (d): the ImageNet example parallels the preceding Hubel and Wiesel example of A.I.'s roots in other fields.

6. "mundane" means "routine" (d): ordinary, repetitive tasks, in contrast to the surgeon's distinctive strengths.

7. "foster" means "promote" (c): to encourage the growth of connections between departments.

8. "However" (c): "However autonomous our technology becomes" means "no matter how autonomous it becomes."

9. The main point is (d): the development of A.I. should be guided by human interests and needs, which is the thesis of "human-centered A.I."


[Key Expressions]


academia: the academic world; scholarly research

inconsequential: unimportant; not essential

automate: to make a process run automatically

mundane: ordinary; routine

Waseda Social Sciences 2019, Question 5: Completed Text


[Question 5: Reading Comprehension Completed Text]


For a field that was not well known outside of academia a decade ago, artificial intelligence (A.I.) has grown dizzyingly fast. Technological companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

It is an especially exciting time for researchers. In the early 2000s, computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: A.I. has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail. However, enthusiasm for A.I. might be preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow's world, it must be guided by human concerns.

This approach could be called “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines. First, A.I. needs to reflect more of the depth that characterizes our own intelligence. Consider the richness of human visual perception. It's complex and deeply contextual, and naturally balances our awareness of the obvious with a sensitivity to nuance. By comparison, machine perception remains strikingly narrow.

Sometimes this difference is trivial. For instance, an image-captioning algorithm once fairly summarized a photo as “a man riding a horse” but failed to note the fact that both were bronze sculptures. Other times, the difference is more profound, as when the same algorithm described an image of zebras grazing on a savanna beneath a rainbow. While the summary was technically correct, it was entirely devoid of aesthetic awareness, failing to detect any of the vibrancy or depth a human would naturally appreciate.

That may seem like a subjective or inconsequential critique, but it points to a major aspect of human perception beyond the grasp of our algorithms. How can we expect machines to anticipate our needs — much less contribute to our well-being — without insight into these “fuzzier” dimensions of our experience?

Making A.I. more sensitive to the full scope of human thought is no simple task. The solutions are likely to require insights derived from fields beyond computer science, which means programmers will have to learn to collaborate more often with experts in other domains. Such collaboration would represent a return to the roots of our field, not a departure from it. Younger A.I. enthusiasts may be surprised to learn that the principles of today's deep-learning algorithms stretch back more than 60 years to the neuroscientific researchers David Hubel and Torsten Wiesel, who discovered how the hierarchy of neurons in a cat's visual cortex responds to stimuli.

Likewise, ImageNet, a data set of millions of training photographs that helped to advance computer vision, is based on a project called WordNet, created in 1995 by the cognitive scientist and linguist George Miller. WordNet was intended to organize the semantic concepts of English.

Reconnecting A.I. with fields like cognitive science, psychology and even sociology will give us a far richer foundation on which to base the development of machine intelligence; we can expect the resulting technology to collaborate and communicate more naturally, which will help us approach the second goal of human-centered A.I.: enhancing us, not replacing us.

Imagine the role that A.I. might play during surgery. The goal need not be to automate the process entirely. Instead, a combination of smart software and specialized hardware could help surgeons focus on their strengths — traits like dexterity and adaptability — while keeping tabs on more mundane tasks and protecting against human error, fatigue and distraction. Or consider senior care. Robots may never be the ideal custodians of the elderly, but intelligent sensors are already showing promise in helping human caretakers focus more on their relationships with those they provide care for by automatically monitoring drug dosages and going through safety checklists. These are examples of a trend toward automating those elements of jobs that are repetitive, error prone and even dangerous. What's left are the creative, intellectual and emotional roles for which humans are still best suited.

No amount of ingenuity, however, will fully eliminate the threat of job displacement. Addressing this concern is the third goal of human-centered A.I.: ensuring that the development of this technology is guided, at each step, by concern for its effect on humans. Today's anxieties over labor are just the start. Additional pitfalls include bias against underrepresented communities in machine learning, the tension between A.I.'s appetite for data and the privacy rights of individuals and the geopolitical implications of a global intelligence race.

Adequately facing these challenges will require commitments from many of our largest institutions. Universities are uniquely positioned to foster connections between computer science and traditionally unrelated departments like the social sciences and even humanities, through interdisciplinary projects, courses and seminars. Governments can make a greater effort to promote computer science education, especially among young girls, racial minorities and other groups whose perspectives have been underrepresented in A.I. Corporations should combine their aggressive investment in intelligent algorithms with ethical A.I. policies that temper ambition with responsibility.

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all; machine values are human values. A human-centered approach to A.I. means these machines don't have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility.

Waseda Social Sciences 2019, Question 5: Full Translation

In preparation.
