As someone who has worked in A.I. for decades, I’ve witnessed the failure of similar predictions of imminent human-level A.I., and I’m certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when A.I. will ever crash the barrier of meaning.” To me, this is still the most important question.
The lack of humanlike understanding in machines is underscored by recent cracks that have appeared in the foundations of modern A.I. While today’s programs are much more impressive than the systems we had 20 or 30 years ago, a series of research studies have shown that deep-learning systems can be unreliable in decidedly unhumanlike ways.
Programs that “read” documents and answer questions about them can easily be fooled into giving wrong answers when short, irrelevant snippets of text are appended to the document. Similarly, programs that recognize faces and objects, lauded as a major triumph of deep learning, can fail dramatically when their input is modified even in modest ways by certain types of lighting, image filtering and other alterations that do not affect humans’ recognition abilities in the slightest.
One recent study showed that adding small amounts of “noise” to a face image can seriously harm the performance of state-of-the-art face-recognition programs. Another study, humorously called “The Elephant in the Room,” showed that inserting a small image of an out-of-place object, such as an elephant, in the corner of a living-room image strangely caused deep-learning vision programs to suddenly misclassify other objects in the image.
[...]
These are only a few examples demonstrating that the best A.I. programs can be unreliable when faced with situations that differ, even to a small degree, from what they have been trained on. The errors made by such systems range from harmless and humorous to potentially disastrous: imagine, for example, an airport security system that won’t let you board your flight because your face is confused with that of a criminal, or a self-driving car that, because of unusual lighting conditions, fails to notice that you are about to cross the street.
Even more worrisome are recent demonstrations of the vulnerability of A.I. systems to so-called adversarial examples. In these, a malevolent hacker can make specific changes to images, sound waves or text documents that, while imperceptible or irrelevant to humans, will cause a program to make potentially catastrophic errors.
The possibility of such attacks has been demonstrated in nearly every application domain of A.I., including computer vision, medical image processing, speech recognition and language processing. Numerous studies have demonstrated the ease with which hackers could, in principle, fool face- and object-recognition systems with specific minuscule changes to images, put inconspicuous stickers on a stop sign to make a self-driving car’s vision system mistake it for a yield sign, or modify an audio signal so that it sounds like background music to a human but instructs a Siri or Alexa system to perform a silent command.
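The article does not describe how such image perturbations are actually constructed, so here is a minimal sketch of one standard technique that is not named in the piece, the fast gradient sign method (FGSM). It assumes a pretrained PyTorch/torchvision image classifier; the model choice, the epsilon value and the placeholder image and label are illustrative only, not drawn from the studies discussed above.

```python
# Minimal FGSM sketch: nudge each pixel by at most `epsilon` in the direction
# that increases the classifier's loss, so the change stays visually negligible
# while the model's prediction can flip. Model, epsilon and inputs are
# illustrative assumptions, not details from the article.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode; gradients w.r.t. the input still flow

def fgsm_attack(image: torch.Tensor, true_label: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `image` perturbed by at most `epsilon` per pixel
    so that the loss on the true label increases."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel in the sign of its gradient, i.e. the direction
    # that most increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage with a placeholder image batch and a placeholder ImageNet label.
x = torch.rand(1, 3, 224, 224)   # stand-in for a real photograph
y = torch.tensor([207])          # hypothetical class index for the true label
x_adv = fgsm_attack(x, y)
print((x_adv - x).abs().max())   # per-pixel change is tiny, yet predictions can change
```

The key point mirrors the article’s: the perturbation is bounded so a human sees essentially the same picture, yet it is chosen precisely to push the model’s decision in the wrong direction.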
These potential vulnerabilities illustrate the ways in which current progress in A.I. is stymied by the barrier of meaning. Anyone who works with A.I. systems knows that behind the facade of humanlike visual abilities, linguistic fluency and game-playing prowess, these programs do not — in any humanlike way — understand the inputs they process or the outputs they produce. The lack of such understanding renders these programs susceptible to unexpected errors and undetectable attacks.
What would be required to surmount this barrier, to give machines the ability to more deeply understand the situations they face, rather than have them rely on shallow features? To find the answer, we need to look to the study of human cognition.
Our own understanding of the situations we encounter is grounded in broad, intuitive “common-sense knowledge” about how the world works, and about the goals, motivations and likely behavior of other living creatures, particularly other humans. Additionally, our understanding of the world relies on our core abilities to generalize what we know, to form abstract concepts, and to make analogies — in short, to flexibly adapt our concepts to new situations. Researchers have been experimenting for decades with methods for imbuing A.I. systems with intuitive common sense and robust humanlike generalization abilities, but there has been little progress in this very difficult endeavor.
A.I. programs that lack common sense and other key aspects of human understanding are increasingly being deployed for real-world applications. While some people are worried about “superintelligent” A.I., the most dangerous aspect of A.I. systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations. As the A.I. researcher Pedro Domingos noted in his book “The Master Algorithm,” “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
Source: here