04 October 2018

The Decline of Artificial Intelligence

[...]

Underpinning much of the buzz over artificial intelligence in London and elsewhere is the implicit premise that AI is the transformative technology of the moment, or maybe of the decade, or even of the century or, well, just about ever. Promises like the AI Summit’s claim that the technology goes “beyond the hype” to “deliver real value in business” only drive the corporate feeding frenzy among executives desperate not to be left behind.
But there is something else going on “beyond the hype,” something that ought to be disconcerting for AI boosters: among those closest to the cutting edge of machine learning, there is a sense, perhaps a faint but creeping suspicion, that deep learning, the technique that underpins most of what people think of as ground-breaking AI, may not deliver on its promise.
For one thing, there’s a growing consensus among AI researchers that deep learning alone will probably not get us to artificial general intelligence (the idea of a single piece of software that is more intelligent than humans at a wide variety of tasks). But there’s also a growing fear that AI may not create systems that are reliably useful for even a narrower set of real-world challenges, like autonomous driving or making investment decisions.

Filip Piekniewski, an expert on computer vision in San Diego who is currently the principal AI scientist for Koh Young Technology Inc, a company that builds 3D measurement devices, recently kicked off a debate with a viral blog post predicting a looming “AI Winter,” a period of disillusionment and evaporating funding for AI-related research. (The field has experienced several such periods over the past half-century.)

Piekniewski’s evidence? The pace of AI breakthroughs seems to be slowing, and the breakthroughs that are still occurring seem to require ever-larger amounts of data and computing power. Several top AI researchers who had been hired by big tech firms to head in-house AI labs have left or moved into slightly less prominent roles. And most importantly, Piekniewski argues that the recent crashes of self-driving cars point to fundamental issues with the ability of deep learning to handle the complexity of the real world. Even more notable than the crashes, he says, is how often the machines lose confidence in their ability to make safe decisions and cede control back to human drivers.
Piekniewski also references the work of New York University’s Gary Marcus, who earlier this year published a much-discussed paper critiquing the failings of today’s deep learning systems. These systems, Marcus argues, can identify objects in images but lack any model of the real world. As a result, they often can’t handle new situations, even ones very similar to those they’ve been trained on. For instance, the DeepMind algorithm that performs so well at the Atari game “Breakout” (and which the company often highlights in public presentations) does terribly if it is suddenly presented with a different-sized paddle, whereas a top human player would likely find a larger paddle wasn’t much of a handicap.
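Marcus’s paddle example is easy to reproduce in miniature. The toy sketch below is not from the article or from DeepMind’s code; the environment, the numbers, and every name in it are invented for illustration. It contrasts two players of a tiny catch game: one that parses the scene and moves toward the ball, and one that has merely memorized frame-to-action mappings seen in training, which is roughly the failure mode Marcus attributes to pixel-trained agents. When the paddle is widened, every frame looks unfamiliar to the memorizer, so it freezes, while the scene-parsing player barely notices.

```python
import random

random.seed(0)

def render(width, ball_col, ball_row, pad_col, pad_w, height):
    """Pixel-style observation: '#' is the falling ball, '=' the paddle."""
    grid = [["."] * width for _ in range(height + 1)]
    grid[ball_row][ball_col] = "#"
    for c in range(pad_col, min(width, pad_col + pad_w)):
        grid[height][c] = "="
    return tuple("".join(row) for row in grid)

def run_episode(policy, width=7, height=6, pad_w=1):
    """A ball falls straight down; return 1 if the paddle ends up under it."""
    ball = random.randrange(width)
    pad = width // 2
    for row in range(height):
        action = policy(render(width, ball, row, pad, pad_w, height))
        pad = max(0, min(width - pad_w, pad + action))  # action is -1, 0 or +1
    return int(pad <= ball < pad + pad_w)

def scene_player(obs):
    """A crude world model: locate the ball and steer toward its column."""
    ball_col = next(row.index("#") for row in obs if "#" in row)
    pad_col = obs[-1].index("=")
    return (ball_col > pad_col) - (ball_col < pad_col)

# "Training": record the scene player's action for every frame encountered
# with the original paddle width -- a stand-in for overfitting to pixels.
table = {}
def recorder(obs):
    table[obs] = scene_player(obs)
    return table[obs]

for _ in range(2000):
    run_episode(recorder, pad_w=1)

def memorizer(obs):
    return table.get(obs, 0)  # unseen frame -> no idea, stand still

def score(policy, pad_w, n=1000):
    return sum(run_episode(policy, pad_w=pad_w) for _ in range(n)) / n

for pad_w in (1, 3):  # 3 is the wider, objectively easier paddle
    print(f"paddle width {pad_w}: "
          f"scene player {score(scene_player, pad_w):.2f}, "
          f"memorizer {score(memorizer, pad_w):.2f}")
```

The wider paddle makes the game easier for anything with even this much of a model of the scene; the memorizer gets worse, for the same reason Marcus says the pixel-trained Breakout agent does.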

[...]

Source: here
