Many technologists who are not also neuroscientists would like us to believe that human-like artificial intelligence—or something close to it—is right around the corner. Just a discovery or two away. Far less attention is given to the gulf between our current knowledge and capabilities and that imagined future.
We're to assume instead that the gap is trivial, at least in the sense that it will soon be bridged and that the bridging is inevitable. And so concepts like machine intelligence and neural networks are tossed around like sci-fi props. Luke Hewitt, a doctoral student in MIT's Department of Brain and Cognitive Sciences, is particularly concerned about the "unreasonable reputation" of neural networks. In a post on MIT's Thinking Machines blog, he argues that there are good reasons to be more skeptical.
Hewitt's central point is that a machine that becomes proficient at a single task can very easily seem generally intelligent when it really isn't.
"The ability of neural networks to learn interpretable word embeddings, say, does not remotely suggest that they are the right kind of tool for a human-level understanding of the world," Hewitt writes. "It is impressive and surprising that these general-purpose, statistical models can learn meaningful relations from text alone, without any richer perception of the world, but this may speak much more about the unexpected ease of the task itself than it does about the capacity of the models. Just as checkers can be won through tree-search, so too can many semantic relations be learned from text statistics. Both produce impressive intelligent-seeming behaviour, but neither necessarily pave the way towards true machine intelligence."
"If they have succeeded in anything superficially similar, it has been because they saw many hundreds of times more examples than any human ever needed to."
That said, Hewitt is far from a neural network detractor. He notes that neural network techniques—in which webs of nodes function as information-processing units, loosely analogous to biological neurons—are immensely powerful when it comes to learning patterns from very large datasets. This is what makes them so useful for otherwise computationally prohibitive tasks like text and speech recognition. That kind of pattern-finding is one of the brain's superpowers, too: finding meaning within relentless floods of sensory data. The brain's auditory and visual centers must take vast amounts of input in the form of waves and pixels, turn it all into data, and then capture the meaning, the statistical regularities, in that data.
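For readers who haven't seen one up close, here is a bare-bones sketch of what such a network is mechanically. It is my own illustration, not any system Hewitt discusses: a small web of weighted nodes fitted by gradient descent to a toy pattern (XOR). Real text and speech systems differ mainly in scale and architecture, not in kind.

```python
# Minimal illustrative sketch (not a production system): a tiny two-layer
# neural network learns the XOR pattern from data by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer: 8 nodes
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # single output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    p = sigmoid(h @ W2 + b2)        # predicted probability of "1"
    # Gradients of the cross-entropy loss, backpropagated through the layers.
    dz2 = p - y
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad          # gradient-descent update (in place)

print(p.round(2).ravel())            # approaches [0, 1, 1, 0]
```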
But pattern-finding of this sort is only a starting point:
The many facets of human thought include planning towards novel goals, inferring others' goals from their actions, learning structured theories to describe the rules of the world, inventing experiments to test those theories, and learning to recognise new object kinds from just one example. Very often they involve principled inference under uncertainty from few observations. For all the accomplishments of neural networks, it must be said that they have only ever proven their worth at tasks fundamentally different from those above. If they have succeeded in anything superficially similar, it has been because they saw many hundreds of times more examples than any human ever needed to.
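The contrast Hewitt draws is worth making concrete. The sketch below is mine, not his, and the hypotheses and numbers are invented for illustration: a textbook-style Bayesian concept learner that, shown a single example, already assigns most of its belief to the most specific consistent concept. That is the flavor of principled inference from few observations he has in mind, and it is not what a pattern-fitter trained on hundreds of examples is doing.

```python
# Illustrative sketch only (hypotheses and numbers invented for this example):
# Bayesian concept learning from a single observation. Seeing just the number
# 16, which concept most likely generated it?
hypotheses = {
    "even numbers (1-100)":     {n for n in range(1, 101) if n % 2 == 0},
    "powers of two (1-100)":    {2, 4, 8, 16, 32, 64},
    "multiples of ten (1-100)": {n for n in range(1, 101) if n % 10 == 0},
}

observation = 16
prior = 1.0 / len(hypotheses)   # uniform prior over the candidate concepts

# Likelihood of one example drawn from a concept is 1/|concept| if the example
# is consistent with it, else 0 (the "size principle": smaller consistent
# hypotheses are favoured).
posterior = {}
for name, extension in hypotheses.items():
    likelihood = 1.0 / len(extension) if observation in extension else 0.0
    posterior[name] = prior * likelihood

total = sum(posterior.values())
posterior = {name: weight / total for name, weight in posterior.items()}

print(posterior)   # "powers of two" gets most of the weight from one example
```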
It's easy to get swamped by Singularity noise and science-fictional grandstanding about gun-toting machine intelligence, so well-reasoned AI reality checks like Hewitt's are worth spotlighting. The reality is often far, far from the hype. Deep learning, whether it's a brain contending with floods of sensory input or an algorithm reading handwriting, is necessary for intelligence, but it's not intelligence in itself.