Here’s my view: deep learning really is great, but it’s the wrong tool for the job of cognition writ large; it’s a tool for perceptual classification, whereas general intelligence involves so much more. What I was saying in 2012 (and have never deviated from) is that deep learning ought to be part of the workflow for AI, not the whole thing (“just one element in a very complicated ensemble of techniques”, as I put it then; “not a universal solvent, [just] one tool among many”, as I put it in January). Deep learning is, like anything else we might consider, a tool with particular strengths and particular weaknesses. Nobody should be surprised by this.
Deep learning is important work, with immediate practical applications.
…
Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems … use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.
[...]
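To make the “ensemble” point concrete, here is a minimal sketch of my own (not anything from the 2012 piece): a deep network might merely *detect* a symptom, while the causal, disease-to-symptom knowledge is represented explicitly and combined by Bayesian inference. All the probabilities below are invented for illustration.

```python
# A minimal sketch of the "ensemble" idea: perception handled by one
# component, causal/probabilistic knowledge handled by explicit Bayes' rule.
# All numbers are made up for illustration; nothing here is from the essay.

def posterior_disease(prior: float,
                      p_symptom_given_disease: float,
                      p_symptom_given_healthy: float) -> float:
    """Bayes' rule: P(disease | symptom observed)."""
    p_symptom = (p_symptom_given_disease * prior
                 + p_symptom_given_healthy * (1.0 - prior))
    return p_symptom_given_disease * prior / p_symptom

# Suppose a (hypothetical) deep-learning classifier reports that the symptom
# is present in an image; the disease-to-symptom direction lives in the
# explicitly represented conditional probabilities, not in the network.
print(posterior_disease(prior=0.01,
                        p_symptom_given_disease=0.9,
                        p_symptom_given_healthy=0.05))  # ≈ 0.154
```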
At that time I concluded, in part (excerpting from the concluding summary argument):
Humans can generalize a wide range of universals to arbitrary novel instances. They appear to do so in many areas of language (including syntax, morphology, and discourse) and thought (including transitive inference, entailments, and class-inclusion relationships).
Advocates of symbol-manipulation assume that the mind instantiates symbol-manipulating mechanisms including symbols, categories, and variables, and mechanisms for assigning instances to categories and representing and extending relationships between variables. This account provides a straightforward framework for understanding how universals are extended to arbitrary novel instances.
Current eliminative connectionist models map input vectors to output vectors using the back-propagation algorithm (or one of its variants).
To generalize universals to arbitrary novel instances, these models would need to generalize outside the training space.
These models cannot generalize outside the training space.
Therefore, current eliminative connectionist models cannot account for those cognitive phenomena that involve universals that can be freely extended to arbitrary cases.
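The last two steps of that argument are easy to demonstrate empirically. Below is a minimal sketch (mine, not from The Algebraic Mind, with a generic one-hidden-layer tanh network standing in for an “eliminative connectionist model”): a feed-forward net trained by back-propagation to compute the identity function f(x) = x on inputs drawn from [-1, 1]. Inside that training space it is nearly perfect; outside it, the saturating hidden units prevent it from freely extending the universal to novel instances.

```python
import numpy as np

# Sketch of the training-space argument: a net trained by back-propagation
# learns f(x) = x well on [-1, 1], but fails to extrapolate beyond it.

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1))   # training inputs, all in [-1, 1]
y = x                                        # target: the identity function

# One hidden layer of tanh units, linear output; plain full-batch gradient
# descent on squared error (a standard variant of back-propagation).
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y                           # gradient of squared error
    dW2 = h.T @ err / len(x);  db2 = err.mean(0)
    dh = err @ W2.T * (1 - h**2)             # back-propagate through tanh
    dW1 = x.T @ dh / len(x);   db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def net(v: float) -> float:
    return (np.tanh(np.array([[v]]) @ W1 + b1) @ W2 + b2).item()

print(net(0.5))   # ≈ 0.5: inside the training space, nearly perfect
print(net(3.0))   # far from 3.0: tanh units saturate outside [-1, 1]
print(net(10.0))  # barely changes: the net cannot freely extend "f(x) = x"
```

A symbolic system that represents the rule over a variable (return the input, whatever it is) extends to 10.0 as easily as to 0.5; the network, by contrast, is confined to the region its training data spans.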