We are told AI is on an inevitable rise and humans simply can’t measure up. In no time, the headlines say, artificial intelligence will take our jobs, fight our wars, manage our health, and, perhaps eventually, call the shots for the flesh-and-blood masses. Big data, it seems, knows best.
Don’t buy it.
The reality is, computers still can’t think like us, though they do seem to have gotten into our heads. Intimidated by the algorithms, humanity could use a little pep talk.
It is true that computers know more facts than we do. They have better memories, make calculations faster, and do not get tired like we do.
Robots far surpass humans at repetitive, monotonous tasks like tightening bolts, planting seeds, searching legal documents, and accepting bank deposits and dispensing cash. Computers can recognize objects, draw pictures, drive cars. You can surely think of a dozen other impressive, even superhuman, computer feats.
It is tempting to think that because computers can do some things extremely well, they must be highly intelligent. In a Harvard Business School
study published in April, experimenters compared how much people’s opinions about things like the popularity of a song were swayed by “advice” attributed either to a human or to a computer. While a subset of expert forecasters found the human more persuasive, most people in the experiment were more persuaded when the advice came from the algorithm.
Computers are great and getting better, but computer algorithms are still designed to have the very narrow capabilities needed to perform well-defined chores, like spell checking and searching the internet. This is a far cry from the general intelligence needed to deal with unfamiliar situations by assessing what is happening, why it is happening, and what the consequences are of taking action.
Computers cannot formulate persuasive theories. Computers cannot do inductive reasoning or make long-run plans. Computers do not have the emotions, feelings, and inspiration that are needed to write a compelling poem, novel, or movie script. Computers do not know, in any meaningful sense, what words mean. Computers do not have the wisdom humans accumulate by living life. Computers do not know the answers to simple questions like these:
If I were to mix orange juice with milk, would it taste good if I added salt?
Is it safe to walk downstairs backwards if I close my eyes?
I don’t know how long it will take to develop computers that have a general intelligence that rivals humans. I suspect that it will take decades. I am certain that people who claim that it has already happened are wrong, and I don’t trust people who give specific dates. In the meantime, please be skeptical of far-fetched science fiction scenarios and please be wary of businesses hyping AI products.
Forget emotions and poems: Take today’s growing fixation with using high-powered computers to mine big data for patterns to help make big decisions. When statistical models analyze a large number of potential explanatory variables, the number of possible relationships becomes astonishingly large; we are talking in the trillions.
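A rough back-of-the-envelope calculation shows how fast this count grows. The figures below are assumed purely for illustration: a thousand candidate variables and a search over every subset of up to five of them.

```python
import math

# Assumed for illustration: 1,000 candidate explanatory variables,
# with a model search trying every combination of up to 5 of them.
n_variables = 1000

# Total number of distinct subsets of size 1 through 5.
n_combinations = sum(math.comb(n_variables, k) for k in range(1, 6))

print(f"subsets of up to 5 variables: {n_combinations:,}")
```

With just a thousand candidates, the subsets of size five alone number more than eight trillion, so even a modest variable list puts trillions of possible relationships on the table.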
If many potential variables are considered, even if all of them are just random noise, some combinations are bound to be highly correlated with whatever it is we are trying to predict through AI: cancer, credit risk, job suitability, potential for criminality. There will occasionally be a true knowledge discovery, but the larger the number of explanatory variables considered, the more likely it is that a discovered relationship will be coincidental, transitory, and useless, or worse.
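A small simulation makes the point concrete. The sample size and variable count below are arbitrary choices for illustration: every predictor is pure random noise, yet the best of them still correlates strongly with the target.

```python
import math
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

n_obs = 20           # a small sample, as in many real studies
n_predictors = 2000  # candidate variables, every one pure noise

target = [random.gauss(0, 1) for _ in range(n_obs)]

# Generate noise predictors and keep the strongest absolute correlation.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(n_obs)], target))
    for _ in range(n_predictors)
)

print(f"strongest |correlation| among pure-noise predictors: {best:.2f}")
```

The strongest correlation found is large, yet it means nothing: it holds in this sample and would vanish in the next one. That is exactly the coincidental, transitory pattern described above.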
[...]
The situation is exacerbated if the discovered patterns are concealed inside black boxes, where even the researchers and engineers who design the algorithms do not understand the details inside. Often, no one knows fully why a computer concluded that this stock should be purchased, this job applicant should be rejected, this patient should be given this medication, this prisoner should be denied parole, this building should be bombed.
[...]
In the age of AI and big data, the real danger is not that computers are smarter than us, but that we think computers are smarter than us and therefore trust computers to make important decisions for us. We should not be intimidated into thinking that computers are infallible. Let’s trust ourselves to judge whether statistical patterns make sense and are therefore potentially useful, or are merely coincidental and therefore fleeting and useless.
Human reasoning is fundamentally different from artificial intelligence, which is why human reasoning is needed more than ever.