
19 March 2013

Interaction between algorithms and human beings



[...] Although algorithms are growing ever more powerful, fast and precise, the computers themselves are literal-minded, and context and nuance often elude them. Capable as these machines are, they are not always up to deciphering the ambiguity of human language and the mystery of reasoning. Yet these days they are being asked to be more humanlike in what they figure out.
“For all their brilliance, computers can be thick as a brick,” said Tom M. Mitchell, a computer scientist at Carnegie Mellon University.
And so, while programming experts still write the step-by-step instructions of computer code, additional people are needed to make more subtle contributions as the work the computers do has become more involved. People evaluate, edit or correct an algorithm’s work. Or they assemble online databases of knowledge and check and verify them — creating, essentially, a crib sheet the computer can call on for a quick answer. Humans can interpret and tweak information in ways that are understandable to both computers and other humans.
Question-answering technologies like Apple’s Siri and I.B.M.’s Watson rely particularly on the emerging machine-man collaboration. Algorithms alone are not enough.
Twitter uses a far-flung army of contract workers, whom it calls judges, to interpret the meaning and context of search terms that suddenly spike in frequency on the service.
For example, when Mitt Romney talked of cutting government money for public broadcasting in a presidential debate last fall and mentioned Big Bird, messages with that phrase surged. Human judges recognized instantly that “Big Bird,” in that context and at that moment, was mainly a political comment, not a reference to “Sesame Street,” and that politics-related messages should pop up when someone searched for “Big Bird.” People can understand such references more accurately and quickly than software can, and their judgments are fed immediately into Twitter’s search algorithm.
“Humans are core to this system,” two Twitter engineers wrote in a blog post in January.
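To make the idea concrete, here is a minimal, entirely hypothetical Python sketch of this kind of human-in-the-loop search: a judge records what a suddenly popular query "means" right now, and that judgment is used to reorder keyword matches. The class names, fields, and sample tweets are invented for illustration; this is not Twitter's actual pipeline.

```python
# Hypothetical sketch of human-in-the-loop query interpretation, loosely
# inspired by the "judges" workflow described above. All names and data
# are illustrative assumptions, not Twitter's real system.
from dataclasses import dataclass, field


@dataclass
class Tweet:
    text: str
    topic: str  # e.g. "politics", "tv"


@dataclass
class SearchIndex:
    tweets: list = field(default_factory=list)
    # Human judges attach a topical interpretation to spiking query terms.
    human_labels: dict = field(default_factory=dict)

    def record_judgment(self, query: str, topic: str) -> None:
        """A human judge states what a spiking query means in this moment."""
        self.human_labels[query.lower()] = topic

    def search(self, query: str) -> list:
        """Rank simple keyword matches, boosting the human-judged topic."""
        preferred = self.human_labels.get(query.lower())
        matches = [t for t in self.tweets if query.lower() in t.text.lower()]
        # Results in the judged topic come first; otherwise keep original order.
        return sorted(matches, key=lambda t: t.topic != preferred)


index = SearchIndex(tweets=[
    Tweet("Big Bird teaches the alphabet on Sesame Street", "tv"),
    Tweet("Romney says he would cut funding despite liking Big Bird", "politics"),
])
index.record_judgment("big bird", "politics")  # judgment made during the debate
print([t.text for t in index.search("Big Bird")])  # politics result ranks first
```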
Even at Google, where algorithms and engineers reign supreme in the company’s business and culture, the human contribution to search results is increasing. Google uses human helpers in two ways. Several months ago, it began presenting summaries of information on the right side of a search page when a user typed in the name of a well-known person or place, like “Barack Obama” or “New York City.” These summaries draw from databases of knowledge like Wikipedia, the C.I.A. World Factbook and Freebase, whose parent company, Metaweb, Google acquired in 2010. These databases are edited by humans.
When Google’s algorithm detects a search term for which this distilled information is available, the search engine is trained to go fetch it rather than merely present links to Web pages.
“There has been a shift in our thinking,” said Scott Huffman, an engineering director in charge of search quality at Google. “A part of our resources are now more human curated.”
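As an illustration of this "fetch the distilled answer if a curated entry exists, otherwise just return links" behavior, here is a toy Python sketch. The knowledge base, its fields, and the answer function are assumptions made for the example, not Google's implementation.

```python
# Toy sketch of serving a human-curated summary when one is available.
# The entries below are illustrative stand-ins for curated sources such as
# Wikipedia or Freebase; the structure is an assumption for this example.
KNOWLEDGE_BASE = {
    "barack obama": {"type": "Person", "born": "1961", "source": "curated entry"},
    "new york city": {"type": "Place", "population": "about 8.3 million",
                      "source": "curated entry"},
}


def answer(query: str, web_links: list[str]) -> dict:
    entity = KNOWLEDGE_BASE.get(query.strip().lower())
    if entity is not None:
        # A human-edited summary exists: surface it alongside the links.
        return {"summary": entity, "links": web_links}
    # No curated entry: fall back to plain link results.
    return {"summary": None, "links": web_links}


print(answer("Barack Obama", ["https://en.wikipedia.org/wiki/Barack_Obama"]))
print(answer("obscure query", ["https://example.com/result"]))
```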
Other human helpers, known as evaluators or raters, help Google develop tweaks to its search algorithm, a powerhouse of automation, fielding 100 billion queries a month. “Our engineers evolve the algorithm, and humans help us see if a suggested change is really an improvement,” Mr. Huffman said.
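The rater workflow can also be sketched in a few lines: humans mark, query by query, whether the proposed ranking beats the current one, and a simple rule aggregates their verdicts. The ratings and the launch criterion below are invented purely for illustration.

```python
# Hedged sketch of aggregating human raters' side-by-side judgments of a
# proposed search change versus the baseline. Data and threshold are made up.
from collections import Counter

# For each sampled query, a rater marks which side's results looked better.
rater_preferences = ["new", "new", "old", "new", "tie", "new", "old", "new"]

counts = Counter(rater_preferences)
wins, losses = counts["new"], counts["old"]

# Toy decision rule: the proposed change must clearly beat the baseline.
if wins > 2 * losses:
    print(f"Suggested change looks like an improvement ({wins} wins vs {losses}).")
else:
    print(f"Not convincing enough ({wins} wins vs {losses}); keep iterating.")
```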
[...]
Source: here
