
18 March 2020

Artificial Intelligence is an ideology

[...]

AI might be achieving unprecedented results in diverse fields, including medicine, robotic control, and language/image processing, or a certain way of talking about software might be in play as a way to avoid fully celebrating the people, working together through improved information systems, who are achieving those results. “AI” might be a threat to the human future, as is often imagined in science fiction, or it might be a way of thinking about technology that makes it harder to design technology so it can be used effectively and responsibly. The very idea of AI might create a diversion that makes it easier for a small group of technologists and investors to claim all rewards from a widely distributed effort. Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional.


You can reject the AI way of thinking for a variety of reasons. One is that you view people as having a special place in the world, as the ultimate source of value on which AIs depend. (Call this the humanist objection.) Another is that no intelligence, human or machine, is ever truly autonomous: everything we accomplish depends on the social context established by other human beings, who give meaning to what we wish to accomplish. (Call this the pluralist objection.) Regardless of how one sees it, an understanding of AI focused on independence from humans, rather than interdependence with them, misses most of the potential of software technology.

Supporting the philosophy of AI has burdened our economy. Less than 10 percent of the US workforce is officially employed in the technology sector, compared with the 30 to 40 percent employed in the then-leading industrial sectors of the 1960s. At least part of the reason is that when people provide data, behavioral examples, and even active problem solving online, it is not considered “work” but is instead treated as part of an off-the-books barter for certain free internet services. Conversely, when companies find creative new ways to use networking technologies to let people provide services previously done poorly by machines, this gets little attention from investors, who believe “AI is the future” and so encourage further automation. This has contributed to the hollowing out of the economy.

Bridging even a part of this gap, and thus reducing the underemployment of workforces in the rich world, could expand the productive output of Western technology far more than China’s greater receptiveness to surveillance does. In fact, as recent reporting has shown, China’s greatest advantage in AI is less surveillance than a vast shadow workforce actively labeling the data fed into algorithms. As was the case with the relative failures of past hidden labor forces, these workers would become more productive if they could learn to understand and improve the information systems they feed, and were recognized for this work, rather than being erased to maintain the “ignore the man behind the curtain” mirage on which AI rests. Worker understanding of production processes, enabling deeper contributions to productivity, was at the heart of the Japanese kaizen (Toyota Production System) miracle of the 1970s and 1980s.

To those who fear that bringing data collection into the daylight of acknowledged commerce will encourage a culture of ubiquitous surveillance, we must point out that it is the only alternative to such a culture. It is only when workers are paid that they become citizens in full. Workers who earn money also spend money where they choose; they gain deeper power and voice in society. They can, for instance, gain the power to choose to work less. This is how working conditions have improved historically.

It is not surprising that quantitative technical and economic arguments converge on the centrality of human value. Estimates suggest that the total computational capacity of a single human mind is greater than that of all today’s computers in the world put together. With the pace of processor improvements slowing as Moore’s law ends, the prospects of this changing dramatically anytime soon are dim.

Nor is such a human-centric approach to technology simply a theoretical possibility. Tens of millions of people every day use video conferencing to deliver personal services, such as language and skill instruction, online. Online virtual collaboration spaces like GitHub are central to value creation in our era. Virtual and augmented reality hold out the prospect of dramatically increasing what is possible, allowing more types of collaborative work to be performed at great distances. Productivity software, from Slack to Wikipedia to LinkedIn to Microsoft’s product suites, makes previously unimaginable real-time collaboration omnipresent.

Indeed, recent research has shown that without the human-created Wikipedia, the value of search engines would plummet (since that is where the top results of substantive searches are often found), even though search services are touted as frontline examples of the value of AI. (And yet Wikipedia is a threadbare nonprofit, while search engines are some of the most highly valued assets in our civilization.) Collaboration technologies are helping us work from home through the Covid-19 pandemic; they have become a matter of survival, and the future promises ways in which long-distance collaboration may become ever more vivid and satisfying.

To be clear, we are great enthusiasts for the methods most often discussed as illustrations of the potential of AI: deep/convolutional networks and so on. These techniques, however, rely heavily on human data. For example, OpenAI’s much celebrated text-generation algorithm was trained on millions of websites produced by humans. And evidence from the field of machine teaching increasingly suggests that when the humans generating the data are actively engaged in providing high-quality, carefully chosen input, they can train systems at far lower cost. But active engagement is possible only if, contrary to the usual AI attitude, all contributors, not just elite engineers, are considered crucial role players and are financially compensated.
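As a rough illustration of that machine-teaching claim (ours, not the authors’), the sketch below compares a classifier trained on many noisy labels with one trained on a small set of clean ones. The dataset is synthetic and scikit-learn is assumed purely for convenience; “carefully chosen” is approximated here by label quality alone, whereas real machine teaching would also select informative examples.

```python
# A minimal sketch (ours, not the authors') of the machine-teaching claim:
# a small set of carefully provided, clean human labels can rival a much
# larger noisy set. Data is synthetic; scikit-learn is an assumed convenience.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Passive" regime: plentiful labels, but 20% flipped (disengaged labeling).
y_noisy = y_train.copy()
flip = rng.random(len(y_noisy)) < 0.20
y_noisy[flip] = 1 - y_noisy[flip]
passive = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)

# "Engaged" regime: far fewer labels, all correct. (We approximate "carefully
# chosen" by label quality alone; real machine teaching would also pick
# informative examples.)
idx = rng.choice(len(X_train), size=300, replace=False)
engaged = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])

print("3750 noisy labels:", passive.score(X_test, y_test))
print(" 300 clean labels:", engaged.score(X_test, y_test))
```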


A powerful gut response from some AI enthusiasts, after reading this far, might be that we must be wrong, because AI is starting to train itself, without people. But AI without human data is possible only for a narrow class of problems: those that can be defined precisely, rather than statistically or on the basis of ongoing measurements of reality. Board games like chess and certain scientific and math problems are the usual examples, though even in these cases human teams using so-called AI resources usually outperform AI by itself. While self-trainable examples can be important, they are rare and not representative of real-world problems.
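To make the “defined precisely, not statistically” distinction concrete, here is a minimal sketch (our construction, not from the article) in which the ground truth is a rule a program can evaluate, so labeled training data can be self-generated without any human input; bit-string parity stands in for the board-game and math examples, and scikit-learn is again an assumption of convenience.

```python
# A minimal sketch (our construction, not the article's) of a "precisely
# defined" problem: the ground truth is a rule a program can evaluate, so
# labeled data can be self-generated without any human input.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def oracle(bits):
    # The task *is* this rule: label = 1 when the number of 1-bits is odd.
    # Because the rule is exact, labels cost nothing to produce.
    return bits.sum(axis=1) % 2

X = rng.integers(0, 2, size=(20000, 8))    # machine-generated inputs
y = oracle(X)                              # machine-generated labels
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

X_new = rng.integers(0, 2, size=(2000, 8))
print("accuracy on fresh self-generated data:", clf.score(X_new, oracle(X_new)))
# There is no analogous oracle() for, say, "is this sentence polite?" --
# that label exists only because humans supply it.
```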

“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity. Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.


[...]

Source:
“AI is an Ideology, Not a Technology.” Glen Weyl is Founder and Chair of the RadicalxChange Foundation and Microsoft’s Office of the Chief Technology Officer Political Economist and Social Technologist (OCTOPEST). Jaron Lanier is the author of Ten Arguments for Deleting Your Social Media Accounts Right Now and Dawn of the New Everything. He and Glen are researchers at Microsoft but do not speak for the company.

