A cartridge of the classic video game Super Mario Bros., released in 1985, was sold for US$ 100,000. The record price stems from:
a) the fact that Super Mario Bros. is a classic game
b) the cartridge sold never having had its packaging opened
c) its excellent state of preservation
d) the existence of 11 packaging variations, of which the first two were available only in the test market; the copy sold has one of these two packagings
e) the packaging carrying a NES sticker, which is rare
f) this type of packaging usually wearing out over time, which did not happen to this copy, more than 30 years later
g) the existence of a group of video game collectors, many of them wealthy
h) Super Mario Bros. having "saved" the games industry
A good case for discussing historical cost versus fair value, isn't it?
Source: here
February 15, 2019
February 14, 2019
Important limitations of Machine Learning
We live in an era of information abundance, and at every moment the press talks about big data, machine learning, and artificial intelligence. There is a lot of hype around the topic, since researchers and enthusiasts want to make money from their knowledge of the subject, selling courses, giving talks, and so on. There are important applications of machine learning, but there is also exaggeration about the applicability of these tools. Many enthusiasts believe that a large database and a computational algorithm are enough to solve problems in many different areas. For that to happen, however, two hypotheses must hold:
1) Given a large enough sample from some distribution, we can approximate it efficiently with a statistical model, which is known as the universal approximation theorem.
2) A statistical sample of a phenomenon is enough to automate, reason about, and predict the phenomenon.
However, most of the time these two hypotheses do not hold in reality:
Problem 1: Universal approximation is not universal
There is a theoretical result known as the universal approximation theorem. In summary, it states that any function can be approximated to arbitrary precision by (at least) a three-level composition of real functions, such as a multilayer perceptron with sigmoidal activation. This is a mathematical statement, but a rather existential one. It does not say whether such an approximation would be practical or achievable with, say, a gradient descent approach. It merely states that such an approximation exists. As with many such existential arguments, their applicability to the real world is limited. In the real world, we work with strictly finite (in fact even "small") systems that we simulate on the computer. Even though the models we exercise are "small", the parameter spaces are large enough that we cannot possibly search any significant portion of them. Instead we use gradient descent methods to actually find a solution, but by no means are there guarantees that we can find the best solution (or even a good one at all). [...]
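To make this concrete, here is a minimal sketch in Python (entirely our own illustration, not from the quoted source; the network size, learning rate, and the target function sin(x) are arbitrary choices). The theorem guarantees that a good one-hidden-layer approximation exists; it says nothing about gradient descent from a random starting point actually finding it:

```python
# A tiny one-hidden-layer network trained by plain gradient descent to fit sin(x).
# The universal approximation theorem says a good fit EXISTS; different random
# initializations often end up with noticeably different final losses.
import numpy as np

def train_mlp(seed, hidden=8, steps=5000, lr=0.05):
    rng = np.random.default_rng(seed)
    X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(X)
    # one hidden layer with tanh activation
    W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    n = len(X)
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)      # forward pass
        pred = H @ W2 + b2
        err = pred - y                 # gradient of 0.5*MSE w.r.t. pred
        # backpropagation
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

for seed in range(5):
    print(f"seed {seed}: final MSE = {train_mlp(seed):.5f}")
# The approximation exists, but whether gradient descent finds it
# depends on where the search happens to start.
```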
Problem 2: The central limit theorem has limits
The second issue is that any time a dataset is created, it is assumed that the data contained in the set is a complete description of the phenomenon, and that there is a (high-dimensional) distribution localised on some low-dimensional manifold which captures the essence of the phenomenon. This strong belief is in part caused by the belief in the central limit theorem: a mathematical result that tells us something about averaging of random variables. [...] But there is a fine print: the theorem assumes that the original random variables had finite variance and expected values, and were independent and identically distributed. Not all distributions have those properties! In particular, critical phenomena exhibit fat-tail distributions which often have unbounded variance and undefined expected values (not to mention that many real-world samples are actually not independent or identically distributed).
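A minimal sketch of that fine print (again our own illustration, not from the source): the sample mean of a Gaussian stabilises as the sample grows, while the sample mean of a Cauchy distribution, whose expected value is undefined and whose variance is infinite, never settles down:

```python
# Sample means: Gaussian (CLT applies) vs. Cauchy (CLT assumptions violated).
import numpy as np

rng = np.random.default_rng(42)
for n in [100, 10_000, 1_000_000]:
    gauss_mean = rng.normal(0, 1, n).mean()
    cauchy_mean = rng.standard_cauchy(n).mean()
    print(f"n={n:>9}: Gaussian mean {gauss_mean:+.4f} | Cauchy mean {cauchy_mean:+.4f}")
# The Cauchy "mean" keeps jumping no matter how large the sample:
# averaging does not tame a distribution whose expected value is undefined.
```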
Problem 3: The world is always changing (in technical terms: the world is non-stationary)
Aside from the fact that many real-world phenomena may exhibit rather hairy fat-tail distributions, there is another quirk which is often not captured in datasets: phenomena are not always stationary. In other words, the world keeps evolving, changing the context and consequently altering fundamental statistical properties of many real-world phenomena. Therefore a stationary distribution (a snapshot) at a given time may not work indefinitely. [...]
These things happen all the time, potentially changing the daily/weekly/monthly patterns of behaviour. The pace at which things are changing may be too quick for statistical models to follow, even if they retrain online. Granted, there are certainly aspects of behaviour, and some humans, which are very stable: some people work in the same place for many years and have very stable patterns of behaviour. But this cannot be assumed in general.
Source: here
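To see what non-stationarity does to a fitted model, consider a toy sketch (entirely our own, with an arbitrary linear "world" that drifts): a model fitted on yesterday's snapshot can be arbitrarily wrong today, even though it was never wrong about the data it saw:

```python
# A model fitted on a snapshot of a non-stationary process degrades once
# the underlying relationship drifts.
import numpy as np

rng = np.random.default_rng(0)

# "Yesterday's world": y depends on x with slope +2
x_old = rng.uniform(0, 10, 500)
y_old = 2.0 * x_old + rng.normal(0, 1, 500)
slope, intercept = np.polyfit(x_old, y_old, 1)   # the trained "model"

# "Today's world": the relationship has drifted to slope -1
x_new = rng.uniform(0, 10, 500)
y_new = -1.0 * x_new + rng.normal(0, 1, 500)

mse_old = np.mean((slope * x_old + intercept - y_old) ** 2)
mse_new = np.mean((slope * x_new + intercept - y_new) ** 2)
print(f"MSE on the data the model was fit on: {mse_old:.2f}")
print(f"MSE after the world changed:          {mse_new:.2f}")
# The snapshot model is not wrong about the past; it is wrong about the present.
```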
February 13, 2019
Artificial Intelligence is a big lie
A.I. is a big fat lie. Artificial intelligence is a fraudulent hoax — or in the best cases it's a hyped-up buzzword that confuses and deceives.
The much better, precise term would instead usually be machine learning – which is genuinely powerful and everyone oughta be excited about it.
1) Unlike AI, machine learning's totally legit. I gotta say, it wins the Awesomest Technology Ever award, forging advancements that make ya go, "Hooha!". However, these advancements are almost entirely limited to supervised machine learning, which can only tackle problems for which there exist many labeled or historical examples in the data from which the computer can learn. This inherently limits machine learning to only a very particular subset of what humans can do – plus also a limited range of things humans can't do.
2) AI is BS. And for the record, this naysayer taught the Columbia University graduate-level "Artificial Intelligence" course, as well as other related courses there.
AI is nothing but a brand. A powerful brand, but an empty promise. The concept of "intelligence" is entirely subjective and intrinsically human. Those who espouse the limitless wonders of AI and warn of its dangers – including the likes of Bill Gates and Elon Musk – all make the same false presumption: that intelligence is a one-dimensional spectrum and that technological advancements propel us along that spectrum, down a path that leads toward human-level capabilities. Nuh uh. The advancements only happen with labeled data. We are advancing quickly, but in a different direction and only across a very particular, restricted microcosm of capabilities.
The term artificial intelligence has no place in science or engineering. "AI" is valid only for philosophy and science fiction – and, by the way, I totally love the exploration of AI in those areas.
3) AI isn't gonna kill you. The forthcoming robot apocalypse is a ghost story. The idea that machines will uprise on their own volition and eradicate humanity holds no merit.
Eric Siegel, Ph.D. is the founder of the Predictive Analytics World conference series—which includes events for business, government, healthcare, workforce, manufacturing, and financial services—the author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die—Revised and Updated Edition (Wiley, January 2016), executive editor of The Predictive Analytics Times, and a former computer science professor at Columbia University. For more information about predictive analytics, see the Predictive Analytics Guide.
Source: here
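Siegel's first point is easy to illustrate: supervised machine learning works precisely because every training example carries a label. A minimal sketch (our own toy example using scikit-learn; the features and numbers are made up):

```python
# Supervised learning needs labeled historical examples; remove the labels
# in y and this entire approach has nothing to learn from.
from sklearn.linear_model import LogisticRegression

# Labeled examples: (hours_studied, classes_attended) -> passed the course?
X = [[2, 4], [9, 30], [1, 2], [8, 25], [3, 10], [10, 28]]
y = [0, 1, 0, 1, 0, 1]   # the labels are what make this "supervised"

model = LogisticRegression().fit(X, y)
print(model.predict([[7, 22]]))   # a prediction for a new, unlabeled case
```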
Laughter is the best medicine
The astrologer Omar Cardoso was one of the most famous in Brazil. His advice and horoscopes were taken seriously by many. In 1966, in a column in the magazine O Cruzeiro (April 16, 1966, issue 28, p. 105), answering questions sent in by letter, he gave one reply that really stands out. A reader signing as "Desconfiada de São Francisco de Paulo - RS" sent a letter apparently asking whether her husband had "another woman" in his heart, and also about a possible course of study. In part of his reply, the astrologer advises (emphasis by the blog):
You can study: nursing, accounting, finance, music, painting, drawing, sewing, pharmacy, medicine, the manufacture or sale of cosmetics, physiotherapy, etc.