
22 November 2019

The effect of bank competition on the cost of credit and economic activity

Abstract:

We estimate the effect of bank competition on financial and real variables. To do so, we use regional heterogeneous exposure to mergers and acquisitions (M&A) of large banks as an instrument for changes in competition of local banking markets in Brazil. We use detailed administrative data on loans and firms and a difference-in-differences empirical strategy that compares changes in outcomes for markets affected or not by the M&A episodes. We find that, following M&A episodes, spreads increase and lending decreases persistently in exposed markets relative to controls. We show that this is larger for more concentrated markets at the time of the M&A episode, and is unlikely to be driven by other factors, such as branch closures. We find that non-agricultural employment falls .2% for an increase of 1% in spreads. We develop a model of heterogeneous firms and concentration in the banking sector that is consistent with the micro evidence. In our model, the semi-elasticity of credit to lending rates is a sufficient statistic for the effect of concentration on output, which we estimate to be -3.17. Among other counterfactuals, we show that if Brazilian spreads fell to world levels, output would increase by approximately 5%.

Source: Bank Competition, Cost of Credit and Economic Activity: evidence from Brazil, Gustavo Joaquim and Bernardus Van Doornik - BCB Working Paper
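A back-of-envelope sketch of the abstract's reported employment estimate. This is my own illustration, not the paper's replication code: the function name is hypothetical, and the linear extrapolation is an assumption that only makes sense for small changes in spreads.

```python
# Illustrative back-of-envelope using the estimate reported in the abstract:
# non-agricultural employment falls 0.2% for a 1 p.p. increase in spreads.
# Assumption (mine): the effect scales linearly for small changes.

EMPLOYMENT_SEMI_ELASTICITY = -0.2  # % employment change per 1 p.p. spread increase

def employment_change(spread_change_pp: float) -> float:
    """Approximate % change in non-agricultural employment
    for a given change in spreads (percentage points)."""
    return EMPLOYMENT_SEMI_ELASTICITY * spread_change_pp

# Example: a 2 p.p. rise in spreads implies roughly a 0.4% employment fall.
print(employment_change(2.0))  # -0.4
```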


The Vatican's murky finances

Pope Francis is trying to solve a worldly problem: making the Vatican a less financially sinful place. But the picture painted by The Economist shows that not even European authorities believe in the Vatican's sanctity: the place appears to favor money laundering:


AFTER A WEEK of resignations and exclusions, the Vatican faces the very real risk of being reduced once more to the status of an international financial pariah. In the coming days its officials are due to answer a detailed questionnaire for Moneyval, Europe’s anti-money-laundering and anti-terrorist-financing watchdog. The picture they will have to paint could scarcely be less reassuring.

The Financial Information Authority (AIF)—the Vatican’s regulatory body and the cornerstone of a nine-year campaign to dispel the Holy See’s image as a refuge for hot money and shady dealings—is no longer eligible to receive intelligence on suspected financial crime from its counterparts in other states. The AIF’s president, René Brülhart, has left (the Vatican announced on November 19th that his contract would not be renewed). Half his board has since resigned. And the authority’s director is suspended from duty.

Information is Beautiful

I like to follow the Information is Beautiful awards. In past editions, financial statements have even won the prize.

In the edition just announced, the newspaper Estadão won the Non-English Language category for a chart on adoption in Brazil. See the chart here.

The 50 best nonfiction books of the last 25 years

Slate selected a set of 50 books representing the best nonfiction produced over the last 25 years.

Any such list is controversial. In this one, I recognized two books. The first, Fun Home, by Alison Bechdel (the same author who created a measure of gender discrimination in films), is a graphic memoir about her life, focusing in particular on her relationship with her father. There is a Portuguese-language edition. It is a sad book, but worth reading.

The second is The Information, by James Gleick, who had previously written about chaos theory. It also has a Portuguese translation. I confess I have started this book twice and could not get past the first pages; perhaps that is my own failing. Its sheer page count alone is impressive.

Laughter is the best medicine

When you decide to cut costs

21 November 2019

Artificial Intelligence: don't fall for the hype


Expectations that computers are on the verge of matching or surpassing humans' abilities may be rampant, but they aren't new.


So began Melanie Mitchell's engaging and informative lecture, hosted by The Santa Fe Institute and delivered to a packed house at The Lensic Performing Arts Center on Tuesday.


Mitchell, a professor of computer science at Portland State University and external professor at SFI, introduced the audience to AI's history by noting that as far back as 1958 (and this was not even the beginning, she added), researcher and engineer Frank Rosenblatt debuted a device called The Perceptron, developed by the Office of Naval Research. The Perceptron had been taught to recognize hand-written letters and Rosenblatt's unveiling of it prompted The New York Times to declare that the Navy had "revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."


Many other predictions followed.


In 1961, Claude Shannon, pioneer of information theory, proclaimed, "I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratory which is not too far from the robot of science fiction fame."


Nobel Laureate Herbert Simon followed, in 1965, saying: "Machines will be capable, within 20 years, of doing any work that a man can do."


And MIT AI lab founder Marvin Minsky, in 1967, predicted that within a generation, the problem of creating artificial intelligence would be "substantially solved."


None of these predictions have actually come to pass yet. "We are still at a stage where machines are very much less intelligent than we are," Mitchell said, "but people still are making big predictions about them."


For example, Coursera co-founder, computer scientist Andrew Ng, an expert in machine learning, just two years ago compared AI to electricity circa 100 years ago, saying "I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years." Elon Musk, more forebodingly, has described AI as "our biggest existential threat."


In fact, Mitchell's new book, Artificial Intelligence: A Guide for Thinking Humans, begins with a story of visiting Google for an AI meeting in 2014, along with her mentor, AI legend Douglas Hofstadter, author of the seminal computer science book and Pulitzer prize winner Gödel, Escher, Bach: An Eternal Golden Braid. At that meeting, Hofstadter expresses to the Google engineers his own fears about AI encroaching on those qualities he assumed to be distinctly human and un-replicable. Mitchell writes: "He fears that AI might show us that the human qualities we most value are disappointingly simple to mechanize."


The field, Mitchell soon discovered, was divided on this point. "What I found is that the field of AI is in turmoil," she writes, with one side confident of machines' limitations, and the other predicting it's just a matter of time before they wipe out humanity. She embarked on the book to "understand the true state of affairs" in AI, the basic tenets of which she presented in Santa Fe.


Her lecture first discussed what artificial intelligence actually is, given that the term, she said, is surprisingly difficult to define. Examples are easier to find: chess-playing machines, virtual assistants, GPS, self-driving cars. Mitchell said her favorite definition is "an anarchy of methods," as this helps characterize the way in which AI is an umbrella term for all types of ways machines perform actions people might describe as intelligent.


From there, Mitchell described how machine learning has become a more significant part of the field since the 1980s. Prior to then, so-called intelligent machines learned by having people manually program them with rules. In machine learning, which became more prevalent in the field between the 1990s and 2000s, machines actually learn by being given data rather than through human programming. By 10 years ago, machine learning had become prevalent in AI, and "deep learning," a type of machine learning design inspired by the way the human brain works, had taken over machine learning. Today, she noted "all of the things you use" that fall under AI—speech recognition, Google search, facial recognition—"are powered by deep learning."


All are far from flawless. Machines can be hacked, and she provided numerous examples of facial recognition software stymied by special glasses and self-driving cars led astray by so-called "adversarial machine learning," to name a few. Moreover, machines can't extrapolate larger concepts and meaning in the same way that humans can.


"I think what's going wrong is these systems are not capturing the meaning that is in the data they're processing," she said. "They don't understand in the way that we humans understand. It's hard to say exactly what it means to understand; it's one of the problems we have with this kind of language." Words such as intelligence, understanding and comprehension, she said "don't have very good definitions" in terms of how brains work. "I can't say whether a machine is understanding or not, but I know it's different than the way I understand. And if machines are going to work with us in our world, we need to make sure they understand the way we do."

Source: here
