Human Decisions and Machine Predictions
Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan
NBER Working Paper No. 23180
February 2017

Abstract:
We examine how machine learning can be used to improve and understand human decision making. In particular, we focus on a decision that has important policy consequences. Millions of
times each year, judges must decide where defendants will await trial—at home or in jail. By law,
this decision hinges on the judge’s prediction of what the defendant would do if released. This is
a promising machine learning application because it is a concrete prediction task for which there
is a large volume of data available. Yet comparing the algorithm to the judge proves
complicated. First, the data are themselves generated by prior judge decisions. We only observe
crime outcomes for released defendants, not for those whom judges detained. This makes it hard to
evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have
a broader set of preferences than the single variable that the algorithm focuses on; for instance,
judges may care about racial inequities or about specific crimes (such as violent crimes) rather
than just overall crime risk. We deal with these problems using different econometric strategies,
such as quasi-random assignment of cases to judges. Even accounting for these concerns, our
results suggest potentially large welfare gains: a policy simulation shows crime can be reduced by
up to 24.8% with no change in jailing rates, or jail populations can be reduced by 42.0% with no
increase in crime rates. Moreover, we see reductions in all categories of crime, including violent
ones. Importantly, such gains can be had while also significantly reducing the percentage of
African-Americans and Hispanics in jail. We find similar results in a national dataset as well. In
addition, by focusing the algorithm on predicting judges’ decisions, rather than defendant
behavior, we gain some insight into decision-making: a key problem appears to be that judges
respond to ‘noise’ as if it were signal. These results suggest that while machine learning can be
valuable, realizing this value requires integrating these tools into an economic framework: being
clear about the link between predictions and decisions; specifying the scope of payoff functions;
and constructing unbiased decision counterfactuals.
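
To make the policy-simulation logic described above concrete, here is a minimal illustrative sketch, not the authors' code: it trains a risk predictor only on defendants the judges released (the selective-labels problem noted above) and then jails the highest-predicted-risk defendants at the judges' observed jailing rate. The synthetic data, the GradientBoostingClassifier choice, and all variable names are assumptions made purely for illustration; the paper's actual evaluation additionally exploits quasi-random assignment of cases to judges of differing leniency, which this sketch does not implement.

```python
# Illustrative sketch (not the paper's code) of a rerank-and-release
# policy simulation under the selective-labels constraint.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# --- Synthetic stand-in for the case data (assumed, for illustration) -----
n = 10_000
X = rng.normal(size=(n, 5))                      # defendant features
true_risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
# Judges release when their noisy perception of risk is low enough.
released_by_judge = (true_risk + 0.3 * rng.normal(size=n)) < 0.6
crime = rng.random(n) < true_risk                # outcome, observed only if released

# Selective labels: the predictor can be trained only on released defendants.
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X[released_by_judge], crime[released_by_judge])

# --- Policy simulation: jail the same share, chosen by predicted risk -----
pred_risk = clf.predict_proba(X)[:, 1]
jail_rate = 1 - released_by_judge.mean()
threshold = np.quantile(pred_risk, 1 - jail_rate)
release_by_rule = pred_risk < threshold

# Naive comparison restricted to defendants released by BOTH judge and rule,
# since crime is unobserved for judge-detained defendants; the paper instead
# uses quasi-random judge assignment to evaluate the full counterfactual.
both_released = released_by_judge & release_by_rule
print(f"crime rate among judge-released:           {crime[released_by_judge].mean():.3f}")
print(f"crime rate among released by both regimes: {crime[both_released].mean():.3f}")
```

In this toy setup the rule releases the same fraction of defendants as the judges while selecting a lower-risk pool; the comparison is deliberately conservative because outcomes for judge-detained defendants are never observed.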