On the one hand, more and more decisions are being made based on what machines can learn about us: who gets a loan, who gets into college, who gets insurance, etc. On the other hand, people have many reservations about the fairness of algorithms, about algorithmic perpetuation of biases built into historical data, about the mis- or overinterpretation of statistical correlations, and more. This grant funds work by economists Jens Ludwig from the University of Chicago and Sendhil Mullainathan from Harvard to study when, why, and how people should override recommendations based on artificial intelligence. The team will focus on how New York City judges decide whether to release or hold suspects before trial. Machine-generated recommendations—ones that use facts about a suspect to predict whether that suspect will commit a crime if released back into the community—are already in use. But judges are also privy to information about a suspect that a typical algorithm is not, including the suspect’s courtroom dress, demeanor, accompanying associates, etc. Ludwig and Mullainathan will study whether and how these additional factors affect both judicial predictions of suspect behavior and AI predictions of judicial behavior.