Grants

Harvard University

To study algorithmic fairness by developing a theory of principled scoring functions based on notions of pseudorandomness and multicalibration

  • Amount $995,133
  • City Cambridge, MA
  • Investigator Cynthia Dwork
  • Year 2020
  • Program Technology
  • Sub-program Exploratory Grantmaking in Technology

The Internet Age is quickly giving way to the Age of the Algorithm. Decision-makers of all kinds are increasingly turning to complex algorithmic methods to help them allocate resources, set policies, and assign risk. Banks use algorithms to estimate how likely someone is to default on a loan. Online retailers use algorithms to decide which ads to display on your phone. Pollsters use algorithms to determine who is and who is not likely to vote. Increasing reliance on algorithmic verdicts comes with risks of its own, however. The worry is not so much that the algorithms might get things wrong (human judgment, after all, is hardly error free) but that they might get things systematically wrong, disfavoring one group of people over another for arbitrary or irrelevant reasons. The worry, in other words, is that we might build algorithms that are unfair. This grant funds efforts by a team led by Harvard computer scientist Cynthia Dwork to address this issue. Dwork plans to construct new theoretical frameworks, based on rigorous mathematical notions called pseudorandomness, latitude, and multicalibration, that can be used to define and evaluate whether an algorithm is fair. Grant funds will allow Dwork to fully develop her theory, build algorithms that meet the characteristics the theory describes, and test them to see whether they indeed perform as the theory predicts. If successful, the effort would constitute a significant stride forward in our understanding of an increasingly essential cog in the machinery of modern life.
