Grants Database

The Foundation awards approximately 200 grants per year (excluding the Sloan Research Fellowships), totaling roughly $80 million in annual commitments in support of research and education in science, technology, engineering, mathematics, and economics. This database contains grants for currently operating programs going back to 2008. For grants from prior years and for now-completed programs, see the annual reports section of this website.

  • grantee: Columbia University
    amount: $300,000
    city: New York, NY
    year: 2020

    To explore the application of formal methods in computer science to the study of trustworthiness of AI systems

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Jeannette Wing

    This grant funds a project by computer scientist Jeannette Wing, Director of the Columbia University Data Science Institute and Professor of Computer Science, to adapt “formal methods” (the representation of computer science systems as mathematical objects) to AI systems. Once the AI system, the input data, and the desired trust property are formally specified, the AI system can be analyzed using mathematics, allowing a skilled analyst to rigorously prove or disprove statements about the system being represented. The technique holds obvious appeal for those concerned about the trustworthiness of AI systems, since a formal methods analysis has the potential to reveal how an AI system would or would not behave in novel situations. Grant funds will support Wing’s attempts to extend formal methods theory to AI systems, including how to formally specify properties of AI systems like fairness, privacy, and robustness. A particular focus of Wing’s work will be to better understand, formally, the relationships among such properties, in order to identify and generalize their commonalities and differences. Wing will also work on using formal methods to characterize, with respect to these trust properties, the relationship between AI systems and the datasets used for training and testing them.
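    To give a flavor of the idea (this is an illustration only, not Wing’s method): once a model and a desired property are written down precisely, the property can be checked mechanically. The toy model, its coefficients, and the monotonicity property below are all invented for illustration; real formal-methods work uses theorem provers or SMT solvers rather than brute-force enumeration over a finite input space.

```python
# Illustrative sketch: checking a formally specified trust property of a
# tiny "model" by exhaustive enumeration. All names and numbers are
# hypothetical; real formal verification would use a solver, not a loop.

def score(income: int, debt: int) -> float:
    """A toy credit-scoring model (invented for illustration)."""
    return 0.7 * income - 0.4 * debt

def is_monotone_in_income(inputs) -> bool:
    """Property: raising income (debt held fixed) never lowers the score."""
    return all(score(i + 1, d) >= score(i, d) for i, d in inputs)

# Enumerate a small discretized input space and check the property holds
# at every point -- a finite stand-in for a mathematical proof.
space = [(i, d) for i in range(10) for d in range(10)]
print(is_monotone_in_income(space))  # True for this toy model
```

    The point of the formal-methods program is to make such checks scale to real AI systems, where the input space is vast and the properties (fairness, privacy, robustness) are far subtler than monotonicity.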

  • grantee: University of Washington
    amount: $412,528
    city: Seattle, WA
    year: 2020

    To better understand and improve the testing and verification of distributed manufacturing

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Nadya Peek

    Open and inexpensive hardware has the potential to revolutionize the creation and deployment of sensors and other scientific instruments, expanding access and lowering barriers to innovation in data-driven research methods. Much of the activity within the open hardware movement has been on expanding the distributed production of hardware, through tools like the open licensing of hardware design and the creation of open 3D-printing templates for instrument parts. There has been comparatively less emphasis, however, on how to measure and ensure quality control in a distributed production process. The widespread availability of inexpensive sensors will only revolutionize science, after all, if the sensors actually work. This project by University of Washington researcher Nadya Peek will improve our understanding of quality control in distributed manufacturing processes. Over the course of the grant, Peek will engage in four streams of activity aimed at filling gaps in current open hardware calibration practices. First, she will develop a generalizable format for documenting the theoretical capabilities of a production machine like a consumer-grade 3D printer. Second, once this format is created, Peek will use it to develop calibration software capable of verifying that a specific instance of that machine is performing to expectations and within acceptable error parameters. Third, Peek will develop new software to monitor such machines in real time, ensuring that they are maintaining precision and calibration through the production process. Fourth, she will develop low-barrier procedures for testing the precision and quality of the final output. In addition, Peek will also field a survey questioning how researchers in the open hardware community are adapting their distributed production processes in response to the shutdowns caused by the COVID-19 pandemic.
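    The second stream of activity, verifying that a specific machine performs within acceptable error parameters, can be sketched in miniature. The dimensions, tolerance, and function names below are invented; this is not Peek’s software, just the kernel of the idea: compare measured output against nominal specifications.

```python
# Hypothetical sketch of a calibration check: does a machine's output
# stay within tolerance of its nominal specification? All values and
# names are invented for illustration.

def within_tolerance(nominal_mm, measured_mm, tol_mm=0.2):
    """Check every measured dimension against its nominal value."""
    return all(abs(m - n) <= tol_mm for n, m in zip(nominal_mm, measured_mm))

# Nominal dimensions of a printed test artifact vs. calipered measurements.
nominal = [20.0, 10.0, 5.0]
measured = [20.12, 9.95, 5.18]
print(within_tolerance(nominal, measured))  # True: all errors <= 0.2 mm
```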

  • grantee: Harvard University
    amount: $995,133
    city: Cambridge, MA
    year: 2020

    To study algorithmic fairness by developing a theory of principled scoring functions based on notions about pseudorandomness and multicalibration

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Cynthia Dwork

    The Internet Age is quickly giving way to the Age of the Algorithm. Decision-makers of all kinds are increasingly turning to complex algorithmic methods to help them allocate resources, set policies, and assign risk. Banks use algorithms to figure out how likely someone is to default on a loan. Online retailers use algorithms to decide which ads to display on your phone. Pollsters use algorithms to determine who is and who is not likely to vote. Increasing reliance on algorithmic verdicts comes with risks of its own, however. The worry is not so much that the algorithms might get things wrong (human judgment, after all, is hardly error-free) but that they might get things systematically wrong, disfavoring one group of people over another for arbitrary or irrelevant reasons. The worry, in other words, is that we might build algorithms that are unfair. This grant funds efforts by a team led by Harvard computer scientist Cynthia Dwork that aim to address this issue. Dwork’s plans involve constructing new theoretical frameworks, based on rigorous mathematical notions called pseudorandomness, latitude, and multicalibration, that can be used to define and evaluate whether an algorithm is fair or not. Grant funds will allow Dwork to fully develop her theory, build algorithms that meet the characteristics it describes, and test them to see if they indeed perform as theory predicts. If successful, the effort would constitute a significant stride forward in our understanding of an increasingly essential cog in the machinery of modern life.
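    The intuition behind multicalibration can be illustrated with a simple check (this sketch is not Dwork’s construction, and all data, group labels, and function names are invented): a scoring function should be calibrated not just overall but within each identifiable subgroup, so we can measure, per group, how far mean predicted scores drift from observed outcome rates.

```python
# Illustrative sketch of group-wise calibration, the intuition behind
# multicalibration. Scores, outcomes, and groups are invented toy data.

def group_calibration_gap(scores, outcomes, groups):
    """For each group, the gap between mean score and observed base rate."""
    gaps = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        mean_score = sum(scores[i] for i in idx) / len(idx)
        base_rate = sum(outcomes[i] for i in idx) / len(idx)
        gaps[g] = abs(mean_score - base_rate)
    return gaps

scores   = [0.8, 0.7, 0.3, 0.4, 0.9, 0.2]   # predicted probabilities
outcomes = [1,   1,   0,   0,   1,   0]      # observed outcomes
groups   = ["A", "A", "A", "B", "B", "B"]    # subgroup membership
gaps = group_calibration_gap(scores, outcomes, groups)
print(gaps)  # group B's scores drift further from its base rate than A's
```

    A multicalibrated scoring function would keep such gaps small across every (efficiently identifiable) subgroup simultaneously, not merely in aggregate.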

  • grantee: NumFOCUS
    amount: $249,532
    city: Austin, TX
    year: 2020

    To develop and deploy the NumFOCUS Digital Learning and Community Platform

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Lorena Barba

  • grantee: Open Collective Foundation
    amount: $50,000
    city: Walnut, CA
    year: 2020

    To support open source software organizations that have suffered losses due to pandemic-related event cancellations

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Duane O'Brien

  • grantee: Yarn Labs
    amount: $150,000
    city: Cambridge, MA
    year: 2020

    To design and prototype a model for an AI Bias Bounty system

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Joy Buolamwini

  • grantee: Columbia University
    amount: $249,998
    city: New York, NY
    year: 2020

    To support the discovery and iterative use of machine learning models through improvements to the AI Model Share platform

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Michael Parrott

  • grantee: University of Montreal
    amount: $333,960
    city: Montreal, Canada
    year: 2020

    To study and give greater clarity to the categorization of predatory publishing in science

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Kyle Siler

    Journals that charge fees to publish academic work without checking submissions for quality or legitimacy, and without providing the editing, review, or other services offered by legitimate journals, are commonly known as “predatory” journals. Predatory journals deliver little to no value to their authors and flood the scientific corpus with poorly vetted, seldom-cited articles. This grant funds research led by Kyle Siler at the Université de Montréal to study predatory academic journals. Starting with journals in a set of widely circulated lists of predatory publishers, Siler and colleagues will begin by refining a definition of “predation” (the diverse variety of legitimate journal practices makes precise definition controversial) and then compare articles published in predatory and non-predatory venues through a set of lenses: inclusion in vetted databases, citation, full-text analysis, authorship, and variability within publication. Siler and his team will produce peer-reviewed papers as well as briefings for scientific stakeholders. In addition, the researchers will release the first open-access, article-level dataset on the “dark web” of seldom-indexed illegitimate and quasi-illegitimate academic journals.

  • grantee: Gathering for Open Science Hardware
    amount: $574,770
    city: Hudson, NY
    year: 2020

    To support community events and new models for developing open scientific hardware

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Shannon Dosemagen

    The Gathering for Open Science Hardware (GOSH) is a community of professional and citizen scientists, educators, and other open science enthusiasts who are working to advance discovery by leveraging the scientific opportunities created by open hardware. Funds from this grant provide two years of support for GOSH’s core community-building and development activities. Funded activities include planning and hosting of the GOSH annual meeting, development of a model for regional and topic-focused GOSH events, outreach to university administrators and other potential funders, and a “collaborative development program” that would seek to support open hardware projects through an experimental combination of online project development with time-bounded, in-person intensive collaboration.

  • grantee: New York University
    amount: $1,999,053
    city: New York, NY
    year: 2019

    To study and build a research community around the genesis of data used to train and evaluate the performance of AI systems

    • Program Technology
    • Sub-program Exploratory Grantmaking in Technology
    • Investigator Jason Schultz

    Artificial intelligence (AI) algorithms are being built and trained to perform a wide variety of tasks: recognizing faces, identifying objects in photos, processing natural language by extracting concepts from text. Once a system is built and trained, however, how do we know how well it performs relative to other such systems? How do we know if the data used to train the system reflect the context in which the system will be used? To answer these questions, we need to scrutinize the training datasets that are used to construct AI systems, and the benchmarking datasets against which these systems are assessed. This grant supports work by Meredith Whittaker and Kate Crawford, co-founders of the AI Now Institute at New York University, and NYU Law professor Jason Schultz. Over the course of three years, Whittaker, Crawford, Schultz, and their team will dig deeply into the history, design, and technical details of some of the most foundational AI datasets, investigating where they came from, how they have evolved, and how they have been used over time. They will use these findings to catalyze a broader conversation about how to understand and appropriately govern the AI systems that are informed by these datasets. The grant outputs will include multiple papers produced for both academic and lay audiences, visualizations of the provenance and uses of specific datasets, and workshops that will bring together the growing community of researchers studying the data that underpins AI research.
