In 2021, economist Nick Huntington-Klein conducted an experiment. He started with two papers recently published in high-quality economics journals. The first reported on the effects of compulsory schooling on teenage pregnancy rates, looking at policy interventions in the United States and Norway. The second studied the effects of employer-provided health insurance on entrepreneurship. Huntington-Klein provided the data and methodology underlying each study to independent researchers, asking them to replicate the original authors’ findings if they could. In total, seven attempted replications were made of both studies.

What Huntington-Klein found, in work that would later win the Western Economic Association’s Paper of the Year award, was significant variation across the attempted replications. Given the same data and the same methodological instructions, the seven replication attempts reached different point estimates of the magnitude of the effect they were replicating, different estimates of the statistical significance of those estimates, and different confidence intervals for their conclusions. This result, though based on only seven replications, has potentially significant implications for empirical science, including what sorts of findings should count as publishable, how worried we should be when a result fails to replicate, and whether research methodologies are adequately documented and shared.

Funds from this grant support an extension of Huntington-Klein’s work, allowing him and his team to field a similar experiment at significantly larger scale, using 100 replications instead of seven. The study will allow Huntington-Klein to document with greater robustness and precision how researchers’ choices in the analysis of data lead to variability in the conclusions they reach.