The review process for software is analogous to, but in some ways different from, a manuscript review; in addition to assessing the integrity of the methods manifested in the software’s algorithms, reviewers can consider features of the code itself and how well it is “bundled” for use by others. Does it run well under varying conditions? How interoperable is it with other platforms? What is the quality of its documentation? Such review is important for two main reasons. First, software that receives high marks from reputable reviewers lowers barriers to use: scientists can trust that well-reviewed code is robust, reliable, and easy to implement, even if they did not write it themselves. Second, well-regarded software reviews (and citations) can signal value and thus strengthen the incentives for software engineers and others to develop and maintain research software. This grant funds a project by ecologist and data scientist Leah Wasser to further advance research software review in Python, arguably the dominant programming language for data science. pyOpenSci will mirror many of the core functions of the rOpenSci ecosystem, including a grassroots process to develop common community standards, a transparent review process that leverages critical tooling from the Journal of Open Source Software, and efforts to build a strong, well-connected, diverse network of developers, engineers, and working scientists committed to the project.