Correlation is symmetric: if a variable A in a dataset rises and falls in step with another variable B, then B rises and falls with A. Causal relations, in contrast, have direction. Saying “A causes B” is entirely different from saying “B causes A.” That directionality makes causation extremely difficult to detect and analyze using correlations alone: statistical methods framed entirely in terms of correlation cannot even express, let alone analyze or measure, causal relationships. Traditional data analysis also falls short when it rests on spurious correlations, those produced by random noise rather than by underlying processes. Big datasets do not make these challenges any easier; if anything, the opposite is true. What is ultimately needed, if we are to detect, measure, and understand causality, is a more robust framework built on a richer set of concepts designed to capture the inherent directionality of causal relationships.

Computer scientist Elias Bareinboim of Columbia University is developing just such an analytic framework. Drawing on seminal research conducted with his collaborator, computer scientist Judea Pearl, Bareinboim works with graphical methods that represent not only the correlations between variables in a dataset but also whether and how other variables in the dataset affect those correlations. The framework allows an analyst to pose rigorous counterfactual queries of a dataset (what would have happened if…), queries that are essential for understanding causal relations among variables. Bareinboim’s framework is also being adapted for implementation in sophisticated AI and machine learning programs, with the goal of enabling them to separate causal relationships in data from non-causal correlations. Grant funds will support Bareinboim’s research on these and related topics for a period of two years.
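To make the contrast concrete, here is a small illustrative sketch, not drawn from Bareinboim’s own work: two hypothetical variables A and B are both driven by a hidden confounder C, so they are strongly correlated even though neither causes the other. Observing a high A predicts a high B, but an intervention in the style of Pearl’s do-operator, which sets A by fiat and severs its link to C, leaves B unchanged.

```python
import random

random.seed(0)

# Hypothetical model: a hidden confounder C drives both A and B.
# Neither A nor B causes the other.
n = 10_000
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 0.5) for c in C]
B = [c + random.gauss(0, 0.5) for c in C]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    vx = sum((a - mx) ** 2 for a in x) / len(x)
    vy = sum((b - my) ** 2 for b in y) / len(y)
    return cov / (vx * vy) ** 0.5

# Correlation is symmetric and carries no direction.
print(corr(A, B) == corr(B, A))                            # True
print(round(corr(A, B), 2))                                # strongly positive

# Observationally, seeing a high A predicts a high B (via C) ...
B_when_A_high = [b for a, b in zip(A, B) if a > 1.0]
print(round(sum(B_when_A_high) / len(B_when_A_high), 2))   # well above 0

# ... but intervening to set A, do(A = 2), cuts A off from C while
# leaving B's mechanism untouched: B stays near its baseline of 0,
# so A has no causal effect on B despite the strong correlation.
B_after_do = [c + random.gauss(0, 0.5) for c in C]
print(round(sum(B_after_do) / len(B_after_do), 2))         # near 0
```

The simulation only illustrates the gap between seeing and doing; detecting such structure from data alone, without knowing the model in advance, is exactly what the graphical framework described above is designed to address.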