Research

Read the latest research and writing from GRAIL Network members (newest first):

Shira Mitchell and Kristian Lum examine the assumptions and choices made to justify the use of prediction-based decision making, discuss how those choices and assumptions raise fairness concerns, and offer a more consistent catalog of fairness definitions in “Algorithmic Fairness: Choices, Assumptions, and Definitions,” Annual Review of Statistics and Its Application, Vol. 8, 2021.
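
For orientation, here are two standard criteria of the kind such catalogs cover, stated for a binary predictor Ŷ, true outcome Y, and protected attribute A; these are illustrative examples, not the review's full taxonomy:

```latex
% Demographic parity: predictions are independent of the protected attribute.
P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=a') \quad \forall\, a, a'
% Equalized odds: conditional on the true outcome, prediction rates
% (and hence error rates) are equal across groups.
P(\hat{Y}=1 \mid Y=y,\, A=a) = P(\hat{Y}=1 \mid Y=y,\, A=a') \quad \forall\, y, a, a'
```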

Jonathan Frankle examines the trade-offs associated with distributed training methods for neural networks, finding that local stochastic gradient descent (SGD) yields faster training times but lower accuracy, in “Trade-offs of Local SGD at Scale: An Empirical Study,” 12th Annual Workshop on Optimization for Machine Learning, 2020.
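
For readers new to the technique, here is a minimal sketch of the local SGD pattern the paper studies, on a toy least-squares problem (all names and numbers below are illustrative, not from the paper): each worker takes SGD steps on its own data shard, and parameters are averaged only every few steps rather than after every step.

```python
import numpy as np

def local_sgd(workers, steps, sync_every, lr=0.1):
    """Toy local SGD: workers step independently, average periodically."""
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # One private data shard (X, y) per worker.
    shards = []
    for _ in range(workers):
        X = rng.normal(size=(64, 2))
        y = X @ true_w + 0.1 * rng.normal(size=64)
        shards.append((X, y))

    params = [np.zeros(2) for _ in range(workers)]
    for step in range(1, steps + 1):
        for k, (X, y) in enumerate(shards):
            grad = 2 * X.T @ (X @ params[k] - y) / len(y)  # local gradient
            params[k] = params[k] - lr * grad              # local step, no sync
        if step % sync_every == 0:                         # periodic averaging
            avg = np.mean(params, axis=0)
            params = [avg.copy() for _ in range(workers)]
    return np.mean(params, axis=0)

print(local_sgd(workers=4, steps=100, sync_every=8))  # converges near [2, -1]
```

With sync_every=1 this reduces to fully synchronous data-parallel SGD; larger values cut communication (faster wall-clock training) at the cost of drift between the workers' copies, which is the accuracy trade-off the paper measures empirically.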

Avi Goldfarb examines how an increased reliance on prediction-based systems in warfare will also increase the need for, and value of, human judgment in “Artificial intelligence in war: Human judgment as an organizational strength and a strategic liability,” Brookings Research Report, November 2020.

James Bessen – “AI and Jobs: The Role of Demand,” National Bureau of Economic Research, January 2018

“Artificial intelligence (AI) technologies will automate many jobs, but the effect on employment is not obvious. In manufacturing, technology has sharply reduced jobs in recent decades. But before that, for over a century, employment grew, even in industries experiencing rapid technological change. What changed? Demand was highly elastic at first and then became inelastic. The effect of artificial intelligence on jobs will similarly depend critically on the nature of demand. This paper presents a simple model of demand that accurately predicts the rise and fall of employment in the textile, steel and automotive industries. This model provides a useful framework for exploring how AI is likely to affect jobs over the next 10 or 20 years.”
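
The mechanism described in that abstract can be illustrated with a toy constant-elasticity demand model (a deliberately simplified sketch, not the paper's actual model): productivity gains lower prices, and employment rises or falls depending on whether quantity demanded grows faster or slower than output per worker.

```python
def employment(productivity, elasticity, wage=1.0, scale=1.0):
    """Toy model: price falls with productivity (p = wage / A), demand is
    Q = scale * p**(-elasticity), and employment is L = Q / A. The algebra
    gives L proportional to A**(elasticity - 1), so jobs grow with
    productivity only while demand is elastic (elasticity > 1)."""
    price = wage / productivity
    quantity = scale * price ** (-elasticity)
    return quantity / productivity

for A in (1, 2, 4, 8):  # rising productivity
    print(f"A={A}: elastic demand L={employment(A, 2.0):.2f}, "
          f"inelastic demand L={employment(A, 0.5):.2f}")
```

Under elastic demand (elasticity 2.0), employment doubles with each doubling of productivity; under inelastic demand (0.5), the same gains shrink employment, matching the abstract's story of growth followed by decline as demand saturates.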

Suresh Venkatasubramanian, et al. – “Auditing Black-box Models for Indirect Influence,” Knowledge and Information Systems, January 2018

“Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior and in particular how different features influence the model prediction. This is important when interpreting the behavior of complex models or asserting that certain problematic attributes (such as race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models, which lets us study the extent to which existing models take advantage of particular features in the data set, without knowing how the models work.”
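
Here is a much-simplified sketch of auditing in this spirit, using feature permutation as a stand-in for the paper's method (which also accounts for indirect influence carried through correlated features); the data, model, and numbers below are synthetic and illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The "black box": we only call fit/score, never inspect its internals.
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
base = model.score(X_te, y_te)

for j in range(X.shape[1]):
    X_obs = X_te.copy()
    X_obs[:, j] = rng.permutation(X_obs[:, j])  # obscure feature j
    drop = base - model.score(X_obs, y_te)
    print(f"feature {j}: accuracy drop {drop:.3f}")  # big drop = high influence
```

The audit treats the model purely as a scoring function: a large accuracy drop after obscuring a feature indicates the model relies on it, which is exactly the question one would ask about a protected attribute like race or gender.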

Robert Seamans and Manav Raj – “AI, Labor, Productivity and the Need for Firm-Level Data,” National Bureau of Economic Research, January 2018

“We summarize existing empirical findings regarding the adoption of robotics and AI and its effects on aggregated labor and productivity, and argue for more systematic collection of the use of these technologies at the firm level. Existing empirical work primarily uses statistics aggregated by industry or country, which precludes in-depth studies regarding the conditions under which robotics and AI complement or substitute for labor. Further, firm-level data would also allow for studies of effects on firms of different sizes, the role of market structure in technology adoption, the impact on entrepreneurs and innovators, and the effect on regional economies, amongst others. We highlight several ways that such firm-level data could be collected and used by academics, policymakers and other researchers.”

Ryan Calo – “Artificial Intelligence Policy: A Primer and Roadmap,” UC Davis Law Review, October 2017

“Talk of artificial intelligence is everywhere. People marvel at the capacity of machines to translate any language and master any game. Others condemn the use of secret algorithms to sentence criminal defendants or recoil at the prospect of machines gunning for blue-, pink-, and white-collar jobs. Some worry aloud that artificial intelligence will be humankind’s ‘final invention.’

“This essay, prepared in connection with UC Davis Law Review’s 50th anniversary symposium, explains why AI is suddenly on everyone’s mind and provides a roadmap to the major policy questions AI raises. The essay is designed to help policymakers, investors, technologists, scholars, and students understand the contemporary policy environment around AI at least well enough to initiate their own exploration.”

Margaret Hu – “Algorithmic Jim Crow,” Fordham Law Review, 2017

“This Article contends that current immigration- and security-related vetting protocols risk promulgating an algorithmically driven form of Jim Crow. Under the “separate but equal” discrimination of a historic Jim Crow regime, state laws required mandatory separation and discrimination on the front end, while purportedly establishing equality on the back end. In contrast, an Algorithmic Jim Crow regime allows for “equal but separate” discrimination. Under Algorithmic Jim Crow, equal vetting and database screening of all citizens and noncitizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting, and acting upon vetting and screening systems in ways that result in a disparate impact.”

Kristian Lum and William Isaac – “To predict and serve?,” Significance (The Royal Statistical Society), October 2016

“Predictive policing systems are used increasingly by law enforcement to try to prevent crime before it occurs. But what happens when these systems are trained using biased data? Kristian Lum and William Isaac consider the evidence – and the social consequences.”
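
One way to see the concern behind that question is a toy feedback loop (illustrative numbers, not from the article): two districts have identical true crime rates, but one starts with more recorded incidents, and patrols are allocated in proportion to the records.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = np.array([0.5, 0.5])  # identical underlying crime rates
recorded = np.array([10.0, 5.0])  # historical recording bias toward district 0

for day in range(365):
    patrol = recorded / recorded.sum()  # patrols follow the data
    # Crime is recorded only where police are present to observe it.
    recorded += rng.poisson(true_rate * patrol * 20)

print(recorded / recorded.sum())  # district 0's inflated share persists
```

Because new records accrue in proportion to past records, the initial bias never washes out: the data keep confirming that district 0 is the high-crime area even though the true rates are equal.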