GRAIL Recommended Reading
Check out some of the research and writing our Network members find illuminating:
Suresh Venkatasubramanian recommends:
Disparate Impact in Big Data Policing – The degree to which predictive policing systems have discriminatory results is unclear to the public and to the police themselves, largely because there is no incentive in place for a department focused solely on “crime control” to spend resources asking the question. This is a problem for which existing law does not provide a solution. Finding that neither the typical constitutional modes of police regulation nor a hypothetical anti-discrimination law would provide a solution, this Article turns toward a new regulatory proposal centered on “algorithmic impact statements.”
Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes – The enormous financial success of online advertising platforms is partially due to the precise targeting features they offer. Although researchers and journalists have found many ways that advertisers can target—or exclude—particular groups of users seeing their ads, comparatively little attention has been paid to the implications of the platform’s ad delivery process, composed of the platform’s choices about who should see an ad.
It has been hypothesized that this process can “skew” ad delivery in ways that the advertisers do not intend, making some users less likely than others to see particular ads based on their demographic characteristics. In this paper, we demonstrate that such skewed delivery occurs on Facebook, due to market and financial optimization effects as well as the platform’s own predictions about the “relevance” of ads to different groups of users. We find that both the advertiser’s budget and the content of the ad each significantly contribute to the skew of Facebook’s ad delivery. Critically, we observe significant skew in delivery along gender and racial lines for “real” ads for employment and housing opportunities despite neutral targeting parameters.
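To make the notion of "skew" concrete, a simple way to quantify it (not the paper's own methodology, just an illustrative sketch) is to compare per-group delivery rates for an ad with neutral targeting and apply the familiar "80% rule" heuristic from disparate-impact analysis. All numbers below are hypothetical:

```python
# Illustrative sketch with hypothetical numbers (not data from the paper):
# compare how often an ad with neutral targeting is delivered to two
# audience groups, then apply the "80% rule" disparate-impact heuristic.

def delivery_rate(delivered: int, audience: int) -> float:
    """Fraction of the eligible audience that actually saw the ad."""
    return delivered / audience

def disparate_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower delivery rate to the higher one (0..1]."""
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi

# Hypothetical delivery counts for one employment ad, neutral targeting.
rate_men = delivery_rate(delivered=4500, audience=10_000)    # 0.45
rate_women = delivery_rate(delivered=2700, audience=10_000)  # 0.27

ratio = disparate_impact_ratio(rate_men, rate_women)
print(f"disparate impact ratio: {ratio:.2f}")   # 0.60
print("flagged under 80% rule:", ratio < 0.8)   # True
```

A ratio below 0.8 is the conventional red flag in employment-discrimination analysis; the paper's point is that such skew can arise from the platform's delivery optimization even when the advertiser's targeting parameters are neutral.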
Stan Adams recommends:
Mixed Messages? The Limits of Automated Social Media Content Analysis – Governments and companies are turning to automated tools to make sense of what people post on social media, for purposes ranging from hate speech detection to law enforcement investigations. Policymakers routinely call for social media companies to identify and take down hate speech, terrorist propaganda, harassment, “fake news” or disinformation, and other forms of problematic speech. Other policy proposals have focused on mining social media to inform law enforcement and immigration decisions. But these proposals wrongly assume that automated technology can accomplish on a large scale the kind of nuanced analysis that humans can accomplish on a small scale.
Automated decision systems are currently being used by public agencies, reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms. Many such systems operate as “black boxes” – opaque software tools working outside the scope of meaningful scrutiny and accountability. The Algorithmic Impact Assessment (AIA) framework proposed in this report is designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where – or if – their use is acceptable.
Caleb Watney recommends:
Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics – We live in an age of paradox. Systems using artificial intelligence match or surpass human-level performance in more and more domains, leveraging rapid advances in other technologies and driving soaring stock prices. Yet measured productivity growth has declined by half over the past decade, and real income has stagnated since the late 1990s for a majority of Americans. We describe four potential explanations for this clash of expectations and statistics: false hopes, mismeasurement, redistribution, and implementation lags.
The Malicious Use of Artificial Intelligence – Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.