Transparency is the Key to Ethical AI Decision-Making

By Charlie von Simson
April 1, 2019
AI and The Law

As artificial intelligence underpins more and more public and private decision-making, critics are justifiably concerned about the potential lack of accountability for how those decisions are made. In our view, transparency is the key to ensuring that decisions that are aided and augmented by algorithmic data analysis are ethically sound.


When businesses and government agencies turn algorithms loose to make decisions that offend our sense of fairness, we can analyze and fix the problem through a combination of data science and existing law. Our understanding of how machine learning actually works is progressing daily. Tort law will evolve to create appropriate standards of care and theories of liability. Improper government bias rooted in automated decision-making tools will be addressed the way biased government decisions are addressed today: through challenges to those decisions under statutes and the due process and equal protection clauses of the Constitution.

Some ethicists make a false distinction between “neutral” and “policy-directed” algorithms. Contrary to their view, every species of algorithm-based data analysis (we’ll use the shorthand “AI”) is the product of an active editorial hand. The very point of algorithmic analysis is to apply “bias” and “discrimination” (called recall, matching, and ranking by machine intelligence engineers) to reveal and exploit statistical patterns in large data sets. It is meaningless to talk about algorithmic bias as inherently positive or negative. The challenge for people applying AI to public decision-making is to understand and fix improper biases that harm people.
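
To make the point concrete, the sketch below is a purely hypothetical toy ranking model; the features, weights, and scoring function are our own invention, not any agency’s or vendor’s system. It shows that even the simplest scoring system encodes an editorial hand: someone decides which features count and how much.

```python
# Purely hypothetical toy ranking model. Every feature and weight here is an
# editorial choice made by whoever builds the system -- the "bias" is designed in.
from dataclasses import dataclass

@dataclass
class Record:
    prior_referrals: int          # assumption: count of past hotline referrals
    years_on_assistance: float    # assumption: years using public benefits
    flagged_by_caseworker: bool   # assumption: a manual flag in the case file

# The modeler chooses which signals matter and how much -- this is the
# "discrimination" (recall, matching, ranking) doing its intended work.
WEIGHTS = {
    "prior_referrals": 2.0,
    "years_on_assistance": 0.5,
    "flagged_by_caseworker": 3.0,
}

def score(r: Record) -> float:
    """Weighted sum used to rank records from lowest to highest priority."""
    return (WEIGHTS["prior_referrals"] * r.prior_referrals
            + WEIGHTS["years_on_assistance"] * r.years_on_assistance
            + WEIGHTS["flagged_by_caseworker"] * float(r.flagged_by_caseworker))

records = [Record(0, 1.0, False), Record(3, 4.0, True)]
ranked = sorted(records, key=score, reverse=True)   # ranking in action
```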

Transparency is the most important step toward a solution to harmful algorithmic bias. The AI Now project at New York University has made an interesting start at solving the “black-box problem” by isolating five essential elements of a “Public Agency Algorithmic Impact Assessment”:

  1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;
  2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;
  3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;
  4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and
  5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct.

In her chilling book on the unintended consequences of algorithmic bias, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Virginia Eubanks provides a detailed case study to illustrate how algorithmic decision-making goes wrong. The study offers a good example of how the AI Now guidelines should be implemented to adjust or abandon an ill-considered algorithm.

Eubanks reports that, in a well-meaning effort to detect child abuse, the Allegheny County, Pennsylvania Department of Children, Youth and Families implemented an algorithm to assign “threat scores” to children. The scores were intended to quantify how vulnerable each child was to abuse. To build the model, the county needed an outcome variable for the algorithm to predict. The trouble started because the county didn’t have enough data on actual child maltreatment to create a statistically meaningful model. So instead of using the relevant data, the county fell into an attractive trap: it substituted variables it had plenty of data on (community referrals to public hotlines and placement in foster care) as proxies for child maltreatment. The county then trained the algorithm on those proxies to predict and assign the “threat scores.”
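
A hypothetical sketch of that design choice is below. The column names, data file, and model are invented for illustration using a standard scikit-learn classifier; this is not the county’s actual code, data, or vendor model.

```python
# Hypothetical sketch of the proxy-label trap. Column names and the data file
# are invented; this is not Allegheny County's actual pipeline.
import pandas as pd
from sklearn.linear_model import LogisticRegression

records = pd.read_csv("county_records.csv")    # assumed historical case data

features = records[["prior_hotline_referrals",
                    "months_on_public_benefits",
                    "prior_foster_placement"]]

# The outcome of real interest -- substantiated maltreatment -- is scarce, so
# the model is trained to predict a plentiful proxy instead: whether the
# family is referred to the hotline again.
proxy_label = records["re_referred_within_2_years"]

model = LogisticRegression(max_iter=1000).fit(features, proxy_label)

# The resulting "threat score" measures the likelihood of being reported
# again, not the likelihood of abuse, and inherits every bias in who reports.
threat_score = model.predict_proba(features)[:, 1]
```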

Contrary to the first element of the AI Now assessment framework, the county did not examine or account for its own biases. As it turns out, the substitute variables are not good proxies for child abuse. For one, they are subjective. As Eubanks explains, the referral proxy includes a hidden bias: “anonymous reporters and mandated reporters report black and biracial families for abuse and neglect three and a half times more often than they report white families.” Even worse, angry neighbors, landlords or family members make intentionally false reports as punishment or retribution. The resulting “threat scores” revealed as much about those reporting alleged abuse as about the alleged victims.
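
A back-of-the-envelope simulation makes the consequence plain. All of the numbers below are invented except the three-and-a-half-times reporting disparity Eubanks cites; the point is only that a score calibrated to the referral proxy reproduces the reporting gap, not the underlying rate of maltreatment.

```python
# Invented numbers except the 3.5x reporting disparity. Two groups with the
# same underlying maltreatment rate, but very different odds of being reported.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # hypothetical population per group
true_maltreatment_rate = 0.02                 # assumed identical for both groups

report_prob = {"white": 0.04,                 # invented baseline reporting rate
               "black_and_biracial": 0.04 * 3.5}   # the 3.5x disparity

for group, p in report_prob.items():
    referred = rng.random(n) < p              # the proxy label: "was referred"
    print(group, round(referred.mean(), 3))   # base rate the model will learn
# The proxy base rates differ by roughly 3.5x even though the true rate is the
# same, so a score trained on referrals ranks the over-reported group as riskier.
```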

The Allegheny County algorithm also illustrates the consequences of failing to provide due process to address the failures of automated decision-making, in accordance with the fifth AI Now principle. Allegheny County didn’t have data on all families; its data had been collected only from families using public resources, i.e., low-income families. The result was an algorithm that targeted low-income families for scrutiny and potentially created feedback loops, making it difficult for families swept up into the system to ever completely escape its monitoring and surveillance. This outcome offends basic notions of fairness. It is an attractive target for a civil rights class action.
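
The feedback loop is easy to sketch in the abstract. The toy simulation below is our own illustration, with every threshold and increment invented: contact with public services generates records, records raise the score, a high score triggers an investigation, and the investigation generates still more records.

```python
# Hypothetical feedback-loop sketch; every number and rule here is invented.
THRESHOLD = 0.3

def score(records: int) -> float:
    """Toy scoring rule: the more records in the system, the higher the score."""
    return min(1.0, 0.1 + 0.05 * records)

records = 1                                    # a family enters via one referral
for year in range(6):
    records += 1                               # routine use of public services adds data
    if score(records) > THRESHOLD:             # flagged -> investigated
        records += 2                           # the investigation adds more records
    print(year, records, round(score(records), 2))
# Families that never touch public agencies contribute no records at all, so
# the loop tightens only around those already in the data.
```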

The Allegheny County case shows how tort law will develop to address over-reliance on bad algorithms. Although the “threat score” was supposed to be one of many factors for caseworkers to consider before deciding which families to investigate, Eubanks observed that in practice the algorithm was training the intake workers. Caseworker judgment had helped counteract hidden bias. Armed with the algorithm, caseworkers began replacing their own judgment with the algorithm’s, effectively relinquishing their gatekeeping role. The system became more class and race biased as a result.

Once the failures of an improperly biased algorithm are broken into their component parts, it’s easier to see which of those failures can be fixed by better data science and which constitute meaningful civil rights violations that must be remedied by the courts. Evolving frameworks such as the AI Now factors will help policymakers and lawyers increase transparency and develop and enforce sound standards for addressing harmful algorithmic bias.

Charlie von Simson

Charlie von Simson is a legal subject matter expert at ROSS. He practiced law for twenty years before running away to join a startup.