Fortune Magazine: A.I. bias isn’t the problem. Our society is.

On Wednesday, Sens. Ron Wyden and Cory Booker, along with Rep. Yvette Clarke, introduced the Algorithmic Accountability Act, a sign of policymakers’ growing concern that artificial intelligence is magnifying human bias in tools such as facial recognition, self-driving cars, customer service, marketing, and content moderation.

While A.I. has incredible potential to improve our lives, the truth is that it is only capable of reflecting our societal problems right back at us. And because of that, we can’t trust it to make important decisions that are susceptible to human prejudice.

Even the most enlightened humans have deep-seated biases, which are difficult to identify and even harder to correct. Today’s A.I. learns by encoding patterns from the data it feeds on. If you build an A.I. system designed to identify who is going to be a future convict, for example, the only data you can rely on is past data. Since the percentage of blacks in prison is higher, and the percentage of whites in prison lower, than their respective shares of the U.S. population, a naive A.I. system will infer that a black person is more likely than a white person to commit a crime.
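To see the mechanism concretely, here is a minimal sketch in Python (using scikit-learn; the groups, base rates, and enforcement rates are all invented for illustration and describe no real system). Two groups behave identically, but one group’s offenses were recorded far more often, and a naive model trained on those records concludes that the more heavily policed group is riskier:

```python
# Toy illustration: a naive classifier trained on biased historical records.
# Every number here is made up; this models no real dataset or system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with *identical* underlying behavior: a 10% offense rate each.
group = rng.integers(0, 2, size=n)
offended = rng.random(n) < 0.10

# Biased record-keeping: group 1 was policed more heavily, so its offenses
# were far more likely to end in a recorded conviction.
caught_rate = np.where(group == 1, 0.9, 0.3)
convicted = offended & (rng.random(n) < caught_rate)

# Train a naive model on the historical record.
X = group.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, convicted)

p0, p1 = model.predict_proba([[0.0], [1.0]])[:, 1]
print(f"predicted risk, group 0: {p0:.3f}")  # roughly 0.03
print(f"predicted risk, group 1: {p1:.3f}")  # roughly 0.09
# Identical behavior, different data: the model "learns" group 1 is riskier.
```

The model is doing exactly what it was asked to do: it faithfully compresses a biased record into a prediction.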

Such a system is unable to take into account the systemic biases that have produced blacks’ relatively higher incarceration rates. And at present, the only data we have to train A.I. systems is data that, though superficially objective, inherently encodes societal norms and biases.

Finding better data will be exceptionally hard. Even if we programmed an A.I. system to ignore race and use different measures when predicting future criminality, the results would likely come out the same. Consider the other attributes convicts might share: living in particular neighborhoods, coming from single-parent families, or not graduating from high school. All of these attributes would effectively act as proxies for race, because machine learning latches onto their statistical correlations with race rather than untangling the history behind them.
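A small extension of the sketch above shows why simply deleting the race column does not help. In the toy code below, the model never sees group membership at all, only an invented “neighborhood” feature that overlaps with group 80% of the time (again, a made-up figure, standing in for the way residential segregation ties location to race). The disparity in risk scores survives intact:

```python
# Toy illustration of proxy features: a "race-blind" model that still
# reproduces the disparity. All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)

# Neighborhood mostly tracks group membership (80% overlap here), the way
# segregated housing links zip codes to race.
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# Same biased labels as before: equal behavior, unequal enforcement.
offended = rng.random(n) < 0.10
convicted = offended & (rng.random(n) < np.where(group == 1, 0.9, 0.3))

# The model is trained only on neighborhood; it never sees `group`.
X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, convicted)

scores = model.predict_proba(X)[:, 1]
print(f"mean risk score, group 0: {scores[group == 0].mean():.3f}")  # ~0.05
print(f"mean risk score, group 1: {scores[group == 1].mean():.3f}")  # ~0.07
# The gap persists even though race was never an input: neighborhood
# acts as a proxy for it.
```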

In this way, the current generation of artificial intelligence is smart like a savant, but has nothing close to the discriminating intelligence of a human.

A.I. shines at pattern-matching tasks with objective outcomes. Playing Go, driving a car down a street, and identifying a cancerous lesion in a mammogram are excellent examples of narrow A.I. These systems can be incredibly helpful extensions of how humans work and are already surpassing us in discrete parts of jobs. A tumor is a tumor, regardless of whether it is in the body of an Asian or a Caucasian patient. Because these systems base their judgments on objectively measurable data, they are readily correctable if and when interpretations of those data are overhauled.

But, although an A.I. machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists.

This is where A.I. presents its greatest risk: in softer tasks that may have objective outcomes but incorporate what we would normally call judgment. Some such tasks exercise enormous influence over people’s lives. Granting a mortgage, admitting a child to a university, awarding a felon parole, and deciding whether children should be separated from their birth parents over suspicions of abuse all fall into this category. Such judgments are highly susceptible to human biases, and they are biases that only humans themselves have the ability to detect.

Another failure of A.I. lies in how it analyzes and promotes content. Services like YouTube have built engagement-boosting algorithms that identify the stickiest content and push it to users. Unfortunately, these algorithms were designed without a circuit breaker, and any human assessment of whether the toxic content they promoted was good for society came too late to matter.

Growing awareness of these risks, however, has not slowed the rapid weaving of algorithmic decision-making into the fabric of society. Today, the tech giants deploy A.I. to influence what we see, hear, buy, and even feel. The burden of discernment is shifting rapidly from people to machines in many other corners of life, all under the guise of “optimization.”

It’s time to press pause. Perhaps in the future we can create systems that make excellent use of data about our lives while excising bias. Until then, A.I. left unchecked poses more risk than benefit to society.
