The White House’s ‘AI Bill of Rights’ outlines five principles to make artificial intelligence safer, more transparent and less discriminatory

Many AI algorithms, like facial recognition software, have been shown to be discriminatory to people of color. // Photo by Andy Kelly on Unsplash

Christopher Dancy, Penn State

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there is very little policy or regulation governing the development and use of AI systems in the U.S. Tech companies have largely been left to regulate themselves in this arena, a gap that has led to decisions and products that have drawn criticism.

Google fired an employee who publicly raised concerns over how a certain type of AI can contribute to environmental and social problems. Other AI companies have developed products used by organizations like the Los Angeles Police Department, where they have been shown to reinforce existing racially biased policing practices.

There are some government recommendations and guidance regarding AI use. But in early October 2022, the White House Office of Science and Technology Policy added to federal guidance in a big way by releasing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology Policy says that the protections outlined in the document should be applied to all automated systems. The blueprint spells out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can act as a guide to help prevent AI systems from limiting the rights of U.S. residents.

As a computer scientist who studies the ways people interact with AI systems – and in particular how anti-Blackness mediates those interactions – I find this guide a step in the right direction, even though it has some holes and is not enforceable.

Improving systems for all

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk of AI furthering discrimination.

It is critically important to include feedback from the people who are going to be most affected by an AI system – especially marginalized communities – during development. // Photo by Possessed Photography on Unsplash

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by the systems. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.

The second principle focuses on the known problem of algorithmic discrimination within AI systems. A well-known example of this problem is how mortgage approval algorithms discriminate against minorities. The document asks for companies to develop AI systems that do not treat people differently based on their race, sex or other protected class status. It suggests companies employ tools such as equity assessments that can help assess how an AI system may impact members of exploited and marginalized communities.

These first two principles address big issues of bias and fairness found in AI development and use.

Privacy, transparency and control

The final three principles outline ways to give people more control when interacting with AI systems.

The third principle is on data privacy. It seeks to ensure that people have more say about how their data is used and are protected from abusive data practices. This section aims to address situations where, for example, companies use deceptive design to manipulate users into giving away their data. The blueprint calls for practices like not taking a person’s data unless they consent to it and asking in a way that is understandable to that person.

The next principle focuses on “notice and explanation.” It highlights the importance of transparency – people should know how an AI system is being used as well as the ways in which an AI contributes to outcomes that might affect them. Take, for example, the New York City Administration for Child Services. Research has shown that the agency uses outsourced AI systems to predict child maltreatment, systems that most people don’t realize are being used, even when they are being investigated.

Smart speakers have been caught collecting and storing conversations without users’ knowledge. // Photo by Sebastian Scholz (Nuki) on Unsplash

The AI Bill of Rights provides a guideline that people in New York who are affected by the AI systems in this example should be notified that an AI was involved and have access to an explanation of what the AI did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.

The last principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and feedback. The section specifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, take the case of someone applying for a mortgage. They would be informed if an AI algorithm was used to consider their application and would have the option of opting out of that AI use in favor of review by an actual person.

Smart guidelines, no enforceability

The five principles laid out in the AI Bill of Rights address many of the issues scholars have raised over the design and use of AI. Nonetheless, this is a nonbinding document and not currently enforceable.

It may be too much to hope that industry and government agencies will put these ideas to use in the exact ways the White House urges. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will continue to push for self-regulation.

One other issue that I see within the AI Bill of Rights is that it fails to directly call out systems of oppression – like racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses ideas of bias and fairness, the lack of focus on systems of oppression is a notable hole and a known issue within AI development.

Despite these shortcomings, this blueprint could be a positive step toward better AI systems, and maybe the first step toward regulation. A document such as this one, even if not policy, can be a powerful reference for people advocating for changes in the way an organization develops and uses AI systems.

Christopher Dancy, Associate Professor of Industrial & Manufacturing Engineering and Computer Science & Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.
