Biden’s Executive Order Puts Civil Rights In The Middle Of The AI Regulation Discussion

US Vice President Kamala Harris looks on during a meeting with civil rights leaders and consumer protection experts to discuss the societal impact of artificial intelligence, in the Eisenhower Executive Office Building in Washington, DC, on July 12, 2023. (Photo by Mandel Ngan/AFP via Getty Images)

This article is part of TPM Cafe, TPM’s home for opinion and news analysis. It was originally published at The Conversation.

On Oct. 4, 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age. The blueprint launched a conversation about how artificial intelligence innovation can proceed in line with a set of fair principles, including safe and effective systems, algorithmic discrimination protections, privacy and transparency.

A growing body of evidence highlights the civil and consumer rights that AI and automated decision-making jeopardize. Communities that have faced the most egregious discrimination historically now face complex and highly opaque forms of discrimination under AI systems. This discrimination occurs in employment, housing, voting, lending, criminal justice, social media, ad tech targeting, surveillance and profiling. For example, there have been cases of AI systems contributing to discrimination against women in hiring and racial discrimination in the criminal justice system.

In the months that followed the blueprint’s release, the arrival of generative AI systems like ChatGPT added urgency to discussions about how best to govern emerging technologies in ways that mitigate risk without stifling innovation.

A year after the blueprint was unveiled, the Biden administration issued a broad executive order on Oct. 30, 2023, titled Safe, Secure, and Trustworthy AI. While much of the order focuses on safety, it incorporates many of the principles in the blueprint.

The order includes several provisions that focus on civil rights and equity. For example, it requires that the federal government develop guidance for federal contractors on how to prevent AI algorithms from being used to exacerbate discrimination. It also calls for training on how best to approach the investigation and prosecution of civil rights violations related to AI and ensure AI fairness throughout the criminal justice system.

The vision laid out in the blueprint has been incorporated in the executive order as guidance for federal agencies. My research in technology and civil rights underscores the importance of civil rights and equity principles in AI regulation.

Civil rights and AI

Civil rights laws often take decades or even lifetimes to advance. Meanwhile, artificial intelligence technologies and algorithmic systems are rapidly introducing opaque, black-box harms, such as automated decision-making that can produce disparate impacts, including racial bias in facial recognition systems.

These harms are often difficult to challenge, and current civil rights laws and regulations may not be able to address them. This raises the question of how to ensure that civil rights are not compromised as new AI technologies permeate society.

When combating algorithmic discrimination, what does an arc that bends toward justice look like? What does a “Letter from Birmingham Jail” look like when a civil rights activist is protesting not unfair physical detention but digital constraints such as disparate harms from digitized forms of profiling, targeting and surveillance?

The 2022 blueprint was developed under the leadership of Alondra Nelson, then acting director of the Office of Science and Technology Policy, and her team. The blueprint lays out a series of fair principles intended to limit the constellation of harms that AI and automated systems can cause.

Beyond that, the blueprint links the concepts of AI fair principles and AI equity to the U.S. Constitution and the Bill of Rights. By associating these fair principles with civil rights and the Bill of Rights, the blueprint moves the dialogue away from a discussion that focuses only on a series of technical commitments, such as making AI systems more transparent, and toward how the absence of these principles might threaten democracy.

Video: Arati Prabhakar, director of the White House Office of Science and Technology Policy, and Alondra Nelson, former acting director, discussed the Blueprint for an AI Bill of Rights at a conference on the anniversary of its release. https://www.youtube.com/embed/34GcXV6bwG8?wmode=transparent&start=0

A few months after the release of the blueprint, the U.S. Department of Justice’s Civil Rights Division, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission and the Federal Trade Commission jointly pledged to uphold the U.S.’s commitment to the core principles of fairness, equality and justice as emerging automated systems become increasingly common in daily life. Federal and state legislation has been proposed to combat the discriminatory impact of AI and automated decision-making.

Civil rights organizations take on tech

Multiple civil rights organizations, including the Leadership Conference on Civil and Human Rights, have made AI-based discrimination a priority. On Sept. 7, 2023, the Leadership Conference launched a new Center for Civil Rights and Technology and tapped Nelson, author of the Blueprint for an AI Bill of Rights, as an adviser.

Before the release of the new executive order, Sen. Ed Markey, Rep. Pramila Jayapal and other members of Congress sent a letter to the White House urging the administration to incorporate the blueprint’s principles into the anticipated executive order. They said that “the federal government’s commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era.”

Numerous civil rights and civil society organizations sent a similar letter to the White House, urging the administration to take action on the blueprint’s principles in the executive order.

As the Blueprint for an AI Bill of Rights passed its first anniversary, its long-term impact was unknown. But, true to its title, it presented a vision for protecting civil rights in the algorithmic age. That vision has now been incorporated in the Executive Order on Safe, Secure, and Trustworthy AI. The order can’t be properly understood without this civil rights context.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Notable Replies

  1. Nothing in there about how to stop Skynet from killing us all or turning us into batteries.

  2. Garbage in and garbage out. I have looked at this issue and I don’t see anything about AI that makes its solutions inherently better than human solutions. Faster yes, but better no. At the bottom of all AI is a human being or a group of human beings putting algorithms together. In doing that they bring their human prejudices to the table. The AI they produce is going to reflect their human prejudices.

    I have been thinking about the software company I am associated with. When we built our company, we intentionally built it to be as inclusive as it could be. That was the intentional work of our executives and our HR professionals. There is nothing to stop another company from incorporating its own hatred and prejudices in the work it does.

    If you watch the AI sausage being prepared, you rapidly come to the conclusion that Skynet is a very long way off. The best AI apes self-awareness, but it really isn’t self-aware. It is just a different way to make sense of large amounts of data. It does reinject mystery into software development, but it isn’t magic.

  3. You are spot on target. AI depends on its data source. If that source is flawed in any way, your AI is flawed.

  4. With generative AI it’s a combination of flawed data and flawed humans training the model’s output. Racial and other bias can creep in from both directions. It’s going to be very difficult to weed out completely because the AI will just reflect the bias of the society that builds it.

    I read the fact sheet on the “Safe, Secure, and Trustworthy AI” executive order, but it seems mostly just aspirational fluff, without any teeth to stop the immense commercial push to get generative AI on everyone’s phones and into everyone’s lives.

    Bill Gates wrote a post on his blog praising the coming of Personal AI Assistants, and I can just imagine how companies are drooling at the prospect of being able to scrape that personal info as a marketing tool. The privacy aspect of AI collecting personal data as your best buddy living on your phone isn’t being looked at hard enough. It’s a potential gold mine that goes far beyond what Facebook and other social media can do now with your data.

  5. I am less worried about Skynet and more worried about Soylent Green.
