Ethics in artificial intelligence (AI) has never been more relevant than in the 21st century. Despite AI's many useful applications across disciplines, improving processes and enabling new scientific discoveries and economic opportunities, the technology can also bring about negative outcomes and risks.
The domain-agnostic nature of AI technologies has prompted countries to set up AI regulation and governance to protect their citizens. This is often accomplished by ensuring that AI solutions are thoroughly researched and developed, and that crucial accountability procedures are upheld to make them secure to use and to minimize their detrimental effects. These regulations and practices can also foster public trust in, and understanding of, the technologies behind some of the functionality in everyday applications.
One attempt at making AI systems safe for the American people is the AI Bill of Rights, a set of principles and practices that guide the design, use, and deployment of automated systems in the United States.
What is the AI Bill of Rights?
Most AI practitioners do not start projects with the intent to harm society and are more than happy to follow the industry-prescribed practices available to them and to the businesses that use AI. In reality, though, research teams and businesses find it difficult to achieve their goals and respect user rights at the same time: the trade-offs (cost, time, and implementation difficulty) can make an AI system ineffective or unsafe for people to use.
The AI Bill of Rights is a set of five principles that guide the design and use of automated systems. These principles should make it simpler for researchers and businesses to do the right thing, especially on projects that have a direct impact on people and society. Consider the implications of applying an automated system to employee promotions, insurance underwriting and premium pricing, job recruitment, or school admissions. Production models trained on biased data can generate catastrophic results and societal backlash; in general, this amounts to algorithmic discrimination.
The five principles of the AI Bill of Rights are:
- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Human Alternatives, Consideration, and Fallback
The main document discusses these principles exhaustively, with appropriate examples and the laws backing them; here is a summary of each.
Safe and Effective Systems
Because AI technologies can cause human harm, this principle proposes that, in every use, people should be protected from ineffective or unsafe systems. It focuses on three main concerns:
Project due processes
The principle presents a framework for teams to proactively and continuously protect the public from harm. It calls for extensive early-stage consultation with the public, proportionate to the technology's use, with that input incorporated into the design, implementation, deployment, and maintenance of the automated system. It also expects all systems to be tested against domain-specific best practices. Risk identification and mitigation strategies, continuous monitoring (functional and operational), and clear system governance procedures, with a business structure to run them, are also part of this framework for proper due process.
Data quality
The principle encourages teams building AI technologies to avoid inappropriate, low-quality, or irrelevant data and to prevent the harm that comes from reusing data in sensitive domains (for example, criminal records reused in an unrelated legal case). Teams using these technologies should source high-quality data relevant to the task, track where the data came from, and review it for inputs that could make the system ineffective.
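The data-quality checks described above can be made concrete in a pipeline. The following is a minimal, hypothetical sketch (field names and the `review_record` helper are illustrative, not from the Blueprint) of provenance and reuse checks applied to a record before it enters training data:

```python
# Illustrative sketch: simple provenance and data-reuse checks of the kind
# the principle describes. Field names ("source", "collected_for") are
# hypothetical conventions, not a prescribed schema.

REQUIRED_FIELDS = {"value", "source", "collected_for"}

def review_record(record, intended_use):
    """Return a list of problems with one data record; empty means it passes."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    # Track provenance: every record must name its source.
    if not record.get("source"):
        problems.append("no data source recorded")
    # Guard against reuse in a different, possibly sensitive, domain.
    if record.get("collected_for") and record["collected_for"] != intended_use:
        problems.append(
            f"collected for {record['collected_for']!r}, "
            f"being reused for {intended_use!r}"
        )
    return problems

record = {"value": 42, "source": "survey_2023", "collected_for": "credit_scoring"}
print(review_record(record, intended_use="hiring"))
# flags the record: it was collected for credit scoring, not hiring
```

Real pipelines would add domain-specific relevance checks, but even this small gate records why data was rejected, which supports the reporting the principle asks for.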
Independent evaluation and reporting
Organizations should be able to demonstrate the safety and effectiveness of the system through independent evaluations by researchers and ethics review boards, along with detailed reporting of the project's processes and documentation of how the technology is used.
Algorithmic Discrimination Protections
Algorithmic discrimination happens when automated systems support unjustified disparate treatment or negatively affect individuals based on their identity (race, ethnicity, sex, religion, etc.). To mitigate this, the bill proposes that research teams and organizations should:
- Proactively assess the equity in their design process.
- Utilize robust and representative data.
- Guard against proxies.
- Continuously monitor for disparities and mitigate them.
- Ensure that principles of accessibility are discussed and applied during the design, development, and deployment of automated systems.
- Document organizational processes tied to the project and the system.
- Report impact assessments.
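Disparity monitoring, one of the practices listed above, can be illustrated with a small sketch. The example below (not from the Blueprint) computes the disparate impact ratio of an automated decision system, using the "four-fifths rule" heuristic common in US employment-discrimination analysis; the group labels and data are hypothetical:

```python
# Illustrative sketch: monitoring an automated decision system for
# disparate outcomes. A disparate impact ratio below 0.8 (the
# "four-fifths rule") is conventionally flagged for review.

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns rate per group."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: (group, was the applicant selected?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8: flag for review
```

A ratio alone does not prove or disprove discrimination, which is why the bill pairs monitoring with impact assessments and documented mitigation.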
Data Privacy
Achieving data privacy is fundamental to the success of the other frameworks: data is increasingly collected, shared, or reused between companies in different sectors and industries, which can lead to the tracking and profiling of individuals. For example, an insurer could use the publicly available data (such as social media) of a potential policyholder to decide which life insurance to offer them. With no clear laws on what can or can't be used, this can be detrimental to everyone.
To ensure data privacy, the bill proposes that organizations:
- Protect privacy by design and by default.
- Protect the public from unchecked surveillance.
- Provide the public with mechanisms for appropriate and meaningful consent and control over their data.
- Demonstrate that data and user control are protected.
Notice and Explanation
The bill proposes that professionals (designers, developers, data scientists, etc.) should inform the public when an automated system is being used and clearly explain the role the system plays in outcomes that affect the user, positively or negatively.
Organizations or researchers should:
- Provide clear, timely, understandable, and accessible notice of use and explanations.
- Provide explanations as to how and why a decision was made or an action was taken by an automated system.
- Demonstrate protections for notice and explanation.
Human Alternatives, Consideration, and Fallback
The final principle encourages organizations to give people the option to opt-out where appropriate and to have access to someone who can promptly assess and address any issues they may have.
The bill encourages organizations to:
- Provide a mechanism to conveniently opt-out from automated systems in favor of a human alternative, where appropriate.
- Provide timely human consideration and remedy, through a fallback and escalation system, in the event that an automated system fails or produces an error.
- Institute training, assessment, and oversight to combat automation bias and ensure any human-based components of a system are effective.
- Implement additional human oversight and safeguards for automated systems related to sensitive domains.
- Demonstrate access to human alternatives, consideration, and fallback.
A detailed explanation of, and practical steps for achieving, all the principles discussed above are provided in the Blueprint for an AI Bill of Rights document, which also gives an overview of the U.S. laws that support each principle.
It is important to note that:
- The AI Bill of Rights is a non-binding document and does not constitute U.S. government policy;
- It does not modify or direct an interpretation of any existing policy;
- Enforcement measures are not included;
- The appropriate application of these proposed principles depends largely on the context in which the automated system is used;
- In cases where these principles are not appropriate, sector-specific guidance should govern the automated system; for example, an automated diagnostic system should follow health-sector guidance.
How it Helps Industries
This bill will help stakeholders in the various sectors that employ AI technologies become aware of users' rights in the context in which the technology is used. At the planning stage, the algorithms to be used will be thoroughly scoped in relation to their impact on users. There will be more emphasis on inspecting the data practices that determine algorithmic behavior at the relevant stages of a project's life cycle. Exhaustive monitoring, testing, and post-deployment maintenance of models will be encouraged to ensure the safe use of automated systems.
The bill also makes it easier to operationalize the goal of safe systems, through due processes that enhance the explainability of algorithms: clarifying the reasons for using the data, checking representation in the data, and rigorous testing to ensure fair treatment of the target groups. With AI, the concern is scale and how fast problems can escalate; left unchecked, a system can adversely affect people, so it needs to be constantly monitored, tested, and improved.
Perspectives on AI Regulations
There have been different reactions to this bill; some fear that these regulations might lead to stricter AI laws that stifle innovation and disadvantage businesses.
Eric Schmidt, former CEO of Google, said,
“There are too many things that early regulation may prevent from being discovered.”
Mark Surman, Director of the Mozilla Foundation, on the other hand, expressed positive sentiments towards the proposed bill. He believes the frameworks presented will provide regulatory clarity in the AI technology market:
“The AI systems that permeate our lives are often built in ways that directly conflict with these principles. They’re built to collect personal data, to be intentionally opaque, and to learn from existing, frequently biased data sets.”
The EU is seen as ahead of the US on AI regulation: it created the General Data Protection Regulation (GDPR), which many companies have used to transform their data policies, and followed it with the proposed EU AI Act, which is said to have more checks and balances than the AI Bill of Rights. Still, this bill is a fruitful step in the right direction and a great way for the U.S. to protect its citizens, especially less-represented groups, from algorithmic bias and ineffective automated systems.
“It’s never been more important to work on securing the rights for everyone before we become a world truly run by A.I. What rights do we have? What limits should exist on companies and governments alike in a just world? If we are to be the authors of the future, let us write a Bill of Rights worthy of the aspirations of future generations.”
With all the advancements in AI, there is still much to be understood, so it is a good idea to create boundaries around what is and isn't acceptable. Experts can debate the pros and cons of regulation, but it is clear, from the negative effects of AI in the past, that regulations are a welcome way to protect people. They can coexist with innovative work in AI technologies while ensuring that users are protected. The proposed AI Bill of Rights clearly sets the boundaries within which research teams and organizations should operate. Although it is not a legally binding document, it can inform policies and standard practices in the industry. At the moment, the proposed EU AI Act is the closest counterpart for the European market and seems more encompassing, but both are important for safeguarding user rights and reducing the likelihood of faulty AI systems.