Guiding principles
Due to the societal implications of AI, companies, governments, and researchers have a responsibility to consider and mitigate its potential unintended consequences. Some organisations, such as Microsoft and Deloitte, have created internal policies and principles to guide their development or use of AI technology.
Fairness
It's important to design AI systems to be fair. They should treat everyone equitably rather than affecting different groups of people in different ways, and they should maintain neutrality even when the underlying data is biased.
For example, as we noted earlier, when AI was used to approve loan applications, it prioritised one gender and race over others based on historical instances of bias in the data.
A fair AI program would treat all applications objectively, regardless of data bias.
Mitigating bias
To create and use AI fairly, companies and individuals can prevent bias by:
Understanding how bias can be introduced and how it can affect AI decisions. For example, bias can be introduced through the learning process itself, as in the Tay example we discussed previously, or through biased data, as in the loan example.
Using diverse data that reflects the diversity found in the real world. In addition, creating diverse teams, in terms of both demographics and skillsets, can help avoid and mitigate AI bias. In many historical cases, the first people to notice these issues of bias have been developers and users who are part of minority groups themselves (such as female employees in tech teams). By supporting diversity in AI teams, issues around unwanted bias may be detected earlier, leading to increased opportunities to mitigate bias.
Building AI systems that can learn without developing bias and using tools that can detect and mitigate bias. For example, IBM's AI Fairness 360 library has a range of techniques for improving fairness and detecting bias in models. Some tools include preprocessing algorithms, which aim to reduce bias in the data itself before an AI system processes it. Others include algorithms designed to penalise unwanted bias while building the model or to balance outcomes after a prediction. The best choice for a particular situation will depend on various factors relating to the dataset, the AI system itself, and its application. A minimal example of this kind of fairness check appears after this list.
Understanding the limits of AI and supplementing AI decisions with sound human judgment. It's important to recognise that our data, models, and technical solutions to bias all have limitations. Even if developers and users employ the best practices in design and bias detection, a risk of unwanted bias in an AI system will remain. Ultimately, humans need to be held accountable for decisions made with AI systems that affect others.
(McKenna, 2021; Microsoft, n.d.)
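To make the idea of detecting bias concrete, the sketch below computes two widely used group-fairness measures, statistical parity difference and disparate impact, on a set of model decisions. The loan-approval column names and sample data are hypothetical, and this is only a minimal illustration; libraries such as IBM's AI Fairness 360 provide more thorough implementations of these and many other metrics.

```python
import pandas as pd

def group_fairness_report(df: pd.DataFrame, outcome: str, group: str,
                          privileged_value) -> dict:
    """Compare favourable-outcome rates between a privileged group and everyone else.

    outcome: column of 0/1 decisions (1 = favourable, e.g. loan approved)
    group:   column holding the protected attribute (e.g. gender)
    """
    privileged = df[df[group] == privileged_value]
    unprivileged = df[df[group] != privileged_value]

    rate_priv = privileged[outcome].mean()
    rate_unpriv = unprivileged[outcome].mean()

    return {
        # Difference of approval rates; 0 means parity between the groups.
        "statistical_parity_difference": rate_unpriv - rate_priv,
        # Ratio of approval rates; values well below 1 suggest disparate impact.
        "disparate_impact": rate_unpriv / rate_priv if rate_priv else float("nan"),
    }

# Hypothetical usage with loan-approval decisions produced by a model:
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
})
print(group_fairness_report(decisions, outcome="approved",
                            group="gender", privileged_value="M"))
```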
Reliability and safety
AI must be designed to be reliable, safe, and consistent, even in unprecedented circumstances. AI systems need to act as they were originally intended, respond safely to unexpected conditions, and resist harmful manipulation. Developers and users must be able to check that AI systems are behaving as expected under real-world operating conditions.
For example, when used to diagnose and monitor health conditions, AI must read and interpret patient data accurately while ignoring physiological differences that are irrelevant to the diagnosis.
Improving reliability and safety
Steps to mitigate unreliability and improve safety include:
Test to ascertain AI behaviour in a variety of circumstances. How AI systems behave and the type of conditions they can handle reliably and safely depend on the situations that developers anticipate and test for during design and testing. It's important to test for performance failures and ensure that AI systems don't evolve in ways that differ from original expectations (a simple robustness test of this kind is sketched after this list).
Perform ongoing maintenance of systems. Responsible AI does not end after testing and deployment. It's just as important that companies properly operate, maintain, and protect their AI systems over time. If AI systems aren't appropriately maintained, they are more likely to become unreliable or inaccurate. As such, it's essential to consider long-term operations and monitoring practices every time AI is used.
Ensure people are in control. As AI should ultimately augment and enhance human capabilities, people should play a key role in making decisions about how and when an AI system is deployed, as well as deciding if it's safe to continue to use it over time. Human judgment can also help to detect potential blind spots and biases in AI systems. (Microsoft, n.d.)
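As an illustration of testing AI behaviour under varied conditions, the sketch below runs two simple checks against a stand-in model: one verifies that small input perturbations rarely flip its decisions, and one verifies that extreme, out-of-range inputs still produce valid outputs. The model, data, and thresholds are hypothetical placeholders for whatever system and tolerances apply in practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Train a stand-in model on synthetic data; in practice this would be the real system.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def test_prediction_stability(model, X, noise_scale=0.01, tolerance=0.05):
    """Check that tiny input perturbations rarely flip the model's decisions."""
    baseline = model.predict(X)
    perturbed = model.predict(X + rng.normal(scale=noise_scale, size=X.shape))
    flip_rate = np.mean(baseline != perturbed)
    assert flip_rate <= tolerance, f"{flip_rate:.1%} of decisions flipped under small noise"

def test_handles_extreme_inputs(model, n_features=4):
    """Check that out-of-range inputs still produce valid outputs rather than errors."""
    extremes = np.array([[1e6] * n_features, [-1e6] * n_features])
    preds = model.predict(extremes)
    assert set(preds).issubset({0, 1})

test_prediction_stability(model, X)
test_handles_extreme_inputs(model)
print("Robustness checks passed.")
```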
Privacy and security
It's important to design AI that protects privacy and handles the data it collects responsibly and securely. Privacy and data security issues require close attention with AI technology, as access to data is necessary for AI systems to make reliable predictions and decisions.
For example, AI that collects personal data on online retail customers should be programmed to do so only once it has received user consent (for instance, once the user has enabled the relevant cookies on the website). The AI should then maintain data security in line with applicable laws and any agreement with the consumer.
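As a rough illustration of consent-gated data collection, the sketch below only stores a customer's browsing events for later use by a recommender system when the user has opted into the corresponding cookie category. The class and field names are hypothetical; a real implementation would follow the specific consent framework and legal requirements that apply.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Cookie/consent categories the user has explicitly opted into (hypothetical)."""
    analytics: bool = False
    personalisation: bool = False

@dataclass
class CustomerEvent:
    customer_id: str
    page_viewed: str

class EventCollector:
    """Collects behavioural data for an AI recommender only when consent allows it."""

    def __init__(self):
        self.stored_events: list[CustomerEvent] = []

    def record(self, event: CustomerEvent, consent: ConsentRecord) -> bool:
        # Personal browsing data is stored only if personalisation cookies are enabled.
        if not consent.personalisation:
            return False
        self.stored_events.append(event)
        return True

collector = EventCollector()
ok = collector.record(CustomerEvent("c-123", "/shoes"),
                      ConsentRecord(personalisation=False))
print(ok, len(collector.stored_events))  # False 0 -> nothing stored without consent
```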
Maintaining privacy
Steps to maintain privacy include:
Comply with privacy laws. AI systems must comply with privacy laws that require transparency about data collection, use, and storage. Privacy laws also dictate that users can decide how their data is used. As such, companies that develop and use AI should invest in effective compliance processes to ensure that data collected and used by AI systems is handled appropriately.
Consider whether crucial decisions made with AI might undermine users' trust. Laws are constantly changing in response to concerns raised by consumer advocates about how AI uses data. Companies may also need to consider implementing appropriate assurance and governance policies to keep up with changing laws and maintain consumer trust.
Use AI as a tool to support decision-making instead of a tool to make decisions. As with many other guiding principles, human oversight is essential in maintaining privacy and security. Using a human-centred approach provides the opportunity for safeguards to ensure the responsible use of AI technology.
(Leonard and Nichols, 2022; Microsoft, n.d.)
Inclusiveness
The principle of inclusiveness refers to designing AI to be accessible to all. People with a diverse range of backgrounds and abilities use AI, so it must be designed with this in mind, to include and address various human needs and experiences. For the one billion people with disabilities worldwide, AI technologies can provide numerous opportunities, such as improving access to education, government services, and employment.
For example, AI-based solutions such as real-time speech-to-text conversion, visual recognition technology, and predictive text are already helping individuals with hearing, visual, and other impairments. In addition, there are multiple projects where AI is embedded into wearable devices that decode social information for people on the autism spectrum.
Enhancing inclusiveness
Steps to enhance inclusiveness include:
Build for accessibility, keeping in mind the variety of needs and experiences of the people who will use and hopefully benefit from the technology. Overall, the more people involved in AI processes, the better the outcome, because a wider range of skills and experiences informs the design.
Identify any unintentional barriers to accessibility through testing. Including diverse people and data in AI testing will help identify potential barriers to accessibility. Use this information to make changes and improve the AI design so that it can be used by a more diverse group of people, as found in the real world.
Consider how fairness is closely linked to inclusiveness. By taking steps to improve the fairness of an AI system, developers and users can help to make the system more inclusive. On the other hand, keeping inclusivity front of mind in AI development and use will help to enhance fairness, as the resulting system will cater to the needs of a broader range of individuals.
Transparency
AI needs to be transparent and intelligible. In other words, people must understand how and why AI performs certain functions. When we know how and why AI performs certain functions, we can also use this information to identify flaws or biases in its design.
Returning to the example of Tay, its designers were able to explain Tay's actions because they realised it had functioned as expected: Tay learnt from the data it gathered in human interactions. What the programmers had not anticipated was the behaviour of the humans it interacted with. Click on this link to view an example of transparency and how Microsoft communicated with stakeholders after the introduction of Tay.
Maintaining transparency
Steps to maintain transparency include:
Improve AI intelligibility. When AI systems help make decisions that significantly impact people's lives, it's important that the people affected by these decisions can understand how they were made. For example, a bank could use an AI system to decide whether a person is creditworthy, or a company might use an AI system to determine the most qualified candidates to hire. Improving intelligibility in these cases could involve a range of practices of varying complexity, from simply informing those affected that decisions are being made with the help of AI systems to providing explanations of how a system arrives at its outputs from its inputs (a simple example of this kind of explanation is sketched after this list).
Communicate honestly with stakeholders. To maintain transparency and trust in both companies and AI systems, companies need to communicate honestly with a range of people involved in AI development and use, such as customers, employees at all levels, and third-party providers. This includes communicating about issues, mistakes, and lessons learned, as Microsoft did regarding Tay (you can view this example by following the link above). It also involves informing stakeholders of AI advances and successes.
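To illustrate one basic form of intelligibility, the sketch below trains a simple linear credit-scoring model on synthetic data and then reports how much each feature pushed an individual decision towards approval or decline. The feature names and data are hypothetical, and real credit decisions call for far more rigorous explanation methods; the point is only that per-decision explanations can be generated and shared with the people affected.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical credit data: income, existing debt, years of credit history.
feature_names = ["income", "existing_debt", "credit_history_years"]
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 1] + 0.3 * X[:, 2] > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(applicant: np.ndarray) -> None:
    """Print which features pushed a single credit decision up or down.

    For a linear model, each feature's contribution to the decision score is
    simply its (scaled) value multiplied by the learned coefficient.
    """
    scaled = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = scaled * model.coef_[0]
    decision = "approve" if model.predict(scaled.reshape(1, -1))[0] == 1 else "decline"
    print(f"Decision: {decision}")
    for name, value in sorted(zip(feature_names, contributions),
                              key=lambda item: abs(item[1]), reverse=True):
        direction = "towards approval" if value > 0 else "towards decline"
        print(f"  {name}: {value:+.2f} ({direction})")

explain_decision(np.array([1.2, -0.4, 0.1]))
```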
Accountability
AI programs have an increasing impact on individuals' lives, decisions, and choices. As we've seen in this module, AI is used to detect medical conditions, approve loans, and protect the environment. Designers need to remain accountable for how their AI systems operate, particularly when humans act on the information these systems provide.
For example, an individual may opt for surgery to remove a malignant tumour detected by AI, but what happens if the AI is wrong? The consequences could be devastating. Part of human-centred AI is ensuring that humans, not AI, make the final decisions. In our medical scenario, this could mean the patient seeks a second, human opinion before opting for surgery that may be unnecessary or life-threatening.
Humans must therefore remain accountable for how AI operates and the level of autonomy it has.
Maintaining accountability
Steps to maintain accountability include:
Strive to meet industry standards and develop accountability norms that prioritise human control over AI autonomy. These norms and standards can ensure that AI systems aren't solely responsible for decisions that impact people's lives and that humans maintain meaningful control.
Companies should also consider creating a dedicated AI internal review body to provide oversight and guidance regarding important questions about the development and use of AI systems. Such bodies can also help create best practices for documenting and testing AI systems during development and provide guidance on how an AI system will be used in sensitive cases. They can also help monitor AI systems to keep them up-to-date and ensure they're still working as intended.