Considerations and examples
Responsible AI
AI development offers many possibilities for companies and individuals, but it can also lead to ethical issues and increased regulatory and legal risk. For instance, Amazon engineers spent years working on AI hiring software that, among other tasks, read resumes to identify the key markers of a successful candidate. The program eventually had to be discarded because its models systematically discriminated against women.
As shown in this example, AI development and use can have unintended consequences, which is why responsible AI is so important. (Blackman, 2020)
Tay the Twitter chatbot
In 2016, Microsoft released a chatbot on Twitter called Tay. Tay used machine learning to learn, unsupervised, from its interactions with Twitter users.
This was intended to allow Tay to replicate human communication and personality traits and better engage with users. However, within 24 hours of going live, users fed Tay bigoted rhetoric, turning her from a polite bot into a vehicle for hate speech. (Microsoft, n.d.)
Risk scoring system
Microsoft partnered with a large financial lending company to create a risk-scoring system to help make decisions about loan approvals. To achieve this goal, Microsoft adapted an existing industry model and used it with customer data from the financial company.
An audit discovered that while the system successfully approved only low-risk loans, all of these loans were for male borrowers. Further investigation revealed that the training data reflected the fact that loan officers had historically favoured male borrowers.
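The kind of disparity the audit uncovered can be checked with a very small script. The sketch below uses pandas on a hypothetical table of loan decisions (the column names and values are invented); it is not Microsoft's audit process, just a minimal illustration of comparing approval rates across groups.

```python
import pandas as pd

# Hypothetical loan-decision data; column names and values are illustrative only.
decisions = pd.DataFrame({
    "gender":   ["male", "female", "male", "female", "male", "male"],
    "approved": [1,       0,        1,      0,        1,      0],
})

# Approval rate per group: a large gap between groups is a warning sign
# that the model may be reproducing historical bias in the training data.
approval_rates = decisions.groupby("gender")["approved"].mean()
print(approval_rates)
```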
Facial recognition
It's particularly important to be aware of potential unintended consequences when using sensitive technologies, like facial recognition. Recently, there has been an increase in the demand for facial recognition technology, with many requests coming from law enforcement agencies. Facial recognition technology has many potential applications in this area, such as finding missing children.
However, a government may use these technologies to restrict people's freedoms. For example, this technology could be used to allow constant surveillance of specific people or to discriminate against people based on their age, race or gender.
Reducing poaching
AI is used to help authorities catch wildlife poachers through AI-powered image classification and object detection. This process begins with offline training of the AI model using 70 labelled videos containing animals and poachers. After testing, drones are deployed to fly over wildlife sanctuaries and take photos, which are transmitted to a computer. The images are then sent to the cloud for analysis, and the AI system outputs annotations marking the poachers' locations on the original images.
This method allows authorities to catch poachers in real time, and object detection could also be used to combat the illegal wildlife trade, which is worth an estimated $8 billion to $10 billion each year. (Chui et al., 2018)
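The detection step of such a pipeline can be sketched with an off-the-shelf object detector. The example below is an illustration only: it uses a generic pretrained torchvision model and a placeholder image path, not the custom model trained on the 70 labelled videos or real drone footage.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a general-purpose, pretrained object-detection model.
# The real anti-poaching system uses a model trained on labelled footage of
# animals and poachers; this generic COCO-trained model simply stands in for it.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "drone_photo.jpg" is a placeholder for an image transmitted from a drone.
image = to_tensor(Image.open("drone_photo.jpg").convert("RGB"))

with torch.no_grad():
    predictions = model([image])[0]

# Keep only confident detections and report their bounding boxes, mirroring
# the annotations the system draws on the original images.
for box, score in zip(predictions["boxes"], predictions["scores"]):
    if score > 0.8:
        print(f"Detected object at {box.tolist()} (confidence {score.item():.2f})")
```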
Detecting diseases
Researchers at the University of Heidelberg and Stanford University have developed an AI system that can detect cancer. This system scans images of skin lesions and can determine whether they are cancerous with greater accuracy than professional dermatologists. In addition, AI-enabled wearable devices are being used to identify people with potential early indicators of diabetes with 85% accuracy. These systems work by analysing heart-rate sensor data and comparing it to healthy and unhealthy heart rates.
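As a rough illustration of how heart-rate data can be mapped to a risk prediction, the sketch below trains a simple classifier on invented heart-rate features. The feature names, numbers and labels are hypothetical and far simpler than what the real wearable systems use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features from wearable heart-rate data:
# [resting heart rate, heart-rate variability]. Labels mark whether the wearer
# later showed early indicators of diabetes (1) or not (0). All values invented.
X = np.array([[62, 55], [75, 32], [58, 60], [88, 25], [70, 40], [95, 20]])
y = np.array([0, 1, 0, 1, 0, 1])

# A simple classifier learns to separate "healthy" from "at risk" patterns,
# standing in for the far more sophisticated models used in practice.
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[80, 30]]))  # class probabilities for a new reading
```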
Identifying lead
AI is currently being used in Michigan to predict which houses may still have dangerous lead water pipes. To achieve this goal, the city has provided a team with data describing the attributes of each home and records of each service line (section of pipes).
Using this data, an AI model has been developed and trained to predict the probability that the section of service line connected to a particular home is dangerous. This algorithm has helped determine which homes require monitoring or should be assigned to a pipe replacement crew. (Chui et al., 2018)
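A model of this kind can be sketched with standard tooling. The example below uses scikit-learn with invented home attributes and labels; it only illustrates predicting a lead-risk probability and ranking homes by it, not the project's actual model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical home attributes: [year built, assessed value in $1,000s,
# 1 if the service-line record mentions lead, else 0]. Labels mark homes
# where an excavated line was actually found to be lead (1) or not (0).
# All values are invented for illustration.
X = np.array([
    [1925, 40, 1], [1988, 120, 0], [1950, 60, 1],
    [2005, 150, 0], [1930, 35, 1], [1975, 90, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Predict the probability that a not-yet-inspected home has a lead line,
# then use that score to prioritise monitoring or replacement crews.
candidate_homes = np.array([[1940, 50, 1], [1999, 130, 0]])
lead_risk = model.predict_proba(candidate_homes)[:, 1]
print(lead_risk)
```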