Procedures and Policies
Without a formal AI governance system, the irresponsible or incorrect application of AI can lead to unwanted and even serious consequences for organisations. The risks can be operational, financial, regulatory and reputational. It can also create risks beyond the organisation, such as privacy violations, discrimination, accidents, and manipulation of political systems (KOSA AI, 2021).
A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. The What-If Tool addresses this challenge: it is an open-source application that lets practitioners probe, visualize, and analyze ML systems with minimal coding.
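As an illustration, a minimal sketch of launching the What-If Tool inside a Jupyter notebook is shown below. It assumes a list of tf.train.Example protos named examples and a trained classifier model with a predict_proba-style interface; both of these, and the convert_to_features helper, are hypothetical placeholders, not part of the tool itself.

```python
# Minimal sketch: opening the What-If Tool on a trained model in a notebook.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def custom_predict(examples_to_infer):
    # Map each tf.Example to the model's input format and return per-class
    # probabilities, which is the shape of output the What-If Tool expects.
    # `convert_to_features` and `model` are hypothetical placeholders.
    return model.predict_proba(convert_to_features(examples_to_infer))

config_builder = (
    WitConfigBuilder(examples)          # `examples`: list of tf.train.Example
    .set_custom_predict_fn(custom_predict)
)
WitWidget(config_builder, height=800)   # renders the interactive tool inline
```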
Independent validation
In machine learning, validation is the process of evaluating a trained model on a held-out testing data set. It quantifies how reliably the model produces predictions or outputs, which is important for achieving business objectives.
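A minimal sketch of this validation step, using scikit-learn, might look as follows; the dataset and model are arbitrary placeholders chosen only to make the example runnable.

```python
# Minimal sketch: validating a trained model on a held-out test set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
# Hold out 20% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
# Validation: score the trained model on the unseen test set.
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```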
Independent data, meaning data collected separately from the data the model was trained on, can be used to test AI systems. An algorithm may perform well within its training data environment, yet its accuracy can drop in other situations; this is why independent data validation is so important.
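Continuing the sketch above, one way to surface this is to compare accuracy on the internal test split against accuracy on an independent dataset, for example data from a different site or time period. The load_independent_data loader below is a hypothetical stand-in for however such data is obtained.

```python
# Sketch: in-distribution test accuracy vs. accuracy on independent data.
# Reuses `model`, `X_test`, `y_test` from the previous validation sketch.
from sklearn.metrics import accuracy_score

internal_acc = accuracy_score(y_test, model.predict(X_test))

X_ind, y_ind = load_independent_data()   # hypothetical external dataset
independent_acc = accuracy_score(y_ind, model.predict(X_ind))

# A large gap between the two suggests the model may not generalise
# beyond its training environment.
print(f"Internal test accuracy:    {internal_acc:.3f}")
print(f"Independent data accuracy: {independent_acc:.3f}")
```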
To get another independent perspective, organisations can engage companies that specialise in validating AI systems. Such companies can assess a model's strengths and limitations, suggest ways to manage risk, and provide assurance for managers, supervisory boards and auditors.