Governance
Clearly defined policies and lines of accountability are crucial to ensuring that artificial intelligence (AI) systems are created and deployed responsibly. In recent years, laws, along with perspectives from companies, academia, and other technical groups, have shaped self- and co-regulatory approaches that help curb inappropriate AI use.
As companies across all industries increasingly adopt AI, both its strengths and its limitations are becoming more apparent. The benefits of AI extend beyond making everyday life easier; it can also drive social transformation in sectors such as healthcare, finance, and justice. When we discuss the limitations of AI, we are mostly referring to a lack of responsibility in how it is used.
AI is a fast-developing technology, and there is a fine line between responsible and irresponsible use of it within organisations. Each organisation uses AI differently based on its business needs and target audience, and each has different resources and limitations. This is why governance practices surrounding AI are needed: they provide oversight and guidance within organisations to ensure that AI is used responsibly. (Pratt, 2021)
It's important to note that AI governance should not be confused with AI regulation. AI regulation refers to the laws and regulations enforced by governmental institutions concerning AI, applicable to all organisations under their jurisdiction. In contrast, AI governance refers to how AI is managed within an organisational setting.
AI governance and the responsible use of AI begin with organisations developing their own guidelines and ensuring that their AI initiatives align with their values and the positive impact they seek to make, before choosing and implementing a system of governance. (Kompella, 2022)
Establishing a governance system
Our discussion of the risks of AI misuse and the lack of governance has shown that every organisation needs a governance system that can serve as the basis for responsible AI. It's also important to distinguish between a governance system and management. A governance system for AI should include, but is not limited to:
developing and implementing policies
setting ethical and regulatory standards
defining best practices
building a culture of integrity
providing advice
educating employees.
Simply put, a governance system oversees how decisions are made, while management is the act of making those decisions. (Rao & Golbin, 2021)
Establishing a governance system in any organisation should start with establishing ethical principles. Ethical principles serve as an important guide for employees - whether it is a marketing team's decision about which ad campaign to run or a data scientist's judgement about where AI can be applied and how it should be built.
As we mentioned, a gap exists between organisational ethics and accountability, and the laws and regulations around ethics and technology are still relatively nascent.
Establishing ethical principles will not only help to close that gap at an organisational level but also provide crystal-clear guidance in an environment where 'right' and 'wrong' can be ambiguous, and the line between innovative and offensive is thin. (Burkhardt et al., 2019)