AI and ethics: Do more than the right thing.
What we’re saying is you need to take your ethical AI governance beyond regulation.
AI systems have the potential to produce a lot of good, but as they evolve, so do the ethical risks. It’s your duty as a director to act in good faith and build ethics into your AI framework.
Our last article covered AI data privacy risk. This time, we’ll take you through how your organisation can take its ethical AI framework beyond the regulatory changes to come.
Between back-to-back meetings (I have 30 seconds).
Before your next meeting, get an understanding of why you need to bring ethics into your AI governance processes.
Australia is likely to follow the EU and establish risk-based AI regulation, classifying AI systems based on risk levels and setting responsibilities for each classification. More on that here.
Australia has published a set of 8 voluntary ethical principles. These are a good jumping-off point for your ethical AI framework.
Organisations are now expected to champion ethical and transparent AI use. For example, Telstra just became the first Australian company to join UNESCO’s Business Council to promote ethical AI.
Beyond future regulatory punishment, building ethics into your AI framework will do your organisation a lot of good, like:
Building trust with your customers, partners and other stakeholders.
Preventing reputational damage caused by unintended AI consequences.
Keeping pace with rapid innovation and ensuring it’s ethically viable before adoption.
Promoting early identification of ethical risks.
Minimising the financial impact of AI missteps on your organisation's bottom line.
So, what are the key ethical dilemmas you should be aware of? Read on.
With your third coffee (I have 2 minutes).
Skip the AFR skimming today and get up to speed on the key AI ethical risks instead.
Data privacy: If you missed it, here’s all you need to know about AI and data privacy. The risk of privacy breaches grows with the sheer amount of data AI systems gather and create.
Bias and discrimination: AI systems can inherit and amplify the societal biases reflected in their training data, resulting in unfair or discriminatory outcomes.
For example, in 2018, Amazon scrapped its AI recruiting tool after it showed bias against women. It had been trained on ten years of past applications, so it learned the pattern that most successful applicants were men. Similarly, in 2019, Apple Card came under fire after the credit card issuer was accused of offering women lower credit limits than men because of its algorithm.
Transparency and accountability: Many AI systems can be difficult to understand or interpret. If your systems aren’t transparent in how they arrive at decisions, you could lose the trust of your stakeholders.
Autonomy and control: There is a potential for loss of human control as AI becomes more autonomous. You should know whether your systems are making critical decisions without human oversight.
Environmental impact: While AI can help address global environmental challenges, it’s estimated that data centres already consume nearly 3% of the world’s total electricity, and AI alone is estimated to account for 3-4% of global power demand by 2030.
Accountability and liability: If responsibility for an AI system isn’t clearly assigned, fixing its mistakes becomes even more difficult.
Misuse: As AI becomes more powerful, so does the potential for harm when it’s deliberately used against groups and individuals.
So, how can we address these key ethical issues? Read on.
As you walk your dog (I have 5 minutes).
Go above and beyond current regulations and read our step-by-step guide on establishing an ethical AI framework.
Step 1 — Establish clear ethical guidelines.
You can follow Australia’s set of 8 voluntary ethical principles to create your organisation's ethical AI culture:
Privacy protection and security. Check out our step-by-step guide on this here.
Fairness. You might want to consider monitoring your systems for any signs of unfair discrimination against individuals, communities or groups. This covers both a system’s outcomes and its accessibility. More on this in Step 2.
Human-centred values. Essentially, your systems should complement your workforce, not hamper them. You should consider human rights, diversity and the autonomy of individuals in their application. More on this in Step 3.
Transparency and explainability. Your customers, partners and other stakeholders need to be made aware of how they are being impacted by AI and when they are engaging with it. More on this in Step 4.
Human, social and environmental well-being. Your AI systems should produce beneficial outcomes for individuals, society and the environment. Even if your systems were designed for business purposes like increasing efficiency, you still need to manage their positive and negative impacts. More on this in Step 5.
Reliability and safety. You need to establish safety measures for your systems so that they operate within their intended purpose. These measures should be proportionate to their potential risks. More on this in Step 5.
Contestability. When your system harms or impacts a person, group or the environment, there must be mechanisms to challenge its use or outcomes. More on this in Step 5.
Accountability. You need to create a culture of accountability, with responsibilities clearly identifiable for different stages in the AI lifecycle.
Step 2 — Bias detection and mitigation.
While eliminating bias completely is impossible, it’s important for your organisation to be actively working to reduce it. You can do this by training on large data sets that are representative of the real world, and by validating your models and feeding what you learn back into retraining.
One option is IBM Watson OpenScale, which monitors AI models for bias and fairness, providing insights and synthetic data set adjustments to support fairer outcomes.
Another is TensorFlow’s Fairness Indicators, a suite of open-source tools built on top of TensorFlow Model Analysis that helps you measure and mitigate bias in machine learning models.
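To make that monitoring concrete, here’s a minimal sketch in Python of the kind of check these tools automate: comparing selection rates and false positive rates across groups in a model’s output. The column names and figures are hypothetical placeholders, not output from either tool.

```python
# A minimal sketch of the kind of check bias-monitoring tools automate:
# compare outcomes across groups. Column names and data are hypothetical.
import pandas as pd

def fairness_snapshot(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Summarise selection rate and false positive rate for each group."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub["label"] == 0]  # people who shouldn't be selected
        rows.append({
            group_col: group,
            "selection_rate": sub["prediction"].mean(),
            "false_positive_rate": negatives["prediction"].mean() if len(negatives) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical decisions from a screening model (1 = selected, 0 = not selected).
results = pd.DataFrame({
    "gender":     ["F", "F", "F", "M", "M", "M"],
    "label":      [0, 1, 0, 0, 1, 0],   # ground truth
    "prediction": [0, 1, 0, 1, 1, 1],   # model decision
})
print(fairness_snapshot(results, "gender"))
```

Large gaps between groups don’t prove discrimination on their own, but they are the signal to dig into your training data and retrain.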
Step 3 — Human oversight.
To ensure AI does not undermine human autonomy, you can apply one of the three oversight mechanisms recommended by the EU’s High-Level Expert Group on AI, depending on how much autonomy you give your AI system (a simple sketch of an oversight gate follows the list below).
Human-in-the-loop — intervention in every decision cycle of the system.
Human-on-the-loop — intervention during the design cycle and monitoring of operation.
Human-in-command — intervention during the overall activity of the system. This includes the ability to decide when and how to use it, the decision not to use it and the ability to override a decision it makes.
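To show how this can work in practice, here’s a simplified, hypothetical sketch of an oversight gate: the system acts on low-risk decisions automatically and routes everything else to a person for sign-off. The threshold, names and risk scores are placeholders, not a prescribed design.

```python
# A simplified, hypothetical oversight gate: the AI system proposes a decision,
# but anything above a risk threshold waits for human sign-off.
from dataclasses import dataclass

RISK_THRESHOLD = 0.7  # placeholder; set according to your risk appetite

@dataclass
class Proposal:
    decision: str
    risk_score: float  # 0.0 (low risk) to 1.0 (high risk)

def apply_decision(proposal: Proposal, human_approves) -> str:
    """Auto-apply only low-risk decisions; escalate the rest to a person."""
    if proposal.risk_score < RISK_THRESHOLD:
        return f"auto-applied: {proposal.decision}"
    if human_approves(proposal):
        return f"applied after human review: {proposal.decision}"
    return f"rejected by human reviewer: {proposal.decision}"

# Stand-in reviewer that approves everything, for illustration only.
print(apply_decision(Proposal("approve loan", 0.4), human_approves=lambda p: True))
print(apply_decision(Proposal("decline claim", 0.9), human_approves=lambda p: True))
```

The important design choice is where you set the threshold and who is accountable for reviewing what the system escalates.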
Step 4 — Transparency.
You must clearly explain how AI decisions are made and why certain outcomes are reached.
A simple example is CommBank’s Bill Sense feature, which gives customers insights into their saving patterns and bill frequency. CommBank makes clear that Bill Sense predicts bills based on a customer’s previous transactions, so predictions may not always be correct. Plus, customers are told they are always in control and can use, override or disregard predictions.
If your organisation is creating content using AI, you could add Adobe’s Content Credentials “icon of transparency” to the image, video or PDF you created. Viewers can hover over the mark to see information about the content’s ownership, the AI tool used to make it and other production details.
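If your team wants a lightweight starting point before adopting Adobe’s tooling, one hypothetical approach is to write a simple disclosure record alongside every AI-generated asset. The field names below are illustrative placeholders, not the Content Credentials format.

```python
# A hypothetical AI-use disclosure record written alongside a generated asset.
# Field names are illustrative placeholders, not Adobe's Content Credentials format.
import json
from datetime import datetime, timezone

def write_disclosure(asset_path: str, tool_name: str, owner: str) -> str:
    """Write a small JSON sidecar recording how the asset was produced."""
    record = {
        "asset": asset_path,
        "ai_generated": True,
        "tool": tool_name,
        "owner": owner,
        "created": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = asset_path + ".disclosure.json"
    with open(sidecar, "w") as f:
        json.dump(record, f, indent=2)
    return sidecar

# Example: record that a marketing image was produced with a generative tool.
print(write_disclosure("campaign_hero.png", "ExampleImageGenerator", "Acme Pty Ltd"))
```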
Step 5 — Regular audits.
Regular monitoring and testing of your AI systems will allow your organisation to gain feedback on results so you can identify problems and manage ongoing risks. Plus, it’s a chance to assess their overall efficiency and performance.
The Ethical OS Toolkit, developed by the Institute for the Future and the Omidyar Network, is a good basis for your audits. It provides a checklist of eight key risk zones to identify emerging areas of harm, 14 scenarios to spark conversations about the long-term impact of your systems, and seven future-proofing strategies to take ethical action today.
In terms of managing environmental impact, open-source tooling such as the Green Metrics Tool can be used to assess and manage your AI systems’ carbon footprint. As an added benefit, improving your systems’ energy efficiency could also lower costs.
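Before adopting dedicated tooling, a rough back-of-the-envelope estimate can help you gauge whether your AI workloads are worth auditing for energy use. The sketch below uses hypothetical figures: average power draw, runtime and your electricity grid’s emissions factor.

```python
# Rough, back-of-the-envelope carbon estimate for an AI workload.
# All inputs are hypothetical placeholders; use measured values where you can.
def estimate_emissions(avg_power_watts: float, hours: float,
                       grid_kg_co2_per_kwh: float) -> float:
    """Return estimated kilograms of CO2-equivalent for a workload."""
    energy_kwh = (avg_power_watts / 1000) * hours
    return energy_kwh * grid_kg_co2_per_kwh

# Example: a 4 kW training job running for 200 hours on a 0.7 kg CO2e/kWh grid.
kg = estimate_emissions(avg_power_watts=4000, hours=200, grid_kg_co2_per_kwh=0.7)
print(f"Estimated footprint: {kg:.0f} kg CO2e")  # 800 kWh x 0.7 = 560 kg
```

Even a crude estimate like this gives your board a baseline to track and a way to prioritise which systems to audit first.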