When it comes to data privacy: Risk it for the biscuit.
What we’re saying is that you need to take a risk-based approach to AI and data privacy…
As applications and the adoption of AI grow, more data is gathered and used, increasing the risk of privacy and cybersecurity issues.
Current Australian laws and regulations around AI and data privacy are confusing and, as you’ll discover, largely outdated. But if the European Union’s pioneering AI Act is anything to go by, change is coming. Read all you need to know for your organisation below.
With your morning coffee (I have 30 seconds).
Instead of scrolling through LinkedIn or sifting through industry press, have a sip of these key points.
Australia hasn’t enacted any AI-specific regulations. It currently relies on existing laws that apply to AI as a by-product.
Forget the US. The EU’s AI Act is now the de facto standard for global AI regulation. The act classifies AI systems by risk level and sets responsibilities for each classification, even banning outright those systems that pose an unacceptable risk to fundamental rights. It also establishes new administrative infrastructure, including an AI Office and an AI Board.
Before Australia follows in its footsteps, you need to establish an AI governance framework based on risk. Otherwise you risk a compliance breach, as when Elon Musk’s X was taken to court for using Europeans’ data to train its models without consent.
Tools can help you manage AI data privacy within your organisation. These include OneTrust, BigID and enterprise-level ChatGPT sandboxed licences.
The AI lifecycle is complex, which makes addressing data privacy difficult. This is why your governance process must be thorough and clearly mapped. Read on for more context on AI regulation.
On the Uber ride home (I have 2 minutes).
Before you flick through emails, use this small moment of peace to catch up on the current state of AI and data privacy regulation.
The key issues with AI and data privacy are liability and data rights. From tool development to implementation, it’s difficult to determine which organisation in the chain is responsible or liable for a privacy breach at each stage. And for the data generated or processed, determining which party holds ownership rights, and then enforcing those rights, poses further legal challenges.
Existing Australian laws still have teeth. For example, in 2021, Clearview AI was found to have breached the Privacy Act 1988 by scraping facial images and biometric templates of Australians from the web without consent to build its facial recognition tool. Other relevant laws include the Online Safety Act 2021, the Australian Consumer Law, the Corporations Act 2001, and various intellectual property and anti-discrimination laws.
Australia has already indicated it will adopt a risk-based framework like the EU’s. In 2019, the government published a set of voluntary AI Ethics Principles and, in 2023, launched a consultation into safe and responsible AI practices, followed by an Interim Response. Off the back of that response, it announced an AI Expert Group to assist in the shift.
The US is much like Australia. You’ve heard of cases such as OpenAI facing lawsuits from newspapers, or Scarlett Johansson threatening legal action over OpenAI’s voice assistant feature. For now, though, the US still relies on existing federal laws and guidelines, with an aim to introduce AI legislation and a federal regulatory authority.
The UK is taking a different approach… or is it? The UK prioritises a flexible framework over comprehensive regulation, with sector-specific laws so it remains adaptive to rapid change. However, in July, the King’s Speech proposed a set of binding measures placing requirements on powerful AI models.
While on the treadmill (I have 5 minutes).
Get a head start on changing regulations and read our step-by-step guide on establishing an AI and data privacy governance framework.
1 — Define AI and its role within your business. You don’t need to be able to code and create models, but you must be digitally literate to avoid breaching your duties.
Assign an AI Governance Lead: the director responsible for overseeing the organisation’s AI strategy and risk management processes.
Develop a functional definition of AI within your organisation. It sounds silly, but subtle differences in what your company defines as AI can impact its applications.
Audit your current and desired AI usage. You need to understand the specific types of AI technology your business uses and identify the risks of each type. A sketch of what that inventory might look like follows this list.
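What the audit produces is essentially an inventory. As a purely illustrative sketch (the record fields and the example entry below are our own invention, not any prescribed standard), a simple register in Python might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI usage register."""
    name: str       # what the system is, e.g. a support chatbot
    vendor: str     # who builds or supplies it
    purpose: str    # what the business uses it for
    data_categories: list[str] = field(default_factory=list)  # personal data it touches
    risks: list[str] = field(default_factory=list)            # privacy risks identified

# Example entry: a third-party screening tool and the risks flagged for it.
register = [
    AISystemRecord(
        name="resume screening assistant",
        vendor="third-party SaaS",
        purpose="shortlisting job applicants",
        data_categories=["names", "employment history"],
        risks=["indirect discrimination", "personal data stored offshore"],
    ),
]

for record in register:
    print(f"{record.name}: {len(record.risks)} risk(s) flagged")
```

Even a flat list like this gives your AI Governance Lead something concrete to review, and each entry’s risks feed straight into step 3.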
2 — Know your compliance obligations. You’re halfway there if you’ve got this far into the article.
Understand the current regulatory landscape within Australia. Do your research into the various laws we mentioned above.
Understand the global regulatory landscape. Your governance framework must comply with regional laws if you operate overseas.
Predict the future. Or rather, going off the EU Act, make your governance framework risk-based. You can check your obligations under the EU Act using its compliance checker; the act’s risk tiers are sketched below.
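The act’s four tiers are real (unacceptable, high, limited and minimal risk); the mini-map below is our own simplified illustration of them, with example systems drawn from the act’s commonly cited categories rather than an exhaustive or authoritative list:

```python
# The EU AI Act's four risk tiers. Tier names come from the act itself;
# the obligations and examples are simplified here for illustration only.
EU_AI_ACT_TIERS = {
    "unacceptable": {
        "obligation": "prohibited outright",
        "examples": ["social scoring", "manipulative or exploitative systems"],
    },
    "high": {
        "obligation": "risk management, conformity assessment, human oversight",
        "examples": ["recruitment tools", "credit scoring"],
    },
    "limited": {
        "obligation": "transparency duties (disclose that AI is in use)",
        "examples": ["chatbots"],
    },
    "minimal": {
        "obligation": "no new obligations",
        "examples": ["spam filters"],
    },
}

def obligations_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return EU_AI_ACT_TIERS[tier]["obligation"]

print(obligations_for("high"))  # risk management, conformity assessment, human oversight
```

If Australia does follow the EU’s lead, classifying each system in your audit against tiers like these is the natural first pass.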
3 — Establish a governance framework based on risk. Luckily, there are several AI risk frameworks published by public and private sector entities that you can align with. As mentioned, Australia has published a set of 8 voluntary AI Ethics Principles that can inform overall business practices related to AI.
However, we think the US National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework is particularly helpful. It consists of 4 core functions (a toy sketch of how they translate into practice follows the list):
Map: Understand the context in which AI is implemented, categorise the risks related to each AI system, and identify its intended uses.
Measure: Identify risk measurement metrics, evaluate AI systems' trustworthiness, and create mechanisms to track those risks.
Manage: Develop strategies that prioritise risks and act upon them based on their projected impact.
Govern: Cultivate a culture of risk management.
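To make the four functions concrete, here is a toy sketch (our construction, not NIST’s) of how a single privacy risk might be framed as it moves through Map, Measure and Manage, with Govern setting the review cadence around it. All field names, values and thresholds are invented for illustration:

```python
# A toy risk-register entry structured around the AI RMF functions.
risk_entry = {
    # Map: context and categorisation
    "system": "customer churn predictor",
    "context": "marketing decisions about existing customers",
    "risk": "model trained on personal data without a clear consent basis",
    # Measure: a metric and a tracking threshold
    "metric": "share of training records with verified consent",
    "current_value": 0.72,
    "threshold": 0.95,
    # Manage: prioritised response
    "priority": "high",
    "action": "re-source training data; exclude unverified records",
}

# Govern: culture is hard to encode, but review cadence can be.
REVIEW_INTERVAL_DAYS = 90

if risk_entry["current_value"] < risk_entry["threshold"]:
    print(f"Escalate: {risk_entry['risk']} (priority: {risk_entry['priority']})")
```

The point isn’t the code itself but the discipline: every risk gets a context, a measurable threshold, a prioritised action, and a date it will be looked at again.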
4 — Use AI data privacy tools to optimise governance.
OneTrust — a privacy management platform we recommend for larger businesses. It helps you manage compliance with global privacy and data governance requirements, offers data mapping, assessment and incident management tools, and can be adapted as Australian standards evolve. Beyond data privacy, it also includes a consent management platform, with cookie and user consent features and tools to keep you compliant with consent regulations.
TrustArc — a similar privacy management platform that we recommend for smaller businesses. Like OneTrust, it helps your organisation comply with data privacy regulations, though it does not offer audit management solutions. On the upside, it has been around a lot longer and has a dependable reputation.
ChatGPT Enterprise — essentially ChatGPT tailored to your business. It offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customisation options, and more. The main benefit is that you’ll own and control your data: OpenAI won’t use your business’s data and conversations to train its models.
Adobe Creative Cloud for Enterprise — Adobe Firefly, Adobe’s AI creation tool, comes with an indemnity clause stating that Adobe will cover copyright claims related to work generated with the tool. That takes much of the lawsuit fear out of using it for your organisation’s creative work.