AI’s Silent Threat: The Dangers of Letting Employees Use AI Without Boundaries

What we’re saying is this: without a top-down, enterprise-level AI rollout, you’re putting your organisation at risk.

 

While AI has the potential to enhance operational efficiency within your organisation, poorly managed AI systems can cause financial, legal and reputational damage.

Even if you haven’t formally adopted AI, there’s a good chance your employees are already using it, and not securely.

So, avoid negligence and read on for a rundown of how your organisation can put a responsible, practical AI framework in place.

 
 

With your morning smoothie (I have 30 seconds)

Instead of scrolling through LinkedIn or industry press, understand the AI-led operational risks your organisation could be exposed to.

AI systems are not secure by default. Without enterprise-level tech and usage guidelines, your employees are likely leaking private data, risking reputational, legal, and financial fallout.

Unskilled users of AI make mistakes. Without a trained workforce and clear segregation of roles, you’re exposed to incorrect information being shared and acted upon.

Your model is only as good as the data it’s trained on. Crap in, crap out - unless your knowledge sources are structured in an optimal format for AI, the AI can hallucinate and spit back a murky blend of incorrect product, company, and customer data.

Biases present in data can be perpetuated and amplified by AI systems. This can lead to incorrect decisions and financial loss, as well as reputational damage. 

AI without oversight can leave you vulnerable. Heavy reliance on AI, insufficient management or lack of backup mechanisms leaves your organisation exposed to a potential AI model failure.

 

While waiting for your toast to pop out (I have 2 minutes)

See how some organisations have managed AI well and how others have been burned.

Winners:

Mayo Clinic makes sure all its AI-generated insights are reviewed by healthcare professionals before medical decisions are made. The private American medical centre keeps the AI systems that support its diagnostics, patient management, and personalised treatment plans under human oversight to reduce the risk of misdiagnosis. It also keeps its data reliable by regularly updating its models and has implemented an AI ethics board to review algorithms and protect patient safety.


Nestle uses blockchain-integrated AI systems to ensure transparency in its supply chain. The platform, developed in collaboration with OpenSC, allows consumers to trace their food right back to the farm. Nestle also uses AI-powered quality control systems that track food safety throughout the production process.

Losers:

Zillow shut down its AI-powered home-flipping program in 2021 after its model failed to accurately predict housing prices, leading the company to overpay for homes it then struggled to sell at a profit. The real estate company over-relied on the faulty model and, as a result, took a $500 million write-down and laid off 25% of its workforce.


In 2018, IBM’s Watson for Oncology faced criticism for offering unsafe treatment suggestions. The AI system, developed to assist oncologists with treatment recommendations for cancer patients, had been trained on a limited dataset that lacked comprehensive real-world clinical data. The fallout included reputational damage, patient safety concerns, and heavy financial losses, with IBM forced to scale back its ambitions and realign its strategy.

 

After your meeting finishes early (I have 5 minutes)

Use that time you got back to develop a top-down enterprise-level AI framework.

1 — Classify your AI systems based on risk. 

You should identify the risks associated with your organisation’s AI systems, set priorities, and establish governance processes based on the level of those risks.

If you haven’t already read it, our first article on AI and data privacy covered this, and we recommended using the US National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework.

However, you might also want to supplement this framework with risk management software like LogicManager. The platform integrates AI-driven analytics to help you identify, assess and mitigate risks, including those driven by AI.

2 — Establish robust IT infrastructure and cyber protections.

To minimise the risk of model failure and malicious interference, your organisation must be equipped to support AI. 


LogicMonitor can help you enhance the performance and reliability of your IT infrastructure. It provides real-time monitoring and analytics across cloud, on-premises, and hybrid environments so you can proactively detect issues, automate alerting, and identify potential bottlenecks.

3 — Address transparency issues.

The details of the models used in AI systems are often not disclosed or readily available. Understanding how your models generate results is crucial for supporting their fairness, accuracy and compliance. 


You could leverage C3 AI for Enterprise, which integrates Local Interpretable Model-agnostic Explanations (LIME) into its platform. This technique approximates the model locally with a simpler, interpretable model to explain each individual prediction.
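
To illustrate the technique itself (separate from any vendor platform), here is a minimal sketch of LIME explaining a single prediction, using the open-source lime and scikit-learn Python packages; the dataset and classifier below are stand-ins, not a recommended setup.

# Minimal LIME sketch: explain one prediction from a tabular classifier.
# The dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Which features pushed this particular prediction towards each class?
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())

The output is a list of feature-weight pairs for that one prediction, which is the kind of per-decision explanation your review and compliance processes can actually work with.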

4 — Equip employees and other stakeholders for effective and responsible use.

To ensure proper oversight and use of AI systems, you should train your employees on how these systems work, when and how to use them, and how to interpret or verify their outputs.

You should also provide compliance and legal teams with training so they can identify violations and reinforce governance guidelines.

5 — Manage and enhance the quality of the data you use.

You should continuously update your datasets to reflect new information and changes in your industry.

Great Expectations is an open-source tool that can help you ensure the data you use meets quality standards. The tool can aid you in data validation, profiling and documentation.
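
To make that concrete, here is a minimal sketch of a data-quality check using Great Expectations’ classic pandas-style API; the API has changed across versions, and the column names here are illustrative stand-ins.

# Minimal data-quality sketch using Great Expectations' pandas-style API.
# Column names are illustrative; the exact API differs across versions.
import pandas as pd
import great_expectations as ge

raw = pd.DataFrame({
    "customer_id": [101, 102, None, 104],
    "order_value": [250.0, -10.0, 80.0, 40.0],
})

df = ge.from_pandas(raw)

# Declare the expectations your AI pipeline relies on.
id_check = df.expect_column_values_to_not_be_null("customer_id")
value_check = df.expect_column_values_to_be_between("order_value", min_value=0)

# Fail fast if the data feeding your models doesn't meet quality standards.
print(id_check.success, value_check.success)

Checks like these can run every time new data lands, so poor-quality records are caught before they ever reach a model.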

You might also consider using a data labelling platform like LabelBox to support you in annotation tasks. The platform also has quality control features to ensure accurate and unbiased data labelling.

If your organisation lacks sufficient data, synthetic users that augment your real-world datasets could be a solution. Our third article breaks this down in more detail. 

6 — Create a culture of ethical AI usage.

It’s important to build ethics into your AI framework beyond the risk of regulatory punishment. Doing this can help you foster trust with stakeholders, prevent reputational damage, and minimise other impacts on your organisation’s bottom line. 

Our second article provides a more detailed framework for ethical AI governance. 

7 — Regularly test and validate.

You should regularly test your AI systems so they function as intended. 

You can use Datadog to help you audit, detect and respond to operational risks. Their monitoring and analytics platform provides real-time visibility into your AI systems. 

Adding an appropriate degree of human oversight is also crucial to validating model outputs. You can use the EU High-Level Expert Group’s three human oversight mechanisms (human-in-the-loop, human-on-the-loop, and human-in-command) as a starting point.
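
To make regular testing concrete, here is a minimal sketch of a scheduled, automated check on model performance; load_production_model and load_evaluation_set are hypothetical placeholders for however your organisation loads its model and a held-out evaluation set.

# Minimal sketch of a scheduled regression test for a deployed model.
# load_production_model() and load_evaluation_set() are hypothetical helpers.
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # agreed minimum performance before humans are alerted


def test_model_still_functions_as_intended():
    model = load_production_model()          # hypothetical helper
    X_eval, y_eval = load_evaluation_set()   # hypothetical helper

    predictions = model.predict(X_eval)
    accuracy = accuracy_score(y_eval, predictions)

    # If performance drifts below the floor, the check fails and the issue
    # is escalated for human review rather than acted on silently.
    assert accuracy >= ACCURACY_FLOOR, f"Model accuracy dropped to {accuracy:.2%}"

A check like this can run on a schedule or before every model update, complementing the human oversight mechanisms above.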

8 — Continuously monitor and maintain.

You should create a culture of regular observation so you can identify performance issues and anomalies early.

We recommend using DarkTrace to help your organisation be proactive and adaptive to evolving AI-driven risks. The platform leverages advanced AI and machine learning to provide real-time detection and autonomous response to anomalies. 

9 — Monitor third-party use of AI.

Your organisation needs to know which of your partners use AI and how they manage their AI-related risks, so you understand your potential exposure.

10 — Monitor the regulatory landscape.

Your organisation should know its current compliance obligations within Australia and how they may evolve. It’s also important to watch the global landscape, even if you don’t operate overseas, as it may indicate what’s to come.


Our first article covers the current regulatory landscape in more detail.

 