September 24, 2024

Shadow AI: Shadow IT With Added Risks

Shadow AI is the use of AI technology by employees without their organisation's knowledge. While it has clear potential benefits, it also comes with a range of novel risks, so organisations need to put governance in place around the use of AI.

6 Minute Read

You may be familiar with the term Shadow IT, and if so, you can probably guess what Shadow AI is: the use of AI technology without the knowledge or involvement of an organisation’s IT function. It has grown rapidly, and it presents some of the same risks as shadow IT plus some new ones. It’s not just a problem for IT: it can affect a business financially and reputationally.

When we talk about shadow AI we’re primarily referring to the use of large language models (LLMs) like ChatGPT, Google Gemini and Claude. This is the form of AI that has the highest profile and is easily accessible and usable by non-technical employees.

It’s clear that adoption of AI has exploded. As far back as September 2023, Salesforce estimated that 49% of people had tried using AI, and the technology analysts Forrester predict that 60% of employees will try AI tools at work this year.

It’s also clear that organisations may not be aware of just how much AI their employees are using. The cybersecurity vendor Cyberhaven, in its AI Adoption and Risk Report published a few months ago, identified that 74% of ChatGPT use at work was through non-corporate accounts; for some other AI tools the figure was even higher. The inference is that if a non-corporate account is being used, the employee is not using AI as part of an initiative sanctioned by their organisation.

So, lots of people are using AI at work, and their employers may not be aware of it. Why does that matter? Well, AI presents risks. As with shadow IT, shadow AI can mean that avoidable costs are incurred because different groups are buying similar tools, or subscriptions are not purchased cost-effectively. Careless use of AI tools can also increase the chances of the type of data breach that shadow IT may cause, but there are other risks that are specific to AI, including:

  • The loss of company IP
  • Commercial risk
  • Reputational risk
  • Regulatory risk
  • Litigation

Let’s examine these.

IP Risk

The loss of IP is a real risk with large language models like ChatGPT. Because LLMs work by ingesting huge amounts of data and then analysing and categorising it, it’s entirely possible that proprietary information fed into an LLM could be presented to other users of that LLM. Organisations that you would expect to have a good handle on this risk, like Apple and Samsung, have banned employees from using LLM tools for exactly this reason. Even if your company doesn’t have the equivalent of the recipe for Coca-Cola to protect, it probably has very sensitive financial, customer or product information that it wouldn’t want shared with competitors or the general public. If an organisation doesn’t have control over what documents are being uploaded to LLMs, then this kind of data breach is very possible.

Commercial Risk

The latest LLMs, and the tools powered by them, make it easy for non-technical users to create customer-facing applications such as AI-powered chatbots. While this may have customer service benefits, it can also create risks. One of these is commercial, as when an airline was forced to give a discount after its AI chatbot invented a policy. While it’s possible to put in safeguards to reduce the risk of customers deliberately or inadvertently using a chatbot to get discounts or free services, doing so requires a level of knowledge on the part of the employee creating the chatbot.

Also, people are constantly trying to find ways around those safeguards, as demonstrated when someone familiar with AI technology got a car dealer’s AI chatbot to sell them a brand-new SUV for $1. Luckily, this was a prank rather than an attempt to get a great discount, as was someone else’s successful attempt to negotiate a buy-one-get-one-free offer on new cars, but it illustrates the risk. A simple illustration of the kind of safeguard involved is sketched below.
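As a purely illustrative sketch, one shape such a safeguard can take is to check the model’s draft reply for unauthorised commercial commitments before it reaches the customer, and hand the conversation to a human if anything looks risky. The function and pattern names here are assumptions for illustration, not a real vendor API:

```python
# A minimal sketch of an output guardrail for a customer-facing chatbot.
# call_llm and DISALLOWED_COMMITMENTS are illustrative assumptions,
# not a real provider's API.

import re

# Patterns suggesting the model is making a commercial commitment
# the business never authorised (prices, discounts, refunds).
DISALLOWED_COMMITMENTS = [
    r"\$\s*\d",                            # any dollar amount
    r"\b\d{1,3}\s*%\s*(off|discount)\b",   # percentage discounts
    r"\b(refund|free of charge|no cost)\b",
    r"\blegally binding\b",
]

FALLBACK = ("I'm not able to confirm pricing or discounts here. "
            "Let me connect you with a member of our team.")

def call_llm(user_message: str) -> str:
    """Stand-in for a real LLM call; replace with your provider's API."""
    return "Sure, I can sell you this SUV for $1 - a legally binding offer!"

def guarded_reply(user_message: str) -> str:
    """Run the model, then block replies that make unauthorised commitments."""
    draft = call_llm(user_message)
    for pattern in DISALLOWED_COMMITMENTS:
        if re.search(pattern, draft, flags=re.IGNORECASE):
            return FALLBACK  # escalate to a human instead of committing
    return draft

if __name__ == "__main__":
    print(guarded_reply("I'd like to buy a new SUV for one dollar."))
```

In a real deployment, an output filter like this would sit alongside other controls, such as restricting what the chatbot is allowed to discuss in the first place, which is exactly the kind of knowledge a non-technical employee building a chatbot may not have.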

Reputational Risk

Poorly supervised AI efforts can also damage an organisation’s reputation. A well-known example of this in the UK is DPD’s customer service chatbot that was persuaded to become very sweary.

More serious examples include the City of New York’s chatbot giving business owners incorrect, and in some cases illegal, advice, and the group of academics who provided information to an Australian parliamentary inquiry that falsely implicated KPMG and Deloitte in accounting scandals. The academics had relied on information from an LLM that was entirely fabricated.

This type of plausible-sounding but incorrect information is called a “hallucination”, and it’s so inherent to how LLMs work that the Forrester article mentioned above predicts insurers will start offering hallucination insurance.

Regulatory Risk

Fear of falling foul of regulations has led a number of financial institutions to ban the use of LLMs, the main concern being the potential for inadvertent sharing of sensitive financial data. Health is another heavily regulated industry that has seen AI initiatives cause problems, such as a chatbot providing potentially dangerous advice to people with eating disorders.

Litigation

AI has the potential to expose organisations to litigation in a couple of ways. Firstly, there’s the risk of inadvertently infringing someone else’s IP (the flip side of the IP risk covered earlier in this article). Amazon stopped employees using ChatGPT when it found what it believed to be some of its own internal data in ChatGPT responses. The obvious risk is that one of Amazon’s competitors could also access this information.

Secondly, there’s the risk that the AI triggers litigation because it has caused harm. In the US there has already been successful litigation, resulting in a payout of $365,000, over an AI-powered application that clearly exhibited age discrimination. You can also imagine a lawsuit resulting from something like the AI “mealbot” that offered a recipe for an “aromatic water mix”, more commonly known as chlorine gas.

What’s the Solution?

Clearly, AI has tremendous potential to make organisations more efficient and effective, and allowing employees to experiment with AI can help organisations stay agile and steal a march on competitors. So how can organisations realise these benefits without falling victim to the risks above?

A key part of getting the benefits of AI, without also risking a PR disaster or worse, is governance. In this context, that means ensuring that AI initiatives are supervised, that people who understand the risks are involved, and that AI tools are closely monitored.
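What might that monitoring look like in practice? As a minimal sketch, assuming you have access to web proxy logs, a script like the following could count visits to well-known LLM services. The log format and domain list are illustrative assumptions; a real inventory would need to be broader and kept up to date:

```python
# An illustrative sketch of surfacing shadow AI: scan web proxy logs
# for traffic to well-known LLM services. The log format and domain
# list are assumptions - adapt both to your own environment.

from collections import Counter

# A starting list of domains associated with popular LLM tools
# (illustrative, not exhaustive).
LLM_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def llm_visits(log_lines):
    """Count visits per user to known LLM domains.

    Assumes each log line is 'timestamp user domain', a simplified
    stand-in for your proxy's real format.
    """
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain.lower() in LLM_DOMAINS:
            counts[(user, domain)] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "2024-09-24T09:12:01 alice chat.openai.com",
        "2024-09-24T09:15:42 bob claude.ai",
        "2024-09-24T10:03:17 alice chat.openai.com",
    ]
    for (user, domain), n in llm_visits(sample).most_common():
        print(f"{user} -> {domain}: {n} visits")
```

Even a crude count like this can reveal how many employees are already using LLMs through personal accounts, which is the visibility the next step depends on.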

The problem that shadow AI presents to governance is that you can’t govern what you’re not aware of, so a foundational step is to get shadow AI out of the shadows and into the daylight. Costimised can help you find out what applications your employees are using, including LLMs, and work out how you can then reduce risk and save money on software subscriptions. To book a no-obligation discovery call, please contact us at enquiries@costimised.com.
