5 Steps IT Must Take to Avoid Shadow AI

In our latest Security Insights, we addressed Shadow AI: the unsanctioned use of AI tools within your organization. The goal was to help organizations by encouraging their users to either

  • a) follow company policy on shadow AI solutions, or
  • b) if no explicit policy exists, ask for guidance and direction, since the financial and reputational damage shadow AI can cause puts both the employee and the company at risk.

In this article, we address the important steps IT must take to keep their organizations safe in a GenAI world.

1. Determine to what extent, if any, AI should be used, and which flavors are permissible.

Some companies have banned or limited AI solutions at work. The biggest concerns prompting these restrictions include leakage of company and client data, exposure of source code (developers often use AI to write or improve existing code), and loss of confidential data and trade secrets.
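Before you can decide what to permit, it helps to know what is already in use. Below is a minimal sketch of one way to surface shadow AI activity: scanning a web proxy log for requests to known GenAI domains. It assumes a plain-text log with one full URL per line; the log path and domain list are illustrative assumptions, not an authoritative inventory.

```python
# Minimal sketch: flag possible shadow AI traffic in a web proxy log.
# Assumptions (hypothetical, adjust to your environment): the log is a
# plain-text file with one requested URL per line, and the domain list
# below is illustrative only.
from urllib.parse import urlparse

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_genai_requests(log_path: str) -> list[str]:
    """Return log lines whose host matches a known GenAI domain."""
    hits = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            host = urlparse(line.strip()).hostname or ""
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits.append(line.strip())
    return hits

if __name__ == "__main__":
    for hit in flag_genai_requests("proxy.log"):  # hypothetical log path
        print(hit)
```

In practice you would point a script like this at your proxy or DNS logs and maintain the domain list centrally, alongside whatever allow/block decisions you reach in this step.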

2. Embrace the reality that AI is here and going to be used by your users.

According to the latest Dell GenAI Pulse Survey, “76% of IT leaders believe GenAI will be significant if not transformative for their organizations.” You cannot hold off your users forever on the use of AI; instead, you must drive the direction your organization will take.

3. Collaborate with executive leadership (C-level) to establish a centralized business approach.

Depending on your concerns, you might conclude that free, open AI platforms are not safe for employee use. You might prefer a private, protected, on-premises AI solution, or a private cloud-based enterprise solution that lets you control and manage AI for your employees.

The value here is that, because the AI is company-owned and managed, users can work with it without second-guessing whether they might be exposing company data. One article expressed this as “bringing AI to your data,” as opposed to pushing your data into AI solutions you do not manage.

Note: A search for paid enterprise AI solutions will yield a variety of results. ChatGPT has an enterprise plan with a higher level of security and privacy attached. Microsoft Copilot has security features covering data isolation, compliance (including GDPR), permission settings, and auditing.
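To make “bringing AI to your data” concrete, here is a minimal sketch of how employee-facing tooling might talk only to a company-managed AI gateway rather than a public service. The URL, token variable, and JSON field names are hypothetical; substitute the actual contract of whatever on-premises or private-cloud solution you deploy.

```python
# Minimal sketch: prompts go only to a company-managed endpoint inside
# your network, never to a public AI service. The endpoint URL, the
# COMPANY_AI_TOKEN environment variable, and the request/response JSON
# shape are all assumptions for illustration.
import os
import requests

INTERNAL_AI_URL = "https://ai.internal.example.com/v1/generate"  # hypothetical

def ask_company_ai(prompt: str) -> str:
    """Send a prompt to the company-managed AI gateway and return its reply."""
    response = requests.post(
        INTERNAL_AI_URL,
        headers={"Authorization": f"Bearer {os.environ['COMPANY_AI_TOKEN']}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["completion"]  # field name is an assumption
```

The design point is that the gateway, not the individual user, decides where data goes, which is exactly the control you lose with shadow AI.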

4. Define policies for AI use and ensure the policies are clear, accessible, and affirmed by all employees.

Your policies are, quite frankly, your policies. It’s not for us to say what they should be or how flexible or strict your guidelines ought to be. Every organization will need to determine the extent to which generative AI tools should be used, including which tools are acceptable, for what purposes, and to what extent company and client data may be shared with them.

These policies should be as clear as possible; vague, gray-area policies tend to invite unwanted creativity. It is essential to distribute the policy for affirmation in a way that lets each user provide trackable attestation, while also allowing them to refer back to the policy whenever they are in doubt about what it says.
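What “trackable attestation” might look like, at its simplest, is a timestamped, versioned record of who affirmed which policy. A real system would live in your HR or LMS platform; the CSV file and field names below are purely illustrative.

```python
# Minimal sketch: append one timestamped, versioned record each time an
# employee affirms the AI policy. File name, employee IDs, and policy
# version strings are hypothetical examples.
import csv
from datetime import datetime, timezone

ATTESTATION_LOG = "ai_policy_attestations.csv"  # hypothetical location

def record_attestation(employee_id: str, policy_version: str) -> None:
    """Append one attestation row: who affirmed which policy version, and when."""
    with open(ATTESTATION_LOG, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            employee_id,
            policy_version,
            datetime.now(timezone.utc).isoformat(),
        ])

record_attestation("e12345", "ai-policy-v2")  # example usage
```

Recording the policy version matters: when the policy changes, you can see at a glance who has affirmed the current version and who still needs to.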

5. Reinforce company policy through training that mirrors cyber security training in frequency and focus.

One thing we’ve learned about corporate learners is that for subjects like cyber security and, more recently, generative AI policy, the one-and-done approach doesn’t work. Once initial base-level training is provided, it’s essential to repeat and reinforce it at reasonable intervals. Beyond training and reminders on policy and the ethical use of AI, you also want to provide training that helps users be productive while remaining responsible.
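Operationally, “repeat and reinforce at reasonable intervals” just means tracking when each employee last trained and flagging anyone past the refresh window. The sketch below assumes a quarterly interval and an in-memory record set; a real deployment would read from your LMS and feed a ticketing or email system.

```python
# Minimal sketch: list employees whose last AI training is older than the
# refresh interval. The 90-day interval and the records below are
# illustrative assumptions, not recommendations.
from datetime import date, timedelta

REFRESH_INTERVAL = timedelta(days=90)  # assumption: quarterly refreshers

last_trained = {  # hypothetical LMS export: employee -> last training date
    "e12345": date(2024, 1, 15),
    "e67890": date(2024, 5, 2),
}

def due_for_refresher(records: dict[str, date], today: date) -> list[str]:
    """Return employees whose last training predates the refresh window."""
    return [emp for emp, trained in records.items()
            if today - trained > REFRESH_INTERVAL]

print(due_for_refresher(last_trained, date.today()))
```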