Nextiva / Blog / Customer Experience

Customer Experience (CX) | September 1, 2025

How to Avoid (or Manage) Shadow AI in Your Business

Shadow AI in Unified Communications
Shadow AI could be creeping into your business. Here’s how to mitigate the risks before you fall foul of them.
Dominic Kent

It’s no longer possible to ignore AI as part of your communications strategy. Even if you’re biding your time to see how things unfold, certain people in your organization can’t wait to use it.

With this desire, however, comes the challenge of managing your AI tools. Left unmanaged, that demand opens the door to a phenomenon called shadow AI.

What Is Shadow AI?

Shadow AI is the term used to describe the adoption of AI-powered tools, chatbots, or automations by employees without formal IT approval or oversight. While there is no malice intended by users, these under-the-radar tools often lead to wider business problems, including:

  • Bypassing security reviews and data-handling procedures
  • Exposing sensitive customer or proprietary data
  • Creating compliance blind spots and governance gaps
  • Fracturing collaboration by storing information in uncontrolled silos

Wanting to be more productive and efficient without performing due diligence can lead to repercussions. You must be aware of both the actions users take and the potential consequences of shadow AI.

How Does Shadow AI Start?

Users get access to free accounts accessible via a web browser. Take ChatGPT, for example. All you need to do is enter the URL and sign up with your personal account. That’s it; you can now enter whatever you like into it.


What if that’s something that shouldn’t be shared outside of your business?

While you may follow company directives, like using a single communications app across the board, there will always be pockets of people choosing to use their preferred apps. The answer as to why is simple: because they can.

If a certain department prefers Google Sheets to Microsoft Excel, you can bet they’re going to use it. And if they’re genuinely more productive, why should you stop them?

The enterprise cybersecurity answer comes down to security and storage: all documents and sensitive data must live in a centralized, monitored repository, like SharePoint.

The same will be true for shadow AI. Without the appropriate integration and governance frameworks (and AI governance may mean something new), it’s a major task for IT to know what’s going on and where. It can also make for a disconnected, siloed workforce.

It’s not necessarily the AI systems, machine learning models, and large language models (LLMs) themselves that will cause you problems. The habits and routines surrounding the use of AI in businesses are the real cause for concern.

With AI-powered apps and chatbots like ChatGPT, Perplexity, and a host of image creation and process-shortcutting tools available via a web browser and a free personal account, it’s all too easy for those who want access to get just that.

Without AI policies in place, the high risk of unauthorized AI tools extends further and further. It just takes one LinkedIn post showcasing a new open-source project for companywide virality to take off. Next thing you know, you’re battling data leakage and reviewing your risk management program.

The Risks of Shadow AI

When conducting a risk assessment of shadow AI tools operating without your knowledge, focus on five main areas.

1. Data security

It’s not just the IT department that has concerns around the risks of shadow AI. It’s a wider business security issue that will heavily involve multiple IT and security teams.

The most important sensitive data type that flows into AI tools is customer support (16.3% of sensitive data), which includes confidential information that customers share in support tickets. In such scenarios, decision-making becomes a joint effort, and you must think about a concerted company initiative.

Source: Cyberhaven

The standout security risk of unsanctioned AI is the potential for cyberattacks, customer data breaches, or exploitation of vulnerabilities. When AI gets used maliciously, often occurring via unsanctioned and untested AI applications, the unaccountable nature of AI means it’s hard to defend against it.

Like with any cyberattack, users won’t be aware of the consequences of their actions. But clicking one wrong button or entering sensitive information into the wrong place could lead to data leaks, GDPR issues, and considerable monetary or reputational damage.

2. Lack of traceability

If the output of an AI tool is offensive or negative, there is no point of accountability. The lack of ownership and responsibility means there is no author of the AI-generated output. But it’s also impossible to suggest the output was “created” by the individual who entered the prompt.

In extreme cases, discrimination or bias is possible due to information scraped from the internet. If an AI model’s original data source contains biases, so could the output, which may then be used for business decisions, marketing purposes, or hiring.

3. Compliance issues

The use of generative AI apps, like ChatGPT and DALL-E, requires input of information to create an output. Not only must you sign up using credentials that may or may not be company information, but you must also enter prompts to generate your desired output.

In the consumer world, where people are free to give up their personal information, this isn’t a concern. In the business world, however, users may be sharing customer or company information in exchange for their desired output. With more than half of generative AI adopters using unapproved tools at work, it becomes even more concerning.

Source: Salesforce

One example is the creation of blog posts using an AI assistant. It’s now easier than ever to get a blog post generated by an AI tool. But the quality of the output is reliant on the quality and quantity of the information you provide it.

For example, simply asking ChatGPT to write a blog post about the consequences of shadow AI generates an overview of some consequences without business or scenario context. If you want a high-quality blog post with information pertinent to your specific business or use case, you may need to provide it with company data and information that you shouldn’t disclose to such an app.

4. Human collaboration

When one user or set of users decides to use AI for their workflow, it may feel like they’re making huge productivity gains. But what lurks beneath may be a different matter entirely.

We’ve become accustomed to using platforms like Zapier to create action-based triggers for transactional work. For example, if a sales deal gets marked “closed-won” in Salesforce, the Zap can update any number of other platforms to notify other individuals and departments. If we apply that same treatment to human interactions, however, collaboration may suffer.
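The action-based trigger described above can be sketched as a few lines of event-driven code. This is an illustrative sketch only: the deal fields and notification targets are assumptions, not a real Salesforce or Zapier API.

```python
# Minimal sketch of an action-based trigger, similar to what a Zapier
# "Zap" automates. Payload fields and team names are illustrative.
NOTIFY_TARGETS = ["finance", "customer-success", "support"]

def on_deal_updated(deal: dict) -> list[str]:
    """Fire downstream notifications when a deal closes as won."""
    if deal.get("stage") != "closed-won":
        return []  # no trigger fires for any other stage
    return [
        f"notify {team}: deal '{deal['name']}' closed-won for ${deal['amount']:,}"
        for team in NOTIFY_TARGETS
    ]

messages = on_deal_updated(
    {"name": "Acme renewal", "stage": "closed-won", "amount": 48000}
)
```

This kind of rule works well for transactional updates precisely because no judgment is involved; applying the same pattern to human conversation is where collaboration suffers.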

There is no doubt that you may save time, for example, by asking generative AI apps to reply to your emails or messages, but that’s not the right approach to collaboration. Companies like Cisco and Microsoft have spent decades on innovation to create a digital workplace environment that, when used correctly, saves hours per week and creates an inclusive, productive, and happy base for work to happen. But humans are always at the center.

Nextiva ONE app
Example of a digital workplace collaboration tool in action.

5. Support

You must also consider what happens on the back end. If an employee takes it upon themselves to create an automation or use a shadow AI app that isn’t sanctioned, what happens when it stops working?

That user can no longer use the workflow they’ve created for themselves, so they look to IT for support. In most cases, this support ticket will be the first time an engineer has heard of that particular app. It’s a tale as old as time: when certain pockets of users get hold of an app that appears to make their workday more efficient, they expect it not only to work but also to be supported when it goes wrong.

The impact here, of course, isn’t just that you must either say, “No, we don’t support that,” or “We’ll fix it because it’s making you more productive.” The reality is that if you do make the decision to support such an app, it must undergo security and compliance checks. If it passes, the IT support team must learn how to use it, troubleshoot it, and maintain it. What starts as a simple shadow AI app can turn into a cost center of its own.

Steps to Manage Shadow AI and Move People Back to Your Communications Platform

Without becoming a dictatorship, you can take some steps to encourage people back to the platform you’ve invested in.

1. Audit the different apps in use

Understanding what is genuinely in use in your organization can be a great first step to achieving equilibrium between your sanctioned and unsanctioned apps. The awareness of which apps are in use can uncover features available in your core platform.

For example, if users are creating meeting minutes with an AI assistant in a third-party app, you can begin plans to introduce Copilot if you’re a Microsoft house. Rather than having standalone, unsupported AI functionality, use the built-in, compliant features instead. In many cases, a lack of awareness that a feature exists on your platform of choice is the real blocker to usage; hence, a third-party app is sought to remedy the problem.

Start by creating a simple multiple-choice questionnaire and sending it to users. You’ll get a view of the most-used apps and edge cases to empower your next unified communications (UC) buying decisions. Multiple choice is important here to ensure a high rate of completion.
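Alongside the questionnaire, you can gather harder data by tallying web-proxy or DNS logs against a watchlist of known AI tool domains. The sketch below assumes a simple "user domain" log format and an illustrative domain list; real proxy logs vary by vendor.

```python
# Minimal sketch of one audit technique: counting proxy log hits
# against a watchlist of AI tool domains. Domains and log format
# are illustrative assumptions.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "www.perplexity.ai": "Perplexity",
}

def tally_ai_usage(log_lines: list[str]) -> Counter:
    """Count hits per AI tool from 'user domain' proxy log lines."""
    hits = Counter()
    for line in log_lines:
        _, _, domain = line.partition(" ")
        tool = AI_DOMAINS.get(domain.strip())
        if tool:
            hits[tool] += 1
    return hits

logs = [
    "alice chat.openai.com",
    "bob www.perplexity.ai",
    "carol intranet.example.com",
    "alice chat.openai.com",
]
usage = tally_ai_usage(logs)
```

Combining self-reported survey answers with log-based counts gives you both the "what" and the "how often" before you make any UC buying decisions.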

How cross-platform channel messages and direct messages work synchronously without different tools.

2. Take a proactive approach to AI adoption

What’s your process for rolling out new features to end users? In large businesses, it can take a long time to enable them and get them to the people who want to use them. This creates a barrier to adoption and leads to curiosity outweighing patience. If something is available online and can be downloaded or accessed with just a click, chances are the curious user is going to check it out.

Think about how you can fast-track the process from feature release to user availability. You can’t shortcut compliance procedures and security audits, but you can gain efficiencies during the evaluation and rollout processes.

When these features become available, plan workshops or training videos to raise awareness of what a new feature or technology can really do. The ultimate goal of most collaboration platforms is to do everything, leading to that platform being the only app you need to open during the day. Of course, the reality is that we’re switching apps around 3,600 times a day. So, while there’s a long way to go, awareness of what can be done in-platform is beneficial to all.

Source: Mio

3. Use an all-in-one UC platform

By introducing AI in a centralized platform, you have total control and visibility over which AI tools are in use. This is where Nextiva comes into play as an AI-first platform covering both unified comms as a service (UCaaS) features and contact center as a service (CCaaS) abilities.

As standard, you get UCaaS features like:

  • Self-service auto attendants
  • Advanced conversational IVR
  • Intelligent call routing
  • Automated summaries and meeting insights
  • Voice analytics and business outcomes

These are complemented by CCaaS features like:

  • Automated compliance and quality monitoring
  • Sentiment analysis
  • Call recording and transcription
  • AI Agent Assist
  • Virtual agents and chatbots
Nextiva Call Pop with Customer information populated.

Its UC platform ensures all internal contacts have access to messaging, calling, and meetings. The contact center component reduces cost per interaction with AI and offers automation that increases self-service containment, boosts workforce productivity, and reduces attrition.

How Nextiva helps you govern and leverage AI safely

By consolidating AI usage inside Nextiva, you maintain centralized visibility, integrated compliance controls, and the ability to audit every AI interaction.

Nextiva’s suite of AI tools includes:

  • Nextiva AI insights: Privacy-first analytics dashboards that surface AI-driven engagement metrics without exposing raw user data.
  • Secure payment agent assist: PCI-certified payments made easy without the need for human intervention.
  • AI Copilot: Built-in generative AI for drafting emails, generating meeting summaries, and crafting proposals. For contact center users, you get real-time agent recommendations and automated post-call summaries.
Nextiva AI contact center.

4. Enable UC platform champions

Like any trend, someone has to be shouting about it for it to catch on. You can create your own internal awareness and anticipation for both your chosen UC platform and any new features that are near release.

By empowering platform champions to have early access, create training materials, and spread news about your chosen technology, you’re creating an internal marketing team. Task this team with boosting adoption and ensuring the usage of all the available features.

When you approach UC platform adoption with positive intent (i.e., making the entire company aware of its benefits to specific people and departments), adoption will increase, and users will be less likely to drift to third-party apps when they want to use AI to complete a task.

What if There Is Still Rogue AI Usage?

In the case of shadow IT, particularly messaging and video apps, it’s become acceptable to allow pockets to operate autonomously. In these cases, there are two outcomes:

  • They work fine in silos but cause an issue when collaborating across companies.
  • IT and procurement get tasked with finding a solution to integrate everything.

The same will be true for shadow AI technology. There will be scenarios where you’re oblivious to its use. But that doesn’t make it okay. There are still significant risks when it comes to security, compliance, and collaboration.

While it may seem okay to let employees play with ChatGPT using a personal login, this raises many questions:

  • What is the output created?
  • To whom does it belong?
  • Does it represent your company?
  • What damage could it do?

It will also be true that shadow AI may provoke controversy or offense simply by being used in certain scenarios. Because AI is often viewed as a shortcut to “real work,” its lack of mainstream acceptance or understanding has the potential to upset non-adopters. Using built-in functionality, like Nextiva’s copilot or meeting summaries, however, is merely using the features within your UC platform.

Proactively encouraging adoption here is the key to reducing shadow AI in other areas. However, if it still occurs and you have the resources available to do so, there are some steps you can take to mitigate the risks:

  • Define and enforce clear AI usage policies.
  • Maintain a centralized AI inventory where all tools must be registered.
  • Implement role-based access controls and network segmentation for all AI services.
  • Educate employees on shadow AI risks, and promote requesting sanctioned solutions instead.
  • Conduct regular audits of AI usage and report findings to governance stakeholders.
  • Implement an AI-powered UC/CC platform that you can integrate low-risk shadow AI into.
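The centralized AI inventory from the steps above can be as simple as a register of tools with an approval flag, checked before a tool is allowed on the network. The fields and approval flow here are illustrative assumptions, not a specific product.

```python
# Minimal sketch of a centralized AI inventory: every tool must be
# registered, and only approved tools pass the policy check.
# Field names and the approval flow are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIInventory:
    tools: dict = field(default_factory=dict)

    def register(self, name: str, owner: str, approved: bool = False) -> None:
        """Record a tool; unapproved entries are visible but blocked."""
        self.tools[name] = {"owner": owner, "approved": approved}

    def is_allowed(self, name: str) -> bool:
        """Unregistered or unapproved tools fail the policy check."""
        entry = self.tools.get(name)
        return bool(entry and entry["approved"])

inventory = AIInventory()
inventory.register("Nextiva AI Copilot", owner="IT", approved=True)
inventory.register("random-browser-extension", owner="unknown")  # shadow AI
```

Even an unapproved registration has value: it gives IT visibility into what users want, which feeds directly into the audit and education steps above.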

Nextiva: Your Gateway to AI-Powered User Efficiency

Whether you’re concerned with managing shadow AI or introducing ethical AI into your current work processes, Nextiva’s unified comms and unified customer experience platforms create a safe hub.

Whether it’s collaboration with colleagues or analyzing customer interactions, the intelligence and automation live (and stay) within your centralized environment.

As well as enabling user efficiency, admins get access to a centralized portal to manage their phone system functions, manage users, edit features, adjust licenses, manage permissions, build call flows, and more.

Ready to learn more about Nextiva AI? See it in action. 👇

Build Amazing Customer Experiences

Transform customer experience on a Unified Customer Experience Management platform designed to help you acquire, retain, and grow your customers.

See Nextiva in action.
Quick, on-demand demos.