CYBERSECURITY | SailPoint launches tool to address risks from unauthorized AI use
SailPoint said the use of so-called shadow AI presents security and compliance risks, particularly when employees upload sensitive or confidential data into external platforms without oversight.

SailPoint has launched a new solution aimed at helping enterprises monitor and manage the use of unauthorized artificial intelligence tools by employees.
The company said its Shadow AI Remediation capability provides real-time visibility into how workers use generative AI platforms such as ChatGPT, Claude and Gemini, which are often accessed outside approved IT systems.
The tool allows organizations to track AI usage, including document uploads and interaction frequency, and take action to reduce risk. Security teams can block unauthorized activity, redirect users to approved tools or require justification for usage.
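That decision flow (allow, block, redirect or ask for justification) is easiest to picture in miniature. The sketch below is a hypothetical TypeScript policy check, not SailPoint's implementation; the PolicyAction type, the domain lists and the evaluateRequest function are all illustrative assumptions about how a browser-side control could map an AI destination and activity to one of those outcomes.

    // Hypothetical policy model for a browser-side shadow-AI control.
    // Illustrative only; it does not reflect SailPoint's product.
    type PolicyAction = "allow" | "block" | "redirect" | "requireJustification";

    interface AiUsageEvent {
      domain: string;            // e.g. "chat.openai.com"
      uploadsDocument: boolean;  // true when the user attaches a file
    }

    // Assumed configuration: sanctioned platforms, plus tools that are
    // tolerated only when the user records a business justification.
    const approvedDomains = new Set(["approved-ai.example.com"]);
    const toleratedDomains = new Set(["gemini.google.com"]);

    function evaluateRequest(event: AiUsageEvent): PolicyAction {
      if (approvedDomains.has(event.domain)) return "allow";
      // Document uploads to unsanctioned tools are the highest-risk
      // behavior the article describes, so block them outright.
      if (event.uploadsDocument) return "block";
      if (toleratedDomains.has(event.domain)) return "requireJustification";
      // Plain chat on any other AI tool: steer the user elsewhere.
      return "redirect";
    }

    // Example: an upload to an unsanctioned chatbot is blocked.
    console.log(evaluateRequest({ domain: "chat.example-ai.com", uploadsDocument: true }));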
“Many vendors are trying to solve the Shadow AI problem with isolated browser or endpoint tools, but that misses the bigger picture. This is fundamentally an identity challenge,” said Chandra Gnanasambandam, executive vice president of product and chief technology officer at SailPoint.
The solution can be deployed as browser extensions distributed through device management platforms, allowing organizations to implement it without significant infrastructure changes, the company said.
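As a rough illustration of that deployment model, the sketch below assumes a force-installed Chrome extension reading settings that IT pushes through the browser's standard managed storage; the policy keys (approvedDomains, redirectTarget) and the isAiHost helper are hypothetical, not SailPoint configuration.

    // Inside a managed browser extension: load admin-pushed policy from
    // Chrome's managed storage, then redirect navigation to unsanctioned
    // AI tools. Keys and helper are illustrative, not SailPoint's schema.
    function isAiHost(host: string): boolean {
      // Hypothetical watchlist of generative-AI hostnames.
      return ["chat.openai.com", "claude.ai", "gemini.google.com"].includes(host);
    }

    chrome.storage.managed.get(["approvedDomains", "redirectTarget"], (policy) => {
      const approved: string[] = policy.approvedDomains ?? [];
      chrome.tabs.onUpdated.addListener((tabId, change) => {
        if (!change.url) return; // URL changes require the "tabs" permission
        const host = new URL(change.url).hostname;
        if (isAiHost(host) && !approved.includes(host)) {
          chrome.tabs.update(tabId, { url: policy.redirectTarget });
        }
      });
    });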
SailPoint added that the new capability forms part of its broader framework for managing AI-related risks by integrating identity, data and security monitoring.
SailPoint cited its own research, which found that 80% of organizations have experienced unintended actions from AI systems, including inappropriate data access or sharing.
The company said the launch reflects increasing enterprise demand for governance tools as the use of AI applications in the workplace continues to grow.
