The introduction of OpenAI’s ChatGPT was a watershed moment for the software industry, kicking off a GenAI rush with its release in November 2022. SaaS vendors are now racing to update their tools with productivity enhancements driven by generative AI.
Among a wide range of uses, GenAI tools make it easier for developers to build software, assist sales teams with mundane email writing, help marketers produce unique content at low cost, and enable teams and creatives to brainstorm new ideas.
Recent significant GenAI product launches include Microsoft 365 Copilot, GitHub Copilot, and Salesforce Einstein GPT. Notably, these GenAI tools from leading SaaS vendors are paid enhancements, a clear sign that no SaaS vendor wants to miss out on the opportunity to profit from the GenAI transformation. Google will soon launch its Search Generative Experience (SGE), offering premium AI-generated summaries in place of a traditional list of websites.
At this rate, it’s only a matter of time before some form of AI capability becomes standard in SaaS applications.
However, this advancement of AI into the cloud-enabled landscape is not without new risks and downsides for users. Indeed, the widespread adoption of GenAI apps in the workplace is quickly raising concerns about exposure to a new generation of cybersecurity threats.
Learn how to improve your SaaS security posture and mitigate AI risks
Reacting to the risks of GenAI
GenAI tools rely on models trained on the information users share with them, generating new content that mirrors that original input.
ChatGPT itself now warns users at login: “Do not share sensitive information” and “check your facts.” When asked about the risks of GenAI, ChatGPT responds: “Data fed to AI models like ChatGPT can be used for model training and improvement purposes, potentially exposing them to researchers or developers working on these models.”
This exposure expands the attack surface of organizations sharing internal information in cloud-based GenAI systems. New risks include the danger of leaking IP, sensitive and confidential customer data and PII, as well as threats arising from the use of deepfakes by cybercriminals using stolen information for phishing scams and identity theft.
These concerns, as well as challenges in meeting compliance and government requirements, are triggering a backlash to GenAI applications, especially from industries and sectors that process confidential and sensitive data. According to a recent Cisco study, more than one in four organizations have already banned the use of GenAI due to privacy and data security risks.
The banking industry was among the first to ban the use of GenAI tools in the workplace. Financial services leaders see the potential of AI to drive efficiency and help employees do their jobs, but according to a survey conducted by Arizent, 30% still prohibit the use of generative AI tools within their companies.
Last month, the US Congress imposed a ban on the use of Microsoft’s Copilot on all government-issued PCs to strengthen cybersecurity measures. “The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services,” said Catherine Szpindor, the House’s Chief Administrative Officer, according to an Axios report. This ban follows the government’s previous decision to block ChatGPT.
Addressing a lack of supervision
Aside from reactive GenAI bans, organizations undoubtedly struggle to effectively control the use of GenAI as applications penetrate the workplace without training, oversight, or employer knowledge.
According to a recent Salesforce study, more than half of GenAI adopters use unapproved tools at work. The research found that despite the benefits offered by GenAI, the lack of clearly defined policies around its use could put companies at risk.
The good news is that this may now start to change if employers follow new US government guidelines to strengthen AI governance.
In a statement released earlier this month, Vice President Kamala Harris directed all federal agencies to designate an AI chief with “the experience, expertise, and authority to oversee all AI technologies … to ensure that AI is used responsibly.”
With the US government taking the lead in encouraging responsible AI use and dedicating resources to managing its risks, the next step is finding ways to manage GenAI apps safely.
Taking back control of GenAI apps
The GenAI revolution, whose risks remain in the realm of the unknown, comes at a time when the focus on perimeter protection is becoming increasingly obsolete.
Today, threat actors are increasingly focusing on the weakest links within organizations, such as human identities, non-human identities, and misconfigurations in SaaS applications. Nation-state threat actors have recently used tactics such as brute-force password sprays and phishing to successfully deliver malware and ransomware, as well as carry out other malicious attacks on SaaS applications.
Complicating efforts to secure SaaS applications, the lines between work and personal life are now blurred when it comes to device use in the hybrid work model. With the temptations that arise from GenAI’s power, it will become impossible to prevent employees from using the technology, sanctioned or otherwise.
The rapid adoption of GenAI in the workforce should, therefore, be a wake-up call for organizations to reevaluate whether they have the security tools to handle the next generation of SaaS security threats.
To regain control and gain visibility into GenAI SaaS apps or apps with GenAI capabilities, organizations can turn to advanced zero-trust solutions such as SaaS Security Posture Management (SSPM) that can enable the use of AI by rigorously monitoring its risks.
Gaining insight into every AI-enabled connected app and measuring its security posture for risks that could compromise SaaS security will enable organizations to prevent, detect and respond to new and evolving threats.
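To make that monitoring idea concrete, here is a minimal sketch, in Python, of how an inventory of connected apps might be screened for AI-related risk. All app names, scope strings, and the risk rules are entirely hypothetical and illustrative; a real SSPM tool would pull this inventory from each SaaS platform’s OAuth/app-integration APIs and apply far richer policy checks.

```python
from dataclasses import dataclass, field

@dataclass
class ConnectedApp:
    """A third-party app connected to a SaaS tenant (hypothetical model)."""
    name: str
    ai_enabled: bool
    scopes: list = field(default_factory=list)
    sanctioned: bool = False  # approved by the organization?

# Scopes treated as high-risk for data exposure (illustrative, not exhaustive).
HIGH_RISK_SCOPES = {"files.read.all", "mail.read", "directory.read"}

def assess(app: ConnectedApp) -> dict:
    """Flag AI-enabled apps whose permissions or approval status raise risk."""
    risky = sorted(set(app.scopes) & HIGH_RISK_SCOPES)
    findings = []
    if app.ai_enabled and risky:
        findings.append(f"AI app can access sensitive data via {risky}")
    if app.ai_enabled and not app.sanctioned:
        findings.append("unsanctioned AI app (shadow AI)")
    return {"app": app.name, "findings": findings}

apps = [
    ConnectedApp("WriteBot", ai_enabled=True, scopes=["mail.read"]),
    ConnectedApp("CalendarSync", ai_enabled=False,
                 scopes=["calendar.read"], sanctioned=True),
]
for report in map(assess, apps):
    print(report)
```

In this sketch, the hypothetical “WriteBot” app would be flagged twice: once for an AI-enabled app holding a sensitive read scope, and once as unsanctioned shadow AI, while the non-AI, sanctioned app produces no findings.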
Learn how to initiate SaaS security for the GenAI era