When a major vulnerability rocks the cybersecurity world, like the recent XZ backdoor or the Log4j flaws of 2021, the first question most companies ask is, “Are we affected?” In the absence of well-written playbooks, answering even that simple question can take a lot of effort.
Microsoft and Google are investing heavily in generative AI systems that can turn big security questions into concrete answers, assist security operations and, increasingly, take automated actions. Microsoft offers overworked security operations centers Security Copilot, a generative AI-based service that can identify breaches, connect threat signals and analyze data. And Google’s Gemini in Security is a collection of security capabilities powered by the company’s Gemini generative AI.
Now Simbian is joining the race with its own generative AI-powered platform designed to help businesses tackle their security operations. Simbian’s system combines large language models to summarize data and understand natural language, other machine learning models to connect disparate data points, and a software-based expert system built on security information gleaned from the internet.
Where setting up a security information and event management (SIEM) system or a security orchestration, automation and response (SOAR) system might take weeks or months, using AI reduces that time, in some cases, to seconds, says Ambuj Kumar, co-founder and CEO of Simbian.
“With Simbian, these things are literally done in seconds,” he says. “You ask a question, you express your goal in natural language, and we break down its execution into steps, all of which is done automatically. It’s self-sustaining.”
Helping overworked security analysts and incident responders streamline their jobs is a perfect application for the most powerful capabilities of generative AI, says Eric Doerr, vice president of engineering at Google Cloud.
“Opportunities in security are particularly acute given the elevated threat landscape, the well-publicized talent gap among cybersecurity professionals, and the fatigue that is the status quo on most security teams,” Doerr says. “Accelerating productivity and reducing the average time to detect, respond to and contain [or] mitigate threats through the use of GenAI will allow security teams to recover and defend their organizations more successfully.”
Different starting points, different “advantages”
Google’s advantages in the market are evident. The information technology and internet giant has the budget to stay the course, the technical expertise in machine learning and artificial intelligence from its DeepMind projects to innovate, and access to plenty of training data, a key consideration for creating large language models (LLMs).
“We have a huge amount of proprietary data that we used to train a custom security LLM, SecLM, which is part of Gemini for Security,” says Doerr. “This is the superset of 20 years of intelligence from Mandiant, VirusTotal and more, and we are the only platform that has an open API — part of Gemini for Security — that allows partners and enterprise customers to extend our security solutions and have a single AI capable of operating with the full context of the enterprise.”
Like Simbian, Gemini in Security Operations, a capability under the Gemini in Security umbrella, will assist with investigations starting in late April, guiding security analysts and recommending actions from within Chronicle Enterprise.
Simbian uses natural language queries to generate results, so asking “Are we affected by the XZ vulnerability?” will produce a table of IP addresses of vulnerable applications. Depending on the systems the Simbian platform connects to, it also uses security knowledge gathered from the internet to create guides for security analysts, showing them a script of instructions to give the system to perform a specific task.
“Guides are a way to personalize or create trustworthy content,” says Simbian’s Kumar. “Right now we’re creating the guides, but once… people start using them, then they can create their own.”
Strong ROI claims for LLMs
Returns on investment will increase as companies move from manual processes to assisted processes and, eventually, to autonomous operation. Most generative AI-based systems have only advanced to the assistant or copilot stage, where they suggest actions or perform only a limited set of actions after obtaining the user’s permission.
The real return on investment will come later, Kumar says.
“What we’re excited to build is autonomous: autonomous means making decisions on your behalf that are within the scope of the guidance you give it,” he says.
Google’s Gemini also appears to be bridging the gap between an AI assistant and an automation engine. Financial services firm Fiserv uses Gemini in Security Operations to create detections and playbooks faster and with less effort, and to help security analysts quickly find answers using natural language search, boosting the productivity of its security teams, says Doerr.
However, trust remains an issue and a barrier to greater automation, he says. To build trust in these systems and solutions, Google remains focused on building explainable AI systems that are transparent about how they arrive at a decision.
“When you use natural language input to create a new detection, we show you the syntax of the detection language and you choose to run it,” he says. “This is part of the process of building trust and context with Gemini for Security.”