Automate routine operational workflows with generative AI

When you think about your day-to-day responsibilities in terms of security, compliance, identity and management, how much of this work follows a repeatable process? How much more efficient could you be if these processes were automated through generative artificial intelligence (GenAI)?

GenAI has the power to dramatically streamline operational workflows and democratize knowledge across the entire security team, regardless of experience level or familiarity with a specific technology or threat vector. Instead of having to manually search for information, SOC analysts can use the natural language processing (NLP) built into GenAI models to ask questions and receive answers in a more natural format. NLP also gives GenAI the flexibility to “understand” what a user asks for and adapt to their style and preferences.

However, it is important to recognize that GenAI is not intended to replace human skills. Rather, it should help analysts respond to threats more efficiently by assisting them with guided recommendations and best practices based on the organization’s security data, known threat intelligence, and existing processes. Here’s how.

Establish trust through transparency

Before a security, compliance, identity, or management workflow can be automated, teams must be sure that all the information available to them is complete and accurate. Routine backend work is an ideal candidate for automation because it is predictable and easily auditable. Instead of letting analysts spend their time answering simple help desk tickets or writing incident reports, why not leverage NLP and GenAI to automate those tasks? This allows analysts to dedicate their time to more business-critical work.

For all of this to work effectively, GenAI models must be transparent. Analysts should be able to understand the sources the AI model has drawn from and easily validate that information to ensure the AI provides accurate recommendations.

At Microsoft, we have defined, published, and implemented ethical principles to guide our work on artificial intelligence, and we have created continually improving design and governance systems to put these principles into practice. Transparency is one of the founding principles of our responsible AI framework, along with fairness, reliability and safety, privacy and security, inclusiveness, and accountability.

How to deploy GenAI in your environment

Many repeatable, multi-step processes across security, compliance, identity and management are ripe for automation.

For example, when investigating incidents, analysts often need to examine suspicious scripts, command line arguments, or files that may have been executed on an endpoint. Instead of manually searching for this information, analysts can simply provide the script they observed and ask the AI model to break it down using a prompt book: a collection of prebuilt prompts that run in sequence to complete specific security-related tasks. Each prompt book requires specific input, such as a code snippet or the name of a threat actor.

The AI model then explains the script step by step and assesses whether it could be malicious. From there, any network indicators are correlated with threat intelligence, and the relevant findings are summarized and included in the analysis. The AI can also provide recommendations based on the script’s actions and generate a report summarizing the session for non-technical audiences.
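To make this concrete, here is a minimal sketch in Python of what such a prompt book sequence could look like. The ask_model helper is a stand-in for whatever GenAI service your organization uses, and the function names and prompt wording are purely illustrative rather than any specific product’s format.

def ask_model(prompt: str) -> str:
    """Placeholder: send a prompt to your GenAI service and return its reply."""
    raise NotImplementedError("Wire this up to your organization's GenAI endpoint.")

def analyze_script(script: str) -> dict:
    """Run a fixed sequence of prompts against a suspicious script."""
    results = {}

    # Step 1: explain what the script does, step by step.
    results["explanation"] = ask_model(
        "Explain step by step what the following script does:\n\n" + script
    )

    # Step 2: assess whether the behavior looks malicious.
    results["assessment"] = ask_model(
        "Based on this explanation, could the script be malicious? Justify briefly.\n\n"
        + results["explanation"]
    )

    # Step 3: pull out any network indicators for threat-intelligence correlation.
    results["indicators"] = ask_model(
        "List any network indicators (domains, IPs, URLs) referenced in this script:\n\n"
        + script
    )

    # Step 4: summarize the session for a non-technical audience.
    results["report"] = ask_model(
        "Write a short, non-technical summary of this investigation:\n\n"
        + "\n\n".join(results.values())
    )
    return results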

Using AI in this way offers two main benefits. First, AI can automatically upskill users who may not understand the complexities of analyzing a script or file, using a transparent and repeatable process. Second, it saves time by allowing the model to assist with common follow-up actions, such as correlating any indicators with threat intelligence and writing a summary report.

Another use case for GenAI involves device management and compliance with conditional access policies. If devices don’t meet specific criteria, they can’t access corporate resources. This can lead to machines being blocked and users filing internal tickets to resolve the issue. In this scenario, IT operations or help desk support staff can leverage NLP prompts to insert the device’s unique identifier and quickly understand the device’s compliance status. The AI can then use NLP to explain why the device is non-compliant and provide detailed instructions on how to fix the problem in the appropriate tool. This is powerful because someone with no direct experience in a particular tool can now perform the task, avoiding the need to escalate.
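As a rough illustration, the sketch below shows how that help desk flow might be scripted. It reuses the same hypothetical ask_model placeholder as the earlier example and adds a get_compliance_record stand-in for your device-management API; every name and field shown is illustrative.

def ask_model(prompt: str) -> str:
    """Placeholder: same GenAI stand-in as in the earlier sketch."""
    raise NotImplementedError("Wire this up to your organization's GenAI endpoint.")

def get_compliance_record(device_id: str) -> dict:
    """Placeholder: fetch the device's compliance status from your management tool."""
    # Example shape only -- replace with a real lookup.
    return {
        "device_id": device_id,
        "compliant": False,
        "failed_checks": ["Disk encryption disabled", "OS build below minimum"],
    }

def explain_noncompliance(device_id: str) -> str:
    """Ask the model to explain why a device is blocked and how to remediate it."""
    record = get_compliance_record(device_id)
    prompt = (
        "A user's device is blocked by conditional access. Using the compliance "
        "record below, explain in plain language why the device is non-compliant "
        "and give step-by-step remediation instructions a help desk agent can "
        "follow in the appropriate management tool.\n\nCompliance record: " + str(record)
    )
    return ask_model(prompt)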

Ultimately, GenAI has the potential to completely revolutionize how we approach security, compliance, identity and business management processes. By extending our thinking about how to apply GenAI in operational roles, we can save professionals time, equip them with new skills, and ensure their time is spent on what matters most.

To learn more, read more Partner perspectives from Microsoft Security.


