Businesses are increasingly adopting generative AI to automate IT processes, detect security threats, and handle front-line customer service functions. An IBM survey in 2023 found that 42% of large enterprises were actively using AI and another 40% were exploring or experimenting with it.
At the inevitable intersection of AI and cloud, companies need to think about how to secure AI tools in the cloud. One person who has thought about this a lot is Chris Betz, who became the CISO of Amazon Web Services last August.
Prior to AWS, Betz served as executive vice president and CISO at Capital One. Betz also served as senior vice president and chief security officer at Lumen Technologies and in security roles at Apple, Microsoft and CBS.
Dark Reading recently spoke with Betz about securing AI workloads in the cloud. An edited version of that conversation follows.
Dark Reading: What are some of the big challenges around securing AI workloads in the cloud?
Chris Betz: When I talk to many of our customers about generative AI, the conversations often start with, “I have this really sensitive data and I’m trying to deliver a feature to my customers. How do I do that in a safe and secure way?” I really appreciate that conversation because it’s so important that our clients focus on the outcome they’re trying to achieve.
Dark Reading: What worries customers most?
Betz: The conversation needs to start with the concept that "your data is your data." We have a great advantage in that we can rely on an IT infrastructure that does a really good job of keeping data where it is. So the first piece of advice I give is: Understand where your data is. How is it protected? How is it used in the generative AI model?
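For illustration only, and not something Betz prescribes: a minimal boto3 sketch of that first step, inventorying where data lives and whether default encryption is set on each S3 bucket. The account setup and permissions are assumed.

```python
# Minimal sketch, not AWS guidance: list each S3 bucket, its region, and its
# default encryption setting. Assumes credentials with s3:GetEncryptionConfiguration.
import boto3

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        rules = s3.get_bucket_encryption(Bucket=name)[
            "ServerSideEncryptionConfiguration"]["Rules"]
        algo = rules[0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
    except s3.exceptions.ClientError:
        algo = "none reported"
    region = s3.get_bucket_location(Bucket=name).get("LocationConstraint") or "us-east-1"
    print(f"{name}: region={region}, default encryption={algo}")
```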
The second thing we talk about is that interactions with a generative AI model often involve some of a customer's most sensitive data. When you ask a generative AI model about a specific transaction, you will use information about the people involved in that transaction.
Dark Reading: Are companies concerned about both what AI does with internal company data and what it does with customer data?
Betz: Customers are eager to use generative AI in their interactions with their customers and to mine and leverage the huge amount of data they have internally, putting it to work for both their internal employees and their customers. It is so important for companies to manage this incredibly sensitive data securely, because it is the lifeblood of their businesses.
Companies need to think about where their data is and how it is protected when they send prompts to the AI and when they receive responses.
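As a rough, hypothetical illustration of that point (not AWS's or Betz's method): a small sketch that masks obviously sensitive fields from a record before it is placed into a prompt. The field names and record shape are invented.

```python
# Hypothetical example: strip fields we never want to leave our environment
# before a record is embedded in a prompt. Field names are invented.
import copy

SENSITIVE_FIELDS = {"ssn", "card_number", "email"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive values masked."""
    cleaned = copy.deepcopy(record)
    for key in cleaned:
        if key.lower() in SENSITIVE_FIELDS:
            cleaned[key] = "[REDACTED]"
    return cleaned

transaction = {"id": "txn-123", "amount": 42.50, "card_number": "4111 1111 1111 1111"}
prompt = f"Explain this transaction to the customer: {redact(transaction)}"
print(prompt)
```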
Dark Reading: Are response quality and data security related?
Betz: AI users always need to think about whether they are getting quality answers. The reason we do security is so that people can trust their computer systems. If you're putting together a complex system that uses a generative AI model to deliver something to the customer, you need the customer to trust that the AI is giving them the right information to act on and that it's protecting their information.
Dark Reading: Are there specifics AWS can share about how it protects against AI attacks in the cloud? I'm thinking of prompt injection, poisoning attacks, adversarial attacks, that kind of thing.
Betz: With a solid foundation already in place, AWS was well prepared to meet the challenge, as we have been working with AI for years. We have a large number of in-house AI solutions and a number of services we offer directly to our customers, and security has been an important consideration in how we develop these solutions. It's what our customers ask for, and it's what they expect.
As one of the largest cloud providers, we have broad visibility into evolving security needs around the world. The threat intelligence we gather is aggregated and used to develop actionable insights that customers can use through tools and services such as GuardDuty. Additionally, our threat intelligence is used to generate automated security actions on behalf of customers to keep their data safe.
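As an illustrative sketch of how a customer might consume those detections (not a statement of AWS's internals): pulling recent GuardDuty findings with boto3 for an analyst to review. It assumes a GuardDuty detector is already enabled in the account.

```python
# Illustrative sketch only: review recent GuardDuty findings.
# Assumes a detector already exists and credentials allow guardduty:Get/List actions.
import boto3

gd = boto3.client("guardduty")

for detector_id in gd.list_detectors()["DetectorIds"]:
    finding_ids = gd.list_findings(DetectorId=detector_id, MaxResults=10)["FindingIds"]
    if not finding_ids:
        continue
    findings = gd.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
```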
Dark Reading: We've heard a lot about cybersecurity vendors using artificial intelligence and machine learning to detect threats by looking for unusual behavior on their systems. What other ways are companies using AI to protect themselves?
Betz: I've seen customers do amazing things with generative AI. We've seen them take advantage of CodeWhisperer [AWS' AI-powered code generator] to rapidly prototype and develop technologies. I've seen teams use CodeWhisperer to help them create secure code and ensure that gaps in the code are addressed.
We have also built generative AI solutions into some of our internal security systems. As you might imagine, many security teams handle huge amounts of information. Generative AI makes it possible to synthesize that data so it is usable, letting both builders and security teams understand what's going on in their systems, ask better questions, and piece the data together.
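For a rough sense of what that synthesis can look like (a hypothetical sketch, not AWS's internal tooling): using the Amazon Bedrock Converse API to summarize a handful of alerts for an analyst. The model ID and alert format are assumptions.

```python
# Hypothetical sketch: ask a Bedrock-hosted model to summarize alert text.
# Assumes Bedrock access and that the chosen model is enabled in the account.
import boto3

bedrock = boto3.client("bedrock-runtime")

alerts = [
    "03:12 UTC - multiple failed console logins from a new ASN",
    "03:15 UTC - IAM access key used from an unfamiliar geography",
]

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumption: any enabled text model works
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize these alerts and suggest what to check first:\n"
                             + "\n".join(alerts)}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```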
When I think about the cybersecurity talent shortage, generative AI today not only helps improve the speed of software development and the security of code, but it also helps aggregate data. It will continue to help us because it amplifies our human capabilities. AI helps us bring information together to solve complex problems and helps get data to engineers and security analysts so they can start asking better questions.
Dark Reading: Do you see any security threats specific to AI and the cloud?
Betz: I've spent a lot of time with security researchers examining cutting-edge generative AI attacks and how attackers view them. There are two classes of things I think about in this space. The first is that we see malicious actors starting to use generative AI to get faster and better at what they already do. Social engineering content is an example of this.
Attackers are also using AI technology to write code faster. It's very similar on the defensive side. Part of the power of this technology is that it makes a class of activities simpler, and that's true for attackers, but it's equally true for defenders.
The other area I see researchers starting to look at more is the fact that these generative AI models are code. Like other code, they are likely to have weaknesses. It's important to understand how to protect them and to ensure they exist in a defended environment.