The security challenges of AI begin with its definition

As artificial intelligence (AI) continues to attract everyone’s attention, AI safety has become a popular topic in the marketplace of ideas. AI security is capturing the media cycle, AI security startups are surreptitiously emerging left and right, and incumbents are rushing to release AI-relevant security features . It’s clear that security teams are concerned about AI.

But what exactly does “AI security” mean?

Frankly, we still don’t know what security means for AI because we don’t yet know what AI development means. “Safety for X“It usually comes later X has matured – think cloud, network, web apps – but AI remains a moving target.

However, there are some distinct categories of issues that emerge as part of AI safety. These align with the concerns of different roles within an organization, so it is unclear whether they merge easily, although they obviously have some overlap.

These problems are:

  1. Building safe AI applications

Let’s tackle them one at a time.

1. Visibility

Security always starts with visibility, and protecting AI applications is no different. Chances are, many teams in your organization are using and building AI applications right now. Some may have the security knowledge, resources, and experience to do this properly, but others probably don’t. Each team might use different technology to build their applications and apply different standards to ensure they work properly. To standardize practices, some organizations create specialized teams inventory and review all AI applications. Although this is not an easy task in the company, visibility is important enough to start this process.

2. Prevention of data leakage

When ChatGPT was first launched, many companies went down the same path desperately trying to block it. Every week new headlines emerged of companies losing their intellectual property to AI because an employee copied and pasted highly sensitive data into the chat so they could ask for a summary or funny poem about it. This was really all anyone could talk about for a few weeks.

Since you can’t control ChatGPT or any of the other AI on the consumer market, this has become a sprawling challenge. Companies issue acceptable use policies with approved enterprise AI services, but these are not easy to enforce. This issue has attracted so much attention that OpenAI, which caused the scare in the first place, changed its policies to allow it users to opt out to be included in the training set and for organizations to pay to opt out on behalf of all their users.

This problem (users pasting bad information into an app where it doesn’t belong) seems similar to that data loss prevention (DLP) AND Cloud Access Security Broker (CASB) solutions have been created to solve. It remains to be seen whether companies can use these tools created for conventional data to protect data within AI.

3. Checking the AI ​​model

Think about SQL injection, which powered the application security testing industry. It occurs when data is translated into instructions, thus allowing people who manipulate the application’s data (i.e., users) to manipulate the application’s instructions (i.e., its behavior). After years of serious problems that devastated web applications, application development frameworks have risen to the challenge and now securely handle user input. If you’re using a modern framework and going down its paved road, SQL injection is for all intents and purposes a solved problem.

One of the strange things about AI from an engineer’s perspective is that it mixes instructions and data. You tell the AI ​​what you want it to do with the text and then let your users add more text in essentially the same input. As you would expect, this allows users to edit the instructions. Using smart instructions allows you to do this even if the application builder really tried to prevent it, a problem we are all familiar with today timely injection.

For AI application developers, trying to control these uncontrollable patterns is a real challenge. This is a security issue, but it’s also a predictability and usability issue.

4. Building safe AI applications

Once you allow the AI ​​to act on the user’s behalf and chain those actions one after another, you’ve reached uncharted territory. Can you really tell if the AI ​​is doing the things it should do to achieve its goal? If you could think of and list everything that AI might need to do, then you probably wouldn’t need AI in the first place.

It is important to emphasize that this issue concerns how artificial intelligence interacts with the world, and therefore it is as much about the world as it is about artificial intelligence. Most Copilot apps pride themselves on inheriting existing security controls by impersonating users, but are user security controls really that strong? Can we really rely on user-managed and assigned permissions to protect sensitive data from curious AI?

A final thought

Trying to say something about where AI will end up, or by extension the safety of AI, is trying to predict the future. As the Danish proverb says, it is difficult to make predictions, especially about the future. As the development and use of artificial intelligence continues to evolve, the security landscape it is destined to evolve with them.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *