The rapid adoption of generative artificial intelligence (GenAI) is pushing organizations in the Middle East and Africa to strengthen data privacy and cloud security protections in an effort to ward off the most troubling aspects of AI technology.
The good news for security teams: Concerns about GenAI are driving budget growth, with spending on data privacy and cloud security expected to increase 24% and 17%, respectively, compared with 2023, Gartner said in a recent analysis.
The bad news: The landscape of potential threats posed by AI is largely unexplored, and companies are still strategizing to address its disruptive effects on businesses. Recent dangers include employee leakage of intellectual property via chatbots, attackers perfecting their social engineering, and AI “hallucinations” causing unexpected business impacts.
Overall, unauthorized use by workers poses operational risks, while adoption of the technology by attackers means a likely increase in their core technical capabilities and improvements in social engineering attacks, says Nader Henein, vice president analyst at Gartner, whose study covered the Middle East and North Africa (MENA).
“With an LLM scraping LinkedIn, every phishing attack becomes a unique, targeted spear-phishing venture [and] what was once reserved for high-value targets now becomes the norm,” he says. “AI is four decades old, but LLMs and generative capabilities are new and within reach. To say we have control over all potential risks is arrogance.”
Microsoft denounces GenAI abuses
Concerns about the business impact of generative AI are certainly not limited to the Middle East and Africa. Microsoft and OpenAI warned last week that the two companies had detected nation-state attackers from China, Iran, North Korea, and Russia using the companies’ GenAI services to improve attacks by automating reconnaissance, answering questions about targeted systems, and improving messages and decoys used in social engineering attacks, among other tactics. And in the workplace, three-quarters of cybersecurity and IT professionals believe GenAI is being used by workers, with or without authorization.
The obvious security risks aren’t dampening enthusiasm for GenAI and LLMs. Nearly a third of organizations worldwide already have a pilot program in place to explore the use of GenAI in their business, with 22% already using the tools and 17% implementing them.
“[W]ith some upfront technical effort, this risk can be minimized by thinking through specific use cases to enable access to generative AI applications, while looking at the risk based on where the data flows,” said Teresa Tung, chief cloud-first technologist at Accenture, in a 2023 analysis of the top threats related to generative AI. “‘Trust by design’ is a critical step in building and operating successful systems,” she added.
Roots of data privacy
For organizations in the Middle East and Africa, concerns about adopting generative AI – along with updated data protection laws – are the main drivers of increased data protection budgets, while cloud adoption is driving the need to secure companies’ cloud services, according to Gartner forecasts.
Overall, companies and government agencies in the MENA region are expected to spend $3.3 billion on security and risk management this year, a 12% increase from 2023, says Shailendra Upadhyay, senior principal analyst at Gartner.
“Due to the implementation of data protection laws for the processing of ‘personal data’ involving identifiable elements [or] identified individuals, companies in the MENA region will be required to maintain a higher level of data privacy and cybersecurity hygiene in 2024,” he says.
Next in line is spending on cloud security, driven by increased adoption of IaaS, PaaS, and SaaS and the need to purchase cloud security tools, Upadhyay adds.
Concerns around GenAI span both segments: Data security is the top concern among companies implementing the technology, and cloud infrastructure typically provides the underlying GenAI services.
GenAI for cybersecurity
The overall adoption of AI technologies in the Gulf Cooperation Council (GCC) region exceeds that of other parts of the world, including the United States and Europe. The application of the technology, however, is mostly uneven.
According to consultancy McKinsey, around 62% of organizations use AI in at least one business function, compared to 58% in North America and 47% in Europe, but most use it only for marketing operations, sales or services.
“[C]ompanies now implementing AI have only just scratched the surface of what it can offer,” McKinsey said in its State of AI in GCC Countries 2023 report.
Likewise, organizations in the Middle East and North Africa are still in the early stages of their cloud journey.
Relatively expensive internet service, lack of connectivity in many regions and regulatory uncertainty around the cloud have led to a slowdown in demand, according to a second McKinsey analysis. Yet the situation is evolving rapidly: Governments are funding emerging knowledge-based economies and creating new data security regulations to match those rules in other parts of the world.
Meanwhile, GenAI can also be part of the security solution: Cybersecurity companies are adding artificial intelligence and machine learning (AI/ML) to their products as a way to reduce workloads on already overworked teams.
McKinsey recommends three criteria for AI adoption: a clearly defined AI strategy, a team of AI-skilled workers, and a process in place for rapid AI adoption and scaling. Currently, however, fewer than 30% of GCC companies meet each of these three criteria, the company said in its report.