The European Union’s AI law will stifle innovation and competition

European Union (EU) lawmakers overwhelmingly approved legislation to regulate artificial intelligence last week in a bid to guide member countries as the sector grows rapidly.

The Artificial Intelligence Act (AI Act) passed 523–46, with 49 abstentions. According to the European Parliament, the legislation is intended to "guarantee[] security and respect for fundamental rights, while promoting innovation." It is much more likely, however, that the law will hinder innovation instead, particularly given that it regulates a technology that changes rapidly and is not well understood.

"In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed," the law reads.

The legislation classifies AI systems into four categories. Systems deemed unacceptably high risk, including those that seek to manipulate human behavior or those used for social scoring, will be banned. Equally prohibited is the use of biometric identification in public spaces for law enforcement purposes, with a few exceptions.

Regulators will subject high-risk systems, such as those used in critical infrastructure and utilities, to risk assessment and oversight. Limited-risk applications and general-purpose AI, including foundation models like the one behind ChatGPT, will need to comply with transparency requirements. Minimal-risk AI systems, which lawmakers say will make up the majority of applications, will remain unregulated.

In addition, to address risk and "avoid undesirable outcomes," the law aims to "create a governance structure at European and national level." The European AI Office, described as the center of AI expertise across the EU, was established to implement the AI Act. The law also establishes an Artificial Intelligence Board, which will be the EU's main advisory body on the technology.

The penalties for violating the law are no joke: "they range from fines of 35 million euros or 7% of global revenue to 7.5 million euros or 1.5% of revenue, depending on the violation and the size of the company," according to Holland & Knight.

In practice, AI regulation will now be centralized across the member countries of the European Union. The aim, according to the law, is to establish a "harmonised standard" for such regulation, a measure routinely used in the EU.

The EU is far from the only governing body to pass AI legislation to keep the burgeoning technology in check; China introduced its interim measures in 2023, and President Joe Biden signed an executive order on October 30, 2023, to rein in the development of AI.

"To realize the promise of artificial intelligence and avoid the risks, we must govern this technology," Biden said later at a White House event. While the US Congress has yet to come up with lasting legislation, the EU's AI Act could inspire it to do the same; Biden's words certainly echo the EU's approach.

But critics of the new EU law fear that the rules will stifle innovation and competition, limiting consumer choice in the market.

"We can decide to regulate more quickly than our main competitors," said Emmanuel Macron, president of France, "but we are regulating things that we have not yet produced or invented. It is not a good idea."

Anand Sanwal, CEO of CB Insights, echoed the thought: "The EU now has more AI regulations than significant AI companies." Meanwhile, Barbara Prainsack and Nikolaus Forgó, professors at the University of Vienna, wrote in Nature Medicine that the AI Act views the technology strictly through the lens of risk without recognizing its benefits, which will "hamper the development of new technologies while failing to protect the public."

The EU legislation is not all bad. Its restrictions on the use of biometric identification, for example, address a real civil liberties concern and represent a step in the right direction. Less ideal is that the law carves out many exceptions to those restrictions for national security cases, allowing member states to freely interpret what exactly raises privacy concerns.

It remains to be seen whether American lawmakers will adopt a similar risk-based approach to regulating AI, but it is not far-fetched to think it is only a matter of time before the push for such a law materializes in Congress. If and when that happens, lawmakers should take care to encourage innovation while continuing to safeguard civil liberties.
