The United Nations adopted a resolution on the responsible use of artificial intelligence on Thursday, with unclear implications for the global safety of AI.
The proposal, drawn up by the United States – co-sponsored by 120 countries and adopted without a vote – focuses on promoting “safe, secure and trustworthy artificial intelligence,” a phrase it repeats 24 times in the eight-page document.
The move signals an awareness of the pressing questions that artificial intelligence poses today – its role in disinformation campaigns and its ability to exacerbate human rights abuses and inequality between and within nations, among many others – but it demands nothing of anyone and mentions cybersecurity threats only in general terms.
“You need to get the right people on board, and I think this is hopefully a step in that direction,” says Joseph Thacker, principal AI engineer and security researcher at AppOmni. Ultimately, he believes, it gives a basis for telling member states: “Hey, we all decided to do this. And now you’re not following through.”
What the resolution says
The most direct mention of cybersecurity threats arising from artificial intelligence in the new UN resolution is found in subsection 6f, which encourages Member States to “strengthen investments in the development and implementation of effective safeguards, including physical security, artificial intelligence systems security, and risk management across the life cycle of artificial intelligence systems.”
Thacker emphasizes the choice of the term “systems security.” He says: “I like this term, because I think it encompasses the entire [development] life cycle and not just safety.”
Other suggestions focus more on the protection of personal data, including “mechanisms for monitoring and managing risks, mechanisms for data protection, including policies on personal data protection and privacy, as well as impact assessments, as appropriate,” both during testing and evaluation of AI systems and after deployment.
“There wasn’t anything world-changing to begin with, but aligning globally — at least having a baseline standard of what we consider acceptable or unacceptable — is pretty huge,” Thacker says.
Governments face the problem of artificial intelligence
This latest UN resolution follows stronger actions taken by Western governments in recent months.
As usual, the European Union led the way with its own AI Act. The law prohibits certain uses of the technology – such as creating social scoring systems and manipulating human behavior – and imposes fines for noncompliance that can amount to millions of euros or a substantial share of a company’s annual revenue.
Biden’s White House has also made great strides with an executive order last fall, pushing AI developers to share critical security information, develop cybersecurity programs to find and fix vulnerabilities, and prevent fraud and abuse, covering everything from disinformation media to terrorists using chatbots to engineer bioweapons.
It remains to be seen whether politicians will have a significant, global impact on the safety and security of AI, Thacker says, not least because “most country leaders skew older, naturally, as they slowly move up the chain of power. So focusing on AI is difficult.”
“My goal, if I were trying to educate or change the future of AI and its safety, would be pure education. [World leaders’] schedules are so packed, but they have to learn and understand it in order to legislate and regulate it properly,” he points out.