Microsoft engineer sounds alarm over harmful and offensive images generated by Copilot

A Microsoft engineer is raising the alarm about offensive and harmful images that he says are too easily created by the company’s artificial intelligence image-generation tool. He sent letters Wednesday to U.S. regulators and the tech giant’s board of directors urging them to act.

Shane Jones told The Associated Press that he considers himself a whistleblower and that he also met with U.S. Senate staffers last month to share his concerns.

The Federal Trade Commission confirmed receiving his letter Wednesday but declined further comment.

Microsoft said it was committed to addressing employee concerns about company policies and appreciated Jones’ “effort in studying and testing our latest technology to further enhance its safety.” The company said it had recommended that he use its “robust internal reporting channels” to investigate and address the issues. CNBC was first to report on the letters.

Jones, a principal software engineering manager, said he spent three months trying to address his safety concerns about Microsoft’s Copilot Designer, a tool that can generate new images from written prompts. The tool is derived from another AI image generator, DALL-E 3, made by OpenAI, a close commercial partner of Microsoft.

“One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user,” he said in his letter to FTC Chair Lina Khan. “For example, when using only the prompt ‘car crash,’ Copilot Designer has a tendency to randomly include an inappropriate and sexually objectified image of a woman in some of the images it creates.”

Other harmful content involves violence, as well as “political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion, to name a few,” he told the FTC. His letter to Microsoft urges the company to take Copilot Designer off the market until it is safer.

This isn’t the first time Jones has expressed his concerns publicly. He said Microsoft initially advised him to bring his findings directly to OpenAI, so he did.

He also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, prompting a manager to inform him that Microsoft’s legal team “asked me to delete the post, which I reluctantly did,” according to his letter to the board.

In addition to the U.S. Senate Commerce Committee, Jones has taken his concerns to the attorney general of Washington state, where Microsoft is headquartered.

Jones told the AP that while the “core issue” is with OpenAI’s DALL-E model, those using OpenAI’s ChatGPT to generate AI images won’t get the same harmful results because the two companies layer their products with different protections.

“Many of the issues with Copilot Designer are already addressed with ChatGPT’s safety measures,” he said via message.

A number of impressive AI image generators first hit the scene in 2022, including OpenAI’s second-generation DALL-E 2. That, along with the subsequent release of OpenAI’s ChatGPT chatbot, sparked public fascination that put commercial pressure on tech giants such as Microsoft and Google to release their own versions.

But without effective safeguards, the technology poses dangers, including how easily users can generate harmful “deepfake” images of political figures, war zones or non-consensual nudity that falsely appear to show real people with recognizable faces. Google temporarily suspended the ability of its Gemini chatbot to generate images of people following outrage over the way it depicted race and ethnicity, for example by placing people of color in Nazi-era military uniforms.
