Salesforce exec warns of an AI winter driven by user trust issues

Lightning-fast advances in artificial intelligence have necessitated guardrails and a developing philosophy around how to ethically incorporate the technology into the workplace. AI should play the role of copilot alongside humans, not run on autopilot, Paula Goldman, Salesforce’s chief ethical and humane use officer, said Monday during Fortune’s Brainstorm AI conference in London.

“We need next-level controls. We need people to be able to understand what is happening in the AI system,” she told Fortune executive news editor Nick Lichtenberg. “And most importantly, we need to design AI products that take into account what AI is good at and bad at, but also what people are good at and bad at in their decision-making judgments.”

Chief among Goldman’s concerns, as user worries mount, is AI’s ability to generate appropriate content, including content free of racial or gender bias, as well as the spread of abusive user-generated content such as deepfakes. She warns that unethical applications of AI could shrink funding for, and development of, the technology.

“It’s possible that the next AI winter will be caused by issues of trust or people’s adoption of AI,” Goldman said.

The future of AI-related productivity gains in the workplace will be driven by training and people’s willingness to adopt new technology, she said. To promote trust in AI products, particularly among the employees using them, Goldman suggests implementing “mindful friction,” essentially a series of checks and balances to ensure that the AI tools in a workplace do more good than harm.

How Salesforce Implements “Mindful Friction”

Salesforce has started keeping tabs on potential biases in its use of AI. The software giant has developed a marketing segmentation product that suggests demographics for email campaigns: while the AI program generates a list of potential demographics, it is the human’s job to select the appropriate ones so that relevant recipients aren’t excluded. Likewise, the company’s Einstein generative AI platform shows a pop-up warning when models incorporate zip or postal codes, which often correlate with certain races or socioeconomic statuses.

“We’re moving more and more toward systems that can detect anomalies like that and encourage and nudge humans to take a second look,” Goldman said.
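As a rough illustration of this kind of nudge, the sketch below flags input fields that commonly act as demographic proxies and asks a human to review the result; the field names and function are hypothetical assumptions for illustration, not Salesforce’s actual implementation.

```python
# Hypothetical illustration only: these field names and this check are
# assumptions, not Salesforce product code. The idea is to flag model
# inputs that commonly act as demographic proxies and nudge a human to
# take a second look before an AI-generated audience is used.

# Fields that often correlate with race or socioeconomic status.
PROXY_FIELDS = {"zip_code", "postal_code"}

def review_segment(fields: set[str]) -> list[str]:
    """Return human-readable warnings for any proxy fields in a campaign."""
    return [
        f"'{field}' can correlate with race or socioeconomic status; "
        "review the generated audience before sending."
        for field in sorted(fields & PROXY_FIELDS)
    ]

# Example: an AI-suggested audience built partly on zip code.
for warning in review_segment({"age_band", "zip_code", "interests"}):
    print("WARNING:", warning)
```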

In the past, bias and copyright infringement have undermined trust in artificial intelligence. An MIT Media Lab study found that AI software built to identify a person’s race and gender had an error rate of less than 1% for light-skinned men but 35% for dark-skinned women, including well-known figures such as Oprah Winfrey and Michelle Obama. High-stakes uses of facial recognition, such as equipping drones or body cameras with the software to carry out lethal attacks, are compromised by these inaccuracies, said Joy Buolamwini, an author of the study. Similarly, the Yale School of Medicine has found that algorithmic biases in health databases can lead AI software to suggest inappropriate treatment plans for certain patients.
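The study’s headline numbers come from disaggregated evaluation: computing the error rate separately for each demographic subgroup rather than reporting a single overall accuracy. Below is a minimal sketch of that bookkeeping, using made-up placeholder records rather than the study’s data.

```python
# Minimal sketch of a disaggregated evaluation, the kind of per-group
# audit the MIT Media Lab study describes. The records below are made-up
# placeholders, not data from the study.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

sample = [
    ("light-skinned men", "male", "male"),
    ("light-skinned men", "male", "male"),
    ("dark-skinned women", "male", "female"),  # misclassified
    ("dark-skinned women", "female", "female"),
]
for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.0%} error rate")
```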

Even in industries where lives aren’t at risk, AI applications have raised ethical concerns, including OpenAI scraping hours of user-generated YouTube content, potentially infringing content creators’ copyrights without their consent. Between the spread of misinformation and the inability to complete basic tasks, artificial intelligence still has a long way to go before it realizes its potential as a useful tool for humans, Goldman said.

But designing smarter AI capabilities and human-centered safeguards to strengthen trust is what Goldman finds most exciting about the industry’s future.

“How do you design products where you know what to trust, and where you should take a second look and apply human judgment?” she said.
