Critical vulnerabilities in ChatGPT plugins expose sensitive data

Three security vulnerabilities discovered in the plugin functionality used by ChatGPT open the door to zero-click, unauthorized access to user accounts and services, including sensitive repositories on platforms like GitHub.

ChatGPT plugins and custom versions of ChatGPT published by developers extend the capabilities of the AI model, enabling interactions with external services by granting OpenAI’s popular generative AI chatbot access and permissions to perform tasks on various third-party websites, including GitHub and Google Drive.

Salt Labs researchers discovered the three critical vulnerabilities affecting ChatGPT. The first occurs during the installation of new plugins, when ChatGPT redirects users to plugin websites for code approval. By exploiting this step, attackers could trick users into approving malicious code, leading to the automatic installation of unauthorized plugins and subsequent account compromise.
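The approval-flow weakness described above is the classic OAuth problem of an authorization code not being bound to the user's own session. A minimal sketch of the standard mitigation, assuming a hypothetical session dictionary and callback handler (none of these names come from the report), looks like this:

```python
import hmac
import secrets

# Hypothetical sketch: bind a plugin-approval request to the user's
# session with an unguessable "state" value, so an attacker cannot
# splice their own approval code into a victim's flow.

def new_state(session: dict) -> str:
    """Generate a single-use state token and stash it in the session."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state

def verify_state(session: dict, returned_state: str) -> bool:
    """Accept the callback only if it echoes the stored state.

    The token is popped so it can be used exactly once, and the
    comparison is constant-time to avoid timing side channels.
    """
    expected = session.pop("oauth_state", None)
    return expected is not None and hmac.compare_digest(expected, returned_state)

# Usage: a legitimate callback succeeds once; replays and forged
# states are rejected.
session = {}
state = new_state(session)
assert verify_state(session, state)                  # genuine callback
assert not verify_state(session, state)              # state is single-use
assert not verify_state({"oauth_state": state}, "x") # forged state
```

Had the vulnerable flow enforced a check like this, a malicious approval code planted by an attacker would not have been accepted into another user's session.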

Second, PluginLab, a framework for plugin development, lacks proper user authentication, allowing attackers to impersonate users and perform account takeovers, as seen with the “AskTheCode” plugin, which connects ChatGPT with GitHub.

Finally, Salt researchers found that some plugins were susceptible to OAuth redirect manipulation, allowing attackers to inject malicious URLs and steal user credentials, facilitating further account takeovers.
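Redirect manipulation works because a plugin accepts whatever `redirect_uri` the request supplies instead of checking it against the URIs registered for the client. A minimal sketch of the defense, using a hypothetical allowlist (the domain names here are illustrative, not from the report):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of redirect URIs pre-registered for a plugin.
ALLOWED_REDIRECTS = {
    "https://plugin.example.com/oauth/callback",
}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only exact, pre-registered HTTPS redirect URIs.

    Prefix or substring checks are exactly what redirect-manipulation
    attacks exploit (e.g. "plugin.example.com.evil.net"), so the
    comparison must be an exact match against the allowlist.
    """
    parsed = urlparse(redirect_uri)
    if parsed.scheme != "https":
        return False
    return redirect_uri in ALLOWED_REDIRECTS

# Usage: the registered callback passes; look-alike and attacker-
# controlled URLs are rejected.
assert is_safe_redirect("https://plugin.example.com/oauth/callback")
assert not is_safe_redirect("https://evil.example.net/oauth/callback")
assert not is_safe_redirect("https://plugin.example.com.evil.net/cb")
assert not is_safe_redirect("http://plugin.example.com/oauth/callback")
```

With exact matching in place, an injected URL cannot redirect the OAuth credential to an attacker-controlled host.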

The report notes that the issues have since been resolved and that there was no evidence the vulnerabilities had been exploited, but users should still update their apps as soon as possible.

GenAI’s security issues put a vast ecosystem at risk

Yaniv Balmas, vice president of research at Salt Security, says the issues found by the research team could put hundreds of thousands of users and organizations at risk.

“Security leaders at any organization need to better understand the risk, so they should examine which plugins and GPTs their company uses and which third-party accounts are exposed through those plugins and GPTs,” he says. “As a starting point, we suggest doing a security review of their code.”

For plugin and GPT developers, Balmas advises becoming more aware of the internals of the GenAI ecosystem, the security measures involved, how to use them, and how they can be abused. This specifically includes knowing what data is sent to GenAI and what permissions are granted to the GenAI platform or connected third-party plugins, such as access to Google Drive or GitHub.

Balmas points out that the Salt research team has only audited a small percentage of this ecosystem, and says the findings indicate that there is a greater risk related to other GenAI platforms and many existing and future GenAI plugins.

Balmas also says that OpenAI should place more emphasis on security in developer documentation, which will help reduce risks.

Security risks of GenAI plugins will likely increase

Sarah Jones, cyber threat intelligence research analyst at Critical Start, agrees that Salt Labs’ findings point to a wider security risk associated with GenAI plugins.

“As GenAI becomes increasingly integrated with workflows, vulnerabilities in plugins could provide attackers with access to sensitive data or functionality within various platforms,” she says.

This highlights the need for robust security standards and regular audits of both GenAI platforms and their plugin ecosystems, as hackers begin to identify flaws in these platforms.

Darren Guccione, CEO and co-founder of Keeper Security, says these vulnerabilities serve as a “strong reminder” of the inherent security risks involved in third-party applications and should push organizations to strengthen their defenses.

“As organizations race to leverage AI to gain competitive advantage and improve operational efficiency, the pressure to quickly deploy these solutions should not take precedence over security assessments and employee training,” he says.

The proliferation of AI-enabled applications has also introduced challenges to software supply chain security, requiring organizations to adapt their security controls and data governance policies.

He points out that employees are increasingly feeding proprietary data into AI tools – including intellectual property, financial data, business strategies and more – and unauthorized access by a malicious actor could be crippling to an organization.

“An account takeover attack that jeopardizes an employee’s GitHub account, or other sensitive accounts, could have similarly damaging impacts,” he warns.
