Privacy Risk: Public Artificial Intelligence/Machine Learning Tools
Apr 11, 2023
Recently, employees of Samsung were reported to have exposed internal confidential corporate information, including meeting documents and source code, via their use of ChatGPT.
Many people continue to experiment with new artificial intelligence/machine learning (AI/ML) tools like ChatGPT, especially since the release of GPT-4. While such tools have significant potential to create efficiencies in the workplace, companies must alert their employees to be cautious. In Samsung's case, internal sensitive information was exposed; companies may also risk a privacy breach if employees input third-party confidential corporate information into these publicly available tools.
The key privacy/data security issue is that ChatGPT, and other similar publicly hosted AI/ML tools, may retain user input data to further train their models. This means that any information entered into such a platform, whether a question, data, a document, source code, or a confidential agreement, may be stored by the provider, incorporated into model training, and potentially exposed to other users of the tool.
How does your Cyber policy respond?
Cyber policies are meant to respond to a privacy breach, including unauthorized disclosure of private information, such as personally identifiable information (PII) or third-party confidential corporate information.
CAC Specialty suggests that Insureds review their Cyber policy to understand how it may interpret such privacy breaches. A few considerations to be aware of:
- Third-party information may be defined by the policy, and tends to broadly include data, including trade secrets, under the control of the Insured or a third party to whom the Insured is liable.
- Intentional acts, so long as they are not in violation of the law, are typically not excluded. Most policies do not contain a negligence trigger for privacy breaches.
- Only the acts of a member of the Control Group (certain officers of the Insured, as defined) should be imputed to the Insured Entity.
The best insurance is loss prevention. CAC Specialty recommends that companies provide proper training to their employees to ensure that use of new public AI/ML tools is accounted for, and that their corporate privacy policies are updated accordingly.
For more information, please reach out to your CAC Specialty contact.