Generative AI tools such as ChatGPT have been in the spotlight since late 2022, with millions of users engaging with them to generate content for business and other purposes. However, in a recent class action lawsuit filed against OpenAI (the creator of ChatGPT) and related companies, serious legal, moral, and ethical concerns have been expressed regarding the development of generative AI products.
The lawsuit highlights the huge amount of personal information that has been used to train OpenAI's products and ultimately calls for a pause on AI development and more regulation in this space. While the case was filed in California, any protective action taken in response to the concerns raised could have flow-on effects for organisations that develop and use generative AI tools all around the world.
Basis of the action
Among other things, the claimants allege that:
- The defendants stole private information (including personally identifiable information) from millions of internet users (including those who do not use AI tools) without their consent or knowledge in order to train the relevant generative AI products.
- Having trained the products and released them to market, the defendants continue to collect personal information unlawfully from millions of users of the products to aid the development and further training of the products.
The claimants allege that the data was collected using web-scraping techniques from "essentially every internet user ever"1, without consent being sought or notice of the collection being given. The data is alleged to have included private conversations, medical data, information about children, photographs, and general online activity/searches. The claimants characterise this web-scraping as theft by the defendants because it was conducted in secret, without consent, and without the data being purchased from the internet users to whom it relates.
The claimants also allege that OpenAI's privacy policy does not adequately inform the users of its products about the breadth of data sharing that occurs, and that the policies are not prominent enough to constitute a binding agreement.
The claimants further seek to highlight some broader risks posed by generative AI products, including:
- privacy violations – allowing the creators of these products to "develop a chillingly detailed understanding of users' behaviour patterns, preferences and interests"2;
- users' rights to request the deletion of their data being compromised due to the products not being able to "unlearn" what they have already been taught; and
- malware creation – with one source saying that "AI may end up creating malware that can only be detected by other AI systems"3.
What remedies are being sought?
In addition to claims for damages, the claimants are seeking injunctive relief in the form of a temporary freeze on commercial access to and development of certain of OpenAI's products until (amongst other requests):
- an AI Council has been established to approve OpenAI products before their release;
- accountability protocols have been implemented to hold OpenAI responsible for the actions and outputs of its products; and
- effective cybersecurity safeguards for OpenAI's products (as determined by the AI Council) have been implemented.
The claimants highlight the lack of regulation in this space and the need for more stringent checks and accountability before generative AI products are further accessed by the public.
Implications for New Zealand businesses
If the claimants' request for a temporary freeze on access is granted, this could impact the continued availability of ChatGPT and other OpenAI products in New Zealand and around the world. If requests for tighter security and controls on the use of personal information are successful, this could also restrict the future development (and the speed of development) of the relevant tools more generally.
More broadly, if the concerns raised in the lawsuit ultimately result in protective action being taken in the USA, or more widely around the world, this could impact the ability of organisations to develop and release AI tools trained on large datasets which have been collected via unconsented web-scraping techniques.
We will monitor this case and will provide an update once a judgement has been released.
The full claim can be found here.
For more information about generative AI, see our "digital download: Generative AI" series, which can be found here.