ChatGPT (an artificial intelligence (“AI”) generative language model) took the world by storm when it first launched in November 2022. It is designed to generate instantaneous human-like text responses and to engage in dialogue with users through predictive patterns learned from massive training datasets, which include books, websites and other content drawn from the internet at large. Its viral rise in popularity is largely attributed to the model’s ability to contextualise and continually refine its responses (as compared to its predecessors), its versatility (from telling jokes and recommending movies to writing code and drafting documents), as well as its accessibility to the public through various platforms. With the release of the model’s latest version, GPT-4, in March 2023, its global user base is expected to reach an all-time high.
This article will briefly introduce some key legal considerations and risks that come with the utilisation of ChatGPT and other similar AI chatbot technology.
A. Ownership of Generated Content
Readers should note that other AI generative models have their own terms and conditions, which may differ from ChatGPT’s in respect of ownership rights over generated content. Indeed, the terms of several similar AI programs provide that the user is granted only a limited, non-exclusive licence over specific rights to the generated content, or alternatively, that a paid subscription is required before the user is granted such ownership rights.
B. Data Privacy and Confidentiality Concerns
While the user owns the Input referred to above, any information entered into ChatGPT may still become part of its training dataset. As such, any sensitive or confidential information (such as individuals’ personal data or commercial trade secrets) may be stored, processed and used on the servers of the entities operating the service. Further, there is a risk of such confidential information being incorporated into ChatGPT’s Output for other users.
Many large multinational corporations are increasingly cracking down on the use of AI generative tools in the workplace by implementing internal policies that restrict or outright ban their use. One example is Samsung’s recent ban on such tools after it discovered that employees had uploaded sensitive internal code to ChatGPT.
C. Accuracy of Output
A common issue with ChatGPT and other AI generative tools is their tendency to provide incorrect or even entirely fabricated information (a phenomenon known as “hallucination”). Because the Output produced by ChatGPT is superficially plausible and appears to be of high quality, it is tempting for many to rely on such tools to replace everyday tasks, work and decision-making functions.
In June 2023, two lawyers were sanctioned by a US district judge after submitting a court filing plagued with fake citations (i.e. “non-existent cases”) generated by ChatGPT. One of the lawyers explained to the court that he “did not comprehend that ChatGPT could fabricate cases”. Such situations are not novel: ChatGPT and other AI generative tools still lack the capacity for reasoning and critical thinking, leaving them prone to manipulation and susceptible to bias and the influence of their input. Readers should therefore check the accuracy, appropriateness and usefulness of the Output before accepting and using it.
D. Liability and Infringement
By way of illustration, OpenAI’s sharing and publication policy recommends that users who publish content co-authored with its models include a disclosure along the following lines:

“The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication”.
As mentioned, ChatGPT generates Output based on the datasets available to it (which will likely include copyrighted sources obtained from the internet). As such, the resulting Output may be substantially similar to an existing copyrighted work, and commercial use of it (without permission from the copyright holder) could result in a copyright infringement claim against the user. Readers should take care to review the Output to ensure it does not inadvertently reproduce third-party copyrighted material.
As AI generative tools like ChatGPT continue to advance and become more pervasive in our everyday lives, discussions on how such technology should be responsibly used and regulated are gaining momentum amongst business leaders, governments, public organisations and academic institutions around the world. Many jurisdictions are beginning to lay the legal foundation for better regulation of AI; Malaysia, for instance, recently announced its intention to introduce policies and a regulatory framework for the governance of AI within the country. No doubt, this will be a space to watch as legislative developments and further technological advances in AI continue to unfold.
Author: Shawn Zachary Tan, LL.B. (Hons) Queen Mary University of London (UK), Middle Temple.
Disclaimer: The views, thoughts and opinions expressed in the articles belong solely to the author and do not reflect the views of Loke, King, Goh & Partners. Readers of this website should contact their lawyer/attorney to obtain advice with respect to any particular legal matter. No reader, user or browser of this site should act or refrain from acting on the basis of the information on this site without first seeking legal advice from counsel in the relevant jurisdiction. LKGP Advocates shall not be held liable for any liabilities, losses and/or damages incurred, suffered and/or arising from the articles posted on this site.