Navigating Legal Issues in the AI Chatbot Era: The Rise of ChatGPT

ChatGPT (a generative artificial intelligence (“AI”) language model system) took the world by storm when it first launched in November 2022. It is designed to generate instantaneous, human-like text responses and to engage in dialogue with users through predictive patterns trained on massive datasets, which include sources ranging from books to websites across the internet at large. Its viral rise in popularity is largely attributed to the language model’s ability to contextualise and continually refine its responses (as compared to its predecessors), its versatility in application (from telling jokes and providing movie recommendations to writing code and drafting documents), as well as its accessibility to the public through various platforms. With the release of the language model’s latest version, “GPT-4”, in March 2023, the number of users globally is expected to reach an all-time high.

This article will briefly introduce some key legal considerations and risks that come with the utilisation of ChatGPT and other similar AI chatbot technology.

A. Ownership of Generated Content

ChatGPT’s Terms of Use provide that a user may provide input into ChatGPT (“Input”) and receive output generated by ChatGPT (“Output”) – the Input and Output are collectively referred to as “Content”. The user owns all Input and, subject to the condition that the user complies with ChatGPT’s Terms of Use, “OpenAI [developer] hereby assigns to you [user] all its right, title and interest in and to Output”. This means that ownership of the rights to the Output / Content belongs to the user, who is free to use the Content for personal or commercial purposes (such as the sale or publication of the Content), so long as there is no violation of any applicable law or of ChatGPT’s Terms of Use.

Readers should note that other generative AI models will have their own terms and conditions that may differ from ChatGPT’s in respect of ownership rights over generated content. Indeed, the terms of a number of similar AI programs provide that only a limited, non-exclusive licence over specific rights to the generated content is granted to the user, or alternatively that a paid subscription is required before the user is granted such ownership rights.

B. Data Privacy and Confidentiality Concerns

While the Input referred to above is owned by the user, any information entered into ChatGPT may still become part of its training dataset. As such, any sensitive or confidential information (such as the personal data of individuals or commercial trade secrets) may be stored, processed and used on the servers of the entities operating the service. Further, there is a risk of such confidential information being incorporated into ChatGPT’s Output for other users.

Many large multinational corporations are increasingly cracking down on the use of generative AI tools in the workplace by implementing internal policies that restrict or outright ban their use. An example can be seen in Samsung’s recent ban on such generative AI tools after it discovered that its employees had uploaded sensitive internal code into ChatGPT.

C. Accuracy of Output

A common issue with ChatGPT and other generative AI tools is their tendency to provide incorrect information or even to fabricate information that is entirely fictional (a phenomenon known as “hallucinations”). Because the Output produced by ChatGPT is superficially plausible and appears to be of high quality, it is tempting for many to rely on such generative AI tools to replace everyday tasks, work and decision-making functions.

In June 2023, two lawyers were sanctioned by a US district judge after submitting a court filing plagued with fake citations (i.e. “non-existent cases”) generated by ChatGPT. One of the lawyers in question explained to the court that he “did not comprehend that ChatGPT could fabricate cases”. Such situations are not novel: ChatGPT (and other generative AI tools) still lack the capacity for reasoning and critical thinking, leaving them prone to manipulation and susceptible to bias and the influence of their inputs. Readers using generative AI tools should check the accuracy, appropriateness and usefulness of the Output before accepting and relying on it.

D. Liability and Infringement

ChatGPT’s Sharing & Publication Policy also specifies that the Output should not be misrepresented as being entirely human-generated or entirely AI-generated, and that published content must disclose the AI’s role in its creation. The policy also provides stock language which may be adopted by the user:

“The author generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.”

The above makes clear that while the user is required to disclose the AI’s role in creating the content, ultimate responsibility for the publication lies with the user. ChatGPT’s Terms of Use also provide that the Output is supplied on an “as is” basis, with all warranties disclaimed (except to the extent prohibited by law). As such, a user relying on or publishing such Output should bear in mind that they will be legally responsible for any disputes arising from such use of the Output.

As mentioned, ChatGPT generates Output based on the datasets available to it (which will likely include copyrighted sources obtained from the internet). The resulting Output may therefore be substantially similar to an existing copyrighted work, and commercial use of it (without permission from the copyright holder) could potentially result in a copyright infringement claim being made against the user. Readers should take all precautions to review the information generated in the Output to ensure that it does not inadvertently reproduce third-party copyrighted materials.

Concluding Comments

As generative AI tools like ChatGPT continue to advance and become more pervasive in our everyday lives, discussions surrounding how such AI technology should be responsibly used and regulated are gaining momentum among business leaders, governments, public organisations and academic institutions. We are beginning to see many jurisdictions around the world laying the legal foundation for better regulation of the use of AI; Malaysia, for example, has recently announced its intention to introduce policies and regulatory frameworks for the governance of AI within the country. No doubt, this will be a space to keep a watchful eye on as legislative developments and further technological advances in AI continue to unfold in the near future.

Author: Shawn Zachary Tan, LL.B. (Hons) Queen Mary University of London (UK), Middle Temple.

Disclaimer: The views, thoughts and opinions expressed in the articles belong solely to the author and do not reflect the views of Loke, King, Goh & Partners. Readers of this website should contact their lawyer/attorney to obtain advice with respect to any particular legal matter. No reader, user or browser of this site should act or refrain from acting on the basis of the information on this site without first seeking legal advice from counsel in the relevant jurisdiction. LKGP Advocates shall not be held liable for any liabilities, losses and/or damages incurred, suffered and/or arising from the articles posted on this site.