Woodfines Senior Associate Gillian Harding spent a varied and enlightening afternoon in conversation with Chat GPT. The following article summarises her findings.
AI has long been popular in fiction, in books, films and TV. In many cases, fictional AI is benign, often amusing. Occasionally…well…it’s I, Robot and Terminator. As Chat GPT itself said when I could only think of I, Robot and I asked it for fictional examples of ‘AI gone wrong where it has very negative consequences for humanity’:
‘These examples illustrate the potential dangers of AI systems if they were to gain unchecked power or act against humanity’s interests. They serve as cautionary tales about the importance of ensuring ethical design, oversight, and appropriate safeguards when developing and deploying advanced AI technologies.’
I’d heard a fair few negative things about Chat GPT before trying it. I’ve since had a long chat with Chat GPT myself, covering whatever topics and questions occurred to me. I’m not going to go over everything I discussed with Chat GPT here. If you’re curious (and patient – forget a cup of tea, you’ll need a pot or even two and a pack of biscuits if you settle in to read everything it said), you can read most of the questions and answers here [Complete Conversation]. I should warn you though that if you thought solicitors were overly wordy and technical, you ain’t seen nothing yet.
A disclaimer about these links: please don’t take Chat GPT’s replies to my questions as accurate and they’re absolutely not advice given by Woodfines or a reflection of the advice we would give in the Company/Commercial team. I’m sharing them to give you a feel for what talking to Chat GPT is like, not to offer advice through them. I’ll go into the problems a little more below. To view one topic at a time:
Another quick disclaimer: where I refer to OpenAI policies, they’re correct as at 21 May 2023 but things are changing rapidly so do check OpenAI yourself to make sure what I say here about their policies is still current when you’re reading this if you want to use Chat GPT.
An obvious question for any lawyer looking at Chat GPT is ‘who owns the copyright in its replies?’ Chat GPT itself seems a little uncertain on this point [Link 12], but OpenAI has now confirmed that it doesn’t consider itself the owner of content generated using Chat GPT. That post was published a week or so before 21 May 2023 and runs contrary to some other guidance you may see from other firms, based on replies given to them by Chat GPT itself. From the perspective of copyright law, you can use the replies you get from Chat GPT however you like, hence my copying them in full for you. That said, I should warn you that if you put someone else’s copyrighted work in, you may well get something out that infringes the original owner’s rights; that would depend on exactly how you use it and how it replies to you. I also can’t rule out the possibility that, in drawing material for its answers from a variety of sources, its replies will include prohibited copying of copyright material, so tread cautiously.
The next issue is confidentiality. Chat GPT says it doesn’t keep records of conversations [Link 1] but OpenAI does use records of them for training it. I would strongly advise against sharing confidential information about yourself or your business with Chat GPT, especially if you’re not using it through the OpenAI site. It is possible to stop OpenAI from saving your chat history and using it (https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt) (as with its copyright policy, not everyone online has kept up with this change), but they can’t guarantee that the conversation itself is secure in real time.
Chat GPT also says it doesn’t have access to confidential information [Link 13]. It got a little ‘offended’ by the idea it might be able to access my cat’s medical records and told me they’re confidential, when what I was actually asking was whether it had access to veterinary journals, because Oisín’s case was so rare that I was curious to see whether his consultant ever wrote it up for a journal. It doesn’t. Chat GPT’s lack of access to paid-for professional journals means that it lacks important information on a range of subjects when formulating its replies.
And that brings me to the thorny question of credibility. Does Chat GPT give accurate answers? I’d already heard that it doesn’t always give accurate answers. Plenty of examples have been reported by users, including made up quotes and citations. This was part of my reason for all the veterinary questions. I didn’t want to put real life clients’ problems to it and my cat happened to walk past as I was searching for inspiration.
I did put general legal questions to it as well, though. Chat GPT’s answer to my request that it prepare a set of terms and conditions [Link 6] was a little surprising because it had previously said it was capable of doing this. It said no. When I went back and looked at my original question [Link 14], I realised it meant it could prepare an extremely basic, non-specific, generic template. If it can’t manage to prepare a simple template for the sale of goods to consumers in England online, there’s no point in asking it for a template at all. When I pressed it, it listed a set of terms to include, which doesn’t really serve a practical purpose for clients. I view all this as a good thing because it’s absolutely right that:
‘It’s important to remember that legal documents such as terms and conditions should be carefully crafted to address your specific business operations and comply with the applicable legal framework. Relying on a generic template may not provide the necessary level of protection or account for the intricacies of your business.’
I couldn’t have put it better myself. I also had to repeatedly remind it, in the course of our long conversation, that I’m a solicitor myself, and it still didn’t stop directing me to take legal advice. It lacked the flexibility to take my stated knowledge and experience into account for more than one question at a time, despite ‘remembering’ my side of the conversation. Either that or it didn’t ‘trust’ me.
Its answers to some simple company law questions [Link 7] fell short. The replies were detailed but it didn’t pick up on things it should have, such as basic issues relating to confirmation statements filed at Companies House. It also continued to give a wrong answer, even after being corrected. In my experience, people are already getting these things wrong, and AI won’t help if it isn’t up to date on current Companies House forms and doesn’t tell you that certain information has to be filed at the time of the event in question (which is nothing new).
If you read the partnership vs family investment company section [Link 8], you’ll find that it contradicted its previous answer entirely after I pointed out it was going down the wrong path. I then had to point out the potential tax risks of trying to use a partnership structure and ultimately sought clarification of its answers. The conversation shows how it adapts to new input but, crucially, I was only able to give it that additional input because of my own knowledge and experience. A non-lawyer would probably have just taken the first reply at face value.
I asked it questions about a few current news stories too and its answers were interesting. It has a tendency to sit on the fence, or to cite the law and then leave you to draw your own conclusions, which, to my mind, isn’t much more helpful than leaving you to Google the issues on your own. And don’t ask it to predict an election result based on historical data. It didn’t like that at all, when what I was getting at was that in my time here, second place has flip-flopped between two parties while the winner remains a constant. I wanted it to drill into the statistics for me but it didn’t want to go near the issue.
The disclaimers came on thick and fast on the veterinary questions I asked [Link 9], as well as the legal ones, but I view all these disclaimers as a positive thing at this point in AI’s development. It has clearly been programmed to direct you to an appropriate professional. I asked it more about the appropriateness of giving advice, particularly medical or veterinary advice, where examination is needed [Link 15] and it did concede that it can’t replace us humans and has no intention to. I also asked it whether it accepted some people would ignore its disclaimers [Link 16]…yes. It does recognise that.
As far as the accuracy of the veterinary information goes, I tried googling ‘triad of immune-mediated cytopenias’, which is what it called Oisín’s illness, and the phrase didn’t appear in the first couple of pages of results, even though it’s clear that there are three different immune-mediated cytopenias which can, as with Oisín, occur all together in catastrophic fashion in very rare cases. I asked it about this [Link 17] and it said there’s a difference between the triad of immune-mediated cytopenias and pancytopenia. I can’t verify that but, to be fair, Oisín’s condition is vanishingly rare. It may be that Google results focus on pancytopenia because it became marginally less rare in cats after a number of UK cases, about nine months after Oisín’s crash, in which it was triggered by eating cat food. It may well be right that Oisín’s situation does actually have a different name.
Database designers have traditionally said ‘garbage in, garbage out’ and I wondered to what extent the quality of the questions determines the quality of the answers. It’s certainly much easier to use Chat GPT if you’re an expert in the field under discussion because you’re able to give corrections and direction. But it seems it’s not as simple as all that, because some of the people reporting problems with the answers they received, who are experts in the field they’re asking about, have accused Chat GPT of making up quotes and citations. I specifically asked Chat GPT about this accusation [Link 10] and it said that it can’t, that it’s not programmed to do that, and that it relies on the accuracy of information it has access to. To be on the safe side, I asked if that includes those hotbeds of misinformation, forums and social media [Link 18], and got a firm no. That does raise the question: ‘where on earth are the reported false quotes, citations and misinformation coming from?’ The bottom line is that it would be unwise to rely too heavily on Chat GPT’s replies.
Fortunately, Chat GPT isn’t out to take over from lawyers and is clear about its limits [Link 4]. It knows, for example, that it can’t think critically, although it claims it can simulate critical thinking to some extent [Link 19]. It may be able to answer the questions that occur to us lawyers along the way but we can’t rely on those answers (yet) and can it find a big picture solution for a client? No and it readily acknowledges that fact. In fact, it takes great pains to emphasise that it is a tool, potentially for use by professionals, and not a professional itself. It did think that some day AI might take over the task of writing detailed notes of conversations between solicitors and their clients, when I asked it about that [Link 20], although this was (quite properly) couched in warnings that it would still be up to the individual solicitor to check its work. That would save solicitors time and clients money so watch this space.
Fundamentally, Chat GPT is no substitute for professional legal (and other) advice and if you use it, it will tell you that (ad nauseam). Its warnings to take legal advice don’t say this in so many words, but its position on risk and liability for its mistakes is [Link 21]:
‘If you rely solely on the information provided by an AI language model or any other online source, without consulting a legal professional, it is important to understand that you assume the associated risks and liabilities.’
The bottom line is that my experience strongly suggests that you need to be an experienced lawyer yourself, able to fact check and correct its mistakes, in order to use it in a way that gets accurate results. So, my job is safe for now. Phew!
Note from author: 30 May 2023
I carried out my ‘conversations’ with Chat GPT mostly on 21 May. Before the resulting article could be posted to our website, two instances of false citations derived from Chat GPT being put forward in Court were reported. One case involved an attorney in the USA; the other involved an unrepresented litigant in the UK. I asked Chat GPT again how false citations arise, and it initially gave me a less defensive answer than before. I’ve added that to the conversation here [Link 10]. I’m not sharing it, but I was appalled that Chat GPT started speculating about the character of the American attorney to get itself off the hook for sharing false information. These cases really emphasise the importance of my message in this article, as Chat GPT itself confirms in what I hope will be the final addition to the transcript.
The author Gillian Harding is a Senior Associate with Woodfines Solicitors specialising in corporate and commercial law.