Why I Do Not Use ChatGPT or Bard in my Work

It seems that everyone is talking about the power of ChatGPT and Bard to revolutionise the world of work. Social media is full of people explaining how these tools can make work more efficient. So why would anyone not use them? As the author of a book on whether human interpreters will be replaced by machines, and as someone who keeps a keen eye on technology, here are the reasons why I do not use ChatGPT, Bard, or any similar Large Language Models in my practice, and have no intention of doing so for now.

LLMs Produce Important Errors

I have tested both ChatGPT and Bard and found that both produce generic advice on interpreting, often with important errors. While I have stopped publishing my findings publicly as Microsoft seem to be harvesting public critiques as free advice for improving the model, I have found that both services make mistakes on how interpreting can be delivered and what it takes to get it right.

Both services either don’t know about individuals or make erroneous statements about them. They list publications that either don’t exist or were written by different authors. They do not recognise that different approaches are needed for interpreting taking place in different contexts.

In short, LLMs spout convincing, generic fluff when it comes to interpreting. As that is the field I know best, I can only assume that something similar is going on in other fields too. As a consultant, I simply cannot afford to pass on information that most likely contains subtle but nonetheless important errors.

Any Efficiency Gains from LLMs Are Lost Due to Increased Verification

This leads to the second issue: while LLMs can write blog posts, rephrase text, provide summaries, and sometimes even write computer code, their propensity for mistakes means that everything they write needs to be checked. You might get a blog post in seconds, but you might then need to spend hours checking it. You might get a summary before your kettle boils, but you need to check it against the original anyway.

If we count the time needed to do the whole task, rather than just the time to produce the raw LLM output, it is doubtful that any time is saved. This is before we even think about the time it takes to train humans to create prompts that give better results. If the time needed to learn a tool and check its output is greater than the time needed to do the task manually, the tool is not helpful.

LLMs Rely on Massive Harvesting of Copyrighted Data

People are gradually realising that the power behind LLMs is a massive harvesting of online data, without any payment to the rights holders. Whether this is a copyright breach is something for the courts to decide but it does seem that content licenses have not been taken into account. Add in the availability of personal data on the internet due to breaches and there may be privacy implications too.

As a writer myself, I am actively looking for ways to restrict AI access to my content until the big AI firms join the existing copyright licensing schemes and pay, just like anyone else would have to. They should play by exactly the same rules as everyone else. Until they do, I feel it is hypocritical for me to use their tools in my work.

LLMs are Very Power Hungry

At a time when we are all trying to reduce our carbon emissions, it makes little sense to use LLMs, which are incredibly power-intensive, for work that we could do just as quickly ourselves.

Client Data Should Not Be Sent to LLMs

Finally, the free versions of LLMs harvest incoming data to train the model. This means that documents we ask the models to summarise, or to extract terms from, become part of the training data. This could lead to breaches of client confidentiality and possibly embarrassing leaks. While the nature of LLMs makes it unlikely that the content will come out exactly as it went in, we still need to remember that interpreters have a duty not to reveal any sensitive client information. Submitting client documents to an LLM breaches that duty.

For this reason, I am very concerned by recommendations that interpreters use LLMs for term extraction or the creation of glossaries, just as I am concerned about interpreters using free machine translation systems to provide gist translations of client documents. Our efficiency must not come at the cost of client confidentiality.

My View in Brief

I do think that there will likely be responsible and useful applications of LLMs in the future, and that they have the potential to be helpful. At the moment, and in the medium term, the risks associated with them, their performance, and the basis on which they work mean that I will not be using them. There needs to be an open, honest, and balanced conversation about their use. They are not a threat to interpreting, but they do have risks. They are not a cure-all, but they do have potential uses. For now, however, Integrity Languages will remain LLM-free for all interpreting, writing, and consulting work.
