Integrity Languages

Tag Archives: machine interpreting

In-person, remote and machine interpreting: A challenge

By: Jonathan Downie    Date: June 6, 2018

Everyone has heard of the Turing test, the idea of which is to see whether a chatbot can pass as human. What if there were a similar test or competition for machine interpreting?

To make life interesting and fair, I would like to suggest a single, authentic event with interpreting in the same language pairs supplied three ways: by professional human interpreters working in person, by professional human interpreters working via a remote interpreting platform with a video feed, and by one or more machine interpreting solutions.

It would then be a simple matter of providing one audio channel for each of the human teams (one in-person, one remote) and for any machine interpreting software being tested. The audience could then listen to the feed of their choice and record their impressions.

Vitally, they would be asked to suggest which channel was which (human in-person, human remote, machine(s)). Their impressions could then be cross-checked by looking at which channels were listened to the most.

As too many machine interpreting developers have discovered, laboratory results are simply not a reliable guide to real-world performance. The only way to truly test the state of machine interpreting and make useful improvements is to run a field test. And, in my view, that field test should be as realistic and comprehensive as possible.

This is why I would suggest the following setup for the event:

2 hours of simultaneous interpreting, alternating between languages every 20-30 minutes (e.g. English to German, then German to English, and back again), followed by,

a 1-2 hour factory tour (Wi-Fi signal not guaranteed), with the audience listening on a system similar to commercially available tour guide systems, followed by a return to the conference hall for,

a 1-2 hour discussion with the questions unknown beforehand to any of the teams or the speakers but still on the themes covered by the event.

The two sessions in the conference hall would, ideally, be live-streamed to capture the reactions of those outside the event. Following the success of the Heriot-Watt University Multilingual Debate, it should be possible to put on a conference that would be interesting in its own right and otherwise identical to one interpreters would normally work at. This breadth of work will be familiar to most freelance conference interpreters.

In fact, there is no real reason why a company could not use such an event to give an insight into their products. A whisky company could talk about their environmental policies, fishing industry representatives could talk about regulations and give a tour of a working boat, a manufacturing company could showcase their innovation.

Since few clients have large pre-aligned bilingual databases, I would also suggest that every one of the teams receive identical briefing documents from the client. And, of course, if a speech doesn’t arrive on time and some creativity is needed, that just adds to the realism.

Given what we know about interpreter motivation, the interpreters and AV team should be paid at their normal rates and should be chosen by a consultant in the same way as they would be for a normal job. There should perhaps also be some sort of monetary award for the winners, to encourage further development.

I believe that this would not only give us an accurate view of where machine interpreting is but would encourage developments in the field, showcase excellent interpreting and provide a platform for a company with interesting products to show what they do. There would likely be significant press interest, just as there has been when machine translation companies claim to match humans or when literary translators are pitched against their digital counterparts.

It should not be difficult to find a venue to host it and I am sure that any of the good interpreting AV companies would relish the challenge of finding a way to keep the test fair and hide the identity of who was on each channel, even during the factory tour.

It would just need a corporate sponsor and a company or organisation willing to be guinea pigs. If you know anyone who would be interested in playing any role, please drop me an email. I would welcome any feedback on the idea.

Thou Shalt Not Gloat: What the Tencent Fiasco Means for Interpreters

By: Jonathan Downie    Date: May 1, 2018

Another day, another company trying to replace human interpreters and failing miserably. As I discussed last week, the Tencent interpreting fiasco means that, for now at least, the jobs of human interpreters are safe … but is that it?

It’s a familiar story. A company tries to develop a machine interpreting system with pretty much zero knowledge of what interpreters actually do, apart from the fact that it has something to do with words. The company tells everyone what wonderful technology they have and launches it in a blaze of glory. And then, on its first true public test, it flops.

The story has repeated itself since at least 2012, and recently Chinese tech giant Tencent followed suit. Another demo, another set of giggling journalists. Will tech companies never learn?

While professional interpreters might be tempted to gloat or laugh, neither response is helpful. The fact is that tech companies will never give up on machine interpreting; the prize is just too great. And for professional interpreters, the implications of that have never been clearer. Read on to examine them.

Continue reading

The Tencent Interpreting Fiasco: a buyer perspective

By: Jonathan Downie    Date: April 27, 2018

It was hard to miss. Tencent, one of the biggest technology companies in China, aimed to show off their technological prowess by turning over the interpreting at their major tech showcase to a machine. And the results were … not great. The machine spouted gibberish, journalists were amused and suddenly the job of human interpreters seemed safe.

The problem is that most of the discussions of the whole affair were very short-sighted. For businesses and interpreters alike, such short-term “will humans have a job next year?” thinking is strategically useless. In fact, the whole “humans or AI” debate is misleading.

In this post, I will look at what business leaders, events professionals and other buyers need to learn from the Tencent fiasco. Next week, I will look at the perspective of interpreters.

So what do buyers need to learn from the Tencent machine interpreting fiasco?

Let’s start with the obvious: machine interpreting is not ready to be used at important events.

Despite the claims of companies selling the latest gadget and the claims of machine translation suppliers, the best that current technology can do is help you get directions to the train station or help you order pasta. In fact, the latter is even one of the use cases suggested by Google themselves!

There are many reasons why machine interpreting is not even nearly ready to take over your next event but the most important to remember at the moment is that machine interpreting can only deal with words. While words are important, they will always get their meaning from context, intonation and allusion.

Saying “we have no reservations” takes on entirely different meanings depending on who says it. If a hotel receptionist says it, it probably means that your travel agent has messed up. If a potential client says it five minutes before they are due to sign a large contract, it means something completely different. Currently, machine interpreting has no way of determining the wider context of how language is used, apart from sometimes being able to take into account what was said before.

Human interpreters are trained to understand language in context. This is why they ask for detailed briefs before they accept assignments. This is why sometimes they will refer assignments to their colleagues, who might know a specific context better than they do.

Until machine interpreting can understand the social and cultural context of what is being said, it will be as likely to get you into trouble as to help you seal the deal.

The Tencent fiasco not only shows this principle in action but demonstrates the need to be highly critical of the claims of machine interpreting providers. Tencent’s claim of “97% accuracy” most likely came from laboratory results and limited in-house testing. The only results that matter from a machine interpreting provider are the experiences of clients using it in environments similar to yours. For now, it will pay to ignore any research that comes out of testing laboratories; it simply doesn’t reflect real-life conditions.

This doesn’t mean that we should ignore or ridicule machine interpreting. It will have its uses. It may be worth equipping your sales team with it, to make it easier for them to find their way around foreign cities. One day soon, it may even make human interpreting more effective by helping interpreters to prepare better.

But its uses are still limited and there are still privacy concerns attached. Anything said into a machine interpreting app can and will be used as training data. As soon as you turn on machine interpreting, you basically sign away your rights to keep what you said private.

As much as the Tencent fiasco serves as a warning of the dangers in being overconfident in your newest product, it can and should launch some serious debates about the relevance and usefulness of such technologies for businesses and the extent to which we are happy to sign away privacy in return for technological improvements.

Want expertise in setting up interpreting for your business? Drop me a message to set up a free Skype call.

Over-hyped, Under-thought and nowhere near ready: Machine Interpreting

By: Jonathan Downie    Date: July 12, 2017

A few months ago, I was flying to an important meeting and flicking through the in-flight magazine (for pitching purposes, you see). As I did, I spotted a short paragraph touting the latest technological development: an in-ear device that promised to translate flawlessly from one language to another. It seems that from now on event managers can dispense with us interpreters for good and just load up on a supply of tiny devices to make sure everyone has a great event, no matter which language they speak.

Obviously that isn’t going to happen.

Despite the wonderful headlines in the press and the incredible claims made by marketing departments, the chances of machine interpreting ear-pieces doing anything more than replacing phrasebooks are minuscule.

Why?

Firstly, there is nothing fundamentally new in the technology used in such devices. Machine translation of some sort or another has been around since the 1940s and is still producing results that range from the plausible to the ridiculous. Remember when Google Translate turned Russia into Mordor? Remember all those websites displaying mangled English because of poor use of machine translation?

Without going into the fine detail of where machine translation actually stands right now (you can read that in this article), the gist is this: unless you are willing to spend months training it and are happy to restrict your language to controlled phrases, the results of machine translation will be a bit dodgy.

When it comes to magical translation ear-pieces, machine translation is twinned with voice recognition – the technology that is still giving us frustrating helplines, semi-useful virtual assistants and the fury of everyone who doesn’t have a “standard accent”. Sure, voice recognition technology is advancing all the time but it still works best when you use a noise-cancelling microphone and speak super-clearly – not quite the thing for crowded cafés or busy conferences.

The second reason why translation headsets are not a cure-all is that interpreting is about far more than just matching a word or phrase in one language with a word or phrase in another. Language is a strange beast: in all communication, people use idioms, metaphors, similes, sarcasm, irony, understatement and implication, and they are tuned to social cues, intentions, body language, atmosphere and intonation. At the moment, and for as much of the future as we can predict, computers will struggle to handle even one of those things.

Human interpreters have to be expert people readers as well as having enviable language knowledge. Ask the CEO whose interpreter helped sort out a cultural and terminological misunderstanding that threatened to cost the company a deal worth several million pounds. Ask the doctor who worked with an interpreter who was culturally aware enough to help give a patient the right treatment. Ask the speaker whose interpreter prevented him from making a big, but accidental, cultural mistake.

When human interpreters work, they don’t simply function as walking dictionaries. They take what is said in one language, try to understand its meaning, tone, and purpose and then recreate it in another language in a way that will work in that specific context.

The only way that machines could ever do that would be if meetings and events were just about stuffing information into people’s heads and human beings always said exactly what they meant in a completely neutral way. With the current emphasis on the importance of delegate experience and our newfound awareness that people are more than just robots, it makes sense to recognise that their communication deserves to be handled by experts, not machines.

So the next time someone tries to persuade you that you should let machines take over the interpreting at your event, just remember: for information processing, use a computer; for experience and expertise, work with humans.