

How to Make the Most of Neural Machine Translation

Neural Machine Translation Blog Series – Part 2

This blog is part two of our series on Neural Machine Translation and its impact on LSPs serving the life science industry. Part one, Everything You Need to Know about NMT, is available here.

In our introduction to NMT, we provided an overview of the neural machine translation model, its development, and the most common misconceptions surrounding its usage. We now turn our attention to NMT functionality and look at the applications to which it is best suited.

A Question of Choice

The purpose of Neural Machine Translation models is to automatically translate text from one language into another. Current NMT systems work by converting the written source text into numerical vectors and feeding these through a neural network. The network then chooses a translation based on the strength of the connections (weights) it has learned, which are developed and reinforced using training data.
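To make the vector idea concrete, below is a deliberately oversimplified Python sketch (our illustration, not from the article, and far from a real encoder-decoder model). It embeds source tokens as vectors and scores candidate target tokens through a weight matrix; in a real engine, these weights would be learned from training data rather than generated at random.

```python
import numpy as np

# Toy vocabularies for a tiny English -> German example (illustrative only).
src_vocab = {"the": 0, "patient": 1, "recovered": 2}
tgt_vocab = {"der": 0, "patient": 1, "erholte": 2, "sich": 3}

rng = np.random.default_rng(seed=0)
emb_dim = 8

# 1. Each source token is converted into a numerical vector (an embedding).
src_embeddings = rng.normal(size=(len(src_vocab), emb_dim))

# 2. A weight matrix maps source vectors to scores over target tokens.
#    In a real NMT engine these connections are learned and reinforced
#    with training data; here they are random, so the output is meaningless.
weights = rng.normal(size=(emb_dim, len(tgt_vocab)))

def translate_token(word: str) -> str:
    """Pick the target token with the strongest learned connection."""
    vec = src_embeddings[src_vocab[word]]            # text -> vector
    scores = vec @ weights                           # vector -> target scores
    probs = np.exp(scores) / np.exp(scores).sum()    # softmax over targets
    best = int(np.argmax(probs))
    return [w for w, i in tgt_vocab.items() if i == best][0]

print([translate_token(w) for w in ["the", "patient", "recovered"]])
```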

Figure 1 (Pielmeier and Lommel, 2019) below outlines a basic version of this workflow using an MT engine.

Figure 1 (Source: CSA Research)

Even at its most basic level, the implementation of an MT model requires several elements: training material, human post-editing and the engine itself. This suggests that the success of any machine learning technology depends on proper selection and customisation. Rather than viewing NMT as a one-size-fits-all solution, users will extract maximum benefit by aligning the algorithm with specific industry or company requirements. In other words, making the most of NMT is a question of choice.

Choosing the Right Engine

The starting point for rolling out a machine translation service is choosing the right engine. There is a host of providers offering everything from open-source and cloud-based software to predictive-modelling engines with wizards, visualisation tools and many other features.

Their underlying approaches also differ: applications such as Moses, funded by the European Commission, use statistical models, whereas Google relies on neural networks for most of its translation services, including Google Translate and Google Cloud AutoML Translation. MT therefore has a variety of possible applications, with each available engine conceived in response to a particular need. Most of these applications also allow a high degree of customisation. Instead of relying on ready-made software as-is, LSPs should build dynamic, domain-specific engines that are tailored to their own or their clients’ requirements.
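As one illustration of how low the barrier to a ready-made engine has become, the sketch below uses the open-source Hugging Face transformers library with a pre-trained Helsinki-NLP OPUS-MT Marian engine. Neither is mentioned in the article; this stands in for any off-the-shelf neural engine that an LSP might evaluate before customising.

```python
# Requires: pip install transformers torch sentencepiece
from transformers import MarianMTModel, MarianTokenizer

# A pre-trained, open-source English -> French engine (illustrative choice).
model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

source = ["The patient reported no adverse events during the trial."]
batch = tokenizer(source, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

A domain-specific engine of the kind described above would start from a base model like this and be fine-tuned on the LSP's own parallel data.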

Choosing the Right Training Data

We spoke in part one about deep learning, and highlighted the importance of engine training for any automated application. Nowhere is this more important than in NMT. It is said that NMT output is only as good as its training data (Bond, 2020), the computing idea of “garbage in, garbage out” (GIGO) being so well known that it has been included in the Oxford English Dictionary. As the engine relies on training data to build and later strengthen its learned connections, it is crucial that the training materials are of good quality.

However, while the quality of training data is important, so is choosing the right type of text. Machine translation engines work best with texts that are narrow enough in range to yield appropriate results and avoid nonsensical output. For instance, an NMT algorithm for life sciences should be trained specifically on regulatory or clinical trial documents if that is the user’s domain, rather than on general medical texts. In turn, this reduces the negative impact of data noise on MT training (ibid.) and cuts down on the time needed for post-editing.

Here LSPs are at a considerable advantage, benefitting from sizeable archives of legacy translations, translation memories and glossaries. As these translations have already undergone stringent QC, they make ideal training materials. Moreover, the document range is narrow and domain-specific, whilst also being sufficiently varied to allow the engine to learn and make probabilistic predictions when producing a translation output.
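As a hypothetical illustration of turning such legacy assets into training data, the sketch below parses a TMX translation memory export (TMX being the standard interchange format for translation memories) and applies a simple length-ratio filter to screen out noisy, misaligned segment pairs. The file name, language codes and threshold are illustrative, not from the article.

```python
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def load_tmx_pairs(path, src_lang="en", tgt_lang="de"):
    """Extract (source, target) segment pairs from a TMX translation memory."""
    pairs = []
    root = ET.parse(path).getroot()
    for tu in root.iter("tu"):
        segs = {}
        for tuv in tu.iter("tuv"):
            lang = tuv.get(XML_LANG, "").lower()
            seg = tuv.find("seg")
            if seg is not None and seg.text:
                segs[lang[:2]] = seg.text.strip()  # "en-US" -> "en"
        if src_lang in segs and tgt_lang in segs:
            pairs.append((segs[src_lang], segs[tgt_lang]))
    return pairs

def filter_noise(pairs, max_ratio=2.0):
    """Drop pairs whose length ratio suggests misalignment (data noise)."""
    clean = []
    for src, tgt in pairs:
        ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
        if ratio <= max_ratio and src != tgt:
            clean.append((src, tgt))
    return clean

# Hypothetical usage with an exported translation memory:
# pairs = filter_noise(load_tmx_pairs("legacy_tm_export.tmx"))
```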

Training NMT Engines for Life Science Translations

We established that the best training data for NMT engines has to be ‘just right’: neither overly general nor restrictively narrow. We now look at the types of documents commonly found in the life science industry which lend themselves to automated translation.

Figure 2 below outlines how different document types require a different mix of human and machine involvement to translate. Based on this, we can infer that machine translation is best suited to repetitive, technical documents with more limited terminology and relatively free of nuanced stylistic language. At the other end of the spectrum sits creative translation, along with texts considered high-risk, where human translation is mandatory. This could include translations for regulatory submissions, clinical trial documentation or hospital reports.

This still leaves a variety of other documents for which machine translation could be a time- and cost-effective option (a simple triage sketch follows the list):

  • Batch records
  • Manufacturing documents
  • User guides/manuals
  • Validation documents
  • Letters and emails
  • Inspection reports
  • Medical papers
  • IFUs
  • ADR spreadsheets
  • CTDs
  • CMC docs/letters
  • Internal training materials
  • Websites
  • Analytical docs
  • Reimbursement forms
  • Audit reports
  • Licenses, test reports
  • Risk assessment docs
  • Technical docs
  • CIOMS forms
  • Cover letters
  • SOPs
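
The split described above could be expressed as a simple routing rule. The sketch below is purely hypothetical (the category names and function are ours, not the article's) and shows how an LSP might triage incoming documents between MT with post-editing and full human translation.

```python
# Hypothetical triage: document types considered suitable for MT plus
# post-editing, versus those routed straight to human translators.
MT_SUITABLE = {
    "batch_record", "manufacturing_doc", "user_manual", "validation_doc",
    "inspection_report", "ifu", "sop", "audit_report", "training_material",
}
HUMAN_ONLY = {
    "regulatory_submission", "clinical_trial_doc", "hospital_report",
    "creative_copy",
}

def route(doc_type: str) -> str:
    if doc_type in HUMAN_ONLY:
        return "human translation"
    if doc_type in MT_SUITABLE:
        return "machine translation + human post-editing"
    return "manual review before routing"  # unknown types get a human look

print(route("sop"))                 # machine translation + human post-editing
print(route("clinical_trial_doc"))  # human translation
```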

NMT and Language

Part of the process of customising NMT engines is choosing language pairs that will consistently yield accurate results. Selecting the right language pairs depends on several factors, the most important of which are translator availability and the volume of available training data. Given that machine learning requires a large body of high-quality examples for its training, rare or less commonly spoken languages will be at a disadvantage: with less content with which to build and train a usable engine, the translations generated will be less accurate and sound less natural than those for more widely spoken languages. It follows that the best languages for NMT have a global reach, with Spanish, French and Chinese at the forefront for proficient machine translation outputs (Figure 3).

Figure 3 (Source: Lommel, 2017a, CSA Research)
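In practice, the data-volume factor can be checked with a simple inventory. The sketch below is a hypothetical illustration (the counts and the threshold are invented, not from the article or any industry standard) of flagging which language pairs in an LSP's translation memories have enough parallel segments to be worth training on.

```python
from collections import Counter

# Hypothetical inventory of parallel segments per language pair,
# e.g. counted from an LSP's translation memories.
segment_counts = Counter({
    ("en", "es"): 1_200_000,
    ("en", "fr"): 950_000,
    ("en", "zh"): 700_000,
    ("en", "is"): 40_000,   # a low-resource pair
})

MIN_SEGMENTS = 500_000  # illustrative threshold only

for pair, count in segment_counts.items():
    verdict = "enough data to train" if count >= MIN_SEGMENTS else "low-resource"
    print(f"{pair[0]}->{pair[1]}: {count:,} segments ({verdict})")
```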

This requirement plays to the strengths of LSPs. Unlike freelance linguists, who will generally be limited by the languages they work in, LSPs have access to larger volumes of training data in a variety of language combinations. LSPs also benefit from greater QC and post-editing capacity, and can draw on their own experienced translators in the post-editing process.

Pivot vs Zero-Shot Machine Translation

There are, however, developments aimed at making MT available for less common language pairs. Of these, pivot and zero-shot MT are growing in importance, as they aim to build machine translation models for languages for which training datasets do not exist or are too small (Liu, 2020).

Pivot MT refers to an approach whereby one language serves as a pivot, or bridge, between two others. In practice, text in language A is translated into language B by first translating it into a third language, C, and then translating that intermediate output into language B.

Figure 4 (Source: Lommel, 2017b, CSA Research)
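To make the pivot idea concrete, the sketch below (our illustration, reusing the same open-source Helsinki-NLP engines mentioned earlier, which the article itself does not reference) translates German into French by pivoting through English: A (de) → C (en) → B (fr).

```python
# Requires: pip install transformers torch sentencepiece
from transformers import MarianMTModel, MarianTokenizer

def load(model_name):
    """Load a pre-trained Marian engine and its tokenizer."""
    return (MarianTokenizer.from_pretrained(model_name),
            MarianMTModel.from_pretrained(model_name))

def step(tokenizer, model, texts):
    """Run one translation hop and return the decoded output."""
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    out = model.generate(**batch)
    return tokenizer.batch_decode(out, skip_special_tokens=True)

# Language A (German) -> pivot C (English) -> language B (French).
de_en = load("Helsinki-NLP/opus-mt-de-en")
en_fr = load("Helsinki-NLP/opus-mt-en-fr")

german = ["Der Patient hat sich vollständig erholt."]
english = step(*de_en, german)   # A -> C
french = step(*en_fr, english)   # C -> B
print(french)
```

Note that errors made in the first hop propagate into the second, which is one reason direct models remain preferable wherever the data allows.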

Zero-shot MT, on the other hand, aims to produce a direct translation without training on the specific language pair, relying instead on the system’s ability to independently build connections between languages it has been trained on. Although this approach is still in its infancy, it is arguably the most promising of these initiatives, and could revolutionise machine translation technology.

Next up on the blog: NMT and LSPs.


SOURCES

Atanet.org, 2017. Machine Translation vs. Human Translation. [online] Available at: <https://www.atanet.org/governance/advocacy_day_2017_handout_myth.pdf> [Accessed 30 September 2020].

Bond, E., 2020. Cambridge Researchers Tackle Neural Machine Translation’s Gender Bias. [online] Slator. Available at: <https://slator.com/machine-translation/cambridge-researchers-tackle-neural-machine-translations-gender-bias/> [Accessed 30 September 2020].

Liu, C.H., 2020. Issue #66 – Neural Machine Translation Strategies for Low-Resource Languages. [online] Iconic Translation Machines. Available at: <https://iconictranslation.com/2020/01/issue-66-neural-machine-translation-strategies-for-low-resource-languages/> [Accessed 30 September 2020].

Lommel, A., 2017a. Is Neural MT Really as Good as Human Translation? [online] CSA Research. Available at: <https://csa-research.com/Insights/ArticleID/89/Is-Neural-MT-Really-as-Good-as-Human-Translation> [Accessed 30 September 2020].

Lommel, A., 2017b. Zero-Shot Translation Is Both More and Less Important than You Think. [online] CSA Research. Available at: <https://csa-research.com/Insights/ArticleID/90/Zero-Shot-Translation-is-Both-More-and-Less-Important-than-you-think> [Accessed 30 September 2020].

Pielmeier, H. and Lommel, A., 2019. Machine Translation Use at LSPs: Data on How Language Service Providers Use MT. Common Sense Advisory, May 2019.

Statmt.org, 2020. Moses – Main/Homepage. [online] Available at: <http://www.statmt.org/moses/> [Accessed 30 September 2020].

Written by Raluca Chereji, Project Manager / Digital Marketing Lead.
