

Augmented translation, Automated content enrichment, Machine translation, Translation technology

Augmented Translation: Are We There Yet?

November 4, 2020

Arle Lommel


In 2017, CSA Research introduced the concept of “augmented translation,” a technology-centric approach to amplifying the capabilities of human translators. Although companies such as Lilt, SDL, and Unbabel had already implemented portions of this model – namely the tight integration of human and machine capabilities – we predicted that more technologies would be folded in over time to create an AI-driven platform for linguists that would make them far more efficient and capable by offloading repetitive tasks and reducing the need to break pace to look up information.
Despite our predictions of its imminent arrival, augmented translation has so far remained more of a hypothetical model than a reality. However, a recent wave of innovation based on artificial intelligence – in its machine learning incarnation – suggests that our vision is close to reality. More and more companies – such as Lengoo, SmartCAT, thebigword, and Ubiqus – are building systems and processes with an AI-first approach. Rather than tacking disparate technologies together or trying to cram MT into the old translation memory-centric model, they are rethinking their approaches and focusing on systems that deliver better results.
 

The following points review the individual pieces of this technology and assess the current state of the art:

  • Lights-out project management. Josh Gould, CEO at thebigword, recently told us that 94% of his company’s projects are delivered through a touchless, self-service model, and the large majority of those projects never require the intervention of a project manager. Other LSPs have seen similar results. Taking a lights-out approach in which everything is automated unless a problem arises has profound benefits in terms of reducing the time it takes to start a project, providing greater transparency for customers (who really want to see a big project status bar like the one Domino’s Pizza offers its online patrons), and ensuring that nothing slips through the cracks.
     
  • Translation memory and machine translation. The most advanced LSPs are finally moving past traditional TM matches and instead integrating TM with machine translation. This shift makes sense: modern MT uses the same underlying data and unifying the technologies helps improve consistency and reliability. The goal is a system that learns in real time and becomes ever better from incorporating human feedback and learning from how linguists interact with it. Just having MT isn’t enough to truly augment human capabilities, but as MT learns and adapts from linguists – both individually and collectively – the boundaries between TM and MT will blur, as will those between human and machine.
     
  • Terminology management and automated content enrichment (ACE). Sadly, terminology management remains the neglected stepchild of the localization industry. Far too many organizations pay lip service to the need for proper terminology and then continue to pass around Excel files full of terms and their translations. Fortunately, companies such as Coreon, Interverbum Technologies, and Kaleidoscope have started to build out the sophisticated applications that bridge terminology and knowledge management. When terminology enriched with rich metadata becomes available, it enables both human translators and machine translation to be more consistent, contextual, and correct. Although ACE – which adds links to relevant data into content streams, as is done by the Okapi Ocelot system – has revolutionary potential, its mainstream implementation faces issues similar to those of terminology management.
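The TM/MT interplay described above can be illustrated with a simple fallback policy: offer the linguist a translation memory match when a close-enough fuzzy match exists, and fall back to machine translation otherwise. This is only a minimal sketch with hypothetical names (`suggest_translation`, `mt_translate`); production systems blend the two far more deeply, for example by priming the MT engine with fuzzy matches and learning from post-edits.

```python
from difflib import SequenceMatcher

def suggest_translation(source, tm_entries, mt_translate, threshold=0.85):
    """Return (origin, suggestion) for a source segment.

    tm_entries: list of (source, target) pairs from the translation memory.
    mt_translate: callable that returns an MT output for a source segment.
    If the best TM fuzzy-match score meets the threshold, propose the TM
    target for post-editing; otherwise fall back to machine translation.
    """
    best_score, best_target = 0.0, None
    for tm_source, tm_target in tm_entries:
        # Fuzzy similarity between the new segment and a stored TM segment.
        score = SequenceMatcher(None, source, tm_source).ratio()
        if score > best_score:
            best_score, best_target = score, tm_target
    if best_score >= threshold:
        return ("tm", best_target)
    return ("mt", mt_translate(source))
```

In an augmented-translation setup, the interesting step is what happens after this function returns: the linguist’s edits flow back into both the TM and the MT engine, which is what blurs the boundary between the two.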

The key to augmented translation is that these technologies all need to work in tandem. Each can provide benefits on its own, but it is only when they interoperate, choreographed by a TMS, that they truly shine. For example, a lights-out project management system needs to estimate the work a project requires based on the relevance of linguistic resources and the past performance of translators for particular content types. Similarly, MT needs access to terminology to take the next step forward in relevance and quality. Look for terminology management to become more embedded with content, which will create the conditions for ACE to finally move into the mainstream. All of these technologies will reinforce each other to deliver increased intelligence and assistance to linguists.
Despite the benefits that augmented translation can offer, its arrival has been slow. However, the COVID-19 pandemic has greatly accelerated development in this area. LSPs have used their downtime to retool around this concept and build more efficient workflows. Increased demand for cloud-based technologies has simplified the infrastructure needed to build them, and technology developers keen to differentiate themselves have started building the needed pieces and tying them together.
Augmented translation may not yet be a reality for most LSPs and linguists, but it is getting ever closer. The changes it will introduce mean that the translator of 2030 will look back on 2020 as we today look back on the early industrial age, when machines began to dramatically increase productivity and the standard of living for most people. Although the development will not be immediate or easy, the outcome will be a world in which translators are more valuable and more productive than ever before.
 


Meet Our Analyst


Arle Lommel

VP of Research

After obtaining a BA in linguistics in 1997, I began working for the now-defunct Localization Industry Standards Association (LISA), where I headed up standards development and worked on quality assessment models. At the same time, I completed a PhD in ethnographic research at Indiana University in 2011. In 2012 I began work for the German Research Center for Artificial Intelligence (DFKI) in Berlin, Germany, where I headed up development of the Multidimensional Quality Metrics (MQM) system for quality evaluation and worked on various EU and German government-funded projects. In 2015 I returned to the United States and began working for CSA Research in January 2016. In my life I have lived in Alaska, Utah, Indiana, Hungary, and Germany. I speak English, Hungarian, and German, as well as bits and pieces of many other languages.

