On 11 April 2018 Microsoft Dublin hosted the TAUS* QE Summit for the 6th time and Vincent Gadani was our perfect host. The day was full and very energetic with plenty of subjects touching not only on quality but also on more general, visionary subjects. We had four main topics with panels throughout the day and a break-out session at the end. The break-out session was intended for the identification of items/issues for TAUS to work on until the next meeting, or “homework”, as Jaap van der Meer called it. In this blog I’d like to share my most important takeaways with you.
Topic 1 – Quality in the Gig Economy*
This topic was dominated by the question of how crowdsourcing* and self-organizing peer-to-peer platforms differ from the more classical, corporate globalization management systems when it comes to quality, review and evaluation. It was no surprise that quality assurance consumes far more resources than the translation itself. The question of whether this actually pays off was not really answered. But it was asserted that for some markets, brands or system types, gig translators (domain experts, not necessarily translators by education) just hit the right nerve. ‘Real’ translators who are not domain experts don’t always convey the particular sentiment of a system or brand, the panelists added. This becomes clear when ‘classic’ translation and ‘gig’ translation are A/B tested*: end-users often prefer the gig translations to the classic variants.
How to Recruit (Gig) Translators?
- According to Pactera, the process for recruiting gig translators via platform-based recruitment cycles is as follows: 1. talent attraction, 2. application, 3. testing (to assess basic and more sophisticated skills), 4. training, 5. ranking. Not very different from the classical recruitment workflow.
- Translated.net (MateCat, MyMemory etc.) uses an AI-based translator ranking to make sense of all their data: this decision-support system, called ‘T-Rank’, was trained on over 1.2 million translation jobs. This is how Translated.net manages to offer their clients the best translators for a given job.
- Unbabel has an elaborate process that focuses on the reviewer. Only professional translators with at least five years of experience can do the job. They also have to climb the “reviewer’s career ladder” and are audited regularly. Additional evaluations of their work, along with guidelines, help maintain quality.
When asked about the selection process in general and what is taken into account when recruiting, Pactera pointed out that a translator’s degree is still important, although not necessarily for gig translators. Translated.net asks for degrees and automatically re-scans each translator’s CV every three years. Unbabel asks for degrees from editors and analysts.
Topic 2 – User Experience (UX)* vs Linguistic Quality
The panel started with the following question: What role does language play in UX?
- Grainne from Amplexor said that as a user she does not care much about the language itself but about the user experience as a whole. In the end, users want to find information or buy the things they are looking for quickly and easily. At the very least, language must not ‘hinder’ this process and must not be offensive.
- According to the panelist from Booking.com, UX is all about the apps and functions on a website, and it should reinforce the users’ trust in the brand. He sees two groups of users that have to be considered: the bookers and the partners (e.g. hotels).
- For Alberto from Travel Republic it’s about all the touch points on the website or product. It is necessary to consider the process and to build journey maps for all customer types – or personas. Content plays a huge part and localization needs to be closer to designers and content creators. In order to measure success, the right kind of data analysis is extremely important.
UX Research Versus UX Design
When asked about the difference between UX research and UX design, Alberto said that UX research is a variant of market research and UX design is the actual design of the product or website. The panelists had different perspectives on how UX is measured and how localization or language issues come into play:
- For Amplexor there should always be a framework in place to help clients identify different issues (language, imagery, interaction with the product…). All this data has to be gathered and analyzed regularly.
- Booking.com focuses on bounce rates, booking rates and cancellations, and sees a strong relationship between product and language. The question they are asking: do we have to build a quality framework for all 40 internal language teams in order to eliminate localization mistakes like spelling or grammar errors?
Measuring And Improving UX
All panelists agreed that the best high-level metrics for UX are, and will remain, user-based metrics like ‘user satisfaction’ gathered via surveys and interviews. They also agreed that quality problems arise from source content that is not ‘culture-neutral’ and cannot be transcreated. Very problematic: missing context. And last but not least: when a campaign is not relevant to the target culture, the localization is a complete waste of time and money, even if the quality is perfect!
Tips for UX from the experts to the audience: in order to have better UX in all markets, A/B testing is definitely a good way to go. Also: define your quality goals strictly and strengthen your relationship with the other teams working on the product or website.
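Since the experts recommend A/B testing, here is a minimal sketch of how such a test is typically evaluated: a two-proportion z-test comparing the conversion rates of two variants. All numbers below are hypothetical and only illustrate the principle, they are not from the summit.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF (using math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: variant A converts 480 of 10,000 visitors,
# variant B converts 530 of 10,000.
z, p = two_proportion_ztest(480, 10_000, 530, 10_000)
print(f"z = {z:.2f}, p-value = {p:.3f}")
# With these made-up numbers p > 0.05, so the observed difference
# would not be statistically significant -- more traffic is needed.
```

In practice one would use a dedicated testing tool or a statistics library, but the takeaway is the same: never declare a ‘winning’ variant without checking that the observed difference is larger than random noise.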
Topic 3 – Education and the Translator’s Future
This panel dealt with the question of whether the education of translators is still up to date and what should or could be done to improve it. Where is the translator’s profession heading? Most members of the panel were sure that future translators need a more diverse skill set than current translators have. It was agreed that companies could help this process along by sharing data in a repository hosted by TAUS. This way, localization and other tasks could be practiced on current material, keeping university teaching close to real-world translation work. We think that universities need to involve companies more in order to know where the job market is heading and to react more quickly to changes. Student curricula usually lag far behind reality, and this needs to change.
Topic 4 – Quality and the Modern Translation Pipeline (MTP)
With Jaap van der Meer (TAUS), Roberto Superbo (KantanMT), John Tinsley (Iconic Translation Machines), Kerstin Berns (berns language consulting GmbH), Elaine O’Curran (welocalize), Wayne Bourland (Dell)
For the last panel of the day, I showed the audience what the translation pipeline for buyers in Germany looks like. What gap has to be bridged to reach the so-called ‘Modern Translation Pipeline’ (MTP)? There are many reasons why translation processes still follow a more traditional outline. A main inhibitor to using more flexible cloud applications is still data security: GDPR is looming over our European heads, and it is important to talk about handling this properly.
Elaine showed how welocalize manages quality, namely by using the DQF* metric. She also talked about the difficulties of mapping their quality metric to diverse client metrics. These difficulties are the main reason TAUS advocates the standardization of quality metrics for translation buyers and vendors alike.
Wayne shared his translation pipeline vision for Dell, which is largely based on the MTP as a guideline for combining the translation business of different departments and subsidiaries. He also showed the challenges of mapping all of Dell’s internal quality metrics and translation quality data to the DQF – a task Dell has been working on for a while now, with guidance from the TAUS team.
Roberto Superbo showed the TQA process in KantanMT. This process helps users develop and manage their customized MT engines better, in an automated way.
Last but not least, we were very surprised by the results of a survey by Iconic Translation Machines. According to this survey, the use of raw MT engines on the buyers’ side is on the rise, while the use of MT via LSPs is declining. One of the reasons (but not the only one) is the success of neural MT engines. The other important reason is the vendor lock-in buyers get into when having their LSP build their engines for them – which reminds us a little of the eternal discussion about who owns the translation memory data 😉
Bold Action Items
We then discussed the topics in four break-out sessions and came up with a little homework for TAUS:
- How can we educate the industry (buyers and vendors alike) to use DQF homogenously?
- How can we strengthen buyers’ trust in cloud applications?
- How can we ensure that critical data is separated from non-critical data, making the use of the cloud easier?
(Don’t miss the upcoming TAUS webinar about GDPR on May 8!)
- How can companies share current localization project data with universities to make education of future translators more ‘on point’?
- Are important tasks missing from current curricula in universities? If so, what exactly is missing?
- How can the modern translation pipeline ensure that translators receive sufficient context information?
- We need a clear definition of ‘transcreation’ to help buyers and vendors talk about the same thing. Can TAUS draft a paper with the most important aspects of transcreation?
The next TAUS QE Summit in October – hosted by Amazon in Seattle – will show how these bold action items have been tackled. We are already very curious!
Thank you all for an inspiring, engaging and above all enjoyable QE Summit in beautiful Dublin!
*TAUS: The language data network is an independent and neutral industry organization that develops communities through a program of events and online user groups and by sharing knowledge, metrics and data that help all stakeholders in the translation industry develop a better service. (TAUS)
*DQF: The Dynamic Quality Framework, developed by TAUS and its members, provides a commonly agreed approach to select the most appropriate translation quality evaluation model(s) and metrics depending on specific quality requirements (TAUS)
*Gig economy: Labor market characterized by the prevalence of short-term contracts or freelance work, as opposed to permanent jobs (BBC News)
*Crowdsourcing: A portmanteau of crowd and outsourcing. Sourcing model in which individuals or organizations obtain goods and services, including ideas and finances, from a large, relatively open and often rapidly evolving group of internet users (Wikipedia)
*User experience (UX): Refers to a person’s emotions and attitudes about using a particular product, system or service. It includes the practical, experiential, affective, meaningful and valuable aspects of human–computer interaction and product ownership. (Wikipedia)
*A/B testing: A way to compare two versions of a single variable typically by testing a subject’s response to variable A against variable B, and determining which of the two variables is more effective. (Wikipedia)