Suppose I had this content in my TM (note the highlighted text):
These are the sort of matches I’d get for various sentences:
It’s very annoying to have a sentence like the one in example C and to think, “Didn’t I have that fragment ‘web content management’ to translate before in this project? Why can’t the software spot that and tell me how I translated it?” Yes, I can use concordance search, scroll through the often-noisy results, and, if I find a match, read the (possibly long) target text to be sure I use a consistent translation. But wouldn’t it be better if the software tried to find such matches for me automatically, and identified the translation in the target text as well, so I could quickly insert it?
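To make the idea concrete, here is a toy sketch of what “spotting a fragment for me” could mean at its simplest. This is emphatically not how any shipping TM system (upLIFT included) works internally – it only looks for shared word n-grams in the source text of a tiny, made-up TM, and a real system would also have to align each fragment to its translation in the target text – but it shows the kind of lookup I wished the software would do on my behalf:

```python
# Toy sketch of subsegment ("fragment") recall from a translation memory.
# NOT the upLIFT algorithm: just a naive n-gram lookup over TM source text,
# using whitespace tokenization (punctuation stays attached to words).

# A made-up, one-entry TM of (source, target) translation units.
TM = [
    ("The web content management system is easy to use.",
     "Das Web-Content-Management-System ist einfach zu bedienen."),
]

def find_fragment_matches(segment, tm, min_words=3):
    """Return (fragment, tm_source, tm_target) tuples for every word n-gram
    of `segment` (at least `min_words` long) found in a TM source sentence."""
    words = segment.lower().split()
    matches = []
    # Try longer fragments first, down to the minimum length.
    for n in range(len(words), min_words - 1, -1):
        for i in range(len(words) - n + 1):
            fragment = " ".join(words[i:i + n])
            for src, tgt in tm:
                if fragment in src.lower():
                    matches.append((fragment, src, tgt))
    return matches

# A new segment that repeats the fragment "web content management".
hits = find_fragment_matches("Our web content management budget is fixed.", TM)
for fragment, src, tgt in hits:
    print(f"fragment: {fragment!r}\n  from TU: {src} -> {tgt}")
```

Even this crude version surfaces the repeated fragment together with the TU it came from; the hard part a real system solves – and the reason this is only a sketch – is pinpointing the fragment’s translation inside the target sentence.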
Something even more annoying was having a reviewer send back a translation because I’d translated fragments like that in one (perfectly good) way, while the TM contained an earlier, different wording that the software didn’t show me. Quite apart from the time spent changing it afterwards, showing me that earlier translation up-front would have saved me time. Again, I could have used a concordance search, but to be sure of consistency I’d have had to do an awful lot of just-in-case searching, which certainly wouldn’t save time.
I wasn’t the only one wanting more, either. By 2010 I’d completed an MA in Translation Studies and was asked back to teach CAT tools. The students – new to TM technology and judging it with fresh eyes – had to compare and analyze the different tools. They were often puzzled at not seeing translation proposals for repeated fragments like this, especially since – for most text types – repeated fragments are far more common than repeated or near-repeated sentences.
Coincidentally, parts of the industry started to look at this more closely in 2005, and by 2007 TAUS had published a report on this important kind of recall. But TM systems were only addressing it in limited ways. Things improved in 2009 with SDL Trados Studio and its AutoSuggest Creator feature – still a very useful tool – but that didn’t do everything I wanted: it wouldn’t show me fragment match translations until I started typing them; I couldn’t see the context the translation came from to check it made sense; and it wouldn’t show me anything added or changed in the TM since the AutoSuggest dictionary was created. Neither did any other system on the market for years afterward – something I had to demonstrate very clearly in a PhD on how to do it better.
The prototype TM system I developed during that PhD got SDL’s attention, so they took me on, and it’s now the basis of the new upLIFT features in SDL Trados Studio 2017. What’s really great is that it doesn’t just help with the example C segment in the table above – recalling a translation for ‘web content management’ – but also helps with the example B segment. The sort of matches I get are now a lot better:
How does that look in SDL Trados Studio? Here are some screen shots of a typical display:
Figure 1 – upLIFT fragment recall
The top of the Fragment Matches pane (Figure 1) underlines the parts of my segment that have matches; below that is a list of source fragments and their translations, each of which I can click to see the full TU context. I can insert a translation with the mouse or via AutoSuggest as shown.
Figure 2 – upLIFT Match Repair
Figure 2 shows a repaired match – in this case, the repair has changed my fuzzy match into a perfect translation that I can simply check and confirm.
I’m thrilled with the way SDL’s UX team have made the new functionality easy to use and understand, displayed in a familiar way that’s consistent with existing features. And when I’m translating now – although that’s to maintain a “translator’s view” of Studio, rather than because someone pays me for the results – I can’t tell you how great it is that I’m getting translation proposals in the way I wanted for years.
So, what’s going on behind the scenes to make that happen? If you’d like to know more about that, come back soon for the second part of this “upLIFTing tale” …