My presentation at LavaCon included a screenshot of an actual translation memory, and I wanted to flesh that out a bit more.
A translation memory is simply a database of phrases that have already been translated, usually broken into segments by sentence. No matter which translation management tool you use, the linguist usually winds up with a traditional left/right view of the source text and the target text, with some sort of color coding or other marking that classifies each segment as "unique", "fuzzy match", "100% match", and sometimes "ICE match" (In-Context Exact).
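As a rough illustration of that classification step, here is a minimal sketch in Python. It uses `difflib.SequenceMatcher` similarity as a stand-in for a real CAT tool's fuzzy-match scoring; the 75% threshold is illustrative, not any particular tool's default.

```python
# Toy translation-memory lookup: classify a source segment against
# previously translated segments. Real tools use more sophisticated
# scoring and also track context for ICE matches; this is a sketch.
from difflib import SequenceMatcher

def classify(segment: str, memory: list[str], fuzzy_threshold: float = 0.75) -> str:
    # Best similarity score against anything already in the memory
    best = max((SequenceMatcher(None, segment, m).ratio() for m in memory), default=0.0)
    if best == 1.0:
        return "100% match"
    if best >= fuzzy_threshold:
        return "fuzzy match"
    return "unique"

tm = ["These could be signs of an illness like asthma."]
print(classify("These could be signs of an illness like asthma.", tm))      # 100% match
print(classify("These could be signs of an illness such as asthma.", tm))   # fuzzy match
print(classify("Call your doctor right away.", tm))                         # unique
```

An ICE match would additionally require that the surrounding segments also match, which is exactly why the yellow segments in the example below are only 100% matches.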
In this example (which I built in Excel for clarity), we have a paragraph of text from a medical pamphlet. The yellow segments are 100% matches (but not ICE matches, since other segments in the paragraph have changed), the green segment is a fuzzy match, and the uncolored segments are unique text.
What happened here is that since the brochure was last translated, someone took the sentence "These could be signs of an illness like asthma, bronchitis or COPD (chronic obstructive pulmonary disease), or they could be allergies." and split it into two. This strikes me as more of a preference change than anything marketing or legal would require; in fact, I think it harms the flow of reading, but maybe that's just me.
The business case against this kind of change, however, is simple. Let's assume a pricing structure of 20¢ per word for unique text, 10¢ for a fuzzy match, and 4¢ for a 100% match. These rates are high by current standards, but they make for round numbers.
Last version, the client paid $5.80 for translation of segment 3 (29 words @ 20¢). Had the sentence been left alone, segment 3 would have come back as a 100% match this time at just $1.16 (29 words @ 4¢). Instead, the linguist has to figure out what changed, modify segment 3 to the tune of $2.50 (25 words @ 10¢), and translate segment 4 for $1.00 (5 words @ 20¢), a total of $3.50. This preference change cost the client an extra $2.34, and of course time. Just imagine how that multiplies over any document of decent length.
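The arithmetic above is easy to reproduce. Here is a quick back-of-the-envelope calculation using the illustrative rates and word counts from the example (nothing here is a real rate card):

```python
# Per-word rates from the example pricing structure (illustrative only)
RATES = {"unique": 0.20, "fuzzy": 0.10, "100%": 0.04}

def cost(words: int, match: str) -> float:
    return words * RATES[match]

last_version = cost(29, "unique")                      # original translation of segment 3
unchanged = cost(29, "100%")                           # what a 100% match would have cost
this_version = cost(25, "fuzzy") + cost(5, "unique")   # edited segment 3 + new segment 4

print(f"last version:  ${last_version:.2f}")                  # $5.80
print(f"if unchanged:  ${unchanged:.2f}")                     # $1.16
print(f"this version:  ${this_version:.2f}")                  # $3.50
print(f"extra cost:    ${this_version - unchanged:.2f}")      # $2.34
```

Scale that $2.34 across every casually reworded sentence in a long document and the invoice grows quickly.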