Most marketers treat translation as one bucket. In reality, there are several distinct categories, each with their own tolerance for adaptation, risk level, and management approach. In this guest post, Mark Sheehy, Senior Marketing Localization Engineer at McAfee, joins us to unpack how localization really works at the enterprise level — and why understanding these differences is the key to scaling without tripping over compliance or wasting budget.
1: Four Core Types of Translation — and Why Legal is in a Class of Its Own
A. Legal Translation (highest risk, zero tolerance)
- How to manage: Use legally trained linguists, not just bilinguals. Build review steps into your workflow. Many enterprises rely on vendors that specialize in legal-only translation.
- Example: A mistranslated liability clause (“contact us within six days”) can put you in court.
- Key Enabler: Translation memories and legal glossaries. Translation memory is a database of past translations that ensures the same clauses are rendered consistently every time. A legal glossary is a vetted list of approved translations for sensitive terms. Together they reduce risk and rework.
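The idea behind these two enablers can be sketched in a few lines. This is a minimal in-memory illustration, not a real TMS API; the function names and the German term pairs are assumptions made up for the example.

```python
# Minimal sketch of a translation memory (TM) plus a legal glossary check.
# Real TMSs store TMs in TMX files and score fuzzy matches; the names and
# German examples here are illustrative, not a real product API.

translation_memory = {
    # source clause -> approved target translation (English -> German)
    "Limitation of liability.": "Haftungsbeschränkung.",
}

legal_glossary = {
    # vetted term pairs that must be used verbatim in legal content
    "liability": "Haftung",
    "warranty": "Gewährleistung",
}

def lookup(source: str):
    """Return the stored translation for an exact TM match, if any."""
    return translation_memory.get(source)

def glossary_violations(source: str, target: str):
    """Flag glossary terms in the source whose approved rendering is missing."""
    return [
        term for term, approved in legal_glossary.items()
        if term in source.lower() and approved not in target
    ]

# Exact match: reuse the approved clause instead of retranslating it.
print(lookup("Limitation of liability."))  # Haftungsbeschränkung.
# A draft that drops the approved term gets flagged for legal review.
print(glossary_violations("Limitation of liability.", "Verantwortung."))
```

The point is not the code itself but the division of labor: the TM guarantees repeated clauses stay identical, while the glossary check catches a linguist drifting away from vetted terminology.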
B. Product & Support Translation (technical and exact)
- How to manage: Stick closely to the source text; there’s no room for interpretation in software strings or support instructions. Involve subject matter experts (engineers, product managers) in review.
- Example: “Ctrl+Alt+Delete opens Task Manager” must be literal — there’s no creative option here.
- Key Enabler: Source control and style guides. Source control prevents constant product name/version drift from wreaking havoc. Style guides define how instructions and technical terms must appear across languages.
C. Marketing Translation (adaptive and creative)
- How to manage: This is where transcreation comes in: adapting the meaning of content so it resonates locally while keeping the intent, tone, and emotional impact intact. Budget for higher per-word costs — your translators are acting like copywriters. Use in-market reviewers to validate tone and cultural fit.
- Example: In France, “Buy now” becomes simply “Acheter” — “Acheter maintenant” sounds unnatural. Idioms like “The early bird gets the worm” don’t translate at all in Japan.
- Key Enabler: Transcreation + in-market reviewers. Transcreation is creative adaptation, not machine translation. It’s about rewriting content so the message hits home in the target culture. In-market reviewers then validate that the adapted content feels authentic.
D. Software UI Translation (overlaps with product, but distinct in practice)
- How to manage: UI strings must fit into buttons, menus, and layouts. Length and directionality (e.g., right-to-left languages) matter. You need in-context QA to catch broken layouts or untranslated strings before release.
- Example: A translated button label that overruns its box in German or gets cut off in Arabic UI.
- Key Enabler: Linguistic QA in context. Reviewing translations inside the live software or staging environment ensures they work as intended. Automated tools can flag untranslated or misplaced text, but a human still needs to check usability.
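A simple automated pre-check can catch likely overflows before human in-context QA. The character budget and string ids below are assumptions for illustration; real tools measure rendered pixel width in the actual layout, not character counts.

```python
# Illustrative pre-QA check for UI strings: flag translations likely to
# overflow their widget. The 12-character button budget is an assumed
# value, not product data; real checks use rendered width per layout.

BUTTON_BUDGET = 12  # max characters the button layout fits (assumed)

ui_strings = {
    # string id: (English source, German translation)
    "btn_download": ("Download", "Herunterladen"),
    "btn_ok": ("OK", "OK"),
}

def overflow_report(strings: dict, budget: int):
    """Return ids of translated strings that exceed the layout budget."""
    return [key for key, (_, target) in strings.items() if len(target) > budget]

print(overflow_report(ui_strings, BUTTON_BUDGET))  # ['btn_download']
```

A pass like this runs cheaply on every build, leaving human reviewers to judge the cases machines cannot, such as truncation that is technically within budget but reads badly.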
Why This Matters
Treating all translation the same is what leads to delays, budget overruns, or compliance risks. Recognizing these categories — and managing each with the right enablers — is the difference between “just shipping words” and building a scalable localization process.
2: Country-Specific Adaptations and Source Control
Localization often breaks down because offers, products, and messaging differ by country. If you don’t classify these differences and build them into your workflow, they slip through the cracks. Automation helps, but constant changes mean you still need human review.
Categories of Country-Specific Differences
- Tone & Formality
  Example: Japan requires highly formal honorifics, even in product or support content.
- Availability of Products or Features
  Example: “24/7 phone support” may be true in the U.S. but not in Japan.
- Contact Information & Local Details
  Example: Wrong phone number, or referencing services not available locally.
- Seasonality & Cultural References
  Example: “Holiday season” resonates in the U.S., but in Europe it’s “Christmas,” while in Asia it may not apply at all.
How Teams Manage It
- Translation Briefs: Notes for translators flagging differences by market.
- In-Market Reviewers: Native colleagues proof translations inside the Translation Management System (TMS).
- Source Reviews: Push back to core teams when English content makes assumptions that don’t fit all markets.
- Keep Source Generic: Phrase copy so it doesn’t over-promise (“Support available” vs. “24/7 support”).
The Reality Check
- Many teams still track product names and offer availability in Excel sheets.
- These sheets are always changing and rarely synced across teams.
- Automation rules require constant updates.
- Human review is still the fastest way to catch misalignments.
Why This Matters
Marketers in regulated industries can’t afford to assume one global version works everywhere. Building systematic checks for tone, availability, contact details, and seasonality is how you avoid painful rework — or worse, discovering compliance issues live in-market.
3: Scale and Languages Covered
Localization isn’t just about “a few key languages.” For an enterprise, coverage can stretch from 25 to 40+ languages depending on whether the content is marketing, product, or legal. Understanding the scope helps marketers budget and resource realistically.
The Annual High-Water Mark: Legal Documents
- Every year, McAfee re-translates its End-User License Agreements (EULAs) and privacy notices.
- These are long legal documents that must be updated whenever products change or regulations shift.
- They cover about 42 languages each cycle.
- Because legally trained translators are required, the bill for this single annual cycle is about $50,000.
Marketing Content (Regular Coverage)
- Americas: Portuguese (Brazil), Spanish (Mexico — used as the standard across Latin America), French (Canada).
- Europe/EMEA: Core set: German, French, Italian, Spanish, Portuguese, Dutch. Extended: Nordics (Norwegian, Danish, Finnish) and Eastern Europe (Croatian, Ukrainian, Russian, Polish, etc.).
- Asia-Pacific: Japanese, Korean, Chinese (Traditional and Simplified).
Product vs. Legal Coverage
- Product content (software strings, support text): usually ~28 languages.
- Legal documents (EULAs, privacy notices): extend to the full 40+ language spread, including smaller markets such as Thai, Slovak, Slovenian, and Czech.
Why This Matters
Marketers often underestimate how wide the net has to be. One campaign can easily touch 30 markets, each needing translation, review, and testing. Legal cycles are even heavier, both in scope and cost. Without planning for this scale, teams risk running out of budget or missing compliance obligations.
4: Four Big Mistakes That Keep Teams Stuck
Even large enterprises fall into the same traps again and again. These mistakes waste money, delay launches, and undermine trust. Here are the four biggest, plus how to avoid them:
- Constant Source Changes
The mistake: Product names, features, or claims get updated after translation jobs are already sent. That breaks consistency and creates expensive rework.
Fix: Implement strict source control. Content should be fully reviewed and signed off before being sent.
- Ignoring Cultural Nuance
The mistake: Teams assume word-for-word translation will work everywhere. It doesn’t — tone, idioms, and formality differ drastically.
Fix: Budget for transcreation and in-market reviewers. Example: “The early bird gets the worm” means nothing in Japan.
- High Staff Turnover and Lack of Training
The mistake: Organizations rely heavily on contingent workers without onboarding. Mistakes repeat.
Fix: Build lightweight training. Document pitfalls (concatenation, glossary use, tone). Use shared issue trackers.
- Concatenation in Source Text
The mistake: Breaking sentences into fragments and expecting them to reassemble. Different word orders make this impossible.
Bad Example:
Line 1: “Your account has”
Line 2: “expired”
→ Cannot be translated accurately in Japanese or German.
Good Example:
“Your account has expired.”
Fix: Avoid concatenation. Design UI to keep full sentences intact.
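The difference between the bad and good patterns can be sketched with localized message templates. The German strings are illustrative examples, not strings from any real product.

```python
# Why concatenated fragments break: many languages reorder the sentence,
# so independently translated fragments cannot simply be glued back
# together in English word order. Strings below are illustrative.

# BAD: fragments translated independently, then concatenated.
fragments_de = {"Your account has": "Ihr Konto ist", "expired": "abgelaufen"}
bad = fragments_de["Your account has"] + " " + fragments_de["expired"]
# This happens to read correctly here, but swap in "been locked" and the
# German needs a different auxiliary verb ("Ihr Konto wurde gesperrt"),
# which a fragment translated in isolation cannot know in advance.

# GOOD: ship full sentences with named placeholders translators can move.
templates_de = {"account_expired": "Das Konto von {user} ist abgelaufen."}
good = templates_de["account_expired"].format(user="Alice")

print(bad)   # Ihr Konto ist abgelaufen
print(good)  # Das Konto von Alice ist abgelaufen.
```

Note that the placeholder is named (`{user}`) rather than positional: a translator can move it anywhere the target grammar requires without breaking the string.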
5: Evaluating Translators, Agencies, and the Work
Enterprises mix single-language vendors (SLVs), multi-language vendors (MLVs), and in-house translators. The key is evaluation and accountability.
What to Look For
- Communication: Confirm deadlines, flag delays, raise queries. Silence is a red flag.
- Test Jobs: Always start with small jobs before onboarding.
- Accountability: Pair the main vendor with a premium vendor for LQA (Language Quality Assurance).
How Quality is Measured
- LQA and Six Sigma Scoring: Audit defects per translated volume and roll them into a sigma score.
- Quarterly Business Reviews (QBRs): Track vendor performance.
- Issue Tracker: Shared system logging late deliveries and faults.
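The defects-to-sigma roll-up can be sketched with the standard DPMO formula and the conventional 1.5-sigma shift; the audit volumes below are invented for illustration, not McAfee figures.

```python
# Sketch of rolling LQA audit defects into a Six Sigma style score.
# DPMO = defects per million opportunities; the sigma conversion uses the
# conventional 1.5-sigma shift. Volumes below are made-up numbers.
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Convert long-term DPMO to a short-term sigma score (1.5 shift)."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# e.g. 12 defects found across 50 audited jobs of 200 words each
score = dpmo(defects=12, units=50, opportunities_per_unit=200)
print(round(score))                   # 1200 DPMO
print(round(sigma_level(score), 2))   # roughly 4.5 sigma
```

A quarterly business review then compares each vendor's sigma trend rather than raw defect counts, so a vendor handling ten times the volume is judged fairly.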
The Case for In-House Translators
- Consistency: Same linguists every week.
- Direct Communication: Faster problem-solving.
- Coverage: A core set of nine languages (e.g., fr-CA, es-MX, pt-BR, de, it, nl). Not cost-effective for all languages.
Service-Level Agreements (SLAs) That Matter
- Turnaround time.
- Language quality.
- Responsiveness.
6: Where AI Fits (and Fails)
AI in localization is an accelerator, not a replacement. At scale, it improves speed and reduces repetitive work — but without human review, the risks outweigh savings.
How We Use AI Today
- Machine Translation (MT) inside the TMS: Custom DeepL engines; raw Google Translate output is not acceptable.
- Automated QA tools: Scan websites/screenshots for untranslated text, wrong language, missing punctuation, etc.
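An automated scan for untranslated leftovers can be sketched with a crude heuristic. Real QA tools use proper language detection; the marker words and page strings below are assumptions for the example.

```python
# Heuristic sketch of an automated QA scan: flag strings on a localized
# (here, German) page that still look like English. The stopword set is
# a crude stand-in for real language-detection tooling.
ENGLISH_MARKERS = {"the", "your", "and", "now", "free"}

def looks_english(text: str) -> bool:
    """True if the string contains common English function words."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & ENGLISH_MARKERS)

page_strings = [
    "Jetzt herunterladen",
    "Protect your devices now",   # untranslated leftover
    "Kostenlos testen",
]

flagged = [s for s in page_strings if looks_english(s)]
print(flagged)  # ['Protect your devices now']
```

Whatever the detection method, the output is a shortlist for a human reviewer, not an auto-fix: the scan finds the wrong-language string, a linguist decides what replaces it.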
The Limits of AI
- Publishing raw MT is dangerous: Always do MTPE (Machine Translation Post-Editing). If a text isn’t in the Translation Memory database, no linguist reviewed it — a red flag that it came from raw MT.
- Video localization isn’t enterprise-ready: Automatic subtitling/dubbing exists but always requires edits.
Key Takeaway
AI saves time but does not remove the need for trained linguists. Best used in hybrid: MT + human editing, automated QA + human reviewers.
7: Measuring Success (KPIs)
Localization proves value only when tied to core marketing metrics.
What to Track
- Traffic & Conversions.
- CTA Performance.
- Renewals & Retention.
- Campaign Metrics.
- Customer Feedback (CSAT/NPS).
How Teams Measure It
- Segmented Dashboards: Report per locale, not “International.”
- Proxy Metrics: Use UTMs, form IDs, and analytics triangulation.
- Feedback Loops: In-market reviewers validate KPI realism.
Example
U.S. whitepaper converts at 5%. German version converts at 1%. With UTMs tied to forms, you can test alternatives (“Jetzt herunterladen” vs “Whitepaper sichern”).
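The UTM-based attribution described above can be sketched like this. The URL, campaign names, and submission counts are invented for illustration.

```python
# Sketch: tag each locale/CTA variant with UTM parameters so that form
# submissions can be attributed per variant. URL and campaign names are
# invented for illustration.
from urllib.parse import urlencode

def utm_url(base: str, locale: str, variant: str) -> str:
    """Build a campaign URL tagged with locale and CTA-variant UTMs."""
    params = {
        "utm_source": "newsletter",
        "utm_medium": "email",
        "utm_campaign": f"whitepaper-{locale}",
        "utm_content": variant,  # distinguishes CTA wording A vs. B
    }
    return f"{base}?{urlencode(params)}"

print(utm_url("https://example.com/de/whitepaper", "de-DE", "jetzt-herunterladen"))

def conversion_rate(submissions: int, visits: int) -> float:
    return submissions / visits if visits else 0.0

# Made-up test results for the two German CTA variants:
print(conversion_rate(12, 1000))  # 0.012 for "Jetzt herunterladen"
print(conversion_rate(31, 1000))  # 0.031 for "Whitepaper sichern"
```

With each variant carrying its own `utm_content` value, the analytics dashboard can split German conversions per wording instead of reporting one blended number for the locale.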
8: Localization Debt — Shortcuts That Don’t Work
Like tech debt, “localization debt” builds when teams cut corners.
Common Shortcuts
- Relying on raw MT.
- Weak source control.
- Unclear ownership.
- Contractor pressure to push flawed jobs.
How to Mitigate
- Require source sign-off.
- Define roles clearly.
- Maintain steady communication loops.
- Push for top-down enforcement.
9: Cultural Adoption & Changing Minds
When marketing demands “1,000 words in 24h,” localization teams need process, not shortcuts.
How to Win Adoption
- MT Engines: Pre-translate, but never publish raw MT.
- In-House Linguists: Faster turnaround for key languages.
- TMS Workflows: Include GEO reviewers in workflows to capture preferences in glossaries.
Result
Preferences are documented, reused, and validated. This improves speed and quality, and it shifts mindsets from resistance to collaboration.
10: How a Translation Job Really Flows
Localization is not a “send for translation” button — it’s a workflow with checks and tools.
Supporting Systems
- Translation Memories (TM): Reuse past translations. Full matches = cheap; fuzzy matches = partial, need edits.
- Glossaries: Apply approved terms automatically.
- Trackers/Finance Hub: Assignments and billing per job.
- Connectors: Link AEM, Figma, app stores directly to the TMS.
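The full-match versus fuzzy-match distinction can be sketched with a similarity score. The 0.75 threshold is an assumption mirroring common TM pricing bands, and the stored segment is invented; real TM tools use more sophisticated segment-level matching.

```python
# Sketch of TM match scoring: exact matches reuse past translations
# as-is; fuzzy matches above a threshold are offered for post-editing;
# anything below goes to full translation. The 0.75 cutoff is assumed.
from difflib import SequenceMatcher

tm = {
    "Your subscription has expired.": "Ihr Abonnement ist abgelaufen.",
}

def best_match(source: str, threshold: float = 0.75):
    """Return (score, stored_source, stored_target), or None if no match."""
    scored = [
        (SequenceMatcher(None, source, past).ratio(), past, target)
        for past, target in tm.items()
    ]
    score, past, target = max(scored)
    return (score, past, target) if score >= threshold else None

print(best_match("Your subscription has expired."))  # exact: score 1.0
print(best_match("Your subscription will expire."))  # fuzzy: needs editing
print(best_match("Contact support for help."))       # no usable match: None
```

This is why full matches are cheap and fuzzy matches are billed at a discount: the linguist only edits the delta between the stored segment and the new source.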
Step-by-Step Flow
- Request: Marketer submits content.
- Pre-check: Workbench shows source text and TM matches.
- Translation & QA: Vendor → internal check → PM → LQA → GEO review.
- Delivery & In-Context Review (ICR): Localized pages checked in live/staging.
Why This Matters
Without a workflow, localization collapses under ad-hoc fixes. Enterprise success comes from tools + people + processes.
11: Entering New Markets (Budget, Timeline, Team)
Launching in a new country needs presence, compliance, and support.
What You Need
- Budget + Brand Awareness: Distribution deals or campaigns need heavy localization support.
- Patience: Expect months of iteration, not weeks.
Team Roles
- Copywriters/creatives.
- Legal.
- Localization PM + linguists.
- LQA/ICR testers.
- Channel/partner ops.
- Analytics.
Key Enabler
Iterative market testing: Start with a subset, then expand.
Example
Distribution partnerships can expose millions of users quickly — but every clause and banner must be localized. Without resources, shortcuts become bottlenecks.
12: Agency vs. Individual Translators vs. In-house
Each model has strengths; most enterprises blend them.
In-House Translators
- Strengths: Consistency, speed.
- Limits: Only cost-effective for core languages.
Agencies (SLV/MLV)
- Strengths: Coverage and scale.
- Limits: Variable consistency and slower responsiveness.
Freelancers
- Strengths: Flexible, lower cost.
- Limits: Scalability and availability bottlenecks.
Hybrid Model
- Cheaper MLV for volume.
- Premium vendor for LQA spot checks.
- In-house translators for priority locales.
SLAs That Matter
- Turnaround time.
- Quality, measured via defect metrics.
Takeaway
Blending capacity with accountability ensures coverage, quality, and speed without overspending.