ZenML

Automating Translation Workflows with LLMs for Post-Editing and Transcreation

TransPerfect 2025

TransPerfect integrated Amazon Bedrock into their GlobalLink translation management system to automate and improve translation workflows. The solution addressed two key challenges: automating post-editing of machine translations and enabling AI-assisted transcreation of creative content. By implementing LLM-powered workflows, they achieved up to 50% cost savings in translation post-editing, 60% productivity gains in transcreation, and up to 80% reduction in project turnaround times while maintaining high quality standards.

Industry

Tech

Overview

TransPerfect is a global leader in language and technology solutions, founded in 1992 with over 10,000 employees across 140 cities on six continents. The company offers translation, localization, interpretation, and various language services, with a proprietary translation management system called GlobalLink. This case study describes how TransPerfect partnered with the AWS Customer Channel Technology – Localization Team to integrate Amazon Bedrock LLMs into GlobalLink to improve translation quality and efficiency.

The AWS Localization Team is itself a major TransPerfect customer, managing end-to-end localization of AWS digital content including webpages, technical documentation, ebooks, banners, and videos—handling billions of words across multiple languages. The growing demand for multilingual content and increasing workloads necessitated automation improvements in the translation pipeline.

The Challenge

Content localization traditionally involves multiple manual steps: asset handoff, preprocessing, machine translation, post-editing, quality review cycles, and handback. These processes are described as costly and time-consuming. The team identified two specific areas for improvement:

- Automating the post-editing of machine-translated output
- Enabling AI-assisted transcreation of creative content

Technical Architecture and Workflow

The solution integrates LLMs into the existing translation workflow within GlobalLink, chaining four components in sequence. The example below illustrates the core of that chain: machine translation (MT), LLM-based automatic post-editing (APE), and human post-editing (HPE).

The case study provides a concrete example showing a source English segment being translated to French through MT, then refined by APE (with subtle improvements like changing “au moment de créer” to “lorsque vous créez”), and finally reviewed by HPE.
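The case study does not publish GlobalLink internals, so the staged flow can only be sketched. The following is a minimal illustration of chaining MT, APE, and human review as pipeline stages; the stage functions are toy stubs (the "MT" and "APE" stubs just replay the case study's French example), and all names are assumptions, not TransPerfect's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# Each workflow stage transforms the segment's current target text.
Stage = Callable[[str], str]

@dataclass
class Segment:
    source: str        # English source text
    target: str = ""   # current translation state

def run_pipeline(segment: Segment, stages: List[Stage]) -> Segment:
    """Apply each workflow stage in order; the first stage sees the source."""
    for stage in stages:
        segment.target = stage(segment.target or segment.source)
    return segment

def machine_translate(text: str) -> str:
    # Toy MT stub reproducing the case study's raw output phrasing.
    return text.replace("when you create", "au moment de créer")

def auto_post_edit(text: str) -> str:
    # The case study's APE example: smooth "au moment de créer"
    # into the more natural "lorsque vous créez".
    return text.replace("au moment de créer", "lorsque vous créez")

def human_post_edit(text: str) -> str:
    return text  # human reviewer signs off unchanged in this toy example

seg = run_pipeline(Segment(source="when you create a bucket"),
                   [machine_translate, auto_post_edit, human_post_edit])
```

The key design point is that APE sits between MT and human review, so humans see already-refined output.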

LLM Integration Details

TransPerfect chose Amazon Bedrock for several reasons tied to production deployment concerns, detailed in the following sections.

Data Security and Compliance

Amazon Bedrock ensures that data is neither shared with foundation model providers nor used to improve base models. This is described as critical for TransPerfect’s clients in sensitive industries such as life sciences and banking. The service holds major compliance certifications, including ISO and SOC, and carries FedRAMP authorization, making it suitable for government contracts. Extensive monitoring and logging capabilities support auditability requirements.

Responsible AI and Hallucination Prevention

The case study highlights Amazon Bedrock Guardrails as enabling TransPerfect to build and customize truthfulness protections for its automatic post-editing offering. This matters because LLMs can hallucinate, and translation workflows demand precision: Amazon Bedrock’s contextual grounding checks detect and filter responses that are factually incorrect or inconsistent with the source content.
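Contextual grounding checks work by passing the source content as a grounding reference alongside the model output. Below is a hedged sketch of the request payload shape for the Amazon Bedrock `ApplyGuardrail` API; the guardrail identifier and version are placeholders, and this illustrates the general pattern rather than TransPerfect's configuration.

```python
# Build the content payload for bedrock-runtime's apply_guardrail call.
# Qualifiers tell the guardrail which text is the grounding source
# (the source segment) and which is the model output to check.

def build_grounding_request(source_text: str, translation: str) -> dict:
    return {
        "guardrailIdentifier": "example-guardrail-id",  # placeholder
        "guardrailVersion": "1",                        # placeholder
        "source": "OUTPUT",  # checking model output, not user input
        "content": [
            {"text": {"text": source_text,
                      "qualifiers": ["grounding_source"]}},
            {"text": {"text": translation,
                      "qualifiers": ["guardrail_content"]}},
        ],
    }

request = build_grounding_request(
    "Choose a region when you create the bucket.",
    "Choisissez une région lorsque vous créez le compartiment.",
)

# In production this payload would be sent via boto3, e.g.:
#   client = boto3.client("bedrock-runtime")
#   response = client.apply_guardrail(**request)
# A GUARDRAIL_INTERVENED action could then route the segment to human review.
```

The threshold at which grounding failures block output is configured on the guardrail itself, not in this request.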

Model Selection

The solution uses Anthropic’s Claude and Amazon Nova Pro models available through Amazon Bedrock. For transcreation specifically, the LLMs are prompted to create multiple candidate translations with variations, from which human linguists can choose the most suitable adaptation rather than composing from scratch.
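The exact transcreation prompts are not published; the sketch below illustrates the multiple-candidate pattern the case study describes. The prompt wording, the numbered-list output format, and the parsing logic are all assumptions for illustration.

```python
def build_transcreation_prompt(source: str, target_lang: str, n: int = 3) -> str:
    """Ask the model for several creative adaptations, numbered for parsing."""
    return (
        f"Transcreate the following marketing copy into {target_lang}. "
        f"Produce {n} distinct candidate adaptations that preserve intent "
        f"and tone rather than translating literally. "
        f"Number them 1..{n}, one per line.\n\n"
        f"Source: {source}"
    )

def parse_candidates(model_output: str) -> list:
    """Extract numbered candidates so a linguist can pick the best one."""
    candidates = []
    for line in model_output.splitlines():
        line = line.strip()
        if line[:1].isdigit() and "." in line[:3]:
            candidates.append(line.split(".", 1)[1].strip())
    return candidates

# Example model output (fabricated for illustration):
output = "1. Liberté de créer\n2. Créez sans limites\n3. L'audace de créer"
candidates = parse_candidates(output)
```

The linguist then selects among `candidates` instead of composing an adaptation from scratch, which is where the reported productivity gain comes from.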

Prompt Engineering Approach

For automatic post-editing, the LLM prompts incorporate established quality standards and client preferences, allowing the model to improve existing machine translations rather than regenerate them from scratch.
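The case study does not enumerate the prompt contents, so the sketch below assumes the ingredients commonly used in this pattern (source segment, raw MT output, and client style guidance) purely for illustration; the wording and field names are hypothetical.

```python
def build_ape_prompt(source: str, mt_output: str, style_notes: str) -> str:
    """Compose an automatic post-editing (APE) prompt.
    All fields are illustrative; real prompts would encode
    client-specific quality standards and preferences."""
    return (
        "You are a professional post-editor. Improve the machine translation "
        "below so it is accurate and fluent. Change only what is necessary.\n\n"
        f"Source (en): {source}\n"
        f"Machine translation (fr): {mt_output}\n"
        f"Style guidance: {style_notes}\n\n"
        "Return only the post-edited translation."
    )

prompt = build_ape_prompt(
    "Choose a region when you create the bucket.",
    "Choisissez une région au moment de créer le compartiment.",
    "Prefer natural subordinate clauses over nominal constructions.",
)
```

Note the instruction to change only what is necessary, which matches the post-editing goal of minimal, targeted refinement rather than retranslation.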

Workflow Configurations

The solution supports different workflow configurations depending on content type and quality requirements, ranging from fully automated post-editing to flows that retain human review.

Production Results

The case study reports the following outcomes, though readers should note these are self-reported figures from a vendor partnership announcement:

- Up to 50% cost savings in translation post-editing
- 60% productivity gains in transcreation
- Up to 80% reduction in project turnaround times
- A reported 95% rate of "markedly improved translation quality"

These are significant claims, particularly the 95% improvement rate. While the results sound impressive, the definition of "markedly improved translation quality" and the methodology for measuring these improvements are not detailed in the case study.

LLMOps Considerations

Several aspects of this case study are relevant to LLMOps best practices:

Integration with Existing Systems

Rather than building a standalone AI solution, the LLM capabilities were integrated into TransPerfect’s existing GlobalLink translation management system. This approach leverages established workflows and tooling while adding AI capabilities at specific points in the pipeline.

Human-in-the-Loop Design

The solution maintains human oversight at various stages. For transcreation, linguists choose from multiple LLM-generated candidates. For post-editing, content can still route to human reviewers when needed. This graduated approach allows for quality assurance while gaining efficiency benefits.
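This graduated routing can be sketched as a simple policy function. The quality-score input, the threshold value, and the guardrail signal are assumptions for illustration, not a documented GlobalLink mechanism.

```python
def route_segment(grounding_passed: bool, quality_score: float,
                  threshold: float = 0.85) -> str:
    """Decide where a post-edited segment goes next.
    Segments failing guardrails or scoring low go to a human reviewer;
    the rest can proceed with lighter-touch review."""
    if not grounding_passed:
        return "human_review"   # guardrail intervened: mandatory review
    if quality_score < threshold:
        return "human_review"   # low-confidence output
    return "auto_approve"
```

The threshold is the operational lever: lowering it sends more content straight through, trading review cost against risk.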

Guardrails and Safety

The explicit use of Amazon Bedrock Guardrails for contextual grounding checks demonstrates attention to output quality control in production. Translation is a domain where accuracy is paramount, and hallucinations or inaccuracies could have significant consequences for clients.

Scalability

Amazon Bedrock as a fully managed service provides scalability benefits, which is important given the stated volumes (billions of words across multiple languages).

Compliance Requirements

The case study emphasizes compliance certifications (ISO, SOC, FedRAMP) as decision factors, reflecting the reality that enterprise AI deployments must meet regulatory and security requirements.

Critical Assessment

While this case study presents compelling results, some caveats merit consideration:

- The reported figures are self-reported and come from a vendor partnership announcement
- The definition of "markedly improved translation quality" and the measurement methodology are not disclosed

Despite these limitations, the case study provides a useful example of how LLMs can be integrated into established enterprise workflows for incremental automation rather than wholesale replacement of existing systems.
