The translation industry is moving away from general-purpose large language models towards smaller, specialised models built for greater accuracy and efficiency.
Over the past five years, neural machine translation (NMT) has moved from cautious experimentation to mainstream adoption, with large language models (LLMs) playing a major role in driving new approaches to content localisation for global businesses. However, the future of the industry appears to be moving towards small language models (SLMs) designed for specific industries and language pairs.
Grant Straker, Co-founder and Chief Executive Officer of Straker, has outlined his perspective on the changing landscape: "In just five years, the translation industry has undergone a radical transformation. What began as cautious experimentation with neural machine translation (NMT) has evolved into widespread adoption of large language models (LLMs), reshaping how global businesses approach content localisation.
"But the next leap forward, I believe, won't come from models that are bigger, broader, and more complex. It will come from something much more focused and far more effective."
According to Straker, small language models offer tangible benefits over their larger predecessors. He explained that Straker has observed consistent outperformance by SLMs in their intended roles: "At Straker, we've seen first-hand how Small Language Models (SLMs) - purpose-built for specific industries and language pairs - consistently outperform their larger, general-purpose counterparts. In our experience, SLMs represent the future of translation: more accurate, more efficient, and more commercially viable."
The generalist limitation
LLMs are undoubtedly powerful, with advanced capabilities to generate human-like text and address diverse tasks. However, Straker argued that this versatility becomes a liability in specialised translation, especially in highly regulated industries: "There's no question that LLMs are powerful. Their versatility and capacity to generate human-like text have unlocked incredible new possibilities. But when it comes to translation - especially in regulated sectors - their generalist design becomes a weakness."
To illustrate, he pointed to the word "equity," which carries different meanings in real estate and finance. "Without domain-specific training, LLMs can easily misinterpret such terms, risking confusion or, worse, critical errors in legal, financial, or medical communications."
Operational concerns were also addressed, with Straker stating, "The problem isn't just accuracy. LLMs are resource-hungry, slow to deliver, and often require heavy human editing to meet professional standards. For businesses managing hundreds of content streams across multiple languages, this quickly becomes unsustainable both financially and operationally."
The case for smaller models
To address these challenges, Straker described the development of Tiri, a suite of SLMs created for translation and localisation tasks. He stated, "To solve this, we developed Tiri - a suite of Small Language Models trained specifically for translation and localisation. These aren't all-purpose bots trying to do everything. Instead, they operate more like expert linguists: deeply specialised, highly contextual, and tuned for specific tasks."
He shared an example of improved quality and contextual accuracy in financial translation tasks when using a Tiri model: "When we trained a Tiri model to handle Japanese-to-English investor relations material, the gains in quality and contextual accuracy were significant, outperforming general-purpose LLMs that lacked financial nuance. That's the power of specialisation."
Three business benefits
Straker laid out three core reasons why SLMs may better serve the needs of business translation. The first centres on domain accuracy: "Tiri models are trained on high-quality, domain-specific data including translation memories and industry glossaries. They don't just translate words; they understand the context behind them. Whether it's legal contracts, pharmaceutical documentation, or technical manuals, the result is consistent, first-pass accuracy."
The second reason relates to efficiency: "Smaller models require less compute power. That means lower costs, faster speeds, and seamless integration into existing workflows without the need for expensive infrastructure or long processing times."
The third advantage is continued improvement: "We've embedded Reinforcement Learning from Human Feedback (RLHF) directly into our workflows. This means Tiri models get better over time, learning from real-world edits to align more closely with client expectations and preferred tone."
Translation as a craft
In his remarks, Straker also discussed the philosophical transition in translation with the rise of AI-driven tools: "For us, this isn't just a technical shift, it's a philosophical one. Translation is a craft. It's about preserving meaning, intent, and cultural nuance across borders. AI must honour that, not flatten it."
He concluded with his belief in the role of specialised AI models in the future of translation: "That's why I believe the future of translation will be shaped by specialisation, not scale. Businesses don't need AI that knows a little about everything. They need AI that deeply understands their domain, their customers, and their voice.
"Because in the end, translation isn't just about sounding fluent. It's about being understood."