
MT Summit 2025 marked a clear shift towards discussing deployments, inclusive design, and ‘communication’ instead of ‘translation’. We’ve already highlighted some of the most impactful papers presented this year in a curated list. But today, we’re going deeper with someone who shaped the summit from the inside out.
Helena Moniz was General Chair of MT Summit 2025 and President of the International Association for Machine Translation (IAMT). She also leads the European Association for Machine Translation (EAMT) and chairs the Ethics Committee at the Center for Responsible AI. Her unique position at the intersection of research, industry, and policy gives her a clear-eyed view of where machine translation must go next.
Read on to discover the essential skills needed to succeed in today’s machine translation industry.
LLMs Take Over: Shifting Machine Translation from Metrics to Meaning
What were the biggest trends at MT Summit 2025? What do they tell us about the future of machine translation? What about agentic AI?
First, this was definitely the year of LLMs. Core tasks such as translation, style and quality estimation are being redefined. But there’s also reflection. Should we keep calling it “machine translation” or move toward “multilingual technologies”? The field is evolving, and so should the way we talk about it.
Second, this conference was about real deployments. No more hypothetical models: people showed what worked and what didn't in live pipelines. As systems get more complex, bridging academia, industry, and policymakers is more crucial than ever.
And third, we can’t ignore ethics. Progress has to go hand in hand with responsibility.
As for agentic AI, it’s still early days. While the term is everywhere, it often feels like a rebranding of what’s already been done. True agentic behavior, where AI actively initiates, decides, and adapts, is still emerging.
Old Metrics Don’t Cut It Anymore
How are quality measurement metrics keeping up with LLMs and agentic AI, and how is the role of translators evolving alongside these new technologies?
Traditional metrics like COMET are outdated. Ricardo Rei, the winner of the best thesis award, even said his thesis is now “deprecated.” We live in the LLM and agentic age, and old metrics don’t catch cultural adaptation or creativity.
Are we using the right metrics? Nope. Some work is being done on new evaluation pipelines, but it’s early days. ChatGPT and similar tools sometimes outperform old metrics but bring new headaches around transparency. We still don’t fully understand their training data or real-world behavior, especially in multilingual, culturally sensitive contexts.
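For readers who haven't touched these "old" metrics in practice, here is a minimal sketch of scoring a single segment with the open-source unbabel-comet package. The checkpoint name and the example segment are illustrative choices, not part of Helena's remarks, and the point is simply what such a score does and doesn't tell you.

```python
# Minimal sketch: scoring one translation with a COMET checkpoint.
# Assumes `pip install unbabel-comet`; checkpoint and segment are illustrative.
from comet import download_model, load_from_checkpoint

# Download and load a reference-based COMET checkpoint.
model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

# Each item pairs a source segment, an MT hypothesis, and a human reference.
data = [
    {
        "src": "Estamos ansiosos por vos receber em Lisboa.",
        "mt": "We are anxious to receive you in Lisbon.",
        "ref": "We look forward to welcoming you in Lisbon.",
    }
]

# Returns segment-level scores and a corpus-level system score --
# a number that says nothing about cultural fit, tone, or creativity.
output = model.predict(data, batch_size=8, gpus=0)  # set gpus=1 if a GPU is available
print(output.scores, output.system_score)
```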
Are we post-editing with LLMs the same way we were post-editing MT? No, it's different now. Translators are moving toward transcreation, adapting content culturally and personally. We're training new translators for a new world.
From MT to “Linguaging”: Reframing What We Translate
You mentioned we’re moving from “language” to “communication.” What does that mean?
Translation isn’t just about language; it’s about emotions, personality, and culture. We break the language barrier, but we don’t break the communication barrier. The goal is no longer just linguistic accuracy; it’s human connection. That’s why multimodal systems, those that go beyond text, are so important. But here, we’re really missing the mark. Current systems are still far from capturing the richness of human expression.
Ethics in MT: More Than Just Bicycle Brakes on a Transatlantic Plane
How seriously is the MT field taking AI ethics and inclusivity?
Ethics isn’t a nice-to-have; it’s essential. We’re using LLMs without knowing whether they include copyrighted material or how fair they are to the creators of the data. That raises huge ethical questions.
At BridgeAI, we develop practical checklists to promote fairness, transparency and risk management. These tools help public bodies to navigate AI legislation such as the EU AI Act with confidence. We need real solutions, not just papers.
On inclusivity, there’s so much more to do. Low-resource languages and dialects remain underserved. At a recent ACL Birds of a Feather session on crisis translation, we discussed the lack of data and how to address this issue, considering both synthetic data and new techniques.
Inclusivity and Low-Resource Languages: Still a Long Tail
What progress was made on low-resource languages?
Despite all the talk, systemic change is slow. We still see isolated case studies rather than real solutions. One key point is that Latin America isn’t one language but dozens. Argentine Spanish is very different from Colombian Spanish.
Only 17 languages are commercially viable for MT today. If you’re working on Inuktitut or Cherokee, you’re isolated. There are no citations. There are no grants. The incentives don’t work.
Big Tech talks about covering 1,000 languages, but most of this work happens behind closed doors, without community involvement or transparency.
LLM Hype vs. Reality: What You Need to Know About Today’s Machine Translation
What are the current issues with large language models in MT and how can the community help?
There’s a lot of hype. But even with prompting, we lack real context and personalization. We don’t have cultural adaptation, gender inclusivity, or dialect-specific models. My Azorean Portuguese? Nope. And transparency? No way. We’re drowning in a nightmare of a million LLMs. Users don’t know what to pick or trust.
We need a transparency index for what we build. New metrics, ethical frameworks, and real inclusivity for low-resource languages are urgent. The EAMT is open to contributions from special interest groups working in the areas of low-resource languages, cultural adaptation and crisis translation. Everyone is invited.
Will AI Replace Translators? What to Learn to Stay Ahead
Helena Moniz confirms that AI literacy is no longer optional. It is now a must. Whether linguist, project manager, or researcher, understanding AI’s capabilities and limits is now part of the job.
No need to become a machine learning engineer, but:
Hard skills to focus on:
- Basics of AI and LLMs: how they work, where they fail, and how to prompt effectively
- Leveraging AI tools: post-editing, quality assurance, and automation in localization (see the short prompting sketch after this list)
- Data awareness: why training data matters, transparency, and bias
- Core translation background: solid theory and practice, paired with AI skills
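To make the first two bullets concrete, here is a minimal sketch of prompting an LLM to post-edit a raw MT segment. It uses the OpenAI Python SDK only as an example; the model name, prompt wording, and segments are illustrative assumptions, not a recommended setup.

```python
# Minimal sketch: LLM-assisted post-editing via the OpenAI Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# model name, prompt, and segments are illustrative.
from openai import OpenAI

client = OpenAI()

source = "Estamos ansiosos por vos receber na nossa conferência em setembro."
raw_mt = "We are anxious to receive you at our conference in September."

prompt = (
    "You are a professional English post-editor.\n"
    "Post-edit the machine translation so it reads naturally and preserves "
    "the meaning of the source. Flag anything you are unsure about.\n\n"
    f"Source (Portuguese): {source}\n"
    f"Machine translation: {raw_mt}\n"
    "Post-edited translation:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,        # deterministic output is easier to review
)
print(response.choices[0].message.content)
```

The human skills in the next list are exactly what this sketch cannot supply: judging whether the "natural" output is also culturally appropriate, unbiased, and faithful.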
Soft skills that matter:
- Cultural intelligence: machines translate but can’t “linguage” nuance and tone
- Ethical judgment: spotting bias or inappropriate output is critical
- Critical assessment: question AI outputs, don’t blindly trust automation
- Teamwork: building together, not working in isolation
At Custom.MT we’re here to help you grow where we can make the biggest impact — on the technical side. Check out our webinars on GenAI, prompting techniques, and best practices here, or sign up for the next live session to keep your skills sharp and future-ready.