Large Language Model-Based Translation Agents: A Review and Future Perspectives

Authors

  • YiHeng Qi, School of Foreign Studies, Yangtze University, Jingzhou 434000, China

DOI:

https://doi.org/10.63313/LLCS.9146

Keywords:

Large language models, translation agents, human-AI collaborative translation

Abstract

Large language models (LLMs) have enabled general-purpose agents to expand into vertical domains such as translation. Accordingly, translation tools are evolving from passive programs into translation agents capable of autonomous planning, memory management, and external resource use. This paper reviews recent studies on translation agents from the perspective of Translation Studies rather than from a purely algorithmic viewpoint. It traces their technological evolution and theoretical mapping, with particular attention to single-agent prompting with retrieval-augmented generation, multi-agent collaboration, and multi-level memory mechanisms. These developments partly simulate and reconstruct the cognitive strategies and collaborative networks of human translators. At the same time, the integration of translation agents into complex workflows reveals persistent problems, including limitations in quality assessment, blurred ethical accountability, and fluctuations in translators’ cognitive load. The paper argues that future research should further examine evaluation mechanisms, ethical norms, and responsibility allocation in human-AI collaborative translation, while also strengthening translators’ prompt literacy, digital resilience, and curriculum reform in translation education. The significance of translation agents lies not in replacing human translators but in reshaping the translation ecosystem and its collaborative models.


References

[1] Chen, J. (2025) A Comparative Analysis of Generative AI Translation Ethics at Home and Abroad. Journal of Kunming University of Science and Technology (Social Sciences Edition), 25(5).

[2] Liu, M. (2024) Research on Russian Translation of Chinese Political Discourse Empowered by Large Language Models: Creation and Application of a Multi-Agent Interactive Translation Framework. Technology Enhanced Foreign Language Education, No. 6.

[3] Liu, S. and Zhang, Y. (2025) From MTPE to HACT: Research on Translation Workflow Innovation Driven by Large Language Models. Foreign Language Education Research, 13(1), 18-26.

[4] Wang, H. and Liu, S. (2025) From MTPE to AIPE: The Evolution of Translation Models in the Generative AI Era and Its Implications for Translation Education. Shandong Foreign Language Teaching, 46(3), 111-121.

[5] Wang, S. (2025) From "Search Quotient" to "Question Quotient": Constructing a Conceptual Framework for Translator Prompt Literacy in the Era of Generative AI. Foreign Language Education Research, 13(1), 1-8.

[6] Zhao, J. and Li, X. (2024) Research on the Construction and Application of Translation Agents Driven by Large Language Models. Technology Enhanced Foreign Language Education, No. 5, 22-28.

[7] Alfaify, A. (2025) Replacing the Irreplaceable: A Case Study on the Limitations of MT and AI Translation during the 2023 Gaza-Israel Conflict. In: Bouillon, P., Gerlach, J., Girletti, S., et al., Eds., Proceedings of Machine Translation Summit XX: Volume 2, European Association for Machine Translation, Geneva, 8-17.

[8] Bowker, L. (2021) Machine Translation Literacy Instruction for Non-Translators: A Comparison of Five Delivery Formats. In Proceedings of the Translation and Interpreting Technology Online Conference, Held Online, 25-36.

[9] Chen, L. (2025) Examining Cognitive Load in Human-Machine Collaborative Translation: Insights from Eye-Tracking Experiments of Chinese-English Translation. Frontiers in Psychology, 16, Article 1570929.

[10] Feng, Z., Su, J., Zheng, J., Ren, J., Zhang, Y., Wu, J., Wang, H. and Liu, Z. (2025) M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vienna, 7084-7107.

[11] Guo, X. and Moindjie, M.A. (2022) A Perspective on the Translator’s Subjectivity and Its Constraints in Literary Translation: A Critical Analysis. Journal of Positive School Psychology, 6(4), 3118-3123.

[12] Hackenbuchner, J. and Krüger, R. (2023) DataLitMT - Teaching Data Literacy in the Context of Machine Translation Literacy. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, Tampere, 285-293.

[13] Iskanderov, Y. and Pautov, M. (2020) Agents and Multi-Agent Systems as Actor-Networks. In Proceedings of the 12th International Conference on Agents and Artificial Intelligence, Valletta, 179-184.

[14] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.-T., Rocktäschel, T., Riedel, S. and Kiela, D. (2020) Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in Neural Information Processing Systems, 33, 9459-9474.

[15] Li, W., Chen, J., Li, B., et al. (2025) TACTIC: Translation Agents with Cognitive-Theoretic Interactive Collaboration. arXiv preprint arXiv:2506.08403.

[16] Parasuraman, R. and Manzey, D.H. (2010) Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3), 381-410.

[17] Santoni de Sio, F. and Mecacci, G. (2021) Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them. Philosophy & Technology, 34, 1057-1084.

[18] Wang, G.X., Hu, J. and Qian, J. (2026) Who Has the Final Word? Designing Multi-Agent Collaborative Framework for Professional Translators. arXiv preprint arXiv:2602.19016.

[19] Wang, Y., Zeng, J., Liu, X., Wong, D.F., Meng, F., Zhou, J. and Zhang, M. (2024) DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory. arXiv preprint arXiv:2410.08143.

[20] Wu, M., Yuan, Y., Haffari, G. and Wang, L. (2024) (Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts. arXiv preprint arXiv:2405.11804.

[21] Xi, Z., Chen, W., Guo, X., et al. (2023) The Rise and Potential of Large Language Model Based Agents: A Survey. arXiv preprint arXiv:2309.07864.

[22] Zhang, R., Zhao, W. and Eger, S. (2025) How Good Are LLMs for Literary Translation, Really? Literary Translation Evaluation with Humans and LLMs. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Albuquerque, 10961-10988.

Published

2026-03-24


How to Cite

Large Language Model-Based Translation Agents: A Review and Future Perspectives. (2026). Literature, Language and Cultural Studies, 4(3), 80-87. https://doi.org/10.63313/LLCS.9146