The Natural Language Processing & Portuguese-Chinese Machine Translation Laboratory (NLP2CT) of the University of Macau (UM) Faculty of Science and Technology (FST), together with the translation team of the Alibaba Dharma Academy, participated in the Metrics Shared Task at the Sixth Conference on Machine Translation (WMT21). The joint team won first prizes in five of the eight automatic translation quality evaluation tracks, as well as two second prizes and one fifth prize, outperforming competitors such as Google and Unbabel.
With the rapid development of machine translation in recent years, automatic evaluation of translation quality has become an indispensable part of the field. Given the large demand for translation, it is important to discover errors efficiently, tune the parameters of a translation system, evaluate its performance, and compare differences between systems. These demands have made automatic evaluation of translation quality a hot research topic.
RoBLEURT, the automatic translation quality evaluation model used in this competition, was jointly developed by UM and the Dharma Academy. In addition to multi-stage pre-training to enhance the model's learning capabilities, it uses robustness-enhancing training strategies based on pseudo-data construction, together with ensembling techniques that combine multiple models trained on multiple cross-validation folds and rerank their outputs. RoBLEURT was evaluated on eight translation tracks. It won first prizes in one spoken-language translation track (Chinese-English) and four news translation tracks (Chinese-English, Czech-English, German-English, and Japanese-English), outperforming rival models from Google and Unbabel. In addition, RoBLEURT won second prizes in the Hausa-to-English and Icelandic-to-English tracks, for which no training data was available. The competition showcased the university's cutting-edge research in automatic translation quality evaluation, provided a platform for exchanging experience and knowledge with hi-tech companies, and created an opportunity for students to engage with frontier research.
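To give a rough sense of what the ensembling step described above might look like, the sketch below averages the scores of several metric models (standing in for models trained on different cross-validation folds) and reranks candidate translations by the averaged score. This is only an illustration under assumed interfaces, not the actual RoBLEURT implementation; the placeholder scorer, `ensemble_score`, and `rerank` names are hypothetical.

```python
# A minimal sketch (not the authors' implementation) of combining multiple
# learned-metric models: each model scores a candidate translation against its
# reference, and the averaged score is used to rank candidates.

from statistics import mean
from typing import Callable, List, Tuple

Scorer = Callable[[str, str], float]  # (candidate, reference) -> quality score


def ensemble_score(candidate: str, reference: str, scorers: List[Scorer]) -> float:
    """Average the quality scores predicted by each fold's model."""
    return mean(scorer(candidate, reference) for scorer in scorers)


def rerank(candidates: List[str], reference: str,
           scorers: List[Scorer]) -> List[Tuple[str, float]]:
    """Order candidate translations from best to worst by their ensembled score."""
    scored = [(c, ensemble_score(c, reference, scorers)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Toy stand-in for a trained model: score by word overlap with the reference.
    def overlap_scorer(candidate: str, reference: str) -> float:
        cand, ref = set(candidate.lower().split()), set(reference.lower().split())
        return len(cand & ref) / max(len(ref), 1)

    folds = [overlap_scorer] * 3  # pretend these are three differently trained folds
    ranking = rerank(
        ["the cat sat on the mat", "a cat is sitting on a mat"],
        reference="the cat sat on the mat",
        scorers=folds,
    )
    print(ranking)
```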
NLP2CT mainly conducts research on machine learning and natural language processing, including deep learning, machine translation, dialogue systems, and natural language reasoning. It has established extensive cooperation with many well-known research institutions and companies. So far, it has published more than 100 research articles at leading international conferences and in leading journals, such as the Association for Computational Linguistics (ACL), the International Conference on Learning Representations (ICLR), the AAAI Conference on Artificial Intelligence (AAAI), the International Joint Conference on Artificial Intelligence (IJCAI), the Conference on Empirical Methods in Natural Language Processing (EMNLP), and the IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP). The research was supported by the Science and Technology Development Fund (file number: 0101/2019/A2) and UM (file number: MYRG2020-00054-FST).