Intra-Sentential Intricacies Pertaining to the AI-Recognition of Arabic Female Names Rendered into English
Keywords: Translation, Nouns, Arabic, Artificial Intelligence, Machine Translation
Abstract
AI-translation models unexpectedly fail to communicate messages between natural languages, producing errors that vary with the degree and nature of relatedness between the source and target languages. By examining lapses in the AI translation of Arabic female names into English, this paper red-flags the error metrics applied to such Arabic texts. A reliable MT evaluation tool, when compared with the BLEU and NIST measures according to Turian (2003), is the unigram-based F-measure, which uses a bitext grid to identify similarities between the texts. Such evaluation mechanisms evidently reflect the degraded Target Text (TT) quality that results from the nature of the Source Text, a matter that requires AI-translation developers to parameterize their models so that they handle such inadequacies. This paper calls for novel ways of evaluating AI translation systems in order to improve their efficacy on the one hand, and to make them abide by proper translation theories on the other.
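To make the metric named in the abstract concrete, the sketch below computes a unigram-based F-measure between a candidate translation and a reference: precision is the share of matched unigrams in the candidate, recall is their share in the reference, and F is their harmonic mean. This is a minimal illustration of the general technique, not the paper's own evaluation pipeline; the whitespace tokenizer, the clipping of repeated unigrams, and the "Amal"/"hope" example sentence are assumptions introduced here for demonstration.

```python
from collections import Counter

def unigram_f_measure(candidate: str, reference: str) -> float:
    """Unigram-based F-measure between a candidate translation and a reference.

    Precision = matched unigrams / candidate length
    Recall    = matched unigrams / reference length
    F         = harmonic mean of precision and recall
    Repeated unigrams are clipped to their count in the reference.
    """
    cand_tokens = candidate.lower().split()
    ref_tokens = reference.lower().split()
    if not cand_tokens or not ref_tokens:
        return 0.0

    cand_counts = Counter(cand_tokens)
    ref_counts = Counter(ref_tokens)
    # Count overlapping unigrams, clipped by how often each appears in the reference.
    matches = sum(min(count, ref_counts[token]) for token, count in cand_counts.items())
    if matches == 0:
        return 0.0

    precision = matches / len(cand_tokens)
    recall = matches / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: the Arabic female name "Amal" (literally "hope")
# rendered as a common noun instead of being kept as a proper name.
print(unigram_f_measure("my aunt hope visited us", "my aunt Amal visited us"))  # 0.8
```

On this toy pair, four of the five unigrams match, so precision and recall are both 0.8 and the F-measure is 0.8; the single mistranslated name lowers the score even though the rest of the sentence is intact, which is the kind of sensitivity the abstract attributes to the unigram-based measure.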
Copyright (c) 2025 International Journal of Linguistics and Translation Studies

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.