Morphological Knowledge in Multilingual Large Language Models: A Comparative Analysis of mT5 and ByT5

dc.contributor.advisorLiesenfeld, A.
dc.contributor.advisorGalke, L.P.A.
dc.contributor.authorĐăng, Thi Thao Anh
dc.date.issued2024
dc.description.abstractMorphology is a crucial factor in multilingual language modeling, as it poses direct challenges for tokenization. Here, we seek to understand how tokenization influences the morphological knowledge encoded in multilingual language models. Specifically, we capture the impact of tokenization by probing two pre-trained language models, mT5 and ByT5, which share the same model architecture, training objective, and training data and differ only in their tokenization strategies: subword tokenization vs. character-level tokenization. Probing the morphological knowledge encoded in these models across 17 languages, our analyses show that multilingual language models learn the morphological systems of some languages better than others; that morphological information is encoded in the middle and late layers; and that morphology surfaces in earlier layers with subword tokenization, yet character-level models eventually yield commensurate morphological knowledge. Finally, we show that languages with more irregularities require a higher proportion of the pre-training data to compensate for the increased complexity.
dc.identifier.urihttps://theses.ubn.ru.nl/handle/123456789/18141
dc.language.isoen
dc.thesis.facultyFaculteit der Letteren
dc.thesis.specialisationspecialisations::Faculteit der Letteren::Researchmasters::Researchmaster Language and Communication
dc.thesis.studyprogrammestudyprogrammes::Faculteit der Letteren::Researchmasters
dc.thesis.typeResearchmaster
dc.titleMorphological Knowledge in Multilingual Large Language Models: A Comparative Analysis of mT5 and ByT5
Files
Original bundle
Name: thesis_Anh.pdf
Size: 782.28 KB
Format: Adobe Portable Document Format