What do Neural Machine Translation Models Learn about Morphology?

<p dir="ltr">Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze...

Full description

Saved in:
Bibliographic Details
Main Author: Yonatan Belinkov (18973897)
Other Authors: Nadir Durrani (5297438), Fahim Dalvi (18427905), Hassan Sajjad (5297441), James Glass (11410946)
Published: 2017
Date: 2017-07-30
DOI: 10.18653/v1/p17-1080
Online access: https://figshare.com/articles/conference_contribution/What_do_Neural_Machine_Translation_Models_Learn_about_Morphology_/27050740
Rights: CC BY 4.0 (open access)
Subjects: Information and computing sciences; Artificial intelligence; Machine learning; Language, communication and culture; Linguistics; Neural Machine Translation (MT); State-of-the-Art Performance; End-to-End Architecture; Language Representations; Training Process; Granularity Levels
Type: Text (conference contribution, published version)
description <p dir="ltr">Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs. character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs. decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.</p><h2>Other Information</h2><p dir="ltr">Published in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)<br>License: <a href="http://creativecommons.org/licenses/by/4.0/" target="_blank">http://creativecommons.org/licenses/by/4.0/</a><br>See conference contribution on publisher's website: <a href="https://dx.doi.org/10.18653/v1/p17-1080" target="_blank">https://dx.doi.org/10.18653/v1/p17-1080</a></p><p dir="ltr">Conference information: 55th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 518–523 Vancouver, Canada, July 30 - August 4, 2017</p><p dir="ltr"><br></p>
Record ID: oai:figshare.com:article/27050740
Repository: Manara2