Jointly Learning to Label Sentences and Tokens

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Learning to construct text representations in end-to-end systems can be difficult, as natural languages are highly compositional and task-specific annotated datasets are often limited in size. Methods for directly supervising language composition can allow us to guide the models based on existing knowledge, regularizing them towards more robust and interpretable representations. In this paper, we investigate how objectives at different granularities can be used to learn better language representations, and we propose an architecture for jointly learning to label sentences and tokens. The predictions at each level are combined using an attention mechanism, with token-level labels also acting as explicit supervision for composing sentence-level representations. Our experiments show that by learning to perform these tasks jointly on multiple levels, the model achieves substantial improvements for both sentence classification and sequence labeling.
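The core idea in the abstract can be illustrated with a short sketch: a shared encoder produces token representations, a token-level head predicts per-token labels, and those token scores double as attention weights when composing the sentence representation. The PyTorch sketch below is hypothetical and only meant to illustrate the idea; the module names, label indices, hyperparameters, and the simple summed loss are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class JointLabeler(nn.Module):
    """Illustrative joint sentence/token labeler (not the paper's exact model)."""

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128,
                 num_token_labels=2, num_sentence_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Token-level head: one label score vector per token.
        self.token_head = nn.Linear(2 * hidden_dim, num_token_labels)
        # Sentence-level head applied to the attention-weighted token sum.
        self.sentence_head = nn.Linear(2 * hidden_dim, num_sentence_labels)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))    # (B, T, 2H)
        token_logits = self.token_head(h)             # (B, T, L_tok)
        # Assumption: label index 1 is the "positive" token label; its
        # predicted score serves as the attention weight, so tokens the
        # model labels as relevant dominate the sentence representation.
        attn = torch.softmax(token_logits[..., 1], dim=1)           # (B, T)
        sentence_vec = torch.bmm(attn.unsqueeze(1), h).squeeze(1)   # (B, 2H)
        sentence_logits = self.sentence_head(sentence_vec)
        return token_logits, sentence_logits


# Joint objective: summing the two cross-entropy losses lets the token-level
# annotation act as explicit supervision for the sentence-level composition.
model = JointLabeler(vocab_size=1000)
tokens = torch.randint(0, 1000, (4, 12))     # batch of 4 sentences, 12 tokens
token_gold = torch.randint(0, 2, (4, 12))    # per-token gold labels
sentence_gold = torch.randint(0, 2, (4,))    # per-sentence gold labels
token_logits, sentence_logits = model(tokens)
loss = (nn.functional.cross_entropy(token_logits.transpose(1, 2), token_gold)
        + nn.functional.cross_entropy(sentence_logits, sentence_gold))
loss.backward()
```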
Original language: English
Title of host publication: Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI 2019
Publisher: AAAI Press
Publication date: 2019
Pages: 6916-6923
ISBN (Print): 978-1-57735-809-1
DOIs
Publication status: Published - 2019
Event: 33rd AAAI Conference on Artificial Intelligence - AAAI 2019 - Honolulu, United States
Duration: 27 Jan 2019 - 1 Feb 2019

Conference

Conference: 33rd AAAI Conference on Artificial Intelligence - AAAI 2019
Country: United States
City: Honolulu
Period: 27/01/2019 - 01/02/2019

