When does deep multi-task learning work for loosely related document classification tasks?

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

When does deep multi-task learning work for loosely related document classification tasks? / Kerinec, Emma; Søgaard, Anders; Braud, Chloé.

Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, 2018. p. 1-8.


Harvard

Kerinec, E, Søgaard, A & Braud, C 2018, When does deep multi-task learning work for loosely related document classification tasks? in Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, pp. 1-8, 2018 EMNLP Workshop BlackboxNLP, Brussels, Belgium, 01/11/2018.

APA

Kerinec, E., Søgaard, A., & Braud, C. (2018). When does deep multi-task learning work for loosely related document classification tasks? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (pp. 1-8). Association for Computational Linguistics.

Vancouver

Kerinec E, Søgaard A, Braud C. When does deep multi-task learning work for loosely related document classification tasks? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics. 2018. p. 1-8

Author

Kerinec, Emma ; Søgaard, Anders ; Braud, Chloé. / When does deep multi-task learning work for loosely related document classification tasks? Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics, 2018. pp. 1-8

Bibtex

@inproceedings{1b8e9c85f782431cab6eb4c27d846b51,
title = "When does deep multi-task learning work for loosely related document classification tasks?",
abstract = "This work aims to contribute to our understanding of when multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning. We focus on the setting of learning from loosely related tasks, for which no theoretical guarantees exist. We therefore approach the question empirically, studying which properties of datasets and single-task learning characteristics correlate with improvements from multi-task learning. We are the first to study this in a text classification setting and across more than 500 different task pairs.",
author = "Emma Kerinec and Anders S{\o}gaard and Chlo{\'e} Braud",
year = "2018",
language = "English",
pages = "1--8",
booktitle = "Proceedings of the 2018 EMNLP Workshop BlackboxNLP",
publisher = "Association for Computational Linguistics",
note = "2018 EMNLP Workshop BlackboxNLP ; Conference date: 01-11-2018 Through 01-11-2018",

}

RIS

TY - GEN

T1 - When does deep multi-task learning work for loosely related document classification tasks?

AU - Kerinec, Emma

AU - Søgaard, Anders

AU - Braud, Chloé

PY - 2018

Y1 - 2018

N2 - This work aims to contribute to our understanding of when multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning. We focus on the setting of learning from loosely related tasks, for which no theoretical guarantees exist. We therefore approach the question empirically, studying which properties of datasets and single-task learning characteristics correlate with improvements from multi-task learning. We are the first to study this in a text classification setting and across more than 500 different task pairs.

AB - This work aims to contribute to our understanding of when multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning. We focus on the setting of learning from loosely related tasks, for which no theoretical guarantees exist. We therefore approach the question empirically, studying which properties of datasets and single-task learning characteristics correlate with improvements from multi-task learning. We are the first to study this in a text classification setting and across more than 500 different task pairs.

M3 - Article in proceedings

SP - 1

EP - 8

BT - Proceedings of the 2018 EMNLP Workshop BlackboxNLP

PB - Association for Computational Linguistics

Y2 - 1 November 2018 through 1 November 2018

ER -
