Neural Speed Reading Audited

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Several approaches to neural speed reading have been presented at major NLP and machine learning conferences in 2017–20: “human-inspired” recurrent network architectures that learn to “read” text faster by skipping irrelevant words, typically optimizing a joint objective that minimizes both classification error rate and the FLOPs used at inference time. This paper reflects on the meaningfulness of the speed reading task, showing that (a) better and faster approaches to, say, document classification already exist that also learn to ignore parts of the input (I give an example with a 7% error reduction and a 136x speed-up over the state of the art in neural speed reading); and (b) that any claims that neural speed reading is “human-inspired” are ill-founded.
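
To make the joint objective described above concrete, here is a minimal, illustrative sketch, not the architecture of any specific speed-reading paper: a recurrent classifier with a per-token gate that softly skips words, trained on classification loss plus a penalty on the expected fraction of tokens read, a simple proxy for inference-time FLOPs. All names (SkipReader, lambda_flops, the soft-skip relaxation) are assumptions of this sketch.

```python
# Hedged sketch of a neural speed-reading objective: classification loss
# plus a FLOPs proxy (expected fraction of tokens read). Illustrative only.
import torch
import torch.nn as nn

class SkipReader(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128, n_classes=2):
        super().__init__()
        self.hid_dim = hid_dim
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.GRUCell(emb_dim, hid_dim)
        self.gate = nn.Linear(hid_dim + emb_dim, 1)  # read-vs.-skip decision
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        h = tokens.new_zeros(tokens.size(0), self.hid_dim, dtype=torch.float)
        read_probs = []
        for t in range(tokens.size(1)):
            x = self.emb(tokens[:, t])
            p_read = torch.sigmoid(self.gate(torch.cat([h, x], dim=-1)))
            read_probs.append(p_read)
            # Soft skip: update the state only in proportion to p_read.
            # A hard, sampled skip would need REINFORCE or a straight-through
            # estimator; the soft relaxation keeps this sketch differentiable.
            h = p_read * self.cell(x, h) + (1 - p_read) * h
        return self.out(h), torch.cat(read_probs, dim=1)

# Joint objective: cross-entropy + lambda_flops * expected tokens read,
# trading classification accuracy against inference-time compute.
model = SkipReader(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (4, 20))
labels = torch.randint(0, 2, (4,))
logits, read_probs = model(tokens)
lambda_flops = 0.1
loss = nn.functional.cross_entropy(logits, labels) + lambda_flops * read_probs.mean()
loss.backward()
```

The paper's point (a) is that a much simpler model, with no recurrence and no learned skipping, can beat this kind of architecture on both accuracy and speed; the sketch above only illustrates what the speed-reading papers optimize.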
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2020
Publisher: Association for Computational Linguistics
Publication date: 2020
Pages: 148–153
DOIs
Publication status: Published - 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing, online
Duration: 16 Nov 2020 – 20 Nov 2020
http://2020.emnlp.org

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Location: online
Period: 16/11/2020 – 20/11/2020
Internet address: http://2020.emnlp.org
