Named-Entity Tagging a Very Large Unbalanced Corpus. Training and Evaluating NE classifiers.

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

  • Joachim Bingel
  • Thomas Haider
We describe a systematic and application-oriented approach to training and evaluating named entity recognition and classification (NERC) systems, the purpose of which is to identify an optimal system and to train an optimal model for named entity tagging of DeReKo, a very large general-purpose corpus of contemporary German (Kupietz et al., 2010). DeReKo's strong dispersion with respect to genre, register and time forces us to base our choice of a specific NERC system on an evaluation performed on a representative sample of DeReKo rather than on performance figures reported for the individual NERC systems when evaluated on more uniform and less diverse data. We create and manually annotate such a representative sample as evaluation data for three different NERC systems, for each of which various models are learnt on multiple training data sets. The proposed sampling method can be viewed as a generally applicable method for sampling evaluation data from an unbalanced target corpus for any sort of natural language processing.
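
As a rough illustration of the kind of representative sampling the abstract describes, the sketch below draws a stratified, proportionally allocated document sample from a corpus whose strata (e.g. genre, register, decade) are unevenly sized. This is not the paper's published procedure; the stratification variables, field names and function names are all hypothetical, and the actual method is detailed in the LREC '14 paper itself.

    # Minimal sketch, assuming documents carry genre and year metadata.
    import random
    from collections import defaultdict

    def stratified_sample(documents, strata_key, sample_size, seed=42):
        """Draw a sample whose strata proportions mirror the full corpus.

        documents   -- iterable of dicts carrying the metadata used by strata_key
        strata_key  -- callable mapping a document to its stratum label
        sample_size -- approximate total number of documents to draw
        """
        rng = random.Random(seed)

        # Group documents by stratum, e.g. ("news", 200) for news texts from the 2000s.
        strata = defaultdict(list)
        for doc in documents:
            strata[strata_key(doc)].append(doc)

        total = sum(len(docs) for docs in strata.values())
        sample = []
        for label, docs in strata.items():
            # Proportional allocation, with at least one document per non-empty stratum.
            quota = max(1, round(sample_size * len(docs) / total))
            sample.extend(rng.sample(docs, min(quota, len(docs))))
        return sample

    # Hypothetical usage: sample ~200 documents, stratified by genre and decade.
    corpus = [
        {"id": i, "genre": genre, "year": 1990 + (i % 25), "text": "..."}
        for i, genre in enumerate(["news", "fiction", "web"] * 1000)
    ]
    evaluation_sample = stratified_sample(
        corpus, strata_key=lambda d: (d["genre"], d["year"] // 10), sample_size=200
    )
    print(len(evaluation_sample), "documents sampled for manual NE annotation")

The design choice illustrated here is proportional allocation: each stratum contributes to the evaluation sample in proportion to its share of the target corpus, so that system scores measured on the sample reflect performance on the corpus as a whole rather than on its most convenient or most uniform parts.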
Original language: English
Title of host publication: Proceedings of the Ninth International Conference on Language Resources and Evaluation: LREC '14
Number of pages: 6
Publication date: 2014
Pages: 2578-2583
Article number: 967
ISBN (Print): 978-2-9517408-8-4
Publication status: Published - 2014

ID: 154008746