Analyzing Dataset Annotation Quality Management in the Wild

datacite.relation.isReferencedBy https://github.com/UKPLab/arxiv2023-qanno
datacite.relation.isSupplementTo https://arxiv.org/abs/2307.08153
dc.contributor.author Klie, Jan-Christoph
dc.contributor.author Eckart de Castilho, Richard
dc.contributor.author Gurevych, Iryna
dc.date.accessioned 2023-09-07T21:45:11Z
dc.date.available 2023-09-07T21:45:11Z
dc.date.created 2023-09-07
dc.date.issued 2023-09-07
dc.description This is the accompanying data for the paper "Analyzing Dataset Annotation Quality Management in the Wild". Data quality is crucial for training accurate, unbiased, and trustworthy machine learning models and for their correct evaluation. Recent works, however, have shown that even popular datasets used to train and evaluate state-of-the-art models contain a non-negligible amount of erroneous annotations, bias, or annotation artifacts. While best practices and guidelines for annotation projects exist, to the best of our knowledge no large-scale analysis has yet been performed on how quality management is actually conducted when creating natural language datasets and whether these recommendations are followed. Therefore, we first survey and summarize recommended quality management practices for dataset creation as described in the literature and provide suggestions on how to apply them. Then, we compile a corpus of 591 scientific publications introducing text datasets and annotate it for quality-related aspects, such as annotator management, agreement, adjudication, or data validation. Using these annotations, we then analyze how quality management is conducted in practice. We find that a majority of the annotated publications apply good or very good quality management. However, we deem the effort of 30% of the works as only subpar. Our analysis also shows common errors, especially with using inter-annotator agreement and computing annotation error rates. de_DE
dc.identifier.uri https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3939
dc.identifier.uri https://doi.org/10.48328/tudatalib-1220
dc.rights.license CC-BY-NC-4.0 (https://creativecommons.org/licenses/by-nc/4.0)
dc.subject annotation de_DE
dc.subject quality management de_DE
dc.subject nlp de_DE
dc.subject.classification 4.43-04
dc.subject.classification 4.43-05
dc.subject.ddc 004
dc.title Analyzing Dataset Annotation Quality Management in the Wild de_DE
dc.type Dataset de_DE
dcterms.accessRights restrictedAccess
person.identifier.orcid 0000-0003-0181-6450
person.identifier.orcid 0000-0003-0991-7045
person.identifier.orcid 0000-0003-2187-7621
tuda.history.classification Version=2016-2020;409-05 Interactive and Intelligent Systems, Image and Language Processing, Computer Graphics and Visualisation
tuda.unit TUDa
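
The description above notes that common errors involve inter-annotator agreement and annotation error rates. As an illustration only, and not taken from this dataset or the arxiv2023-qanno repository, the following minimal Python sketch shows how these two quantities are commonly computed: Cohen's kappa between two annotators and a simple error rate against adjudicated gold labels. The function names and toy labels are hypothetical.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((counts_a[label] / n) * (counts_b[label] / n)
              for label in counts_a.keys() | counts_b.keys())
    return (p_o - p_e) / (1 - p_e)

def error_rate(labels, gold):
    """Fraction of annotations that disagree with adjudicated gold labels."""
    return sum(l != g for l, g in zip(labels, gold)) / len(gold)

if __name__ == "__main__":
    # Hypothetical toy labels; a real analysis would use the dataset's own annotations.
    a    = ["POS", "NEG", "POS", "NEU", "POS"]
    b    = ["POS", "NEG", "NEU", "NEU", "POS"]
    gold = ["POS", "NEG", "POS", "NEU", "NEG"]
    print(f"Cohen's kappa (A vs. B): {cohens_kappa(a, b):.3f}")   # ≈ 0.688
    print(f"Error rate (A vs. gold): {error_rate(a, gold):.3f}")  # = 0.200

In practice, library implementations such as sklearn.metrics.cohen_kappa_score are usually preferable to hand-rolled code; the sketch only spells out the underlying arithmetic.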

Files

Original bundle

Name: qanno-tudatalib-2025-09-16.zip
Size: 1.93 GB
Format: ZIP archive
