
dc.creator: Heinzerling, Benjamin
dc.date: 2019-02-06
dc.identifier: https://doi.org/10.11588/data/V9CXPR
dc.description: BPEmb is a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization.
dc.language: Not applicable
dc.publisher: heiDATA
dc.subject: Computer and Information Science
dc.subject: subword embeddings
dc.subject: byte-pair encoding
dc.subject: multilingual
dc.title: BPEmb: Pre-trained Subword Embeddings in 275 Languages (LREC 2018)
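As background to the description above, Byte-Pair Encoding learns a subword vocabulary by repeatedly merging the most frequent pair of adjacent symbols in a corpus. The following is a minimal illustrative sketch of that merge loop, not the BPEmb implementation; all function names are hypothetical:

```python
from collections import Counter

def most_frequent_pair(words):
    # Count adjacent symbol pairs across the vocabulary,
    # weighting each pair by its word's corpus frequency.
    pairs = Counter()
    for symbols, freq in words.items():
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words, pair):
    # Replace every occurrence of `pair` with one merged symbol.
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

def learn_bpe(corpus, num_merges):
    # Start from characters: each word is a tuple of single-char symbols.
    words = Counter(tuple(w) for w in corpus)
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(words)
        merges.append(pair)
        words = merge_pair(words, pair)
    return merges

merges = learn_bpe(["low", "lower", "lowest", "low"], num_merges=2)
```

Because BPE operates on raw character sequences, no language-specific tokenizer is needed, which is one reason the description notes that BPEmb requires "no tokenization".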


Files in this dataset

Files | Size | Format | View

There are no files associated with this dataset.

This dataset appears in:
