Show simple item record

dc.creator: Heinzerling, Benjamin
dc.date: 2019-02-06
dc.identifier: https://doi.org/10.11588/data/V9CXPR
dc.description: BPEmb is a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as a testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization.
dc.language: Not applicable
dc.publisher: heiDATA
dc.subject: Computer and Information Science
dc.subject: subword embeddings
dc.subject: byte-pair encoding
dc.subject: multilingual
dc.title: BPEmb: Pre-trained Subword Embeddings in 275 Languages (LREC 2018)
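The description states that the embeddings are based on Byte-Pair Encoding. As background, here is a minimal sketch of the BPE merge-learning step: starting from words as character sequences, the most frequent adjacent symbol pair is repeatedly merged into a new symbol. The function name and corpus are illustrative only and are not part of BPEmb's released tooling.

```python
from collections import Counter

def learn_bpe_merges(corpus, num_merges):
    """Learn BPE merge operations from a whitespace-tokenized corpus.

    Each word is a tuple of symbols (initially single characters);
    the most frequent adjacent pair is merged, num_merges times.
    """
    vocab = Counter(tuple(word) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge to every word in the vocabulary.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

# Example: on a toy corpus, the most frequent pair is merged first.
corpus = ["low"] * 5 + ["lower"] * 2 + ["newest"] * 6
print(learn_bpe_merges(corpus, 3))
```

The released BPEmb models pair such learned subword vocabularies with embeddings trained on Wikipedia text; this sketch only illustrates the merge step, not the embedding training.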


Files in this item


There are no files associated with this item.

