Exploring the impact of word embeddings for disjoint semisupervised Spanish verb sense disambiguation

Authors

  • Cristian Cardellino Universidad Nacional de Córdoba
  • Laura Alonso Alemany Universidad Nacional de Córdoba

DOI:

https://doi.org/10.4114/intartif.vol21iss61pp67-81

Abstract

This work explores the use of word embeddings as features for Spanish verb sense disambiguation (VSD). This type of learning technique is known as disjoint semisupervised learning: an unsupervised algorithm (i.e., the word embeddings) is first trained on unlabeled data as a separate step, and its output is then used as input by a supervised classifier. In this work we focus primarily on two aspects of VSD trained with unsupervised word representations. First, we show how the domain on which the word embeddings are trained affects the performance of the supervised task: a domain-specific corpus can improve the results if that domain is shared with the supervised task, even when the word embeddings are trained on smaller corpora. Second, we show that using word embeddings helps the model generalize compared to not using them; that is, embeddings decrease the model's tendency to overfit.
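The disjoint pipeline the abstract describes can be illustrated with a minimal sketch on toy data (not the paper's corpora or models): an unsupervised first step learns word embeddings from unlabeled Spanish text, here via SVD of a co-occurrence matrix as a stand-in for word2vec-style training, and a supervised second step then uses those embeddings as features to disambiguate senses of the verb "tomar" (drink vs. take transport). The sentences, labels, and the nearest-centroid classifier are all hypothetical illustrations.

```python
import numpy as np

# Step 1 (unsupervised): learn word embeddings from unlabeled text.
# SVD of a window-2 co-occurrence matrix stands in for word2vec-style
# training; the paper's actual embedding method may differ.
unlabeled = [
    "ella quiere tomar un cafe caliente",
    "vamos a tomar el tren a madrid",
    "tomar un cafe por la manana",
    "tomar el autobus a la estacion",
]
tokens = [s.split() for s in unlabeled]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                cooc[idx[w], idx[sent[j]]] += 1.0
U, S, _ = np.linalg.svd(cooc, full_matrices=False)
emb = U[:, :3] * S[:3]  # 3-dimensional embeddings for each vocab word

# Step 2 (supervised, disjoint from step 1): represent each occurrence of
# "tomar" by the mean embedding of its context words, and train a simple
# nearest-centroid classifier over (toy) sense labels.
def features(sentence):
    ctx = [w for w in sentence.split() if w != "tomar" and w in idx]
    return np.mean([emb[idx[w]] for w in ctx], axis=0)

train = [
    ("ella quiere tomar un cafe caliente", "INGEST"),
    ("vamos a tomar el tren a madrid", "MOVE"),
]
centroids = {label: features(sent) for sent, label in train}

def predict(sentence):
    f = features(sentence)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

print(predict("ella quiere tomar un cafe caliente"))  # recovers INGEST
```

The point of the sketch is the separation of the two steps: the embeddings are fixed before the classifier ever sees a labeled example, so any domain signal they carry comes entirely from the unlabeled corpus.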

Published

2018-03-21

How to Cite

Cardellino, C., & Alonso Alemany, L. (2018). Exploring the impact of word embeddings for disjoint semisupervised Spanish verb sense disambiguation. Inteligencia Artificial, 21(61), 67-81. https://doi.org/10.4114/intartif.vol21iss61pp67-81