Learning Turkish Hypernymy Using Word Embeddings


Date

2018

Journal Title

Journal ISSN

Volume Title

Publisher

ATLANTIS PRESS

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Recently, Neural Network Language Models have been effectively applied to many types of Natural Language Processing (NLP) tasks. One popular type of task is the discovery of semantic and syntactic regularities that support researchers in building a lexicon. Word embedding representations are notably good at discovering such linguistic regularities. We argue that two supervised learning approaches based on word embeddings can be successfully applied to the hypernymy problem, namely, utilizing embedding offsets between word pairs and learning a semantic projection that links the words. The offset-based model classifies offsets as hypernym or not. The semantic projection approach trains a semantic transformation matrix that ideally maps a hyponym to its hypernym. A semantic projection model can learn a projection matrix provided that there is a sufficient number of training word pairs. However, we argue that such models tend to learn an is-a-particular-hypernym relation rather than to generalize the is-a relation. The embeddings are trained by applying both the Continuous Bag-of-Words and the Skip-Gram training models on a large corpus of Turkish text. The main contribution of the study is the development of a novel and efficient architecture that is well suited to applying word embedding approaches to the Turkish language domain. We report that both the projection and the offset classification models give promising and novel results for the Turkish language.
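As an illustration of the two approaches the abstract describes, the sketch below (not the authors' code) shows how an offset classifier and a projection matrix could be built from pre-trained word vectors. The `emb` lookup, the word-pair lists, the logistic-regression classifier, and the least-squares fit of the projection are illustrative assumptions; the paper's own training setup may differ.

# Minimal sketch, assuming `emb` is a dict mapping Turkish words to numpy vectors
# (e.g., loaded from CBOW or Skip-Gram embeddings) and `pairs` are (hyponym, hypernym) tuples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def offset_features(pairs, emb):
    # Offset model: represent each (hyponym, hypernym-candidate) pair by the
    # difference of its embedding vectors.
    return np.array([emb[hypo] - emb[hyper] for hypo, hyper in pairs])

def train_offset_classifier(pos_pairs, neg_pairs, emb):
    # Binary classification of offsets: hypernym pair vs. not.
    X = np.vstack([offset_features(pos_pairs, emb), offset_features(neg_pairs, emb)])
    y = np.array([1] * len(pos_pairs) + [0] * len(neg_pairs))
    return LogisticRegression(max_iter=1000).fit(X, y)

def train_projection(pairs, emb):
    # Projection model: learn a matrix Phi that maps a hyponym vector toward
    # its hypernym vector, here fitted by least squares (an assumed choice).
    X = np.array([emb[hypo] for hypo, _ in pairs])    # hyponym vectors
    Y = np.array([emb[hyper] for _, hyper in pairs])  # hypernym vectors
    Phi, *_ = np.linalg.lstsq(X, Y, rcond=None)       # X @ Phi ~= Y
    return Phi

def predict_hypernym(word, Phi, emb, top_k=5):
    # Project the hyponym and return the nearest vocabulary words by cosine similarity.
    target = emb[word] @ Phi
    vocab = list(emb.keys())
    M = np.array([emb[w] for w in vocab])
    sims = M @ target / (np.linalg.norm(M, axis=1) * np.linalg.norm(target) + 1e-9)
    return [vocab[i] for i in np.argsort(-sims)[:top_k] if vocab[i] != word]

With a sufficient number of labeled pairs, train_offset_classifier gives a hypernym/non-hypernym decision for any candidate pair, while predict_hypernym uses the learned projection to propose hypernym candidates for a given hyponym.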

Description

Keywords

Word Embeddings, Semantic Relation Projection, Semantic Relation Classification

Source

INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS

WoS Q Value

Q3

Scopus Q Value

Volume

Issue

Citation