Construction of Context Models for Word Sense Disambiguation

Abstract

This paper presents a study on the use of word context features for Word Sense Disambiguation (WSD). State-of-the-art WSD systems achieve high accuracy by using resources such as dictionaries, taggers, lexical analyzers, or topic modeling packages. However, these resources are either too heavyweight or lack sufficient coverage for large-scale tasks such as information retrieval. The use of local context for WSD is common, but the rationale behind the formulation of features is often based on trial and error. We therefore investigate the notion of relatedness of context words to the target word (the word to be disambiguated), and propose an unsupervised method for finding the optimal weights of context words based on their distance to the target word. The key idea behind the method is that the optimal weights should maximize the similarity of two context models constructed from different context samples of the same word. Our experimental results show that the strength of the relation between a context word and the target word decays approximately as a power law of their distance. The resulting context models are used in Naïve Bayes classifiers for word sense disambiguation. Our evaluation on SemEval WSD tasks in both English and Japanese shows that our method achieves state-of-the-art effectiveness even though, unlike most existing methods, it does not rely on external tools. This high efficiency makes it possible to use our method in large-scale applications such as information retrieval.
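The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the power-law exponent `alpha`, the window size, the add-one smoothing, and all function and variable names are assumptions introduced here for clarity.

```python
import math
from collections import defaultdict

def context_weights(window, alpha=1.0):
    # Assumed power-law weighting: a context word at distance d
    # from the target gets weight d^(-alpha). The value of alpha
    # is illustrative, not taken from the paper.
    return {d: d ** -alpha for d in range(1, window + 1)}

def build_context_model(sentences, target, window=5, alpha=1.0):
    """Weighted bag-of-words model around each occurrence of `target`."""
    w = context_weights(window, alpha)
    model = defaultdict(float)
    for tokens in sentences:
        for i, tok in enumerate(tokens):
            if tok != target:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                d = abs(j - i)
                if d == 0:
                    continue
                model[tokens[j]] += w[d]  # closer words count more
    return dict(model)

def nb_disambiguate(context_model, sense_models, priors):
    """Pick the sense maximizing log P(sense) plus the weighted sum
    of log P(word | sense), with add-one smoothing (an assumption)."""
    scores = {}
    for sense, sm in sense_models.items():
        total = sum(sm.values()) + len(context_model)
        score = math.log(priors[sense])
        for word, weight in context_model.items():
            score += weight * math.log((sm.get(word, 0.0) + 1.0) / total)
        scores[sense] = score
    return max(scores, key=scores.get)
```

For example, a context model built from occurrences of an ambiguous word such as "bank" would assign larger counts to adjacent words than to distant ones, and the Naïve Bayes step would then compare that weighted context against per-sense models estimated from sense-labeled or clustered samples.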
