Language modeling for information retrieval
Authors
Bibliographic Details
Language modeling for information retrieval
(The Kluwer international series on information retrieval, 13)
Kluwer Academic Publishers, 2003
Held by 14 university libraries
Description and Table of Contents
Description
A statistical language model, or more simply a language model, is a probabilistic mechanism for generating text. Such a definition is general enough to include an endless variety of schemes. However, a distinction should be made between generative models, which can in principle be used to synthesize artificial text, and discriminative techniques to classify text into predefined categories. The first statistical language modeler was Claude Shannon. In exploring the application of his newly founded theory of information to human language, Shannon considered language as a statistical source, and measured how well simple n-gram models predicted or, equivalently, compressed natural text. To do this, he estimated the entropy of English through experiments with human subjects, and also estimated the cross-entropy of the n-gram models on natural text. The ability of language models to be quantitatively evaluated in this way is one of their important virtues. Of course, estimating the true entropy of language is an elusive goal, aiming at many moving targets, since language is so varied and evolves so quickly. Yet fifty years after Shannon's study, language models remain, by all measures, far from the Shannon entropy limit in terms of their predictive power. However, this has not kept them from being useful for a variety of text processing tasks, and moreover can be viewed as encouragement that there is still great room for improvement in statistical language modeling.
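As a minimal sketch of the Shannon-style evaluation mentioned above (not taken from the book): a bigram language model can be scored by its cross-entropy, in bits per word, on held-out text. The toy corpus and the add-one smoothing below are illustrative assumptions.

```python
import math
from collections import Counter

# Tiny illustrative corpora (assumptions, not data from the book).
train = "the cat sat on the mat the dog sat on the rug".split()
test = "the cat sat on the rug".split()

vocab = set(train) | set(test)
V = len(vocab)

unigrams = Counter(train)
bigrams = Counter(zip(train, train[1:]))

def prob(prev, word):
    # Add-one (Laplace) smoothed bigram probability P(word | prev).
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

# Cross-entropy in bits per word over the held-out bigrams.
log_prob = sum(math.log2(prob(p, w)) for p, w in zip(test, test[1:]))
cross_entropy = -log_prob / (len(test) - 1)
print(f"cross-entropy: {cross_entropy:.2f} bits/word")
```

Lower cross-entropy means the model predicts (equivalently, compresses) the held-out text better; this is the sense in which language models can be evaluated quantitatively.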
Table of Contents
- Preface. Contributing Authors.
- 1: Probabilistic Relevance Models Based on Document and Query Generation (J. Lafferty, ChengXiang Zhai). 1. Introduction. 2. Generative Relevance Models. 3. Discussion. 4. Historical Notes.
- 2: Relevance Models in Information Retrieval (V. Lavrenko, W.B. Croft). 1. Introduction. 2. Relevance Models. 3. Estimating a Relevance Model. 4. Experimental Results. 5. Conclusions.
- 3: Language Modeling and Relevance (K. Sparck Jones, S. Robertson, D. Hiemstra, H. Zaragoza). 1. Introduction. 2. Relevance in LM. 3. A Possible LM Approach: Parsimonious Models. 4. Concluding Comment.
- 4: Contributions of Language Modeling to the Theory and Practice of IR (W.R. Greiff, W.T. Morgan). 1. Introduction. 2. What is Language Modeling in IR. 3. Simulation Studies of Variance Reduction. 4. Continued Exploration.
- 5: Language Models for Topic Tracking (W. Kraaij, M. Spitters). 1. Introduction. 2. Language Models for IR Tasks. 3. Experiments. 4. Discussion. 5. Conclusions.
- 6: A Probabilistic Approach to Term Translation for Cross-Lingual Retrieval (Jinxi Xu, R. Weischedel). 1. Introduction. 2. A Probabilistic Model for CLIR. 3. Estimating Term Translation Probabilities. 4. Related Work. 5. Test Collections. 6. Comparing CLIR with Monolingual Baseline. 7. Comparing Probabilistic and Structural Translations. 8. Comparing Probabilistic Translation and MT. 9. Measuring CLIR Performance as a Function of Resource Sizes. 10. Reducing the Translation Cost of Creating a Parallel Corpus. 11. Conclusions.
- 7: Using Compression-Based Language Models for Text Categorization (W.J. Teahan, D.J. Harper). 1. Background. 2. Compression Models. 3. Bayes Classifiers. 4. PPM-Based Language Models. 5. Experimental Results. 6. Discussion.
- 8: Applications of Score Distributions in Information Retrieval (R. Manmatha). 1. Introduction. 2. Related Work. 3. Modeling Score Distributions of Search Engines. 4. Combining Search Engines Indexing the Same Database. 5. Applications to Filtering and Topic Detection and Tracking. 6. Combining Engines Indexing Different Databases or Languages. 7. Conclusion.
- 9: An Unbiased Generative Model for Setting Dissemination Thresholds (Yi Zhang, J. Callan). 1. Introduction. 2. Generative Models of Dissemination Thresholds. 3. The Non-Random Sampling Problem & Solution. 4. Experimental Methodology. 5. Experimental Results. 6. Conclusion.
- 10: Language Modeling Experiments in Non-Extractive Summarization (V.O. Mittal, M.J. Witbrock). 1. Introduction. 2. Related Work. 3. Statistical Models of Gisting. 4. Training the Models. 5. Output and Evaluation. 6.
From "Nielsen BookData"