Statistical language learning

Bibliographic Information

Statistical language learning

Eugene Charniak

(A Bradford book) (Language, speech, and communication)

MIT Press, 1996

  • pbk.

Held by 30 university libraries

Notes

Includes bibliographical references (p. [163]-164) and index

Description and Table of Contents

Description

Eugene Charniak breaks new ground in artificial intelligence research by presenting statistical language processing from an artificial intelligence point of view, in a text for researchers and scientists with a traditional computer science background. New, exacting empirical methods are needed to break the deadlock in such areas of artificial intelligence as robotics, knowledge representation, machine learning, machine translation, and natural language processing (NLP). It is time, Charniak observes, to switch paradigms. This text introduces statistical language processing techniques: word tagging, parsing with probabilistic context-free grammars, grammar induction, syntactic disambiguation, semantic word classes, and word-sense disambiguation, along with the underlying mathematics and chapter exercises. Charniak points out that as a method of attacking NLP problems, the statistical approach has several advantages. It is grounded in real text and therefore promises to produce usable results, and it offers an obvious way to approach learning: "one simply gathers statistics."

Table of Contents

  • Part 1 The Standard Model: Two Technologies
  • Morphology and Knowledge of Words
  • Syntax and Context-Free Grammars
  • Chart Parsing
  • Meaning and Semantic Processing
  • Exercises
  • Part 2 Statistical Models and the Entropy of English: A Fragment of Probability Theory
  • Statistical Models
  • Speech Recognition
  • Entropy
  • Markov Chains
  • Cross Entropy
  • Cross Entropy as a Model Evaluator
  • Exercises
  • Part 3 Hidden Markov Models and Two Applications: Trigram Models of English
  • Hidden Markov Models
  • Part-of-Speech Tagging
  • Exercises
  • Part 4 Algorithms for Hidden Markov Models: Finding the Most Likely Path
  • Computing HMM Output Probabilities
  • HMM Training
  • Exercises
  • Part 5 Probabilistic Context-Free Grammars: Probabilistic Grammars
  • PCFGs and Syntactic Ambiguity
  • PCFGs and Grammar Induction
  • PCFGs and Ungrammaticality
  • PCFGs and Language Modelling
  • Basic Algorithms for PCFGs
  • Exercises
  • Part 6 The Mathematics of PCFGs: Relation of HMMs to PCFGs
  • Finding Sentence Probabilities for PCFGs
  • Training PCFGs
  • Exercises
  • Part 7 Learning Probabilistic Grammars: Why the Simple Approach Fails
  • Learning Dependency Grammars
  • Learning from a Bracketed Corpus
  • Improving a Partial Grammar
  • Exercises
  • Part 8 Syntactic Disambiguation: Simple Methods for Prepositional Phrases
  • Using Semantic Information
  • Relative-Clause Attachment
  • Uniform Use of Lexical/Semantic Information
  • Exercises
  • Part 9 Word Classes and Meaning: Clustering
  • Clustering by Next Word
  • Clustering with Syntactic Information
  • Problems with Word Clustering
  • Exercises
  • Part 10 Word Senses and Their Disambiguation: Word Senses Using Outside Information
  • Word Senses Without Outside Information
  • Meanings and Selectional Restrictions
  • Discussion
  • Exercises

Source: "Nielsen BookData"
