Deep learning and linguistic representation
Author(s)
Shalom Lappin
Bibliographic Information
Deep learning and linguistic representation
(Chapman & Hall/CRC machine learning & pattern recognition series)
CRC Press, 2021
- : pbk
Available at 2 libraries
Note
Includes bibliographical references and index
Description and Table of Contents
Description
The application of deep learning methods to natural language processing has produced significant progress across a wide range of tasks. For some of these applications, deep learning models now approach or surpass human performance. While this success has transformed the engineering methods of machine learning in artificial intelligence, the significance of these achievements for the modelling of human learning and representation remains unclear.
Deep Learning and Linguistic Representation looks at the application of a variety of deep learning systems to several cognitively interesting NLP tasks. It also considers the extent to which this work illuminates our understanding of the way in which humans acquire and represent linguistic knowledge.
Key Features:
combines an introduction to deep learning in AI and NLP with current research on Deep Neural Networks in computational linguistics.
is self-contained and suitable for teaching in computer science, AI, and cognitive science courses; it does not assume extensive technical training in these areas.
provides a compact guide to work on state-of-the-art systems that are producing a revolution across a range of difficult natural language tasks.
Table of Contents
Chapter 1 Introduction: Deep Learning in Natural Language Processing
1.1 OUTLINE OF THE BOOK
1.2 FROM ENGINEERING TO COGNITIVE SCIENCE
1.3 ELEMENTS OF DEEP LEARNING
1.4 TYPES OF DEEP NEURAL NETWORKS
1.5 AN EXAMPLE APPLICATION
1.6 SUMMARY AND CONCLUSIONS
Chapter 2 Learning Syntactic Structure with Deep Neural Networks
2.1 SUBJECT-VERB AGREEMENT
2.2 ARCHITECTURE AND EXPERIMENTS
2.3 HIERARCHICAL STRUCTURE
2.4 TREE DNNS
2.5 SUMMARY AND CONCLUSIONS
Chapter 3 Machine Learning and the Sentence Acceptability Task
3.1 GRADIENCE IN SENTENCE ACCEPTABILITY
3.2 PREDICTING ACCEPTABILITY WITH MACHINE LEARNING MODELS
3.3 ADDING TAGS AND TREES
3.4 SUMMARY AND CONCLUSIONS
Chapter 4 Predicting Human Acceptability Judgments in Context
4.1 ACCEPTABILITY JUDGMENTS IN CONTEXT
4.2 TWO SETS OF EXPERIMENTS
4.3 THE COMPRESSION EFFECT AND DISCOURSE COHERENCE
4.4 PREDICTING ACCEPTABILITY WITH DIFFERENT DNN MODELS
4.5 SUMMARY AND CONCLUSIONS
Chapter 5 Cognitively Viable Computational Models of Linguistic Knowledge
5.1 HOW USEFUL ARE LINGUISTIC THEORIES FOR NLP APPLICATIONS?
5.2 MACHINE LEARNING MODELS VS FORMAL GRAMMAR
5.3 EXPLAINING LANGUAGE ACQUISITION
5.4 DEEP LEARNING AND DISTRIBUTIONAL SEMANTICS
5.5 SUMMARY AND CONCLUSIONS
Chapter 6 Conclusions and Future Work
6.1 REPRESENTING SYNTACTIC AND SEMANTIC KNOWLEDGE
6.2 DOMAIN SPECIFIC LEARNING BIASES AND LANGUAGE ACQUISITION
6.3 DIRECTIONS FOR FUTURE WORK
REFERENCES
Author Index
Subject Index
by "Nielsen BookData"