Reliable reasoning : induction and statistical learning theory

Bibliographic Information

Reliable reasoning : induction and statistical learning theory

Gilbert Harman and Sanjeev Kulkarni

(The Jean Nicod lectures / François Recanati, editor)

MIT Press, c2007

  • : [pbk]

Held by 9 university libraries

Notes

"A Bradford book"

Includes bibliographical references (p. [99]-104) and index

Description and Table of Contents

Volume

ISBN 9780262083607

Description

In Reliable Reasoning, Gilbert Harman and Sanjeev Kulkarni -- a philosopher and an engineer -- argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors -- a central topic in SLT. After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an admirably clear account of the basic framework of SLT and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis (VC) dimension of a set of hypotheses and distinguish two kinds of inductive reasoning. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and suggest possible new models of human reasoning prompted by developments in SLT.
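
Since the blurb ties reliability to an inductive method's statistically expected percentage of errors, a minimal sketch may help make that measure concrete. The Python below is an illustration assumed here, not taken from the book: it estimates the expected error rate of a simple 1-nearest-neighbor classifier by averaging its held-out error over repeated random samples. The function names and the synthetic data distribution are hypothetical.

```python
# Minimal sketch (not from the book): estimating the "expected percentage of
# errors" of an inductive method, here a 1-nearest-neighbor classifier, by
# averaging its error rate on held-out data over many random samples.
import math
import random

def one_nn_predict(train, x):
    """Predict the label of x as the label of its nearest training point."""
    nearest = min(train, key=lambda pt: math.dist(pt[0], x))
    return nearest[1]

def sample_point():
    """Draw a labeled 2-D point: label 1 iff y > x, flipped with 5% noise."""
    x = (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0))
    label = 1 if x[1] > x[0] else 0
    if random.random() < 0.05:
        label = 1 - label
    return (x, label)

def estimate_error(n_train=50, n_test=200, n_trials=100):
    """Average the held-out error rate over repeated random draws of the data."""
    total = 0.0
    for _ in range(n_trials):
        train = [sample_point() for _ in range(n_train)]
        test = [sample_point() for _ in range(n_test)]
        errors = sum(1 for x, y in test if one_nn_predict(train, x) != y)
        total += errors / n_test
    return total / n_trials

if __name__ == "__main__":
    random.seed(0)
    print(f"Estimated expected error rate: {estimate_error():.3f}")
```

In SLT terms, a lower average held-out error on this kind of repeated sampling corresponds to a more reliable inductive method for the given data distribution.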
Volume

: [pbk] ISBN 9780262517348

Description

The implications for philosophy and cognitive science of developments in statistical learning theory.

From "Nielsen BookData"
