Computational methods of feature selection
Author(s)
Bibliographic Information
Computational methods of feature selection
(Chapman & Hall/CRC data mining and knowledge discovery series)
Chapman & Hall/CRC, c2008
Held by 10 university libraries
Notes
Includes bibliographical references and index
Description and Table of Contents
Description
Due to increasing demands for dimensionality reduction, research on feature selection has deeply and widely expanded into many fields, including computational statistics, pattern recognition, machine learning, data mining, and knowledge discovery. Highlighting current research issues, Computational Methods of Feature Selection introduces the basic concepts and principles, state-of-the-art algorithms, and novel applications of this tool.
The book begins by exploring unsupervised, randomized, and causal feature selection. It then reports on some recent results of empowering feature selection, including active feature selection, decision-border estimate, the use of ensembles with independent probes, and incremental feature selection. This is followed by discussions of weighting and local methods, such as the ReliefF family, k-means clustering, local feature relevance, and a new interpretation of Relief. The book subsequently covers text classification, a new feature selection score, and both constraint-guided and aggressive feature selection. The final section examines applications of feature selection in bioinformatics, including feature construction as well as redundancy-, ensemble-, and penalty-based feature selection.
Through a clear, concise, and coherent presentation of topics, this volume systematically covers the key concepts, underlying principles, and inventive applications of feature selection, illustrating how this powerful tool can efficiently harness massive, high-dimensional data and turn it into valuable, reliable information.
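The blurb above surveys filter-, weighting-, and ranking-style approaches. As a minimal illustration of the simplest family covered (univariate filter methods that rank features by a per-feature score), here is a hedged sketch, not taken from the book, that ranks features by their absolute Pearson correlation with the label and keeps the top k:

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for a constant (zero-variance) feature."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(X, y, k):
    """Rank the columns of X by |correlation| with y; return indices of the top k."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        col = [row[j] for row in X]
        scores.append((abs(pearson(col, y)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Toy data: column 0 tracks y exactly, column 1 is weakly related, column 2 is constant.
X = [[1, 5, 7], [2, 3, 7], [3, 6, 7], [4, 2, 7]]
y = [1, 2, 3, 4]
print(rank_features(X, y, k=2))  # → [0, 1]
```

This kind of univariate scoring ignores feature interactions and redundancy, which is precisely the limitation that the multivariate, ensemble-, and redundancy-based methods in the later chapters address.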
Table of Contents
Preface
Less Is More
Unsupervised Feature Selection
Randomized Feature Selection
Causal Feature Selection
Active Learning of Feature Relevance
A Study of Feature Extraction Techniques Based on Decision Border Estimate
Ensemble-Based Variable Selection Using Independent Probes
Efficient Incremental-Ranked Feature Selection in Massive Data
Non-Myopic Feature Quality Evaluation with (R)ReliefF
Weighting Method for Feature Selection in k-Means
Local Feature Selection for Classification
Feature Weighting through Local Learning
Feature Selection for Text Classification
A Bayesian Feature Selection Score Based on Naive Bayes Models
Pairwise Constraints-Guided Dimensionality Reduction
Aggressive Feature Selection by Feature Ranking
Feature Selection for Genomic Data Analysis
A Feature Generation Algorithm with Applications to Biological Sequence Classification
An Ensemble Method for Identifying Robust Features for Biomarker Discovery
Model Building and Feature Selection with Genomic Data
Index
From "Nielsen BookData"