Measures of interobserver agreement
Author
Bibliographic Information
Measures of interobserver agreement
Chapman & Hall/CRC, c2004
University library holdings: 7 libraries
Notes
Includes bibliographical references (p. 143-148) and index
Description and Table of Contents
Description
Agreement among two or more evaluators is an issue of prime importance to statisticians, clinicians, epidemiologists, psychologists, and many other scientists. Measuring interobserver agreement is a method used to evaluate inconsistencies in findings from different evaluators who collect the same or similar information. Highlighting applications over theory, Measures of Interobserver Agreement provides a comprehensive survey of the method and offers standards and guidance for running sound reliability and agreement studies in clinical settings and other types of investigations.
The author clearly explains how to reduce measurement error, presents numerous practical examples of the interobserver agreement approach, and emphasizes measures of agreement among raters for categorical assessments. The models and methods are considered in two different but closely related contexts: 1) assessing agreement among several raters when the response variable is continuous, and 2) assessing agreement when the investigators have decided a priori to use categorical scales to judge the subjects enrolled in the study. While the author thoroughly discusses the practical and theoretical issues of case 1, a major portion of the book is devoted to case 2. He explores issues such as two raters randomly judging a group of subjects, interrater bias and its connection to marginal homogeneity, and statistical issues in determining sample size.
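For readers who want a concrete feel for the chance-corrected agreement measures emphasized in case 2, the following minimal Python sketch computes Cohen's kappa for two raters classifying the same subjects on a categorical scale. It is only an illustration of the general idea, not material from the book (which supplies SAS code); the function name and the ratings used below are hypothetical.

```python
# Minimal sketch of Cohen's kappa, the chance-corrected agreement measure
# for two raters on a categorical scale. Hypothetical data, not from the book.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the raters' marginals."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of subjects on which the two raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from the two marginal distributions.
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((marg_a[c] / n) * (marg_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of 10 subjects on a three-point categorical scale.
a = ["mild", "mild", "severe", "none", "mild", "severe", "none", "none", "mild", "severe"]
b = ["mild", "none", "severe", "none", "mild", "severe", "none", "mild", "mild", "severe"]
print(f"kappa = {cohens_kappa(a, b):.3f}")  # kappa = 0.697 for these data
```

For case 1, where the response is continuous, agreement is typically summarized with an intraclass correlation (reliability) coefficient rather than kappa, as covered in the chapter on reliability for continuous scale measurements.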
Statistical analyses of real and hypothetical datasets are presented to demonstrate the various applications of the models in repeatability and validation studies. To help with problem solving, the monograph includes SAS code, both within the book and on the CRC Web site. The author presents the material with the right amount of mathematical detail, making this a cohesive book that reflects new research and the latest developments in the field.
Table of Contents
INTRODUCTION
RELIABILITY FOR CONTINUOUS SCALE MEASUREMENTS
Model for Reliability Studies
Inference Procedures on the Index of Reliability for Case (1)
Analysis of Method-Comparison Studies
Comparing Reliability Coefficients
MEASURES OF 2x2 ASSOCIATION AND AGREEMENT OF CROSS CLASSIFIED DATA
Introduction
Indices of Adjusted Agreement
Cohen's Kappa: Chance-Corrected Measure of Agreement
Intraclass Kappa
The 2x2 Kappa in the Context of Association
Stratified Kappa
Conceptual Issues
COEFFICIENTS OF AGREEMENT FOR MULTIPLE RATERS AND MULTIPLE CATEGORIES
Introduction
Multiple Categories and Two Raters
Agreement for Multiple Raters and Dichotomous Classification
Multiple Raters and Multiple Categories
Testing the Homogeneity of Kappa Statistic from Independent Studies
ASSESSING AGREEMENT FROM DEPENDENT DATA
Introduction
Dependent Dichotomous Assessments
Adjusting for Covariates
Likelihood Based Approach
Estimating Equations Approach
Loglinear and Association Models
Appendix I: Joint probability distribution of repeated dichotomous assessments
Appendix II: Correlation between estimated kappas
SAMPLE SIZE REQUIREMENTS FOR THE DESIGN OF A RELIABILITY STUDY
Introduction
The Case of Continuous Measurements
The Non-Normal Case
Cost Implications
The Case of Dichotomous Assessments
Bibliography
From "Nielsen BookData"