Program evaluation : alternative approaches and practical guidelines
Authors
Jody L. Fitzpatrick, James R. Sanders, Blaine R. Worthen
Bibliographic Information
Program evaluation : alternative approaches and practical guidelines
Allyn and Bacon, c2011
4th ed
Held by 3 university libraries
Notes
Includes bibliographical references (p. 519-541) and indexes
Description and Table of Contents
Description
A highly esteemed and comprehensive overview of program evaluation that covers common approaches, models, and methods.
As schools and other organizations increase their demand for information on program effectiveness and outcomes, it has become even more important for students to understand the prevalent approaches and models for evaluation, including objectives-based and logic-model approaches, participant-oriented approaches, and decision-oriented approaches. The new fourth edition of Program Evaluation not only covers these vital approaches but also teaches readers how best to mix and match elements of different approaches to conduct optimal evaluation studies for individual programs.
Helping both students and professionals who are new to the field, this text provides practical guidelines for conducting evaluations, from identifying the questions that the evaluation should address, to determining how to collect and analyze evaluative information, to deciding how to report evaluative information to others. Making extensive use of checklists, examples, and other study aids, Program Evaluation teaches students how to determine the central purpose of their evaluation effectively, thus making their evaluations more valid, more useful, and more efficient.
The revised edition of the text includes new approaches to program evaluation, an expanded discussion of logic models, added information on mixed methods, and, as always, updated coverage of the most current trends and controversial issues in evaluation.
Table of Contents
Preface
PART I Introduction to Evaluation
Chapter 1 Evaluation's Basic Purposes, Uses, and Conceptual Distinctions
Informal versus Formal Evaluation
A Brief Definition of Evaluation and Other Key Terms
Differences in Evaluation and Research
The Purposes of Evaluation
Roles and Activities of Professional Evaluators
Uses and Objects of Evaluation
Some Basic Types of Evaluation
Formative and Summative Evaluation
Needs Assessment, Process, and Outcome Evaluations
Internal and External Evaluations
Evaluation's Importance and Its Limitations
Chapter 2 Origins and Current Trends in Modern Program Evaluation
The History and Influence of Evaluation in Society
Early Forms of Formal Evaluation
Program Evaluation: 1800-1940
Program Evaluation: 1940-1964
The Emergence of Modern Program Evaluation: 1964-1972
Evaluation Becomes a Profession: 1973-1989
1990 to the Present: History and Current Trends
Spread of Evaluation to Other Countries
Non-Evaluators Take On Internal Evaluation Responsibilities
A Focus on Measuring Outcomes and Impact
Considering Organizational Learning and Evaluation's Larger Potential Impacts
Chapter 3 Political, Interpersonal, and Ethical Issues in Evaluation
Evaluation and Its Political Context
How Is Evaluation Political?
Suggestions for Working within the Political Environment
Establishing and Maintaining Good Communications
Maintaining Ethical Standards: Considerations, Issues, and Responsibilities for Evaluators
What Kinds of Ethical Problems Do Evaluators Encounter?
Ethical Standards in Evaluation
Protections to Human Subjects and the Role of Institutional Review Boards
Reflecting on Sources of Bias and Conflicts of Interest
Ethics beyond a Code of Ethics
PART II Alternative Approaches to Program Evaluation
Chapter 4 Alternative Views of Evaluation
Diverse Conceptions of Program Evaluation
Origins of Alternative Views of Evaluation
Philosophical and Ideological Differences
Methodological Backgrounds and Preferences
Classifications of Evaluation Theories or Approaches
Existing Categories and Critiques
A Classification Schema for Evaluation Approaches
Chapter 5 First Approaches: Expertise and Consumer-Oriented Approaches
The Expertise-Oriented Approach
Developers of the Expertise-Oriented Evaluation Approach and Their Contributions
Formal Professional Review Systems: Accreditation
Informal Review Systems
Ad Hoc Panel Reviews
Ad Hoc Individual Reviews
Influences of the Expertise-Oriented Approach: Uses, Strengths, and Limitations
The Consumer-Oriented Evaluation Approach
The Developer of the Consumer-Oriented Evaluation Approach
Applying the Consumer-Oriented Approach
Other Applications of the Consumer-Oriented Approach
Influences of the Consumer-Oriented Approach: Uses, Strengths, and Limitations
Chapter 6 Program-Oriented Evaluation Approaches
The Objectives-Oriented Evaluation Approach
The Tylerian Evaluation Approach
Provus's Discrepancy Evaluation Model
A Schema for Generating and Analyzing Objectives: The Evaluation Cube
Logic Models and Theory-Based Evaluation Approaches
Logic Models
Theory-Based or Theory-Driven Evaluation
How Program-Oriented Evaluation Approaches Have Been Used
Strengths and Limitations of Program-Oriented Evaluation Approaches
Goal-Free Evaluation
Chapter 7 Decision-Oriented Evaluation Approaches
Developers of Decision-Oriented Evaluation Approaches and Their Contributions
The Decision-Oriented Approaches
The CIPP Evaluation Model
The UCLA Evaluation Model
Utilization-Focused Evaluation
Evaluability Assessment and Performance Monitoring
How the Decision-Oriented Evaluation Approaches Have Been Used
Strengths and Limitations of Decision-Oriented Evaluation Approaches
Chapter 8 Participant-Oriented Evaluation Approaches
Evolution of Participatory Approaches
Developers of Participant-Oriented Evaluation Approaches and Their Contributions
Robert Stake and His Responsive Approach
Egon Guba and Yvonna Lincoln: Naturalistic and Fourth Generation Evaluation
Participatory Evaluation Today: Two Streams and Many Approaches
Categories of Participatory Approaches
Differences in Current Participatory Approaches
Developmental Evaluation
Empowerment Evaluation
Democratically Oriented Approaches to Evaluation
Looking Back
How Participant-Oriented Evaluation Approaches Have Been Used
Research on Involvement of Stakeholders
Use of Approaches by Developers
Strengths and Limitations of Participant-Oriented Evaluation Approaches
Strengths of Participatory Approaches
Limitations of Participatory Approaches
Chapter 9 Other Current Considerations: Cultural Competence and Capacity Building
The Role of Culture and Context in Evaluation Practice and Developing Cultural Competence
Growing Attention to the Need for Cultural Competence
Why Is Cultural Competence Important?
Evaluation's Roles in Organizations: Evaluation Capacity Building and Mainstreaming Evaluation
Mainstreaming Evaluation
Evaluation Capacity Building
Limitations to Mainstreaming Evaluation and Capacity Building
Chapter 10 A Comparative Analysis of Approaches
A Summary and Comparative Analysis of Evaluation Approaches
Cautions about the Alternative Evaluation Approaches
Evaluation Approaches Are Distinct but May Be Mixed in Practice
"Discipleship" to a Particular Evaluation "Model" Is a Danger
Calls to Abandon Pluralism and Consolidate Evaluation Approaches into One Generic Model Are Still Unwise
The Choice of Evaluation Approach Is Not Empirically Based
Contributions of the Alternative Evaluation Approaches
Comparative Analysis of Characteristics of Alternative Evaluation Approaches
Eclectic Uses of the Alternative Evaluation Approaches
Drawing Practical Implications from the Alternative Evaluation Approaches
PART III Practical Guidelines for Planning Evaluations
Chapter 11 Clarifying the Evaluation Request and Responsibilities
Understanding the Reasons for Initiating the Evaluation
Direct, Informational Uses of Evaluation
Noninformational Uses of Evaluation
Conditions under Which Evaluation Studies Are Inappropriate
Evaluation Would Produce Trivial Information
Evaluation Results Will Not Be Used
Evaluation Cannot Yield Useful, Valid Information
The Type of Evaluation Is Premature for the Stage of the Program
Propriety of Evaluation Is Doubtful
Determining When an Evaluation Is Appropriate: Evaluability Assessment
How Does One Determine Whether a Program Is Evaluable?
Checklist of Steps for Determining When to Conduct an Evaluation
Using an Internal or External Evaluator
Advantages of External Evaluations
Advantages of Internal Evaluations
Advantages of Combining Internal and External Evaluation
Checklist of Steps for Determining Whether to Use an External Evaluator
Hiring an Evaluator
Competencies Needed by Evaluators
Possible Approaches to Hiring an Evaluator
Checklist of Questions to Consider in Selecting an Evaluator
How Different Evaluation Approaches Clarify the Evaluation Request and Responsibilities
Chapter 12 Setting Boundaries and Analyzing the Evaluation Context
Identifying Stakeholders and Intended Audiences for an Evaluation
Identifying Stakeholders to Be Involved in the Evaluation and Future Audiences
Importance of Identifying and Involving Various Stakeholders
Describing What Is to Be Evaluated: Setting the Boundaries
Factors to Consider in Characterizing the Object of the Evaluation
Using Program Theory and Logic Models to Describe the Program
Methods for Describing the Program and Developing Program Theory
Dealing with Different Perceptions
Re-Describing the Program as It Changes
A Sample Description of an Evaluation Object
Analyzing the Resources and Capabilities That Can Be Committed to the Evaluation
Analyzing Financial Resources Needed for the Evaluation
Analyzing Availability and Capability of Evaluation Personnel
Analyzing Technological and Other Resources and Constraints for Evaluations
Analyzing the Political Context for the Evaluation
Variations Caused by the Evaluation Approach Used
Determining Whether to Proceed with the Evaluation
Chapter 13 Identifying and Selecting the Evaluation Questions and Criteria
Identifying Useful Sources for Evaluation Questions: The Divergent Phase
Identifying Questions, Concerns, and Information Needs of Stakeholders
Using Evaluation Approaches as Heuristics
Using Research and Evaluation Work in the Program Field
Using Professional Standards, Checklists, Guidelines, and Criteria Developed or Used Elsewhere
Asking Expert Consultants to Specify Questions or Criteria
Using the Evaluator's Professional Judgment
Summarizing Suggestions from Multiple Sources
Selecting the Questions, Criteria, and Issues to Be Addressed: The Convergent Phase
Who Should Be Involved in the Convergent Phase?
How Should the Convergent Phase Be Carried Out?
Specifying the Evaluation Criteria and Standards
Absolute Standards
Relative Standards
Remaining Flexible during the Evaluation: Allowing New Questions, Criteria, and Standards to Emerge
Chapter 14 Planning How to Conduct the Evaluation
Developing the Evaluation Plan
Selecting Designs for the Evaluation
Identifying Appropriate Sources of Information
Identifying Appropriate Methods for Collecting Information
Determining Appropriate Conditions for Collecting Information: Sampling and Procedures
Determining Appropriate Methods and Techniques for Organizing, Analyzing, and Interpreting Information
Determining Appropriate Ways to Report Evaluation Findings
Work Sheets to Summarize an Evaluation Plan
Specifying How the Evaluation Will Be Conducted: The Management Plan
Estimating and Managing Time for Conducting Evaluation Tasks
Analyzing Personnel Needs and Assignments
Estimating Costs of Evaluation Activities and Developing Evaluation Budgets
Establishing Evaluation Agreements and Contracts
Planning and Conducting the Metaevaluation
The Development of Metaevaluation and Its Use Today
Some General Guidelines for Conducting Metaevaluations
A Need for More Metaevaluation
PART IV Practical Guidelines for Conducting and Using Evaluations
Chapter 15 Collecting Evaluative Information: Design, Sampling, and Cost Choices
Using Mixed Methods
Evaluation Controversies over Methodology
A Definition and Discussion of Mixed Methods
Designs for Collecting Descriptive and Causal Information
Descriptive Designs
Case Studies
Cross-Sectional Designs
Time-Series Designs
Causal Designs
Experimental Designs
Quasi-Experimental Designs
Mixed Method Designs
Sampling
Sample Size
Selecting a Random Sample
Using Purposive Sampling
Cost Analysis
Cost-Benefit Analysis
Cost-Effectiveness Studies
Chapter 16 Collecting Evaluative Information: Data Sources and Methods, Analysis and Interpretation
Common Sources and Methods for Collecting Information
Existing Documents and Records
Identifying Sources and Methods for Original Data Collection: A Process
Observations
Surveys
Interviews
Focus Groups
Tests and Other Methods for Assessing Knowledge and Skill
Planning and Organizing the Collection of Information
Technical Problems in Data Collection
Analysis of Data and Interpretation of Findings
Data Analysis
Interpreting Data
Chapter 17 Reporting Evaluation Results: Maximizing Use and Understanding
Purposes of Evaluation Reporting and Reports
Different Ways of Reporting
Important Factors in Planning Evaluation Reporting
Accuracy, Balance, and Fairness
Tailoring Reports to Their Audience(s)
Timing of Evaluation Reports
Strategies to Communicate and Persuade
Appearance of the Report
Human and Humane Considerations in Reporting Evaluation Findings
Delivering Negative Messages
Key Components of a Written Report
Executive Summary
Introduction to the Report
Focus of the Evaluation
Brief Overview of the Evaluation Plan and Procedures
Presentation of Evaluation Results
Conclusions and Recommendations
Minority Reports or Rejoinders
Appendices
Suggestions for Effective Oral Reporting
A Checklist for Good Evaluation Reports
How Evaluation Information Is Used
Models of Use
Steps to Take to Influence Evaluation Use
Reporting and Influence
Chapter 18 The Future of Evaluation
The Future of Evaluation
Predictions Concerning the Profession of Evaluation
A Vision for Evaluation
Conclusion
Appendix A
References
Author Index
Subject Index
Description from "Nielsen BookData"