Handbook of economic field experiments / edited by Abhijit Vinayak Banerjee, Esther Duflo.

Format
Book
Language
English
Published/​Created
  • Amsterdam, Netherlands : North-Holland, 2017.
  • ©2017
Description
1 online resource (655 pages)

Details

Subseries of
Handbook of Economic Field Experiments
Summary note
Handbook of Field Experiments provides guidance on how to conduct experimental research, along with a comprehensive catalog of new research results and of areas that remain to be explored. This addition to the series includes entire chapters on field experiments, the politics and practice of social experiments, the methodology and practice of RCTs, and the econometrics of randomized experiments. These topics apply to a wide variety of fields, from politics to education to firm productivity, giving readers a resource that sheds light on timely issues such as robustness and external validity. Setting itself apart from the circumscribed debates of specialists, this volume surpasses in usefulness the many journal articles and narrowly defined books written by practitioners.
  • Balances methodological insights with analyses of principal findings and suggestions for further research
  • Appeals broadly to social scientists seeking to develop expertise in field experiments
  • Strives to be analytically rigorous
  • Written in language that is accessible to graduate students and non-specialist economists
Bibliographic references
Includes bibliographical references and index.
Source of description
Description based on online resource; title from PDF title page (ebrary, viewed April 6, 2017).
Contents
  • Front Cover
  • Handbook of ECONOMIC FIELD EXPERIMENTS: Handbook of Field Experiments
  • Handbook of ECONOMIC FIELD EXPERIMENTS
  • Copyright
  • INTRODUCTION TO THE SERIES
  • CONTENTS
  • CONTRIBUTORS
  • 1 - An Introduction to the "Handbook of Field Experiments"
  • 1. THE IMPACT ON THE WAY WE DO RESEARCH
  • 1.1 A greater focus on identification across the board
  • 1.2 Assessing external validity
  • 1.2.1 Combine existing evaluations and conduct meta-analyses
  • 1.2.2 Use other experiments to understand mechanisms
  • 1.2.3 Multi-site projects
  • 1.2.4 Structured speculation
  • 1.3 Testing theories
  • 1.4 Data collection
  • 1.5 Iterate and build on previous research in the same settings
  • 1.6 Unpacking the interventions
  • 2. THE IMPACT ON THE WAY WE THINK ABOUT THE WORLD
  • 2.1 On the value of better human capital
  • 2.2 On reforming education
  • 2.3 On the design of redistributive programs
  • 2.4 On the design of incentives for public officials
  • 2.5 On access to financial products
  • 2.6 On the demand for insurance and other prophylactic products
  • 2.7 On preferences and preference change
  • 2.8 On the role of the community
  • 2.9 On getting people to vote
  • 3. CONCLUSION
  • REFERENCES
  • I - Some Historical Background
  • 2 - The Politics and Practice of Social Experiments: Seeds of a Revolution
  • 1. WHY FOCUS ON WELFARE?
  • 2. WHY EXPERIMENT?
  • 3. THE STORY
  • 4. MAJOR CHALLENGES
  • 5. DEMONSTRATING FEASIBILITY: THE NATIONAL SUPPORTED WORK DEMONSTRATION
  • 6. SOCIAL EXPERIMENTS REINCARNATED AS A PARTNERSHIP: TESTING FEASIBILITY ANEW BY EVALUATING STATE INITIATIVES
  • 7. USING RANDOMIZED CONTROLLED TRIALS TO TEST FULL-SCALE PROGRAMS: THE FIGHT GOT TOUGHER
  • 8. WHAT WORKS BEST? A MULTIARM TEST OF LABOR FORCE ATTACHMENT VERSUS HUMAN CAPITAL DEVELOPMENT
  • 9. THE MOMENTUM SHIFTS
  • 10. USEFUL AND USED
  • 10.1 The credibility of random assignment, replication, and relevance
  • 10.2 The findings from comprehensive studies
  • 10.3 The timeliness of results
  • 10.4 Forceful, nontechnical, and even-handed communication
  • 11. LESSONS AND CHALLENGES
  • 11.1 A confluence of supportive factors
  • 11.2 The payoff to building an agenda
  • 11.3 The need for realistic expectations
  • 11.4 Maintaining a culture of quality
  • 11.5 The advantage of transparent measures and relatively short treatments
  • 11.6 The payoff to multiple studies and synthesis
  • 11.7 Major challenges remain
  • ACKNOWLEDGMENTS
  • II - Methodology and Practice of RCTs
  • 3 - The Econometrics of Randomized Experiments
  • 1. INTRODUCTION
  • 2. RANDOMIZED EXPERIMENTS AND VALIDITY
  • 2.1 Randomized experiments versus observational studies
  • 2.2 Internal validity
  • 2.3 External validity
  • 2.4 Finite population versus random sample from superpopulation
  • 3. THE POTENTIAL OUTCOME/RUBIN CAUSAL MODEL FRAMEWORK FOR CAUSAL INFERENCE
  • 3.1 Potential outcomes
  • 3.2 A classification of assignment mechanisms
  • 3.2.1 Completely randomized experiments
  • 3.2.2 Stratified randomized experiments
  • 3.2.3 Paired randomized experiments
  • 3.2.4 Clustered randomized experiments
  • 4. THE ANALYSIS OF COMPLETELY RANDOMIZED EXPERIMENTS
  • 4.1 Exact p-values for sharp null hypotheses
  • 4.2 Randomization inference for average treatment effects
  • 4.3 Quantile treatment effects
  • 4.4 Covariates in completely randomized experiments
  • 5. RANDOMIZATION INFERENCE AND REGRESSION ESTIMATORS
  • 5.1 Regression estimators for average treatment effects
  • 5.2 Regression estimators with additional covariates
  • 6. THE ANALYSIS OF STRATIFIED AND PAIRED RANDOMIZED EXPERIMENTS
  • 6.1 Stratified randomized experiments: analysis
  • 6.2 Paired randomized experiments: analysis
  • 7. THE DESIGN OF RANDOMIZED EXPERIMENTS AND THE BENEFITS OF STRATIFICATION
  • 7.1 Power calculations
  • 7.2 Stratified randomized experiments: benefits
  • 7.3 Rerandomization
  • 8. THE ANALYSIS OF CLUSTERED RANDOMIZED EXPERIMENTS
  • 8.1 The choice of estimand in clustered randomized experiments
  • 8.2 Point estimation in clustered randomized experiments
  • 8.3 Clustered sampling and completely randomized experiments
  • 9. NONCOMPLIANCE IN RANDOMIZED EXPERIMENTS
  • 9.1 Intention-to-treat analyses
  • 9.2 Local average treatment effects
  • 9.3 Generalizing the local average treatment effect
  • 9.4 Bounds
  • 9.5 As-treated and per protocol analyses
  • 10. HETEROGENEOUS TREATMENT EFFECTS AND PRETREATMENT VARIABLES
  • 10.1 Randomized experiments with pretreatment variables
  • 10.2 Testing for treatment effect heterogeneity
  • 10.3 Estimating the treatment effect heterogeneity
  • 10.3.1 Data-driven subgroup analysis: recursive partitioning for treatment effects
  • 10.3.2 Nonparametric estimation of treatment effect heterogeneity
  • 10.3.3 Treatment effect heterogeneity using regularized regression
  • 10.3.4 Comparison of methods
  • 10.3.5 Relationship to optimal policy estimation
  • 11. EXPERIMENTS IN SETTINGS WITH INTERACTIONS
  • 11.1 Empirical work on interactions
  • 11.2 The analysis of randomized experiments with interactions in subpopulations
  • 11.3 The analysis of randomized experiments with interactions in networks
  • 12. CONCLUSION
  • 4 - Decision Theoretic Approaches to Experiment Design and External Validity
  • 1.1 Motivation
  • 1.2 Overview
  • 1.3 A brief history
  • 2. THE FRAMEWORK
  • 3. PERSPECTIVES ON EXPERIMENTAL DESIGN
  • 3.1 Bayesian experimentation
  • 3.1.1 Example: the logic of Bayesian experimentation
  • 3.2 Ambiguity or an audience
  • 3.2.1 A theory of experimenters
  • 4. RERANDOMIZATION, REGISTRATION, AND PREANALYSIS
  • 4.1 Rerandomization
  • 4.2 Registration
  • 4.2.1 Good commitment
  • 4.2.2 Bad commitment
  • 4.2.3 Examples
  • 4.3 Preanalysis plans
  • 4.3.1 Preanalysis and bounded rationality
  • 4.3.2 Caveats
  • 4.3.3 Theory
  • 5. EXTERNAL VALIDITY
  • 6. STRUCTURED SPECULATION
  • 6.1 The value of structured speculation
  • 6.2 Examples
  • 6.2.1 A post hoc evaluation
  • 7. ISSUES OF PARTICULAR INTEREST
  • 7.1 Scalability
  • 7.2 Effect on other populations
  • 7.3 Same population, different circumstances
  • 7.4 Formats for structured speculation
  • 8. CONCLUSION
  • 5 - The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency
  • 1. COLLABORATION BETWEEN RESEARCHERS AND IMPLEMENTERS
  • 1.1 Developing a good researcher-implementer partnership
  • 1.2 What makes a good implementing partner?
  • 1.3 What can a researcher do to foster a good partnership with an implementing organization?
  • 1.4 Special considerations when partnering with governments
  • 1.5 Self-implementation
  • 2. PREPARING FOR PRACTICAL PITFALLS IN FIELD EXPERIMENTS
  • 2.1 Noncompliance
  • 2.2 Attrition
  • 2.3 Poor data quality
  • 2.4 Avoiding systematic differences in data collection between treatment and comparison
  • 3. ETHICS
  • 3.1 Institutional review boards
  • 3.2 When is ethical review required?
  • 3.3 Practical issues in complying with respect-for-human-subjects requirements
  • 3.4 The ethics of implementation
  • 3.5 Potential harm from different forms of randomization
  • 4. TRANSPARENCY OF RESEARCH
  • 4.1 The statistics of data mining, multiple hypothesis testing and publication bias
  • 4.2 Publication bias
  • 4.3 Current moves to address publication bias
  • 4.4 Data mining and correcting for multiple hypothesis testing
  • 4.5 Preanalysis plans
  • 4.6 Evidence on the magnitude of the problem
  • 4.7 Incentives for replication and transparency
  • 5. CONCLUSION
  • 6 - The Psychology of Construal in the Design of Field Experiments
  • 1.1 Principle of construal
  • 2. PILOT: SEEK SHARED CONSTRUAL OF BEHAVIOR AND THE SITUATION BETWEEN INVESTIGATORS AND PARTICIPANTS
  • 3. DESIGN: ENSURE THE INTERVENTION DESIGN, MEASUREMENT, AND DEPLOYMENT ACHIEVE SHARED CONSTRUAL BETWEEN INVESTIGATORS AND PARTICIPANTS
  • 3.1 Intervention design and deployment
  • 3.2 Measurement of outcomes and processes
  • 3.3 Investigator presence
  • 4. INTERPRET: HOW DO INVESTIGATORS CONSTRUE WHAT MATTERS IN THE DATA?
  • 4.1 Replicating experiments
  • 4.2 Institutionalizing and scaling up experimental results
  • 5. CONCLUDING THOUGHTS
  • 7 - Field Experiments in Markets
  • 2. PREAMBLE
  • 2.1 Defining markets
  • 2.2 Studies covered by the literature review
  • 2.3 Classifying the field experiments in markets
  • 2.4 What are the advantages and disadvantages of field experiments?
  • 3. MAIN RESULTS
  • 3.1 Conventional commodity markets
  • 3.2 Financial markets
  • 3.3 Single auctions
  • 3.4 Behavioral anomalies
  • 3.5 Experience and behavioral anomalies
  • 4. METHODOLOGICAL INSIGHTS
  • 5. CLOSING REMARKS
  • III - Understanding Preferences and Preference Change
  • 8 - Field Experiments on Discrimination
  • 2. MEASURING DISCRIMINATION IN THE FIELD
  • 2.1 Audit studies
  • 2.1.1 Limitations of audit studies
  • 2.2 Correspondence studies
  • 2.2.1 Correspondence studies in the labor market
  • 2.2.1.1 Race and ethnicity
ISBN
  • 9780444633248
  • 0444633243