Campbell's law
Campbell's law is an adage developed by Donald T. Campbell, a psychologist and social scientist who wrote extensively on research methodology. It states:
The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.[1]
Applications
Campbell's law is related to the cobra effect, in which public policy and other government interventions in economics, commerce, and healthcare produce unintended negative consequences.[2]
Education
In 1976, Campbell wrote: "Achievement tests may well be valuable indicators of general school achievement under conditions of normal teaching aimed at general competence. But when test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways. (Similar biases of course surround the use of objective tests in courses or as entrance examinations.)"[1]
Campbell's law, as a social science principle, is used to point out the negative consequences of high-stakes testing in U.S. classrooms. These may take the form of teaching to the test or outright cheating.[3] Daniel Koretz identifies and analyzes this "High-Stakes Education Rule" in his book Measuring Up: What Educational Testing Really Tells Us.[4]
Campbell's law has been used in criticism of Race to the Top, an Obama administration program, and the No Child Left Behind Act, enacted during the George W. Bush administration.[5]
Similar rules
There are closely related ideas known by different names, such as Goodhart's law and the Lucas critique. Another concept related to Campbell's law emerged in 2006 when UK researchers Rebecca Boden and Debbie Epstein published an analysis of evidence-based policy, a practice espoused by Prime Minister Tony Blair. In the paper, Boden and Epstein described how a government that tries to base its policy on evidence can actually end up producing corrupted data because it "seeks to capture and control the knowledge producing processes to the point where this type of 'research' might best be described as 'policy-based evidence'."[6]
When people distort their decisions in order to improve a performance measure, they often engage in surrogation, coming to believe that the measure reflects true performance better than it actually does.[7]
Campbell's law nevertheless carries a more nuanced message than a blanket rejection of measurement. Measuring progress with both quantitative and qualitative indicators remains important.[8] However, relying on quantitative indicators for evaluation invites their distortion and manipulation, so concrete safeguards must be adopted to limit the alteration and manipulation of information. In his article "Assessing the Impact of Planned Social Change",[9] Campbell emphasized that "the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."
See also
- Perverse incentive – Incentive that has a contrary result
- Reflexivity (social theory) – Circular relationships between cause and effect
- Proxy (statistics) – Variable used in place of another variable
- Goodhart's law – Adage about statistical measures (Strathern variant)
- Observer effect (physics) – Fact that observing a situation changes it
- Lucas critique – 1970s paradigm shift in economic thought, named for American economist Robert Lucas
- The purpose of a system is what it does – Systems thinking heuristic
Notes
1. Campbell, Donald T. (1979). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X.
2. Coy, Peter (2021-03-26). "Goodhart's Law Rules the Modern World. Here Are Nine Examples". Bloomberg.com. Archived from the original on 2021-04-25. Retrieved 2021-06-03.
3. Aviv, Rachel (21 July 2014). "Wrong Answer". The New Yorker.
4. Koretz, Daniel M. (2009). Measuring Up. Harvard University Press. ISBN 978-0-674-03972-8.
5. Porter-Magee, Kathleen. "Trust but verify: The real lessons of Campbell's Law". The Thomas B. Fordham Institute. Retrieved 2018-06-30.
6. Boden, Rebecca; Epstein, Debbie (2006). "Managing the research imagination? Globalisation and research in higher education". Globalisation, Societies and Education. 4 (2): 223–236. doi:10.1080/14767720600752619. S2CID 144077070.
7. Bentley, Jeremiah W. (2017-02-24). "Decreasing Operational Distortion and Surrogation through Narrative Reporting". The Accounting Review. Rochester, New York. SSRN 2924726.
8. "Quantitative & Qualitative Indicators". Monitoring & Evaluation. Retrieved 2018-06-30.
9. Campbell, Donald T. (1979-01-01). "Assessing the impact of planned social change". Evaluation and Program Planning. 2 (1): 67–90. doi:10.1016/0149-7189(79)90048-X. ISSN 0149-7189.