Education Research: Telling Good Science From Bad

EducationWorld is pleased to present this blog post shared on the EducationWorld Community by Anne O'Brien of Learning First Alliance. In this article, O'Brien reviews helpful advice on examining the credibility of educational research.

Who can you trust about educational technology?

So asked Richard Rose in September’s issue of School Administrator. He argues that research on educational technology should be approached with skepticism for a number of reasons. For example, money plays a big role in this research: Interested parties, including technology providers, nonprofits and even unions, often directly or indirectly benefit from research showing results for a particular product.

There is also a lack of consensus within the research: one can find studies that support almost any position. In addition, bias is introduced by the “publication strainer” (publications prefer research that supports their platform of doctrines) and by author timidity and pragmatism (not wanting to waste time on research that won't be published, busy authors submit what they know publications want).

In reading this piece, it struck me that it could have been written about any aspect of education. While the motivation of technology companies may be different from the motivation of those pushing vouchers, charter schools, alternative certification programs, particular reading programs or any other educational products or policies, it is widely acknowledged that much education research, for the reasons Rose cites and others, is substandard.

But as Rose acknowledges, not all educational technology research (which I would broaden to "not all education research") is “tainted by vested interest or too insipid to bother with.” There is good research out there; you (whether you are a practitioner, parent, policymaker or other) just need to know how to find it.

In a recent article in American Educator based on his new book, Dan Willingham offers a four-step approach for those who haven’t taken years of statistics courses and/or don’t have the time to comprehensively review the research. Use these steps to help determine whether a policy, program or other educational resource is evidence-based and worth adopting.

For each “scientific” claim regarding a new curriculum, program or strategy you are investigating, he recommends that you:

  1. Strip it and flip it.  Examine the claim in its simplest form. Get clear on the change suggested, the outcome promised as a result of the change, and the probability that the promised outcome will occur if you make the change. Willingham suggests filling in the following statement: If I do X, then there is Y percent chance that Z will happen. For example: “If my child uses this reading software an hour each day for five weeks, there is a 50 percent chance she will double her reading speed.” (If you can’t figure out either what you are supposed to do, X, or what is supposed to happen after you do it, Z, there is a serious problem with the claim.)

    Then flip the promised outcome: There is a 50 percent chance she won’t double her reading speed. Is the risk that the program won’t work acceptable? And flip the claim in another way: What happens if I don’t do X? Once you’ve thoroughly examined the claim, you might decide it’s not worth your time to investigate further. (The American Educator article goes into great detail on this particular step of the process.)
  2. Trace it.  Who is making the claim about a product or program? Pay attention to the qualifications and motivations of the person trying to persuade you, but do not rely too heavily on credentials alone. And do not discredit research simply because you tend to disagree with the person conducting it or the institution funding it.
  3. Analyze it.  Consider the claim in the context of your experience, but recognize that experience is not an infallible guide. Then apply some simple guidelines to evaluate research claims. You do not need to get too technical, but consider general principles of good practice. For example, does a study claiming program effectiveness include both a treatment group using the program and a control group?
  4. Decide whether to adopt it.  While a lack of scientific support does not always mean you should avoid a program or position (as Willingham points out, most programs lack such support), decisions should only be made once you have all available, relevant information in front of you.

Willingham admits that this system is imperfect and “not a substitute for a thoughtful evaluation by a knowledgeable scientist.” Still, for those who are charged with making decisions in education, be they about technology, curriculum, governance or anything else, it is a starting point to ensure that education research is neither given too much weight nor ignored completely.

After all, to truly advance education both for our nation and for each child, we have to avoid accepting at face value the easy, politically popular claims that a particular strategy is "evidence-based."

 

Education World®    
Copyright © 2012 Education World
