As you contemplate the research and the value of these studies, also consider sample sizes, what defines a standard, and the role of self-deception. Ponder these questions before and as you read.
“We all want explanations for why we behave as we do and for the ways the world around us functions. Even when our feeble explanations have little to do with reality. We’re storytelling creatures by nature, and we tell ourselves story after story until we come up with an explanation that we like and that sounds reasonable enough to believe. And when the story portrays us in a more glowing and positive light, so much the better.”
― Dan Ariely, The Honest Truth About Dishonesty: How We Lie to Everyone–Especially Ourselves
Guesses and Hype Give Way to Data in Study of Education
Originally published in The New York Times, September 2, 2013
What works in science and math education? Until recently, there had been few solid answers — just guesses and hunches, marketing hype and extrapolations from small pilot studies.
But now, a little-known office in the Education Department is starting to get some real data, using a method that has transformed medicine: the randomized clinical trial, in which groups of subjects are randomly assigned to get either an experimental therapy, the standard therapy, a placebo or nothing.
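The assignment logic behind such a trial can be sketched in a few lines. The data below are entirely hypothetical, invented only to illustrate how random assignment lets researchers attribute a difference in outcomes to the treatment rather than to pre-existing differences between groups:

```python
import random
import statistics

# Minimal sketch of the randomized-trial logic described above. All numbers
# here are hypothetical: scores are drawn from a normal distribution, and
# the "treatment" adds an invented true effect of 2 points.
random.seed(0)

subjects = list(range(200))
random.shuffle(subjects)                    # assignment by chance alone
treatment, control = subjects[:100], subjects[100:]

treat_scores = [random.gauss(72, 10) for _ in treatment]  # baseline 70 + effect 2
ctrl_scores = [random.gauss(70, 10) for _ in control]     # baseline 70

# Because assignment was random, the difference in group means is an
# unbiased estimate of the treatment effect.
effect = statistics.mean(treat_scores) - statistics.mean(ctrl_scores)
print(f"estimated treatment effect: {effect:.1f} points")
```

With groups this small the estimate is noisy, which is one reason real education trials need many classrooms, not just many students.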
The findings could be transformative, researchers say. For example, one conclusion from the new research is that the choice of instructional materials — textbooks, curriculum guides, homework, quizzes — can affect achievement as profoundly as teachers themselves; a poor choice of materials is at least as bad as a terrible teacher, and a good choice can help offset a bad teacher’s deficiencies.
So far, the office — the Institute of Education Sciences — has supported 175 randomized studies. Some have already concluded; among the findings are that one popular math textbook was demonstrably superior to three competitors, and that a highly touted computer-aided math-instruction program had no effect on how much students learned.
Other studies are under way. Cognitive psychology researchers, for instance, are assessing an experimental math curriculum in Tampa, Fla.
The institute gives schools the data they need to start using methods that can improve learning. It has a What Works Clearinghouse — something like a mini Food and Drug Administration, but without enforcement power — that rates evidence behind various programs and textbooks, using the same sort of criteria researchers use to assess effectiveness of medical treatments.
Without well-designed trials, such assessments are largely guesswork. “It’s as if the medical profession worried about the administration of hospitals and patient insurance but paid no attention to the treatments that doctors gave their patients,” the institute’s first director, Grover J. Whitehurst, now of the Brookings Institution, wrote in 2012.
But the “what works” approach has another hurdle to clear: Most educators, including principals and superintendents and curriculum supervisors, do not know the data exist, much less what they mean.
A survey by the Office of Management and Budget found that just 42 percent of school districts had heard of the clearinghouse. And there is no equivalent of an F.D.A. to approve programs for marketing, or health insurance companies to refuse to pay for treatments that do not work.
Nor is it clear that data from rigorous studies will translate into the real world. There can be many obstacles, says Anthony Kelly, a professor of educational psychology at George Mason University. Teachers may not follow the program, for example.
“By all means, yes, we should do it,” he said. “But the issue is not to think that one method can answer all questions about education.”
In this regard, other countries are no further along than the United States, researchers say. They report that only Britain has begun to do the sort of randomized trials that are going on here, with the assistance of American researchers.
As Peter Tymms, the director of the International Performance Indicators in Primary Schools center at Durham University in England, wrote in an e-mail: “The wake-up call was a national realization, less than a decade ago,” that all the money spent on education reform “had almost no impact on basic skills.” Suddenly, scholars who had long argued for randomized trials began to be heard.
In the United States, the effort to put some rigor into education research began in 2002, when the Institute of Education Sciences was created and Dr. Whitehurst was appointed the director.
“I found on arriving that the status of education research was poor,” Dr. Whitehurst said. “It was more humanistic and qualitative than crunching numbers and evaluating the impact.
“You could pick up an education journal,” he went on, “and read pieces that reflected on the human condition and that involved interpretations by the authors on what was going on in schools. It was more like the work a historian might do than what a social scientist might do.”
At the time, the Education Department had sponsored only a few randomized trials. One was a study of Upward Bound, a program that was thought to improve achievement among poor children. The study found it had no effect.
So Dr. Whitehurst brought in new people who had been trained in more rigorous fields, and invested in doctoral training programs to nurture a new generation of more scientific education researchers. He faced heated opposition from some people in schools of education, he said, but he prevailed.
The studies are far from easy to do.
“It is an order of magnitude more complicated to do clinical trials in education than in medicine,” said F. Joseph Merlino, president of the 21st Century Partnership for STEM Education, an independent nonprofit organization. “In education, a lot of what is effective depends on your goal and how you measure it.”
Then there is the problem of getting schools to agree to be randomly assigned to use an experimental program or not.
“There is an art to doing it,” Mr. Merlino said. “We don’t usually go and say, ‘Do you want to be part of an experiment?’ We say, ‘This is an important study; we have things to offer you.’ ”
As the Education Department’s efforts got going over the past decade, a pattern became clear, said Robert Boruch, a professor of education and statistics at the University of Pennsylvania. Most programs that had been sold as effective had no good evidence behind them. And when rigorous studies were done, as many as 90 percent of programs that seemed promising in small, unscientific studies had no effect on achievement or actually made achievement scores worse.
For example, Michael Garet, the vice president of the American Institutes for Research, a behavioral and social science research group, led a study that instructed seventh-grade math teachers in a summer institute, helping them understand the math they teach — like why, when dividing fractions, do you invert and multiply?
The teachers’ knowledge of math improved, but student achievement did not.
“The professional development had many features people think it should have — it was sustained over time, it involved opportunities to practice, it involved all the teachers in the school,” Dr. Garet said. “But the results were disappointing.”
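The invert-and-multiply rule those teachers revisited can be checked mechanically. This small sketch (an illustration, not part of the study) uses Python's `fractions` module to confirm that dividing by a fraction gives the same result as multiplying by its reciprocal:

```python
from fractions import Fraction

# Dividing by c/d is the same as multiplying by its reciprocal d/c,
# because (c/d) * (d/c) = 1: multiplying by d/c exactly undoes
# multiplying by c/d.
quotient = Fraction(3, 4) / Fraction(2, 5)   # 3/4 divided by 2/5
inverted = Fraction(3, 4) * Fraction(5, 2)   # 3/4 times 5/2, "invert and multiply"

print(quotient, inverted)  # both are 15/8
```

The point of the summer institute was precisely this kind of why-it-works understanding, rather than rote application of the rule.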
The findings were added to the What Works Clearinghouse.
“There was a joke going around that it was the ‘What Doesn’t Work’ Clearinghouse,” said John Easton, the current director of the Institute of Education Sciences.
Jon Baron, the president of the Coalition for Evidence-Based Policy, a nonprofit, nonpartisan organization, said the clearinghouse “shows why it is important to do rigorous evaluations.”
“Most programs claim to be evidence-based,” he said, but most have no good evidence that they work.
Now, though, with a growing body of evidence on what works, researchers wonder how they can get educators and the public to pay attention.
“It’s fascinating what a secret this is,” said Robert Slavin, director of the Center for Research and Reform in Education at Johns Hopkins University.
“If you talk to your seatmate on an airplane,” he continued, “100 times out of 100 they will not have heard of it. Invariably they will have loads of opinions about what schools should or shouldn’t do, and they are utterly unaware and uninterested in the idea that there is actual evidence.”
Educators often are not much better, Dr. Slavin said. Too often, they are swayed by marketing or anecdotes or the latest fad. And “invariably,” he added, “folks trying to sell a program will say there is evidence behind it,” even though that evidence is far from rigorous.
Dr. Merlino agreed. “A lot of districts go by the herd mentality,” he said, citing the example of a Singapore-based math program now in vogue that has never been rigorously compared with other programs and found to be better. “Personal anecdote trumps data.”
There are solutions, Dr. Slavin said. The federal government or states could require school districts to use programs that work — when sufficient data are available — or forfeit funds. But “there is very little political drive for that to happen,” he said.
Yet he retains a grain of optimism because the Obama administration — as well as the Bush administration, which established the Institute of Education Sciences — says its goal is to enable schools to use programs that have been shown to work.
“Sooner or later,” Dr. Slavin said, “this has to become consequential.”
This article has been revised to reflect the following correction:
Correction: September 3, 2013
An earlier version of this article misstated the number of randomized trials that had been sponsored by the Education Department in 2002, when the Institute of Education Sciences, an office within the department, was created. At the time, there had been a few randomized trials, not “exactly one.”