I often work with juvenile justice programs and their staff, advising them on research and evaluation issues. I recently learned that people need to be reminded that using pre-/post-outcome comparisons to judge the effectiveness of a program can be misleading.
In a recent meeting I attended, a program director was defending the effectiveness of his agency's intervention approach. He presented what he believed were solid measures of impact, starting with the rate of offending among his program's clients prior to intake (in average arrests per year).
Then he told us that the number was cut in half during the first year after a youth completed the program. According to him, this proved that the program was effective.
For emphasis, he added, “With such good before-and-after data, we don't need any more evidence to know that we’re effective.”
Eeek, I thought to myself.
He clearly didn't realize that his assertion of effectiveness was risky and possibly flawed.
Many people believe that agencies can assess their effectiveness entirely with pre/post comparisons of youth outcomes, such as recidivism or drug use before and after treatment.
Apparently, they do not know about the statistical bias built into that sort of comparison: regression to the mean. Because youth are typically referred to a program when their offending is at a peak, their later numbers tend to drift back down whether or not the program did anything.
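To see why a before-and-after drop is not, by itself, proof of effectiveness, here is a minimal simulation sketch (hypothetical numbers, not data from any real program). It generates youth whose underlying arrest rates never change, enrolls only those with high arrest counts in the prior year, and still shows a sizable drop afterward, purely from regression to the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: each youth has a stable underlying arrest rate,
# and observed yearly arrest counts vary randomly around that rate.
n_youth = 10_000
true_rate = rng.gamma(shape=2.0, scale=1.0, size=n_youth)  # average arrests per year

arrests_before = rng.poisson(true_rate)  # year before intake
arrests_after = rng.poisson(true_rate)   # year after the program, with NO treatment effect

# Programs tend to enroll youth who were arrested a lot recently.
enrolled = arrests_before >= 3

print("Pre-program mean arrests: ", arrests_before[enrolled].mean())
print("Post-program mean arrests:", arrests_after[enrolled].mean())
# The post-program mean is noticeably lower even though nothing changed,
# because selecting on a high "before" value guarantees some regression
# back toward each youth's ordinary rate.
```

The drop appears without any intervention at all, which is exactly why a pre/post comparison alone cannot tell us whether a program works.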