
[Guest Post] The Importance of Testing Treatments for Mental Illness Before we Sell Them

by Kimberly on March 13, 2014

I read an interesting exchange on a Facebook feed recently between the author of this guest post, clinical psychologist and researcher Dr. Michael Anestis, and some of my music therapy friends and colleagues about the perceived state of music therapy research. The exchange, though unresolved, was enticing enough that I reached out to Michael and invited him to share his thoughts and perceptions on music therapy research, which, as you will read, focus primarily on his area of expertise: mental health.

Next week I will share my thoughts and responses to Michael’s post. In the meantime, Reader, feel free to comment below if you feel you can contribute to the conversation. Thank you.

Last week, I had a really interesting back-and-forth on the Facebook feed for Ben Folds.  Ben had posted a link to a white paper recently released regarding music therapy and noted its “indisputable evidence on effectiveness.”  This posting caught my attention and I weighed in, which prompted some interesting responses and some understandable concerns.  Before I get into the nature and rationale of my response, let me explain a little bit about where I’m coming from on all of this.

I am primarily a suicide researcher.  I’m an Assistant Professor in the Department of Psychology at the University of Southern Mississippi and the director of the Suicide and Emotion Dysregulation lab.  When I was still a graduate student, however, I started a blog called Psychotherapy Brown Bag.  The driving force behind PBB was to take esoteric psychology studies, translate them into layman’s terms, and disseminate those findings to a broad audience so that the gap between science and the general public could shrink a bit.  Much of the work on that site is devoted to describing research on treatments for mental illness and, over the years, I have developed a passion for discussing the evidence base for specific treatments.  Indeed, I now teach our annual seminar course on empirically supported treatments for adults for the clinical doctoral students at USM.  All this being said, I’m coming at this from the perspective of somebody who cares immensely about the effective treatment of mentally ill individuals, who believes firmly in the importance of science guiding health care, and who spends a bizarrely large amount of time reading and evaluating treatment research.

Back to the issue at hand.  When I read Ben’s post, I was concerned because, although I appreciate his passion for people to receive help, I know for a fact that the evidence base for music therapy as a treatment (stand-alone or adjunctive) for any mental illness is limited.  Unfortunately, because most treatment research is never discussed in the media, most consumers of health care are unaware of the data supporting (or not supporting) certain treatments for certain conditions.  On top of that, most clinicians do not provide empirically supported treatments.  As such, consumers are often not equipped to know which treatments have the best support, clinicians are not selling those approaches, and the potential for messages from influential folks (like Ben) that deviate from the actual evidence to cause harm increases.

Here’s what I mean when I discuss whether or not a treatment has a strong evidence base:

  • Have there been multiple large trials in which individuals diagnosed with a specific condition (using reliable diagnostic methods) who received Treatment A were compared to individuals who did not receive Treatment A?
  • In those trials, did the comparison group receive no treatment, “treatment as usual,” or Treatment B (a different treatment with evidence supporting its use in that context)?
    • If the comparison group was a waitlist, we only know that Treatment A is “better than nothing”
    • If the comparison group was “treatment as usual,” we only know that Treatment A is better than standard care in some setting, which is almost always devoid of scientific evidence
    • If the comparison group was Treatment B (with a strong evidence base), this is more useful, as it provides evidence that Treatment A offers something incrementally useful
  • Were participants randomized to treatment?  Random assignment ensures that individuals are not deliberately placed in treatments in such a way that folks getting one treatment are systematically different from folks getting the other in a way that would influence results (e.g., more severe patients in one condition versus the other).  Now, randomization doesn’t always succeed in actually distributing all relevant variables evenly, but that is something that can be tested, reported, and controlled for statistically.
  • In those trials, did the people making the assessments know which treatment the participants received?  If so, that could influence results (e.g., giving better scores to people in the favored treatment), so I want raters to be blind to treatment condition.
  • Did the clinicians have adequate training in the treatment?
  • Did the clinicians receive supervision and were they monitored for fidelity to the treatment protocol?
  • Did the clinicians use a treatment manual?
  • What measures did they use to assess outcomes and did they selectively report statistically significant results while overlooking important non-significant results?
  • What were the effect sizes (i.e., how BIG was the effect)?  A small worked example appears after this list.
  • Did the researchers use intent-to-treat analyses to account for people who drop out of treatment, or did they only consider people who completed the full trial?  (Remember, folks doing poorly are often more likely to drop out, so excluding them can skew the findings; see the simulation sketch after this list.)
  • Did the trial provide long-term follow-up information?  If not, we have no idea if the effects are enduring (and indeed, some treatments begin to show differential impact relative to other treatments AFTER treatment is over).
  • Did the authors report a specific mechanism or mechanisms they believe explain the impact of treatment, and did they test whether changes in that variable (or variables) explained their results?
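
Since effect sizes come up in the list above, here is a minimal sketch of how a standardized effect size such as Cohen's d is computed from two groups' outcome scores. This is my own illustration with made-up numbers, not an analysis from any actual trial.

```python
# Hypothetical post-treatment symptom scores (lower = better); purely illustrative.
import statistics

treatment_a = [12, 9, 14, 10, 8, 11, 13, 9]
comparison  = [16, 14, 17, 15, 13, 18, 14, 16]

mean_a, mean_c = statistics.mean(treatment_a), statistics.mean(comparison)
sd_a, sd_c = statistics.stdev(treatment_a), statistics.stdev(comparison)
n_a, n_c = len(treatment_a), len(comparison)

# Pooled standard deviation, then Cohen's d: the difference between group means
# expressed in standard-deviation units ("how BIG was the effect").
pooled_sd = (((n_a - 1) * sd_a**2 + (n_c - 1) * sd_c**2) / (n_a + n_c - 2)) ** 0.5
cohens_d = (mean_c - mean_a) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")  # ~0.2 small, ~0.5 medium, ~0.8 large, by convention
```

The point is that a statistically significant result alone does not tell you whether a treatment's benefit is large enough to matter; the effect size does.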
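
The intent-to-treat point can also be made concrete with a tiny simulation. The sketch below is hypothetical and mine, not the author's: it randomly assigns participants to two groups, gives the treatment no real effect, lets the most severe patients drop out, and shows how a completers-only analysis makes the treatment look helpful anyway.

```python
import random
import statistics

random.seed(0)

# Hypothetical baseline severity scores for 200 participants (higher = worse).
participants = [random.gauss(20, 5) for _ in range(200)]

# Random assignment to Treatment A or a control condition.
random.shuffle(participants)
group_a, control = participants[:100], participants[100:]

# Assume the treatment does nothing, but the most severe patients in Treatment A
# tend to drop out before the final assessment.
completers_a = [s for s in group_a if not (s > 25 and random.random() < 0.7)]

print("Treatment A, intent-to-treat mean:", round(statistics.mean(group_a), 1))
print("Treatment A, completers-only mean:", round(statistics.mean(completers_a), 1))
print("Control group mean:               ", round(statistics.mean(control), 1))
# The completers-only mean looks "healthier" even though no one improved,
# which is why trials should analyze everyone who was randomized.
```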

These are a lot of questions that might seem far removed from the emotional idea of helping somebody in need.  Believe it or not, I’ve left a lot of important ones out here.  The thing is, as mechanical and restrictive as some of these ideas might seem, they are absolutely pivotal in determining whether what we think is working is actually doing the job and how its performance compares to other treatments for whatever illness is being examined.  For newer treatments, building up an evidence base like this is difficult.  It takes time, money, training, coordination of large groups of people, and immense amounts of energy.  I can completely understand the impulse to skip those steps, follow intuition, and run with a treatment that you feel is likely to help.  The problem with that is, time and again, wonderful sounding and intuitively appealing treatments have been shown to not work or—worse yet—cause harm.

I don’t think music therapy causes direct harm.  I’m skeptical about its ability to treat mental illnesses, but my opinions (and anyone else’s) on that are meaningless.  This is an empirical question, and proponents of the treatment have the same ability to test their chosen treatment as anyone else.  It is simply incumbent upon them to do so.  Selling a treatment before we know if it works creates opportunity costs by directing people away from evidence-based care and towards a treatment of unknown utility.  New treatments often lack the kind of evidence I mentioned above and, for that reason, they should also lag in promotion and implementation.  Music therapy may one day have this type of evidence base for at least some mental illnesses.  If it does, I’ll champion it as loudly (or melodically, given the nature of the treatment) as anyone else.  Until then, I simply remain cautious.

This is already getting long, but let me leave you with a few links that explain things far better than I ever could.

For a description of the evidence for particular treatments for specific mental illnesses visit: http://www.psychologicaltreatments.org/

For a description of the amazing work of Paul Meehl, who for so long clarified the relative value of data versus intuition, visit: http://www.psych.umn.edu/faculty/grove/112clinicalversusstatisticalprediction.pdf

To read a great speech by McFall on why science is so incredibly important in clinical psychology, visit:
http://horan.asu.edu/ced522readings/mcfall/manifesto/manifest.htm

About the Author: Michael Anestis, Ph.D., is an Assistant Professor in the Department of Psychology at the University of Southern Mississippi and the director of the Suicide and Emotion Dysregulation laboratory at USM. Dr. Anestis has published 45 peer-reviewed articles and several book chapters and is the co-founder of Psychotherapy Brown Bag, a blog devoted to the dissemination of information regarding the science of clinical psychology. Dr. Anestis is the principal investigator of a large longitudinal study investigating risk factors for suicidal behavior in the US military, funded by the Military Suicide Research Consortium, and was recently awarded the USM Faculty Senate Junior Faculty Research Award. Dr. Anestis is also a diehard Pittsburgh Pirates fan and is still coming down from the thrill of last year’s Wild Card game.


Mike Anestis March 13, 2014 at 1:26 pm

Thanks for posting this! I’m looking forward to seeing this conversation unfold.
