Episode 137 - VBT Study Hall: Evaluating Research

How do we know which psychotherapy research is trustworthy? Dr. Alex Williams and Dr. John Sakaluk help us search for evidence in all the logical places: the replication crisis, RCTs, qualitative studies, dolphin therapy, Canadian football, researchers fighting Conor McGregor, and of course, EMDR. This episode is brought to you by MR. BEAR (Meta-analysis, Registered, Big sample size, Experiment, Active control group, Replicated).

Thank you for listening. To support the show and receive access to regular bonus episodes, check out the Very Bad Therapy Patreon community. Today’s episode is sponsored by Sentio Counseling Center – high-quality, low-fee online therapy in California with immediate availability for new clients.

Show Notes:

  • Carrie Wiita [00:00:00]:

    Welcome to Very Bad Therapy, a closer look at what goes wrong in the counseling room and how it could go better, as told by the clients who survived. From Los Angeles, I'm Caroline Wiita.

    Ben Fineman [00:00:11]:

    And I'm Ben Fineman. Legally encouraged to say that this podcast does not constitute therapeutic advice, but it will get interesting. Let's get started. Well, Carrie, we're going to try and keep this episode to under 3 hours, because I get the sense that you and I have so many questions about research: how to evaluate research, what is good research, especially in our field. I'm not sure how you feel, but there's a part of me that often wonders, is any of the research in our field good for anything if outcomes potentially haven't improved in quite some time? And we'll get into that as well. How are you feeling about this topic of evaluating research in psychotherapy?

    Carrie Wiita [00:00:56]:

    I'm so excited about it, because I feel like a lot of people had the experience that I did, which is: you get into grad school and you're in your research methods class or whatever. I was an English major in undergrad, so I learned about the scientific method in high school AP Biology. After that, my mom went to grad school for nursing, and she did teach me how to read a study, so I had some basic understanding of that. And then in my research methods class, I assumed that everyone else had been a psych undergrad, so they probably knew all of this and it was all just refresher for them. If I didn't get a research study design, I was too intimidated to ask questions. And then I slowly realized, I think everybody feels that way in grad school. So I think this is one of those opportunities where we get to ask people who really know what they're talking about to explain it to us like we're five.

    Ben Fineman [00:02:06]:

    Yeah. And to that point, I would guess that some people hear our podcast and think that we are among those people who know what they're talking about when it comes to evaluating research. We don't. We read abstracts. We will do a scan of the methodology and probably rule out citing studies that have a sample size of, like, eight. I don't think we put too much weight on predictive validity. That might not even be the right terminology, which would be a giveaway. But we don't read qualitative studies and say, based on this, we can make assumptions about the future, et cetera.

    Carrie Wiita [00:02:44]:

    Well, I think you and I debate that often, because I do feel like I've gotten better at looking at studies and saying, this is an interesting result, but I have to take it with grains of salt because of XYZ. So I don't throw the baby out with the bathwater as much as you do. You're like, replication crisis, so let's not read studies ever. And I'm like, I think that's wrong.

    Ben Fineman [00:03:13]:

    So my background as it pertains to evaluating research: I had no prior experience in academia, or in anything where reading and evaluating the quality of a study was relevant or useful in any way, until I became a therapist. And I just assumed, for better or worse, that the research in our field, and a lot of research in general, was very trustworthy.

    Carrie Wiita [00:03:35]:

    Right. Same.

    Ben Fineman [00:03:37]:

    And this is not to drift into don't-trust-authority perspectives, but even before I started my grad program, somebody recommended to me The Heart and Soul of Change, which we've talked about extensively on this show. A great book that I thought, and still do to a large extent, was very trustworthy in terms of how it evaluates research in our field. The conclusion that the common factors are what matter in therapy, which may be true, may not be true; we can potentially get into that as well. But at some point, I found conflicting research everywhere, and I just assumed I was right and that all the research that disagreed with my conclusions was wrong, because that's how confirmation bias works. But, yeah, you're right, Carrie. At some point, I read about the replication crisis. I learned about researcher biases that are so prevalent in our field. And it gave me, I don't know if I want to say distrust of research on psychotherapy, but a heavy skepticism, because it seems like it's so hard to know which studies are good. You'll even come across meta-analyses that disagree with one another on certain conclusions. And that feels very perplexing in terms of how we can actually read a study and trust that it's worth taking something from, as opposed to a fun anecdote that doesn't really mean anything. And so I think that's what we're doing here: asking people who have the experience and background in evaluating research, how do we evaluate research in our field to get something worthwhile from it?

    Carrie Wiita [00:05:02]:

    Yeah, my big question is: I'm at this point where I have to, at some point, trust the math that they have done and put in the paper, because I can't even begin to read that and be like, oh, I see what you did there, that was some creative interpretation. So that's my question: if I have a very base-level understanding of statistics, I didn't take calculus, I can't comprehend high-level math stuff, if that's where I am, how do I pick up a study and know if I can trust it or not? And that's why I feel like we brought in the big guns today.

    Dr. John Sakaluk [00:05:49]:

    Yeah.

    Ben Fineman [00:05:50]:

    So Dr. Alex Williams, Dr. John Sakaluk. Welcome back to Very Bad Therapy. We are so excited to ask you a million questions about how do we make sense of research in our field. We'd love to have both of you introduce yourselves, your backgrounds and things you've done within the field that are really relevant to this topic of how can we evaluate research in a way that leads to something of value, I guess.

    Dr. Alex Williams [00:06:15]:

    Yeah. Ben, Carrie, thank you so much. It's great to be back on the show. Is it fair for me to say I'm a friend of the show?

    Carrie Wiita [00:06:22]:

    Yes, absolutely.

    Ben Fineman [00:06:24]:

    Maybe we'll see how this episode goes.

    Dr. Alex Williams [00:06:28]:

    So that's like, I can check that off the bucket list right there: friend of the show. So I'm a teaching professor at the University of Kansas, and I do a lot of research on the credibility of evidence for different forms of psychotherapy. My PhD is in clinical psychology, so that's my training, and I am a licensed psychologist. And yeah, I collaborate a lot with John Sakaluk. And so now that the big gun has gotten his introduction out of the way, I can turn things over to the other gun, John Small Revolver.

    Dr. John Sakaluk [00:07:02]:

    John Sakaluk here. And maybe I can say that I'm a frenemy of the show, if Alex is a friend. I'm an assistant professor at Western University in London, Ontario. Not the cool London; London, Ontario, Canada, just a couple of hours north of Detroit. Alex and I have this connection where we met in grad school. I started off in a clinical psychology program, but then I very quickly switched to social psychology, so that's the area I work in now. But I'm one of these stats people. So in the context of this conversation: I teach research methods at the undergraduate and graduate level, I teach graduate statistics courses, and, Ben, you mentioned this word, meta-analysis. I'm one of those people that takes all of the studies, crunches the numbers, and tries to spin a yarn about it. And I hear you: it's messy, it's complicated, it's technical, and we need to find ways of doing better at getting the message to the folks who need to use that information.

    Ben Fineman [00:08:05]:

    So when the topic of research comes up in anything that has to do with soft sciences, and psychotherapy, I think, falls into that category, and correct me if I'm using the wrong terminology there, my mind automatically goes to the replication crisis that we briefly mentioned a few minutes ago and that, Carrie, you and I did an episode on a while back. John, Alex, either one of you can take this, but I think it's a good place to start, whether it undermines the idea that any study that's peer reviewed is worth taking at face value, or whether it puts into context some of the things we're going to be talking about going forward. Can you just give a brief overview of what the replication crisis was, and potentially still is?

    Dr. Alex Williams [00:08:51]:

    Yeah. I will say that I think soft sciences is a kind of pejorative phrase, but I don't disagree with it. Sometimes people push back on it because of the idea of, oh, so you're saying that it's not as good. But that's the phrase that gets used, so I think it's fine. I think people's opinion of psychology is probably not tied too much to whether we call it a soft science or not. The replication crisis, though: Carrie, you referenced high school biology. People may remember that a basic part of science is supposed to be that if you do an experiment, people should be able to replicate it. Meaning that if I'm in my lab somewhere and I do something and I get a certain result, other people in their lab should be able to do the same or similar things and get the same or a similar result. And the replication crisis is the realization, in the last decade or so, that this is not necessarily true for quite a few scientific fields. It even gets into the hard sciences; there have been subfields of biology that have been criticized for this. But psychology certainly has been criticized on these grounds, and to their credit, many psychologists have tried to face the problem head on. The difficulty they realized was that basically 50%, more or less, of studies in psychology were not replicable, meaning that people, when they followed the same or similar procedures, couldn't get the same or similar results. And I guess the crisis refers to the fact that there had been prominent names before who said, hey, I think we've got an issue here, but it's really in the last ten years that you start getting a critical mass of people in the field saying, oh, this is a problem. And so if we can't replicate these results, how do we know what's true? Can we put any credence in any of this?

    Dr. John Sakaluk [00:10:52]:

    The only thing that I would add to that, and I think that's a really good, high-level characterization of the crisis: part of the existential crisis was that it wasn't the case that these unreplicable studies came from people doing intentionally bad science, right? Once you peeked under the hood of these papers that weren't replicating, you'd look at their methodology and see they were playing by the rules, at least as they were determined in that day and age. Right. So a kind of classic example of this, one of the more dramatic ones: Daryl Bem, an Ivy League psychologist, very well known, very influential in the field of social psychology, published a paper in the Journal of Personality and Social Psychology, that's social psychology's flagship journal, the one all our best work goes to, with five or seven experiments or some such, claiming to have solid evidence for the existence of ESP, extrasensory perception. The paper is well known, called Feeling the Future. And people looked at this and immediately thought, no way. But when you go into the details, with what we understand now as some problematic practices, Bem was just applying the same rulebook as everyone else who was trying to get into these journals. Right. Which is an important piece: it was the norms of doing science at the time that were creating this literature that wouldn't replicate.

    Carrie Wiita [00:12:26]:

    Can you give me a small example of what you're talking about that I would understand one of these rules that used to apply but doesn't anymore? Because I'm having a hard time wrapping my head around that.

    Dr. Alex Williams [00:12:35]:

    Yeah.

    Dr. John Sakaluk [00:12:37]:

    You hear a lot about the umbrella term that captures a lot of this: p-hacking, or questionable research practices. The idea of null hypothesis significance testing, the letter of the law there, is that you get one crack at your statistical test. Right. So anytime you perform an analysis and then change something, you're actually violating the rules of that procedure, and you're giving yourself too many chances to get lucky. Right. And that's kind of what Bem did; there were signs that that was at least part of what was happening. For example, he would run a study, not find the effect, claim it didn't work, run another study, and carry on. Or collect a sample, find no effect of ESP, and think, oh, let's collect a little bit more data. More data is good, right? Well, on the surface, that seems like a noble thing to do. But again, in the null hypothesis significance testing ritual, you need to adjust for as many peeks as you take. Right. And that wasn't happening.
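    To make the optional-stopping problem concrete, here is a minimal simulation in Python, our own illustration rather than anything from the episode, with hypothetical numbers throughout. Even when two groups are drawn from the same population, checking a t-test after every new batch of data and stopping the moment p < .05 inflates the false-positive rate well above the nominal 5%:

    ```python
    # Optional stopping ("peeking"): test after each batch, stop once p < .05.
    # Both groups come from the same distribution, so every "finding" is false.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def peeking_study(n_start=20, n_max=100, step=10, alpha=0.05):
        a = rng.normal(size=n_max)  # group A: no true effect
        b = rng.normal(size=n_max)  # group B: identical population
        for n in range(n_start, n_max + 1, step):
            if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
                return True         # "significant" -> stop and write it up
        return False

    rate = sum(peeking_study() for _ in range(2000)) / 2000
    print(f"False-positive rate with peeking: {rate:.1%}")  # well above the nominal 5%
    ```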

    Carrie Wiita [00:13:41]:

    Oh, interesting.

    Ben Fineman [00:13:42]:

    Okay, so can you elaborate on, I guess, the idea that there are different levels of research? I don't know if that's the right language, but sometimes I'll come across a study, and it'll look interesting, and it'll be one study with a sample size of, let's say, 75 therapy clients. Sometimes I'll come across a study that's a replication of another study, which actually seems incredibly rare in the field of psychotherapy. Sometimes I'll see a meta-analysis. And I assume that some of these are more credible than others, or at least more deserving of trust. Can you walk Carrie and me through it? What is the language? Is it different levels? There's got to be something that you learn when you get a PhD.

    Carrie Wiita [00:14:26]:

    Right. What's better if you could make a hierarchy of studies? What's the best?

    Dr. John Sakaluk [00:14:36]:

    So there's a lot of different ways that we can categorize research. A really popular one that I think is a useful heuristic for your listeners is this notion of a so-called evidence pyramid. Right. As Ben was kind of saying, it's a very visual way of representing the types of studies, or the types of research documents, that you want to put your faith in, if you're going to put blind faith in anything. You would prefer to put more of it in studies that are closer to the top of that pyramid, and systematic reviews and meta-analyses sit at the top of that pyramid. That's not to say they can't be done poorly or can't mislead, but all else being equal, we like meta-analyses more because they average out everything that's going on in the literature. Then, for individual studies, there's a lot of different ways that we can categorize the research designs, but essentially, the brass tacks: it comes down to whether you're using experimental design or not, and whether you're monitoring the effect of a treatment over time or not. And I'll let Alex say more about it, but the gold standard of one-off studies in psychotherapy research is what's called the randomized controlled trial, which combines experimental design with longitudinal, over-time assessment of symptoms.
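    As a rough sketch of what that averaging means, here is a minimal fixed-effect meta-analysis in Python. This is our own toy example with made-up effect sizes, not data from any real trials: each study's effect is weighted by its precision (one over its squared standard error), so large, precise studies count for more than small, noisy ones.

    ```python
    # Fixed-effect meta-analysis: inverse-variance weighted average of effects.
    import numpy as np

    # Hypothetical effect sizes (e.g., Hedges' g) and standard errors
    # from five imaginary psychotherapy trials.
    effects = np.array([0.60, 0.10, 0.45, 0.30, 0.20])
    std_errors = np.array([0.30, 0.12, 0.25, 0.10, 0.15])

    weights = 1.0 / std_errors**2                    # precision weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
    print(f"95% CI: [{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
    ```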

    Dr. Alex Williams [00:16:00]:

    So, like John said, that type, the randomized controlled trial, sometimes called the randomized clinical trial, the RCT, is just like an experiment you might do in many different types of scientific fields, like when new drugs or new vaccines are being tested. We say, okay, let's compare the drug that we're interested in with a control group, and listeners may think, oh, like a placebo group in a drug trial, and that's exactly right. So how can we do that for therapy? Let's take the therapy we're interested in and compare it with some type of control group. And then, do people in this experiment who got the therapy we're interested in get better quicker, or improve more, than the people in the control group?
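    In code, the basic RCT logic looks something like the following, again a toy sketch with hypothetical numbers rather than any study being described here: randomize clients to arms, measure symptoms before and after, and compare how much each arm improved.

    ```python
    # Toy RCT: random assignment, pre/post symptom scores, compare improvement.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 60                               # clients per arm

    pre = rng.normal(25, 5, size=2 * n)  # baseline symptom severity
    arm = rng.permutation(2 * n) < n     # random assignment: True = therapy

    # Assume (hypothetically) therapy reduces symptoms by 4 points more than control.
    post = pre - np.where(arm, 6.0, 2.0) + rng.normal(0, 4, size=2 * n)
    change = pre - post                  # bigger = more improvement

    t, p = stats.ttest_ind(change[arm], change[~arm])
    print(f"Therapy improved {change[arm].mean():.1f} points "
          f"vs. control {change[~arm].mean():.1f} (p = {p:.4f})")
    ```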

    Carrie Wiita [00:16:51]:

    So this is my understanding of a classic study, right? And I do also remember that this is when, in grad school, I started learning, oh, you can't trust those, because of all these things. Some of those things I learned applied specifically to psychotherapy. Things like: oh, well, this is why CBT is overrepresented in the academic literature, because it lends itself to RCTs, because it's highly manualized, or can be. You can make it very standard and then train the therapists involved in the study to all do it the same way, whereas it's harder to manualize, to standardize, something like a psychodynamic approach, or God forbid, an eclectic or integrative approach, right? So that was what was introduced: we have this evidence base that we're so proud of as a field, saying, look, CBT works so well, gold standard, and then there's this other undercurrent in our field saying, no, you can't trust that, though, because of these problems. That's just one example. There's also the people who are recruited for these studies: they filter out folks who have comorbidities, which is not representative of people you actually see in practice, therefore we have to throw that out. Or the longevity of the studies; you mentioned longitudinal. So many studies can't afford to track people for very long, and so it's like, is that even helpful? Maybe therapy works slowly: maybe you don't get better in six to eight weeks, but you definitely get better in six years, but we just can't study things that long. What say ye? Fix that for me.

    Dr. Alex Williams [00:18:48]:

    The criticisms you brought up, those are legitimate criticisms. People also highlight that in many experiments on therapy that have been done over time, the people in the study, the patients, the participants, have been overly white, so not enough people of color are being studied. All those criticisms are legitimate at the end of the day, and people can come down on different sides. I sound like a politician, but people in good faith can come to different conclusions on this. Speaking of politics, I'm reminded of the old line about democracy being the worst possible governing system except for all the others. The RCT, the experiment, has got lots of problems, but in terms of showing whether a therapy works or not, it's still better than the other systems we have, at least in my opinion. Also, those criticisms absolutely should be mentioned and brought up. If I say, well, I have a study here showing that this treatment worked for people with depression...

    Ben Fineman [00:19:55]:

    Absolutely.

    Dr. Alex Williams [00:19:55]:

    People should say, well, who was in this study? What were they like? Were they like the people I see in my clinic? How long did the study go on for? How long were outcomes tracked? If you think about it, that's chipping away at how applicable that study is to the work you do. Right, so it absolutely chips away at it. What I see sometimes, though, is people take an extra step, and I think the extra step is really what's not warranted. They say: therefore, given I've chipped away at the relevance of this study, I'm going to do this other approach that I like, I'm going to use this other treatment that I think works. And all the decreasing of the positive evidence for therapy A doesn't magically produce positive evidence for therapy B.

    Carrie Wiita [00:20:42]:

    Oh, of course, yeah, right.

    Dr. Alex Williams [00:20:46]:

    Some of the claims like that by folks, and I do understand them, are like: oh, certain types of therapy, well, we can't put this into a treatment protocol, it's harder for us to study. And again, to varying degrees, I agree with that. But the difficulty of studying it doesn't magically produce evidence for it.

    Dr. John Sakaluk [00:21:08]:

    As an outsider, too: I'm not a therapist, right? I've received therapy, but I don't do therapy, and that's probably never going to happen. But those sorts of claims in particular always stand out for me, because then I'm like, but then what gives you confidence that yours actually works? Right? And Ben, you tossed out this buzz term earlier: validity. The reason we like RCTs as the best way of providing evidence that a psychotherapy works is because they maximize so-called internal validity. That's your highest level of confidence that any differences you see are because group one got therapy A and group two got therapy B. But I hear this claim, right, that, oh, well, pooh-pooh CBT or whatever, because it can fit into that experimental paradigm and ours just can't. And I'm like, but then what makes you confident in the delivery of it in the real world? If you can't bundle it up for research, what makes you think you can bundle it up for delivery?

    Carrie Wiita [00:22:16]:

    I mean, I completely agree with that, and I think this is exactly my frustration. Maybe we should just keep going, because, Alex, you make such a good point: just because you're pointing out the weaknesses that have come from studying treatment A, that doesn't mean therefore use treatment B. And John, what you're talking about, in terms of why people then want to use these approaches that can't be packaged for study: because therapists like it. They like it better. And I think that's what happens to a lot of us in the field: when we can't understand the research world, when it's too obtuse or opaque, we fall back to, I like it, it makes sense to me. And that feels really compelling. And then you can cherry-pick from the problems that have been put forth about research and say, oh, well, see, I'm right to not trust XYZ. And Ben, not to call you out, but this is a little bit what your argument is... wait, why are you two egging me on?

    Dr. John Sakaluk [00:23:42]:

    Get them.

    Dr. Alex Williams [00:23:43]:

    Yes. This is why we agreed to be guests.

    Carrie Wiita [00:23:46]:

    I know, and this is exactly our debate: we can't trust these things, are they applicable, blah, blah, blah. Therefore, why don't we just do what we want to do, do what the...

    Ben Fineman [00:24:00]:

    ...client wants us to do. Yeah, let me make that very important distinction.

    Carrie Wiita [00:24:05]:

    Yeah, no, please explain, because I think you actually do have an explanation for that perspective, so share it.

    Ben Fineman [00:24:14]:

    Sure. In my mind, and this is informed a lot by the work that the two of you have done, Alex and John, we don't know how therapy works. The argument that it's the common factors, and the dodo bird verdict that all therapies are equally effective: I think that is a very compelling argument, and I wholeheartedly bought into it for a long time, and my gut tells me that in the long run it will prove to be more true than any of the other hypotheses that have been floated. There is research out there showing that certain treatments are more effective than others for certain diagnoses. But for the most part, we don't know. Incredibly, given the field's been around for well over a century now, we're still in the early stages of saying how it works, why it works, what we should do to make it work better. That's all a mystery in large part, as far as I know. But we do know that client preferences are very important, the therapeutic alliance is very important, client expectations are very important. And so if a client comes in and says, I think what would help me is X, Y, and Z, and you can't point to the literature and say, no, client, you're wrong, because we have solid evidence that what you need is this other thing, why wouldn't you do your best to give the client what they're asking for? Because that, by default, will make therapy more effective. Therefore, in a sense, the client is always right, unless they're going so far outside of what is realistic. Maybe it's not even inaccurate to say that if a client thinks hopping on one foot and clucking like a chicken will help them, write that into your treatment plan, and therapy will be more effective than if you don't allow that to be part of it. That, to me, makes intuitive sense: if we don't know what works, but we do know that therapy is more effective when clients feel like they're getting what they think will work, why not give clients what they think will work? That's the argument. And honestly, if you watched videos of all of my sessions, you would see something like that: my clinical work from one client to the next looks very different, because I'm trying to get a sense of what they want and then give that to them. So, Alex or John, you're more than welcome to poke holes in my clinical approach and make me feel very self-conscious about my efficacy as a therapist.

    Dr. Alex Williams [00:26:27]:

    So, Ben, the last time we talked about clinical stuff, you changed my mind about getting upset at clients eating or drinking during session. The whole poking went the other way. There's a lot of nuance here, right? I don't know how far into the weeds we want to go on this. I'll defer to John here, because I think I just need a second to collect my thoughts.

    Dr. John Sakaluk [00:26:55]:

    I mean, I'll come at this from two angles again with the heavy preface that I'm not a therapist.

    Dr. Alex Williams [00:27:03]:

    Thank God.

    Dr. John Sakaluk [00:27:06]:

    But, and I'm not saying that this is what happens for you, Ben, I do think that good-faith concern about the limitations of evidence can sometimes hijack us into overcorrecting, in terms of the level of felt skepticism or cynicism. Even with some of the therapies that we've critically evaluated, I don't think we're necessarily calling for certain gold-standard therapies to no longer be practiced. It's: still use them as a default, and do assessment. See if you're getting change. And then if things aren't going to plan, consider other options. All else being equal, I think it's a fine thing to center client perspectives. The issue is that all else being equal is a standard that is often felt but not necessarily met. You feel all else is equal with a little bit of dialed-up cynicism, but you've overcorrected. The other thing that I would offer, more from the social psychological view of things: now, I don't know if this has been replicated, but classic experimental social psychological work by Nisbett and Wilson shows that people are actually really poor at introspecting about their motives, about what changes them, and about whether they're even changing. Nisbett and Wilson is often used as a rallying paper to invite some modesty. We want participants, we want clients, we want people who experience psychological products to feel respected and seen and affirmed, but I wouldn't necessarily trust a person to always be the best judge of what they feel and what they need.

    Ben Fineman [00:29:02]:

    So you're saying: be a bit skeptical or guarded about taking research at face value, but be even more guarded about taking your own motivations at face value, and default back to the research, because that's going to be a more objective approach than just taking your own gut at face value?

    Dr. John Sakaluk [00:29:22]:

    I'm saying there's room for skepticism all around. Be skeptical of the research, and there's good reason to do it. Be skeptical of yourself; you do have a vested interest in what you're doing. And be skeptical of your client, to a certain degree. You probably want to treat your client as though they're there in good faith, but they're not an omniscient source of information about what's going on and what's needed. If that were true, you wouldn't be necessary.

    Carrie Wiita [00:29:51]:

    Exactly.

    Dr. Alex Williams [00:29:53]:

    So I agree with everything John said there. If I can throw in, like, four quick hits here. I'm reminded of the movie Inherit the Wind, or the play that became a movie. I'm sure the audience all appreciates, like, an 80-year-old reference.

    Carrie Wiita [00:30:08]:

    Yeah.

    Ben Fineman [00:30:08]:

    Deep cut.

    Dr. Alex Williams [00:30:09]:

    Yeah. But there is a line in there. There's a reverend who gets so caught up in what he's saying that he condemns his own daughter to hell, and the character who in some ways is the antagonist of the movie, who's playing this really over-the-top anti-Darwin attorney, kind of steps in, like, all right, man, this is getting a little much, even for me. And that's where the title comes from. In Inherit the Wind, he says, I believe he's quoting the Bible, something like: it's possible to be overzealous, to destroy that which you wish to protect, and he that troubleth his own house shall inherit the wind. That's a very long way of asking: Ben, are you that preacher?

    Carrie Wiita [00:30:54]:

    Yes.

    Dr. Alex Williams [00:30:57]:

    But that's what comes to my mind. I agree with John, and I think the skepticism all around is warranted. And in terms of a practical step, I think you said this, Ben, but the idea of just going back to the research and following that, even with its flaws, as a basic rule of thumb, that's what I would go with. So it's like: here's the gold-standard treatment for clients who have this kind of problem; I would probably default to that. Even with some people saying, hey, this evidence maybe ain't what it's cracked up to be, I would still say, yeah, but compared to what? Right? So I would still probably default back to that. I had three other points, but... I said these were going to be four quick hits, and I feel like that first quick hit went on for, like, two minutes. So I'll hold the other points in abeyance for the moment.

    Carrie Wiita [00:31:47]:

    I just want to say thank you. This exchange right here really helped me, and I hope it helped other people too. What I think I'm hearing is: yes, it's all true. I really like this idea of holding skepticism for all of it, remembering that no part of this, not the research, not you as a clinician, and not even your client, is infallible or omniscient, which I think is a great word. And that accurately describes my hesitation at just abandoning the research, because I feel like there's got to be something there. This gives me a better, practical way to hold all of it in my head in terms of actually going in and doing therapy in the room: hold the skepticism, but lightly. To Ben's point, in terms of honoring clients and how they want to work and what they want to do, I obviously default to that 100%, theoretically, philosophically. But John, what you said reminded me that, yes, they're coming to us for a reason, and part of that reason is they think we know something they don't. And I think we should make a good-faith effort to know the things they don't know and introduce them, maybe not as, like Ben said earlier, shut up, you don't know what you're talking about, do it this way because that's what the research says. Not to that extent. But to be able to introduce it and say: this is what we have in the field, if it's helpful, take it, and if it's not, I guess don't. Being able to introduce it as a professional, that makes a lot of sense to me.

    Dr. John Sakaluk [00:33:39]:

    I was just going to say, Carrie, that I think it's fair to assume that clinicians are going to have their own continuum of what they're comfortable with. And that's okay, even as evidence-based practitioners. If someone comes in, to Ben's point, and says, I've been doing some reading and I like the sound of this mindfulness thing, I think it kind of resonates with me, that's a useful tidbit of information. And if that's in your arsenal and there's some evidence for mindfulness in the space you're working in, then right on. But if that same person came to you and said, I've been doing some reading, and I think what would really help me is dolphin therapy; right, that's a thing, and you can go into the literature and find some studies on dolphin therapy. I think a clinician like Ben would be well within his rights to say: I'm just not comfortable doing that, because based on my understanding, the evidence for that treatment isn't great. And so if that's really important to you, godspeed, but it ain't going to be me. All right.

    Ben Fineman [00:34:49]:

    So we've talked about RCTs being, generally speaking, better than studies that do not use that methodology. We've talked about meta-analyses, generally speaking, being better than studies that are not meta-analyses. And in a bit we're going to talk about, within those categories, how we know what to trust. So when we find something that's an RCT or a meta-analysis and we say, oh good, this is what they were talking about, how do we go even deeper to evaluate what's in the study, to know where it falls on the continuum of trustworthiness? But before we go there, I want to take a quick detour into the world of qualitative studies, which the four of us have already debated off air: is this worth including? I want to talk about it because of my thoughts about qualitative studies. For anybody listening who doesn't know the distinction between quantitative and qualitative, actually, I should probably ask you guys to define that better in a second. My gut has told me that qualitative studies are interesting, almost like reading an editorial in a magazine or a paper, where it's one person's story and there's stuff to be learned from that. But I have never felt like they have any tangible use in terms of pushing the field of psychotherapy forward, because the sample size is so small, often n equals one. So I want to ask the two of you to define a qualitative study and answer the question: are they actually good for anything? And now you two get to fight.

    Dr. John Sakaluk [00:36:23]:

    I'll maybe start off with the high-level thing, then kick it back over to Alex, and we may get into some of my thornier thoughts and feelings. Ben, you characterized qualitative research as often N of one. I think N of one is possible, single-person qualitative studies are possible, but I don't think they're the rule; they're more the exception. I would say these are methodologies that involve conducting interviews, either with individuals or in group settings, and oftentimes, not always, transcribing those interviews to the written word. We actually have software that helps with this a lot now; that was one of the weird things that came about through the pandemic, thank you, Zoom and Otter.ai. We then look at those transcripts of conversations and try to extract some kind of meaning about something we're studying, either through the lens of some theoretical framework or through a presumably bottom-up process where we let the words speak for themselves and the themes trickle up. Now, before I kick it over to Alex, one thing I'll say, just as a counterpoint to your skepticism, Ben, about the value proposition of qualitative research in psychotherapy: I'll offer the literature on the HIV/AIDS epidemic as a counterexample, where participatory qualitative research has oftentimes shaped and driven the agenda. If you talk to people who work in that sector, they have a very healthy appreciation for the way that getting in and talking to these people, hearing their voices in their own words, has led to material insights into the problems and the solutions in that area.

    Carrie Wiita [00:38:16]:

    Wait, I know we don't want to get into the weeds, but can you explain a little bit more what you mean? What came out of the qualitative research there that we couldn't have gotten from quantitative research?

    Dr. John Sakaluk [00:38:27]:

    Well, I think some of it was: what are the experiences of people who have HIV? You're talking about an epidemic that, in its heyday, and still to some degree, was driven by a lot of negative stereotypes about queer sexualities and risk factors and certain behaviors, as well as assumptions about what those communities needed. That sector of research is one of the ones that really popularized, they may even have coined the term, and I'm a jerk not to know, the expression nothing about us without us: if you want to come down from on high and think you know what's best for us without talking to us and hearing from us what we need, we're not going to play. It was a really empowering approach to research in that community. Again, I am not an HIV research expert, but I know people who work in that field who are quantitative and mixed in their methodological approaches, and they have an incredibly healthy respect for the insights that paradigm of research has delivered over the years.

    Carrie Wiita [00:39:46]:

    That explains why it sounds good to me. Ben, I'm curious: does that kind of example of what qualitative research can do shift your thinking that it's more than just an anecdote?

    Ben Fineman [00:40:03]:

    I think where I get tripped up is kind of taking what John was saying and applying it to psychotherapy specifically.

    Carrie Wiita [00:40:10]:

    Okay.

    Ben Fineman [00:40:11]:

    And I could see qualitative studies asking clients to talk about their experiences of psychotherapy. Kind of like our podcast, in a sense.

    Carrie Wiita [00:40:20]:

    Yeah.

    Ben Fineman [00:40:21]:

    It'd be very hard to quantitatively study what we get at when we interview our guests who have had bad therapy experiences. That, to me, seems like it could be valuable. And now I realize I'm just patting myself on the back for saying only the thing I'm doing would be valuable. But I don't see a ton of use, and maybe I'm overlooking something, and this is where I'd love to get your perspective, Alex, in applying it to psychotherapy. I don't see a ton of usefulness, though I'm sure it's there, in asking people: what was your experience of this approach to therapy? How did it feel for you as a client receiving dolphin therapy or CBT? Because it's so subjective, and goodness of fit between the therapist and client, or dolphin and client, is so key, what do you learn from that other than hearing somebody's anecdotal experience, where you say, oh, that was interesting, that was a fun read, but how does it inform my practice or the trajectory of our field? So, yeah, Alex, I'm curious how you can take what John said and shoehorn it into psychotherapy specifically.

    Dr. Alex Williams [00:41:27]:

    Yeah. So it's interesting, because based on our conversations before coming on here, I thought I was going to be an extremist on this. I want to just note in passing that I am an extremist on the therapeutic alliance. That was brought up 15 minutes ago, and I always have to drop my little note here: hey, we don't actually have any direct evidence that the therapeutic alliance leads to client improvement. We don't have experimental evidence for that. But that's really an aside.

    Carrie Wiita [00:41:57]:

    Wait, I want to know more. Oh, my God. I want to get into it.

    Ben Fineman [00:42:00]:

    How dare you just drop that in, knowing that we're not going too far into the weeds, we're not going on tangents, we have so much to cover. How dare you undermine the foundation of everything Carrie and I know to be true about this field and be like, moving on. Can you make your argument, in, like, two minutes or less, to support your claim?

    Carrie Wiita [00:42:21]:

    Yeah. Okay, wait, do you want to finish the first point you were making about this? And then we can... yeah, okay.

    Dr. Alex Williams [00:42:29]:

    So I find myself not being an extremist here, maybe as much as I thought, in the sense that I basically agree with what John was saying. So, Ben, tell me how this sits with you. To give an example: let's say we do qualitative research with people who have a particular form of mental health problem that we, at this point, don't have an effective therapy for, and we find out, okay, what is their experience like? The more we find out about what their experience is like, maybe that gives us better targets to aim at with therapy.

    Ben Fineman [00:43:05]:

    One qualitative study is valuable insofar as it can be pooled with other qualitative studies to build a narrative that shows something that could not be captured quantitatively or that was misrepresented or ignored previously.

    Dr. Alex Williams [00:43:19]:

    Yes, I think that's a good way of putting it. Now, this is where maybe my preferences show. I suspect, and John can tell me if he thinks I'm wrong on this, that qualitative researchers probably don't like that I'm basically saying, oh, the utility is that we find out good information that we can then test in a quantitative study. So maybe this is where I get closer to being the extremist: yeah, that's basically what I'm saying. And I'm saying that's actually super valuable. But for the busy practicing therapist, I'm not sure that those qualitative studies are particularly useful. They're going to have ten minutes, five minutes, whatever, to read something in their attempt to keep up on the literature, and therapy outcome studies, for instance, that are quantitative are probably more useful for them than qualitative studies, even though I think the qualitative studies are very important in informing the quantitative experimental studies, like we've been talking about.

    Carrie Wiita [00:44:20]:

    I just want to clarify: you're saying that they're very valuable to the field in terms of moving our field forward, exploring where to go next, but on a practical, clinical-application level, if a therapist needs a quick hit or a quick search, maybe go to the quantitative stuff first. Is that what you're saying?

    Dr. Alex Williams [00:44:40]:

    I would say as a rule of thumb, yes.

    Ben Fineman [00:44:42]:

    Okay.

    Dr. Alex Williams [00:44:42]:

    And if it makes it any better, I'm probably pissing off a qualitative researcher. But, I mean, I'm not claiming my own studies are the things a busy therapist should be looking at either. I'm just saying, for the busy therapist, we're talking very little time here; qualitative studies would not be my go-to most of the time.

    Carrie Wiita [00:44:56]:

    Got it. Okay.

    Dr. John Sakaluk [00:44:57]:

    Before Alex attacks Ben's religion, can I just provide two use cases for qualitative research that I do think have applied value for the everyday clinician?

    Carrie Wiita [00:45:14]:

    Yes.

    Dr. John Sakaluk [00:45:14]:

    And those would be, without going off the deep end with them: one, as we've alluded to already, RCTs are trying to max out internal validity, how much confidence we can have that the reason people got better is the therapy they received. But we know that literature has a lot of shortcomings in terms of who is included and who is excluded, who is excluded as a matter of principle and who is excluded as a matter of prejudice. And I think one thing that qualitative researchers really excel at is speaking to people on the margins, people who are marginalized. So if you're looking at a psychotherapy literature and you're not seeing your client reflected in it, I do think qualitative research can be really useful to go to, to get the context around how a treatment is going over with folks who have the experiences and some of the background that your clients do. The other thing I think it's really useful for, and again, this is a sensitive topic, so we don't have to go too far here: I think qualitative research can be really useful for getting information and warning signs about harmful therapies before we deploy them at scale. For example, when same-sex attractions were considered a mental health diagnosis, if you had talked to queer people then about what their experience of conversion therapy was, the answer would have been: not fucking great. And that's not so far gone, right? Because you hear some of the same stuff coming from folks who identify as trans or non-binary about some of the therapeutic experiences they've had. And those won't necessarily be caught in an RCT, because, for one, those people might leave; if their experiences are bad enough, they just might drop out. So in that case, I think qualitative research can be a really useful guide for: are we doing harm to people here? Are people feeling the helpfulness, even on a very subjective level? And I think those two things remain useful beyond just informing quantitative research, which it certainly can do too, and that's a good thing.

    Carrie Wiita [00:47:31]:

    That was really helpful. Thank you, guys.

    Ben Fineman [00:47:34]:

    Carrie, this so far has exceeded my hopes. I feel like I am learning and I too am now buying into the value of qualitative studies.

    Carrie Wiita [00:47:44]:

    Yes.

    Ben Fineman [00:47:44]:

    So if nothing else, I have been converted. You want to take a pause and do a quick support pitch?

    Carrie Wiita [00:47:50]:

    Yeah, let's do it. First of all, thank you, everyone, for listening, and for all the other ways that you support the show, whether you leave a rating and a review on Apple Podcasts or Stitcher or wherever it is you listen, or by joining our Patreon page. For $5 a month, you get access to our special bonus episodes. Our most recent release is another edition of Bad Therapist Facebook Posts, so check it out at www.patreon.com/verybadtherapy.

    Ben Fineman [00:48:20]:

    And, Carrie, this episode is also sponsored by Sentio Counseling Center, which is where I work full time, as my day job, as the clinic director. We are a nonprofit, fully online counseling center serving clients in the state of California. We provide what I would like to believe is very good therapy. Our fees start at $30 per session, and we have recently expanded. So if you are a client in the state of California looking for therapy, check us out. If you are a therapist in the state of California looking for a high-quality, low-fee referral source, feel free to send people our way. That's at sentiocc.org; we'll have a link in the show notes. And yeah, thank you so much for listening. Let's get back into it.

    Carrie Wiita [00:49:02]:

    So if I'm understanding things correctly, this is where we get to take a little tangent, but actually, I'm going to make a case for it not being a tangent, into this whole therapeutic-alliance-is-bullshit bomb that Alex just dropped a second ago. Why I don't think this is actually a tangent: it's exactly what I'm talking about. With my level of understanding, and admittedly, I'm kind of a nerd for psychotherapy research and academic shit, I thought I had a pretty good grasp of the evidence base that said the therapeutic alliance was pretty important, and that it was one of the few things we really knew about in this field. So I think this will be a really instructive, illustrative tangent to help us explain how understanding or misunderstanding research can lead to misinformed conclusions. I don't know. Am I just totally wrong?

    Dr. Alex Williams [00:50:06]:

    So probably not.

    Carrie Wiita [00:50:07]:

    Okay.

    Dr. Alex Williams [00:50:08]:

    I just feel like I can't let the therapeutic alliance sit there on its little shelf and look so pristine. I've got to point out that there's a researcher whose name I'm probably mispronouncing, a Netherlands name: Pim Cuijpers, C-U-I-J-P-E-R-S. I always read his research when it comes out. And he wrote a paper a few years back just pointing out that we have no compelling experimental evidence that the therapeutic alliance is what leads to better outcomes in therapy. Now, he's not claiming in the paper that we have compelling evidence for other things either; it's a paper on the common factors, and he's pointing out that we don't have compelling experimental evidence that the common factors lead to change in therapy. It makes sense that they would, certainly. I think Mike Anestis, a psychologist and I believe a professor now at Rutgers, made the point one time on the old Psychotherapy Brown Bag blog: I wouldn't want a therapist who was punching me in the face. It's probably a good thing to have a working alliance with your therapist. But we don't have direct experimental evidence for it. The history of medicine, the history of psychology, is rife with things that looked good in correlations and then didn't turn out to be causal. So just the idea that we stack up more and more evidence that these two things are correlated, that having a good relationship with your therapist and getting better in therapy are correlated... I don't have to remind the listeners that correlation does not equal causation. It's suggestive of it, oftentimes, but it is not equal to causation. So we don't have causal evidence. I mean, if you ask me, I'd probably say, yeah, I bet it helps.
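    As a toy illustration of the correlation-is-not-causation point, ours, not Alex's or Cuijpers', suppose some hidden third variable, say early symptom improvement, drives both later alliance ratings and final outcome. Alliance and outcome then correlate even though, by construction, the alliance causes nothing:

    ```python
    # Confounding demo: a common cause produces an alliance-outcome correlation
    # in the total absence of any causal effect of alliance on outcome.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    early_gain = rng.normal(size=n)                   # hidden common cause
    alliance = 0.7 * early_gain + rng.normal(size=n)  # alliance tracks early gains
    outcome = 0.7 * early_gain + rng.normal(size=n)   # outcome also tracks early gains

    r = np.corrcoef(alliance, outcome)[0, 1]
    print(f"Alliance-outcome correlation: {r:.2f}")   # ~0.33, with zero causal effect
    ```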

    Carrie Wiita [00:52:12]:

    Yes, but my understanding is not... it seems like, right, I feel like I've read a lot of studies, and I feel like an idiot admitting this right now, Alex, so please tell me how I'm misreading this. I've read specifically a lot of studies about the therapeutic alliance, the relationship, operationalized a million different ways. So what are you saying? Are you literally saying it's just that there's not been a study, that it's literally that correlation does not equal causation, that we have a lot of great studies showing strong correlation, but it's not causation? Or are you saying all these studies I've read are being willfully optimistic?

    Dr. John Sakaluk [00:53:13]:

    I think, to try to clarify, and Alex, you tell me if this characterization is right: we were just having a conversation about how RCTs are the gold standard for what causes improvement, and the simple point being made here is that no one's done an RCT on the presence or absence of the alliance. Right. And from my understanding, and I'll kick it back to Alex here because he's the one really championing this view, I actually don't have a dog in this fight.

    Ben Fineman [00:53:42]:

    Way to walk it back, way to hand it off.

    Dr. John Sakaluk [00:53:44]:

    Well, hey, Ben, if you want some ammo to fight back on Alex, I could give it to you. But I'll just say, the evidence that I'm aware of that Alex often talks about: the common factors folks like to argue from omission, right, where some meta-analyses don't show big differences between treatments, and they say, aha, see, common factors. And it's like, no, that's a test of specific treatments for specific conditions. That's not a test that gets at the efficacy of the common factors specifically.

    Carrie Wiita [00:54:23]:

    Right, okay.

    Dr. Alex Williams [00:54:27]:

    Well, quick comment on that. And this is one where I think we have to climb our way out of the weeds before I ditch us back into them. John's bringing up a good point. Sometimes people say, well, here is a study where two treatments were compared, and it doesn't look like there's a difference in outcomes; therefore, that's evidence that they're working equally well, the dodo bird verdict, or that they're working because of the common factors. And again, without getting into minutiae, that's just wrong. You can't conclude that. If that's all the study is telling you, you cannot conclude that the common factors are therefore why people are getting better.

    Carrie Wiita [00:55:07]:

    That makes sense. Okay.

    Dr. John Sakaluk [00:55:09]:

    Yeah.

    Dr. Alex Williams [00:55:09]:

    Carrie, your summary of my point is basically right. My bigger point, though, and absolutely, I am an extremist on this, I am way out there: you can agree with everything else I've said and find plenty of people who agree with you, but on this one, it's a pretty lonely group out here. My basic point is that correlation is not causation. As John said, we don't have the RCTs, that is, the experiments, showing that the alliance is causing people to get better. Some people say it'd be unethical to run those studies, and I'm not sure that's the case; but whether that's true or not, again, the difficulty of running the study doesn't mean that the verdict we want to be true is therefore true. And Carrie, to your point about whether people are overly hyping the studies: not necessarily. But I would just say, speaking of the replication crisis and all these things we've talked about with problems with research over the years, no one to date has done that sort of credibility-investigating, meta-scientific review of this literature. So it might be very credible, even those correlational studies. I want to be very clear: I'm not saying we have evidence that they're unbelievable or wrong or anything like that. I'm just saying it is an open question. A lot of studies in psychology aren't very credible when you look under the hood, and nobody's looked under the hood at that body of literature.

    Ben Fineman [00:56:42]:

    So I am feeling a familiar wave of nihilism kind of crashing over me here, where I think, so we don't know anything.

    Carrie Wiita [00:56:54]:

    So close to buying back into research, and now we've just destroyed it.

    Ben Fineman [00:56:59]:

    Which, I think, actually turned out to be a very helpful sidebar, because I know what's next on the outline that we put together. Here is the question: how do we know what to trust when it comes to evaluating studies, evaluating research? And it looks like there are, like, a dozen different things that John and Alex, you two, have thrown in here. So, bringing me back from just kind of surrendering all hope that there's really any answer to how you evaluate research: walk us through the different bullet points in terms of what, as therapists, we can pay attention to when looking at a study and saying, okay, how can we determine if this is credible versus maybe not so credible?

    Dr. Alex Williams [00:57:40]:

    I'm happy to do this. Can I say one more thing on it that maybe helps also pull us back from the abyss?

    Ben Fineman [00:57:47]:

    Yeah, please.

    Dr. Alex Williams [00:57:49]:

    Yeah. Just that the message to people listening is, again, don't listen to me, I'm a crank on this. But I would say, if you take anything away, it's that you should absolutely still try to build a therapeutic alliance with your clients, 100%. I do it in my own practice. I would encourage you to do it. Just be aware if you notice yourself saying, I'm going to deviate from the experimental research, Ben used the silly example of, like, oh, let's hop on one foot and cluck like a chicken, and I'm going to deviate a lot from the research and just make that the focus of session, because I think that will build an alliance with my client and that will help them get better. Just have that degree of skepticism toward yourself: okay, maybe I can't deviate as much from the research as I thought, just for the sake of, quote unquote, building the alliance.

    Carrie Wiita [00:58:44]:

    This is, for me, the biggest takeaway so far from this conversation, because everybody does that. Every therapist does that. Every therapist, when they encounter something they don't like or don't want to do, we all, I do, default back to: well, what matters is the alliance, and I just need to go back to the relationship and focus on the relationship, and that's why I won't do XYZ, or that's why I'm going to do this instead of this other thing. And you are saying: hold up, you're putting an enormous amount of faith in something that hasn't actually been conclusively proven.

    Dr. Alex Williams [00:59:24]:

    Yes, that's well put. Yes. And the same thing applies if you said, well, I know this works because I see my clients get better every day when I do XYZ.

    Carrie Wiita [00:59:34]:

    Right.

    Dr. Alex Williams [00:59:35]:

    Also, the shorthand here is: no, don't trust yourself, have humility on this. And when people tell you, oh, but I'm the guru therapist and I know what's best, we should all have humility on these things.

    Dr. John Sakaluk [00:59:51]:

    Something that is spontaneously coming to mind that I didn't plan to put out there, and we can scrub it if it's not bringing joy. But I'm reminded of this model of what knowledge is: knowledge as justified true belief. Right? Those are the three components in this model: what is true, do you believe what is true, and are you justified in believing what is true? And I think a lot of the anxieties that we're talking about here touch on these components, right? Like, what do you do if you don't know what is true? You're doing your best to make a good faith read. Does CBT work? Does emotion focused therapy work? Some of the problems we're bringing up here are what happens when therapists let their belief really do the talking without necessarily assessing to what extent that belief is justified. Right. But to Ben I would say, walk back from the edge of the abyss, man, because even if the common factors value proposition isn't fully justified, it still seems plausible.

    Ben Fineman [01:01:07]:

    Right.

    Dr. John Sakaluk [01:01:08]:

    That's something that Alex is saying.

    Ben Fineman [01:01:09]:

    Right?

    Dr. John Sakaluk [01:01:09]:

    It seems plausible, and he does it, and people talk about it. There's some reason to think that it's true, and it's just about strengthening the justifications, then, or at least calibrating your belief in light of the level of justification that's currently out there.

    Ben Fineman [01:01:26]:

    It's hard to walk back from the abyss when I've built such a comfortable home here. How dare you ask me to abandon my creature comforts here on the edge of sanity?

    Dr. John Sakaluk [01:01:38]:

    We have beer back here.

    Carrie Wiita [01:01:41]:

    So let's go there.

    Ben Fineman [01:01:43]:

    How do we know what to trust? Hit me with some metaphors, some bullet points, so that next time I look at a study, I can have in the back of my mind a checklist of sorts of what I should be paying attention to. Should this push me closer to the abyss or bring me back to John and his stockpile of beer?

    Dr. John Sakaluk [01:02:03]:

    Yeah. So in terms of knowing what to trust, we were talking a lot in the back channel about how to make these technical pieces of research more accessible to everyday clinicians. And one of the things we thought was that maybe a metaphor might help explain some of the intuition behind these different features. So the metaphor I came up with is: evaluating psychotherapies and their effectiveness is kind of like a sports competition, where you have all of these different psychotherapies in the league and you want to know which ones are good, which ones are bad, and which ones are best. Right. And there are a lot of ways that competition between those teams could play out, ways that could be really convincing versus ways that would just have you thinking, we have no idea how good that team is.

    Ben Fineman [01:02:54]:

    Right.

    Dr. John Sakaluk [01:02:54]:

    That's not informative at all. So just to start with the most intuitive one: sample size. Right. We recommend folks trust studies, especially RCTs, that have larger sample sizes. And the sports metaphor version of this is: there's a reason why football games go the length that they do, or hockey games have three periods and basketball games have four quarters. It's because you don't learn much if you only put the team out there to observe them for two minutes. Right. And so likewise, you need to give these therapies enough data to feel like you've gotten a reasonable look at what their therapeutic potential is.

    Carrie Wiita [01:03:35]:

    I love that. That makes so much sense.

    Dr. Alex Williams [01:03:37]:

    Just so you know, John's in Canada. So for our American listeners, hockey is a sport where the players skate around on ice and hit a puck with sticks. You may have seen it before.

    Dr. John Sakaluk [01:03:49]:

    More damning is the reason I didn't say how many sections of time are in a football game: Canadian football is weird, and I don't remember anymore all the ways that it's different from American football. So I don't even feel comfortable speaking about it.

    Dr. Alex Williams [01:04:04]:

    I love that. And I may break John's brain here, but I was going to say: for an RCT, for a therapy experiment, rule of thumb, look for 200 participants. Look for 200 people in the study.
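
For the curious, here's where a round number like 200 can come from. This is a minimal sketch, not a calculation from the episode: it assumes a true between-group effect of Cohen's d = 0.4 (a plausible size for one bona fide therapy against another), tested at the conventional alpha = .05 with 80% power, which lands at roughly 100 people per arm.

```python
# Minimal sketch (our assumption, not from the episode): why ~200 participants
# is a sensible rule of thumb for a two-arm therapy RCT. We assume a true
# between-group effect of Cohen's d = 0.4, alpha = .05, and 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.80)
print(round(n_per_group))      # ~99 participants per arm
print(2 * round(n_per_group))  # ~198 total, roughly Alex's 200
```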

    Carrie Wiita [01:04:18]:

    Great.

    Ben Fineman [01:04:19]:

    I love it. Yeah. I know these are not exact, and these are just heuristics as opposed to rules, but I think those are really helpful for myself, certainly, and I'm guessing for some people listening. Because how do you know what constitutes a four-quarter match as opposed to two minutes of scrimmaging? Is it 75 people? Is it 200? Is it 1,000? So I really love that you're throwing these in there, Alex, even if we're just flagging that these are not objective, factual cutoffs, just rough guidelines, so that when I pick up a study and I see 60 participants, I can say, cool, this is like one quarter of a football game as opposed to the whole game.

    Dr. John Sakaluk [01:04:58]:

    I hate to give Alex credit for anything, but, for many reasons that we won't go into the weeds on, 200 is actually a really nice number.

    Ben Fineman [01:05:07]:

    Awesome. Keep going. Keep going.

    Dr. John Sakaluk [01:05:09]:

    So another thing that we would encourage people to look at, right, we talked about this evidence pyramid and how original studies come in different designs, and RCTs are the preference. But you'll see studies roll out that are just longitudinal, just comparing groups, things like that. And a lot of folks feel like there's more information in those between-group comparisons, where we have therapy versus placebo or some other control condition, rather than just observing people before and after they got a treatment. And the sports metaphor there, right, is you wouldn't really trust a team that was like, oh yeah, we're the best, and you know how we know? Because we scrimmage so hard against ourselves, and we feel we're pretty good. Right. You measure the performance of the team against a competitor. Those are the sorts of comparisons that are increasingly important.

    Carrie Wiita [01:06:08]:

    God, I wish I had this metaphor in grad school. I swear to God, the ways I have tried to memorize between-group and within-group. If I had just had that metaphor, I feel like I would have gotten it a long time ago. Okay, next one.

    Dr. Alex Williams [01:06:28]:

    I've got to give credit to John here, because this particular outline is one that John made up, so I was going to let him take the lead. But if he's going to let me jump in, then I will do it. So there's a difference between studies that are experiments, like we've been talking about, and studies that are non-experimental. In an experiment, if I say I have depression and I'm signing up for this experiment to test this treatment for depression, there's an equal chance I'll end up getting the treatment in question or being in the control group, right? And the control group could look different ways, but there's an equal chance of ending up in either group. In non-experimental studies, there's not an equal chance that people could end up in either group. So it might be, for instance, that we said, okay, there are two different hospitals. Everyone at hospital A who comes into treatment will get the therapy we're testing; everyone at hospital B will be the control group. Well, that's a study. But notice that it wasn't a setup where you had an equal chance of being in either group. It was just: everyone in one hospital got one treatment, everyone in a different hospital got the other. And John's analogy here was that if you said, well, this team, we've only ever seen them play on their home court, or only with certain referees who are favorable to them, that's not as convincing as seeing whether that team will actually beat other teams on neutral courts with fair referees. So, same thing for treatments. If you say, well, this treatment worked, but the study was not an RCT, it wasn't an experiment, think of it as: okay, so they won on their home court with some referees who were biased in their favor. What we want, then, is: what about when they're on a neutral court with neutral referees?
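
To make the home-court distinction concrete, here is a minimal sketch in Python (ours, with invented names, not anything from the guests). In the experimental version, every participant has the same 50/50 chance of either condition; in the two-hospital version, condition is completely confounded with site.

```python
import random

random.seed(42)  # reproducible coin flips
participants = [f"participant_{i}" for i in range(200)]

# Experimental (RCT): a coin flip gives everyone an equal chance of either arm,
# so the two groups differ only by chance, not by who walked in where.
rct_groups = {p: random.choice(["new_therapy", "control"]) for p in participants}

# Non-experimental: the arm is determined entirely by the hospital, so any
# difference between sites (staffing, neighborhood, clientele) is baked into
# the comparison, i.e., the home court with friendly referees.
def site_based_group(hospital: str) -> str:
    return "new_therapy" if hospital == "A" else "control"
```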

    Ben Fineman [01:08:28]:

    So this is kind of like, and correct me if I'm wrong, but this is kind of like what you see when new modalities come out, or when you look at the initial studies on certain modalities. There is this trend, and it's so common from my perspective, where the initial studies say, this is groundbreaking, the effect size is way bigger than other modalities. And then people who aren't affiliated with that particular modality do their own studies and their own RCTs, and it's, like, just as effective as everything else.

    Dr. Alex Williams [01:08:57]:

    That is definitely a problem. I think that's a little bit of a different problem than this one; it could be related, but it's not necessarily the same thing. This one is about whether, if we run the study, there's an equal chance that you end up getting... Let's say I'm developing a new form of CBT, Alex CBT, and you sign up to participate in my study, God help you. Is there an equal chance that you end up getting Alex CBT? And let's say the control group is old school CBT. Is there an equal chance you end up in either group? Or did I just say, no, the first 100 people who sign up get Alex CBT, and the next 100 people get old school CBT?

    Ben Fineman [01:09:39]:

    Okay, I think I'm just going to erase my question and your response, because it occurs to me that I just didn't hear what you said the first time, and it was straightforward enough for...

    Dr. John Sakaluk [01:09:48]:

    I actually really like your question, if I could lobby for you to keep it in, because we hadn't planned to talk about this. But it is actually something that we see. I mean, it's a more technical warning sign, but: treatment effects that are just implausibly good, too large to be true. Right. And that's, maybe more in Alex's home court, to invoke a boxing analogy: if you have a really good boxing division and you have some people who are really good, maybe they're ten and three, or they're twelve and two, whatever, and then you have someone who's 50 and 0, you probably question the 50-and-0 guy. Right. Is something else going on there? And that's certainly the case with psychotherapy research, too, where people sometimes think, oh, the bigger the effect size, the better. But at this point we know what size psychological effects should be. And sometimes when you see studies on psychotherapies that say, oh yeah, our effect size is triple the effect size of all the competitors, that's actually not more convincing; it's usually much less.
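
As a rough illustration of the too-good-to-be-true red flag (our sketch, with invented scores): Cohen's d is just the difference between group means in pooled standard-deviation units, and head-to-head therapy comparisons tend to produce modest values. An effect several times larger than the rest of the literature is the 50-and-0 boxer.

```python
import statistics

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference: (mean1 - mean2) / pooled SD."""
    n1, n2 = len(treatment), len(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Invented symptom-improvement scores, purely for illustration:
ordinary = cohens_d([12, 9, 11, 14, 10, 13], [10, 8, 11, 13, 9, 12])
outlier = cohens_d([20, 19, 21, 22, 20, 21], [10, 8, 11, 13, 9, 12])
print(round(ordinary, 2))  # ~0.53, an unremarkable gap
print(round(outlier, 2))   # ~6.6, the 50-and-0 fighter: question it
```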

    Dr. Alex Williams [01:10:59]:

    Right.

    Dr. John Sakaluk [01:10:59]:

    So scrub it if you want. I think actually there's some good wisdom there.

    Ben Fineman [01:11:03]:

    It makes me think of Lance Armstrong. That's the first thing that came to mind, sure.

    Carrie Wiita [01:11:09]:

    Okay, wait, I'm sorry to admit I'm still just slightly confused. So wait, when you're talking about experimental versus non-experimental, when you're saying that experimental is stronger than non-experimental, is this where researcher bias comes in? Or is that a different thing?

    Dr. John Sakaluk [01:11:28]:

    No, this is really about, again, to go back to the sports analogy: you would not find it convincing if a sports team was only tested in one arena for the whole season, because maybe that arena is favorable to them. Maybe the officials are, right? Maybe one of the nets is wonky. So in a lot of sports, we alternate. Right. We change the conditions of where you compete: different teams, different arenas, different refs, different sides of the court. And the reason you do this is because you're trying to use randomization to level the playing field. Right. And that's the signature of experimental design: random assignment to condition or control. And so when you just have, as Alex said, site A and site B, we don't know, is site A just a train wreck?

    Carrie Wiita [01:12:25]:

    Yeah.

    Dr. John Sakaluk [01:12:26]:

    Is their administration a gong show? Did they not hire good people? That might not be a fair test. Right.

    Carrie Wiita [01:12:32]:

    Or is it in a better part of town, where everyone's got better health care to start with? Got it. Okay.

    Dr. John Sakaluk [01:12:38]:

    This is why we trust experiments more: everyone had the same probability of getting the treatment or not.

    Carrie Wiita [01:12:46]:

    Right, okay, thank you. That was so helpful.

    Ben Fineman [01:12:49]:

    So I think we're probably at about an hour 15 here. I'm wondering with the rest of these bullet points in terms of how do we know what to trust? Can we do a bit of a lightning round? Because I think they're all important, but I want to make sure that the editing process for this episode ends up not taking the entire weekend.

    Dr. Alex Williams [01:13:08]:

    Yes. So we've been saying this phrase, control group, a lot. Is there a strong control group? That's something people can look for. So, John's sports analogy: if I said my football team is great because they beat the local high school team, that's not as impressive as if I say, well, they beat Patrick Mahomes in Kansas City. So the idea is that we want strong competition in therapy studies. That's usually going to mean the therapy was tested against some other form of therapy or against a psychiatric medication. Weak competition, the equivalent of, well, we beat up the local high schoolers and they weren't that good to begin with, would be if the therapy was tested against no treatment at all, or a waitlist, something like that.

    Dr. John Sakaluk [01:13:54]:

    Another buzz term that your listeners can keep watch for is registered or preregistered. What this means is the research team went on the record saying exactly how they were going to run their study, exactly what variables they were going to monitor, and what statistical tests they were going to do, with the commitment being that they were actually going to do that once they had the data. And this is kind of akin to: you don't get to change the rules of the sport midway through. Right. If you play in the NBA, you play by the NBA's rules, and those don't suddenly change mid-game. And if they did, you would say, that's an unfair contest. Now, this is an interesting one, because even though the term registered should give you some confidence, and we like registered studies better, this is actually one of the ways that RCTs get gamed: what gets published oftentimes doesn't match the registration. And the only way you'd know that is if you read the paper, look at what they did, and are diligent enough to go back into the registration and see for yourself.

    Carrie Wiita [01:14:50]:

    Oh, shit. Okay, keep going, keep going.

    Dr. Alex Williams [01:14:53]:

    So, transparency. If I said, guess what, I got into a fight with Conor McGregor and I knocked him out, and people said, well, really, Alex, is there any video of this? No. Will Conor admit to it? No. You just have to take my word for it. Trust me, it happened. Nobody saw it, but it happened. We probably wouldn't put a lot of faith in that. So for studies, the more transparent they are, where they say, look, the data are available, you can review our data, we have the study materials posted publicly where you can see for yourselves what we did, that is generally a sign of trustworthiness, I think, for listeners. Kind of like John's last point about registration, both these things can be abused. There's a joke in the field, "data available upon request," and then you ask for the data and you never get it. But all that said, as a general rule of thumb, I'd put a little more faith in a study if you see the words "this study was preregistered" and "the data are available" and there's a website link. It's not a lot, and John and I may feel a little differently on this too, but I'd put a little more faith in it, even if you never bothered to follow those links.

    Dr. John Sakaluk [01:16:05]:

    Now, related to that: you wouldn't feel convinced by Alex if he said, trust me, I knocked McGregor out. You'd want some other people to bear witness to that. And you wouldn't feel terribly convinced if Alex said, trust me, my parents and my partner will vouch for me. Right. My wife was there and she saw it happen, right?

    Dr. Alex Williams [01:16:25]:

    She was, John.

    Dr. John Sakaluk [01:16:28]:

    I guess we'll just have to take her word for it. So, peer review, right? This is something that we usually accept as kind of an essential quality. And that's just saying that independent third parties had a look at the methodology and the analyses, and they more or less thought what was done was reasonable. Now, you might ask yourself, it's 2023, what studies aren't peer reviewed? And there's this rise in what are called preprints, where researchers are trying to make their research openly available for everyone to read, which is a good thing. There are good reasons for doing it that we don't have to talk about, but sometimes people share research before it's been peer reviewed. And so you just want to make sure that you're not betting the farm on something before some independent third parties have had their chance to weigh in.

    Carrie Wiita [01:17:17]:

    I feel like I have seen those so much more often. They come up in my Google searches because they're not behind a paywall. Oh, shit. Okay, cool, thanks.

    Dr. John Sakaluk [01:17:28]:

    And again, there's good reason for it. At least, I love open access, right? So I'm entirely here for us making our research freely available to the public; the public is largely who pays for it. But one of the complicating pieces is that, while there are good reasons to put your research out before it's been peer reviewed, you nonetheless want people to calibrate for that.

    Carrie Wiita [01:17:49]:

    Yeah, it didn't even occur to me. God damn. Okay, great.

    Dr. Alex Williams [01:17:53]:

    And then our other bullet point here: studies are replicated. We've talked about this already with the replication crisis. But if you see that other people have followed similar procedures and gotten similar results, particularly if those studies feature some of the things we've already talked about, like being preregistered, or if it's a meta-analysis, so a bunch of different studies, 20 different studies looking at the same question, and there's a result from that meta-analysis that's interesting to you, that's something you can put a little more weight on than a study that has never been replicated. If it's just a one-off study, you should question it more and put less faith in it.
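
Since meta-analysis keeps coming up, here is the arithmetic at its core in miniature (our illustration, with made-up studies, not numbers from the episode): in a fixed-effect meta-analysis, each study's effect size is weighted by the inverse of its variance, so large, precise studies count for more than small, noisy ones.

```python
# Fixed-effect meta-analysis in miniature (made-up numbers, for illustration).
# Each study reports an effect size d and the variance of that estimate;
# bigger studies have smaller variances and therefore larger weights.
studies = [
    {"d": 0.45, "var": 0.04},  # mid-sized trial
    {"d": 0.80, "var": 0.15},  # small, noisy trial with a flashy effect
    {"d": 0.30, "var": 0.02},  # large, precise trial
]

weights = [1 / s["var"] for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
print(round(pooled_d, 2))  # ~0.39: the flashy small trial barely moves it
```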

    Dr. John Sakaluk [01:18:32]:

    And maybe the last one that I'll do: this notion of what's called allegiance. So let's say Ben came up with Ben's newfangled therapy, right? If you read ten research papers on it, and they were RCTs, and they had large Ns, and they're all saying Ben's newfangled therapy is great, but then you see that Ben is on all of these papers and there are no papers without Ben on them, this should concern you, because Ben gets prestige from having this fancy therapy with his name on it. He probably gets some money too, right, either from clients or from doing some consulting or training on the side. So we like to see clinical literatures where there are varied authorship teams.

    Carrie Wiita [01:19:18]:

    That was so incredibly helpful for me. I wish we could keep going for another hour, but I know I don't want to ruin Ben's weekend. So instead of doing that, guys, can you maybe give us a sum up, takeaway sort of yeah, bring it home. What should we take out of this entire conversation?

    Dr. Alex Williams [01:19:40]:

    So I gave John credit earlier for the sports metaphor, but I'm going to claim the credit for what you're about to hear: the acronym Mr. Bear. M-R-B-E-A-R. This is my way of remembering six things that a busy therapist could quickly look at if they're trying to say, is this study useful to me? How much weight should I put on this study? So the M: is it a meta-analysis? Again, we said if you see the phrase meta-analysis, it's a big study of studies; generally speaking, that's better. R: is the study registered or preregistered? Do you see an indication that the researchers said, before we ever collected any data, we have a hypothesis that people can verify? If we don't have that, it's the equivalent of, well, I shot a bunch of arrows at the side of the barn and then went up and drew bullseyes around them afterwards. Right. You don't want to change your hypothesis after you've started the study. B: is it a big sample size? Again, very rough guideline: 200 people in an RCT. E... that's the problem with great acronyms, not remembering the letters offhand. E, oh, is it an experiment? So if you're looking at something, is it an RCT? Is it actually a comparison of a therapy with a control group where people had an equal chance of being in either group? You can just quickly look for "RCT." A: is there an active control group? So is the control group in the study actually doing something, or is it just, we compared our therapy with nothing at all, no treatment, or clients sitting on a waitlist? And then R is... aren't you glad you didn't forget what the R stood for? Because I actually don't have this in front of me.

    Dr. John Sakaluk [01:21:33]:

    Replication.

    Ben Fineman [01:21:34]:

    Replication.

    Dr. Alex Williams [01:21:35]:

    Replication. Thank you. Wow. Look at John.

    Dr. John Sakaluk [01:21:38]:

    It must be that effective acronym.

    Carrie Wiita [01:21:43]:

    Wait, I want to try it. I want to try it. I want to try it. M: meta-analysis. R is registered. B: big sample size. E: is it an experiment? Oh, there's a parrot outside my room. A is for ADHD... A is active control group, and R is replicated. Is it a replicated study? Did I get it?

    Dr. Alex Williams [01:22:09]:

    Yes, you nailed it.

    Carrie Wiita [01:22:10]:

    That's helpful. I like that.

    Ben Fineman [01:22:12]:

    Alex, does this exist anywhere online? If we want to link to this in the show notes so people can take Mr. Bear and share him far and wide with the field.

    Dr. Alex Williams [01:22:23]:

    It exists in my dashed-off preparation for this podcast episode. It exists in this podcast. This is a world exclusive: Mr. Bear.
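
Since Mr. Bear existed only in Alex's notes at the time of recording, here is one hypothetical way a listener might turn the acronym into a quick screening checklist. This is our sketch, not an official tool from the guests.

```python
from dataclasses import dataclass, fields

@dataclass
class MrBear:
    """Rough screen for how much weight to put on a therapy study."""
    meta_analysis: bool   # M: is it a meta-analysis, a study of studies?
    registered: bool      # R: was it (pre)registered before data collection?
    big_sample: bool      # B: big sample, roughly 200+ for an RCT
    experiment: bool      # E: random assignment to conditions (an RCT)?
    active_control: bool  # A: tested against an active control, not a waitlist
    replicated: bool      # R: have similar studies found similar results?

    def score(self) -> str:
        hits = sum(getattr(self, f.name) for f in fields(self))
        return f"{hits}/6 Mr. Bear criteria met"

# For example, a single small unregistered trial against a waitlist control:
print(MrBear(False, False, False, True, False, False).score())  # "1/6 ..."
```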

    Dr. John Sakaluk [01:22:30]:

    Buddy, I think you need to get the visualization of Mr. Bear ready to disseminate.

    Carrie Wiita [01:22:35]:

    Absolutely.

    Ben Fineman [01:22:37]:

    Yeah.

    Carrie Wiita [01:22:37]:

    We need a PDF, for sure.

    Dr. John Sakaluk [01:22:40]:

    I was just going to say that, all jokes aside about the acronym, I do think it's a really useful set of features. Whenever you hear someone grandstanding about a particular therapy, you could do a lot worse than putting that therapy's research base through this acronym. Right. And there are a lot of therapies that people like to make grandiose claims about, EMDR being an example of one. Right. Some of the original RCTs of EMDR would totally flunk a lot of these features.

    Carrie Wiita [01:23:14]:

    Interesting.

    Ben Fineman [01:23:15]:

    Yeah. And stay tuned for our podcast next week, which is a Patreon Selects episode where we had the wonderful Angela talking about her deep dive into the EMDR research. Gosh, I wish we had more time, because I'd love to hear both of you... Actually, can we take, like, one minute? Is it possible for both of you to share, thanks, Carrie, your thoughts on EMDR as a precursor to what we talk about in next week's episode? Because it really feels like it ticks a lot of the boxes when you talk about narratives in our field around studies that maybe aren't as credible as ones that adhere to Mr. Bear. We need, like, a good phrase for that: ones that follow the teachings of Mr. Bear.

    Dr. John Sakaluk [01:24:05]:

    I like that use of the term narrative. Because again, not being a therapist, what I have heard about EMDR over the years, being around people who do therapy, is always this narrative that it's equivalent to exposure therapy in its effectiveness. And one of the interesting experiences our research team had when we did our first big critical synthesis of the credibility of the psychotherapy literature is that we looked at some of the trials on EMDR, and we were able to backtrace what study and what test that claim came from. And it was this tiny comparison of, like, twelve people, right? I think six were EMDR and six were exposure. Again, I'm not here to say that EMDR doesn't work. I'm just here to say: if that's what you're betting the farm on, right, that it's equivalent to exposure... Once that was said in narrative format, that belief just took off. Right? And I think a lot of people would be surprised to know just how small the comparative samples are in that literature of exposure versus EMDR.

    Dr. Alex Williams [01:25:17]:

    I will say, well, first, I'm a proud Patreon Selects subscriber to Very Bad Therapy, so I've already listened to that episode, and I encourage listeners to subscribe. EMDR is kind of in a weird category, right? It's better than brainspotting. And I may have crushed some listeners' hopes and dreams there, but brainspotting, we don't have RCTs for that. EMDR, we do, at least for the treatment of PTSD. And there, I think it's a purple hat therapy.

    Carrie Wiita [01:25:48]:

    Wait, what does that mean?

    Dr. Alex Williams [01:25:50]:

    Well, okay, so it's not my term originally; I wish I remembered who coined it. But basically, if I said I'm doing, I gave the joke earlier, Alex CBT or whatever I'm calling it, right, and you look at it and it's just old school CBT, but the client wears a purple hat, and then I say, well, see, it works, and it's because of the purple hat. Well, of course, I didn't actually show that. I just showed that old school CBT worked when the client was wearing a purple hat. EMDR adds in a bunch of stuff with the eye movements, the bilateral stimulation, which is a fancy, kind of nonsense term, but the idea is that those add-ins are like a purple hat. There really isn't evidence that they're adding to the therapy's effectiveness. It gets a bit thorny in the research, but I think it's pretty clear at this point that we don't have compelling evidence for that, at the least. And that's for the treatment of PTSD. Once you get outside PTSD, for some of the things EMDR has been claimed to work for, we don't really have RCTs at all. I would not pay money as a clinician to get trained in EMDR. I would not pay for the fancy equipment.

    Ben Fineman [01:26:56]:

    And if you are listening and you take umbrage with what was said, I encourage you to wait a week before you send us an angry email, because you will have a lot more content to get angry about.

    Dr. John Sakaluk [01:27:08]:

    Yeah, and if you do have an angry email, Alex's email is...

    Ben Fineman [01:27:14]:

    Might be a good moment to thank both of you once again for sharing your valuable insights and your wonderful metaphors, any parting thoughts? And also where people can find you online to send angry emails or to follow your work or to get in touch.

    Dr. Alex Williams [01:27:30]:

    I'm on Twitter, at least at the time of this episode, at WilliamsPsych, P-S-Y-C-H. And I'm at the University of Kansas, so if you Google Alex Williams, University of Kansas, my faculty website will pop up, and you can probably send me an angry email that way. I'll just note that I know therapists who do EMDR; I love them. My takeaway from this episode is that no one's trying to trash you or what you're doing out there, right? Or at least that's not my intention. That comes next week. But for EMDR or anything else, the Mr. Bear type thing is just a way to maybe approach some of these questions differently. It doesn't mean that you're doing a bad job as a therapist or that we don't like you or anything like that.

    Dr. John Sakaluk [01:28:15]:

    And you can find me on the hellscape bird site at JohnSakaluk. And if you Google me, just be aware that there are two other John Sakaluks who are both dead, and I'm the living one. And maybe what I would throw out by way of sign-off is just that the issues we've been talking about, appraising the impact of psychotherapies, who is or isn't represented in those literatures, how credible the research base is, this is a really hard thing to do, and I think it's a really tall ask of clinicians to be able to clock all of this in real time and deliver it in their practice. And so maybe I can drop a teaser: Alex and I have an invited review paper under consideration right now where we're proposing a way of synthesizing evidence that bundles all of this up in a really accessible dashboard format, where if you want the more technical details, you can click around and geek out on things. But we recognize that clinicians shouldn't be expected to have a specialized degree in meta-analytic methods and RCT design and this, that, and the other thing in order to make sense of this literature, because even really highly trained people find it incredibly difficult. So if you're feeling that way, you're not alone. And we're not the only team working on this, but I think the field is starting to wake up to the fact that we need to pay more attention to how folks are able to use that knowledge.

    Ben Fineman [01:29:45]:

    I am so glad that we brought it back around to that, because it's so validating, just for myself, to hear you say we shouldn't be expected to be able to do this after doing two and a half years in a master's program. Finding ways to bridge that for people, to make it more accessible, more digestible, just seems so sorely needed. So please let us know when that is published so we can mention it, or have you back on, because that seems like such a necessity: to take everything we've talked about today and make it so that people don't have to listen to this episode to even start to follow the ways of Mr. Bear.

    Dr. John Sakaluk [01:30:21]:

    We can send you along a sneak peek if you want, and you can laugh at my broken PowerPoint graphic illustration of what the dashboard looks like. It's very five-year-old cartoony. It's brutal.

    Carrie Wiita [01:30:34]:

    I can't wait.

    Ben Fineman [01:30:36]:

    Academics: lots of good ideas, not great on the delivery and accessibility part of it.

    Dr. Alex Williams [01:30:42]:

    This episode, though, has given me great confidence and certainty that Mr. Bear must be incorporated into the dashboard.

    Dr. John Sakaluk [01:30:49]:

    Oh, God.

    Carrie Wiita [01:30:49]:

    100%. Absolutely.

    Dr. John Sakaluk [01:30:52]:

    I just want to reiterate: frenemy of the podcast. And reasons like this are exactly why.

    Ben Fineman [01:30:59]:

    Well, Dr. John Sakaluk, Dr. Alex Williams, thank you tremendously. We had a lot of fun. Look forward to the next time.

    Dr. John Sakaluk [01:31:07]:

    Thanks, y'all.

    Dr. Alex Williams [01:31:09]:

    Thanks, guys.

    Carrie Wiita [01:31:16]:

    Thank you for listening to Very Bad Therapy. The views and opinions expressed do not constitute therapeutic or legal advice, nor do they represent any entity other than ourselves or our guests.

    Ben Fineman [01:31:26]:

    Visit us at www.verybadtherapy.com for more content, ways to support the podcast, or to let us know if you have a story you'd like to share on the show. If you'd like to join our Patreon community and get access to our monthly bonus episodes, check us out at www.patreon.com/verybadtherapy
