Thursday, August 18, 2011

Deception in Research

When is it okay to lie in a study? When someone volunteers to be a research subject, whether it is a psychological study or clinical trial for a new drug, a corporate study or student project, what are the limits on just how much you can deceive the person? What aspects of the deception should be disclosed, and which are okay to keep hidden from the subject? If you do disclose the deception, do you need to be up front about it? Can you wait until after the study is under way or even completed?

The easy answer is that researchers shouldn't lie to or mislead study subjects. Lying is wrong, right? Besides, lying could have a very big impact on whether or not the subject participates. And then there are the psychological impacts of being lied to. Just look at the famous (infamous) Shocking Experiment by Stanley Milgram. Most people learn about this in high school social studies, but for those who may be a bit fuzzy on it or haven't heard about it, here's a quick summary.

Milgram enrolled volunteers (the teachers) in a study, telling them that they were going to read a series of word pairs to another study subject (the learner), then read one word and four possible matches, delivering a shock if the learner responded with the incorrect match. Each teacher and the learner met before the experiment began and engaged in small talk, forming a friendly acquaintance. What Milgram did not tell his volunteers was that the learner was just an actor who never received any shocks and that the responses they heard were prerecorded; the volunteers were the ones really being studied, to see how far they would go, and how strong a shock they would deliver, in deference to authority.

At the end of the experiment, with a majority of subjects having delivered quite high shocks, and some even going to the maximum, the truth was revealed. During the study, many of the subjects became quite agitated and stressed. The potential for serious psychological harm, as well as stress-related physiological effects, was present, though all subjects were debriefed afterward. Suffice it to say, Milgram's experiment would almost certainly not be approved today. It is often held up as an example of just why research ethics and straight dealing with subjects are so important, and as an illustration of the risks of deception in research.

But the questions I asked are not quite so easily answered. There are times when deception is not just okay, but required in order for the study to provide accurate, meaningful results. Taking a little autonomy away from the subject is sometimes required.

The Randomized, Blinded, Controlled Study

Probably the most common and accepted form of deception in research is when subjects are randomly assigned to receive either the experimental condition (activity/procedure/drug/device being studied) or the control condition (comparative item/procedure or placebo), but during the course of the study, they will not be informed of which condition they received. In other words, they are blinded to the condition, the deception being that key information is withheld from them. The epitome is the placebo-controlled trial. Deception of this nature is vital for a great deal of research, since the opposite (open-label or unblinded) design would allow for the subject to respond differently, based on which condition they know they are in. In a drug study, if the subject knows whether they are receiving the study drug or a placebo, they may react or behave differently, report greater improvement or more side effects if they know they are getting the drug than if they know they are getting the placebo. Blinding the subject to this helps to reduce or even eliminate that kind of bias, which can render the study results meaningless.

In order for this kind of deception to be approved by an ethics committee, though, the subjects must be told up front that the deception will be occurring and the nature of the deception. The informed consent process must disclose that there will be some condition other than the experimental drug and that subjects will not be told which they are receiving. They must also be told the nature of the other condition (active comparator, placebo, and so on). In other words, the researchers need to say, "Look, we're going to be lying to you. This is the specific part of the experiment that we are lying about, and this is the nature of the lie. All the rest of the study is on the up and up."

This kind of deceit poses little risk to the subject. The information withheld is unlikely to have any significant effect on their psychological well-being and, having been informed of the range of possible conditions they may receive, they have enough information to make an informed choice about whether or not to participate.

Other Forms of Deceit

What happens with other forms of deception? Suppose the true purpose of the study is not disclosed (e.g., the Milgram study above, or when subjects are told only that their performance on a quiz is being tested, when the true purpose is to gauge the effects of distractions on performance); is that ethical? Or what if subjects are told that X is going to happen, but the researchers never intended to deliver X, or had always planned to give Y instead?

One of the big problems that needs to be considered when deciding whether or not these forms of deception are ethical in human research is whether the deception affects subjects' willingness to participate or to keep participating in the research. If the information that is being withheld would affect their decision, then it should be disclosed. Of course, disclosure may jeopardize the validity of the study. In the example of quiz performance and distractions, suppose the planned distraction is a very sudden, unexpected and loud noise. For many people, that may not be an issue at all. It gives a bit of a start, but no real harm is done. Now, what about individuals with a panic or anxiety disorder? Or those with a heart condition? For these people, the purpose of the study and the fact that there will be a startling event are information that could affect their decision whether or not to participate. They know what types of things could trigger or exacerbate their condition, and such an event may be one of those. On the other hand, telling subjects the nature of the study (effects of distraction on performance) or the type of distraction (the loud noise) may subtly influence their behavior, thus invalidating the results of the study. Before I talk about how to address this, let's look at another example.

There was a recent study published in the New England Journal of Medicine by Wechsler et al., comparing the outcomes of an active drug, one of two placebos, or no intervention in the treatment of asthma, titled Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma1. In this study, researchers gave subjects with stable asthma either an albuterol inhaler, a placebo inhaler, placebo acupuncture (sham acupuncture) or no intervention at all. Each participant received three sessions of each treatment but did not know whether they were receiving the real deal or a placebo. I won't go into the science or conclusions from this study, since Orac at Respectful Insolence already did so. Instead, I'm going to try to focus solely on the deceptions used and the ethical considerations. In addition to the article linked above, readers may find it useful to refer to the study protocol2 (PDF) and Supplementary Appendix3.

So, what specific deceptions were used? First off, we have blinding to the treatments being received. However, what also occurred was deception about the range of treatments to be used. As stated in the protocol, though not in the actual paper, the subjects:

will be informed that they may receive several strengths of active or
placebo bronchodilating medication and active or placebo acupuncture at different times during the study. [emphasis added]

Similarly, the Supplementary Appendix notes that a research assistant:

informed [the study subjects] that they would receive one of the following interventions on that day: an inhaled medicine (active or placebo), acupuncture (genuine or placebo), or observation of natural history. [emphasis added]

Subjects were told that they might receive actual acupuncture instead of sham acupuncture (shamcupuncture?), even though real, skin-piercing, meridian-aligning acupuncture was never going to be used at all. In other words, subjects were led to the false belief that real acupuncture could be administered as a treatment.

After subjects completed participation, researchers gauged whether subjects thought they were receiving real or placebo treatment, finding (from the actual paper):

Treatment credibility was high, and most patients believed that they had received active treatment (73% for double-blind albuterol, 66% for double-blind placebo inhaler, and 85% for sham acupuncture). The two double-blind conditions did not differ significantly from each other, but sham acupuncture was significantly more credible than both inhaler conditions (P<0.05).

Now that we know what actual deceptions were in place and how the researchers measured the effectiveness of the deception, let's consider the ethics involved. Since I already addressed blinding subjects to the treatments, I won't rehash it here. What about the misleading list of possible treatments? None of the available materials state how this misinformation was presented in the informed consent form, but we can guess, since the study was approved by an IRB, that something like authorized deception was used, in which subjects are told in the consent form that deception will be used, but the exact nature of the deception is withheld. Is that all that is required? Should participants be told which treatments were genuine and which were false once the study is over? Neither the article, the protocol, nor the Supplementary Appendix makes any mention of whether or not subjects were disabused of the deception when they finished participation or when the study as a whole ended.

In the Archives of Internal Medicine, the National Institutes of Health's Dr. David Wendler uses the conditions for the use of deception in research from the American Psychological Association's Ethical Principles of Psychologists as a basis for IRB approval of research beyond just psychological studies4:

(1) the use of deception is necessary; (2) the use of deception is justified by the study’s social value; (3) subjects are not deceived about aspects of the study that would affect their willingness to participate, including risks and potential benefits; (4) subjects are informed prospectively of the use of deception and consent to its use; and (5) subjects are informed of the nature of the deception at the end of their participation

Wendler's list provides an answer for my question above; the participants should have been debriefed regarding the nature of the deception. But why? What harm would there be in not informing the subjects that there was no real acupuncture, as they were informed by the researchers?

A large number of the subjects (85%) reported that they believed the sham acupuncture was the genuine article. Furthermore, after receiving the sham acupuncture, subjects reported about a 46% improvement in subjective outcomes, just shy of the 50% subjective improvement from albuterol. Therefore, if they are not told otherwise, people who participated in the trial may leave with the belief that acupuncture is effective for treating asthma. Not only that, but because the subjective ratings were quite similar to those for albuterol (the only treatment that demonstrated significant objective improvement), they may believe that acupuncture is as effective as using an inhaler to control their asthma. There is potential, then, for the subjects to opt for acupuncture instead of albuterol to treat their condition, with the result being a worsening of their illness. They may also tell others not involved in the research, spreading this erroneous belief.

Though there may be no direct negative impact on the subject by failing to debrief them upon completion of the study or earlier, we can see that there is potential for harm and, thus, reason to clear things up at the end of the study. But this is not the only possible harm that can come from using deception in a study.

More Unintended Harm

Wendler also notes that there may be further unintended consequences stemming from deceiving study participants. He reminds us that in many clinical studies, especially those involving treatment of a condition, subjects may assume that they can trust the researcher. Doctor-patient relationships are built on trust. Deception holds the potential to undermine that trust. Subjects who respond negatively to being misled in the research context may be less likely to participate in future research and may even come to view doctors in general as untrustworthy. The lie robs the subject of autonomy and runs counter to the fundamental ethical principle of respect for persons.

What to Do?

There are always going to be situations where deception is unavoidable. Some aspects of research may require lying to study participants in order to generate data that will actually lead to meaningful results that advance our understanding of various conditions and improve the quality of medicine. But there are pitfalls that may affect the participant, the research endeavor or even medicine as a whole.

Informing potential subjects that they will be misinformed about some aspect of the study, i.e., authorized deception, can mitigate some of the negative facets of lying or withholding information about the study. Subjects who are comfortable with being tricked are more likely to participate than those who are not, and they are less likely to suffer harm as a result of the ruse once the study is finished.

But, as illustrated by the Wechsler example, alerting subjects before they agree to be in the study that they will be deceived is no guarantee that the prevarication will not result in some manner of harm. Debriefing subjects as early as feasible (either when their participation ends or when the whole study is finished) is therefore necessary to further minimize the risks to subjects.

We can use the points outlined by Dr. Wendler as a general guide to determining whether or not it is ethically acceptable to use deception in research. Ultimately, though, studies must be judged on a case-by-case basis. What may be appropriate in one study may not be so for another.
____________________
References

(1) Wechsler, M., Kelley, J., Boyd, I., Dutile, S., Marigowda, H., Kirsch, I., Israel, E., & Kaptchuk, T. (2011). Active albuterol or placebo, sham acupuncture, or no intervention in asthma. New Engl J Med, 365, 119-126.
(2) ibid. Active albuterol or placebo, sham acupuncture, or no intervention in asthma - Protocol. New Engl J Med, 365.
(3) ibid. Active albuterol or placebo, sham acupuncture, or no intervention in asthma. New Engl J Med, 365 (Suppl.).
(4) Wendler, D. (2004). Deception in the pursuit of science. Arch Intern Med, 164, 597-600.

3 comments:

  1. I cannot tell a lie, I lied when I participated in a study last fall.

    I was asked not to take medication beforehand but I forgot so I was medicated and I didn't tell them because I find these things interesting and didn't want to miss out on the fun just because I'd messed up the experiment.

    Anyhow, the experiment was fun. The researcher was keen to get rid of me, I didn't stop talking!

    I thought nothing more about it until a couple of months later I was speaking to my brother on the phone and he said a researcher had called him to ask if I really had Asperger's!

    Suggesting maybe that the medicine worked that day, or the researcher was not very good at spotting autism (I doubt it, they've seen thousands)?

    Hopefully they will have excluded me from the results. Meanwhile, I'm happy with my result :)

  2. Forgive my late comment to this post. Ethics in research is an interest of mine and one I'd like to learn more about. Thank you for posting your references, too. I will try to get a copy of Wendler's article and see if he changes my mind.

    Aside from its contribution to the understanding of complicit behavior, I've heard the Milgram study referenced more often as a prime example not of why lying is bad, but rather of why institutional review boards exist and why scientific understanding should never be placed above the health and safety of the participant.

    In double-blind experiments such as the one you first describe, I do not see how patients are lied to. They are given full disclosure from the start that neither they nor the treatment administrators know whether the treatment is the experimental treatment or a placebo. To lie, they would have to be told that they will receive "X" treatment, whether or not that knowledge is correct. Additionally, administrators should never act as if any participant were receiving the experimental treatments, as that would present a major confound to the study. (Was improvement due to placebo effect, treatment, communication between expectant administrator and participant, or interaction effect of treatment and communication?)

    While I thoroughly agree that complete debriefing of each participant should occur as soon as the study allows, without delay, to prevent a participant's possible false assumptions from entering his or her own personal catalog of factual experience, I have trouble seeing "I don't know, and neither do you" at the start of each trial as flatly deceptive.

    My department runs into the opposite problem, however. The pool of student participants on which we often rely are aware of deception in psychological studies. Most of the studies in which they participate lack deception, but because they expect it regardless, there is a potential for paranoia and alertness not representative of the population. I've been tempted, for laughs if nothing else, to design a study in which they are told to the point of excess that they will be deceived, and see if that actually lowers their expectations of deception by the assumption that the promise of deception is the deception.

  3. @skepticarebear

    Thanks for your comment, and better late than never!

    Regarding double-blind studies being deceptive, the deception is that the subjects will be receiving something (treatment, placebo, comparator), but the researchers will not be telling them what it is.

    Suppose that you don't have anything in the consent describing blinding, randomization or that there are even different groups. Even if the researchers are blinded, the subjects are being led to believe that what they are getting is the real deal, when they might instead be getting a placebo or active comparator. The way around this deception is to tell them up front, "There are actually things other than the real treatment under study. You won't know which you'll get. Neither will we."

    In other words, tell them up front that they will be deceived regarding whatever it is they get. Even if the researcher is also blinded, deception is still involved.

    Your point on the psych experiments and students was something that I actually ran across while researching this post. It's a big problem that can really skew the results and render the study meaningless.

