
Catherine Will & Tiago Moreira, eds, Medical Proofs, Social Experiments. Clinical Trials in Shifting Contexts, Farnham, Ashgate, 2010.

More than half a century ago, physicians began to struggle with how to assess the efficacy of treatments. As the late Harry Marks documented at length, around the 1950s the two alternatives considered for making these assessments were the case-based judgment of individual experts and the results of randomized clinical trials (RCTs). Within a few decades, however, the RCT reached the apex of the hierarchy of clinical evidence, where it remains despite the objections of a number of dissenting doctors, philosophers and sociologists. The compilation edited by Catherine Will and Tiago Moreira brings us a selection of the most recent sociological literature on medical experiments. It is interesting to note that, as the editors themselves present it, this book constitutes a vindication of case-based reasoning against the purported generality of RCTs. In the latter, we assume that we are dealing with a representative sample of patients and a standardized treatment protocol, allowing us to generalize our conclusions beyond the trial. The case studies compiled in this book question the possibility of such generalization: as the editors conclude, information about how clinical trials are organized and carried out goes beyond the reporting of methods and is crucial for the critical interpretation of evidence. This information should be compiled precisely through case studies, bridging the gap between the agents defined in the research protocol and the communities and contexts where these protocols are implemented.

Unlike other edited collections of case studies, this one aims at constructing a systematic argument. In this respect, Will and Moreira have done a wonderful editorial job, making explicit the threads between the different chapters in their introduction and conclusion and in short prefaces to each of the three parts into which they divide their compilation. In part I, “The Practices of Research,” three case studies, by Stefan Timmermans, Ben Heaven and Claes-Fredrik Helgesson, analyze how researchers struggle with trial protocols, either adapting them to their own goals, resisting them when they conflict with those goals, or supplementing the protocols with their own ad hoc methods in order to ensure that trials are completed. The editors present their own papers in part II, “Framing Collective Interpretation”. Both deal with the appraisal of trial results by third parties: the medical profession through their specialized journals and the State (the British National Institute for Clinical Excellence). In part III, “Testing the Limits for Policy,” three more papers discuss the use of trials for policy-making purposes. Again, the analyses focus on the role of contexts in policy-oriented trials: the adverse consequences of bracketing contextual information (briefly discussed by Trudy Dehue regarding depression) or the virtues of making the most of it in the trial (Ann Kelly and Alex Faulkner).

This quick summary is obviously guilty of saying very little about the actual content of the papers. If we list them according to the interventions examined, we find a trial on the use of antidepressants (bupropion) against methamphetamine dependency (Timmermans), a comparison of two lifestyle interventions with medication against a common, chronic condition (Heaven), the controversy on the rosuvastatin trials (Will), the NICE cost-utility analysis of dementia drugs (Moreira), two hybrid trials of an arthritis screening program and of mesh screens against malaria (Kelly), and a recent British prostate cancer detection program (Faulkner).

The general point the editors are trying to make is that the conduct of clinical trials and the interpretation of their results depend not only on their research protocol, but also on the intentions of the many agents who, one way or another, are involved in the process. Whether the results of a trial can be generalized beyond their “context of discovery” is something that we can only decide on a case-by-case basis. Indeed, from what they hint in the conclusion (e.g., p. 158), the editors would rather advocate redesigning regulatory trials so that their different stakeholders could have their say.

Emphasizing the ultimate context dependence of RCTs is a point worth making against those philosophers (or perhaps statisticians) who allow no epistemic role for such dependencies. But do the contextual dependencies discussed in this volume actually interfere with our ability to identify treatments that are efficacious for the general population? Do we have less reliable trials as a result of these out-of-protocol interventions, and should the medical community consider alternatives to the RCT?

Unfortunately, none of the papers in this collection addresses this crucial problem. The one that comes closest is Helgesson’s analysis of the practices of out-of-protocol data cleaning in large Swedish RCTs. Helgesson tracks the ways in which data are informally recorded and corrected without leaving a trace in the trial’s logbook, from post-it notes to guesses about the misspelling of an entry. In his view, the trial participants who make such corrections do so in good faith, in order to increase the credibility of their results. However, Helgesson explicitly refuses to discuss what sort of errors may thus be introduced into the data, rejecting the assumption that “any idiosyncratic shaping of data should be understood as producing biased data and biased results” (p. 52); we therefore cannot draw any conclusions about the impact of such errors on the interpretation of the study. Yet, despite Helgesson’s contention, psychologists have documented at length how the credibility these practitioners seek is directly connected with confirmation biases: we all tend to accept more easily information that confirms our prior beliefs than disconfirming data. Confirmation biases have been documented in scientific laboratories, for instance, by Kevin Dunbar and his team at Toronto, who have also shown that experimenters rely on bias-correction procedures from which the reliability of the data stems.

Do these informal practices of data recording and correction threaten the goals of trials as safety and efficacy tests? We know that RCTs do not provide full information about the effects of a drug, as the statistics on adverse effects reported to the FDA show. But, at the same time, regulatory clinical trials have so far been reasonably good at screening off the pharmaceutical markets from toxic and ineffective compounds. If trials were conducted to learn as much as we could about new treatments, perhaps the sort of contextual information provided in these case studies would help. In her chapter, Ann Kelly shows, for instance, how the self-selection of participants in a trial may turn out to be a good thing if the information gathered about this particular group of patients shows how best to implement a medical intervention. However, most RCTs are conducted just to prove certain effects to a skeptical audience (the regulatory agencies). Given RCTs’ track record of efficacy for regulatory purposes, how would an ethnography of the trial, or any reform of the type Will and Moreira advocate, help the regulator in making a decision? Would it improve our current standards of safety and efficacy? Would it just make the trials more credible to the public?

At any rate, if case studies are to play a role in this reshaped regulatory process, we ought to require from them the same warrants of impartiality we require from RCTs. A number of well-documented biases interfere in the conduct of trials, and we try, at least, to prevent them with devices such as blinding and randomization. If a case study on the conduct of a trial is to be taken into account by the regulator, by way of background information, how can the regulator know that the report is not biased? Sociologists and anthropologists are presumably as vulnerable to biases as any other researcher involved in a trial, and the case study should incorporate methodological caveats preventing partiality. Will and Moreira do not mention any such safeguards in their conclusions, but if their proposal ever succeeds, I am sure this is a problem they will have to address.

{December 2011}
{Theoretical Medicine and Bioethics 33.5 (2012)}
