
10/9/21


The rule in the knowledge machine

Michael Strevens, The Knowledge Machine: How Irrationality Created Modern Science, Liveright, New York, 2020, 350 pp., hardback, $30

The Knowledge Machine is a book about the iron rule of explanation (IRE). According to Michael Strevens, science has worked because scientific communities have strictly played by this rule ever since Newton. In the author’s own words, this is:

The rule demanding that all scientific arguments be settled by empirical testing, along with the elaborations that give the demand its distinctive content: a definition of empirical testing in terms of shallow causal explanation, a definition of official scientific argument as opposed to informal or private reasoning, and the exclusion of all subjective considerations and nonempirical considerations (philosophical, religious, aesthetic) from official scientific argument. [293]

With the IRE Strevens wants to settle the Great Method Debate, initiated by methodists like Popper and Kuhn and then dominated by radical subjectivism (now prevalent among historians and sociologists of science). According to Strevens, the former focused on the wrong rule, be it falsificationism or the organization of scientific paradigms. The latter deny that there is any correct rule: scientific outcomes, like any social agreement, are a matter of taste, interests, power, and so on. Strevens accepts the role of all these factors in the dynamics of science, but only as condensed into plausibility rankings, “a scientist’s level of confidence that a hypothesis or other assumption is true” [293]. Subjectivity is then constrained by the IRE: the game of science consists in scientists organizing empirical tournaments from which a winner emerges, independently of the conflicting interests or values of the participants. According to Strevens, the accumulation of evidence brings about, in the long run, consensus on the true theory, the one that explains all relevant observations.

The gist of the iron rule is to minimize scientific debate about things scientists may not easily agree on and to motivate scientists to “squeeze every last drop of predictive power” from a scientific paradigm. For Strevens, playing by the IRE and only the IRE is irrational: the IRE “imposes a wholesale prohibition on all forms of nonempirical thinking, no matter their track record, no matter how well they synergize with empirical observations” [237]. A chapter on the fruitfulness of beauty as a guiding principle of science exemplifies this point. But the alternative (using other guiding principles in addition to the IRE) is worse: scientists may never reach agreement.

Strevens discusses the Thirty Years’ War to illustrate how making religion a private matter is the best strategy to avoid civil unrest; modern science, he argues, has its foundations in this separation. The recipe is still valid today: keep empirical tests separate from any other consideration, let these tests proceed until a consensus is reached, keep science working like a well-oiled automaton (a knowledge machine), and do not meddle with the IRE.

Although Strevens is famous for his dense prose and subtle conceptual analyses, The Knowledge Machine was conceived as a popular philosophy book, and quite a successful one at that, with reviews already in major international newspapers and magazines such as The New Yorker. One reason for this lies in the long collection of snapshots from the history of science that illustrate the concepts presented above and make for fun reading. Radical (and a few moderate) subjectivists will probably challenge the details of these abridged case studies, but this is a scholarly debate for which Strevens will probably be ready, although the footnotes and references are rather sketchy, so it is often difficult to gauge the depth of his knowledge of each particular case.

What was less clear to me, though, was the message this book sends to the public. As Strevens acknowledges in the first half of the book, his predecessors in the Great Method Debate all conveyed an image of the scientist that became hugely influential among educated Westerners: the Popperian dissenter, the Kuhnian Cold Warrior, the Latourian black-boxer. These images made plain sense against the background of the political dilemmas of their time, partly reconstructed by Strevens for his readers. About our own dilemmas, however, Strevens remains mostly silent, and his final advice sounds almost like an oracle’s: “Do not tamper with the workings of the knowledge machine. Set its agenda, and then step back: let it run its course” [285]. Strevens does not explicitly say who is meddling with the IRE or who would still oppose it after grasping his consequentialist argument. Perhaps a few ongoing agendas in philosophy of science (and in Science and Technology Studies) could be seen as targeting the IRE. Feminist standpoint theories, for instance, defend a reassessment of what counts as evidence in order to illuminate potential sexist biases. Similarly, advocates of embedding philosophers in scientific laboratories claim that conceptual analyses can have a real impact on the advancement of science. Would any of these approaches count as threats to the IRE?

Perhaps more serious meddlers are the many forms of populism proliferating around the world. After all, the IRE has a technocratic taste: once their goals are set by democratic parliaments, science, like hospitals, courts, or central banks, works better as an independent institution in which experts make the relevant decisions according to their own rules. Challenging the autonomy of science in the name of “subjective or nonempirical considerations” would be a typical populist move. For instance, I would count as populist the call to accommodate patients’ preferences in the design of clinical trials at the expense of traditional debiasing methods (such as blinding). But I cannot tell whether this is the sort of challenge Strevens is concerned with, because almost all of the examples discussed in the book are success stories from the natural sciences before 1950.

It is always nice to be reminded of how well some scientific disciplines have worked in the past, and, at least to me, the IRE seems a plausible account of this success. But I am not sure about the effectiveness of such a reminder in persuading contemporary audiences of the benefits of the autonomy of science. My first concern is that such reminders have been tried before without much success. Reading The Knowledge Machine, I could not help thinking of Max Weber’s arguments about the scientific vocation. Like Strevens, Weber was inspired by how the Protestant Reformation brought about a world in which the private faith of individual agents had unintended beneficial consequences for everyone (i.e., economic growth), provided that Church and State were kept apart. Like Strevens, Weber praised scientific specialization and called for leaving aside all value judgments so as to prioritize consequentialist considerations. And yet the Great Method Debate started because, after World War II, only Merton was persuaded that a general code of conduct was enough to account for the success of science. Will Strevens’s arguments be more persuasive today?

I agree that having clearly articulated (iron) rules will increase public trust in any institution. Nonetheless, my second concern is that the problem we are now facing is growing mistrust regarding the enforcement of any such rule. Think again of randomized clinical trials in medicine: despite the conflicting interests at stake, the systematic implementation of the IRE allows the truth about whether medical treatments work to emerge with the accumulation of evidence (thanks, e.g., to the Cochrane Collaboration). And yet more and more patients are persuaded that the whole testing system is bankrupt because some particular trials are rigged by their corporate sponsors. A Weberian reminder that scientists have successfully played by the IRE in the past to everyone’s satisfaction, and that we should keep their effort going, will do little, in my view, to appease an audience sceptical about whether the IRE is being enforced today.

But maybe I am overinterpreting Strevens’s argument. After all, defending science from the meddlers is the topic of just seven pages out of 300. Perhaps this is simply a public reminder that science works for a very simple reason, one that the examples in the book easily convey. It is an entertaining read, and it will comfort any Weberian soul struggling to keep her faith in science alive in our increasingly challenging world. At least, it has helped me.

{August, 2021}

{Metascience}

 



5/4/15

Paul Erickson, Judy L. Klein, Lorraine Daston, Rebecca Lemov, Thomas Sturm, and Michael D. Gordin, How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality, University of Chicago Press, Chicago (Ill.), 2013

Is rationality a clean and well-defined concept, like a system of axioms, or rather a sticky syrup, like a “bowl of molasses”? Philosophers, at least within the analytic tradition, usually opt for axiomatic definitions; a paradigmatic illustration is expected utility theory (EUT). Established by von Neumann and Morgenstern in 1947, and later expanded by Savage in 1954, it quickly became the paradigm for the analysis of decisions between uncertain alternatives, until the accumulation of experimental anomalies forced economists to search for alternatives, although none has completely displaced it to date. The normative appeal of EUT (as a warrant of consistency in our decisions) still captivates philosophers, but all these anomalies have forced us to rethink whether there is some axiomatic unity to our rational choices or whether our decisions are like molasses and EUT is just one bowl containing some of it.
Herbert Simon made the remark about the bowl of molasses while commenting on how “the irrational is the boundary of the rational” (115). As I read it, How Reason Almost Lost Its Mind (CWR from now on) is a book about such boundaries: the concepts that came to define rationality among social scientists in the 1950s and 1960s would not have held together were it not for the context (the bowl) provided by the Cold War.
The ideal type of Cold War rationality (3-4) would be a formal algorithm mechanically providing the best solution to a given problem. These algorithms would originate in the analytical decomposition of the actions of a person of “seasoned experience and proven judgment” (43), so that anyone could implement such sets of rules and obtain the same success. Human calculators, computing variables for astronomers in many nineteenth- and twentieth-century observatories, provide a paradigm of the rationality that the Cold War would generalize, thanks mostly to the success of algorithms in foundational research on the paradoxes of set theory. Ideas then flowed from the more abstract regions of mathematics to the social sciences, in a process accelerated by the Second World War and its aftermath.
One major issue in the historiography of rational choice theory is to what extent it was shaped by such a context: is it, for instance, anything more than a formal rendition of the neoliberal ideology that emerged after the war? If so, we may wonder whether the rules defining rationality were somehow tainted by their military uses. CWR’s answer seems mostly negative. For a start, Cold War rationality is more than a simple combination of decision and game theory. Bounded rationality was just as much a Cold War product. The organization of the airlift that provided basic supplies to Berlin during the Soviet blockade started a research agenda on military management that ultimately led Herbert Simon to defend the necessity of non-optimizing decision rules. The limitations of information and computing power that plagued such projects left no alternative.
Even further from rational choice standards, but equally part of Cold War rationality, was Charles Osgood’s GRIT, the acronym for “graduated and reciprocated initiatives for tension reduction”. Osgood, a psychologist, studied strategies for de-escalating conflicts, paradigmatically the nuclear arms race. Osgood did not establish his decision rules on formal grounds and, as CWR points out, they were difficult to test experimentally. Nonetheless, they had the algorithmic form distinctive of the era. And there is even more to Cold War rationality. Rules did not emerge only in theoretical contexts as solutions to a given problem, formal or not. There was also empirical research on how rules emerged, via the analysis of “situations”. These were small-group interactions placed in a context that could be externally controlled and observed: e.g., a negotiation in a room with microphones and one-way mirrors, such as the one used by Robert Bales at the Harvard Laboratory of Social Relations in the 1950s. Decomposing the interaction into its minimal elements and coding how often each featured would allow social scientists to engineer future exchanges so that they yielded the desired outcomes.
If Cold War rationality is so diverse (and showing it is a major contribution of this book), we may well grant that its content was not constrained by one single agenda. But then what brings all these different algorithms together under the umbrella of rationality? The bowl containing this molasses would have been the military demand for procedures that could handle the complexity of Cold War issues, from nuclear strategy to logistics and negotiation processes. Military budgets funded research according to their needs, independently of any disciplinary boundary. The RAND Corporation was probably the most successful hotbed of Cold War rationality, but we can find research programs tied to the military in university departments all over the United States.
According to CWR, the threat of a nuclear conflict was powerful enough to break through the different paradigms then available for the study of decision making and bring them into a real debate. Had it not been for the Cold War, the topic might have been studied along more conventional disciplinary paths, with a different level of mutual engagement, just as happened after the end of the Cold War. When the bowl of military demand cracked, the molasses of rationality spilled into a plethora of experiments that showed a plurality of decision rules at work (namely, heuristics and biases), deviating to varying degrees from formal standards of rational choice.
CWR shows, in sum, that Cold War rationality was more diverse than rational choice theory, and that the one in the many, bringing together all this diversity, was the context provided by the nuclear threat. Both points are carefully argued, and I have learnt a great deal from this volume. But, of course, it is my task to challenge them, trying to live up, I hope, to the spirit of those foundational Cold War debates.
Starting with diversity, the authors define their ideal type in the most encompassing manner, but they often argue as if the canon within Cold War rationality were rational choice theory. I don’t think, at least, that any of the other approaches discussed in the book exhibits to the same degree the features of the type listed on p. 5: formal methods modelling self-interested individuals in conflict, with a radical simplification of the circumstances and a step-by-step, impersonal approach to a solution. As the authors acknowledge (p. 94), GRIT rules for conflict resolution are not as algorithmic as the identification of Nash equilibria. Even if the setting put individuals in conflict, situation rooms apparently neither sought nor yielded decision rules (p. 124). And certainly “the collapse of Cold War rationality” came with experimental tests of rational choice models. The other research agendas sparked by the nuclear threat apparently did not make it that far: whereas Cold War results in, e.g., game theory are still part of the standard curriculum in some social sciences, most other topics addressed in this book belong only in the history of their disciplines.
Why not tell the story of Cold War rationality, in all its diversity, giving each of its different constituents its due weight? At this point it would not seem Whiggish, just an acknowledgment of the longer reach of rational choice theory among Cold War theories. My impression is that the authors are not very inclined to make such a distinction, because they implicitly disagree with the sort of social science associated with rational choice theory (e.g., “the notoriously mean and lean Homo economicus”, p. 185), and they treat it as if its time were already past. This is how I make sense, at least, of the title of the book: “How Reason Almost Lost Its Mind”. The reduction of rationality to formal decision models was often achieved “at the expense of reason” (p. 2), where the latter is understood as the sort of Enlightened wisdom that an automaton can only poorly imitate. The “almost” in the title seems to suggest that the Cold War is over and there is a chance for reason to regain its ground. Hence, perhaps, the surprise that philosophers still spend so much time on rational choice theory today (p. 187). But what are the reasonable alternatives that we should be discussing instead?
I would have expected, though, that a historical analysis of Cold War rationality would make such alternatives explicit at the points where they were perhaps considered and discarded. Taking up EUT again by way of example, it might have been more fruitful to discuss how a reasonable decision rule from the Enlightenment (Bernoullian utility functions) became the standard of Cold War decision making. After all, choosing between uncertain alternatives according to an average (expected utility maximization) is just one option among others (why not focus on the variance instead?). What made it so attractive circa 1940? Can we explain its normative appeal by a priori considerations (such as Savage’s Dutch Book argument), or is it also ideological? After all, the alternatives to EUT now under construction among decision theorists (prospect theory, etc.) are closer to Cold War rationality standards than to any sort of mindful reason. And this was already the case in the 1950s: think of Allais’s arguments about the reasonableness of EUT. So what is the reason beyond rationality in CWR?
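To make the mean-versus-variance aside concrete, here is a toy calculation of my own (not taken from the book): compare a sure payoff A of 50 with a fair gamble B paying either 0 or 100.

$E[A] = 50, \qquad E[B] = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 100 = 50$
$\mathrm{Var}(A) = 0, \qquad \mathrm{Var}(B) = \tfrac{1}{2}(0-50)^2 + \tfrac{1}{2}(100-50)^2 = 2500$

A rule that maximizes the expected value (or expected utility with a linear utility function) is indifferent between A and B, whereas a variance-sensitive rule prefers the sure payoff; settling on the average is thus a substantive choice, not the only conceivable standard.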
Alternatively, we may wonder whether the Cold War is really over, as far as the idea of rationality in the social sciences is concerned. The contextual pressure of a nuclear threat may have propelled the deployment of rational choice theory in the social sciences. But probably something else is keeping it in place now, for the right or the wrong reasons. Reading CWR, I cannot quite understand why such algorithmic standards of rationality still prevail. The answer might not be epistemic at all: since the American military and policy makers were mostly alone on the demand side for rationality during the Cold War, I cannot help wondering whether they were satisfied with the outcome they funded so generously. Perhaps the survival of Cold War rationality is, after all, the survival of the intellectual institutions that won that war. Historians are certainly among the few who can answer that.

{September, 2014}