Friday, October 30, 2009

Causal realism for sociology



The subject of causal explanation in the social sciences has been a recurring thread here (thread). Here are some summary thoughts about social causation.

First, there is such a thing as social causation. Causal realism is a defensible position when it comes to the social world: there are real causal relations among social factors (structures, institutions, groups, norms, and salient social characteristics like race or gender). We can give a rigorous interpretation to claims like "racial discrimination causes health disparities in the United States" or "rail networks cause changes in patterns of habitation".

Second, it is crucial to recognize that causal relations depend on the existence of real social-causal mechanisms linking cause to effect. Discovery of correlations among factors does not constitute the whole meaning of a causal statement. Rather, it is necessary to have a theory of the mechanisms and processes that give rise to the correlation. Moreover, it is defensible to attribute a causal relation to a pair of factors even in the absence of a correlation between them, if we can provide evidence supporting the claim that there are specific mechanisms connecting them. So mechanisms are more fundamental than regularities.

Third, there is a key intellectual obligation that goes along with postulating real social mechanisms: to provide an account of the ontology or substrate within which these mechanisms operate. This I have attempted to provide through the theory of methodological localism (post) -- the idea that the causal nexus of the social world is constituted by the behaviors of socially situated and socially constructed individuals. To put the claim in its extreme form, every social mechanism derives from facts about institutional context, the features of the social construction and development of individuals, and the factors governing purposive agency in specific sorts of settings. And different research programs target different aspects of this nexus.

Fourth, the discovery of social mechanisms often requires the formulation of mid-level theories and models of these mechanisms and processes -- for example, the theory of free-riders. By mid-level theory I mean essentially the same thing that Robert Merton meant to convey when he introduced the term: an account of the real social processes that take place above the level of isolated individual action but below the level of full theories of whole social systems. Marx's theory of capitalism illustrates the latter; Jevons's theory of the individual consumer as a utility maximizer illustrates the former. Coase's theory of transaction costs is a good example of a mid-level theory (The Firm, the Market, and the Law): general enough to apply across a wide range of institutional settings, but modest enough in its claim of comprehensiveness to admit of careful empirical investigation. Significantly, the theory of transaction costs has spawned major new developments in the new institutionalism in sociology (Mary Brinton and Victor Nee, eds., The New Institutionalism in Sociology).

And finally, it is important to look at a variety of typical forms of sociological reasoning in detail, in order to see how the postulation and discovery of social mechanisms play into mainstream sociological research. Properly understood, there is no contradiction between the effort to use quantitative tools to chart the empirical outlines of a complex social reality, and the use of theory, comparison, case studies, process-tracing, and other research approaches aimed at uncovering the salient social mechanisms that hold this empirical reality together.

Wednesday, October 28, 2009

Fair prices?




We live in a society that embraces the market in a pretty broad way. We accept that virtually all goods and services are priced through the market at prices set competitively. We accept that sellers are looking to maximize profits through the prices, quantities, and quality of the goods and services that they sell us. We accept, though a bit less fully, the idea that wages are determined by the market -- a person's income is determined by what competing employers are willing to pay. And we have some level of trust that competition protects us against price-gouging, adulteration, exploitation, and other predatory practices. A prior posting questioned this logic when it comes to healthcare. Here I'd like to see whether there are other areas of dissent within American society over prices.

Because of course it wasn't always so. E. P. Thompson's work on early modern Britain reminds us that there was a "moral economy of the crowd" that profoundly challenged the legitimacy of the market; that these popular moral ideas specifically and deeply challenged the idea of market-defined prices for life's necessities; and that the crowd demanded "fair prices" for food and housing (Customs in Common: Studies in Traditional Popular Culture). The moral economy of the crowd focused on the poor -- it assumed a minimum standard of living and demanded that the millers, merchants, and officials respect this standard by charging prices the poor could afford. And the rioting that took place in Poland in 1988 over meat prices, or the rice riots in Indonesia in 2008, are reminders that this kind of moral reasoning isn't merely part of a pre-modern sensibility.  (For some quotes collected by E. P. Thompson from "moral economy" participants on the subject of fair prices see an earlier posting on anonymity.)

So where do contemporary Americans show a degree of moral discomfort with prices and the market? Where does the moral appeal of the principles of market justice begin to break down -- principles such as "things are worth exactly what people are willing to pay for them" and "to each what his/her market-determined purchasing power permits him/her to buy"?

There are a couple of obvious exceptions in contemporary acceptance of the market. One is the public outrage about executive compensation in banking and other corporations that we've seen in the past year. People seem to be morally offended at the idea that CEOs are taking tens or hundreds of millions of dollars in compensation -- even in companies approaching bankruptcy. Part of the outrage stems from the perception that the CEO can't have brought a commensurate gain to the company or its stockholders, witness the failing condition of many of these banks and companies. Part is a suspicion that there must be some kind of corrupt collusion going on in the background between corporate boards and CEOs. But the bottom line moral intuition seems to be something like this: nothing could justify a salary of $100 million, and executive compensation in that range is inherently unfair. And no argument proceeding simply along the lines of fair market competition -- "these are competitive rational firms that are offering these salaries, and therefore whatever they arrive at is fair" -- cuts much ice with the public.

Here is another example of public divergence from acceptance of pure market outcomes: recent public outcries about college tuition. There is the common complaint that tuition is too high and students can't afford to attend. (This overlooks the important fact that public and private tuitions are almost an order of magnitude apart -- $6,000-12,000 versus $35,000-42,000!) But notice that this is a "fair price" argument that would be nonsensical when applied to the price of an iPod or a Lexus. People don't generally feel aggrieved because a luxury car or a consumer device is too expensive; they just don't buy it. It makes sense to express this complaint in application to college tuition because many of us think of college as a necessity of life that cannot fairly be allocated on the basis of ability to pay. (This explains why colleges offer need-based financial aid.) And this is a moral-economy argument.

And what about that other necessity of life -- gasoline? Public complaints about $4/gallon gas were certainly loud a year ago. But they seem to have been grounded in something different -- the suspicion that the oil companies were manipulating prices and taking predatory profits -- rather than an assumption of a fair price determined by the needs of the poor.

Finally, what about salaries and wages? How do we feel about the inequalities of compensation that exist within the American economy and our own places of work? Americans seem to accept a fairly wide range of salaries and wages when they believe that the differences correspond ultimately to the need for firms to recruit the most effective personnel possible -- a market justification for high salaries. But they seem to begin to feel morally aggrieved when the inequalities that emerge seem to exceed any possible correspondence to contribution, impact, or productivity. So -- we as Americans seem to have a guarded level of acceptance of the emergence of market-driven inequalities when it comes to compensation.

One wonders whether deeper resentment about the workings of market forces will begin to surface in our society, as unemployment and economic recession settle upon us.

Saturday, October 24, 2009

Comparative life satisfaction


We tend to think of the past century as being a time of great progress when it comes to the quality of life -- for ordinary people as well as the privileged. Advances in science, technology, and medicine have made life more secure, predictable, productive, educated, and healthy. But in what specific ways is ordinary life happier or more satisfying for ordinary people in 2000 compared to their counterparts in 1900 or 1800 -- or the time of Socrates, for that matter?

There are a couple of things that are pretty obvious. Nutrition is one place to start: the mass population of France, Canada, or the United States is not subject to periodic hunger, malnutrition, or famine. This is painfully not true for many poor parts of the world -- Sudan, Ethiopia, and Bangladesh, for example. But for the countries of the affluent world, the OECD countries, hunger has been largely conquered for most citizens.

Second, major advances in health preservation and the treatment of illness have taken place. We know how to prevent cholera, and we know how to treat staph infections with antibiotics. Terrible diseases such as polio have been all but eradicated, and we have effective treatments for some kinds of previously incurable cancers. So the basic health status of people in the affluent twenty-first century world is substantially better than that of previous centuries -- with obvious consequences for our ability to find satisfaction in life activities.

These advances in food security and public health provision have resulted in a major enhancement to quality of life -- life expectancy in France, Germany, or Costa Rica has increased sharply. And many of the factors underlying much of this improvement are not high-tech, but rather take the form of things like improvement of urban sanitation and relatively low-cost treatment (antibiotics for children's ear infections, for example).

So living longer and more healthily is certainly an advantage in our quality of life relative to conditions one or two centuries ago.

Improvements in labor productivity in agriculture and manufacturing have resulted in another kind of enhancement of modern quality of life. It is no longer necessary for a large percentage of humanity to perform endless and exhausting labor in order to feed the rest of us. And because of new technologies and high labor productivity, almost everyone has access to goods that extend the enjoyment of life and our creative talents. Personal computing and communications, access to the world's knowledge and culture through the Internet, and ability to travel widely all represent opportunities that even the most privileged could not match one or two centuries ago.

But the question of life satisfaction doesn't reduce to an inventory of the gadgets we can use. Beyond the minimum required for sustaining a healthy human body, the question of satisfaction comes down to the issue of what we do with the tools and resources available to us and the quality of our human relationships. How do we organize our lives in such a way as to succeed in achieving goals that really matter?

Amartya Sen's economic theory of "capabilities and realizations" supports a pretty good answer to these questions about life satisfaction (Development as Freedom). Each person has a bundle of talents and capabilities. These talents can be marshalled into a meaningful life plan. And the satisfying life is one where the person has singled out some important values and goals and has used his/her talents to achieve these goals. (This general idea underlies J. S. Mill's theory of happiness in Utilitarianism as well.)

By this standard, it's not so clear that life in the twenty-first century is inherently more satisfying than that in the eighteenth or the second centuries. When basic needs were satisfied -- nutrition, shelter, health -- the opportunities for realizing one's talents in meaningful effort were no less extensive than they are today. This is true for the creative classes -- obviously. The creative product of J. S. Mill's or Victor Hugo's generation was no less substantial or satisfying than our own. But perhaps it is true across the board. The farmer-gardener who shapes his/her land over the course of a lifetime has created something of great personal value and satisfaction. The mason or smith may have taken more pride and satisfaction in his life's work than does the software programmer or airline flight attendant. The parent who succeeded in nurturing a family in 1800 County Cork may have found the satisfactions as great or greater than parents in Boston or Seattle today.  (Richard Sennett explores some of these satisfactions in The Craftsman.)

So we might say that the chief unmistakable improvement in quality of life in the past century is in the basics -- secure nutrition, improved health, and decent education during the course of a human life. And the challenge of the present is to make something meaningful and sustaining of the resources we are given.

Thursday, October 22, 2009

Cooperation



How important is cooperation in a market society?

First, what is cooperation? Suppose a number of individuals occupy a common social and geographical space. They have a variety of individual interests and things they value, and they have outcomes they'd like to bring about. Some of those outcomes are purely private goods, and some can be brought about through private activities by each individual.  These are the circumstances where private market-based activity can bring about socially optimal outcomes.

But some outcomes may look more like public or common goods -- for example, greater safety in the neighborhood or more sustainable uses of resources.  These are outcomes that no single individual can bring about, and -- once established -- no one can be excluded from the enjoyment of these goods.  (Public choice theorists sometimes look at other kinds of non-private goods such as "club goods"; see Dennis Mueller, Perspectives on Public Choice: A Handbook.)

Further, some outcomes may in fact be private goods, but may be such that they require coordinated efforts by multiple individuals to achieve them efficiently. An example of this is traditional farming: it may be that the yield on one individual's plot is greater if a group of neighbors provides concentrated labor on weeding this plot today and the neighbor's plot tomorrow than if each farmer does all the weeding on his or her own plot alone. The technical conditions surrounding traditional agriculture impose a cycle of labor demand that makes cooperation an efficient strategy.

This is where cooperation comes in. If a number of the members of a group agree to contribute their efforts to a common project, they may find that the total results are greater -- for both common goods and private goods -- than if each had pursued these goods through individual efforts. Cooperation can lead to improvement in the overall production of a good for a given level of sacrifice of time and effort.  This description uses the word "agree"; but Robert Axelrod (The Evolution of Cooperation) and David Lewis (Convention: A Philosophical Study) observe that many examples of cooperation depend on "convention" and tacit agreement rather than an explicit understanding among participants.

So cooperation can lead to better outcomes for a group and each individual in the group than would be achievable through entirely private efforts.

Cooperation should be distinguished from altruistic behavior; cooperation makes sense for rationally self-interested individuals if appropriate conditions are satisfied.  A cooperative arrangement can make everyone better off.  So we don't have to assume that individuals act altruistically in order to account for cooperation.

So why is cooperation not ubiquitous? It is in fact pretty widespread. But there are a couple of important obstacles to cooperation in ordinary social life: the rational incentive that exists to become a free-rider or easy rider when the good in question is a public good; and the risk that cooperators run that the endeavor will fail because of non-contribution from other potential contributors. There is also often a timing problem: it is common for the contribution and the benefit to be separated in time, so contributors are even more concerned that they will be denied the benefits of cooperation. If Mr Wong is asked to weed today in consideration of assistance from Mr Li in harvesting the crop four months from now, he may be doubtful about the future benefit.

The basic logic of this situation has stimulated a mountain of great social science research and theory. Garrett Hardin's "tragedy of the commons" (Managing the Commons) and Mancur Olson's The Logic of Collective Action: Public Goods and the Theory of Groups set out the negative case for thinking that cooperation is all but impossible to sustain.  Elinor Ostrom's Nobel Prize-winning work on common property resource regimes documents the ways in which communities have solved these cooperation dilemmas (Governing the Commons: The Evolution of Institutions for Collective Action). Douglass North essentially argues that only private property and binding contracts can do the job (The Rise of the Western World: A New Economic History). And Robert Axelrod has made the case for the rational basis of cooperation in The Evolution of Cooperation: Revised Edition. He argues that there are specific conditions that enhance or undermine cooperation and reciprocity; essentially, participants need to be able to reidentify each other over time and they need to have a high likelihood of continuing to interact with each other over an extended time. (His analysis is based on a series of experiments involving repeated prisoners' dilemmas.)
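
The logic is easy to make concrete with a toy model. Here is a minimal sketch of a repeated prisoners' dilemma -- my own illustration with standard textbook payoff values, not Axelrod's tournament code -- showing why repetition and reidentification matter for reciprocity.

```python
# Minimal repeated prisoners' dilemma, in the spirit of Axelrod's tournaments.
# Payoffs are the standard illustrative values: T=5 (temptation), R=3 (mutual
# cooperation), P=1 (mutual defection), S=0 (sucker's payoff).

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the partner's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds):
    """Play a repeated game and return total payoffs for each player."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): sustained cooperation
    print(play(tit_for_tat, always_defect, 10))  # (9, 14): cooperation collapses
```

In a single round defection is the dominant move; but when the same recognizable partners expect to keep interacting, a conditional cooperator paired with its own kind earns far more over time (30 points in ten rounds) than a pair locked into mutual defection would (10 points each) -- which is the heart of Axelrod's argument about reciprocity.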

A market can "simulate" cooperation through enforceable contracts; so, for example, a peasant farming community could create a legally binding system of labor exchange among households.  And organizations can create quasi-binding agreements for cooperation through "memoranda of understanding" and "inter-governmental agreements" -- written agreements that may not be enforceable through legal remedies but nonetheless create a strong incentive for each party to fulfill the obligations of cooperation.  However, many of the opportunities for cooperation seem to fall outside the sphere of these formal and semi-formal mechanisms for binding agreements.

Informal cooperation needs some kind of institutional or normative setting that encourages compliance with the cooperative arrangement.  So there has been an energetic debate in the past twenty years over the feasibility of non-coercive solutions to cooperation problems; this is an area where the new institutionalism has played a key role.  And in the real world, we do in fact find numerous sustainable examples of informal cooperation.  Individuals work in community gardens; foundations join together in supporting urban renewal projects; villagers create labor-sharing practices.  But it is an interesting question to consider: are there institutional reforms that we could invent that would allow us as a society to capture more of the benefits of cooperation than we currently realize?

Monday, October 19, 2009

Paying for health



A person's income determines his/her access to many things he/she wants and needs: food, clothing, transportation, housing, entertainment, and the internet, for example. And people who have higher income are able to consume more of all of these categories than people with lower income, if they choose to. More affluent people shop for food at Papa Joe's or Whole Foods; live in larger and more luxurious homes; buy their clothing from boutiques rather than Penney's or the thrift shop; and drive multiple handsome cars. Poor people can't afford the luxury end of these forms of consumption. And in some way our culture has judged that these sorts of inequalities of consumption are a legitimate and fair part of a market economy; if you judge that inequalities of income are justifiable (perhaps with some limits on extremes), then you pretty much have to support the idea of inequalities of consumption as well.

But what about goods that have a price but that are essential to living a decent human life? Food certainly falls in this category; if 30% of society could literally not afford to purchase enough calories to provide 2200-2900 calories per day for adults and 1800 calories for children, then we would probably have a different idea about the fairness of a market for food -- the principle that says "to each according to his/her earning capacity" doesn't seem very convincing in circumstances where it leads to malnutrition or starvation. In other words, if the normal workings of a market economy left a significant segment of the population without the ability to purchase enough food for subsistence, we would surely judge that this isn't a fair or socially just way of distributing income and food. And there is an important point to be noted here: there is hunger in America, and the system of producing goods and income isn't fully satisfying the subsistence needs of the whole population. (This is exactly what makes it so compelling that our government should provide food assistance for the very poor, through food stamps or targeted income supplements.) So there is an important issue about the justice of current actual distributions of such basic goods as food, clothing, or shelter across the U.S. population.

But push a little deeper and consider the "market for health care". Supporting one's current healthy status is a costly effort; repairing the body in times of traumatic injury or serious illness is even more costly; and our society leaves a lot of the allocation of health care services to private purchasing power. Health insurance is the primary vehicle through which many Americans provide financially for their health care needs. Some people have insurance provided or subsidized through their employers; some families purchase health insurance through the private market; and many families lack health insurance entirely. Upwards of 47 million Americans are uninsured, including 20% of adults and 9% of children (CDC link). And this includes a wide range of Americans, from the extremely poor to the working poor to the solidly middle class.

It is clear that access to doctors, hospitals, nurses, and prescription drugs is a critical need that everyone faces at various points in life. It is obvious as well that one's future ability to live and work productively and to enjoy a satisfying life is conditioned by one's ability to gain access to health care when it is needed. It is also clear that uncertainty about the availability of health care is a major source of anxiety for many, many people in U.S. society today. So it is self-evident that decent health care is one of our most basic and unavoidable needs.

So what do people do when they lack health insurance and serious illness or injury occurs? This isn't a mystery anymore; families go into debt to doctors and hospitals, they face bankruptcy, they find some limited sources of free care (free clinics, pro bono doctors' services), and they forego "optional" treatments that may well extend the length or quality of life. And it is evident that this pattern results in very serious harms and limitations for people in these groups. People who have the least access to health care through our basic institutions may be expected to live shorter lives and to suffer more.

And what about people at the high end of the income spectrum? How do they relate to the problems of health? Here too the answers are fairly well known: they are able to seek out the best (and most expensive) specialists, travel to national centers for specialized treatment, and undergo advanced diagnostic tests that are not covered by insurance. (Here is a news story from CNN on boutique health care.) The affluent aren't able to assure their health through expenditure -- but they can certainly improve their odds.

In other words, ability to pay influences the quality and extent of health care that an individual or family is able to gain access to; and the health status of the family is affected by these variations in quality and access. So, to some meaningful extent, our social system places health care in the category of a market good.

But here is the question I'm working around to: what does justice require when it comes to health care? Is it right to look at health care as just another consumption good like shoes -- affluent people wear Gucci and poor people wear Dollar Store, but everyone has his/her feet covered? Or is health care in a special category, too closely linked to living a full human life to allow it to be distributed so unequally? (Norm Daniels has spent most of his career looking at this issue, from the points of view of philosophy and concrete policy reform. See Just Health: Meeting Health Needs Fairly for some of his findings.)

It seems a bitter but unavoidable truth that there are very substantial inequalities in the provision of health care in our society. One person's likelihood of surviving a devastating cancer may be significantly less than another person's chances, simply based on the second person's ability to pay for premium health care services. Further, it seems unavoidable that these extreme inequalities are flatly unjust in any society that believes in the equal worth of all human beings. And where this seems to lead is to the conclusion that some system of universal health insurance is a fundamental requirement of justice.

Saturday, October 17, 2009

Demystifying social knowledge



There seem to be a couple of fundamentally different approaches to the problem of "understanding society." I'm not entirely happy with these labels, but perhaps "empiricist" and "critical" will suffice to characterize them.  We might think of these as styles of sociological thinking.  One emphasizes the ordinariness of the phenomena, and looks at the chief challenges of sociology as embracing the tasks of description, classification, and explanation.  The other highlights the inherent obscurity of the social world, and conceives of sociology as an exercise in philosophical theory, involving the work of presenting, clarifying and critiquing texts and abstract philosophical ideas as well as specific social circumstances.

The first approach looks at the task of social knowing as a fairly straightforward intellectual problem. It could be labeled "empiricist", or it could simply be called an application of ordinary common sense to the challenge of understanding the social world. It is grounded in the idea that the social world is fundamentally accessible to observation and causal discovery.  The elements of the social world are ordinary and visible. There are puzzles, to be sure; but there are no mysteries.  The social world is given as an object of study; it is partially orderly; and the challenge of sociology is to discover the causal processes that give rise to specific observed features of the social world.

This approach begins in the ordinariness of the objects of social knowledge.  We are interested in other people and how and why they behave, we are interested in the relationships and interactions they create, and we are interested in institutions and populations that individuals constitute. We have formulated a range of social concepts in terms of which we analyze and describe the social world and social behavior -- for example, "motive," "interest," "emotion," "aggressive," "cooperative," "patriotic," "state," "group," "ethnicity," "mobilization," "profession," "city," "religion." We know pretty much what we mean by these concepts; we can define them and relate them to ordinary observable behaviors and social formations. And when our attention shifts to larger-scale social entities (states, uprisings, empires, occupational groups), we find that we can observe many characteristics of each of these kinds of social phenomena.  We also observe various patterns and regularities in behaviors, institutions, and entities that we would like to understand -- the ways in which people from different groups behave towards each other, the patterns of diffusion of information that exist along a transportation system, the features of conflicts among groups in various social settings. There are myriad interesting and visible social patterns which we would like to understand, and sociologists develop a descriptive and theoretical vocabulary in terms of which to describe and explain various kinds of social phenomena.

In short, on this first approach, the social world is visible, and the task of the social scientist is simply to discover some of the observable and causal relations that obtain among social actors, actions, and composites. To be sure, there are hypothetical or theoretical beliefs we have about less observable features of the social world -- but we can relate these beliefs to expectations about more visible forms of social behavior and organization. If we refer to "social class" in an explanation, we can give a definition of what we mean ("position in the property system"), and we can give some open-ended statements about how "class" is expected to relate to observable social and political behavior. And concepts and theories for which we cannot give clear explication should be jettisoned; obscurity is a fatal defect in a theory.  In short, the task of social science research on this approach is to discover some of the visible and observable characteristics of social behavior and entities, and to attempt to answer causal questions about these characteristics.

This is a rough-and-ready empiricism about the social world. But there is another family of approaches to social understanding that looks quite different from this "empiricist" or commonsensical approach: critical theory, Marxist theory, feminist theory, Deleuzian sociology, Foucault's approach to history, the theory of dialectics, and post-modern social theory. These are each highly distinctive programs of understanding, and they are certainly different from each other in multiple ways. But they share a feature in common: they reject the idea that social facts are visible and unambiguous. Instead, they lead the theorist to try to uncover the hidden forces, meanings, and structures that are at work in the social world and that need to be brought to light through critical inquiry. Paul Ricoeur's phrase "the hermeneutics of suspicion" captures the flavor of the approach.  (See Alison Scott-Baumann's Ricoeur and the Hermeneutics of Suspicion for discussion.) Neither our concepts nor our ordinary social observations are unproblematic. There is a deep and sometimes impenetrable difference between appearance and reality in the social realm, and it is the task of the social theorist (and social critic) to lay bare the underlying social realities. The social realities of power and deception help to explain the divergence between appearance and reality: a given set of social relations -- patriarchy, racism, homophobia, class exploitation -- gives rise to systematically misleading social concepts and theories in ordinary observers.

Marx's idea of the fetishism of commodities (link) illustrates the point of view taken by many of the theorists in this critical vein: what looks like a very ordinary social fact -- objects have use values and exchange values -- is revealed to mystify or conceal a more complex reality -- a set of relations of domination and control between bosses, workers, and consumers.  With a very different background, a book like Gaston Bachelard's The Psychoanalysis of Fire makes a similar point: the appearance represented by behavior systematically conceals the underlying human reality or meaning.  The word "critique" enters into most of Marx's titles -- for example, "Contribution to a Critique of Political Economy."  And for Marx, the idea of critique is intended to bring forward a methodology of critical reading, unmasking the assumptions about the social world that are implicit in the theorizing of a particular author (Smith, Ricardo, Say, Quesnay).  So Capital: Volume 1: A Critique of Political Economy is a book about the visible realities of capitalism, to be sure; but it is also a book intended to unmask both the deceptive appearances that capitalism presents and the erroneous assumptions that prior theorists have brought into their accounts.

The concepts of ideology and false consciousness have a key role to play in this discussion about the visibility of social reality.  And it turns out to be an ambiguous role.  Here is a paragraph from Slavoj Zizek on the concept of ideology from Mapping Ideology:
These same examples of the actuality of the notion of ideology, however, also render clear the reasons why today one hastens to renounce the notion of ideology: does not the critique of ideology involve a privileged place, somehow exempted from the turmoils of social life, which enables some subject-agent to perceive the very hidden mechanism that regulates social visibility and non-visibility? Is not the claim that we can accede to this place the most obvious case of ideology? Consequently, with reference to today's state of epistemological reflection, is not the notion of ideology self-defeating? So why should we cling to a notion with such obviously outdated epistemological implications (the relationship of 'representation' between thought and reality, etc.)? Is not its utterly ambiguous and elusive character in itself a sufficient reason to abandon it? 'Ideology' can designate anything from a contemplative attitude that misrecognizes its dependence on social reality to an action-orientated set of beliefs, from the indispensable medium in which individuals live out their relations to a social structure to false ideas which legitimate a dominant political power. It seems to pop up precisely when we attempt to avoid it, while it fails to appear where one would clearly expect it to dwell.
Zizek is essentially going a step beyond either of the two positions mentioned above.  The empiricist position says that we can perceive social reality.  The critical position says that we have to discover reality through critical theorizing.  And Zizek's position in this passage is essentially that there is no social reality; there are only a variety of texts.

So we have one style that begins in ordinary observation, hypothesis-formation, deductive explanation, and an insistence on clarity of exposition; and another style that begins in a critical stance, a hermeneutic sensibility, and a confidence in purely philosophical reasoning.  Jurgen Habermas draws attention to something like this distinction in his important text, On the Logic of the Social Sciences (1967), where he contrasts approaches to the social sciences originating in analytical philosophy of science with those originating in philosophical hermeneutics: "The analytic school dismisses the hermeneutic disciplines as prescientific, while the hermeneutic school considers the nomological sciences as characterized by a limited preunderstanding."  (This text as well as several others discussed here are available at AAARG.)  Habermas wants to help to overcome the gap between the two perspectives, and his own work actually illustrates the value of doing so.  His exposition of abstract theoretical ideas is generally rigorous and intelligible, and he makes strenuous efforts to bring his theorizing into relationship to actual social observation and experience. 

A contemporary writer (philosopher? historian? sociologist of science?) is Bruno Latour, who falls generally in the critical zone of the distinction I've drawn here.  An important recent work is Reassembling the Social: An Introduction to Actor-Network-Theory, in which he argues for a deep and critical re-reading of the ways we think the social -- the ways in which we attempt to create a social science. The book is deeply enmeshed in philosophical traditions, including especially Gilles Deleuze's writings.  The book describes "Actor-Network-Theory" and the theory of assemblages; and Latour argues that these theories provide a much better way of conceptualizing and knowing the social world.  Here is an intriguing passage that invokes both themes of visibility and invisibility marking the way I've drawn the distinction between the two styles:
Like all sciences, sociology begins in wonder.  The commotion might be registered in many different ways but it's always the paradoxical presence of something at once invisible yet tangible, taken for granted yet surprising, mundane but of baffling subtlety that triggers a passionate attempt to tame the wild beast of the social.  'We live in groups that seem firmly entrenched, and yet how is it that they transform so rapidly?'  ... 'There is something invisible that weighs on all of us that is more solid than steel and yet so incredibly labile.'  ...  It would be hard to find a social scientist not shaken by one or more of these bewildering statements.  Are not these conundrums the source of our libido sciendi? What pushes us to devote so much energy into unraveling them? (21)
What intrigues many readers of Latour's works is that he too seems to be working towards a coming-together of critical theory with empirical and historical testing of beliefs.  He seems to have a genuine interest in the concrete empirical details of the workings of the sciences or the organization of a city; so he brings both the philosophical-theoretic perspective of the critical style along with the empirical-analytical goal of observational rigor of the analytic style. 

Also interesting, from a more "analytic-empiricist" perspective, are Andrew Abbott, Methods of Discovery: Heuristics for the Social Sciences, and Ian Shapiro, The Flight from Reality in the Human Sciences.  Abbott directly addresses some of the contrasts mentioned here (chapter two); he puts the central assumption of my first style of thought in the formula, "social reality is measurable".  And Shapiro argues for reconnecting the social sciences to practical, observable problems in the contemporary world; his book is a critique of the excessive formalism and model-building of some wings of contemporary political science.

My own sympathies are with the "analytic-empirical" approach.  Positivism brings some additional assumptions that deserve fundamental criticism -- in particular, the idea that all phenomena are governed by nomothetic regularities, or the idea that the social sciences must strive for the same features of abstraction and generality that are characteristic of physics.  But the central empiricist commitments -- fidelity to observation, rigorous reasoning, clear and logical exposition of concepts and theories, and subjection of hypotheses to the test of observation -- are fundamental requirements if we are to arrive at useful and justified social knowledge.  What is intriguing is to pose the question: is there a productive way of bringing insights from both approaches together into a more adequate basis for understanding society?

Friday, October 16, 2009

Food security



Food security is a crucial aspect of life, both for a population and a household. By "food security" specialists often mean two different things: the capacity of a typical poor household to secure sufficient food over a twelve-month period (through farm work, day labor, government entitlements, etc.); and the capacity of a poor country to satisfy the food needs of its whole population (through direct production, foreign trade, and food stocks). This involves both food availability and the ability to gain access to food (through entitlements).

A representative description of food security is offered by Shlomo Reutlinger in Malnutrition and Poverty: Magnitude and Policy Options:
Food security ... is defined here as access by all people at all times to enough food for an active, healthy life. Its essential elements are the availability of food and the ability to acquire it. Conversely, food insecurity is the lack of access to sufficient food and can be either chronic or transitory.  Chronic food insecurity is a continuously inadequate diet resulting from the lack of resources to produce or acquire food.  Transitory food insecurity, however, is a temporary decline in a household’s access to enough food.  It results from instability in food production and prices or in household incomes.  The worst form of transitory food insecurity is famine.
Here is how Sen formulates his "capabilities" understanding (developed, for example, in Hunger and Public Action):
The standard of adequacy is best understood functionally: a person, household, or population has food security if it has sufficient access to food to permit full, robust human development and realization of human capacities.
There is an obvious connection between the two definitions at the household and country levels; but from a human point of view it seems more useful to focus on household food security rather than national food security.  A country may in principle have more than sufficient resources to satisfy the food needs of its population, but fail to do so because of internal inequalities.  Thus achieving household food security in the less‑developed world requires both equity and growth.  Amartya Sen and Jean Dreze have made major contributions on hunger and famine in the developing world, and their work can almost always be linked back to the household level.  Here is a good source on their writings: The Amartya Sen and Jean Dreze Omnibus: (comprising) Poverty and Famines; Hunger and Public Action; India: Economic Development and Social Opportunity.

Michael Lipton has also been an important voice on this set of topics.  His central task in Poverty, Undernutrition, and Hunger is an attempt to provide criteria for distinguishing between the poor and the ultra-poor.  The ultra-poor have incomes and entitlements that fall absolutely below the level required to gain access to 80% of 1973 FAO/WHO caloric requirements.  Falling below this level is likely to lead to undernutrition (the failure of food security).  Lipton constructs a "food adequacy standard" as a way of measuring the incidence in a given country of absolute poverty.  Here is his statement of a food adequacy standard:
Income or outlay, just sufficient on this assumption to command the average caloric requirement for one’s age, sex and activity group (ASAG) in a given climatic and work environment, will be taken as meeting the poverty FAS; this is income or outlay on the borderline of poverty, indicating a risk of hunger. Income or outlay, just sufficient to command 80% of this average requirement, will be taken as meeting the ultra-poverty FAS; this is income or outlay at the borderline between poverty and ultra-poverty, indicating a risk of undernutrition and a severe risk of important anthropometric shortfalls. (Lipton 1983: 7)
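To see how the two cutoffs relate, here is a small illustrative calculation. All of the numbers (the caloric requirement, the food price, the optional non-food margin) are hypothetical figures chosen for the example, not Lipton's data.

```python
# Illustrative sketch of Lipton's two food adequacy standards (FAS).
# All figures below are hypothetical, chosen only to show how the
# poverty and ultra-poverty cutoffs relate; they are not Lipton's data.

def fas_thresholds(daily_kcal_requirement, cost_per_1000_kcal, nonfood_margin=0.0):
    """Return (poverty_FAS, ultra_poverty_FAS) as daily outlay figures.

    poverty FAS: outlay just sufficient to command 100% of the average
    caloric requirement for one's age/sex/activity group (ASAG).
    ultra-poverty FAS: outlay sufficient for only 80% of that requirement.
    nonfood_margin optionally adds an allowance for non-food necessities.
    """
    food_cost = daily_kcal_requirement / 1000.0 * cost_per_1000_kcal
    poverty_fas = food_cost * (1 + nonfood_margin)
    ultra_poverty_fas = 0.8 * food_cost * (1 + nonfood_margin)
    return poverty_fas, ultra_poverty_fas

if __name__ == '__main__':
    # Hypothetical adult requirement of 2,700 kcal/day, staple food costing
    # $0.25 per 1,000 kcal, and no separate non-food allowance.
    poverty, ultra = fas_thresholds(2700, 0.25)
    print(f"poverty FAS: ${poverty:.2f}/day")        # roughly $0.68/day
    print(f"ultra-poverty FAS: ${ultra:.2f}/day")    # roughly $0.54/day
```

The point is simply that the ultra-poverty line sits mechanically at 80% of the poverty line's food requirement, so anyone below it is at risk not just of hunger but of measurable anthropometric shortfalls.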
Food security can be put at risk in a variety of ways. Natural conditions can lead to a shortfall of grain production -- flood, drought, or other natural disasters can reduce or destroy the crop across a wide region, leading to a shortfall of supply. Population increase can gradually reduce the grain-to-population ratio to the point where nutrition falls below the minimum required by the population or household. And, perhaps most importantly, prices can shift rapidly in the market for staple foods, leaving poor families without the ability to purchase a sufficient supply to assure the nutritional minimum. It is this aspect of the system that Amartya Sen highlights in his study of famine (Poverty and Famines: An Essay on Entitlement and Deprivation). And it is the circumstance that is most urgent in developing countries today in the face of the steep and rapid rise in grain prices over the past year.

The results of a failure of food security are dire. Chronic malnutrition, sustained over months and years, has drastic effects on the health status of a population. Infant and child mortality increases sharply. Often the gender differences in health and mortality statistics widen. And economic productivity falls, as working families lack the strength and energy needed to labor productively. Famine is a more acute circumstance that arises when food shortfalls begin to result in widespread deaths in a region. The Great Bengal famine, the Ethiopian famine, the Great Leap Forward famine, and the famines in North Korea offer vivid and terrible examples of hunger in the twentieth century.

So what is needed to maintain food security in a poor nation? Some developing countries have aimed at food self-sufficiency -- enacting policies in agriculture that assure that the country will produce enough staples to feed its population. Other countries have relied on a strategy of purchasing large amounts of staple foods on international markets. Here the strategy is to generate enough national income through exported manufactured goods to be able to purchase the internationally traded grain. This is the strategy recommended by neoliberal trade theory. If agriculture is a low value-added industry and the manufacture of electronic components is high value-added, neoliberals reason, then surely it makes sense for the country to generate the larger volume of income through the latter and purchase food with the proceeds.

This logic has given rise to several important problems, however. First is the vulnerability it creates for the nation in the face of sharp price shocks. This is what we have seen in many countries over the past year. And the second is the reality of extensive income inequalities in most developing countries -- with the result that the "gains from trade" may not be sufficiently shared in the incomes of the poorest 40% to permit them to maintain household food security.
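
A back-of-the-envelope sketch, with entirely hypothetical numbers, shows how the two problems bite at once: a grain price shock cuts the food entitlement of the poorest group even when national export earnings are unchanged, and a small income share magnifies the effect.

```python
# Illustrative sketch, with hypothetical numbers, of the "export manufactures,
# import food" strategy: how much imported grain can the poorest group
# command out of national export earnings?

def tons_of_grain_affordable(export_income, grain_price_per_ton, income_share):
    """Tons of imported grain a group can command, given its share of income."""
    return export_income * income_share / grain_price_per_ton

if __name__ == '__main__':
    export_income = 1_000_000_000   # hypothetical: $1 billion in export earnings
    poorest_share = 0.15            # hypothetical: poorest 40% receive 15% of income

    before_shock = tons_of_grain_affordable(export_income, 200, poorest_share)
    after_shock = tons_of_grain_affordable(export_income, 400, poorest_share)

    # A doubling of the world grain price halves the poorest group's command
    # over food even though national income is unchanged -- a failure of
    # entitlements rather than of aggregate supply.
    print(f"before price shock: {before_shock:,.0f} tons")
    print(f"after price shock:  {after_shock:,.0f} tons")
```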

These considerations suggest that it would be wise for developing countries to devote more resources to agricultural development (which often has the effect of narrowing income inequality) and to place greater emphasis on national and regional food self-sufficiency.

Saturday, October 10, 2009

If Marx had been born in Shanghai


Is Marx's vision still relevant in the twenty-first century world?

At bottom, Marx's biggest ideas were "critique," "exploitation," "alienation," "ideology," and "class." He also constructed a fairly specific theory of capitalism and capitalist development -- a theory that has historical pluses and minuses -- and a theory of socialism that can be understood along more democratic and more authoritarian lines. We might say that his theory of capitalism was too deeply grounded in the observed experience of mid-nineteenth-century Britain, and his theory of socialism paid too little attention to the crushing possibilities of power wielded by a future socialist state. Too much economics, too little politics in his worldview -- and too much of a Hegelian "necessitarianism" in his expectations about the future.

History has shown us a few things that Marx too would have recognized, with the benefit of another century of experience. History does not conform to a necessary logic of development. Capitalism is not one thing, but a set of institutions that have proven fairly malleable. There is no single "logic of capitalist development." Compromises and institutional accommodations are possible between contending economic classes. Social democracy, democratic socialism, Stalinist communism, fascist dictatorship, and liberal democracy are all feasible political institutions governing "modern" economic development.

So we might take a deep breath, take a step back, and ask a big counterfactual: How might Marx, with his critical eye for inequality and power and his acute sensibilities as a sociologist -- how might this social critic and theorist have processed the social realities of China, as encountered in Shanghai in the 1980s?

The question forces a lot of refocusing for historical particulars. China was a "proletarian and peasant" state under the governance of a Communist Party. The Great Leap Forward had taken place, massive famine had occurred during agricultural collectivization, the Cultural Revolution had recently ended with great violence throughout -- and the beginnings of a new direction in economic life were starting. Private incentives and market forces were beginning to find a place in the economy. The "family responsibility system" in agriculture was beginning to demonstrate major improvements in productivity in farming. Similar reforms were beginning in industrial ownership and management.

Given these large differences between China and Birmingham -- what sorts of analysis might Marx have arrived at? What would Capital have looked like?

Here are a few possibilities. Given the pre-eminence of politics in China's affairs, the book would have been less exclusive in its focus on the "economic mode of production" and might have offered analysis of the instruments and institutions of coercion. It would have given less prominence to the labor theory of value, even as it would have retained some scheme for tracking value and wealth. Political institutions, and the forms of power associated with office and position, would have been a prominent part of the analysis. The role and dynamics of great cities would have come in. The book would have paid much more attention to international economic relations -- Marx would surely have had much to say about globalization. (Why? Because Marx was an astute and nuanced social observer; and these are crucial factors in metropolitan China in the 1980s-2000.)

But a distinctly Marxist analysis could nonetheless have emerged. The Chinese version of Capital would have emphasized some of the same human and social circumstances that are highlighted in Capital: coercion, inequality, exploitation, domination, and human suffering as a result of social institutions; the leverage provided for the personnel of the state; population movement; and the alienation of ordinary people from their species being. Marx surely would have examined very carefully the large social effects of official corruption, as a system of surplus extraction. The result would have been a different theory, emphasizing different social mechanisms; but giving primacy to many of the same large social characteristics of inequality, domination, and exploitation; perhaps more about the full social order and less of a microscopic view of the economic relations of "capitalism."

This small counterfactual experiment perhaps underlines something else too: that there are important threads of Marx's social theory and social critique that continue to be relevant as we try to analyze and diagnose the fundamental social realities of contemporary societies. And we might also draw this hypothetical impression as well: Marx would probably have been as unwelcome to the authorities of the CCP in China as he was to the rulers of the Prussian state.

Friday, October 9, 2009

Rebuilding employment


The Federal Reserve Bank of Chicago hosted a two-day conference in Detroit this week on the subject of work force adjustment (link). It was convened by the Federal Reserve Bank, the W. E. Upjohn Institute for Employment Research, and the Brookings Institution Metropolitan Policy Program. This is one of the many efforts underway to attempt to address the unemployment crisis we now face in the industrial Midwest. Participants included state and federal jobs officials, foundation leaders, and a few academic specialists.

Are there strategies that a region can pursue that will result in significant job creation? To grow employment in a region there are only a few possibilities: to expand employment in existing companies, to stimulate the creation of new businesses, and to recruit existing businesses to relocate from other regions. In each case the business owner or entrepreneur needs to be confident that he/she can add marginal revenue to the company by hiring the additional worker. This requires that the worker have knowledge and skills whose use will contribute to a saleable product. The product needs to have features of quality and utility that consumers want. Finding the workers who have the right kinds of talent, skill, and knowledge is a key challenge for the business owner. And availability of talented prospective workers is a key aspect of the company's decision to locate or grow in the region.

So what options do these pathways suggest for policy intervention to increase employment? It might be the case that there is latent labor demand out there in existing industries, where employers would hire more workers if they could find people with the right qualifications. In this case, remedial and transformative training could lead to new jobs, shifting workers from old industries to new industries. Second, there may be identified areas of potential expansion of employment where there are specific skills missing in the workforce. Maybe specialized bakeries could sell more products if they could only hire more qualified pastry chefs. Here too it is credible that we could devise specialized training programs that fill in the missing skills. There are specific community college programs that were developed for this reason, responding to the specialized needs of existing employers. But third, we can imagine a region preparing itself for a new surge of business creation and job growth in new industries and sectors. And this requires raising the number of college-educated adults in the region. This constitutes a talent pool that will encourage the expansion of businesses and overall employment.

And sure enough -- this conference focused on "talent" and "entrepreneurship." The industrial Midwest needs more of both; it is pretty well recognized that revitalization requires enhancement of the talent base of the region, and it is recognized that recovery requires the creation of vast numbers of new small businesses.

But what I find interesting and worrisome is the level of skill development that gets most of the attention in these discussions. There is a very clear focus on training rather than higher education. Much of the focus at this conference was on targeted jobs training at a pretty low level -- training programs that provide new skills for unemployed and underemployed workers, with emphasis on laid-off auto workers in Ohio and Michigan. Several speakers emphasized that training programs need to tailor their educational programs closely to the specific needs of regional employers. The key words are skills and training -- not creativity, innovation, and the bachelor's or master's degree.

But this seems wrong-headed to me; surely the most valuable asset a region can have is a significant population of well-educated, creative, and innovative people who have been challenged and stretched by a demanding university education. So shouldn't there be a lot of priority given to the complicated challenge of sustaining high-quality universities and making sure that a high percentage of high school graduates attend them?

In fact, people like Richard Florida at CreativeClass sound a very consistent drumbeat when they talk about the twenty-first century economy, emphasizing innovation and the college-educated workforce. Creativity and invention are the central components of future economic success. But the jobs-training orthodoxy points in a different direction. They emphasize vocational training and community college programs -- the message conveyed by President Obama in his July announcements at Macomb Community College relating to investments in the US community college system. (Perhaps the President's position was influenced by the findings of the 2009 Economic Report to the President, which is worth reading in detail; post.)

It seems to me that Richard Florida is surely right about the medium- and long-term story: our economy needs to constantly move towards greater innovation and greater concentration on knowledge-based sectors. So the goal of increasing the percentage of baccalaureate-level adults in a region is a crucial element of our future economic success. The ability to offer innovative ideas, to provide new kinds of problem-solving, and to work well in nimble teams -- these are crucial "skills" that emerge most frequently from a college-educated workforce. And they are crucial for vibrant business and job growth.

This means that states really need to recognize the crucial role that their universities play in their future economic potential. And we need to work hard to find ways of helping talented young adults complete their college degrees -- including the 25-34 year-olds who have done some college without completing a degree. Unfortunately, public universities are suffering from fiscal crisis almost everywhere in the country. This implies that we are likely to fall even further behind in creating the highly qualified talent pools that our regional and national economies need in order to thrive in conditions of global competition. And this in turn is likely to impede the growth of employment that we all want to see.

Tuesday, October 6, 2009

Technology innovation in Chinese agriculture


It is a commonplace of world history to observe that China had achieved a high level of sophistication in science, medicine, and astronomy by the Middle Ages, but that some unknown feature of social organization or culture blocked the further development of this science into sustained technological advance in the early modern period. Chinese culture was "blocked" from making significant technological advances during the late Ming and early Qing periods, in spite of its scientific advantage over the West in medieval times -- or so a standard version of Chinese economic history has it.

A variety of hypotheses have been offered to account for this supposed fact. For example, in The Pattern of the Chinese Past Mark Elvin argues that China's social and demographic system created the conditions for a "high-level equilibrium trap" in the early modern period. According to Elvin, Chinese social arrangements favored population growth; innovative and resourceful farmers discovered all the feasible refinements of a highly labor-intensive system of traditional agriculture; and population expanded to the point where the whole society was at roughly the subsistence level, consuming virtually the entire agricultural product. There was consequently no social surplus that might have been invested in the discovery of major innovations in agricultural technology; so the civilization was trapped. (Here is a more developed discussion of Elvin's argument.)

Other historians have speculated about features of Confucian culture that might have blocked the transition from scientific knowledge to technological application. The leading Western expert on Chinese science was Joseph Needham (1900-1995), whose multi-volume studies of Chinese science set the standard in this area (Science and Civilisation in China. Volume 1: Introductory Orientations; Clerks and Craftsmen in China and the West). And Needham attributed China's failure to continue making scientific progress to features of its traditional culture.

But here is a more fundamental question: is the received wisdom in fact true? Was Chinese technology unusually stagnant during the early modern period? Agriculture is a particularly important aspect of traditional economic life, so we might reformulate the question more specifically: what was the status of agricultural technology in the seventeenth and eighteenth centuries (late Ming, early Qing)? (See an earlier posting on Chinese agricultural history for more on this subject.)


Economic historian Bozhong Li considers this question with respect to the agriculture of the lower Yangzi Delta in Agricultural Development in Jiangnan, 1620-1850. And since this was the most important agricultural region in China for centuries, his findings carry real weight. (It was also the major cultural center of China, with a heavy concentration of literati.) Li makes an important point about technological innovation by distinguishing between invention and dissemination. An innovation may be discovered in one period but only adopted and disseminated over a wide territory much later, and its economic effects only take hold when there is broad dissemination. This was true of Chinese agriculture during the Ming period, according to Li:
The revolutionary advance in Jiangnan rice agriculture technology appeared in the late Tang and led to the emergence and development of intensive agriculture composed of double-cropping rice and wheat. But this kind of intensive agriculture in pre-Ming times was largely limited to the high-fields of western Jiangnan. In the Ming this pattern developed into what Kitada has called the 'new double-cropping system' and spread throughout Jiangnan, but only in the late Ming did it become a leading crop regime. Similar were the development and spread of mulberry and cotton farming technologies, though they were limited to particular areas and cotton technology's advances came later because cotton was introduced later. Each had its major advances in the Ming. Therefore, technology advances in Ming Jiangnan agriculture were certainly not inferior to those of Song times which are looked at as a period of 'farming revolution'. (40)
Li also finds that there was a significant increase in the number of crop varieties in the early Qing -- another indication of technological development. He observes, "The later the date, the greater the number of varieties. For example, in the two prefectures of Suzhou and Changzhou, 46 varieties were found in the Song, but the number rose to 118 in the Ming and 259 in the Qing" (40). And this proliferation of varieties permitted farmers to adjust their crop to local soil, water, and climate conditions -- thus increasing output per unit of land. Moreover, formal knowledge of the properties of the main varieties increased from the Ming to the Qing period; "By the mid-Qing, the concept of 'early' rice had become clear and exact, and knowledge of 'intermediate' and 'late' strains had also deepened" (42). This knowledge is important, because it indicates an ability to codify the match between a variety and the local farming environment.

Another important process of technology change in agriculture had to do with fertilizer use. Here again Li finds that there was significant enhancement, discovery, and dissemination of new uses of fertilizer in the Ming-Qing period.
A great advance in fertilizer use took place in Jiangnan during the early and mid-Qing, an advance so significant that it can be called a 'fertilizer revolution'. The advance included three aspects: (a) an improvement in fertilizer application techniques, centring on the use of top dressing; (b) progress in the processing of traditional fertilizer; and (c) an introduction of a new kind of fertilizer, oilcake. Although all three advances began to appear in the Ming, they were not widespread until the Qing. (46)
And the introduction of oilcake was very important to the increases in land productivity that Qing agriculture witnessed -- thus permitting a constant or slightly rising standard of living during a period of some population increase.

There were also advances in the use of water resources. Raising fish in ponds, for example, became an important farming activity in the late Ming period, and pond fish became a widely commercialized product in the Qing. Li describes large-scale fishing operations in Lake Tai in Jiangnan using large fishing boats with six masts to catch and transport the fish (62).

So Li's assessment of agricultural technology during the Ming-Qing period is that it was not stagnant; rather, there was significant diffusion of new crops, rotation systems, and fertilizers that led to substantial increases in agricultural output during the period. "In sum, in the Jiangnan plain, land and water resources were used more rationally and fully in the early and mid-Qing than they had been in the late Ming" (64).

Two points emerge from this discussion. First, Li's account does in fact succeed in documenting a variety of knowledge-based changes in agricultural practices and techniques that led to rising productivity during the Ming-Qing period in Jiangnan. So the stereotype of "stagnant Chinese technology" does not serve us well. Second, though, what Li does not find is what we might call "science-based" technology change: for example, the discovery of chemical fertilizer, controlled experiments in rice breeding, or the use of machinery in irrigation. The innovations that he describes appear to be a combination of local adaptation and diffusion of discoveries across a broad territory.

So perhaps the question posed at the start still remains: what stood in the way of development of empirical sciences like chemistry or mechanics that would have supported science-based technological innovations in the early modern period in China?

Sunday, October 4, 2009

Kuhn's paradigm shift


Thomas Kuhn's The Structure of Scientific Revolutions (1962) brought about a paradigm shift of its own in the way philosophers thought about science. The book was published in the Vienna Circle's International Encyclopedia of Unified Science in 1962. (See earlier posts on the Vienna Circle; post, post.) And almost immediately it stimulated a profound change in the fundamental questions that defined the philosophy of science. For one thing, it shifted the focus from the context of justification to the context of discovery. It legitimated the introduction of the history of science into the philosophy of science -- and thereby also legitimated the sociological study of the actual practices of science. And it cast into doubt the most fundamental assumptions of positivism as a theory of how the scientific enterprise actually works.

And yet it also preserved an epistemological perspective. Kuhn forced us to ask questions about truth, justification, and conceptual discovery -- even as he provided a basis for skepticism about the stronger claims for scientific rationality made by positivists like Reichenbach and Carnap. And the framework threatened to lead to a kind of cognitive relativism: "truth" is relative to a set of extra-rational conventions governing conceptual schemes and the interpretation of data.

The main threads of Kuhn's approach to science are well known. Science really gets underway when a scientific tradition has succeeded in formulating a paradigm. A paradigm includes a diverse set of elements -- conceptual schemes, research techniques, bodies of accepted data and theory, and embedded criteria and processes for the validation of results. Paradigms are not subject to testing or justification; in fact, empirical procedures are embedded within paradigms. Paradigms are in some ways incommensurable -- Kuhn alluded to gestalt psychology to capture the idea that a paradigm structures our perceptions of the world. There are no crucial experiments -- instead, anomalies accumulate, and eventually the advocates of an old paradigm die out and leave the field to practitioners of a new one. Like Polanyi, Kuhn emphasizes the concrete practical knowledge that is a fundamental component of scientific education (post). By learning to use the instruments and perform the experiments, the budding scientist learns to see the world in a paradigm-specific way. (Alexander Bird provides a good essay on Kuhn in the Stanford Encyclopedia of Philosophy.)

A couple of questions are particularly interesting today, approaching fifty years after the writing of the book. One is the question of origins: where did Kuhn's basic intuitions come from? Was the idea of a paradigm a bolt from the blue, or was there a comprehensible line of intellectual development that led to it? There was certainly a strong tradition of study of the history of science from the late nineteenth century into the twentieth, but Kuhn was the first to bring this tradition into explicit dialogue with the philosophy of science. Henri Poincaré (The Foundations of Science: Science and Hypothesis, The Value of Science, Science and Methods) and Pierre Duhem (The Aim and Structure of Physical Theory) are examples of thinkers who brought a knowledge of the history of science into their thinking about the logic of science. And Alexandre Koyré's studies of Galileo are relevant too (From the Closed World to the Infinite Universe); Koyré made plain the "revolutionary" character of Galileo's thought within the history of science. However, it appears that Kuhn's understanding of the history of science took shape through his own efforts to make sense of important episodes in that history while teaching in the General Education in Science curriculum at Harvard, rather than by building on these prior traditions.

Another question arises from the fact of the book's surprising publication in the Encyclopedia. The Encyclopedia project was a fundamental and deliberate expression of logical positivism; Structure of Scientific Revolutions, on the other hand, became one of the founding texts of anti-positivism, and its anti-positivist implications were apparent from the start. So how did it come to be published here? (Michael Friedman takes up this subject in detail in "Kuhn and Logical Positivism" in Thomas Nickles, Thomas Kuhn (link).) George Reisch and Brazilian philosopher J. C. P. Oliveira have addressed exactly this question. Oliveira offers an interesting discussion of the relationship between Kuhn and Carnap in an online article, quoting crucial letters from Carnap to Kuhn in 1960 and 1962 about the publication of SSR in the Encyclopedia series. Carnap writes,
I believe that the planned monograph will be a valuable contribution to the Encyclopedia. I am myself very much interested in the problems which you intend to deal with, even though my knowledge of the history of science is rather fragmentary. Among many other items I liked your emphasis on the new conceptual frameworks which are proposed in revolutions in science, and, on their basis, the posing of new questions, not only answers to old problems. (REISCH 1991, p. 266)

I am convinced that your ideas will be very stimulating for all those who are interested in the nature of scientific theories and especially the causes and forms of their changes. I found very illuminating the parallel you draw with Darwinian evolution: just as Darwin gave up the earlier idea that the evolution was directed towards a predetermined goal, men as the perfect organism, and saw it as a process of improvement by natural selection, you emphasize that the development of theories is not directed toward the perfect true theory, but is a process of improvement of an instrument. In my own work on inductive logic in recent years I have come to a similar idea: that my work and that of a few friends in the step for step solution of problems should not be regarded as leading to “the ideal system”, but rather as a step for step improvement of an instrument. Before I read your manuscript I would not have put it in just those words. But your formulations and clarifications by examples and also your analogy with Darwin’s theory helped me to see clearer what I had in mind. From September on I shall be for a year at the Stanford Center. I hope that we shall have an opportunity to get together and talk about problems of common interest. (REISCH 1991, pp. 266-267)
Against what he calls the "revisionist" historians of the philosophy of science, Oliveira does not believe that SSR was accepted for publication because Carnap or other late Vienna Circle philosophers saw a significant degree of agreement between Kuhn's views and their own. Instead, he argues that the Encyclopedia group regarded the history of science as an entirely separate subject from the philosophy of science: a valid subject of investigation, but one with nothing to do with the logic of science. Oliveira writes,
Thus, the publication of Structure in Encyclopedia could be justified merely by the fact that the Encyclopedia project had already reserved space for it. Indeed, it is worth pointing out that the editors commissioned Kuhn’s book as a work in history of science especially for publication in the Encyclopedia.
It is also interesting to consider where Kuhn's ideas went from here. How much influence did the theory have within philosophy? Certainly Kuhn had vast influence on the next generation of anti-positivist or post-positivist philosophy of science, and he had influence in fields very remote from philosophy as well. Paul Feyerabend, who was directly exposed to Kuhn at Berkeley, picked up the anti-positivist thread in Against Method. Imre Lakatos introduced an important alternative to the concept of a paradigm with his notion of a scientific research programme, and he made an effort to reintroduce rational standards into the task of paradigm choice through his idea of progressive problem shifts (The Methodology of Scientific Research Programmes: Volume 1: Philosophical Papers). An important volume involving Kuhn, Feyerabend, and Lakatos came directly out of a conference focused on Kuhn's work (Criticism and the Growth of Knowledge: Volume 4: Proceedings of the International Colloquium in the Philosophy of Science, London, 1965). Kuhn's ideas have had very wide exposure within the philosophy of science; but as Alexander Bird notes in his essay in the Stanford Encyclopedia of Philosophy, no "school" of Kuhnian philosophy of science has emerged.

From the perspective of a half century, some of the most enduring questions raised by Kuhn are these:
  • What does the detailed study of the history of science tell us about scientific rationality?
  • To what extent is it true that scientific training inculcates adherence to a conceptual scheme and approach to the world that the scientist simply can't critically evaluate?
  • Does the concept of a scientific paradigm apply to other fields of knowledge? Do sociologists or art historians have paradigms in Kuhn's strong sense?
  • Is there a meta-theory of scientific rationality that permits scientists and philosophers to critically examine alternative paradigms?
  • And for the social sciences -- are Marxism, verstehen theory, or Parsonian sociology paradigms in the strong Kuhnian sense?
Perhaps the strongest legacy is this: Kuhn's work provides a compelling basis for thinking that we can do the philosophy of science best when we consider the real epistemic practices of working scientists carefully and critically. The history and sociology of science is indeed relevant to the epistemic concerns of the philosophy of science. And this is especially true in the case of the social sciences.

Reference
Reisch, George (1991). "Did Kuhn Kill Logical Empiricism?" Philosophy of Science, 58.
