William R. Albury, PhD

University of New England, Armidale, Australia

Debauchery and disease

In the early years of British settlement in Australia, the colonial authorities regarded drunkenness as one of the major evils of the day. Their preoccupation with this social problem was mirrored by the concern of the colony’s medical men with drunkenness as a cause of illness. In 1821, for example, James Bowman, Principal Surgeon of New South Wales, advised a Commission of Inquiry that dysentery was “attributable to the abrupt changes of the temperature as well as to debauchery.”1 This view persisted well into the nineteenth century, with the editor of the Australian Medical Journal in 1846 “still citing climate and intemperance as a source of dysentery.”2

It is noteworthy here that social or moral judgments about drunkenness were combined with an attempt to identify a purely natural cause, such as the weather, in explaining the occurrence of dysentery in New South Wales. The weather was seen as a contributing cause of dysentery, but since it was a natural force, it was no one’s responsibility. Drunkenness, on the other hand, was considered both a cause of dysentery and a matter of personal responsibility. If, as a result of “debauchery,” you brought on a case of dysentery, you presumably had only yourself to blame, no matter what the contribution of the weather to this state of affairs may have been.


Cause, responsibility and blame

The connection of the ideas of cause, responsibility and blame in the medical sphere is not a special feature of nineteenth-century Australia, however. It has its origin in the earliest human beliefs about “why things happen” and continues to color our attitudes toward disease and disability in contemporary life. We have only to think of the AIDS epidemic to see that HIV corresponds to the weather in our previous example and that “unsafe sex”—a different form of debauchery—corresponds to drunkenness. In this contemporary version of the argument, judgments about moral responsibility are again combined with a reference to a natural cause; and it is again claimed by some that individuals are to blame for their illness if debauchery contributed to it in some way.

A reaction of this kind is simply one instance of a more general phenomenon whereby “causal beliefs and the assignment of responsibility generate feelings of anger and sympathy that, in turn, direct social conduct toward others.”3 Stigmatized persons—such as those with illnesses or disabilities—are typically assessed according to the degree of responsibility they are presumed to bear for their own condition. For most people, the prevalence of this reaction is something that can be observed in daily experience. But research specifically designed to identify the relationship between stigma, perceptions of responsibility, and positive or negative responses by others has also confirmed this point:

Persons not held responsible for their stigmata were rated high on liking, elicited pity but not anger, and generated high ratings on willingness to help. Conversely, persons with stigmata for which they were responsible were rated low on liking, evoked little pity and comparatively high anger, and elicited low help-giving intentions.4

It is not surprising that causal beliefs are intimately linked with moral judgments. The very language we use in explaining “why things happen” is laden with moral connotations. The ancient Greek word aitia, which we translate as “cause,” had as its original meaning “guilt.” It was the word applied in legal proceedings when assigning responsibility to someone for the consequences of their actions.5 In modern medicine, the term “etiology” (derived from the root aitia) is still used for the explanation of disease causation.

Our English word “cause” also has its origin in a legal context—but that of Rome rather than Greece. It is derived from the Latin causa, which refers to a lawsuit or prosecution. We still speak in English of “pleading one’s cause.” Here again, the notion of personal responsibility is central. In the natural sciences we expect the idea of causality to be depersonalized, but in the sphere of human affairs we tend to see the notion of personal responsibility as essential. Our culture’s predominant theories of morality, society and jurisprudence all rest upon the presumption that individuals can, in principle, be held responsible for their actions.6 And medicine, as a domain encompassing both the natural sciences and human affairs, seems to share elements of both these approaches to causation in its explanations of disease and disability.

The changing focus of blame

Dr. John Langdon Haydon Down, 1828-1896

One of the most interesting features of the history of medicine is the way in which changing social concerns are reflected in altered notions about which of a disease or disability’s contributing factors are blameworthy. As both moral values and scientific knowledge change, so too do attributions of blame. As ethicists have noted, “Specifying responsibilities for health is a moral matter, rather than a purely scientific one. It involves singling out some causal factors as the primary basis for holding individuals accountable and setting aside others as irrelevant.”7

In the second half of the nineteenth century, for example, the noted physician J. Langdon Down (1828-1896) reported that parents who have an intellectually disabled child “always prefer to refer the case to a post-uterine or non-congenital origin, partly because they think it frees them from the suspicion of hereditary influence, and partly from a notion that the child is more likely to be restored to its pristine state.”8 Down is best known in the history of medicine for his identification of what he called “the Mongolian type of idiocy”—the condition now referred to as Trisomy-21 or Down syndrome. The short paper in which he first described this condition also makes a similar point about the demands of “the anxious parents” for an explanation of their child’s “condition, for which any cause is sought, rather than hereditary taint or parental [i.e. genetic] influence.”9 A century later, however, the suggestion that such a disability was genetically caused rather than resulting from post-natal events was said to lift “a heavy burden of shame and guilt” from the child’s parents.10

This change represents a complete reversal of the way in which society understands the notion of parental responsibility. In the late nineteenth century, the emphasis was on eugenics and the maintenance of a “healthy bloodline”—a viewpoint that originated in the publications of the British scientific writer Francis Galton (1822-1911).11 A large number of disabling or stigmatized conditions, from alcoholism and delinquency to a predisposition to tuberculosis, were regarded as hereditary. So for the strengthening of both the nation and the race, it was the moral duty of families who were “tainted” to refrain from having children. To tell the parents of an intellectually disabled child that the condition was hereditary was not only to condemn them morally for having irresponsibly produced the child, but also to stigmatize all the relatives of both parents as being hereditarily tainted.12


Sir Francis Galton, 1822-1911

An accident occurring at or after the birth of the child, however, carried no such moral condemnation or stigma for the parents, unless gross parental negligence or abuse was suspected. Given a society in which even the humblest middle-class families had domestic servants, any blame for postnatal negligence or abuse was more likely to be placed on the servants than the parents: “The nurse may be suspected of having allowed the infant to fall or of having drugged it with opiates.”13 Another possible source of injury to the child mentioned by Down was “the instrumental interference which maternal safety demanded”—i.e. the use of forceps by the physician attending the birth.14 In both these cases, the parents were absolved of responsibility for the child’s condition. In addition, as Down noted in the passage quoted earlier, there seemed to be more chance of recovering from an accident than of escaping the iron law of heredity.

By the middle of the twentieth century, the social preoccupations and social structures that characterized Down’s era had disappeared. It was no longer thought that the destinies of nations were ruled by heredity, both because of changes in biological theory and because of revulsion against the racial ideology of the Nazis. Instead, the strength of the nation was said to depend on the improvement of child rearing techniques. And since it was increasingly less common for any but the wealthiest families to have servants, the responsibility for raising children rested almost exclusively with their parents. Thus nearly all major social problems of the day—such as racism, crime and poverty—were thought to have their origins in bad parenting practices of one kind or another.15

Under these circumstances, to tell the parents of an intellectually disabled child that their child’s condition was caused postnatally was to condemn them morally for failing in their most important duty—a duty that they owed not just to their child, but to society at large. And in the absence of demonstrable injury to the child, it was also to suggest that the child’s condition must be the result of an emotional reaction to parental indifference or hostility.16 Thus the parents would be suspected of having allowed their own supposedly pathological personalities to destroy the personality of their child.

Heredity, on the other hand, seemed to remove the stigma from childhood disability. With the expansion of biological research after World War II (in particular, the advent of molecular cell biology and the discovery of the structure of DNA), genetically caused disabilities became candidates for scientific investigation and potential elimination. The existence of these disabilities could not be blamed on any person; they could only be blamed on our lack of scientific knowledge—and that was something which the research effort was seeking to remedy.

In the last hundred years, social beliefs about heredity have changed so thoroughly that the introduction to a 1990 reprint of Down’s book interprets his comments on parents’ concerns about the cause of their child’s disability in the following way: “He noted the relief from crippling self-blame of the parents when they were told that the child’s condition pre-dated the process of birth.”17 This interpretation is quite contrary to Down’s own statement, quoted above, that “Parents always prefer to refer the case to a post-uterine or non-congenital origin.”18 The commentator’s words express a late twentieth-century view of blameworthiness in the area of childhood disability, but they bear no resemblance to the view reported by Down a century earlier.


In recent times medical and other health personnel, as well as most providers of social services, have been trained to approach disease and disability in a non-judgmental way. But even assuming an ideal outcome in this regard, with all professionals adopting an overtly non-judgmental attitude in their work, the difficult problem of the allocation of scarce medical and social resources will continue to evoke moral as well as technical judgments. In addition, attitudes found in society at large will continue to associate ideas of responsibility and blame with the causation of disease or disability. Indeed, such ideas can play a positive role in health education and preventative medicine.19

We should not expect either the biological or the social sciences to eliminate the moral dimension from our thinking about the causes of disease and disability. But the evidence of history indicates that both the scientific understanding of relevant causal factors, and society’s moral attitudes towards them, are capable of changing within a fairly short time. An explanation invoked today to exonerate the sufferer from blame (i.e. the equivalent of Down’s “post-uterine or non-congenital” causes) may within a few generations become, instead, a source of blame (i.e. the equivalent of the mid-twentieth century’s idea of bad parenting). An awareness of history should therefore introduce a note of caution into any assignment of personal responsibility within the medical context.


  1. James Bowman to J. T. Bigge, 4 February 1821; quoted in W. Nichol, “Medical Technology in New South Wales, 1788-1850,” Journal of Australian Studies, 1986, 18: 60-73.
  2. Nichol, “Medical Technology,” p. 65.
  3. Bernard Weiner, “On Sin versus Sickness: A Theory of Perceived Responsibility and Social Motivation,” American Psychologist, 1993, 48: 957-965, p. 957.
  4. Ibid., p. 960.
  5. Hans Kelsen, Society and Nature: A Sociological Inquiry (Chicago: University of Chicago Press, 1943), pp. 248, 263, 379n.
  6. This principle also allows for cases of ‘diminished responsibility’ when both natural causes and personal actions contribute to an event; see, for example, H. L. A. Hart, Punishment and Responsibility: Essays in the Philosophy of Law (Oxford: Clarendon Press, 1968).
  7. Mike W. Martin, “Responsibility for Health and Blaming Victims,” Journal of Medical Humanities, 2001, 22: 95-114, at p. 102.
  8. J. Langdon Down, On Some of the Mental Affections of Childhood and Youth (Oxford: Blackwell, 1990; reprint of first edition, 1887), p. 8.
  9. Down, “Observations on an Ethnic Classification of Idiots,” London Hospital Reports, 1866, 3: 259-262; available online at <http://www.neonatology.org/classics/down.html> (accessed 19 December 2009).
  10. Bernard Rimland, Infantile Autism: The Syndrome and Its Implications for a Neural Theory of Behavior (New York: Appleton-Century-Crofts, 1964), p. 65.
  11. Ruth Schwartz Cowan, “Nature and Nurture: The Interplay of Biology and Politics in the Work of Francis Galton,” Studies in History of Biology, 1977, 1: 133-208.
  12. See, for example, Richard Hofstadter, Social Darwinism in American Thought, revised edition (Boston: Beacon Press, 1955), pp. 161-167; and Sara Vogt, “Diagnosing Defectives: Disability, Gender and Eugenics in the United States, 1910-1924,” Hektoen International: A Journal of Medical Humanities, April 2009, volume 3 (accessed 19 December 2009).
  13. Down, Mental Affections, p. 6.
  14. Down, “Observations,” p. 259.
  15. Fred Matthews, “The Utopia of Human Relations: The Conflict-Free Family in American Social Thought, 1930-1960,” Journal of the History of the Behavioral Sciences, 1988, 24: 343-362; and Christina Hardyment, Perfect Parents: Baby-Care Advice Past and Present (Oxford: Oxford University Press, 1995), chapters 4-5.
  16. For a much-cited exposition of this view, see Bruno Bettelheim, The Empty Fortress: Infantile Autism and the Birth of the Self (New York: Free Press, 1967).
  17. Ann Gath, “Foreword,” in Down, Mental Affections, p. v.
  18. Down, Mental Affections, p. 8.
  19. Martin, “Responsibility for Health,” pp. 99-105.


WILLIAM R. ALBURY, PhD is Adjunct Professor of History in the School of Humanities at the University of New England in Armidale, NSW, Australia. His principal research interests are the history of science and medicine (ca. 1500-1900), the use of medical and cosmological metaphors in political thought, and social history as reflected in works of art. His most recent publications in these areas are: G. M. Weisz, Marco Matucci Cerinic, W. R. Albury and Donatella Lippi, "The Medici Syndrome: a Medico-Historical Puzzle," International Journal of Rheumatic Diseases, 2010, 13: 125-31; W. R. Albury, "Medicine and Statecraft in the Book of the Courtier," Intellectual History Review, 2008, 18: 75-89; and W. R. Albury and G. M. Weisz, "Depicting the Bread of the Last Supper: Religious Representation in Italian Renaissance Society," Journal of Religion and Society, 2009, 11: 1-17.