Health and the Rise of Civilization
Mark Nathan Cohen
(excerpt from the book of the same title, pp. 131-141)
There is no evidence either from ethnographic accounts or archaeological excavations to suggest that rates of accidental trauma or interpersonal violence declined substantially with the adoption of more civilized forms of political organization. In fact, some evidence from archaeological sites and from historical sources suggests the opposite.
Evidence from both ethnographic descriptions of contemporary hunters and the archaeological record suggests that the major trend in the quality and quantity of human diets has been downward. Contemporary hunter-gatherers, although lean and occasionally hungry, enjoy levels of caloric intake that compare favorably with national averages for many major countries of the Third World and that are generally above those of the poor in the modern world. Even the poorest recorded hunter-gatherer group enjoys a caloric intake superior to that of impoverished contemporary urban populations. Prehistoric hunter-gatherers appear to have enjoyed richer environments and to have been better nourished than most subsequent populations (primitive and civilized alike). Whenever we can glimpse the remains of anatomically modern human beings who lived in early prehistoric environments still rich in large game, they are often relatively large people displaying comparatively few signs of qualitative malnutrition. The subsequent trend in human size and stature is irregular but is more often downward than upward in most parts of the world until the nineteenth or twentieth century.
The diets of hunter-gatherers appear to be comparatively well balanced, even when they are lean. Ethnographic accounts of contemporary groups suggest that protein intakes are commonly quite high, comparable to those of affluent modern groups and substantially above world averages. Protein deficiency is almost unknown in these groups, and vitamin and mineral deficiencies are rare and usually mild in comparison to rates reported from many Third World populations. Archaeological evidence suggests that specific deficiencies, including those of iron (anemia), vitamin D (rickets), and, more controversially, vitamin C (scurvy), as well as such general signs of protein-calorie malnutrition as childhood growth retardation, have generally become more common in history rather than declining.
Among farmers, increasing population required more and more frequent cropping of land and the use of more and more marginal soils, both of which further diminished returns for labor. This trend may or may not have been offset by such technological improvements in farming as the use of metal tools, specialization of labor, and efficiencies associated with large-scale production that tend to increase individual productivity as well as total production.
But whether the efficiency of farming increased or declined, the nutrition of individuals appears often to have declined for any of several reasons: because increasingly complex society placed new barriers between individuals and flexible access to resources, because trade often siphoned resources away, because some segments of the society increasingly had only indirect access to food, because investments in new technology to improve production focused power in the hands of elites so that their benefits were not widely shared, and perhaps because of the outright exploitation and deprivation of some segments of society. In addition, more complex societies have had to devote an increasing amount of their productive energy to intergroup competition, the maintenance of intragroup order, the celebration of the community itself, and the privilege of the elite, rather than focusing on the biological maintenance of individuals.
In any case, the popular impression that nutrition has improved through history reflects twentieth-century affluence and seems to have as much to do with class privilege as with an overall increase in productivity. Neither the lower classes of prehistoric and classical empires nor the contemporary Third World has shared in the improvement in caloric intake; consumption of animal protein seems to have declined for all but privileged groups.
There is no clear evidence that the evolution of civilization has reduced the risk of resource failure and starvation as successfully as we like to believe. Episodes of starvation occur among hunter-gatherer bands because natural resources fail and because they have limited ability either to store or to transport food. The risk of starvation is offset, in part, by the relative freedom of hunter-gatherers to move around and find new resources, but it is clear that with limited technology of transport they can move neither far nor fast enough to escape severe fluctuations in natural resources. But each of the strategies that sedentary and civilized populations use to reduce or eliminate food crises generates costs and risks as well as benefits. The supplementation of foraging economies by small-scale cultivation may help to reduce the risk of seasonal hunger, particularly in crowded and depleted environments. The manipulation and protection of species involved in farming may help to reduce the risk of crop failure. The storage of food in sedentary communities may also help protect the population against seasonal shortages or crop failure. But these advantages may be outweighed by the greater vulnerability that domestic crop species often display toward climatic fluctuations or other natural hazards, a vulnerability that is then exacerbated by the specialized nature or narrow focus of many agricultural systems. The advantages are also offset by the loss of mobility that results from agriculture and storage, the limits and failures of primitive storage systems, and the vulnerability of sedentary communities to political expropriation of their stored resources.
Although the intensification of agriculture expanded production, it may have increased risk in both natural and cultural terms by increasing the risk of soil exhaustion in central growing areas and of crop failure in marginal areas. Such investments as irrigation to maintain or increase productivity may have helped to protect the food supply, but they generated new risks of their own and introduced new kinds of instability by making production more vulnerable to economic and political forces that could disrupt or distort the pattern of investment. Similarly, specialization of production increased the range of products that could be made and increased the overall efficiency of production, but it also placed large segments of the population at the mercy of fickle systems of exchange or equally fickle social and political entitlements.
Modern storage and transport may reduce vulnerability to natural crises, but they increase vulnerability to disruption of the technological or political and economic basis of the storage and transport systems themselves. Transport and storage systems are difficult and expensive to maintain. Governments that have the power to move large amounts of food long distances to offset famine and the power to stimulate investment in protective systems of storage and transport also have and can exercise the power to withhold aid and divert investment. The same market mechanisms that facilitate the rapid movement of produce on a large scale, potentially helping to prevent starvation, also set up patterns of international competition in production and consumption that may threaten starvation to those individuals who depend on world markets to provide their food, an ever-increasing proportion of the world population.
It is therefore not clear, in theory, that civilization improves the reliability of the individual diet. As the data summarized in earlier chapters suggest, neither the record of ethnography and history nor that of archaeology provides any clear indication of a progressive increase in the reliability (as opposed to the total size) of human food supplies with the evolution of civilization.
Similar points can be made with reference to the natural history of infectious disease. The data reviewed in preceding chapters suggest that prehistoric hunting and gathering populations would have been visited by fewer infections and suffered lower overall rates of parasitization than most other world populations, except for those of the last century, during which antibiotics have begun to offer serious protection against infection.
The major infectious diseases experienced by isolated hunting and gathering bands are likely to have been of two types: zoonotic diseases, caused by organisms whose life cycles were largely independent of human habits; and chronic diseases, handed directly from person to person, the transmission of which was unlikely to have been discouraged by small group size. Of the two categories, the zoonotic infections are undoubtedly the more important. They are likely to have been severe or even rapidly fatal because they were poorly adapted to human hosts. Moreover, zoonotic diseases may have had a substantial impact on small populations by eliminating productive adults. But in another respect their impact would have been limited because they did not pass from person to person.
By virtue of mobility and the handling of animal carcasses, hunter-gatherers are likely to have been exposed to a wider range of zoonotic infections than are more civilized populations. Mobility may also have exposed hunter-gatherers to the traveler's diarrhea phenomenon in which local microvariants of any parasite (including zoonoses) placed repeated stress on the body's immune response.
The chronic diseases, which can spread among small isolated groups, appear to have been relatively unimportant, although they undoubtedly pose a burden of disease that can often be rapidly eliminated by twentieth-century medicine. First, such chronic diseases appear to provoke relatively little morbidity in those chronically exposed. Moreover, the skeletal evidence suggests that even yaws and other common low-grade infections (periostitis) associated with infections by organisms now common to the human environment were usually less frequent and less severe among small, early mobile populations than among more sedentary and dense human groups. Similar arguments appear to apply to tuberculosis and leprosy, judging from the record of the skeletons. Even though epidemiologists now concede that tuberculosis could have spread and persisted in small groups, the evidence suggests overwhelmingly that it is primarily a disease of dense urban populations.
Similarly, chronic intestinal infestation by bacterial, protozoan, and helminth parasites, although displaying significant variation in occurrence according to the natural environment, generally appears to be minimized by small group size and mobility. At least, the prevalence of specific parasites and the parasite load, or size of the individual dose, are minimized, although in some environments mobility actually appears to have increased the variety of parasites encountered. Ethnographic observations suggest that parasite loads are often relatively low in mobile bands and commonly increase as sedentary lifestyles are adopted. Similar observations imply that intestinal infestations are commonly more severe in sedentary populations than in their more mobile neighbors. The data also indicate that primitive populations often display better accommodation to their indigenous parasites (that is, fewer symptoms of disease in proportion to their parasite load) than we might otherwise expect. The archaeological evidence suggests that, insofar as intestinal parasite loads can be measured by their effects on overall nutrition (for example, on rates of anemia), these infections were relatively mild in early human populations but became increasingly severe as populations grew larger and more sedentary. In one case where comparative analysis of archaeological mummies from different periods has been undertaken, there is direct evidence of an increase in pathological intestinal bacteria with the adoption of sedentism. In another case, analysis of feces has documented an increase in intestinal parasites with sedentism.
Many major vector-borne infections may also have been less important among prehistoric hunter-gatherers than they are in the modern world. The habits of vectors of such major diseases as malaria, schistosomiasis, and bubonic plague suggest that among relatively small human groups without transportation other than walking these diseases are unlikely to have provided anything like the burden of morbidity and mortality that they inflicted on historic and contemporary populations.
Epidemiological theory further predicts the failure of most epidemic diseases ever to spread in small isolated populations or in groups of moderate size connected only by transportation on foot. Moreover, studies on the blood sera of contemporary isolated groups suggest that, although small size and isolation are not a complete guarantee against the transmission of such diseases in the vicinity, the spread from group to group is at best haphazard and irregular. The pattern suggests that contemporary isolates are at risk of epidemics once the diseases are maintained by civilized populations, but it seems to confirm predictions that such diseases could not have flourished and spread in a world inhabited entirely by small and isolated groups, because without civilized reservoirs of disease, and with all transportation of disease occurring only at the speed of walking human beings, they would not reliably have been transmitted.
In addition, overwhelming historical evidence suggests that the greatest rates of morbidity and death from infection are associated with the introduction of new diseases from one region of the world to another by processes associated with civilized transport of goods at speeds and over distances outside the range of movements common to hunting and gathering groups. Small-scale societies move people among groups and enjoy periodic aggregation and dispersal, but they do not move the distances associated with historic and modern religious pilgrimages or military campaigns, nor do they move at the speed associated with rapid modern forms of transportation. The increase in the transportation of people and exogenous diseases seems likely to have had far more profound effects on health than the small burden of traveler's diarrhea imposed by the small-scale movements of hunter-gatherers.
Prehistoric hunting and gathering populations may also have had one other important advantage over many more civilized groups. Given the widely recognized (and generally positive or synergistic) association of malnutrition and disease, the relatively good nutrition of hunter-gatherers may further have buffered them against the infections they did encounter.
In any case, the record of the skeletons appears to suggest that severe episodes of stress that disrupted the growth of children (acute episodes of infection or epidemics and/or episodes of resource failure and starvation) did not decline and if anything became increasingly common with the evolution of civilization in prehistory.
There is also evidence, primarily from ethnographic sources, that primitive populations suffer relatively low rates of many degenerative diseases compared, at least, to the more affluent of modern societies, even after corrections are made for the different distribution of adult ages. Primitive populations (hunter-gatherers, subsistence farmers, and all groups who do not subsist on modern refined foods) appear to enjoy several nutritional advantages over more affluent modern societies that protect them from many of the diseases that now afflict us. High-bulk diets, diets with relatively few calories in proportion to other nutrients, diets low in total fat (and particularly low in saturated fat), and diets high in potassium and low in sodium, which are common to such groups, appear to help protect them against a series of degenerative conditions that plague the more affluent of modern populations, often in proportion to their affluence. Diabetes mellitus appears to be extremely rare in primitive groups (both hunter-gatherers and farmers), as are circulatory problems, including high blood pressure, heart disease, and strokes. Similarly, disorders associated with poor bowel function, such as appendicitis, diverticulosis, hiatal hernia, varicose veins, hemorrhoids, and bowel cancers, appear rare. Rates of many other types of cancer, particularly breast and lung, appear to be low in most small-scale societies, even when corrected for the small proportion of elderly often observed; even those cancers that we now consider to be diseases of underdevelopment, such as Burkitt's lymphoma and cancer of the liver, may be the historical product of changes in human behavior involving food storage or the human-assisted spread of vector-borne infections. The record of the skeletons suggests, through the scarcity of metastases in bone, that cancers were comparatively rare in prehistory.

The history of human life expectancy is much harder to describe or summarize with any precision because the evidence is so fragmentary and so many controversies are involved in its interpretation. But once we look beyond the very high life expectancies of mid-twentieth-century affluent nations, the existing data also appear to suggest a pattern that is both more complex and less progressive than we are accustomed to believe.
Contrary to assumptions once widely held, the slow growth of prehistoric populations need not imply exceedingly high rates of mortality. Evidence of low fertility and/or the use of birth control by small-scale groups suggests (if we use modern life tables) that average rates of population growth very near zero could have been maintained by groups suffering only historically moderate mortality (a life expectancy of 25 to 30 years at birth, with 50 to 60 percent of infants reaching adulthood, figures that appear to match those observed in ethnographic and archaeological samples) that would have balanced fertility, which was probably below the averages of more sedentary modern populations. The prehistoric acceleration of population growth after the adoption of sedentism and farming, if it is not an artifact of archaeological reconstruction, could be explained by an increase in fertility or altered birth control decisions that appear to accompany sedentism and agriculture. This explanation fits the available data better than any competing hypothesis.
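To make the arithmetic behind this claim concrete, the sketch below applies the standard Lotka approximation from demography (annual growth rate roughly equal to the logarithm of the net reproduction rate divided by the mean generation length). The particular fertility, survivorship, and generation-length values are illustrative assumptions chosen to fall within the ranges cited in the paragraph above; they are not figures taken from Cohen's text.

```python
import math

# Illustrative demographic sketch (assumed values, not Cohen's own calculations).
# Lotka's approximation: intrinsic growth rate r ~= ln(NRR) / T, where
# NRR = net reproduction rate (surviving daughters per woman) and
# T   = mean length of a generation in years.

def growth_rate(total_fertility, survival_to_childbearing,
                fraction_female=0.49, generation_length=27.0):
    """Approximate annual population growth rate from fertility and survivorship."""
    nrr = total_fertility * fraction_female * survival_to_childbearing
    return math.log(nrr) / generation_length

# Moderate mortality (roughly half of infants surviving to reproductive age)
# combined with modest fertility yields growth very near zero, as the text argues.
r = growth_rate(total_fertility=4.5, survival_to_childbearing=0.45)
print(f"approximate annual growth rate: {r:.4f}")  # about -0.0003, essentially zero
```

Under these assumed numbers, each woman leaves roughly one surviving daughter, so the population neither grows nor shrinks appreciably despite mortality that is only historically moderate.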
It is not clear whether the adoption of sedentism or farming would have increased or decreased the proportion of individuals dying as infants or children. The advantages of sedentism may have been offset by risks associated with increased infection, closer spacing of children, or the substitution of starchy gruels for mother's milk and other more nutritious weaning foods. The intensification of agriculture and the adoption of more civilized lifestyles may not have improved the probability of surviving childhood until quite recently. Rates of infant and child mortality observed in the smallest contemporary groups (or reconstructed with less certainty among prehistoric groups) would not have embarrassed most European countries until sometime in the nineteenth century and were, in fact, superior to urban rates of child mortality through most of the nineteenth century (and much of the twentieth century in many Third World cities).
There is no evidence from archaeological samples to suggest that adult life expectancy increased with the adoption of sedentism or farming; there is some evidence (complicated by the effects of a probable acceleration of population growth on cemetery samples) to suggest that adult life expectancy may actually have declined as farming was adopted. In later stages of the intensification of agriculture and the development of civilization, adult life expectancy most often increased, and often increased substantially, but the trend was spottier than we sometimes realize. Archaeological populations from the Iron Age or even the Medieval period in Europe and the Middle East, or from the Mississippian period in North America, often suggest average adult ages at death in the middle or upper thirties, not substantially different from (and sometimes lower than) those of the earliest visible populations in the same regions. Moreover, the historic improvement in adult life expectancy may have resulted at least in part from increasing infant and child mortality and the consequent "select" nature of those entering adulthood as epidemic diseases shifted their focus from adults to children.
These data clearly imply that we need to rethink both scholarly and popular images of human progress and cultural evolution. We have built our images of human history too exclusively from the experiences of privileged classes and populations, and we have assumed too close a fit between technological advances and progress for individual lives.
In scholarly terms, these data, which often suggest diminishing returns to health and nutrition, tend to undermine models of cultural evolution based on technological advances. They add weight to theories of cultural evolution that emphasize environmental constraints, demographic pressure, and competition and social exploitation, rather than technological or social progress, as the primary instigators of social change. Similarly, the archaeological evidence that outlying populations often suffered reduced health as a consequence of their inclusion in larger political units, the clear class stratification of health in early and modern civilizations, and the general failure of either early or modern civilizations to promote clear improvements in health, nutrition, or economic homeostasis for large segments of their populations until the very recent past all reinforce competitive and exploitative models of the origins and function of civilized states.

In popular terms, I think that we must substantially revise our traditional sense that civilization represents progress in human well-being, or at least that it did so for most people for most of history prior to the twentieth century. The comparative data simply do not support that image.