The origins of agriculture:
a biological perspective and a new hypothesis
by Greg Wadley and Angus Martin
Published in Australian Biologist 6: 96-105, June 1993
Introduction
What might head a list of the defining characteristics of the human species? While our view of ourselves could hardly avoid highlighting our accomplishments in engineering, art, medicine, space travel and the like, in a more dispassionate assessment agriculture would probably displace all other contenders for top billing. Most of the other achievements of humankind have followed from this one. Almost without exception, all people on earth today are sustained by agriculture. With a minute number of exceptions, no other species is a farmer. Essentially all of the arable land in the world is under cultivation. Yet agriculture began just a few thousand years ago, long after the appearance of anatomically modern humans.
Given the rate and the scope of this revolution in human biology, it is quite extraordinary that there is no generally accepted model accounting for the origin of agriculture. Indeed, an increasing array of arguments over recent years has suggested that agriculture, far from being a natural and upward step, in fact led commonly to a lower quality of life. Hunter-gatherers typically do less work for the same amount of food, are healthier, and are less prone to famine than primitive farmers (Lee & DeVore 1968, Cohen 1977, 1989). A biological assessment of what has been called the puzzle of agriculture might phrase it in simple ethological terms: why was this behaviour (agriculture) reinforced (and hence selected for) if it was not offering adaptive rewards surpassing those accruing to hunter-gathering or foraging economies?
This paradox is responsible for a profusion of models of the origin of agriculture. 'Few topics in prehistory', noted Hayden (1990), 'have engendered as much discussion and resulted in so few satisfying answers as the attempt to explain why hunter/gatherers began to cultivate plants and raise animals. Climatic change, population pressure, sedentism, resource concentration from desertification, girls' hormones, land ownership, geniuses, rituals, scheduling conflicts, random genetic kicks, natural selection, broad spectrum adaptation and multicausal retreats from explanation have all been proffered to explain domestication. All have major flaws ... the data do not accord well with any one of these models.'
Recent discoveries of potentially psychoactive substances in certain agricultural products -- cereals and milk -- suggest an additional perspective on the adoption of agriculture and the behavioural changes ('civilisation') that followed it. In this paper we review the evidence for the drug-like properties of these foods, and then show how they can help to solve the biological puzzle just described.
The emergence of agriculture and civilisation in the Neolithic
The transition to agriculture
From about 10,000 years ago, groups of people in several areas around the world began to abandon the foraging lifestyle that had been successful, universal and largely unchanged for millennia (Lee & DeVore 1968). They began to gather, then cultivate and settle around, patches of cereal grasses and to domesticate animals for meat, labour, skins and other materials, and milk.
Farming, based predominantly on wheat and barley, first appeared in the Middle East, and spread quickly to western Asia, Egypt and Europe. The earliest civilisations all relied primarily on cereal agriculture. Cultivation of fruit trees began three thousand years later, again in the Middle East, and vegetables and other crops followed (Zohari 1986). Cultivation of rice began in Asia about 7000 years ago (Stark 1986).
To this day, for most people, two-thirds of protein and calorie intake is cereal-derived. (In the west, in the twentieth century, cereal consumption has decreased slightly in favour of meat, sugar, fats and so on.) The respective contributions of each cereal to current total world production are: wheat (28 per cent), corn/maize (27 per cent), rice (25 per cent), barley (10 per cent), others (10 per cent) (Pedersen et al. 1989).
The change in the diet due to agriculture
The modern human diet is very different from that of closely related primates and, almost certainly, early hominids (Gordon 1987). Though there is controversy over what humans ate before the development of agriculture, the diet certainly did not include cereals and milk in appreciable quantities. The storage pits and processing tools necessary for significant consumption of cereals did not appear until the Neolithic (Washburn & Lancaster 1968). Dairy products were not available in quantity before the domestication of animals.
The early hominid diet (from about four million years ago), evolving as it did from that of primate ancestors, consisted primarily of fruits, nuts and other vegetable matter, and some meat -- items that could be foraged for and eaten with little or no processing. Comparisons of primate and fossil-hominid anatomy, and of the types and distribution of plants eaten raw by modern chimpanzees, baboons and humans (Peters & O'Brien 1981, Kay 1985), as well as microscopic analysis of wear patterns on fossil teeth (Walker 1981, Peuch et al. 1983) suggest that australopithecines were 'mainly frugivorous omnivores with a dietary pattern similar to that of modern chimpanzees' (Susman 1987:171).
The diet of pre-agricultural but anatomically modern humans (from 30,000 years ago) diversified somewhat, but still consisted of meat, fruits, nuts, legumes, edible roots and tubers, with consumption of cereal seeds only increasing towards the end of the Pleistocene (e.g. Constantini 1989 and subsequent chapters in Harris and Hillman 1989).
The rise of civilisation
Within a few thousand years of the adoption of cereal agriculture, the old hunter-gatherer style of social organisation began to decline. Large, hierarchically organised societies appeared, centred around villages and then cities. With the rise of civilisation and the state came socioeconomic classes, job specialisation, governments and armies.
The size of populations living as coordinated units rose dramatically above pre-agricultural norms. While hunter-gatherers lived in egalitarian, autonomous bands of about 20 closely related persons, with at most a tribal level of organisation above that, early agricultural villages had 50 to 200 inhabitants, and early cities 10,000 or more. People 'had to learn to curb deep-rooted forces which worked for increasing conflict and violence in large groups' (Pfeiffer 1977:438).
Agriculture and civilisation meant the end of foraging -- a subsistence method with short-term goals and rewards -- and the beginning (for most) of regular arduous work, oriented to future payoffs and the demands of superiors. 'With the coming of large communities, families no longer cultivated the land for themselves and their immediate needs alone, but for strangers and for the future. They worked all day instead of a few hours a day, as hunter-gatherers had done. There were schedules, quotas, overseers, and punishments for slacking off' (Pfeiffer 1977:21).
Explaining the origins of agriculture and civilisation
The phenomena of human agriculture and civilisation are ethologically interesting, because (1) virtually no other species lives this way, and (2) humans did not live this way until relatively recently. Why was this way of life adopted, and why has it become dominant in the human species?
Problems explaining agriculture
Until recent decades, the transition to farming was seen as an inherently progressive one: people learnt that planting seeds caused crops to grow, and this new improved food source led to larger populations, sedentary farm and town life, more leisure time and so to specialisation, writing, technological advances and civilisation. It is now clear that agriculture was adopted despite certain disadvantages of that lifestyle (e.g. Flannery 1973, Henry 1989). There is a substantial literature (e.g. Reed 1977), not only on how agriculture began, but why. Palaeopathological and comparative studies show that health deteriorated in populations that adopted cereal agriculture, returning to pre-agricultural levels only in modern times. This is in part attributable to the spread of infection in crowded cities, but is largely due to a decline in dietary quality that accompanied intensive cereal farming (Cohen 1989). People in many parts of the world remained hunter-gatherers until quite recently; though they were quite aware of the existence and methods of agriculture, they declined to undertake it (Lee & DeVore 1968, Harris 1977). Cohen (1977:141) summarised the problem by asking: 'If agriculture provides neither better diet, nor greater dietary reliability, nor greater ease, but conversely appears to provide a poorer diet, less reliably, with greater labor costs, why does anyone become a farmer?'
Many explanations have been offered, usually centred around a particular factor that forced the adoption of agriculture, such as environmental or population pressure (for reviews see Rindos 1984, Pryor 1986, Redding 1988, Blumler & Byrne 1991). Each of these models has been criticised extensively, and there is at this time no generally accepted explanation of the origin of agriculture.
Problems explaining civilisation
A similar problem is posed by the post-agricultural appearance, all over the world, of cities and states, and again there is a large literature devoted to explaining it (e.g. Claessen & Skalnik 1978). The major behavioural changes made in adopting the civilised lifestyle beg explanation. Bledsoe (1987:136) summarised the situation thus:
'There has never been and there is not now agreement on the nature and significance of the rise of civilisation. The questions posed by the problem are simple, yet fundamental. How did civilisation come about? What animus impelled man to forego the independence, intimacies, and invariability of tribal existence for the much larger and more impersonal political complexity we call the state? What forces fused to initiate the mutation that slowly transformed nomadic societies into populous cities with ethnic mixtures, stratified societies, diversified economies and unique cultural forms? Was the advent of civilisation the inevitable result of social evolution and natural laws of progress or was man the designer of his own destiny? Have technological innovations been the motivating force or was it some intangible factor such as religion or intellectual advancement?'
To a very good approximation, every civilisation that came into being had cereal agriculture as its subsistence base, and wherever cereals were cultivated, civilisation appeared. Some hypotheses have linked the two. For example, Wittfogel's (1957) 'hydraulic theory' postulated that irrigation was needed for agriculture, and the state was in turn needed to organise irrigation. But not all civilisations used irrigation, and other possible factors (e.g. river valley placement, warfare, trade, technology, religion, and ecological and population pressure) have not led to a universally accepted model.
Pharmacological properties of cereals and milk
Recent research into the pharmacology of food presents a new perspective on these problems.
Exorphins: opioid substances in food
Prompted by a possible link between diet and mental illness, several researchers in the late 1970s began investigating the occurrence of drug-like substances in some common foodstuffs.
Dohan (1966, 1983), Dohan and Grasberger (1973) and Dohan et al. (1984) found that symptoms of schizophrenia were relieved somewhat when patients were fed a diet free of cereals and milk. Dohan also found that people with coeliac disease -- those who are unable to eat wheat gluten because of higher than normal permeability of the gut -- were statistically likely to suffer also from schizophrenia. Research in some Pacific communities showed that schizophrenia became prevalent in these populations only after they became 'partially westernised and consumed wheat, barley beer, and rice' (Dohan et al. 1984).
Groups led by Zioudrou (1979) and Brantl (1979) found opioid activity in wheat, maize and barley (exorphins), and bovine and human milk (casomorphin), as well as stimulatory activity in these proteins, and in oats, rye and soy. Cereal exorphin is much stronger than bovine casomorphin, which in turn is stronger than human casomorphin. Mycroft et al. (1982, 1987) found an analogue of MIF-1, a naturally occurring dopaminergic peptide, in wheat and milk. It occurs in no other exogenous protein. (In subsequent sections we use the term exorphin to cover exorphins, casomorphin, and the MIF-1 analogue. Though opioid and dopaminergic substances work in different ways, they are both 'rewarding', and thus more or less equivalent for our purposes.)
Since then, researchers have measured the potency of exorphins, showing them to be comparable to morphine and enkephalin (Heubner et al. 1984), determined their amino acid sequences (Fukudome & Yoshikawa 1992), and shown that they are absorbed from the intestine (Svedburg et al. 1985) and can produce effects such as analgesia and reduction of anxiety which are usually associated with poppy-derived opioids (Greksch et al. 1981, Panksepp et al. 1984). Mycroft et al. estimated that 150 mg of the MIF-1 analogue could be produced by normal daily intake of cereals and milk, noting that such quantities are orally active, and half this amount 'has induced mood alterations in clinically depressed subjects' (Mycroft et al. 1982:895). (For detailed reviews see Gardner 1985 and Paroli 1988.)
Most common drugs of addiction are either opioid (e.g. heroin and morphine) or dopaminergic (e.g. cocaine and amphetamine), and work by activating reward centres in the brain. Hence we may ask, do these findings mean that cereals and milk are chemically rewarding? Are humans somehow 'addicted' to these foods?
Problems in interpreting these findings
Discussion of the possible behavioural effects of exorphins, in normal dietary amounts, has been cautious. Interpretations of their significance have been of two types:
(1) where a pathological effect is proposed (usually by cereal researchers, and related to Dohan's findings, though see also Ramabadran & Bansinath 1988); and
(2) where a natural function is proposed (by milk researchers, who suggest that casomorphin may help in mother-infant bonding or otherwise regulate infant development).
We believe that there can be no natural function for ingestion of exorphins by adult humans. It may be that a desire to find a natural function has impeded interpretation (as well as causing attention to focus on milk, where a natural function is more plausible). It is unlikely that humans are adapted to a large intake of cereal exorphin, because the modern dominance of cereals in the diet is simply too new. If exorphin is found in cow's milk, then it may have a natural function for cows; similarly, exorphins in human milk may have a function for infants. But whether this is so or not, adult humans do not naturally drink milk of any kind, so any natural function could not apply to them.
Our sympathies therefore lie with the pathological interpretation of exorphins, whereby substances found in cereals and milk are seen as modern dietary abnormalities which may cause schizophrenia, coeliac disease or whatever. But these are serious diseases found in a minority. Can exorphins be having an effect on humankind at large?
Other evidence for 'drug-like' effects of these foods
Research into food allergy has shown that normal quantities of some foods can have pharmacological, including behavioural, effects. Many people develop intolerances to particular foods. Various foods are implicated, and a variety of symptoms is produced. (The term 'intolerance' rather than allergy is often used, as in many cases the immune system may not be involved (Egger 1988:159).) Some intolerance symptoms, such as anxiety, depression, epilepsy, hyperactivity, and schizophrenic episodes, involve brain function (Egger 1988, Scadding & Brostoff 1988).
Radcliffe (1982, quoted in Radcliffe 1987:808) listed the foods at fault, in descending order of frequency, in a trial involving 50 people: wheat (more than 70 per cent of subjects reacted in some way to it), milk (60 per cent), egg (35 per cent), corn, cheese, potato, coffee, rice, yeast, chocolate, tea, citrus, oats, pork, plaice, cane, and beef (10 per cent). This is virtually a list of foods that have become common in the diet following the adoption of agriculture, in order of prevalence. The symptoms most commonly alleviated by treatment were mood change (>50 per cent) followed by headache, musculoskeletal and respiratory ailments.
One of the most striking phenomena in these studies is that patients often exhibit cravings, addiction and withdrawal symptoms with regard to these foods (Egger 1988:170, citing Randolph 1978; see also Radcliffe 1987:808-10, 814, Kroker 1987:856, 864, Sprague & Milam 1987:949, 953, Wraith 1987:489, 491). Brostoff and Gamlin (1989:103) estimated that 50 per cent of intolerance patients crave the foods that cause them problems, and experience withdrawal symptoms when excluding those foods from their diet. Withdrawal symptoms are similar to those associated with drug addictions (Radcliffe 1987:808). The possibility that exorphins are involved has been noted (Bell 1987:715), and Brostoff and Gamlin conclude (1989:230):
'... the results so far suggest that they might influence our mood. There is certainly no question of anyone getting 'high' on a glass of milk or a slice of bread - the amounts involved are too small for that - but these foods might induce a sense of comfort and wellbeing, as food-intolerant patients often say they do. There are also other hormone-like peptides in partial digests of food, which might have other effects on the body.'
There is no possibility that craving these foods has anything to do with the popular notion of the body telling the brain what it needs for nutritional purposes. These foods were not significant in the human diet before agriculture, and large quantities of them cannot be necessary for nutrition. In fact, the standard way to treat food intolerance is to remove the offending items from the patient's diet.
A suggested interpretation of exorphin research
But what are the effects of these foods on normal people? Though exorphins cannot have a naturally selected physiological function in humans, this does not mean that they have no effect. Food intolerance research suggests that cereals and milk, in normal dietary quantities, are capable of affecting behaviour in many people. And if severe behavioural effects in schizophrenics and coeliacs can be caused by higher than normal absorption of peptides, then more subtle effects, which may not even be regarded as abnormal, could be produced in people generally.
The evidence presented so far suggests the following interpretation.
The ingestion of cereals and milk, in normal modern dietary amounts by normal humans, activates reward centres in the brain. Foods that were common in the diet before agriculture (fruits and so on) do not have this pharmacological property. The effects of exorphins are qualitatively the same as those produced by other opioid and/or dopaminergic drugs, that is, reward, motivation, reduction of anxiety, a sense of wellbeing, and perhaps even addiction. Though the effects of a typical meal are quantitatively less than those of doses of those drugs, most modern humans experience them several times a day, every day of their adult lives.
Hypothesis: exorphins and the origin of agriculture and civilisation
When this scenario of human dietary practices is viewed in the light of the problem of the origin of agriculture described earlier, it suggests an hypothesis that combines the results of these lines of enquiry.
Exorphin researchers, perhaps lacking a long-term historical perspective, have generally not investigated the possibility that these foods really are drug-like, and have instead searched without success for exorphin's natural function. The adoption of cereal agriculture and the subsequent rise of civilisation have not been satisfactorily explained, because the behavioural changes underlying them have no obvious adaptive basis.
These unsolved and until-now unrelated problems may in fact solve each other. The answer, we suggest, is this: cereals and dairy foods are not natural human foods, but rather are preferred because they contain exorphins. This chemical reward was the incentive for the adoption of cereal agriculture in the Neolithic. Regular self-administration of these substances facilitated the behavioural changes that led to the subsequent appearance of civilisation.
This is the sequence of events that we envisage.
Climatic change at the end of the last glacial period led to an increase in the size and concentration of patches of wild cereals in certain areas (Wright 1977). The large quantities of cereals newly available provided an incentive to try to make a meal of them. People who succeeded in eating sizeable amounts of cereal seeds discovered the rewarding properties of the exorphins contained in them. Processing methods such as grinding and cooking were developed to make cereals more edible. The more palatable they could be made, the more they were consumed, and the more important the exorphin reward became for more people.
At first, patches of wild cereals were protected and harvested. Later, land was cleared and seeds were planted and tended, to increase quantity and reliability of supply. Exorphins attracted people to settle around cereal patches, abandoning their nomadic lifestyle, and allowed them to display tolerance instead of aggression as population densities rose in these new conditions.
Though it was, we suggest, the presence of exorphins that caused cereals (and not an alternative already prevalent in the diet) to be the major early cultigens, this does not mean that cereals are 'just drugs'. They have been staples for thousands of years, and clearly have nutritional value. However, treating cereals as 'just food' leads to difficulties in explaining why anyone bothered to cultivate them. The fact that overall health declined when they were incorporated into the diet suggests that their rapid, almost total replacement of other foods was due more to chemical reward than to nutritional reasons.
It is noteworthy that the extent to which early groups became civilised correlates with the type of agriculture they practised. That is, major civilisations (in south-west Asia, Europe, India, and east and parts of South-East Asia; central and parts of north and south America; Egypt, Ethiopia and parts of tropical and west Africa) stemmed from groups which practised cereal, particularly wheat, agriculture (Bender 1975:12, Adams 1987:201, Thatcher 1987:212). (The rarer nomadic civilisations were based on dairy farming.)
Groups which practised vegeculture (of fruits, tubers etc.), or no agriculture (in tropical and south Africa, north and central Asia, Australia, New Guinea and the Pacific, and much of north and south America) did not become civilised to the same extent.
Thus major civilisations have in common that their populations were frequent ingesters of exorphins. We propose that large, hierarchical states were a natural consequence among such populations. Civilisation arose because reliable, on-demand availability of dietary opioids to individuals changed their behaviour, reducing aggression, and allowed them to become tolerant of sedentary life in crowded groups, to perform regular work, and to be more easily subjugated by rulers. Two socioeconomic classes emerged where before there had been only one (Johnson & Earle 1987:270), thus establishing a pattern which has been prevalent since that time.
Discussion
The natural diet and genetic change
Some nutritionists deny the notion of a pre-agricultural natural human diet on the basis that humans are omnivorous, or have adapted to agricultural foods (e.g. Garn & Leonard 1989; for the contrary view see for example Eaton & Konner 1985). An omnivore, however, is simply an animal that eats both meat and plants: it can still be quite specialised in its preferences (chimpanzees are an appropriate example). A degree of omnivory in early humans might have preadapted them to some of the nutrients contained in cereals, but not to exorphins, which are unique to cereals.
The differential rates of lactase deficiency, coeliac disease and favism (the inability to metabolise fava beans) among modern racial groups are usually explained as the result of varying genetic adaptation to post-agricultural diets (Simopoulos 1990:27-9), and this could be thought of as implying some adaptation to exorphins as well. We argue that little or no such adaptation has occurred, for two reasons: first, allergy research indicates that these foods still cause abnormal reactions in many people, and that susceptibility is variable within as well as between populations, indicating that differential adaptation is not the only factor involved. Second, the function of the adaptations mentioned is to enable humans to digest those foods, and if they are adaptations, they arose because they conferred a survival advantage. But would susceptibility to the rewarding effects of exorphins lead to lower, or higher, reproductive success? One would expect in general that an animal with a supply of drugs would behave less adaptively and so lower its chances of survival. But our model shows how the widespread exorphin ingestion in humans has led to increased population. And once civilisation was the norm, non-susceptibility to exorphins would have meant not fitting in with society. Thus, though there may be adaptation to the nutritional content of cereals, there will be little or none to exorphins. In any case, while contemporary humans may enjoy the benefits of some adaptation to agricultural diets, those who actually made the change ten thousand years ago did not.
Other 'non-nutritional' origins of agriculture models
We are not the first to suggest a non-nutritional motive for early agriculture. Hayden (1990) argued that early cultigens and trade items had more prestige value than utility, and suggested that agriculture began because the powerful used its products for competitive feasting and accrual of wealth. Braidwood et al. (1953) and later Katz and Voigt (1986) suggested that the incentive for cereal cultivation was the production of alcoholic beer:
'Under what conditions would the consumption of a wild plant resource be sufficiently important to lead to a change in behaviour (experiments with cultivation) in order to ensure an adequate supply of this resource? If wild cereals were in fact a minor part of the diet, any argument based on caloric need is weakened. It is our contention that the desire for alcohol would constitute a perceived psychological and social need that might easily prompt changes in subsistence behaviour' (Katz & Voigt 1986:33).
This view is clearly compatible with ours. However there may be problems with an alcohol hypothesis: beer may have appeared after bread and other cereal products, and been consumed less widely or less frequently (Braidwood et al. 1953). Unlike alcohol, exorphins are present in all these products. This makes the case for chemical reward as the motive for agriculture much stronger. Opium poppies, too, were an early cultigen (Zohari 1986). Exorphin, alcohol, and opium are primarily rewarding (as opposed to the typically hallucinogenic drugs used by some hunter-gatherers) and it is the artificial reward which is necessary, we claim, for civilisation. Perhaps all three were instrumental in causing civilised behaviour to emerge.
Cereals have important qualities that differentiate them from most other drugs. They are a food source as well as a drug, and can be stored and transported easily. They are ingested in frequent small doses (not occasional large ones), and do not impede work performance in most people. A desire for the drug, even cravings or withdrawal, can be confused with hunger. These features make cereals the ideal facilitator of civilisation (and may also have contributed to the long delay in recognising their pharmacological properties).
Compatibility, limitations, more data needed
Our hypothesis is not a refutation of existing accounts of the origins of agriculture, but rather fits alongside them, explaining why cereal agriculture was adopted despite its apparent disadvantages and how it led to civilisation.
Gaps in our knowledge of exorphins limit the generality and strength of our claims. We do not know whether rice, millet and sorghum, or the grass species harvested by African and Australian hunter-gatherers, contain exorphins. We need to be sure that pre-agricultural staples do not contain exorphins in amounts similar to those in cereals. We do not know whether domestication has affected exorphin content or potency. A test of our hypothesis by correlation of diet and degree of civilisation in different populations will require quantitative knowledge of the behavioural effects of all these foods.
We do not comment on the origin of noncereal agriculture, nor why some groups used a combination of foraging and farming, reverted from farming to foraging, or did not farm at all. Cereal agriculture and civilisation have, during the past ten thousand years, become virtually universal. The question, then, is not why they happened here and not there, but why they took longer to become established in some places than in others. At all times and places, chemical reward and the influence of civilisations already using cereals weighed in favour of adopting this lifestyle, the disadvantages of agriculture weighed against it, and factors such as climate, geography, soil quality, and availability of cultigens influenced the outcome. There is a recent trend to multi-causal models of the origins of agriculture (e.g. Redding 1988, Henry 1989), and exorphins can be thought of as simply another factor in the list. Analysis of the relative importance of all the factors involved, at all times and places, is beyond the scope of this paper.
Conclusion
'An animal is a survival machine for the genes that built it. We too are animals, and we too are survival machines for our genes. That is the theory. In practice it makes a lot of sense when we look at wild animals.... It is very different when we look at ourselves. We appear to be a serious exception to the Darwinian law.... It obviously just isn't true that most of us spend our time working energetically for the preservation of our genes' (Dawkins 1989:138).
Many ethologists have acknowledged difficulties in explaining civilised human behaviour on evolutionary grounds, in some cases suggesting that modern humans do not always behave adaptively. Yet since agriculture began, the human population has risen by a factor of 1000: Irons (1990) notes that 'population growth is not the expected effect of maladaptive behaviour'.
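A rough check of that factor, using illustrative figures not given in the text (a pre-agricultural world population of roughly 5 million, within the commonly cited range of 1 to 10 million, and an early-1990s world population of about 5.5 billion):

\[
\frac{5.5 \times 10^{9}}{5 \times 10^{6}} = 1.1 \times 10^{3} \approx 1000
\]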
We have reviewed evidence from several areas of research which shows that cereals and dairy foods have drug-like properties, and shown how these properties may have been the incentive for the initial adoption of agriculture. We suggested further that constant exorphin intake facilitated the behavioural changes and subsequent population growth of civilisation, by increasing people's tolerance of (a) living in crowded sedentary conditions, (b) devoting effort to the benefit of non-kin, and (c) playing a subservient role in a vast hierarchical social structure.
Cereals are still staples, and methods of artificial reward have diversified since that time, including today a wide range of pharmacological and non-pharmacological cultural artifacts whose function, ethologically speaking, is to provide reward without adaptive benefit. It seems reasonable then to suggest that civilisation not only arose out of self-administration of artificial reward, but is maintained in this way among contemporary humans. Hence a step towards resolution of the problem of explaining civilised human behaviour may be to incorporate into ethological models this widespread distortion of behaviour by artificial reward.
References
Adams, W. M., 1987, Cereals before cities except after Jacobs, in M. Melko & L. R. Scott, eds, The boundaries of civilizations in space and time, University Press of America, Lanham.
Bell, I. R., 1987, Effects of food allergy on the central nervous system, in J. Brostoff and S. J. Challacombe, eds, Food allergy and intolerance, Bailliere Tindall, London.
Bender, B., 1975, Farming in prehistory: from hunter-gatherer to food producer, John Baker, London.
Bledsoe, W., 1987, Theories of the origins of civilization, in M. Melko and L. R. Scott, eds, The boundaries of civilizations in space and time, University Press of America, Lanham.
Blumler, M., & Byrne, R., 1991, The ecological genetics of domestication and the origins of agriculture, Current Anthropology 32: 2-35.
Braidwood, R. J., Sauer, J.D., Helbaek, H., Mangelsdorf, P.C., Cutler, H.C., Coon, C.S., Linton, R., Steward J. & Oppenheim, A.L., 1953, Symposium: did man once live by beer alone? American Anthropologist 55: 515-26.
Brantl, V., Teschemacher, H., Henschen, A. & Lottspeich, F., 1979, Novel opioid peptides derived from casein (beta-casomorphins), Hoppe-Seyler's Zeitschrift fur Physiologische Chemie 360:1211-6.
Brostoff, J., & Gamlin, L., 1989, The complete guide to food allergy and intolerance, Bloomsbury, London.
Chang, T. T., 1989, Domestication and the spread of the cultivated rices, in D.R. Harris and G.C. Hillman, eds, Foraging and farming: the evolution of plant exploitation, Unwin Hyman, London.
Claessen, H. J. M. & Skalnik P., eds, 1978, The early state, Mouton, The Hague.
Cohen, M. N., 1977, Population pressure and the origins of agriculture: an archaeological example from the coast of Peru, in Reed, C.A., ed., The origins of agriculture, Mouton, The Hague.
Cohen, M. N., 1989, Health and the rise of civilization, Yale University Press, New Haven.
Constantini, L., 1989, Plant exploitation at Grotta dell'Uzzo, Sicily: new evidence for the transition from Mesolithic to Neolithic subsistence in southern Europe, in Harris, D. R. & Hillman, G. C., eds, Foraging and farming: the evolution of plant exploitation, Unwin Hyman, London.
Dawkins, R., 1989, Darwinism and human purpose, in Durant, J. R., ed., Human origins, Clarendon Press, Oxford.
Dohan, F., 1966, Cereals and schizophrenia: data and hypothesis, Acta Psychiatrica Scandinavica 42:125-52.
Dohan, F., 1983, More on coeliac disease as a model of schizophrenia, Biological Psychiatry 18:561-4.
Dohan, F. & Grasberger, J., 1973, Relapsed schizophrenics: earlier discharge from the hospital after cereal-free, milk-free diet, American Journal of Psychiatry 130:685-8.
Dohan, F., Harper, E., Clark, M., Ratigue, R., & Zigos, V., 1984, Is schizophrenia rare if grain is rare? Biological Psychiatry 19: 385-99.
Eaton, S. B. & Konner, M., 1985, Paleolithic nutrition - a consideration of its nature and current implications, New England Journal of Medicine 312: 283-90.
Egger, J., 1988, Food allergy and the central nervous system, in Reinhardt, D. & Schmidt E., eds, Food allergy, Raven, New York.
Flannery, K. V., 1973, The origins of agriculture, Annual Review of Anthropology 2:271-310.
Fukudome, S., & Yoshikawa, M., 1992, Opioid peptides derived from wheat gluten: their isolation and characterization, FEBS Letters 296:107-11.
Gardner, M. L. G., 1985, Production of pharmacologically active peptides from foods in the gut. in Hunter, J. & Alun-Jones, V., eds, Food and the gut, Bailliere Tindall, London.
Garn, S. M. & Leonard, W. R., 1989, What did our ancestors eat? Nutritional Reviews 47:337-45.
Gordon, K. D., 1987, Evolutionary perspectives on human diet, in Johnston, F., ed, Nutritional Anthropology, Alan R. Liss, New York.
Greksch, G., Schweiger, C. & Matthies, H., 1981, Evidence for analgesic activity of beta-casomorphin in rats, Neuroscience Letters 27:325-8.
Harlan, J. R., 1986, Plant domestication: diffuse origins and diffusion, in Barigozzi, G., ed., The origin and domestication of cultivated plants, Elsevier, Amsterdam.
Harris, D. R., 1977, Alternative pathways towards agriculture, in Reed, C. A., ed., The origins of agriculture, Mouton, The Hague.
Harris, D. R. & Hillman, G. C., eds, 1989, Foraging and farming: the evolution of plant exploitation, Unwin Hyman, London.
Hayden, B., 1990, Nimrods, piscators, pluckers, and planters: the emergence of food production, Journal of Anthropological Archaeology 9:31-69.
Henry, D. O., 1989, From foraging to agriculture: the Levant at the end of the ice age, University of Pennsylvania Press, Philadelphia.
Heubner, F., Lieberman, K., Rubino, R. & Wall, J., 1984, Demonstration of high opioid-like activity in isolated peptides from wheat gluten hydrolysates, Peptides 5:1139-47.
Irons, W., 1990, Let's make our perspective broader rather than narrower, Ethology and Sociobiology 11: 361-74
Johnson, A. W. & Earle, T., 1987, The evolution of human societies: from foraging group to agrarian state, Stanford University Press, Stanford.
Katz, S. H. & Voigt, M. M., 1986, Bread and beer: the early use of cereals in the human diet, Expedition 28:23-34.
Kay, R. F., 1985, Dental evidence for the diet of Australopithecus, Annual Review of Anthropology 14:315-41.
Kroker, G. F., 1987, Chronic candiosis and allergy, in Brostoff, J. & Challacombe, S.J., eds, Food allergy and intolerance, Bailliere Tindall, London.
Lee, R. B. & DeVore, I., 1968, Problems in the study of hunters and gatherers, in Lee, R.B. & DeVore, I., eds, Man the hunter, Aldine, Chicago.
Mycroft, F. J., Wei, E. T., Bernardin, J. E. & Kasarda, D. D., 1982, MIF-like sequences in milk and wheat proteins, New England Journal of Medicine 301:895.
Mycroft, F. J., Bhargava, H. N. & Wei, E. T., 1987, Pharmacological activities of the MIF-1 analogues Pro-Leu-Gly, Tyr-Pro-Leu-Gly and pareptide, Peptides 8:1051-5.
Panksepp, J., Normansell, L., Siviy, S., Rossi, J. & Zolovick, A., 1984, Casomorphins reduce separation distress in chicks, Peptides 5:829-83.
Paroli, E., 1988, Opioid peptides from food (the exorphins), World review of nutrition and dietetics 55:58-97.
Pedersen, B., Knudsen, K. E. B. & Eggum, B. O., 1989, Nutritive value of cereal products with emphasis on the effect of milling, World review of nutrition and dietetics 60:1-91.
Peters, C. R. & O'Brien, E. M., 1981, The early hominid plant-food niche: insights from an analysis of plant exploitation by Homo, Pan, and Papio in eastern and southern Africa, Current Anthropology 22:127-40.
Peuch, P., Albertini, H. & Serratrice, C., 1983, Tooth microwear and dietary patterns in early hominids from Laetoli, Hadar, and Olduvai, Journal of Human Evolution 12:721-9.
Pfeiffer, J. E., 1977, The emergence of society: a prehistory of the establishment, McGraw Hill, New York.
Pryor, F. L., 1986, The adoption of agriculture: some theoretical and empirical evidence, American Anthropologist 88:879-97.
Radcliffe, M. J., 1987, Diagnostic use of dietary regimes, in Brostoff, J. & Challacombe, S. J., eds, Food allergy and intolerance, Bailliere Tindall, London.
Ramabadran, K. & Bansinath, M., 1988, Opioid peptides from milk as a possible cause of Sudden Infant Death Syndrome, Medical Hypotheses 27:181-7.
Randolph, T. G., 1978, Specific adaptation, Annals of Allergy 40:333-45.
Redding, R., 1988, A general explanation of subsistence change from hunting and gathering to food production, Journal of Anthropological Archaeology 7:56-97.
Reed, C. A., ed., 1977, The origins of agriculture, Mouton, The Hague.
Rindos, D., 1984, The origins of agriculture: an evolutionary perspective, Academic Press, Orlando.
Scadding, G. K. & Brostoff, J., 1988, The dietetic treatment of food allergy, in Reinhardt, D. & Schmidt, E., eds, Food allergy, Raven, New York.
Simopoulos, A. P., 1990, Genetics and nutrition: or what your genes can tell you about nutrition, World review of nutrition and dietetics 63:25-34.
Sprague, D. E. & Milam, M. J., 1987, Concept of an environmental unit, in Brostoff, J. & Challacombe, S. J., eds, Food allergy and intolerance, Bailliere Tindall, London.
Stark, B. L., 1986, Origins of food production in the New World, in Meltzer, D. J., Fowler, D. D. & Sabloff, J. A., eds, American archaeology past and future, Smithsonian Institution Press, Washington.
Susman, R. L., 1987, Pygmy chimpanzees and common chimpanzees: models for the behavioural ecology of the earliest hominids, in Kinzey, W. G., ed., The evolution of human behaviour: primate models, State University of New York Press, Albany.
Svedburg, J., De Haas, J., Leimenstoll, G., Paul, F. & Teschemacher, H., 1985, Demonstration of betacasomorphin immunoreactive materials in in-vitro digests of bovine milk and in small intestine contents after bovine milk ingestion in adult humans, Peptides 6:825-30.
Thatcher, J. P., 1987, The economic base for civilization in the New World, in Melko, M. & Scott, L. R., eds, The boundaries of civilizations in space and time, University Press of America, Lanham.
Walker, A., 1981, Dietary hypotheses and human evolution, Philosophical Transactions of the Royal Society of London B292:57-64.
Washburn, S. L. & Lancaster, C. S., 1968, The evolution of hunting, in Lee, R. B. & DeVore, I., eds, Man the hunter, Aldine, Chicago.
Wittfogel, K., 1957, Oriental Despotism, Yale University Press, New Haven.
Wraith, D. G., 1987, Asthma, in Brostoff, J. & Challacombe, S. J., eds, Food allergy and intolerance, Bailliere Tindall, London.
Wright, H. E., 1977, Environmental changes and the origin of agriculture in the Near East, in Reed, C. A., ed, The origins of agriculture, Mouton, The Hague.
Zioudrou, C., Streaty, R. & Klee, W., 1979, Opioid peptides derived from food proteins: the exorphins, Journal of Biological Chemistry 254:2446-9.
Zohari, D., 1986, The origin and early spread of agriculture in the Old World, in Barigozzi, G., ed., The origin and domestication of cultivated plants, Elsevier, Amsterdam.
"The Critique of Civilization" By Journalist and Professor Richard Heinberg
Richard Heinberg: The Critique of Civilization
(A paper presented at the 24th annual meeting of the International Society for the Comparative Study of Civilizations, June 15, 1995.)
I. Prologue
Having been chosen - whether as devil’s advocate or sacrificial lamb, I am not sure - to lead off this discussion on the question, “Was Civilization a Mistake?”, I would like to offer some preliminary thoughts.
From the viewpoint of any non-civilized person, this consideration would appear to be steeped in irony. Here we are, after all, some of the most civilized people on the planet, discussing in the most civilized way imaginable whether civilization itself might be an error. Most of our fellow civilians would likely find our discussion, in addition to being ironic, also disturbing and pointless: after all, what person who has grown up with cars, electricity, and television would relish the idea of living without a house, and of surviving only on wild foods?
Nevertheless, despite the possibility that at least some of our remarks may be ironic, disturbing, and pointless, here we are. Why? I can only speak for myself. In my own intellectual development I have found that a critique of civilization is virtually inescapable for two reasons.
The first has to do with certain deeply disturbing trends in the modern world. We are, it seems, killing the planet. Revisionist “wise use” advocates tell us there is nothing to worry about; dangers to the environment, they say, have been wildly exaggerated. To me this is the most blatant form of wishful thinking. By most estimates, the oceans are dying, the human population is expanding far beyond the long-term carrying capacity of the land, the ozone layer is disappearing, and the global climate is showing worrisome signs of instability. Unless drastic steps are taken, in fifty years the vast majority of the world’s population will likely be existing in conditions such that the lifestyle of virtually any undisturbed primitive tribe would be paradise by comparison.
Now, it can be argued that civilization per se is not at fault, that the problems we face have to do with unique economic and historical circumstances. But we should at least consider the possibility that our modern industrial system represents the flowering of tendencies that go back quite far. This, at any rate, is the implication of recent assessments of the ecological ruin left in the wake of the Roman, Mesopotamian, Chinese, and other prior civilizations. Are we perhaps repeating their errors on a gargantuan scale?
If my first reason for criticizing civilization has to do with its effects on the environment, the second has to do with its impact on human beings. As civilized people, we are also domesticated. We are to primitive peoples as cows and sheep are to bears and eagles. On the rental property where I live in California my landlord keeps two white domesticated ducks. These ducks have been bred to have wings so small as to prevent them from flying. This is a convenience for their keepers, but compared to wild ducks these are pitiful creatures.
Many primal peoples tend to view us as pitiful creatures, too - though powerful and dangerous because of our technology and sheer numbers. They regard civilization as a sort of social disease. We civilized people appear to act as though we were addicted to a powerful drug - a drug that comes in the forms of money, factory-made goods, oil, and electricity. We are helpless without this drug, so we have come to see any threat to its supply as a threat to our very existence. Therefore we are easily manipulated - by desire (for more) or fear (that what we have will be taken away) - and powerful commercial and political interests have learned to orchestrate our desires and fears in order to achieve their own purposes of profit and control. If told that the production of our drug involves slavery, stealing, and murder, or the ecological equivalents, we try to ignore the news so as not to have to face an intolerable double bind.
Since our present civilization is patently ecologically unsustainable in its present form, it follows that our descendants will be living very differently in a few decades, whether their new way of life arises by conscious choice or by default. If humankind is to choose its path deliberately, I believe that our deliberations should include a critique of civilization itself, such as we are undertaking here. The question implicit in such a critique is, What have we done poorly or thoughtlessly in the past that we can do better now? It is in this constructive spirit that I offer the comments that follow.
II. Civilization and Primitivism
What Is Primitivism?
The image of a lost Golden Age of freedom and innocence is at the heart of all the world’s religions, is one of the most powerful themes in the history of human thought, and is the earliest and most characteristic expression of primitivism - the perennial belief in the necessity of a return to origins.
As a philosophical idea, primitivism has had as its proponents Lao Tze, Rousseau, and Thoreau, as well as most of the pre-Socratics, the medieval Jewish and Christian theologians, and 19th- and 20th-century anarchist social theorists, all of whom argued (on different bases and in different ways) the superiority of a simple life close to nature. More recently, many anthropologists have expressed admiration for the spiritual and material advantages of the ways of life of the world’s most “primitive” societies - the surviving gathering-and-hunting peoples who now make up less than one hundredth of one percent of the world’s population.
Meanwhile, as civilization approaches a crisis precipitated by overpopulation and the destruction of the ecological integrity of the planet, primitivism has enjoyed a popular resurgence, by way of increasing interest in shamanism, tribal customs, herbalism, radical environmentalism, and natural foods. There is a widespread (though by no means universally shared) sentiment that civilization has gone too far in its domination of nature, and that in order to survive - or, at least, to live with satisfaction - we must regain some of the spontaneity and naturalness of our early ancestors.
What Is Civilization?
There are many possible definitions of the word civilization. Its derivation - from the Latin civis, “citizen” or “townsman” - suggests that a minimum definition would be, “urban culture.” Civilization also seems to imply writing, division of labor, agriculture, organized warfare, growth of population, and social stratification.
Yet the latest evidence calls into question the idea that these traits always go together. For example, Elizabeth Stone and Paul Zimansky’s assessment of power relations in the Mesopotamian city of Mashkan-shapir (published in the April 1995 Scientific American) suggests that urban culture need not imply class divisions. Their findings seem to show that civilization in its earliest phase was free of these. Still, for the most part the history of civilization in the Near East, the Far East, and Central America is also the history of kingship, slavery, conquest, agriculture, overpopulation, and environmental ruin. And these traits continue in civilization’s most recent phases - the industrial state and the global market - though now the state itself takes the place of the king, and slavery becomes wage labor and de facto colonialism administered through multinational corporations. Meanwhile, the mechanization of production (which began with agriculture) is overtaking nearly every avenue of human creativity, population is skyrocketing, and organized warfare is resulting in unprecedented levels of bloodshed.
Perhaps, if some of these undesirable traits were absent from the very first cities, I should focus my critique on “Empire Culture” instead of the broader target of “civilization.” However, given how little we still know about the earliest urban centers of the Neolithic era, it is difficult as yet to draw a clear distinction between the two terms.
III. Primitivism Versus Civilization
Wild Self/Domesticated Self
People are shaped from birth by their cultural surroundings and by their interactions with the people closest to them. Civilization manipulates these primary relationships in such a way as to domesticate the infant - that is, so as to accustom it to life in a social structure one step removed from nature. The actual process of domestication is describable as follows, using terms borrowed from the object relations school of psychology. The infant lives entirely in the present moment in a state of pure trust and guilelessness, deeply bonded with her mother. But as she grows, she discovers that her mother is a separate entity with her own priorities and limits. The infant’s experience of relationship changes from one of spontaneous trust to one that is suffused with need and longing. This creates a gap between Self and Other in the consciousness of the child, who tries to fill this deepening rift with transitional objects - initially, perhaps a teddy bear; later, addictions and beliefs that serve to fill the psychic gap and thus provide a sense of security. It is the powerful human need for transitional objects that drives individuals in their search for property and power, and that generates bureaucracies and technologies as people pool their efforts.
This process does not occur in the same way in the case of primitive childrearing, where the infant is treated with indulgence, is in constant physical contact with a caregiver throughout infancy, and later undergoes rites of passage. In primal cultures the need for transitional objects appears to be minimized. Anthropological and psychological research converge to suggest that many of civilized people’s emotional ills come from our culture’s abandonment of natural childrearing methods and initiatory rites and its systematic substitution of alienating pedagogical practices from crib through university.
Health: Natural or Artificial?
In terms of health and quality of life, civilization has been a mitigated disaster. S. Boyd Eaton, M.D., et al., argued in The Paleolithic Prescription (1988) that pre-agricultural peoples enjoyed a generally healthy way of life, and that cancer, heart disease, strokes, diabetes, emphysema, hypertension, and cirrhosis - which together lead to 75 percent of all mortality in industrialized nations - are caused by our civilized lifestyles. In terms of diet and exercise, preagricultural lifestyles showed a clear superiority to those of agricultural and civilized peoples.
Much-vaunted increases in longevity in civilized populations have resulted not so much from wonder drugs, as merely from better sanitation - a corrective for conditions created by the overcrowding of cities; and from reductions in infant mortality. It is true that many lives have been spared by modern antibiotics. Yet antibiotics also appear responsible for the evolution of resistant strains of microbes, which health officials now fear could produce unprecedented epidemics in the next century.
The ancient practice of herbalism, evidence of which dates back at least 60,000 years, is practiced in instinctive fashion by all higher animals. Herbal knowledge formed the basis of modern medicine and remains in many ways superior to it. In countless instances, modern synthetic drugs have replaced herbs not because they are more effective or safer, but because they are more profitable to manufacture.
Other forms of “natural” healing - massage, the “placebo effect,” the use of meditation and visualization - are also being shown effective. Medical doctors Bernie Siegel and Deepak Chopra are critical of mechanized medicine and say that the future of the healing professions lies in the direction of attitudinal and natural therapies.
Spirituality: Raw or Cooked?
Spirituality means different things to different people - humility before a higher power or powers; compassion for the suffering of others; obedience to a lineage or tradition; a felt connection with the Earth or with Nature; evolution toward “higher” states of consciousness; or the mystical experience of oneness with all life or with God. With regard to each of these fundamental ways of defining or experiencing the sacred, spontaneous spirituality seems to become regimented, dogmatized, even militarized, with the growth of civilization. While some of the founders of world religions were intuitive primitivists (Jesus, Lao Tze, the Buddha), their followers have often fostered the growth of dominance hierarchies.
The picture is not always simple, though. The thoroughly civilized Roman Catholic Church produced two of the West’s great primitivists - St. Francis and St. Clare; while the neo-shamanic, vegetarian, and herbalist movements of early 20th century Germany attracted arch-authoritarians Heinrich Himmler and Adolph Hitler. Of course, Nazism’s militarism and rigid dominator organization were completely alien to primitive life, while St. Francis’s and St. Clare’s voluntary poverty and treatment of animals as sacred were reminiscent of the lifestyle and worldview of most gathering-and-hunting peoples. If Nazism was atavistic, it was only highly selectively so.
A consideration of these historical ironies is useful in helping us isolate the essentials of true primitivist spirituality - which include spontaneity, mutual aid, encouragement of natural diversity, love of nature, and compassion for others. As spiritual teachers have always insisted, it is the spirit (or state of consciousness) that is important, not the form (names, ideologies, and techniques). While from the standpoint of Teilhard de Chardin’s idea of spiritual evolutionism, primitivist spirituality may initially appear anti-evolutionary or regressive, the essentials we have cited are timeless and trans-evolutionary - they are available at all stages, at all times, for all people. It is when we cease to see civilization in terms of theories of cultural evolution and see it merely as one of several possible forms of social organization that we begin to understand why religion can be liberating, enlightening, and empowering when it holds consistently to primitivist ideals; or deadening and oppressive when it is co-opted to serve the interests of power.
Economics: Free or Unaffordable?
At its base, economics is about how people relate with the land and with one another in the process of fulfilling their material wants and needs. In the most primitive societies, these relations are direct and straightforward. Land, shelter, and food are free. Everything is shared, there are no rich people or poor people, and happiness has little to do with accumulating material possessions. The primitive lives in relative abundance (all needs and wants are easily met) and has plenty of leisure time.
Civilization, in contrast, straddles two economic pillars - technological innovation and the marketplace. “Technology” here includes everything from the plow to the nuclear reactor - all are means to more efficiently extract energy and resources from nature. But efficiency implies the reification of time, and so civilization always brings with it a preoccupation with past and future; eventually the present moment nearly vanishes from view. The elevation of efficiency over other human values is epitomized in the factory - the automated workplace - in which the worker becomes merely an appendage of the machine, a slave to clocks and wages.
The market is civilization’s means of equating dissimilar things through a medium of exchange. As we grow accustomed to valuing everything according to money, we tend to lose a sense of the uniqueness of things. What, after all, is an animal worth, or a mountain, or a redwood tree, or an hour of human life? The market gives us a numerical answer based on scarcity and demand. To the degree that we believe that such values have meaning, we live in a world that is desacralized and desensitized, without heart or spirit.
We can get some idea of ways out of our ecologically ruinous, humanly deadening economic cage by examining not only primitive lifestyles, but the proposals of economist E. F. Schumacher, the experiences of people in utopian communities in which technology and money are marginalized, and the lives of individuals who have adopted an attitude of voluntary simplicity.
Government: Bottom Up or Top Down?
In the most primitive human societies there are no leaders, bosses, politics, laws, crime, or taxes. There is often little division of labor between women and men, and where such division exists both gender’s contributions are often valued more or less equally. Probably as a result, many foraging peoples are relatively peaceful (anthropologist Richard Lee found that “the !Kung [Bushmen of southern Africa] hate fighting, and think anybody who fought would be stupid”).
With agriculture usually come division of labor, increased sexual inequality, and the beginnings of social hierarchy. Priests, kings, and organized, impersonal warfare all seem to come together in one package. Eventually, laws and borders define the creation of the fully fledged state. The state as a focus of coercion and violence has reached its culmination in the 19th and 20th centuries in colonialism, fascism, and Stalinism. Even the democratic industrial state functions essentially as an instrument of multinational corporate-style colonial oppression and domestic enslavement, its citizens merely being given the choice between selected professional bureaucrats representing political parties with slightly varying agendas for the advancement of corporate power.
Beginning with William Godwin in the early 19th century, anarchist social philosophers have offered a critical counterpoint to the increasingly radical statism of most of the world’s civilized political leaders. The core idea of anarchism is that human beings are fundamentally sociable; left to themselves, they tend to cooperate to their mutual benefit. There will always be exceptions, but these are
best dealt with informally and on an individual basis. Many anarchists cite the Athenian polis, the “sections” in Paris during the French Revolution, the New England town meetings of the 18th century, the popular assemblies in Barcelona in the late 1930s, and the Paris general strike of 1968 as positive examples of anarchy in action. They point to the possibility of a kind of social ecology, in which diversity and spontaneity are permitted to flourish unhindered both in human affairs and in Nature.
While critics continue to describe anarchism as a practical failure, organizational and systems theorists Tom Peters and Peter Senge are advocating the transformation of hierarchical, bureaucratized organizations into more decentralized, autonomous, spontaneous ones. This transformation is presently underway in - of all places - the very multinational corporations that form the backbone of industrial civilization.
Civilization and Nature.
Civilized people are accustomed to an anthropocentric view of the world. Our interest in the environment is utilitarian: it is of value because it is of use (or potential use) to human beings - if only as a place for camping and recreation.
Primitive peoples, in contrast, tended to see nature as intrinsically meaningful. In many cultures prohibitions surrounded the overhunting of animals or the felling of trees. The aboriginal peoples of Australia believed that their primary purpose in the cosmic scheme of things was to take care of the land, which meant performing ceremonies for the periodic renewal of plant and animal species, and of the landscape itself.
The difference in effects between the anthropocentric and ecocentric worldviews is incalculable. At present, we human beings - while considering ourselves the most intelligent species on the planet - are engaged in the most unintelligent enterprise imaginable: the destruction of our own natural life-support system. We need here only mention matters such as the standard treatment of factory-farmed domesticated food animals, the destruction of soils, the pollution of air and water, and the extinctions of wild species, as these horrors are well documented. It seems unlikely that these could ever have arisen but for an entrenched and ever-deepening trend of thinking that separates humanity from its natural context and denies inherent worth to non-human nature.
The origin and growth of this tendency to treat nature as an object separate from ourselves can be traced to the Neolithic revolution, and through the various stages of civilization’s intensification and growth. One can also trace the countercurrent to this tendency from the primitivism of the early Taoists to that of today’s deep ecologists, ecofeminists, and bioregionalists.
How We Compensate for Our Loss of Nature.
How do we make up for the loss of our primitive way of life? Psychotherapy, exercise and diet programs, the vacation and entertainment industries, and social welfare programs are necessitated by civilized, industrial lifestyles. The cumulative cost of these compensatory efforts is vast; yet in many respects they are only
palliative.
The medical community now tells us that our modern diet of low-fiber, high-fat processed foods is disastrous to our health. But what exactly is the cost - in terms of hospital stays, surgeries, premature deaths, etc.? A rough but conservative estimate runs into the tens of billions of dollars per year in North America alone. At the forefront of the “wellness” movement are advocates of natural foods, exercise programs (including hiking and backpacking), herbalism, and other therapies that aim specifically to bring overcivilized individuals back in touch with the innate source of health within their own stressed and repressed bodies.
Current approaches in psychology aim to retrieve lost portions of the primitive psyche via “inner child” work, through which adults compensate for alienated childhoods; or men’s and women’s vision quests, through which civilized people seek to access the “wild man” or “wild woman” within.
All of these physically, psychologically, and even spiritually-oriented efforts are helpful antidotes for the distress of civilization. One must wonder, however, whether it wouldn’t be better simply to stop creating the problems that these programs and therapies are intended to correct.
Questions and Objections.
Isn’t civilization simply the inevitable expression of the evolutionary urge as it is translated through human society? Isn’t primitivism therefore regressive?
We are accustomed to thinking of the history of Western civilization as an inevitable evolutionary progression. But this implies that all the world’s peoples who didn’t spontaneously develop civilizations of their own were less highly evolved than ourselves, or simply “backward.” Not all anthropologists who have spent time with such peoples think this way. Indeed, according to the cultural materialist school of thought, articulated primarily by Marvin Harris, social change in the direction of technological innovation and social stratification is fueled not so much by some innate evolutionary urge as by crises brought on by overpopulation and resource exhaustion.
Wasn’t primitive life terrible? Would we really want to go back to hunting and gathering, living without modern comforts and conveniences?
Putting an urban person in the wilderness without comforts and conveniences would be as cruel as abandoning a domesticated pet by the roadside. Even if the animal survived, it would be miserable. And we would probably be miserable too, if the accouterments of civilization were abruptly withdrawn from us. Yet the wild cousins of our hypothetical companion animal - whether a parrot, a canine, or a feline - live quite happily away from houses and packaged pet food and resist our efforts to capture and domesticate them, just as primitive peoples live quite happily without civilization and often resist its imposition. Clearly, animals (including people) can adapt either to wild or domesticated ways of life over the course of several generations, while adult individuals tend to be much less adaptable. In the view of many of its proponents, primitivism implies a direction of social change over time, as opposed to an instantaneous, all-or-nothing choice. We in the industrial world have gradually accustomed ourselves to a way of life that appears to be leading toward a universal biological holocaust. The question is, shall we choose to gradually accustom ourselves to another way of life - one that more successfully integrates human purposes with ecological imperatives - or shall we cling to our present choices to the bitter end?
Obviously, we cannot turn back the clock. But we are at a point in history where we not only can, but must pick and choose among all the present and past elements of human culture to find those that are most humane and sustainable. While the new culture we will create by doing so will not likely represent simply an immediate return to wild food gathering, it could restore much of the freedom, naturalness, and spontaneity that we have traded for civilization’s artifices, and it could include new versions of cultural forms with roots in humanity’s remotest past. We need not slavishly imitate the past; we might, rather, be inspired by the best examples of human adaptation, past and present. Instead of “going back,” we should think of this process as “getting back on track.”
Haven’t we gained important knowledge and abilities through civilization? Wouldn’t renouncing these advances be stupid and short-sighted?
If human beings are inherently mostly good, sociable, and creative, it is inevitable that much of what we have done in the course of the development of civilization should be worth keeping, even if the enterprise as a whole was skewed. But how do we decide what to keep? Obviously, we must agree upon criteria. I would suggest that our first criterion must be ecological sustainability. What activities can be pursued across many generations with minimal environmental damage? A second criterion might be, What sorts of activities promote - rather than degrade - human dignity and freedom?
If human beings are inherently good, then why did we make the “mistake” of creating civilization? Aren’t the two propositions (human beings are good, civilization is bad) contradictory?
Only if taken as absolutes. Human nature is malleable, its qualities changing somewhat according to the natural and social environment. Moreover, humankind is not a closed system. We exist within a natural world that is, on the whole, “good,” but that is subject to rare catastrophes. Perhaps the initial phases of civilization were humanity’s traumatized response to overwhelming global cataclysms accompanying and following the end of the Pleistocene. Kingship and warfare may have originated as survival strategies. Then, perhaps civilization itself became a mechanism for re-traumatizing each new generation, thus preserving and regenerating its own psycho-social basis.
What practical suggestions for the future stem from primitivism? We cannot all revert to gathering and hunting today because there are just too many of us. Can primitivism offer a practical design for living?
No philosophy or “-ism” is a magical formula for the solution of all human problems. Primitivism doesn’t offer easy answers, but it does suggest an alternative direction or set of values. For many centuries, civilization has been traveling in the direction of artificiality, control, and domination. Primitivism tells us that there is an inherent limit to our continued movement in that direction, and that at some point we must begin to choose to readapt ourselves to nature. The point of a primitivist critique of civilization is not necessarily to insist on an absolute rejection of every aspect of modern life, but to assist in clarifying issues so that we can better understand the trade offs we are making now, deepen the process of renegotiating our personal bargains with nature, and thereby contribute to the reframing of our society’s collective covenants.
Some Concluding Thoughts.
In any discussion of primitivism we must keep in mind civilization’s “good” face - the one characterized (in Lewis Mumford’s words) by “the invention and keeping of the written record, the growth of visual and musical arts, the effort to widen the circle of communication and economic intercourse far beyond the range of any local community: ultimately the purpose to make available to all [people] the discoveries and inventions and creations, the works of art and thought, the values and purposes that any single group has discovered.”
Civilization brings not only comforts, but also the opportunity to think the thoughts of Plato or Thoreau, to travel to distant places, and to live under the protection of a legal system that guarantees certain rights. How could we deny the worth of these things?
Naturally, we would like to have it all; we would like to preserve civilization’s perceived benefits while restraining its destructiveness. But we haven’t found a way to do that yet. And it is unlikely that we will while we are in denial about what we have left behind, and about the likely consequences of what we are doing now.
While I advocate taking a critical look at civilization, I am not suggesting that we are now in position to render a final judgment on it. It is entirely possible that we are standing on the threshold of a cultural transformation toward a way of life characterized by relatively higher degrees of contentment, creativity, justice, and sustainability than have been known in any human society heretofore. If we are able to follow this transformation through, and if we call the result “civilization,” then we will surely be entitled to declare civilization a resounding success
(A paper presented at the 24th annual meeting of the International Society for the Comparative Study of Civilizations, June 15, 1995.)
I. Prologue
Having been chosen - whether as devil’s advocate or sacrificial lamb, I am not sure - to lead off this discussion on the question, “Was Civilization a Mistake?”, I would like to offer some preliminary thoughts.
From the viewpoint of any non-civilized person, this consideration would appear to be steeped in irony. Here we are, after all, some of the most civilized people on the planet, discussing in the most civilized way imaginable whether civilization itself might be an error. Most of our fellow civilians would likely find our discussion, in addition to being ironic, also disturbing and pointless: after all, what person who has grown up with cars, electricity, and television would relish the idea of living without a house, and of surviving only on wild foods?
Nevertheless, despite the possibility that at least some of our remarks may be ironic, disturbing, and pointless, here we are. Why? I can only speak for myself. In my own intellectual development I have found that a critique of civilization is virtually inescapable for two reasons.
The first has to do with certain deeply disturbing trends in the modern world. We are, it seems, killing the planet. Revisionist “wise use” advocates tell us there is nothing to worry about; dangers to the environment, they say, have been wildly exaggerated. To me this is the most blatant form of wishful thinking. By most estimates, the oceans are dying, the human population is expanding far beyond the long-term carrying capacity of the land, the ozone layer is disappearing, and the global climate is showing worrisome signs of instability. Unless drastic steps are taken, in fifty years the vast majority of the world’s population will likely be existing in conditions such that the lifestyle of virtually any undisturbed primitive tribe would be paradise by comparison.
Now, it can be argued that civilization per se is not at fault, that the problems we face have to do with unique economic and historical circumstances. But we should at least consider the possibility that our modern industrial system represents the flowering of tendencies that go back quite far. This, at any rate, is the implication of recent assessments of the ecological ruin left in the wake of the Roman, Mesopotamian, Chinese, and other prior civilizations. Are we perhaps repeating their errors on a gargantuan scale?
If my first reason for criticizing civilization has to do with its effects on the environment, the second has to do with its impact on human beings. As civilized people, we are also domesticated. We are to primitive peoples as cows and sheep are to bears and eagles. On the rental property where I live in California my landlord keeps two white domesticated ducks. These ducks have been bred to have wings so small as to prevent them from flying. This is a convenience for their keepers, but compared to wild ducks these are pitiful creatures.
Many primal peoples tend to view us as pitiful creatures, too - though powerful and dangerous because of our technology and sheer numbers. They regard civilization as a sort of social disease. We civilized people appear to act as though we were addicted to a powerful drug - a drug that comes in the forms of money, factory-made goods, oil, and electricity. We are helpless without this drug, so we have come to see any threat to its supply as a threat to our very existence. Therefore we are easily manipulated - by desire (for more) or fear (that what we have will be taken away) - and powerful commercial and political interests have learned to orchestrate our desires and fears in order to achieve their own purposes of profit and control. If told that the production of our drug involves slavery, stealing, and murder, or the ecological equivalents, we try to ignore the news so as not to have to face an intolerable double bind.
Since our present civilization is patently ecologically unsustainable in its present form, it follows that our descendants will be living very differently in a few decades, whether their new way of life arises by conscious choice or by default. If humankind is to choose its path deliberately, I believe that our deliberations should include a critique of civilization itself, such as we are undertaking here. The question implicit in such a critique is, What have we done poorly or thoughtlessly in the past that we can do better now? It is in this constructive spirit that I offer the comments that follow.
II. Civilization and Primitivism
What Is Primitivism?
The image of a lost Golden Age of freedom and innocence is at the heart of all the world’s religions, is one of the most powerful themes in the history of human thought, and is the earliest and most characteristic expression of primitivism - the perennial belief in the necessity of a return to origins.
As a philosophical idea, primitivism has had as its proponents Lao Tze, Rousseau, and Thoreau, as well as most of the pre-Socratics, the medieval Jewish and Christian theologians, and 19th- and 20th-century anarchist social theorists, all of whom argued (on different bases and in different ways) the superiority of a simple life close to nature. More recently, many anthropologists have expressed admiration for the spiritual and material advantages of the ways of life of the world’s most “primitive” societies - the surviving gathering-and-hunting peoples who now make up less than one hundredth of one percent of the world’s population.
Meanwhile, as civilization approaches a crisis precipitated by overpopulation and the destruction of the ecological integrity of the planet, primitivism has enjoyed a popular resurgence, by way of increasing interest in shamanism, tribal customs, herbalism, radical environmentalism, and natural foods. There is a widespread (though by no means universally shared) sentiment that civilization has gone too far in its domination of nature, and that in order to survive - or, at least, to live with satisfaction - we must regain some of the spontaneity and naturalness of our early ancestors.
What Is Civilization?
There are many possible definitions of the word civilization. Its derivation - from the Latin civis, “citizen” or “city dweller” - suggests that a minimum definition would be, “urban culture.” Civilization also seems to imply writing, division of labor, agriculture, organized warfare, growth of population, and social stratification.
Yet the latest evidence calls into question the idea that these traits always go together. For example, Elizabeth Stone and Paul Zimansky’s assessment of power relations in the Mesopotamian city of Mashkan-shapir (published in the April 1995 Scientific American) suggests that urban culture need not imply class divisions. Their findings seem to show that civilization in its earliest phase was free of these. Still, for the most part the history of civilization in the Near East, the Far East, and Central America is also the history of kingship, slavery, conquest, agriculture, overpopulation, and environmental ruin. And these traits continue in civilization’s most recent phases - the industrial state and the global market - though now the state itself takes the place of the king, and slavery becomes wage labor and de facto colonialism administered through multinational corporations. Meanwhile, the mechanization of production (which began with agriculture) is overtaking nearly every avenue of human creativity, population is skyrocketing, and organized warfare is resulting in unprecedented levels of bloodshed.
Perhaps, if some of these undesirable traits were absent from the very first cities, I should focus my critique on “Empire Culture” instead of the broader target of “civilization.” However, given how little we still know about the earliest urban centers of the Neolithic era, it is difficult as yet to draw a clear distinction between the two terms.
III. Primitivism Versus Civilization.
Wild Self/Domesticated Self.
People are shaped from birth by their cultural surroundings and by their interactions with the people closest to them. Civilization manipulates these primary relationships in such a way as to domesticate the infant - that is, so as to accustom it to life in a social structure one step removed from nature. The actual process of domestication is describable as follows, using terms borrowed from the object relations school of psychology. The infant lives entirely in the present moment in a state of pure trust and guilelessness, deeply bonded with her mother. But as she grows, she discovers that her mother is a separate entity with her own priorities and limits. The infant’s experience of relationship changes from one of spontaneous trust to one that is suffused with need and longing. This creates a gap between Self and Other in the consciousness of the child, who tries to fill this deepening rift with transitional objects - initially, perhaps a teddy bear; later, addictions and beliefs that serve to fill the psychic gap and thus provide a sense of security. It is the powerful human need for transitional objects that drives individuals in their search for property and power, and that generates bureaucracies and technologies as people pool their efforts.
This process does not occur in the same way in the case of primitive childrearing, where the infant is treated with indulgence, is in constant physical contact with a caregiver throughout infancy, and later undergoes rites of passage. In primal cultures the need for transitional objects appears to be minimized. Anthropological and psychological research converge to suggest that many of civilized people’s emotional ills come from our culture’s abandonment of natural childrearing methods and initiatory rites and its systematic substitution of alienating pedagogical practices from crib through university.
Health: Natural or Artificial?
In terms of health and quality of life, civilization has been a mitigated disaster. S. Boyd Eaton, M.D., et al., argued in The Paleolithic Prescription (1988) that pre-agricultural peoples enjoyed a generally healthy way of life, and that cancer, heart disease, strokes, diabetes, emphysema, hypertension, and cirrhosis - which together account for 75 percent of all mortality in industrialized nations - are caused by our civilized lifestyles. In terms of diet and exercise, preagricultural lifestyles showed a clear superiority to those of agricultural and civilized peoples.
Much-vaunted increases in longevity in civilized populations have resulted not so much from wonder drugs as from better sanitation - a corrective for conditions created by the overcrowding of cities - and from reductions in infant mortality. It is true that many lives have been spared by modern antibiotics. Yet antibiotics also appear responsible for the evolution of resistant strains of microbes, which health officials now fear could produce unprecedented epidemics in the next century.
The ancient practice of herbalism, evidence of which dates back at least 60,000 years, is practiced in instinctive fashion by all higher animals. Herbal knowledge formed the basis of modern medicine and remains in many ways superior to it. In countless instances, modern synthetic drugs have replaced herbs not because they are more effective or safer, but because they are more profitable to manufacture.
Other forms of “natural” healing - massage, the “placebo effect,” the use of meditation and visualization - are also being shown effective. Medical doctors Bernie Siegel and Deepak Chopra are critical of mechanized medicine and say that the future of the healing professions lies in the direction of attitudinal and natural therapies.
Spirituality: Raw or Cooked?
Spirituality means different things to different people - humility before a higher power or powers; compassion for the suffering of others; obedience to a lineage or tradition; a felt connection with the Earth or with Nature; evolution toward “higher” states of consciousness; or the mystical experience of oneness with all life or with God. With regard to each of these fundamental ways of defining or experiencing the sacred, spontaneous spirituality seems to become regimented, dogmatized, even militarized, with the growth of civilization. While some of the founders of world religions were intuitive primitivists (Jesus, Lao Tze, the Buddha), their followers have often fostered the growth of dominance hierarchies.
The picture is not always simple, though. The thoroughly civilized Roman Catholic Church produced two of the West’s great primitivists - St. Francis and St. Clare; while the neo-shamanic, vegetarian, and herbalist movements of early 20th century Germany attracted arch-authoritarians Heinrich Himmler and Adolf Hitler. Of course, Nazism’s militarism and rigid dominator organization were completely alien to primitive life, while St. Francis’s and St. Clare’s voluntary poverty and treatment of animals as sacred were reminiscent of the lifestyle and worldview of most gathering-and-hunting peoples. If Nazism was atavistic, it was only highly selectively so.
A consideration of these historical ironies is useful in helping us isolate the essentials of true primitivist spirituality - which include spontaneity, mutual aid, encouragement of natural diversity, love of nature, and compassion for others. As spiritual teachers have always insisted, it is the spirit (or state of consciousness) that is important, not the form (names, ideologies, and techniques). While from the standpoint of Teilhard de Chardin’s idea of spiritual evolutionism, primitivist spirituality may initially appear anti-evolutionary or regressive, the essentials we have cited are timeless and trans-evolutionary - they are available at all stages, at all times, for all people. It is when we cease to see civilization in terms of theories of cultural evolution and see it merely as one of several possible forms of social organization that we begin to understand why religion can be liberating, enlightening, and empowering when it holds consistently to primitivist ideals; or deadening and oppressive when it is co-opted to serve the interests of power.
Economics: Free or Unaffordable?
At its base, economics is about how people relate with the land and with one another in the process of fulfilling their material wants and needs. In the most primitive societies, these relations are direct and straightforward. Land, shelter, and food are free. Everything is shared, there are no rich people or poor people, and happiness has little to do with accumulating material possessions. The primitive lives in relative abundance (all needs and wants are easily met) and has plenty of leisure time.
Civilization, in contrast, straddles two economic pillars - technological innovation and the marketplace. “Technology” here includes everything from the plow to the nuclear reactor - all are means to more efficiently extract energy and resources from nature. But efficiency implies the reification of time, and so civilization always brings with it a preoccupation with past and future; eventually the present moment nearly vanishes from view. The elevation of efficiency over other human values is epitomized in the factory - the automated workplace - in which the worker becomes merely an appendage of the machine, a slave to clocks and wages.
The market is civilization’s means of equating dissimilar things through a medium of exchange. As we grow accustomed to valuing everything according to money, we tend to lose a sense of the uniqueness of things. What, after all, is an animal worth, or a mountain, or a redwood tree, or an hour of human life? The market gives us a numerical answer based on scarcity and demand. To the degree that we believe that such values have meaning, we live in a world that is desacralized and desensitized, without heart or spirit.
We can get some idea of ways out of our ecologically ruinous, humanly deadening economic cage by examining not only primitive lifestyles, but the proposals of economist E. F. Schumacher, the experiences of people in utopian communities in which technology and money are marginalized, and the lives of individuals who have adopted an attitude of voluntary simplicity.
Government: Bottom Up or Top Down?
In the most primitive human societies there are no leaders, bosses, politics, laws, crime, or taxes. There is often little division of labor between women and men, and where such division exists both genders’ contributions are often valued more or less equally. Probably as a result, many foraging peoples are relatively peaceful (anthropologist Richard Lee found that “the !Kung [Bushmen of southern Africa] hate fighting, and think anybody who fought would be stupid”).
With agriculture usually come division of labor, increased sexual inequality, and the beginnings of social hierarchy. Priests, kings, and organized, impersonal warfare all seem to come together in one package. Eventually, laws and borders define the creation of the fully fledged state. The state as a focus of coercion and violence has reached its culmination in the 19th and 20th centuries in colonialism, fascism, and Stalinism. Even the democratic industrial state functions essentially as an instrument of multinational corporate-style colonial oppression and domestic enslavement, its citizens merely being given the choice between selected professional bureaucrats representing political parties with slightly varying agendas for the advancement of corporate power.
Beginning with William Godwin in the late 18th century, anarchist social philosophers have offered a critical counterpoint to the increasingly radical statism of most of the world’s civilized political leaders. The core idea of anarchism is that human beings are fundamentally sociable; left to themselves, they tend to cooperate to their mutual benefit. There will always be exceptions, but these are best dealt with informally and on an individual basis. Many anarchists cite the Athenian polis, the “sections” in Paris during the French Revolution, the New England town meetings of the 18th century, the popular assemblies in Barcelona in the late 1930s, and the Paris general strike of 1968 as positive examples of anarchy in action. They point to the possibility of a kind of social ecology, in which diversity and spontaneity are permitted to flourish unhindered both in human affairs and in Nature.
While critics continue to describe anarchism as a practical failure, organizational and systems theorists Tom Peters and Peter Senge are advocating the transformation of hierarchical, bureaucratized organizations into more decentralized, autonomous, spontaneous ones. This transformation is presently underway in - of all places - the very multinational corporations that form the backbone of industrial civilization.
Civilization and Nature.
Civilized people are accustomed to an anthropocentric view of the world. Our interest in the environment is utilitarian: it is of value because it is of use (or potential use) to human beings - if only as a place for camping and recreation.
Primitive peoples, in contrast, tended to see nature as intrinsically meaningful. In many cultures prohibitions surrounded the overhunting of animals or the felling of trees. The aboriginal peoples of Australia believed that their primary purpose in the cosmic scheme of things was to take care of the land, which meant performing ceremonies for the periodic renewal of plant and animal species, and of the landscape itself.
The difference in effects between the anthropocentric and ecocentric worldviews is incalculable. At present, we human beings - while considering ourselves the most intelligent species on the planet - are engaged in the most unintelligent enterprise imaginable: the destruction of our own natural life-support system. We need here only mention matters such as the standard treatment of factory-farmed domesticated food animals, the destruction of soils, the pollution of air and water, and the extinctions of wild species, as these horrors are well documented. It seems unlikely that these could ever have arisen but for an entrenched and ever-deepening trend of thinking that separates humanity from its natural context and denies inherent worth to non-human nature.
The origin and growth of this tendency to treat nature as an object separate from ourselves can be traced to the Neolithic revolution, and through the various stages of civilization’s intensification and growth. One can also trace the countercurrent to this tendency from the primitivism of the early Taoists to that of today’s deep ecologists, ecofeminists, and bioregionalists.
How We Compensate for Our Loss of Nature.
How do we make up for the loss of our primitive way of life? Psychotherapy, exercise and diet programs, the vacation and entertainment industries, and social welfare programs are necessitated by civilized, industrial lifestyles. The cumulative cost of these compensatory efforts is vast; yet in many respects they are only palliative.
The medical community now tells us that our modern diet of low-fiber, high-fat processed foods is disastrous to our health. But what exactly is the cost - in terms of hospital stays, surgeries, premature deaths, etc.? A rough but conservative estimate runs into the tens of billions of dollars per year in North America alone. At the forefront of the “wellness” movement are advocates of natural foods, exercise programs (including hiking and backpacking), herbalism, and other therapies that aim specifically to bring overcivilized individuals back in touch with the innate source of health within their own stressed and repressed bodies.
Current approaches in psychology aim to retrieve lost portions of the primitive psyche via “inner child” work, through which adults compensate for alienated childhoods; or men’s and women’s vision quests, through which civilized people seek to access the “wild man” or “wild woman” within.
All of these physically, psychologically, and even spiritually oriented efforts are helpful antidotes to the distress of civilization. One must wonder, however, whether it wouldn’t be better simply to stop creating the problems that these programs and therapies are intended to correct.
Questions and Objections.
Isn’t civilization simply the inevitable expression of the evolutionary urge as it is translated through human society? Isn’t primitivism therefore regressive?
We are accustomed to thinking of the history of Western civilization as an inevitable evolutionary progression. But this implies that all the world’s peoples who didn’t spontaneously develop civilizations of their own were less highly evolved than ourselves, or simply “backward.” Not all anthropologists who have spent time with such peoples think this way. Indeed, according to the cultural materialist school of thought, articulated primarily by Marvin Harris, social change in the direction of technological innovation and social stratification is fueled not so much by some innate evolutionary urge as by crises brought on by overpopulation and resource exhaustion.
Wasn’t primitive life terrible? Would we really want to go back to hunting and gathering, living without modern comforts and conveniences?
Putting an urban person in the wilderness without comforts and conveniences would be as cruel as abandoning a domesticated pet by the roadside. Even if the animal survived, it would be miserable. And we would probably be miserable too, if the accouterments of civilization were abruptly withdrawn from us. Yet the wild cousins of our hypothetical companion animal - whether a parrot, a canine, or a feline - live quite happily away from houses and packaged pet food and resist our efforts to capture and domesticate them, just as primitive peoples live quite happily without civilization and often resist its imposition. Clearly, animals (including people) can adapt either to wild or domesticated ways of life over the course of several generations, while adult individuals tend to be much less adaptable. In the view of many of its proponents, primitivism implies a direction of social change over time, as opposed to an instantaneous, all-or-nothing choice. We in the industrial world have gradually accustomed ourselves to a way of life that appears to be leading toward a universal biological holocaust. The question is, shall we choose to gradually accustom ourselves to another way of life - one that more successfully integrates human purposes with ecological imperatives - or shall we cling to our present choices to the bitter end?
Obviously, we cannot turn back the clock. But we are at a point in history where we not only can, but must pick and choose among all the present and past elements of human culture to find those that are most humane and sustainable. While the new culture we will create by doing so will not likely represent simply an immediate return to wild food gathering, it could restore much of the freedom, naturalness, and spontaneity that we have traded for civilization’s artifices, and it could include new versions of cultural forms with roots in humanity’s remotest past. We need not slavishly imitate the past; we might, rather, be inspired by the best examples of human adaptation, past and present. Instead of “going back,” we should think of this process as “getting back on track.”
Haven’t we gained important knowledge and abilities through civilization? Wouldn’t renouncing these advances be stupid and short-sighted?
If human beings are inherently mostly good, sociable, and creative, it is inevitable that much of what we have done in the course of the development of civilization should be worth keeping, even if the enterprise as a whole was skewed. But how do we decide what to keep? Obviously, we must agree upon criteria. I would suggest that our first criterion must be ecological sustainability. What activities can be pursued across many generations with minimal environmental damage? A second criterion might be, What sorts of activities promote - rather than degrade - human dignity and freedom?
If human beings are inherently good, then why did we make the “mistake” of creating civilization? Aren’t the two propositions (human beings are good, civilization is bad) contradictory?
Only if taken as absolutes. Human nature is malleable, its qualities changing somewhat according to the natural and social environment. Moreover, humankind is not a closed system. We exist within a natural world that is, on the whole, “good,” but that is subject to rare catastrophes. Perhaps the initial phases of civilization were humanity’s traumatized response to overwhelming global cataclysms accompanying and following the end of the Pleistocene. Kingship and warfare may have originated as survival strategies. Then, perhaps civilization itself became a mechanism for re-traumatizing each new generation, thus preserving and regenerating its own psycho-social basis.
What practical suggestions for the future stem from primitivism? We cannot all revert to gathering and hunting today because there are just too many of us. Can primitivism offer a practical design for living?
No philosophy or “-ism” is a magical formula for the solution of all human problems. Primitivism doesn’t offer easy answers, but it does suggest an alternative direction or set of values. For many centuries, civilization has been traveling in the direction of artificiality, control, and domination. Primitivism tells us that there is an inherent limit to our continued movement in that direction, and that at some point we must begin to choose to readapt ourselves to nature. The point of a primitivist critique of civilization is not necessarily to insist on an absolute rejection of every aspect of modern life, but to assist in clarifying issues so that we can better understand the trade-offs we are making now, deepen the process of renegotiating our personal bargains with nature, and thereby contribute to the reframing of our society’s collective covenants.
Some Concluding Thoughts.
In any discussion of primitivism we must keep in mind civilization’s “good” face - the one characterized (in Lewis Mumford’s words) by “the invention and keeping of the written record, the growth of visual and musical arts, the effort to widen the circle of communication and economic intercourse far beyond the range of any local community: ultimately the purpose to make available to all [people] the discoveries and inventions and creations, the works of art and thought, the values and purposes that any single group has discovered.”
Civilization brings not only comforts, but also the opportunity to think the thoughts of Plato or Thoreau, to travel to distant places, and to live under the protection of a legal system that guarantees certain rights. How could we deny the worth of these things?
Naturally, we would like to have it all; we would like to preserve civilization’s perceived benefits while restraining its destructiveness. But we haven’t found a way to do that yet. And it is unlikely that we will while we are in denial about what we have left behind, and about the likely consequences of what we are doing now.
While I advocate taking a critical look at civilization, I am not suggesting that we are now in a position to render a final judgment on it. It is entirely possible that we are standing on the threshold of a cultural transformation toward a way of life characterized by relatively higher degrees of contentment, creativity, justice, and sustainability than have been known in any human society heretofore. If we are able to follow this transformation through, and if we call the result “civilization,” then we will surely be entitled to declare civilization a resounding success.
"Health and the Rise of Civilization" By Anthropologist Mark Nathan Cohen
Health and
the Rise of Civilization
Mark Nathan Cohen
(excerpt from book of same
title: pp. 131-141)
There is no evidence either from ethnographic accounts or archaeological excavations to suggest that rates of accidental trauma or interpersonal violence declined substantially with the adoption of more civilized forms of political organization. In fact, some evidence from archaeological sites and from historical sources suggests the opposite.
Evidence from both ethnographic descriptions of contemporary hunters and the archaeological record suggests that the major trend in the quality and quantity of human diets has been downward. Contemporary hunter-gatherers, although lean and occasionally hungry, enjoy levels of caloric intake that compare favorably with national averages for many major countries of the Third World and that are generally above those of the poor in the modern world. Even the poorest recorded hunter-gatherer group enjoys a caloric intake superior to that of impoverished contemporary urban populations. Prehistoric hunter-gatherers appear to have enjoyed richer environments and to have been better nourished than most subsequent populations (primitive and civilized alike). Whenever we can glimpse the remains of anatomically modern human beings who lived in early prehistoric environments still rich in large game, they are often relatively large people displaying comparatively few signs of qualitative malnutrition. The subsequent trend in human size and stature is irregular but is more often downward than upward in most parts of the world until the nineteenth or twentieth century.
The diets of hunter-gatherers appear to be comparatively well balanced, even when they are lean. Ethnographic accounts of contemporary groups suggest that protein intakes are commonly quite high, comparable to those of affluent modern groups and substantially above world averages. Protein deficiency is almost unknown in these groups, and vitamin and mineral deficiencies are rare and usually mild in comparison to rates reported from many Third World populations. Archaeological evidence suggests that specific deficiencies, including those of iron (anemia), vitamin D (rickets), and, more controversially, vitamin C (scurvy), as well as such general signs of protein-calorie malnutrition as childhood growth retardation, have generally become more common in history rather than declining.
Among farmers, increasing population required more and more frequent cropping of land and the use of more and more marginal soils, both of which further diminished returns for labor. This trend may or may not have been offset by such technological improvements in farming as the use of metal tools, specialization of labor, and efficiencies associated with large-scale production that tend to increase individual productivity as well as total production.
But whether the efficiency of farming increased or declined, the nutrition of individuals appears often to have declined for any of several reasons: because increasingly complex society placed new barriers between individuals and flexible access to resources, because trade often siphoned resources away, because some segments of the society increasingly had only indirect access to food, because investments in new technology to improve production focused power in the hands of elites so that their benefits were not widely shared, and perhaps because of the outright exploitation and deprivation of some segments of society. In addition, more complex societies have had to devote an increasing amount of their productive energy to intergroup competition, the maintenance of intragroup order, the celebration of the community itself, and the privilege of the elite, rather than focusing on the biological maintenance of individuals.
In any case, the popular impression that nutrition has improved through history reflects twentieth-century affluence and seems to have as much to do with class privilege as with an overall increase in productivity. Neither the lower classes of prehistoric and classical empires nor the contemporary Third World have shared in the improvement in caloric intake; consumption of animal protein seems to have declined for all but privileged groups.
There is no clear evidence that the evolution of civilization has reduced the risk of resource failure and starvation as successfully as we like to believe. Episodes of starvation occur among hunter-gatherer bands because natural resources fail and because they have limited ability either to store or to transport food. The risk of starvation is offset, in part, by the relative freedom of hunter-gatherers to move around and find new resources, but it is clear that with limited technology of transport they can move neither far nor fast enough to escape severe fluctuations in natural resources. But each of the strategies that sedentary and civilized populations use to reduce or eliminate food crises generates costs and risks as well as benefits. The supplementation of foraging economies by small-scale cultivation may help to reduce the risk of seasonal hunger, particularly in crowded and depleted environments. The manipulation and protection of species involved in farming may help to reduce the risk of crop failure. The storage of food in sedentary communities may also help protect the population against seasonal shortages or crop failure. But these advantages may be outweighed by the greater vulnerability that domestic crop species often display toward climatic fluctuations or other natural hazards, a vulnerability that is then exacerbated by the specialized nature or narrow focus of many agricultural systems. The advantages are also offset by the loss of mobility that results from agriculture and storage, the limits and failures of primitive storage systems, and the vulnerability of sedentary communities to political expropriation of their stored resources.
Although the intensification of agriculture expanded production, it may have increased risk in both natural and cultural terms by increasing the risk of soil exhaustion in central growing areas and of crop failure in marginal areas. Such investments as irrigation to maintain or increase productivity may have helped to protect the food supply, but they generated new risks of their own and introduced new kinds of instability by making production more vulnerable to economic and political forces that could disrupt or distort the pattern of investment. Similarly, specialization of production increased the range of products that could be made and increased the overall efficiency of production, but it also placed large segments of the population at the mercy of fickle systems of exchange or equally fickle social and political entitlements.
Modern storage and transport may reduce vulnerability to natural crises, but they increase vulnerability to disruption of the technological or political and economic basis of the storage and transport systems themselves. Transport and storage systems are difficult and expensive to maintain. Governments that have the power to move large amounts of food long distances to offset famine and the power to stimulate investment in protective systems of storage and transport also have and can exercise the power to withhold aid and divert investment. The same market mechanisms that facilitate the rapid movement of produce on a large scale, potentially helping to prevent starvation, also set up patterns of international competition in production and consumption that may threaten starvation to those individuals who depend on world markets to provide their food, an ever-increasing proportion of the world population.
It is therefore not clear, in theory, that civilization improves the reliability of the individual diet. As the data summarized in earlier chapters suggest, neither the record of ethnography and history nor that of archaeology provides any clear indication of a progressive increase in the reliability (as opposed to the total size) of human food supplies with the evolution of civilization.
Similar points can be made with reference to the natural history of infectious disease. The data reviewed in preceding chapters suggest that prehistoric hunting and gathering populations would have been visited by fewer infections and suffered lower overall rates of parasitization than most other world populations, except for those of the last century, during which antibiotics have begun to offer serious protection against infection.
The major infectious diseases experienced by isolated hunting and gathering bands are likely to have been of two types: zoonotic diseases, caused by organisms whose life cycles were largely independent of human habits; and chronic diseases, handed directly from person to person, the transmission of which was unlikely to have been discouraged by small group size. Of the two categories, the zoonotic infections are undoubtedly the more important. They are likely to have been severe or even rapidly fatal because they were poorly adapted to human hosts. Moreover, zoonotic diseases may have had a substantial impact on small populations by eliminating productive adults. But in another respect their impact would have been limited because they did not pass from person to person.
By virtue of mobility and the handling of animal carcasses, hunter-gatherers are likely to have been exposed to a wider range of zoonotic infections than are more civilized populations. Mobility may also have exposed hunter-gatherers to the traveler's diarrhea phenomenon in which local microvariants of any parasite (including zoonoses) placed repeated stress on the body's immune response.
The chronic diseases, which can spread among small isolated groups, appear to have been relatively unimportant, although they undoubtedly pose a burden of disease that can often be rapidly eliminated by twentieth-century medicine. First, such chronic diseases appear to provoke relatively little morbidity in those chronically exposed. Moreover, the skeletal evidence suggests that even yaws and other common low-grade infections (periostitis) associated with infections by organisms now common to the human environment were usually less frequent and less severe among small, early mobile populations than among more sedentary and dense human groups. Similar arguments appear to apply to tuberculosis and leprosy, judging from the record of the skeletons. Even though epidemiologists now concede that tuberculosis could have spread and persisted in small groups, the evidence suggests overwhelmingly that it is primarily a disease of dense urban populations.
Similarly, chronic intestinal infestation by bacterial, protozoan, and helminth parasites, although displaying significant variation in occurrence according to the natural environment, generally appears to be minimized by small group size and mobility. At least, the prevalence of specific parasites and the parasite load, or size of the individual dose, are minimized, although in some environments mobility actually appears to have increased the variety of parasites encountered. Ethnographic observations suggest that parasite loads are often relatively low in mobile bands and commonly increase as sedentary lifestyles are adopted. Similar observations imply that intestinal infestations are commonly more severe in sedentary populations than in their more mobile neighbors. The data also indicate that primitive populations often display better accommodation to their indigenous parasites (that is, fewer symptoms of disease in proportion to their parasite load) than we might otherwise expect. The archaeological evidence suggests that, insofar as intestinal parasite loads can be measured by their effects on overall nutrition (for example, on rates of anemia), these infections were relatively mild in early human populations but became increasingly severe as populations grew larger and more sedentary. In one case where comparative analysis of archaeological mummies from different periods has been undertaken, there is direct evidence of an increase in pathological intestinal bacteria with the adoption of sedentism. In another case, analysis of feces has documented an increase in intestinal parasites with sedentism.
Many major vector-borne infections may also have been less important among prehistoric hunter-gatherers than they are in the modern world. The habits of vectors of such major diseases as malaria, schistosomiasis, and bubonic plague suggest that among relatively small human groups without transportation other than walking these diseases are unlikely to have provided anything like the burden of morbidity and mortality that they inflicted on historic and contemporary
populations.
Epidemiological theory further predicts the failure of most epidemic diseases ever to spread in small isolated populations or in groups of moderate size connected only by transportation on foot. Moreover, studies on the blood sera of contemporary isolated groups suggest that, although small size and isolation is not a complete guarantee against the transmission of such diseases in the vicinity, the spread from group to group is at best haphazard and irregular. The pattern suggests that contemporary isolates are at risk to epidemics once the diseases are maintained by civilized populations, but it seems to confirm predictions that such diseases would and could not have flourished and spread because they would not reliably have been transmitted in a world inhabited entirely by small and isolated groups in which there were no civilized reservoirs of diseases and all transportation of diseases could occur only at the speed of walking human beings.
In addition, overwhelming historical evidence suggests that the greatest rates of morbidity and death from infection are associated with the introduction of new diseases from one region of the world to another by processes associated with civilized transport of goods at speeds and over distances outside the range of movements common to hunting and gathering groups. Small-scale societies move people among groups and enjoy periodic aggregation and dispersal, but they do not move the distances associated with historic and modern religious pilgrimages or military campaigns, nor do they move at the speed associated with rapid modern forms of transportation. The increase in the transportation of people and exogenous diseases seems likely to have had far more profound effects on health than the small burden of traveler's diarrhea imposed by the small-scale movements of hunter-gatherers.
Prehistoric hunting and gathering populations may also have had one other important advantage over many more civilized groups. Given the widely recognized (and generally positive or synergistic) association of malnutrition and disease, the relatively good nutrition of hunter-gatherers may further have buffered them against the infections they did encounter.
In any case, the record of the skeletons appears to suggest that severe episodes of stress that disrupted the growth of children (acute episodes of infection or epidemics and/or episodes of resource failure and starvation) did not decline and if anything became increasingly common with the evolution of civilization in prehistory.
There is also evidence, primarily from ethnographic sources, that primitive populations suffer relatively low rates of many degenerative diseases compared, at least, to the more affluent of modern societies, even after corrections are made for the different distribution of adult ages. Primitive populations (hunter-gatherers, subsistence farmers, and all groups who do not subsist on modern refined foods) appear to enjoy several nutritional advantages over more affluent modern societies that protect them from many of the diseases that now afflict us. High bulk diets, diets with relatively few calories in proportion to other nutrients, diets low in total fat (and particularly low in saturated fat), and diets high in potassium and low in sodium, which are common to such groups, appear to help protect them against a series of degenerative conditions that plague the more affluent of modern populations, often in proportion to their affluence. Diabetes mellitus appears to be extremely rare in primitive groups (both hunter-gatherers and farmers) as are circulatory problems, including high blood pressure, heart disease, and strokes. Similarly, disorders associated with poor bowel function, such as appendicitis, diverticulosis, hiatal hernia, varicose veins, hemorrhoids, and bowel cancers, appear rare. Rates of many other types of cancer particularly breast and lung appear to be low in most small-scale societies, even when corrected for the small proportion of elderly often observed; even those cancers that we now consider to be diseases of under-development, such as Burkitt's lymphoma and cancer of the liver, may be the historical product of changes in human behavior involving food storage or the human-assisted spread of vector-borne infections. The record of the skeletons suggests, through the scarcity of metastases in bone, that cancers were comparatively rare in prehistory. The history of human life expectancy is much harder to describe or summarize with any precision because the evidence is so fragmentary and so many controversies are involved in its interpretation. But once we look beyond the very high life expectancies of mid-twentieth century affluent nations, the existing data also appear to suggest a pattern that is both more complex and less progressive than we are accustomed to believe.
Contrary to assumptions once widely held, the slow growth of prehistoric populations need not imply exceedingly high rates of mortality. Evidence of low fertility and/or the use of birth control by small-scale groups suggests (if we use modern life tables) that average rates of population growth very near zero could have been maintained by groups suffering only historically moderate mortality (life expectancy of 25 to 30 years at birth with 50 to 60 percent of infants reaching adulthood figures that appear to match those observed in ethnographic and archaeological samples) that would have balanced fertility, which was probably below the averages of more sedentary modern populations. The prehistoric acceleration of population growth after the adoption of sedentism and farming, if it is not an artifact of archaeological reconstruction, could be explained by an increase in fertility or altered birth control decisions that appear to accompany sedentism and agriculture. This explanation fits the available data better than any competing hypothesis.
It is not clear whether the adoption of sedentism or farming would have increased or decreased the proportion of individuals dying as infants or children. The advantages of sedentism may have been offset by risks associated with increased infection, closer spacing of children, or the substitution of starchy gruels for mother's milk and other more nutritious weaning foods. The intensification of agriculture and the adoption of more civilized lifestyles may not have improved the probability of surviving childhood until quite recently. Rates of infant and child mortality observed in the smallest contemporary groups (or reconstructed with less certainty among prehistoric groups) would not have embarrassed most European countries until sometime in the nineteenth century and were, in fact, superior to urban rates of child mortality through most of the nineteenth century (and much of the twentieth century in many Third World cities).
There is no evidence from archaeological samples to suggest that adult life expectancy increased with the adoption of sedentism or farming; there is some evidence (complicated by the effects of a probably acceleration of population growth on cemetery samples) to suggest that adult life expectancy may actually
have declined as farming was adopted. In later stages of the intensification of agriculture and the development of civilization, adult life expectancy most often increased and often increased substantially but the trend was spottier than we sometimes realize. Archaeological populations from the Iron Age or even the Medieval period in Europe and the Middle East or from the Mississippian period in North America often suggest average adult ages at death in the middle or upper thirties, not substantially different from (and sometimes lower than) those of the earliest visible populations in the same regions. Moreover, the historic improvement in adult life expectancy may have resulted at least in part from increasing infant and child mortality and the consequent "select" nature of
those entering adulthood as epidemic diseases shifted their focus from adults to children.
These data clearly imply that we need to rethink both scholarly and popular images of human progress and cultural evolution. We have built our images of human history too exclusively from the experiences of privileged classes and populations, and we have assumed too close a fit between technological advances and progress for individual lives.
In scholarly terms, these data which often suggest diminishing returns to health and nutrition tend to undermine models of cultural evolution based on technological advances. They add weight to theories of cultural evolution that emphasize environmental constraints, demographic pressure, and competition and social exploitation, rather than technological or social progress, as the primary instigators of social change. Similarly, the archaeological evidence that outlying populations often suffered reduced health as a consequence of their inclusion in larger political units, the clear class stratification of health in early and modern civilizations, and the general failure of either early or modern civilizations to promote clear improvements in health, nutrition, or economic homeostasis for large segments of their populations until the very recent past all reinforce competitive and exploitative models of the origins and function of civilized states. In popular terms, I think that we must substantially revise our traditional sense that civilization represents progress in human well-being or at least that it did so for most people for most of history prior to the twentieth century. The comparative data simply do not support that image.
Health and the Rise of Civilization
Mark Nathan Cohen
(excerpt from book of same title, pp. 131-141)
There is no evidence either from ethnographic accounts or archaeological excavations to suggest that rates of accidental trauma or interpersonal violence declined substantially with the adoption of more civilized forms of political organization. In fact, some evidence from archaeological sites and from historical sources suggests the opposite.
Evidence from both ethnographic descriptions of contemporary hunters and the archaeological record suggests that the major trend in the quality and quantity of human diets has been downward. Contemporary hunter-gatherers, although lean and occasionally hungry, enjoy levels of caloric intake that compare favorably with national averages for many major countries of the Third World and that are generally above those of the poor in the modern world. Even the poorest recorded hunter-gatherer group enjoys a caloric intake superior to that of impoverished contemporary urban populations. Prehistoric hunter-gatherers appear to have enjoyed richer environments and to have been better nourished than most subsequent populations (primitive and civilized alike). Whenever we can glimpse the remains of anatomically modern human beings who lived in early prehistoric environments still rich in large game, they are often relatively large people displaying comparatively few signs of qualitative malnutrition. The subsequent trend in human size and stature is irregular but is more often downward than upward in most parts of the world until the nineteenth or twentieth century.
The diets of hunter-gatherers appear to be comparatively well balanced, even when they are lean. Ethnographic accounts of contemporary groups suggest that protein intakes are commonly quite high, comparable to those of affluent modern groups and substantially above world averages. Protein deficiency is almost unknown in these groups, and vitamin and mineral deficiencies are rare and usually mild in comparison to rates reported from many Third World populations. Archaeological evidence suggests that specific deficiencies, including those of iron (anemia), vitamin D (rickets), and, more controversially, vitamin C (scurvy), as well as such general signs of protein-calorie malnutrition as childhood growth retardation, have generally become more common in history rather than declining.
Among farmers, increasing population required more and more frequent cropping of land and the use of more and more marginal soils, both of which further diminished returns for labor. This trend may or may not have been offset by such technological improvements in farming as the use of metal tools, specialization of labor, and efficiencies associated with large-scale production that tend to increase individual productivity as well as total production.
But whether the efficiency of farming increased or declined, the nutrition of individuals appears often to have declined for any of several reasons: because increasingly complex society placed new barriers between individuals and flexible access to resources, because trade often siphoned resources away, because some segments of the society increasingly had only indirect access to food, because investments in new technology to improve production focused power in the hands of elites so that their benefits were not widely shared, and perhaps because of the outright exploitation and deprivation of some segments of society. In addition, more complex societies have had to devote an increasing amount of their productive energy to intergroup competition, the maintenance of intragroup order, the celebration of the community itself, and the privilege of the elite, rather than focusing on the biological maintenance of individuals.
In any case, the popular impression that nutrition has improved through history reflects twentieth-century affluence and seems to have as much to do with class privilege as with an overall increase in productivity. Neither the lower classes of prehistoric and classical empires nor the contemporary Third World have shared in the improvement in caloric intake; consumption of animal protein seems to have declined for all but privileged groups.
There is no clear evidence that the evolution of civilization has reduced the risk of resource failure and starvation as successfully as we like to believe. Episodes of starvation occur among hunter-gatherer bands because natural resources fail and because they have limited ability either to store or to transport food. The risk of starvation is offset, in part, by the relative freedom of hunter-gatherers to move around and find new resources, but it is clear that with limited technology of transport they can move neither far nor fast enough to escape severe fluctuations in natural resources. But each of the strategies that sedentary and civilized populations use to reduce or eliminate food crises generates costs and risks as well as benefits. The supplementation of foraging economies by small-scale cultivation may help to reduce the risk of seasonal hunger, particularly in crowded and depleted environments. The manipulation and protection of species involved in farming may help to reduce the risk of crop failure. The storage of food in sedentary communities may also help protect the population against seasonal shortages or crop failure. But these advantages may be outweighed by the greater vulnerability that domestic crop species often display toward climatic fluctuations or other natural hazards, a vulnerability that is then exacerbated by the specialized nature or narrow focus of many agricultural systems. The advantages are also offset by the loss of mobility that results from agriculture and storage, the limits and failures of primitive storage systems, and the vulnerability of sedentary communities to political expropriation of their stored resources.
Although the intensification of agriculture expanded production, it may have increased risk in both natural and cultural terms by increasing the risk of soil exhaustion in central growing areas and of crop failure in marginal areas. Such investments as irrigation to maintain or increase productivity may have helped to protect the food supply, but they generated new risks of their own and introduced new kinds of instability by making production more vulnerable to economic and political forces that could disrupt or distort the pattern of investment. Similarly, specialization of production increased the range of products that could be made and increased the overall efficiency of production, but it also placed large segments of the population at the mercy of fickle systems of exchange or equally fickle social and political entitlements.
Modern storage and transport may reduce vulnerability to natural crises, but they increase vulnerability to disruption of the technological or political and economic basis of the storage and transport systems themselves. Transport and storage systems are difficult and expensive to maintain. Governments that have the power to move large amounts of food long distances to offset famine and the power to stimulate investment in protective systems of storage and transport also have and can exercise the power to withhold aid and divert investment. The same market mechanisms that facilitate the rapid movement of produce on a large scale, potentially helping to prevent starvation, also set up patterns of international competition in production and consumption that may threaten starvation to those individuals who depend on world markets to provide their food, an ever-increasing proportion of the world population.
It is therefore not clear, in theory, that civilization improves the reliability of the individual diet. As the data summarized in earlier chapters suggest, neither the record of ethnography and history nor that of archaeology provides any clear indication of progressive increase in the reliability (as opposed to the total size) of human food supplies with the evolution of civilization.
Similar points can be made with reference to the natural history of infectious disease. The data reviewed in preceding chapters suggest that prehistoric hunting and gathering populations would have been visited by fewer infections and suffered lower overall rates of parasitization than most other world populations, except for those of the last century, during which antibiotics have begun to offer serious protection against infection.
The major infectious diseases experienced by isolated hunting and gathering bands are likely to have been of two types: zoonotic diseases, caused by organisms whose life cycles were largely independent of human habits; and chronic diseases, handed directly from person to person, the transmission of which was unlikely to have been discouraged by small group size. Of the two categories, the zoonotic infections are undoubtedly the more important. They are likely to have been severe or even rapidly fatal because they were poorly adapted to human hosts. Moreover, zoonotic diseases may have had a substantial impact on small populations by eliminating productive adults. But in another respect their impact would have been limited because they did not pass from person to person.
By virtue of mobility and the handling of animal carcasses, hunter-gatherers are likely to have been exposed to a wider range of zoonotic infections than are more civilized populations. Mobility may also have exposed hunter-gatherers to the traveler's diarrhea phenomenon in which local microvariants of any parasite (including zoonoses) placed repeated stress on the body's immune response.
The chronic diseases, which can spread among small isolated groups, appear to have been relatively unimportant, although they undoubtedly pose a burden of disease that can often be rapidly eliminated by twentieth-century medicine. First, such chronic diseases appear to provoke relatively little morbidity in those chronically exposed. Moreover, the skeletal evidence suggests that even yaws and other common low-grade infections (periostitis) associated with infections by organisms now common to the human environment were usually less frequent and less severe among small, early mobile populations than among more sedentary and dense human groups. Similar arguments appear to apply to tuberculosis and leprosy, judging from the record of the skeletons. Even though epidemiologists now concede that tuberculosis could have spread and persisted in small groups, the evidence suggests overwhelmingly that it is primarily a disease of dense urban populations.
Similarly, chronic intestinal infestation by bacterial, protozoan, and helminth parasites, although displaying significant variation in occurrence according to the natural environment, generally appears to be minimized by small group size and mobility. At least, the prevalence of specific parasites and the parasite load, or size of the individual dose, are minimized, although in some environments mobility actually appears to have increased the variety of parasites encountered. Ethnographic observations suggest that parasite loads are often relatively low in mobile bands and commonly increase as sedentary lifestyles are adopted. Similar observations imply that intestinal infestations are commonly more severe in sedentary populations than in their more mobile neighbors. The data also indicate that primitive populations often display better accommodation to their indigenous parasites (that is, fewer symptoms of disease in proportion to their parasite load) than we might otherwise expect. The archaeological evidence suggests that, insofar as intestinal parasite loads can be measured by their effects on overall nutrition (for example, on rates of anemia), these infections were relatively mild in early human populations but became increasingly severe as populations grew larger and more sedentary. In one case where comparative analysis of archaeological mummies from different periods has been undertaken, there is direct evidence of an increase in pathological intestinal bacteria with the adoption of sedentism. In another case, analysis of feces has documented an increase in intestinal parasites with sedentism.
Many major vector-borne infections may also have been less important among prehistoric hunter-gatherers than they are in the modern world. The habits of vectors of such major diseases as malaria, schistosomiasis, and bubonic plague suggest that among relatively small human groups without transportation other than walking these diseases are unlikely to have provided anything like the burden of morbidity and mortality that they inflicted on historic and contemporary populations.
Epidemiological theory further predicts the failure of most epidemic diseases ever to spread in small isolated populations or in groups of moderate size connected only by transportation on foot. Moreover, studies on the blood sera of contemporary isolated groups suggest that, although small size and isolation are not a complete guarantee against the transmission of such diseases in the vicinity, the spread from group to group is at best haphazard and irregular. The pattern suggests that contemporary isolates are at risk of epidemics once the diseases are maintained by civilized populations, but it seems to confirm predictions that such diseases would not and could not have flourished and spread, because they would not reliably have been transmitted, in a world inhabited entirely by small and isolated groups in which there were no civilized reservoirs of disease and all transportation of diseases could occur only at the speed of walking human beings.
In addition, overwhelming historical evidence suggests that the greatest rates of morbidity and death from infection are associated with the introduction of new diseases from one region of the world to another by processes associated with civilized transport of goods at speeds and over distances outside the range of movements common to hunting and gathering groups. Small-scale societies move people among groups and enjoy periodic aggregation and dispersal, but they do not move the distances associated with historic and modern religious pilgrimages or military campaigns, nor do they move at the speed associated with rapid modern forms of transportation. The increase in the transportation of people and exogenous diseases seems likely to have had far more profound effects on health than the small burden of traveler's diarrhea imposed by the small-scale movements of hunter-gatherers.
Prehistoric hunting and gathering populations may also have had one other important advantage over many more civilized groups. Given the widely recognized (and generally positive or synergistic) association of malnutrition and disease, the relatively good nutrition of hunter-gatherers may further have buffered them against the infections they did encounter.
In any case, the record of the skeletons appears to suggest that severe episodes of stress that disrupted the growth of children (acute episodes of infection or epidemics and/or episodes of resource failure and starvation) did not decline and if anything became increasingly common with the evolution of civilization in prehistory.
There is also evidence, primarily from ethnographic sources, that primitive populations suffer relatively low rates of many degenerative diseases compared, at least, to the more affluent of modern societies, even after corrections are made for the different distribution of adult ages. Primitive populations (hunter-gatherers, subsistence farmers, and all groups who do not subsist on modern refined foods) appear to enjoy several nutritional advantages over more affluent modern societies that protect them from many of the diseases that now afflict us. High-bulk diets, diets with relatively few calories in proportion to other nutrients, diets low in total fat (and particularly low in saturated fat), and diets high in potassium and low in sodium, which are common to such groups, appear to help protect them against a series of degenerative conditions that plague the more affluent of modern populations, often in proportion to their affluence. Diabetes mellitus appears to be extremely rare in primitive groups (both hunter-gatherers and farmers), as are circulatory problems, including high blood pressure, heart disease, and strokes. Similarly, disorders associated with poor bowel function, such as appendicitis, diverticulosis, hiatal hernia, varicose veins, hemorrhoids, and bowel cancers, appear rare. Rates of many other types of cancer, particularly breast and lung, appear to be low in most small-scale societies, even when corrected for the small proportion of elderly often observed; even those cancers that we now consider to be diseases of under-development, such as Burkitt's lymphoma and cancer of the liver, may be the historical product of changes in human behavior involving food storage or the human-assisted spread of vector-borne infections. The record of the skeletons suggests, through the scarcity of metastases in bone, that cancers were comparatively rare in prehistory.
The history of human life expectancy is much harder to describe or summarize with any precision because the evidence is so fragmentary and so many controversies are involved in its interpretation. But once we look beyond the very high life expectancies of mid-twentieth-century affluent nations, the existing data also appear to suggest a pattern that is both more complex and less progressive than we are accustomed to believe.
Contrary to assumptions once widely held, the slow growth of prehistoric populations need not imply exceedingly high rates of mortality. Evidence of low fertility and/or the use of birth control by small-scale groups suggests (if we use modern life tables) that average rates of population growth very near zero could have been maintained by groups suffering only historically moderate mortality (a life expectancy of 25 to 30 years at birth, with 50 to 60 percent of infants reaching adulthood, figures that appear to match those observed in ethnographic and archaeological samples) balanced against fertility that was probably below the averages of more sedentary modern populations. The prehistoric acceleration of population growth after the adoption of sedentism and farming, if it is not an artifact of archaeological reconstruction, could be explained by an increase in fertility or altered birth-control decisions that appear to accompany sedentism and agriculture. This explanation fits the available data better than any competing hypothesis.
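To make the arithmetic behind this demographic claim concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the round figures quoted above (roughly 55 percent of infants surviving to adulthood and an approximately even sex ratio at birth); the comparison value of six births per woman for a sedentary population is an illustrative assumption, not a figure taken from the excerpt.

    # Back-of-the-envelope check of the claim that near-zero population growth
    # does not require extreme mortality. All numbers are assumed round figures.
    # Zero growth corresponds to a net reproduction rate of about 1: each woman
    # is replaced, on average, by one daughter who survives to childbearing age.

    survival_to_adulthood = 0.55  # midpoint of the 50-60 percent range quoted above
    fraction_female = 0.49        # approximate share of live births that are girls

    # Live births per woman needed for a net reproduction rate of 1 (zero growth):
    required_births = 1.0 / (survival_to_adulthood * fraction_female)
    print(f"births per woman for zero growth: {required_births:.1f}")  # about 3.7

    # A hypothetical post-sedentism fertility of 6 births per woman, under the
    # same mortality, gives a net reproduction rate well above 1 (i.e., growth):
    r0_sedentary = 6.0 * survival_to_adulthood * fraction_female
    print(f"net reproduction rate at 6 births per woman: {r0_sedentary:.2f}")  # about 1.62

On these assumed figures, moderate mortality balanced by roughly three to four live births per woman yields growth near zero, while a rise in fertility of the kind said to accompany sedentism is sufficient, by itself, to account for an acceleration of growth.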
It is not clear whether the adoption of sedentism or farming would have increased or decreased the proportion of individuals dying as infants or children. The advantages of sedentism may have been offset by risks associated with increased infection, closer spacing of children, or the substitution of starchy gruels for mother's milk and other more nutritious weaning foods. The intensification of agriculture and the adoption of more civilized lifestyles may not have improved the probability of surviving childhood until quite recently. Rates of infant and child mortality observed in the smallest contemporary groups (or reconstructed with less certainty among prehistoric groups) would not have embarrassed most European countries until sometime in the nineteenth century and were, in fact, superior to urban rates of child mortality through most of the nineteenth century (and much of the twentieth century in many Third World cities).
There is no evidence from archaeological samples to suggest that adult life expectancy increased with the adoption of sedentism or farming; there is some evidence (complicated by the effects of a probable acceleration of population growth on cemetery samples) to suggest that adult life expectancy may actually have declined as farming was adopted. In later stages of the intensification of agriculture and the development of civilization, adult life expectancy most often increased, and often increased substantially, but the trend was spottier than we sometimes realize. Archaeological populations from the Iron Age or even the Medieval period in Europe and the Middle East, or from the Mississippian period in North America, often suggest average adult ages at death in the middle or upper thirties, not substantially different from (and sometimes lower than) those of the earliest visible populations in the same regions. Moreover, the historic improvement in adult life expectancy may have resulted at least in part from increasing infant and child mortality and the consequent "select" nature of those entering adulthood as epidemic diseases shifted their focus from adults to children.
These data clearly imply that we need to rethink both scholarly and popular images of human progress and cultural evolution. We have built our images of human history too exclusively from the experiences of privileged classes and populations, and we have assumed too close a fit between technological advances and progress for individual lives.
In scholarly terms, these data, which often suggest diminishing returns to health and nutrition, tend to undermine models of cultural evolution based on technological advances. They add weight to theories of cultural evolution that emphasize environmental constraints, demographic pressure, and competition and social exploitation, rather than technological or social progress, as the primary instigators of social change. Similarly, the archaeological evidence that outlying populations often suffered reduced health as a consequence of their inclusion in larger political units, the clear class stratification of health in early and modern civilizations, and the general failure of either early or modern civilizations to promote clear improvements in health, nutrition, or economic homeostasis for large segments of their populations until the very recent past all reinforce competitive and exploitative models of the origins and function of civilized states. In popular terms, I think that we must substantially revise our traditional sense that civilization represents progress in human well-being, or at least that it did so for most people for most of history prior to the twentieth century. The comparative data simply do not support that image.
Our Religions: Are they the Religions of Humanity Itself?
Daniel Quinn
Contrary to popular opinion, Charles Darwin did not originate the idea of evolution. By the middle of the 19th century, the mere fact of evolution had been around for a long time, and most thinkers of the time were perfectly content to leave it at that. The absence of a theory to explain evolutionary change didn't trouble them, wasn't experienced as a pressure, as it was by Darwin. He knew there had to be some intelligible mechanism or dynamic that would account for it, and this is what he went looking for--with well-known results. In his Origin of Species, he wasn't announcing the fact of evolution, he was trying to make sense of the fact.
In my mid-twenties I began to feel a similar sort of pressure. The modern Age of Anxiety was just being born under the shadows of rampant population growth, global environmental destruction, and the ever-present possibility of nuclear holocaust. I was surprised that most people seemed perfectly reconciled to these things, as if to say, Well, what else would you expect?
Ted Kaczynski, the Unabomber, seemed to think he was saying something terribly original in his 1995 diatribe blaming it all on the Industrial Revolution, but this was just the conventional wisdom of 1962. To my mind, blaming all our problems on the Industrial Revolution is like blaming Hamlet's downfall on his fencing match with Laertes. To understand why Hamlet ended up badly, you can't just look at the last ten minutes of his story, you have to go right back to the beginning of it, and I felt a pressure to do the same with us.
The beginning of our story isn't difficult to find. Every schoolchild learns that our story began about ten thousand years ago with the Agricultural Revolution. That wasn't the beginning of the human story, but it's certainly the beginning of our story, for it was from this beginning that all the wonders and horrors of our civilization grew.
Everyone is vaguely aware that there have been two ways of looking at the Agricultural Revolution within our culture, two contradictory stories about its significance. According to the standard version--the version taught in our schools--humans had been around for a long time, three or four million years, living a miserable and shiftless sort of life for most of that time, accomplishing nothing and getting nowhere. But then about 10,000 years ago it finally dawned on folks living in the Fertile Crescent that they didn't have to live like beavers and buzzards, making do with whatever food happened to come along; they could cultivate their own food and thus control their own destiny and well-being. Agriculture made it possible for them to give up the nomadic life for the life of farming villagers. Village life encouraged occupational specialization and the advancement of technology on all fronts. Before long, villages became towns, and towns became cities, kingdoms, and empires. Trade connections, elaborate social and economic systems, and literacy soon followed, and there we went. All these advances were based on--and impossible without--agriculture, manifestly humanity's greatest blessing.
The other story, a much older one, is tucked away in a different corner of our cultural heritage. It too is set in the Fertile Crescent and tells a tale of the birth of agriculture, but in this telling agriculture isn't represented as a blessing but rather as a terrible punishment for a crime whose exact nature has always profoundly puzzled us. I'm referring, of course, to the story told in the third chapter of Genesis, the Fall of Adam.
Both these stories are known to virtually everyone who grows up in our culture, including every historian, philosopher, theologian, and anthropologist. But like most thinkers of the mid-19th century, who were content with the mere fact of evolution and felt no pressure to explain it, our historians, philosophers, theologians, and anthropologists seem perfectly content to live with these two contradictory stories. The conflict is manifest but, for them, demands no explanation.
For me, it did. As evolution demanded of Darwin a theory that would make sense of it, the story in Genesis demanded of me a theory that would make sense of it.
There have traditionally been two approaches to Adam's crime and punishment. The text tells us Adam was invited to partake of every tree in the garden of Eden except one, mysteriously called the tree of the knowledge of good and evil. As we know, Adam succumbed to the temptation to sample this fruit. In one approach, the crime is viewed as simple disobedience, in which case the interdiction of the knowledge of good and evil seems entirely arbitrary. God might just as well have interdicted the knowledge of war and peace or the knowledge of pride and prejudice. The point was simply to forbid Adam something in order to test his loyalty. Under this approach, Adam's punishment--banishment from Eden to live by the sweat of his brow as a farmer--was just a spanking; it doesn't "fit the crime" in any particular way. He would have received this punishment no matter what test he had failed.
The second approach tries to make some connection between Adam's crime and his punishment. Under this approach, Eden is viewed as a metaphor for the state of innocence, which is lost when Adam gains the knowledge of good and evil. This makes sense, but only if the knowledge of good and evil is understood as a metaphor for knowledge that destroys innocence. So, with roughly equivalent metaphors at either end, the story is reduced to a banal tautology: Adam lost his innocence by gaining knowledge that destroyed his innocence.
The story of the Fall is coupled with a second that is equally famous and equally baffling, that of Cain and Abel. As conventionally understood, these two brothers were literal individuals, the elder, Cain, a tiller of the soil, and the younger, Abel, a herder. The improbability that two members of the same family would embrace antithetical lifestyles should tip us off to the fact that these were not individuals but emblematic figures, just as Adam was (Adam merely being the Hebrew word for Man).
If we understand these as emblematic figures, then the story begins to make sense. The firstborn of agriculture was indeed the tiller of the soil, as Cain was said to be the firstborn of Adam. This is an undoubted historical fact. The domestication of plants is a process that begins the day you plant your first seed, but the domestication of animals takes generations. So the herder Abel was indeed the second-born--by centuries, if not millennia (another reason to be skeptical of the notion that Cain and Abel were literally second-generation brothers).
A further reason for skepticism on this point is the fact that the ancient farmers and herders of the Near East occupied adjacent but distinctly different regions. Farming was the occupation of the Caucasian inhabitants of the Fertile Crescent. Herding was the occupation of the Semitic inhabitants of the Arabian peninsula to the south.
Another piece of background that needs to be understood is that in very ancient times farmers and herders had radically different lifestyles. Farmers were by the very nature of their work settled villagers; but herders (by the very nature of their work) were nomads, just as many present-day herding peoples are. The herding lifestyle was in fact closer to the hunting-gathering lifestyle than it was to the farming lifestyle.
As the farming peoples of the north expanded, it was inevitable that they would confront their Semitic herding neighbors to the south, perhaps below what is now Iraq--with the predictable result. As they have done from the beginning to the present moment, the tillers of the soil needed more land to put to the plow, and as they've done from the beginning to the present moment, they took it.
As the Semites saw it (and it is of course their version of the story that we have), the tiller of the soil Cain was watering his fields with the blood of Abel the herder.
The fact that the version we have is the Semitic version explains the central mystery of the story, which is why God rejected Cain's gift but accepted Abel's. Naturally, this is the way the Semites would see it. In essence, the story says, "God is on our side. God loves us and the way we live but hates the tillers of the soil and the way they live."
With these provisional understandings in place, I was ready to offer a theory about the first part of the story, the Fall of Adam. What the Semitic authors knew was only the present fact that their brothers from the north were encroaching on them in a murderous way. They hadn't been physically present in the Fertile Crescent to witness the actual birth of agriculture, and in fact this was an event that had occurred hundreds of years earlier. In their story of the Fall, they were reconstructing an ancient event, not reporting a recent one. All that was clear to them was that some strange development had saddled their brothers to the north with a laborious lifestyle and had turned them into murderers, and this had to be a moral or spiritual catastrophe of some kind.
What they observed about their brothers to the north was this peculiarity. They seemed to have the strange idea that they knew how to run the world as well as God. This is what marks them as our cultural ancestors. As we go about our business of running the world, we have no doubt that we're doing as good a job as God, if not better. Obviously God put a lot of creatures in the world that are quite superfluous and even pernicious, and we're quite at liberty to get rid of them. We know where the rivers should run, where the swamps should be drained, where the forests should be razed, where the mountains should be leveled, where the plains should be scoured, where the rain should fall. To us, it's perfectly obvious that we have this knowledge.
In fact, to the authors of the stories in Genesis, it looked as if their brothers to the north had the bizarre idea that they had eaten at God's own tree of wisdom and had gained the very knowledge God uses to rule the world. And what knowledge is this? It's a knowledge that only God is competent to use, the knowledge that every single action God might take--no matter what it is, no matter how large or small--is good for one but evil for another. If a fox is stalking a pheasant, it's in the hands of God whether she will catch the pheasant or the pheasant will escape. If God gives the fox the pheasant, then this is good for the fox but evil for the pheasant. If God allows the pheasant to escape, then this is good for the pheasant but evil for the fox. There's no outcome that can be good for both. The same is true in every area of the world's governance. If God allows the valley to be flooded, then this is good for some but evil for others. If God holds back the flood then this too will be good for some but evil for others.
Decisions of this kind are clearly at the very root of what it means to rule the world, and the wisdom to make them cannot possibly belong to any mere creature, for any creature making such decisions would inevitably say, "I will make every choice so that it's good for me but evil for all others." And of course this is precisely how the agriculturalist operates, saying, "If I scour this plain to plant food for myself, then this will be evil for all the creatures that inhabit the plain, but it'll be good for me. If I raze this forest to plant food for myself, then this will be evil for all the creatures that inhabit the forest, but it'll be good for me."
What the authors of the stories in Genesis perceived was that their brothers to the north had taken into their own hands the rule of the world; they had usurped the role of God. Those who let God run the world and take the food that he's planted for them have an easy life. But those who want to run the world themselves must necessarily plant their own food, must necessarily make their living by the sweat of the brow. As this makes plain, agriculture was not the crime itself but rather the result of the crime, the punishment that must inevitably follow such a crime. It was wielding the knowledge of good and evil that had turned their brothers in the north into farmers--and into murderers.
But these were not the only consequences to be expected from Adam's act. The fruit of the tree of the knowledge of good and evil is harmless to God but poison to Man. It seemed to these authors that usurping God's role in the world would be the very death of Man.
And so it seemed to me when I finally worked all this out in the late 1970s. This investigation of the stories in Genesis was not, for me, an exercise in biblical exegesis. I'd gone looking for a way to understand how in the world we'd brought ourselves face to face with death in such a relatively short period of time--10,000 years, a mere eyeblink in the lifespan of our species--and had found it in an ancient story that we long ago adopted as our own and that remained stubbornly mysterious to us as long as we insisted on reading it as if it were our own. When examined from a point of view not our own, however, it ceased to be mysterious and delivered up a meaning that not only would have made sense to a beleaguered herding people 8,000 years ago but that would also make sense to the beleaguered people of the late twentieth century.
As far as I was concerned, the authors of this story had gotten it right. In spite of the terrible mess we've made of it, we do think we can run the world, and if we continue to think this, it is going to be the death of us.
In case it isn't evident, I should add that of course my reading of Genesis is only a theory. This is what creationists say of evolution, that it's "only a theory, it hasn't been proved," as though this in itself is grounds for dismissal. This misrepresents the point of formulating a theory, which is to make sense of the evidence. So far, Darwin's theory remains the very best way we've found to make sense of the evidence, and my own theory has to be evaluated in the same way. Does it make sense of the evidence--the stories themselves--and does it make more sense than any other theory?
But solving this particular riddle only began to alleviate the pressure I felt for answers that were not being looked for at any level of our culture. The philosophical and theological foundations of our culture had been laid down by people who confidently believed that Man had been born an agriculturalist and civilization builder. These things were as instinctive to him as predation is to lions or hiving is to bees. This meant that, to find and date Man's birth, they had only to look for the beginnings of agriculture and civilization, which were obviously not that far back in time.
When in 1650 Irish theologian James Ussher announced the date of creation as October 23, 4004 B.C., no one laughed, or if they did, it was because of the absurd exactitude of the date, not because the date was absurdly recent. In fact, 4004 B.C. is quite a serviceable date for the beginning of what we would recognize as civilization. This being the case, it's hardly surprising that, for people who took it for granted that Man began building civilization as soon as he was created, 4004 B.C. would seem like a perfectly reasonable date for his creation.
But all this soon changed. By the middle of the 19th century the accumulated evidence of many new sciences had pushed almost all dates back by many orders of magnitude. The universe and the earth were not thousands of years old but billions. The human past extended millions of years back beyond the appearance of agriculture and civilization. Only those who clung to a very literal reading of the biblical creation story rejected the evidence; they saw it as a hoax perpetrated on us either by the devil (to confound us) or by God (to test our faith)--take your pick. The notion that Man had been born an agriculturalist and civilization builder had been rendered totally untenable. He had very definitely not been born either one.
This meant that the philosophical and theological foundations of our culture had been laid by people with a profoundly erroneous understanding of our origins and history. It was therefore urgently important to reexamine these foundations and if necessary to rebuild them from the ground up.
Except, of course, that no one at all thought this was urgently important--or even slightly important. So human life began millions of years before the birth of agriculture. Who cares? Nothing of any importance happened during those millions of years. They were merely a fact, something to be accepted, just as the fact of evolution had been accepted by naturalists long before Darwin.
In the last century we'd gained an understanding of the human story that made nonsense of everything we'd been telling ourselves for 3,000 years, but our settled understandings remained completely unshaken. So what, that Man had not in fact been born an agriculturalist and a civilization builder? He was certainly born to become an agriculturalist and a civilization builder. It was beyond question that this was our foreordained destiny. The way we live is the way humans were meant to live from the beginning of time. And indeed we must go on living this way--even if it kills us.
Facts that were indisputable to all but biblical literalists had radically repositioned us not only in the physical universe but in the history of our own species. The facts had changed, but no one felt any pressure to make sense of the fact, the way Darwin had made sense of the fact of evolution.
Except me, and I have to tell you that it gave me no joy. I had to have answers, and I went looking for them not because I wanted to write a book someday but because I personally couldn't live without them.
In Ishmael, I made the point that the conflict between the emblematic figures Cain and Abel didn't end six or eight thousand years ago in the Near East. Cain the tiller of the soil has carried his knife with him to every corner of the world, watering his fields with the blood of tribal peoples wherever he found them. He arrived here in 1492 and over the next three centuries watered his fields with the blood of millions of Native Americans. Today, he's down there in Brazil, knife poised over the few remaining aboriginals in the heart of that country.
The tribe among aboriginal peoples is as universal as the flock among geese, and no anthropologist seriously doubts that it was humanity's original social organization. We didn't evolve in troops or hordes or pods. Rather, we evolved in a social organization that was peculiarly human and that was uniquely successful for culture-bearers. The tribe was successful for humans, which is why it was still universally in place throughout the world three million years later. The tribal organization was natural selection's gift to humanity in the same way that the flock was natural selection's gift to geese.
The elemental glue that holds any tribe together is tribal law. This is easy to say but less easy to understand, because the operation of tribal law is entirely different from the operation of our law. Prohibition is the essence of our law, but the essence of tribal law is remedy. Misbehavior isn't outlawed in any tribe. Rather, tribal law prescribes what must happen in order to minimize the effect of misbehavior and to produce a situation in which everyone feels that they've been made as whole again as it's possible to be.
In The Story of B I described how adultery is handled among the Alawa of Australia. If you have the misfortune to fall in love with another man's wife or another woman's husband, the law doesn't say, "This is prohibited and may not go forward." It says, "If you want your love to go forward, here's what you must do to make things right with all parties and to see to it that marriage isn't cheapened in the eyes of our children." It's a remarkably successful process. What makes it even more remarkable is the fact that it wasn't worked out in any legislature or by any committee. It's another gift of natural selection. Over countless generations of testing, no better way of handling adultery has been found or even conceivably could be found, because--behold!--it works! It does just what the Alawa want it to do, and absolutely no one tries to evade it. Even adulterers don't try to evade it--that's how well it works.
But this is just the law of the Alawa, and it would never occur to them to say, "Everyone in the world should do it this way." They know perfectly well that their tribal neighbors' laws work just as well for them--and for the same reason, that they've been tested from the beginning of time.
One of the virtues of tribal law is that it presupposes that people are just the way we know they are: generally wise, kind, generous, and well-intentioned but perfectly capable of being foolish, unruly, moody, cantankerous, selfish, greedy, violent, stupid, bad-tempered, sneaky, lustful, treacherous, careless, vindictive, neglectful, petty, and all sorts of other unpleasant things. Tribal law doesn't punish people for their shortcomings, as our law does. Rather, it makes the management of their shortcomings an easy and ordinary part of life.
But during the developmental period of our culture, all this changed very dramatically. Tribal peoples began to come together in larger and larger associations, and one of the casualties of this process was tribal law. If you take the Alawa of Australia and put them together with the Gebusi of New Guinea, the Bushmen of the Kalahari, and the Yanomami of Brazil, they are very literally not going to know how to live. None of these tribes is going to embrace the laws of the others, which may be not only unknown to them but incomprehensible to them. How then are they going to handle mischief that occurs among them? The Gebusi way or the Yanomami way? The Alawa way or the Bushman way? Multiply this by a hundred, and you'll have a fair approximation of where people stood in the early millennia of our own cultural development in the Near East.
When you gather up a hundred tribes and expect them to work and live together, tribal law becomes inapplicable and useless. But of course the people in this amalgam are the same as they always were: capable of being foolish, moody, cantankerous, selfish, greedy, violent, stupid, bad-tempered, and all the rest. In the tribal situation, this was no problem, because tribal law was designed for people like this. But all the tribal ways of handling these ordinary human tendencies had been expunged in our burgeoning civilization. A new way of handling them had to be invented--and I stress the word invented. There was no received, tested way of handling the mischief people were capable of. Our cultural ancestors had to make something up, and what they made up were lists of prohibited behavior.
Very understandably, they began with the big ones. They weren't going to prohibit moodiness or selfishness. They prohibited things like murder, assault, and theft. Of course we don't know what the lists were like until the dawn of literacy, but you can be sure they were in place, because it's hardly plausible that we murdered, robbed, and thieved with impunity for five or six thousand years until Hammurabi finally noticed that these were rather disruptive activities.
When the Israelites escaped from Egypt in the 13th century B.C., they were literally a lawless horde, because they'd left the Egyptian list of prohibitions behind. They needed their own list of prohibitions, which God provided--the famous ten. But of course ten didn't do it. Hundreds more followed, but they didn't do it either.
No number has ever done it for us. Not a thousand, ten thousand, a hundred thousand. Even millions don't do it, and so every single year we pay our legislators to come up with more. But no matter how many prohibitions we come up with, they never do the trick, because no prohibited behavior has ever been eliminated by passing a law against it. Every time someone is sent to prison or executed, this is said to be "sending a message" to miscreants, but for some strange reason the message never arrives, year after year, generation after generation, century after century.
Naturally, we consider this to be a very advanced system.
No tribal people has ever been found that claimed not to know how to live. On the contrary, they're all completely confident that they know how to live. But with the disappearance of tribal law among us, people began to be acutely aware of not knowing how to live. A new class of specialists came to be in demand, their specialty being the annunciation of how people are supposed to live. These specialists we call prophets.
Naturally it takes special qualifications to be a prophet. You must by definition know something the rest of us don't know, something the rest of us are clearly unable to know. This means you must have a source of information that is beyond normal reach--or else what good would it be? A transcendent vision will do, as in the case of Siddhartha Gautama. A dream will do, provided it comes from God. But best of all, of course, is direct, personal, unmediated communication with God. The most persuasive and most highly valued prophets, the ones that are worth dying for and killing for, have the word directly from God.
The appearance of religions based on prophetic revelations is unique to our culture. We alone in the history of all humanity needed such religions. We still need them (and new ones are being created every day), because we still profoundly feel that we don't know how to live. Our religions are the peculiar creation of a bereft people. Yet we don't doubt for a moment that they are the religions of humanity itself.
This belief was not an unreasonable one when it first took root among us. Having long since forgotten that humanity was here long before we came along, we assumed that we were humanity itself and that our history was human history itself. We imagined that humanity had been in existence for just a few thousand years--and that God had been talking to us from the beginning. So why wouldn't our religions be the religions of humanity itself?
When it became known that humanity was millions of years older than we, no one thought it odd that God had remained aloof from the thousands of generations that had come before us. Why would God bother to talk to Homo habilis or Homo erectus? Why would he bother to talk even to Homo sapiens--until we came along? God wanted to talk to civilized folks, not savages, so it's no wonder he remained disdainfully silent.
The philosophers and theologians of the nineteenth and twentieth centuries weren't troubled by God's long silence. The fact alone was enough for them, and they felt no pressure to develop a theory to make sense of it. For Christians, it had long been accepted that Christianity was humanity's religion (which is why all of humanity had to be converted to it, of course). It was an effortless step for thinkers like Teilhard de Chardin and Matthew Fox to promote Christ from humanity's Christ to the Cosmic Christ.
Very strangely, it remained to me to recognize that there once was a religion that could plausibly be called the religion of humanity. It was humanity's first religion and its only universal religion, found wherever humans were found, in place for tens of thousands of years. Christian missionaries encountered it wherever they went, and piously set about destroying it. By now it has been all but stamped out either by missionary efforts or more simply by exterminating its adherents. I certainly take no pride in its discovery, since it's been in plain sight to us for hundreds of years.
Of course it isn't accounted a "real" religion, since it isn't one of ours. It's just a sort of half-baked "pre-religion." How could it be anything else, since it emerged long before God decided humans were worth talking to? It wasn't revealed by any accredited prophet, has no dogma, no evident theology or doctrine, no liturgy, and produces no interesting heresies or schisms. Worst of all, as far as I know, no one has ever killed for it or died for it--and what sort of religion is that? Considering all this, it's actually quite remarkable that we even have a name for it.
The religion I'm talking about is, of course, animism. This name was cut to fit the general missionary impression that these childlike savages believe that things like rocks, trees, and rivers have spirits in them, and it hasn't lost this coloration since the middle of the nineteenth century.
Needless to say, I wasn't prepared to settle for this trivialization of a religion that flourished for tens of thousands of years among people exactly as smart as we are. After decades of trying to understand what these people were telling us about their lives and their vision of humanity's place in the world, I concluded that a very simple (but far from trivial) worldview was at the foundation of what they were saying: The world is a sacred place, and humanity belongs in such a world.
It's simple but also deceptively simple. This can best be seen if we contrast it with the worldview at the foundation of our own religions. In the worldview of our religions, the world is anything but a sacred place. For Christians, it's merely a place of testing and has no intrinsic value. For Buddhists it's a place where suffering is inevitable. If I oversimplify, my object is not to misrepresent but only to clarify the general difference between these two worldviews in the few minutes that are left to me.
For Christians, the world is not where humans belong; it's not our true home, it's just a sort of waiting room where we pass the time before moving on to our true home, it's just a sort of waiting room where we pass the time before moving on to our true home, which is heaven. For Buddhists, the world is another kind of waiting room, which we visit again and again in a repeating cycle of death and rebirth until we finally attain liberation in nirvana.
For Christians, if the world were a sacred place, we wouldn't belong in it, because we're all sinners; God didn't send his only-begotten son to make us worthy of living in a sacred world but to make us worthy of living with God in heaven. For Buddhists, if the world were a sacred place, then why would we hope to escape it? If the world were a sacred place, then would we not rather welcome the repeating cycle of death and rebirth?
From the animist point of view, humans belong in a sacred place because they themselves are sacred. Not sacred in a special way, not more sacred than anything else, but merely as sacred as anything else--as sacred as bison or salmon or crows or crickets or bears or sunflowers.
This is by no means all there is to say about animism. It's explored more fully in The Story of B, but this too is just a beginning. I'm not an authority on animism. I doubt there could ever be such a thing as an authority on animism.
Simple ideas are not always easy to understand. The very simplest idea I've articulated in my work is probably the least understood: There is no one right way for people to live--never has been and never will be. This idea was at the foundation of tribal life everywhere. The Navajo never imagined that they had the right way to live (and that all others were wrong). All they had was a way that suited them. With tribal peoples on all sides of them--all living in different ways--it would have been ridiculous for them to imagine that theirs was the one right way for people to live. It would be like us imagining that there is one right way to orchestrate a Cole Porter song or one right way to make a bicycle.
In the tribal world, because there was complete agreement that no one had the right way to live, there was a staggering glory of cultural diversity, which the people of our culture have been tirelessly eradicating for 10,000 years. For us, it will be paradise when everyone on earth lives exactly the same way.
Almost no one blinks at the statement that there is no one right way for people to live. In one of his denunciations of scribes and pharisees, Jesus said, "You gag on the gnat but swallow down the camel." People find many gnats in my books to gag on, but this great hairy camel goes down as easily as a teaspoon of honey.
May the forests be with you and with your children.
Daniel Quinn
Contrary to popular opinion, Charles Darwin did not originate the idea of evolution. By the middle of the 19th century, the mere fact of evolution had been around for a long time, and most thinkers of the time were perfectly content to leave it at that. The absence of a theory to explain evolutionary change didn't trouble them, wasn't experienced as a pressure, as it was by Darwin. He knew there had to be some intelligible mechanism or dynamic that would account for it, and this is what he went looking for--with well known results. In his Origin of Species, he wasn't announcing the fact of evolution, he was trying to make sense of the fact.
In my mid-twenties I began to feel a similar sort of pressure. The modern Age of Anxiety was just being born under the shadows of rampant population growth, global environmental destruction, and the ever-present possibility of nuclear holocaust. I was surprised that most people seemed perfectly reconciled to these things, as if to say, Well, what else would you expect?
Ted Kaczynski, the Unabomber, seemed to think he was saying something terribly original in his 1995 diatribe blaming it all on the Industrial Revolution, but this was just the conventional wisdom of 1962. To my mind, blaming all our problems on the Industrial Revolution is like blaming Hamlet's downfall on his fencing match with Laertes. To understand why Hamlet ended up badly, you can't just look at the last ten minutes of his story, you have to go right back to the beginning of it, and I felt a pressure to do the same with us.
The beginning of our story isn't difficult to find. Every schoolchild learns that our story began about 10,000 years ago with the Agricultural Revolution. This wasn't the beginning of the human story, but it's certainly the beginning of our story, for it was from this beginning that all the wonders and horrors of our civilization grew.
Everyone is vaguely aware that there have been two ways of looking at the Agricultural Revolution within our culture, two contradictory stories about its significance. According to the standard version--the version taught in our schools--humans had been around for a long time, three or four million years, living a miserable and shiftless sort of life for most of that time, accomplishing nothing and getting nowhere. But then about 10,000 years ago it finally dawned on folks living in the Fertile Crescent that they didn't have to live like beavers and buzzards, making do with whatever food happened to come along; they could cultivate their own food and thus control their own destiny and well-being. Agriculture made it possible for them to give up the nomadic life for the life of farming villagers. Village life encouraged occupational specialization and the advancement of technology on all fronts. Before long, villages became towns, and towns became cities, kingdoms, and empires. Trade connections, elaborate social and economic systems, and literacy soon followed, and there we went. All these advances were based on--and impossible without--agriculture, manifestly humanity's greatest blessing.
The other story, a much older one, is tucked away in a different corner of our cultural heritage. It too is set in the Fertile Crescent and tells a tale of the birth of agriculture, but in this telling agriculture isn't represented as a blessing but rather as a terrible punishment for a crime whose exact nature has always profoundly puzzled us. I'm referring, of course, to the story told in the third chapter of Genesis, the Fall of Adam.
Both these stories are known to virtually everyone who grows up in our culture, including every historian, philosopher, theologian, and anthropologist. But like most thinkers of the mid-19th century, who were content with the mere fact of evolution and felt no pressure to explain it, our historians, philosophers, theologians, and anthropologists seem perfectly content to live with these two contradictory stories. The conflict is manifest but, for them, demands no explanation.
For me, it did. As evolution demanded of Darwin a theory that would make sense of it, the story in Genesis demanded of me a theory that would make sense of it.
There have traditionally been two approaches to Adam's crime and punishment. The text tells us Adam was invited to partake of every tree in the garden of Eden except one, mysteriously called the tree of the knowledge of good and evil. As we know, Adam succumbed to the temptation to sample this fruit. In one approach, the crime is viewed as simple disobedience, in which case the interdiction of the knowledge of good and evil seems entirely arbitrary. God might just as well have interdicted the knowledge of war and peace or the knowledge of pride and prejudice. The point was simply to forbid Adam something in order to test his loyalty. Under this approach, Adam's punishment--banishment from Eden to live by the sweat of his brow as a farmer--was just a spanking; it doesn't "fit the crime" in any particular way. He would have received this punishment no matter what test he had failed.
The second approach tries to make some connection between Adam's crime and his punishment. Under this approach, Eden is viewed as a metaphor for the state of innocence, which is lost when Adam gains the knowledge of good and evil. This makes sense, but only if the knowledge of good and evil is understood as a metaphor for knowledge that destroys innocence. So, with roughly equivalent metaphors at either end, the story is reduced to a banal tautology: Adam lost his innocence by gaining knowledge that destroyed his innocence.
The story of the Fall is coupled with a second that is equally famous and equally baffling, that of Cain and Abel. As conventionally understood, these two brothers were literal individuals, the elder, Cain, a tiller of the soil, and the younger, Abel, a herder. The improbability that two members of the same family would embrace antithetical lifestyles should tip us off to the fact that these were not individuals but emblematic figures, just as Adam was (Adam merely being the Hebrew word for Man).
If we understand these as emblematic figures, then the story begins to make sense. The firstborn of agriculture was indeed the tiller of the soil, as Cain was said to be the firstborn of Adam. This is an undoubted historical fact. The domestication of plants is a process that begins the day you plant your first seed, but the domestication of animals takes generations. So the herder Abel was indeed the second-born--by centuries, if not millennia (another reason to be skeptical of the notion that Cain and Abel were literally second-generation brothers).
A further reason for skepticism on this point is the fact that the ancient farmers and herders of the Near East occupied adjacent but distinctly different regions. Farming was the occupation of the Caucasian inhabitants of the Fertile Crescent. Herding was the occupation of the Semitic inhabitants of the Arabian peninsula to the south.
Another piece of background that needs to be understood is that in very ancient times farmers and herders had radically different lifestyles. Farmers were by the very nature of their work settled villagers; but herders (by the very nature of their work) were nomads, just as many present-day herding peoples are. The herding lifestyle was in fact closer to the hunting-gathering lifestyle than it was to the farming lifestyle.
As the farming peoples of the north expanded, it was inevitable that they would confront their Semitic herding neighbors to the south, perhaps below what is now Iraq--with the predictable result. As they have done from the beginning to the present moment, the tillers of the soil needed more land to put to the plow, and as they've done from the beginning to the present moment, they took it.
As the Semites saw it (and it is of course their version of the story that we have), the tiller of the soil Cain was watering his fields with the blood of Abel the herder.
The fact that the version we have is the Semitic version explains the central mystery of the story, which is why God rejected Cain's gift but accepted Abel's. Naturally, this is the way the Semites would see it. In essence, the story says, "God is on our side. God loves us and the way we live but hates the tillers of the soil and the way they live."
With these provisional understandings in place, I was ready to offer a theory about the first part of the story, the Fall of Adam. What the Semitic authors knew was only the present fact that their brothers from the north were encroaching on them in a murderous way. They hadn't been physically present in the Fertile Crescent to witness the actual birth of agriculture, and in fact this was an event that had occurred hundreds of years earlier. In their story of the Fall, they were reconstructing an ancient event, not reporting a recent one. All that was clear to them was that some strange development had saddled their brothers to the north with a laborious lifestyle and had turned them into murderers, and this had to be a moral or spiritual catastrophe of some kind.
What they observed about their brothers to the north was this peculiarity. They seemed to have the strange idea that they knew how to run the world as well as God. This is what marks them as our cultural ancestors. As we go about our business of running the world, we have no doubt that we're doing as good a job as God, if not better. Obviously God put a lot of creatures in the world that are quite superfluous and even pernicious, and we're quite at liberty to get rid of them. We know where the rivers should run, where the swamps should be drained, where the forests should be razed, where the mountains should be leveled, where the plains should be scoured, where the rain should fall. To us, it's perfectly obvious that we have this knowledge.
In fact, to the authors of the stories in Genesis, it looked as if their brothers to the north had the bizarre idea that they had eaten at God's own tree of wisdom and had gained the very knowledge God uses to rule the world. And what knowledge is this? It's a knowledge that only God is competent to use, the knowledge that every single action God might take--no matter what it is, no matter how large or small--is good for one but evil for another. If a fox is stalking a pheasant, it's in the hands of God whether she will catch the pheasant or the pheasant will escape. If God gives the fox the pheasant, then this is good for the fox but evil for the pheasant. If God allows the pheasant to escape, then this is good for the pheasant but evil for the fox. There's no outcome that can be good for both. The same is true in every area of the world's governance. If God allows the valley to be flooded, then this is good for some but evil for others. If God holds back the flood then this too will be good for some but evil for others.
Decisions of this kind are clearly at the very root of what it means to rule the world, and the wisdom to make them cannot possibly belong to any mere creature, for any creature making such decisions would inevitably say, "I will make every choice so that it's good for me but evil for all others." And of course this is precisely how the agriculturalist operates, saying, "If I scour this plain to plant food for myself, then this will be evil for all the creatures that inhabit the plain, but it'll be good for me. If I raze this forest to plant food for myself, then this will be evil for all the creatures that inhabit the forest, but it'll be good for me."
What the authors of the stories in Genesis perceived was that their brothers to the north had taken into their own hands the rule of the world; they had usurped the role of God. Those who let God run the world and take the food that he's planted for them have an easy life. But those who want to run the world themselves must necessarily plant their own food, must necessarily make their living by the sweat of the brow. As this makes plain, agriculture was not the crime itself but rather the result of the crime, the punishment that must inevitably follow such a crime. It was wielding the knowledge of good and evil that had turned their brothers in the north into farmers--and into murderers.
But these were not the only consequences to be expected from Adam's act. The fruit of the tree of the knowledge of good and evil is harmless to God but poison to Man. It seemed to these authors that usurping God's role in the world would be the very death of Man.
And so it seemed to me when I finally worked all this out in the late 1970s. This investigation of the stories in Genesis was not, for me, an exercise in biblical exegesis. I'd gone looking for a way to understand how in the world we'd brought ourselves face to face with death in such a relatively short period of time--10,000 years, a mere eyeblink in the lifespan of our species--and had found it in an ancient story that we long ago adopted as our own and that remained stubbornly mysterious to us as long as we insisted on reading it as if it were our own. When examined from a point of view not our own, however, it ceased to be mysterious and delivered up a meaning that not only would have made sense to a beleaguered herding people 8,000 years ago but that would also make sense to the beleaguered people of the late twentieth century.
As far as I was concerned, the authors of this story had gotten it right. In spite of the terrible mess we've made of it, we do think we can run the world, and if we continue to think this, it is going to be the death of us.
In case it isn't evident, I should add that of course my reading of Genesis is only a theory. This is what creationists say of evolution, that it's "only a theory, it hasn't been proved," as though this in itself is grounds for dismissal. This misrepresents the point of formulating a theory, which is to make sense of the evidence. So far, Darwin's theory remains the very best way we've found to make sense of the evidence, and my own theory has to be evaluated in the same way. Does it make sense of the evidence--the stories themselves--and does it make more sense than any other theory?
But solving this particular riddle only began to alleviate the pressure I felt for answers that were not being looked for at any level of our culture. The philosophical and theological foundations of our culture had been laid down by people who confidently believed that Man had been born an agriculturalist and civilization builder. These things were as instinctive to him as predation is to lions or hiving is to bees. This meant that, to find and date Man's birth, they had only to look for the beginnings of agriculture and civilization, which were obviously not that far back in time.
When in 1650 Irish theologian James Ussher announced the date of creation as October 23, 4004 B.C., no one laughed, or if they did, it was because of the absurd exactitude of the date, not because the date was absurdly recent. In fact, 4004 B.C. is quite a serviceable date for the beginning of what we would recognize as civilization. This being the case, it's hardly surprising that, for people who took it for granted that Man began building civilization as soon as he was created, 4004 B.C. would seem like a perfectly reasonable date for his creation.
But all this soon changed. By the middle of the 19th century the accumulated evidence of many new sciences had pushed almost all dates back by many orders of magnitude. The universe and the earth were not thousands of years old but billions. The human past extended millions of years back beyond the appearance of agriculture and civilization. Only those who clung to a very literal reading of the biblical creation story rejected the evidence; they saw it as a hoax perpetrated on us either by the devil (to confound us) or by God (to test our faith)--take your pick. The notion that Man had been born an agriculturalist and civilization builder had been rendered totally untenable. He had very definitely not been born either one.
This meant that the philosophical and theological foundations of our culture had been laid by people with a profoundly erroneous understanding of our origins and history. It was therefore urgently important to reexamine these foundations and if necessary to rebuild them from the ground up.
Except, of course, that no one at all thought this was urgently important--or even slightly important. So human life began millions of years before the birth of agriculture. Who cares? Nothing of any importance happened during those millions of years. They were merely a fact, something to be accepted, just as the fact of evolution had been accepted by naturalists long before Darwin.
In the last century we'd gained an understanding of the human story that made nonsense of everything we'd been telling ourselves for 3,000 years, but our settled understandings remained completely unshaken. So what, that Man had not in fact been born an agriculturalist and a civilization builder? He was certainly born to become an agriculturalist and a civilization builder. It was beyond question that this was our foreordained destiny. The way we live is the way humans were meant to live from the beginning of time. And indeed we must go on living this way--even if it kills us.
Facts that were indisputable to all but biblical literalists had radically repositioned us not only in the physical universe but in the history of our own species. The facts were accepted, but no one felt any pressure to make sense of them, the way Darwin had made sense of the fact of evolution.
Except me, and I have to tell you that it gave me no joy. I had to have answers, and I went looking for them not because I wanted to write a book someday but because I personally couldn't live without them.
In Ishmael, I made the point that the conflict between the emblematic figures Cain and Abel didn't end six or eight thousand years ago in the Near East. Cain the tiller of the soil has carried his knife with him to every corner of the world, watering his fields with the blood of tribal peoples wherever he found them. He arrived here in 1492 and over the next three centuries watered his fields with the blood of millions of Native Americans. Today, he's down there in Brazil, knife poised over the few remaining aboriginals in the heart of that country.
The tribe among aboriginal peoples is as universal as the flock among geese, and no anthropologist seriously doubts that it was humanity's original social organization. We didn't evolve in troops or hordes or pods. Rather, we evolved in a social organization that was peculiarly human and uniquely successful for culture-bearers. The tribe was successful for humans, which is why it was still universally in place throughout the world three million years later. The tribal organization was natural selection's gift to humanity in the same way that the flock was natural selection's gift to geese.
The elemental glue that holds any tribe together is tribal law. This is easy to say but less easy to understand, because the operation of tribal law is entirely different from the operation of our law. Prohibition is the essence of our law, but the essence of tribal law is remedy. Misbehavior isn't outlawed in any tribe. Rather, tribal law prescribes what must happen in order to minimize the effect of misbehavior and to produce a situation in which everyone feels that they've been made as whole again as it's possible to be.
In The Story of B I described how adultery is handled among the Alawa of Australia. If you have the misfortune to fall in love with another man's wife or another woman's husband, the law doesn't say, "This is prohibited and may not go forward." It says, "If you want your love to go forward, here's what you must do to make things right with all parties and to see to it that marriage isn't cheapened in the eyes of our children." It's a remarkably successful process. What makes it even more remarkable is the fact that it wasn't worked out in any legislature or by any committee. It's another gift of natural selection. Over countless generations of testing, no better way of handling adultery has been found or even conceivably could be found, because--behold!--it works! It does just what the Alawa want it to do, and absolutely no one tries to evade it. Even adulterers don't try to evade it--that's how well it works.
But this is just the law of the Alawa, and it would never occur to them to say, "Everyone in the world should do it this way." They know perfectly well that their tribal neighbors' laws work just as well for them--and for the same reason, that they've been tested from the beginning of time.
One of the virtues of tribal law is that it presupposes that people are just the way we know they are: generally wise, kind, generous, and well-intentioned but perfectly capable of being foolish, unruly, moody, cantankerous, selfish, greedy, violent, stupid, bad-tempered, sneaky, lustful, treacherous, careless, vindictive, neglectful, petty, and all sorts of other unpleasant things. Tribal law doesn't punish people for their shortcomings, as our law does. Rather, it makes the management of their shortcomings an easy and ordinary part of life.
But during the developmental period of our culture, all this changed very dramatically. Tribal peoples began to come together in larger and larger associations, and one of the casualties of this process was tribal law. If you take the Alawa of Australia and put them together with the Gebusi of New Guinea, the Bushmen of the Kalahari, and the Yanomami of Brazil, they are very literally not going to know how to live. None of these tribes is going to embrace the laws of the others, which may be not only unknown to them but incomprehensible to them. How then are they going to handle mischief that occurs among them? The Gebusi way or the Yanomami way? The Alawa way or the Bushman way? Multiply this by a hundred, and you'll have a fair approximation of where people stood in the early millennia of our own cultural development in the Near East.
When you gather up a hundred tribes and expect them to work and live together, tribal law becomes inapplicable and useless. But of course the people in this amalgam are the same as they always were: capable of being foolish, moody, cantankerous, selfish, greedy, violent, stupid, bad-tempered, and all the rest. In the tribal situation, this was no problem, because tribal law was designed for people like this. But all the tribal ways of handling these ordinary human tendencies had been expunged in our burgeoning civilization. A new way of handling them had to be invented--and I stress the word invented. There was no received, tested way of handling the mischief people were capable of. Our cultural ancestors had to make something up, and what they made up were lists of prohibited behavior.
Very understandably, they began with the big ones. They weren't going to prohibit moodiness or selfishness. They prohibited things like murder, assault, and theft. Of course we don't know what the lists were like until the dawn of literacy, but you can be sure they were in place, because it's hardly plausible that we murdered, robbed, and thieved with impunity for five or six thousand years until Hammurabi finally noticed that these were rather disruptive activities.
When the Israelites escaped from Egypt in the 13th century B.C., they were literally a lawless horde, because they'd left the Egyptian list of prohibitions behind. They needed their own list of prohibitions, which God provided--the famous ten. But of course ten didn't do it. Hundreds more followed, but they didn't do it either.
No number has ever done it for us. Not a thousand, ten thousand, a hundred thousand. Even millions don't do it, and so every single year we pay our legislators to come up with more. But no matter how many prohibitions we come up with, they never do the trick, because no prohibited behavior has ever been eliminated by passing a law against it. Every time someone is sent to prison or executed, this is said to be "sending a message" to miscreants, but for some strange reason the message never arrives, year after year, generation after generation, century after century.
Naturally, we consider this to be a very advanced system.
No tribal people has ever been found that claimed not to know how to live. On the contrary, they're all completely confident that they know how to live. But with the disappearance of tribal law among us, people began to be acutely aware of not knowing how to live. A new class of specialists came to be in demand, their specialty being the annunciation of how people are supposed to live. These specialists we call prophets.
Naturally it takes special qualifications to be a prophet. You must by definition know something the rest of us don't know, something the rest of us are clearly unable to know. This means you must have a source of information that is beyond normal reach--or else what good would it be? A transcendent vision will do, as in the case of Siddhartha Gautama. A dream will do, provided it comes from God. But best of all, of course, is direct, personal, unmediated communication with God. The most persuasive and most highly valued prophets, the ones that are worth dying for and killing for, have the word directly from God.
The appearance of religions based on prophetic revelations is unique to our culture. We alone in the history of all humanity needed such religions. We still need them (and new ones are being created every day), because we still profoundly feel that we don't know how to live. Our religions are the peculiar creation of a bereft people. Yet we don't doubt for a moment that they are the religions of humanity itself.
This belief was not an unreasonable one when it first took root among us. Having long since forgotten that humanity was here long before we came along, we assumed that we were humanity itself and that our history was human history itself. We imagined that humanity had been in existence for just a few thousand years--and that God had been talking to us from the beginning. So why wouldn't our religions be the religions of humanity itself?
When it became known that humanity was millions of years older than we, no one thought it odd that God had remained aloof from the thousands of generations that had come before us. Why would God bother to talk to Homo habilis or Homo erectus? Why would he bother to talk even to Homo sapiens--until we came along? God wanted to talk to civilized folks, not savages, so it's no wonder he remained disdainfully silent.
The philosophers and theologians of the nineteenth and twentieth centuries weren't troubled by God's long silence. The fact alone was enough for them, and they felt no pressure to develop a theory to make sense of it. For Christians, it had long been accepted that Christianity was humanity's religion (which is why all of humanity had to be converted to it, of course). It was an effortless step for thinkers like Teilhard de Chardin and Matthew Fox to promote Christ from humanity's Christ to the Cosmic Christ.
Very strangely, it remained to me to recognize that there once was a religion that could plausibly be called the religion of humanity. It was humanity's first religion and its only universal religion, found wherever humans were found, in place for tens of thousands of years. Christian missionaries encountered it wherever they went, and piously set about destroying it. By now it has been all but stamped out either by missionary efforts or more simply by exterminating its adherents. I certainly take no pride in its discovery, since it's been in plain sight to us for hundreds of years.
Of course it isn't accounted a "real" religion, since it isn't one of ours. It's just a sort of half-baked "pre-religion." How could it be anything else, since it emerged long before God decided humans were worth talking to? It wasn't revealed by any accredited prophet, has no dogma, no evident theology or doctrine, no liturgy, and produces no interesting heresies or schisms. Worst of all, as far as I know, no one has ever killed for it or died for it--and what sort of religion is that? Considering all this, it's actually quite remarkable that we even have a name for it.
The religion I'm talking about is, of course, animism. This name was cut to fit the general missionary impression that these childlike savages believe that things like rocks, trees, and rivers have spirits in them, and it hasn't lost this coloration since the middle of the nineteenth century.
Needless to say, I wasn't prepared to settle for this trivialization of a religion that flourished for tens of thousands of years among people exactly as smart as we are. After decades of trying to understand what these people were telling us about their lives and their vision of humanity's place in the world, I concluded that a very simple (but far from trivial) worldview was at the foundation of what they were saying: The world is a sacred place, and humanity belongs in such a world.
It's simple but also deceptively simple. This can best be seen if we contrast it with the worldview at the foundation of our own religions. In the worldview of our religions, the world is anything but a sacred place. For Christians, it's merely a place of testing and has no intrinsic value. For Buddhists it's a place where suffering is inevitable. If I oversimplify, my object is not to misrepresent but only to clarify the general difference between these two worldviews in the few minutes that are left to me.
For Christians, the world is not where humans belong; it's not our true home, it's just a sort of waiting room where we pass the time before moving on to our true home, which is heaven. For Buddhists, the world is another kind of waiting room, which we visit again and again in a repeating cycle of death and rebirth until we finally attain liberation in nirvana.
For Christians, if the world were a sacred place, we wouldn't belong in it, because we're all sinners; God didn't send his only-begotten son to make us worthy of living in a sacred world but to make us worthy of living with God in heaven. For Buddhists, if the world were a sacred place, then why would we hope to escape it? If the world were a sacred place, then would we not rather welcome the repeating cycle of death and rebirth?
From the animist point of view, humans belong in a sacred place because they themselves are sacred. Not sacred in a special way, not more sacred than anything else, but merely as sacred as anything else--as sacred as bison or salmon or crows or crickets or bears or sunflowers.
This is by no means all there is to say about animism. It's explored more fully in The Story of B, but this too is just a beginning. I'm not an authority on animism. I doubt there could ever be such a thing as an authority on animism.
Simple ideas are not always easy to understand. The very simplest idea I've articulated in my work is probably the least understood: There is no one right way for people to live--never has been and never will be. This idea was at the foundation of tribal life everywhere. The Navajo never imagined that they had the right way to live (and that all others were wrong). All they had was a way that suited them. With tribal peoples on all sides of them--all living in different ways--it would have been ridiculous for them to imagine that theirs was the one right way for people to live. It would be like us imagining that there is one right way to orchestrate a Cole Porter song or one right way to make a bicycle.
In the tribal world, because there was complete agreement that no one had the right way to live, there was a staggering glory of cultural diversity, which the people of our culture have been tirelessly eradicating for 10,000 years. For us, it will be paradise when everyone on earth lives exactly the same way.
Almost no one blinks at the statement that there is no one right way for people to live. In one of his denunciations of scribes and pharisees, Jesus said, "You gag on the gnat but swallow down the camel." People find many gnats in my books to gag on, but this great hairy camel goes down as easily as a teaspoon of honey.
May the forests be with you and with your children.