
Acheulean Tradition. A major facies of the Old World Lower Paleolithic whose stone tool assemblages include certain large cutting tool types, especially hand axes and cleavers, the Acheulean Tradition takes its name from Saint-Acheul (Amiens, France), where it was first recognized during the nineteenth century. Acheulean sites are widely distributed in the Old World, from India in the east to Britain in the northwest, including all of Africa, the Near East, and Europe, especially the western half. This broad spatial distribution of the Acheulean Tradition (often simply called the Acheulean) is matched by the long period of time over which it persists: The oldest assemblages date from about 1.4 million years ago, while the youngest are perhaps 100,000 years old. It is no longer believed, as formerly, that the whole of the Acheulean was produced by a single human population, which gradually migrated over much of the Old World. Rather, it is one version of a general level of technological achievement, which proved entirely adequate to support the needs of human hunter-gatherers in many regions, over what seems to us a surprisingly long period of time. If one examines in detail the Acheulean lithic assemblages of any one area, it is immediately apparent that there are substantial differences between them, and also changes and developments, as time passed, in the ways in which the tools were made. But even so, the general stability within the Acheulean Tradition remains remarkable, which is exactly why the term continues to be used.

Acheulean Stone Tools


Hand axes, widely regarded as the hallmark of the Acheulean, are large cutting tools, with various carefully fashioned planforms, the commonest being oval, pear-shaped, lanceolate, and triangular. The cutting edges, convex, concave, or straight, occupy much or all of their circumferences. Many hand axes also have a more or less sharp point, and some have a heavy hammerlike butt. They are usually worked bifacially, that is, both main faces have been flaked during the often symmetrical shaping of the implement. Cleavers are more axlike, with a broad transverse or oblique cutting edge as the main feature and less emphasis on cutting edges at the sides. Because hand axes and cleavers are so readily recognizable, they tend to dominate our perception of Acheulean stone tool kits, which in fact also contain a considerable range of other implements, made by retouching simple flakes of suitable size to make points, knives, and scrapers. Many flakes were also used without formal retouch. As for technological changes through time in the Acheulean, it is broadly true that in the earliest industries the hand axes tend to be thicker and less symmetrical, made by the removal of relatively few flakes with a hard stone hammer. Later, they are often flat and elegantly shaped by the use of a softer hammer (of bone, antler, or wood), which could remove thin trimming flakes, leaving straight and regular cutting edges. Later Acheulean knappers also often show awareness of the prepared core flaking methods, such as Levallois technique, which characterize most Middle Paleolithic industries. There is, however, a wide technological range throughout the Acheulean everywhere, rather than a simple, inviolable progression from crude to refined industries. The implement types made, and the knapping techniques used, are always profoundly influenced by the types of rock locally obtainable, which varied in hardness, grain size, and manner of fracture. Flint and the purer forms of chert are easiest to work, but are not available everywhere. In sub-Saharan Africa, for example, quartzites and many kinds of volcanic rocks, especially fine-grained lavas, were frequently used.

Origins and Spread of the Acheulean


The genesis of the Acheulean Tradition certainly lies in sub-Saharan Africa. Its oldest-known occurrences include sites EF-HR and MLK in Middle Bed II at Olduvai Gorge (Tanzania), Peninj (Tanzania, west of Lake Natron), and Konso-Gardula (southern Ethiopia); dating, mainly by the potassium-argon method (sketched at the end of this section), suggests a time range of 1.2 to 1.4 million years. They appear quite suddenly, after over a million years of the Oldowan Tradition, which had only simple tools made from pebbles and flakes. A major technological difference between the two was the Acheulean workers' ability to strike large flakes from boulders, as the blanks from which their hand axes and cleavers were fashioned, rather than depending on whole cobbles or pebbles. This enabled large, broad tools with relatively thin cutting edges to be regularly achieved. It may be no coincidence that Acheulean industries first appeared soon after the emergence of a new hominid type, Homo erectus, larger in both stature and brain than Homo habilis, widely regarded as the maker of the Oldowan. Between about 1.8 and 1.2 million years ago, the first movement of humans out of sub-Saharan Africa occurred. The migration was begun by H. erectus humans, but as time passed, physical evolution and adaptation to new geographical situations brought these early people to a stage that we refer to generally as Homo sapiens, though within it there is considerable local variability: For example, in Europe the early H. sapiens people developed into the well-known Neanderthal population, a process already discernible a quarter of a million years ago and complete by about 120,000 B.P. Sub-Saharan Africa retained its own hominid population during and after the first human migration to other parts of the Old World, and it was apparently here that the development took place from H. erectus, via an early H. sapiens stage, to anatomically modern humans (H. sapiens sapiens), who, by around 100,000 years ago, had themselves spread out of Africa and reached the Near East. The foregoing clearly implies that, over time, several different human types must have made Acheulean industries. Some of the people involved in the first Homo erectus movement out of sub-Saharan Africa were certainly hand-axe makers, since stone tool manufacture in the mainstream Acheulean Tradition spread during the Early and Middle Pleistocene to North Africa and the Near East, into southern and western Europe, and eastward to the Indian subcontinent, though arrival dates are not clear everywhere. There was little penetration of Central or northern Asia at this time, and none of Australasia or the Americas. China and Southeast Asia, however, have many important Lower Paleolithic sites, but their stone artifacts do not belong to the Acheulean Tradition as described here. If the first humans to penetrate east of India were Acheuleans, they would have found few rocks suitable for hand-axe manufacture, and would have had to content themselves with stone tools of less sophisticated design to fulfill the same functions; other materials, such as bamboo, could also have provided highly effective points and cutting edges (though without surviving in the archaeological record).
Accordingly, we need not assume that the earliest humans of the Far East had a separate ultimate origin from those who spread the Acheulean Tradition so widely elsewhere: Quite different artifact types could easily have become and remained the fashion in the Far East, especially since there is little sign of subsequent contact with Lower Paleolithic peoples away to the west, during the Middle Pleistocene. The Acheulean, however, is rarely alone in any area where it occurs: There are often contemporary lithic assemblages from which the typical hand axes and cleavers are quite absent. Examples of this phenomenon include the Soan Tradition of India, the later stages of the Oldowan in East Africa, the flake-tool industries of central Europe, and the Clactonian in Britain. The explanation need not always be the same. Particular human groups must often have produced specialized tool kits to deal with the many different activities undertaken by hunter-gatherers, exploiting seasonal resources of food and raw materials over territorial ranges comprising very variable landscapes: The classic hand axes and cleavers will not always have been the most advantageous tools. But there also remains the possibility of distinct contemporary human groups, maintaining their own separate tool-making traditions, for whatever reasons, with room enough for all, in any given region.
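
The potassium-argon ages that anchor this early Acheulean chronology rest on the decay of radioactive potassium-40 in volcanic deposits, a fixed fraction of which yields trapped argon-40. As a minimal sketch of the principle, using the conventional decay constants (the worked ratio below is illustrative, not a measurement from any of the sites named):

\[ t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{\lambda}{\lambda_e}\cdot\frac{^{40}\mathrm{Ar}^{*}}{^{40}\mathrm{K}}\right), \qquad \lambda \approx 5.54\times10^{-10}\ \mathrm{yr}^{-1}, \quad \lambda_e \approx 0.58\times10^{-10}\ \mathrm{yr}^{-1} \]

For deposits this young, \(\lambda t \ll 1\), so the expression reduces to \(t \approx (^{40}\mathrm{Ar}^{*}/^{40}\mathrm{K})/\lambda_e\); a measured radiogenic argon-to-potassium ratio of about \(8\times10^{-5}\) then corresponds to roughly 1.4 million years.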

Acheulean Settlements
Acheulean sites mainly occur as scatters of the typical stone artifacts, associated with the channels or floodplains of streams and rivers, or with lake margins. Early humans favored such locations for settlement, but the traces they left were liable to subsequent hydraulic disturbance. Structures, hearths, and fragile materials like wood or plant remains only rarely survive in association with the stone artifacts. At a few sites, such as Torralba (Spain), Kalambo Falls (Zambia), and Gesher Benot Ya'aqov (Israel), waterlogging has preserved traces of worked wood. The remains of bone at many sites, sometimes with cut marks left by stone implements, make clear that the Acheulean people exploited the carcasses of large and small animals, whether as hunters or scavengers. They occasionally used caves or rock shelters as habitations or working places, a few examples being Montagu Cave and Cave of the Hearths (South Africa), Tabun Cave (Israel), Lazaret Cave (southern France), and Pontnewydd Cave (northern Wales). Sometimes they occupied coastal locations, as at Boxgrove (Sussex, England) or Terra Amata (Nice, France), though there is little evidence for their exploiting marine fish or shellfish. Occasional finds on higher ground, for example, the chalk downlands of southern England, testify to their use of the land around and between their main campsites. No unequivocal evidence relating to Acheulean beliefs or ritual has yet been discovered.

The End of the Acheulean Tradition


The late Acheulean lasts into the Upper Pleistocene, but from around 180,000 B.P. the late Lower Paleolithic and earlier Middle Paleolithic overlap in time and to some extent blend together, as hand axes and cleavers lose importance in many areas, while specialized tools and projectile points, made by retouching specially struck flakes, increase. In the Micoquian industries of central Europe, the Jabrudian of the Near East, and the Fauresmith of southern Africa, varying examples of changing tool kits during the passage from a hand-axe-making Lower Paleolithic to a flake-tool-making Middle Paleolithic can be seen. Such terminology, however, really only reflects the efforts of archaeologists to label their current understanding of dynamic and varied processes of change in human circumstances, for which only parts of the imperishable segment of the evidence survive. The final use of the term Acheulean Tradition is for the Moustérien de tradition acheuléenne (MTA), one of the many different Middle Paleolithic industries made by the Neanderthal population of Atlantic Europe during the early and middle stages of the last glaciation, about 75,000 to 35,000 years ago. It includes finely made bifacial hand axes, of cordiform and subtriangular shapes, whose inspiration may well come ultimately from the final pure Acheulean industries of the same region and whose makers must themselves have been Neanderthals. [See also Paleolithic: Lower and Middle Paleolithic.]

Bibliography
Derek A. Roe, The Lower and Middle Palaeolithic Periods in Britain (1981). John J. Wymer, The Palaeolithic Age (1982). John A. J. Gowlett, Ascent to Civilization: The Archaeology of Early Man (1985). Richard G. Klein, The Human Career: Human Biological and Cultural Origins (1989).

Derek A. Roe

Lower and Middle Paleolithic


The immensely long Old Stone Age (Paleolithic Period) has from the early days of Prehistoric Archaeology been divided into Lower, Middle, and Upper sections. Most of the early discoveries were made in western Europe, from the mid-nineteenth century onward, and these classic divisions naturally reflected the situation there, though they were tacitly assumed to be of worldwide validity. The Lower Paleolithic, accordingly, was characterized by the bifacially worked hand axes and other archaic stone tools found mainly in river gravels associated with Early or Middle Pleistocene fauna. Middle Paleolithic referred to the elegant flake tool industries found mostly in caves and rock shelters, with Upper Pleistocene fauna and Neanderthal hominid remains. The Upper Paleolithic had fine tools made on blades, bone and ivory implements, decorative items, and anatomically modern humans. While the terms continue in use, their meaning has expanded and changed. The Paleolithic in Western Europe is now perceived as merely one incomplete local sequence within the global Paleolithic succession, and scholars' interests have broadened considerably beyond the mere classification of artifacts, on which the divisions were originally largely based. Today, many archaeologists prefer to see the Lower and Middle Paleolithic as a single continuous stage of human development, that is, as an Earlier Paleolithic that started with the first traces of human activity, at least 2.5 million years ago, and ended only with the rapid spread over the Old World of anatomically modern humans, some 50,000 to 30,000 years ago. That spread of Advanced Paleolithic people coincided with striking technological and social advances, and with the final disappearance of all archaic human types that had hitherto survived. Some prehistorians, however, continue to find the separate terms Lower and Middle Paleolithic useful in certain respects. Insofar as they denote periods or stages, it must be remembered that these terms include not only technological and cultural developments, but also much human physical evolution and a gradual expansion of human territory to cover much of the Old World.

Lower Paleolithic
By literal definition, the Lower Paleolithic begins with the earliest known traces of stone tool manufacture, currently about 2.7 million years ago at Kada Gona, northern Ethiopia. Other traces of human activity, such as upright walking, go back about a million years further, as dramatically evidenced by the famous human footprint trails at Laetoli, Tanzania (ca. 3.68 million years ago). The hominids of this opening stage belong to the Australopithecine group, known only in sub-Saharan Africa. Though late Australopithecines survived until about one million years ago, the earliest examples of Homo, including H. habilis, appeared in East Africa between 2.5 and 2.0 million years ago, perhaps descended from the gracile Australopithecine Australopithecus afarensis. With the emergence of the Homo line, simple stone tool manufacture became a regular occurrence from ca. 1.8 million years ago (the Oldowan Tradition). Animal bones are frequently found together with stone artifacts at the early sites: the most important locations include Olduvai Gorge (Tanzania), the Turkana Basin (mainly Kenya), and several parts of Ethiopia. The FLK sites at Olduvai and FxJj 50, East Turkana, are important excavated examples. The early humans probably depended more on scavenging than on hunting for themselves. Evidence for human control of fire at this stage is uncertain. By 2 to 1.8 million years ago, somewhat more advanced human types had appeared, of which Homo erectus is the best known, and by 1.8 million years ago these new people had begun a migration out of sub-Saharan Africa that was eventually to reach India, the Far East, the Near East, North Africa, and Europe. Soon after the emergence of H. erectus in East Africa, important new stone tool types appear there: the hand axes and cleavers (large shaped cutting tools) of the Acheulean Tradition. Acheulean industries subsequently spread widely over the Old World during the Early and Middle Pleistocene, though not to the Far East. Lower Paleolithic artifacts and fossil remains of H. erectus certainly occur in Southeast Asia and at numerous sites in China from about one million years ago, Zhoukoudian near Beijing being the most famous, though a few dates as old as 2 to 1.8 million years are also claimed. The artifacts often have a rather crude appearance, however, perhaps because of the nature of the local rocks.

Human Types
By early in the Middle Pleistocene, human evolution was passing from the Homo erectus stage to one that we designate Early Homo sapiens. Adaptation to so many new geographical and climatic situations created much local variability within this taxon. The European Early Homo sapiens population, for instance, progressed to H. sapiens neanderthalensis, the Neanderthals, present there in a fully developed form well before 100,000 B.P. In sub-Saharan Africa, however, evolution had by then produced the first examples of anatomically modern humans (H. sapiens sapiens); examples include finds from Klasies River Mouth (South Africa), and the Omo Kibish I individual from southern Ethiopia. These new people had spread to the Near East by ca. 100,000–90,000 B.P. (Qafzeh Cave and Skhul Cave in Israel), and this becomes a crucial area, because Neanderthal hominids also reached it, doubtless from eastern Europe: examples include finds from Kebara Cave and Amud Cave (Israel) and Shanidar Cave (Iraq). The two human types may have shared the region for up to 40,000 years.

Middle Paleolithic
At the generalized Early Homo sapiens stage, humans made various Lower Paleolithic industries in different parts of the Old World, the later stages of the Acheulean Tradition being merely the best known. In Europe, early forms of Neanderthals are also associated with such industries, for example at Atapuerca (Spain) or Pontnewydd Cave (North Wales). What characterized the subsequent Middle Paleolithic industries was the virtual disappearance of hand axes and a new emphasis on finely made flake tools, fashioned on specially struck blanks (prepared core technology), featuring carefully designed scraper and projectile-point types. Many of these European Middle Paleolithic industries are called Mousterian, after the French site of Le Moustier. In other parts of the world, notably southern Africa, the term Middle Stone Age is used for the wide range of industries broadly equivalent in age and technology to the European ones just described. The basis of all Middle Paleolithic economies was hunting, gathering, and scavenging. Human geographical distribution expanded, for example, into the cold steppes of central Russia. Open sites occur with dwelling structures partly made from mammoth bones, with internal hearths, as at Molodova V (Ukraine). Humans had also reached Australia by at least 55,000 B.P., arguably a Middle Paleolithic event. The Neanderthals have left some evidence of ritual practices, notably deliberate burial of the dead, at caves and rock shelters from La Ferrassie (southwest France) to Teshik Tash (Uzbekistan) and Shanidar (Iraq), though at some of their sites human bones occur as fragments amongst animal bones and occupation debris, as at Krapina (Croatia) or L'Hortus (southern France). During the early and middle sections of the Last Glaciation, adaptation to cold conditions made the European Neanderthals a somewhat specialized population, continuing to manufacture their Mousterian industries in habitable parts of the continent, in some cases as late as the mid-30,000s B.P. Late developments, such as the Chatelperronian industry in southwest France, with some more bladelike tools, are still associated with Neanderthal hominids, as at the St. Césaire rock shelter. The same is likely to be true of the Szeletian industries of the late Middle Paleolithic in Central Europe. In the Near East, a clear transition can be seen at ca. 45,000–40,000 B.P. from Middle Paleolithic prepared-core technology to the regular manufacture of blades as tool blanks, notably at Ksar Akil (Lebanon) and Boker Tachtit (Israel). In Europe, the end of the Middle Paleolithic was abrupt: the Neanderthal population seems to have been swept away between ca. 42,000 and 35,000 B.P. by a rapid incursion of anatomically modern humans making Upper Paleolithic blade tool industries. [See also Afar; Africa: Prehistory of Africa; Australopithecus and Homo Habilis; China: Stone Age Cultures of China; Europe, the First Colonization Of; Homo Sapiens, Archaic; Koobi Fora; Olorgesailie; Pleistocene; Torralba/Ambrona.]

Bibliography
John A. J. Gowlett, Ascent to Civilization: The Archaeology of Early Man (1984). Clive Gamble, The Palaeolithic Settlement of Europe (1986). Richard G. Klein, The Human Career: Human Biological and Cultural Origins (1989). A. Barbara Isaac, ed., The Archaeology of Human Origins: Papers by Glynn Isaac (1990). Christopher B. Stringer and Clive Gamble, In Search of the Neanderthals: Solving the Puzzle of Human Origins (1993).

Derek A. Roe

Upper Paleolithic
The Upper Paleolithic is the last of the three divisions of the Old Stone Age. It is a period of approximately 30,000 years' duration, from the final phase of the last glacial cycle beginning 40,000 years ago, through the last glacial maximum, to the improved climatic conditions of the Holocene approximately 10,000 years ago. The archaeological record of the Upper Paleolithic is characterized by a number of features that clearly separate it from the Lower and Middle Paleolithic. These include new techniques of stone working, the use of bone and other nonlithic materials, the appearance of art, larger and more numerous sites, the presence of sites with a distinct structure, and specialized animal hunting strategies. These new features, together with skeletal evidence of anatomically modern humans, Homo sapiens sapiens, have led to the suggestion that the archaeological record of the Upper Paleolithic represents the first appearance of what has been thought to be fully modern behavior. It is the analysis of the specific nature of this behavior and its relationship to the evolution of Homo sapiens sapiens that forms one of the major interpretive problems of the Upper Paleolithic.

Technological Developments
A new type of stone tool technology appears with the beginning of the Upper Paleolithic. This is called blade technology and involves the production of blades, which are struck stone pieces twice as long as they are broad and often with parallel sides. These blades are then used as the raw material for the production of other tools of a definite and clear form, such as endscrapers, borers, gravers, projectile points, and much smaller pieces called Microliths. Detailed examination of these pieces indicates that they were often hafted onto wooden, bone, or ivory shafts. Microliths may have been hafted in groups onto a single shaft, resulting in a composite tool. It would then have been straightforward to replace the broken stone components of these tools while preserving the more valuable shafts. Analysis of the raw materials used for the production of these stone tools (flint in Poland or obsidian in Japan) has indicated that groups of hunters and gatherers collected raw materials from a very wide catchment area, either in the course of a generally nomadic lifestyle or by means of special-purpose trips. Studies of tool design have revealed distinct assemblages of stone tools defined on the basis of the appearance (and disappearance) of distinct tool types, such as scrapers and projectile points. In Europe, these assemblages have been called Chatelperronian, Aurignacian, Gravettian, Solutrean, and Magdalenian. They were at one time thought to represent the material residues from distinct, culturally differentiated societies of hunters and gatherers, although such a social interpretation is now much questioned. In addition to these stone tool assemblages, even broader groups of stone technologies can be identified. For example, it is possible to observe a group of stone tool industries based on the production of microblades from special cores that appears at around 18,000 B.P. and covers an area stretching from the Near East across Central Asia through China, Japan, and into North America, with numerous smaller regional variations. In addition to stone, there are marked developments in the technological use of other raw materials as well as the manufacture of new forms. Bone, antler, and ivory appear to have been used for the first time for making tools and other items. These materials were first used for the manufacture of projectile points, where their less brittle nature would have been ideal. At a later date (ca. 21,000 years B.P.), bone was used for the manufacture of the first eyed needle, possibly indicating a more elaborate clothing technology, and also for the manufacture of the first identifiable musical instruments, which take the form of flutes made from hollow bird bones.

Artistic Expression
From the beginning of the Upper Paleolithic, there also appear examples of recognizable artistic expression in the form of wall paintings and engravings and also mobiliary carvings. The wall (parietal) art includes both abstract art (lines, squares, net shapes) and representational art depicting animals in both single and multiple colors. The best known locations are the cave sites of Lascaux in southern France and Altamira in northern Spain. There are also clay sculptures of bison at the site of Le Tuc d'Audoubert in France. Over the years, interpretations of the meaning of the parietal art have varied enormously, from hunting magic to structuralist interpretations of the relationship between men and women. More recent interpretations stress the importance of these representations in the communication of important information for successful hunting in the light of the unpredictable environmental circumstances of the time. The mobiliary art includes bone and antler batons with carved animals such as deer and birds as well as a number of female figurines with seemingly exaggerated sexual organs. The similarity in the form of these figurines over an area that encompasses all of Europe from east to west has been interpreted as an indication of a wide exchange network of marriage partners that would have existed at this time of low population density to ensure a viable breeding population. Another possibility is that these figurines are self-sculptures by women not brought up within Renaissance traditions of perspective and artistic distance. An examination of the engravings on pieces of bone by Alexander Marshack has suggested that some of them may be notations, providing Upper Paleolithic people with some form of calendrical record. There is also much archaeological evidence for the manufacture of bodily ornamentation. Where preservation allows, beads are frequently found, made from bone, ivory, or seashell. The species of seashell used indicate contacts over very great distances of hundreds of miles. As is the case with the use of stone, it is not yet known whether these seashells were collected personally or acquired through contact with other groups either by direct contact or longer networks of exchange. There are also a number of sites, such as Gönnersdorf in Germany and Parpalló in Spain, where large numbers of engraved plaques have been found. Examination of these sites in the broader context of the settlement patterns, as well as the diversity of stylistic elements within individual sites, has resulted in some being interpreted as aggregation sites, where a number of smaller bands may have met on a periodic basis, possibly for the exchange of marriage partners. Evidence from other sites, such as footprints at the cave of Niaux in France, and the isolated location of the art within cave systems, suggests that these paintings and engravings might not have been viewed like paintings hanging on the walls of modern art galleries. Rather, they were viewed in the context of structured occasions in which access to the cave might have been limited for some reason. A ritualistic viewing is a possibility.

Paleolithic Art
The existence of Paleolithic art was first established and accepted through the discovery of portable decorated objects in a number of caves and rock shelters in southwest France in the early 1860s. There could be no doubt that the objects were ancient, being associated with Paleolithic tools and the bones of Ice Age animals. Some depicted species that were extinct (e.g., mammoth) or that had long ago deserted this part of the world (e.g., reindeer).

Distribution
These first discoveries triggered a treasure hunt for ancient art objects in caves and shelters. A small number of people noticed drawings on the cave walls, but thought little of them. The first real claim for the existence of Paleolithic cave art was that made in 1880 for the Spanish cave of Altamira by a local landowner, de Sautuola. His views were treated with skepticism by the archaeological establishment, because nothing similar had previously been reported, and almost all known portable art had come from France. The rejection of Altamira persisted for twenty years until a breakthrough was made at the cave of La Mouthe (Dordogne) where, in 1895, the removal of some fill had exposed an unknown gallery, the walls of which had engravings including a bison figure. Because of Paleolithic deposits in the blocking fill, it was clear that the pictures must be ancient. Finally, in 1901, engravings were found in the cave of Les Combarelles (Dordogne) and paintings in the nearby cave of Font de Gaume. In 1902 the existence of cave art was officially recognized by the archaeological establishment. Once again, a kind of gold rush ensued, with numerous new sites and galleries being found. Discoveries still continue; in France and Spain, even today, an average of one new site is found every year, most recently the magnificent Grotte Chauvet in the Ardèche, with its unusually numerous and prominent figures of rhinos and big cats. Subsequently, rock art of similar antiquity has been discovered in many other parts of the world as well. Portable art or art mobilier is found from the Iberian Peninsula and North Africa to Siberia, and has notable concentrations in western, central, and eastern Europe. Thousands of specimens are known, and though some sites yield few or none, others contain hundreds or even thousands of items of portable art. The distribution of cave art (art pariétal) is equally patchy, though it is most abundant in areas that are also rich in decorated objects: the Périgord, the French Pyrenees, and Cantabrian Spain. Paleolithic decorated caves are found from Portugal and the very south of Spain to the north of France. Traces have been found in southwest Germany, and there are concentrations in Italy and Sicily. A handful of caves are also known in Yugoslavia, Romania, and Russia. The current total for Eurasia is about 280 sites. Some contain only one or a few figures on the walls, whereas others like Lascaux or Les Trois Frères have hundreds. However, in recent years it has become apparent that Paleolithic people also produced rock art in the open air, where it has survived in exceptional circumstances: Six sites have so far been found in Spain, Portugal, and the French Pyrenees with engravings that are Paleolithic in style. So cave art is not typical of the period; caves are merely the places where most art has survived.

Methods of Dating
Dating portable objects is easy, since their position in the stratigraphy of a site, together with the associated tools, gives some idea of the cultural phase involved, and radiocarbon dating of organic material from these levels, or even from the art objects themselves, can give more precise results. Dating parietal art was, until recently, far more difficult. Where the caves were blocked during or just after the Ice Age, or where parts of the decorated walls themselves are covered by datable Paleolithic deposits, a minimum age can be established. There are also cases where a fragment of decorated wall has fallen and become stratified in the archaeological layers, though this provides an approximate date for the art's fall rather than its execution. Some caves contain occupation deposits that may plausibly be linked with art production (e.g., through the presence of coloring materials). If a site with parietal art has also produced stratified portable art, there are sometimes clear analogies between the two in technique and style, providing a fairly reliable date for the wall decoration. For the many caves without occupation or portable art, it became necessary to seek stylistic comparison with material from other sites and even other regions, which led inevitably to subjectivity and simplistic schemes of development, since all stylistic arguments are based on an assumption that figures similar in style or technique were roughly contemporaneous in their execution. The first such scheme was put forward by the abbé Henri Breuil, who based it primarily on the presence or absence of twisted perspective, a feature he considered primitive, in which an animal figure in profile still has its horns, antlers, tusks, or hooves facing to the front. Breuil believed this was an archaic feature, associated with early phases of cave art, whereas in the Magdalenian (the last phase of Ice Age culture) everything was drawn in proper perspective. Unfortunately his scheme was inconsistent, since twisted hoofs are known in the Magdalenian (e.g., on the Altamira bison), and true perspective sometimes occurs in early phases. This scheme was eventually superseded by that of André Leroi-Gourhan, the French scholar who dominated cave art studies after Breuil's death. Basing himself on securely dated figures, he proposed a series of four styles. Like Breuil, he saw an overall progression from simple, archaic forms to complex, detailed, accurate figures of animals. However, it is now generally recognized that Paleolithic art did not have a single beginning and a single climax; there must have been many of both. Each period probably saw the coexistence of a number of styles and techniques, as well as a wide range of talent and ability. In recent years it has become possible to analyze minute amounts of pigment from parietal figures and hence learn that many black figures, thought to be manganese, actually contain or consist of charcoal. The development of Accelerator Mass Spectrometry (AMS) has meant that one can now obtain radiocarbon estimates from such tiny samples, and a number of figures in several Paleolithic caves have already been dated in this way (see Altamira). In every single case, results suggest that the accumulation of the figures was more episodic and far more complex than envisaged by Leroi-Gourhan's scheme, and sometimes spanned a far longer period than was believed. Apart from sporadic occurrences of a variety of non-utilitarian objects in earlier periods, the first Eurasian Paleolithic art apparently occurs in the Aurignacian period, around 32,000 years ago; charcoal from two rhinos and a bison in the Chauvet Cave, France, has produced results of approximately this date, making these the earliest dated parietal paintings in the world. For the next ten millennia or so, parietal art seems confined to cave mouths and rock shelters. It was in the Solutrean and, especially, the Magdalenian that deep caves were habitually penetrated and decorated in areas of total darkness, though the Chauvet cave shows that this sometimes happened in much earlier periods as well. Paleolithic art seems to wane with the end of the Ice Age at the close of the Magdalenian, around 11,000 years ago.
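
The radiocarbon estimates cited here all rest on the same decay relation; the following is a minimal worked example with an illustrative retention figure, not a published measurement (AMS changes only the sample size needed to measure surviving carbon-14, not the arithmetic, and quoted ages additionally involve calibration conventions omitted here):

\[ N(t) = N_0\,e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{T_{1/2}}, \qquad t = \frac{T_{1/2}}{\ln 2}\,\ln\frac{N_0}{N} \]

Taking the modern half-life \(T_{1/2} \approx 5{,}730\) years, a charcoal sample retaining 2 percent of its original carbon-14 gives \(t \approx 8{,}267 \times \ln 50 \approx 32{,}000\) years, the order of age reported for the earliest Chauvet figures.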

Techniques and Materials


Portable art comprises a wide variety of materials and forms. The simplest are slightly modified natural objects: fossils, teeth, shells, or bones that were incised, sawn, or perforated to form beads or pendants. Some sites have hundreds of plaquettes (slabs of stone with drawings engraved on them), and a few painted specimens are known. Engravings occur on flat bones, and were also done on bone shafts and on batons of antler, not only lengthwise but also around the cylinder, maintaining perfect proportions although the whole figure could not be seen. In the Magdalenian Period, zoomorphic figures and circular discs were cut out of thin bone. Antler spear-throwers have figures either carved in relief along the shaft, or carved in the round at the hook-end, where the triangular area of antler dictates the posture and size of the carving. Within these constraints, the artists produced a wide variety of images such as fawns, mammoths, or a leaping horse. A few terra-cotta models have survived in several areas, especially Moravia, but the vast majority of Paleolithic statuettes are made of ivory or soft stone. Ivory was also used to produce beads, bracelets, and armlets. Cave art itself encompasses an astonishing variety and mastery of techniques. One basic approach was the incorporation of natural rock formations: The shapes of cave walls and stalagmites were employed in countless examples to accentuate or represent parts of figures. The simplest form of marking cave walls was to leave finger traces in the soft clay layer. This technique probably spans the whole period, perhaps inspired by cave-bear claw marks on the walls. In some caves, the finger lines also include some definite animal and humanoid figures. Engraving, as in portable art, is by far the most common technique on cave walls. The tools used for engraving varied from robust picks to sharp flint flakes. Work in clay was restricted to the Pyrenees; it ranges from finger holes and tracings to engravings in the cave floor, and bas-relief figures in artificial clay banks. The famous clay bison of Le Tuc d'Audoubert are in haut-relief, and the three-dimensional bear of Montespan comprises about 1,543 pounds (700 kg) of clay. Parietal sculpture is similarly limited in distribution, to the Périgord and Charente regions of France where the limestone could be shaped. But whereas clay figures are known only from the dark depths of caves, sculptures are always in rock shelters or the illuminated front parts of caves. Both bas-relief and haut-relief are found, the figures being created with percussion tools. Almost all parietal sculptures have traces of red pigment and were originally painted, like much portable art. The red pigment used on cave walls is iron oxide (hematite or ochre); the black is manganese or charcoal. The main coloring materials were usually readily available locally. Recent analyses of pigments, particularly at Niaux, have revealed the use of recipes combining paint with extenders like talc or feldspar. Analyses have detected traces of animal and plant oils used as binders. The simplest way to apply paint to walls was with fingers, but normally some kind of tool was used, though none has survived. Lumps of pigment may have been used as crayons, but since they do not mark the rock well, they were more likely to be sources of powder. Experiments suggest that animal-hair brushes or crushed twigs were the best tools, though occasionally a pad may have been employed on rough surfaces. For hand stencils and some dots and figures, paint was clearly sprayed, either from the mouth or through a tube. Figures have been found not only on clay floors and on walls, but also on ceilings. Some, like the Altamira ceiling, were within easy reach, but for others a ladder or scaffolding was required. At Lascaux, sockets cut into the wall of one gallery give some idea of how the scaffolding was constructed. Light was sometimes provided by hearths, but portable light was necessary in most cases. Since only a few dozen stone lamps are known from the period, it is likely that burning torches were generally used, which left little trace other than a few fragments of charcoal on the walls. In parietal art, unlike portable, there was no great restriction on size, and figures range from the tiny to the enormous (over 6 feet [2 m]) in some cases, with the great Lascaux bulls exceeding 16 feet (5 m). Small figures are commonly found with large, and there are no groundlines or landscapes.

Types of Images
Paleolithic images are normally grouped into three categories: animals, humans, and non-figurative or abstract (including signs). The vast majority of animal figures are adults in profile, most of them recognizable, although many are incomplete or ambiguous, and a few are imaginary, like the Lascaux unicorn. The animals' age can rarely be estimated, except for the few juveniles known. Their sex is sometimes displayed directly, but almost always discreetly, so that secondary sexual characteristics such as antlers or size and proportions often have to be relied upon. Many figures seem motionless, and animated depictions are rare. Scenes as such are very hard to identify in Paleolithic art, since it is often impossible to prove association of figures rather than simple juxtaposition. Only a very few definite scenes are known. One central fact is the overall dominance of the horse and bison among Paleolithic depictions, although other species (e.g., mammoth or deer) may dominate at particular sites. Carnivores are rare; fish and birds are far more plentiful in portable art than parietal. Insects and recognizable plants are limited to a few examples in portable art. So Paleolithic art is neither a simple bestiary nor a random accumulation of artistic observations of nature. It has meaning and structure, with different species predominating in different periods and regions. Apart from hand stencils, definite humans are scarce in parietal art, unlike portable art, where the best-known specimens are the poorly named Venus figurines depicting females of a wide span of ages and types: they are by no means limited to the handful of obese specimens that are often claimed to be characteristic. Genitalia are rarely depicted, so one usually has to rely on breasts or beards to differentiate the sexes, and most humans have to be left neutral. Clothing is rarely clear, and details such as eyebrows, nostrils, navels, and nipples are extremely uncommon. Few figures have hands or fingers drawn in any detail. In the past, all composites (figures with elements of both humans and animals) were unjustifiably called sorcerers and assumed to depict a shaman or medicine man in a mask or animal costume. But they could simply be people with bestialized faces, or humans with animal heads. In any case, composites (the most famous being the sorcerer of Les Trois Frères) are rare, occurring in only about fifteen sites. Nonfigurative marks are far more abundant than figurative, and include a tremendously wide range of motifs, from a single dot or line to complex constructions, and to extensive panels of linear marks. Signs can be totally isolated in a cave, clustered on their own panels, or closely associated with the figurative. The simpler motifs are abundant and widespread. The more complex forms, however, show extraordinary variability and are more restricted in space and time, so they have been seen as ethnic markers, perhaps delineating Paleolithic groups.

Function and Meaning


The first theory attempting to explain this period's art was that it had no meaning; it was simply mindless decoration by hunters with time on their hands. This art for art's sake view arose from the first discoveries of portable art, but once parietal art began to be found it became clear that more was involved: The restricted range of species depicted, their frequent inaccessibility and their associations in caves, the palimpsests and undecorated panels, the enigmatic signs, the many purposely incomplete or ambiguous figures, and the caves that were decorated but apparently not inhabited, all combine to suggest that there is complex meaning behind both the subject matter and the location of Paleolithic art. At the beginning of this century, the functional theory of Sympathetic Magic took over: In other words, the depictions of animals were produced in order to control or influence real animals in some way. Ritual and magic were seen in almost every aspect of Paleolithic art: breakage of decorated objects, images killed ritually with images of missiles or even physical attack. Overall, however, there are very few Paleolithic animal figures with missiles on or near them, and many caves have no images of this type at all. Missiles (whatever they are) also occur on some human figures. There are no clear hunting scenes. Moreover, the animal bones found in many decorated caves bear little relation to the species depicted on the walls, and it is clear that the motivations behind the art were different from the environmental factors and economic choices that produced the faunal remains. Another popular explanation of Ice Age art is that of fertility magic: The artists depicted animals, hoping they would reproduce and flourish to provide food in the future. Yet few animals have their gender shown, and genitalia are almost always shown discreetly. As for copulation, in the whole of Paleolithic iconography there are only a couple of (very dubious) examples. It is clear that most Paleolithic art is not about either hunting or sex, at least in an explicit sense. The next major theoretical advance, however, introduced the notion of a symbolic sexual element. In the 1950s two French scholars, Annette Laming-Emperaire and André Leroi-Gourhan, concluded that caves had been decorated systematically rather than at random. Parietal art was treated as a carefully laid-out composition within each cave; the animals were not portraits but symbols. The key advance was the discovery of repeated associations in the art. The numerically dominant horses and bovids, concentrated in the central panels, were thought to represent a basic duality that was assumed to be sexual. Laming-Emperaire believed the horse to be equivalent to the female and the bovids to the male; for Leroi-Gourhan it was vice versa. This idea was then extended to the signs, which were dubbed male (phallic) and female (vulvar). The most recent work on Paleolithic art is splintering in many directions. One researcher, for example, is seeking detailed and firm criteria by which to recognize the work of individual artists; we do not, of course, know the gender of Paleolithic artists, and there is no justification for assuming that the art was all done by and for men. Others are investigating the acoustics in different parts of the cave, and finding a clear correspondence between the richest panels and the best acoustics, suggesting that sound played an important part in whatever ceremonies accompanied the production of cave art. No single explanation can suffice for the whole of Paleolithic art: it comprises at least two-thirds of known art history, covering twenty-five millennia and a vast area of the world. [See also Art; Cro-magnons; Europe: The European Paleolithic Period; Notation, Paleolithic; Paleolithic: Upper Paleolithic; Religion; Venus Figurines.]

Bibliography
Edouard Lartet and Henry Christy, Reliquiae Aquitanicae (1875). Henri Breuil, Four Hundred Centuries of Cave Art (1952). Christian Zervos, L'Art de l'Époque du Renne en France (1959). Paolo Graziosi, Palaeolithic Art (1960). Annette Laming-Emperaire, La Signification de l'Art Rupestre Paléolithique (1962). Peter Ucko and Andrée Rosenfeld, Palaeolithic Cave Art (1967). André Leroi-Gourhan, The Art of Prehistoric Man in Western Europe (1968). Alexander Marshack, The Roots of Civilization (1972). André Leroi-Gourhan, The Dawn of European Art (1982). Paul G. Bahn and Jean Vertut, Images of the Ice Age (1988).

Organized Living Space


From certain sites there is clear evidence for a structured demarcation of the living space. The finest example of this is the discovery of dwellings made from mammoth bones on the central Russian plain at sites such as Mezhirich and Kostenki. Although such dramatic evidence is exceptional, the evidence from many sites points to the existence of structures that have either decayed or been carried away. At Pincevent, for example, the refitting of stone tool manufacturing debris and the general spatial arrangement of hearths and discarded materials has pointed to the existence of three huts. Similar interpretations have been offered for Sunagawa in central Japan. On the Kamchatka Peninsula a number of locations around Ushki Lake have revealed evidence of sunken-floor dwellings with pronounced entrance passageways and stone-lined hearths. It is possible on a broad level to interpret the spatial arrangements in these dwellings as hearth areas, discard areas, and possibly activity areas. At the site of Etiolles, close to Paris, studies of the technical abilities exhibited by the flint knappers at the site, based on their discarded debris, have even suggested that one part of the site was used by inexperienced, possibly apprentice, flint knappers, while other areas were used by more experienced craft workers.

Subsistence Organization
Examination of the animal bones at these sites has revealed complex patterns of decision making in terms of the animals to be hunted, the season of hunting, and butchery decisions once animals had been killed. It is in the Upper Paleolithic that we first have evidence to suggest that individual animal species were being preferentially exploited by human groups. There are sites in southwestern France, Spain, Germany, and South Africa where the bones of a single species constitute as much as 80 percent or more of the complete faunal assemblage. A reconstruction of the ages of the animals at the time when they were killed indicates that herds or small groups of animals may have been killed on single occasions. Evidence for the season of hunting suggests that they were killed at the time of their annual migrations or their gathering together for the breeding season. The killing of such large numbers of animals and the production of large quantities of meat perhaps indicate that techniques of efficient storage had already been developed, and stored meat allowed people to live in groups in otherwise uninhabitable environments. In addition to specialization in the targeting of individual species, particular upland sites and site settlement patterns in areas such as northern Spain indicate that hunting may have been carried out by small task groups who would then have brought back the spoils of the hunt to a more residential site located elsewhere. Both this evidence and that of species specialization point to the existence of planning and organization.
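
Claims that a single species makes up "80 percent or more of the complete faunal assemblage" rest on a simple relative-abundance calculation over identified bone counts. A minimal sketch of that calculation in Python, with invented counts rather than data from any published site:

```python
# Relative abundance of each species in a faunal assemblage: the figure
# behind claims of "specialized" hunting. Counts are invented for
# illustration; real studies tally identified specimens (NISP) per species.

def percent_of_assemblage(counts):
    """Map each species to its percentage of all identified bones."""
    total = sum(counts.values())
    return {species: 100.0 * n / total for species, n in counts.items()}

assemblage = {"reindeer": 842, "horse": 97, "bison": 41, "red deer": 20}
for species, pct in sorted(percent_of_assemblage(assemblage).items(),
                           key=lambda item: -item[1]):
    print(f"{species}: {pct:.1f}%")
# reindeer dominates at 84.2 percent, the pattern read as preferential exploitation
```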

Aspects of Modern Human Behavior


Further evidence of the ability to live in difficult environments is provided by the last major characteristic of Upper Paleolithic archaeology: the colonization of hitherto unexploited environments such as the northern plains of Europe, the desert and arid lands of the Near East and Africa, tropical and coniferous forest regions, and the American continents and Australasia. The evidence for the colonization of all of these areas postdates the appearance of anatomically modern humans. The colonization of Australasia provides a fine example of the abilities of modern humans. Although Australia, Papua New Guinea, and Tasmania would have been linked together at times of low sea level to form the larger continent of Sahul, there would have always been water between Sahul and Indonesia, with a channel at least 40 miles (65 km) wide. A mastery of water travel would have been the first necessity for colonization. The earliest radiocarbon dates for this colonization reach back to 40,000 years B.P. and come from the Huon Peninsula in Papua New Guinea and the Swan River near Perth in southwestern Australia. There are also dates of 32,000 years B.P. for burials of anatomically modern humans from the Willandra Lakes site of Lake Mungo in southeastern Australia. Following their arrival in the continent, we also have evidence of human colonization of the arid lands of central Australia and the rain forests of Tasmania. The colonization of Australasia was indeed rapid, and took place in a continent that was completely alien to modern humans in terms of both the plant and animal communities that they encountered. The ability to rapidly recolonize areas rendered uninhabitable by the advance of glacial conditions at the time of the last glacial maximum, clearly evident in the archaeological record from northern Europe, is further testimony to the ability of human groups at this time to exploit new areas. Although the Upper Paleolithic was originally defined on the basis of stone tool technology alone, such a simple technological definition is increasingly problematic and irrelevant. The variety and characteristics of the evidence from Upper Paleolithic sites reveal a clearly organized and diverse range of behavior. The key characteristic of the Upper Paleolithic is now thought to be the appearance of modern human behavior throughout the world, associated with the arrival of anatomically modern humans, although the two do not necessarily appear simultaneously in all places. The principal characteristics of this behavior can perhaps best be defined as symbolism and symbolic expression (perhaps including language) and organizational planning. Symbolism appears in a number of material forms including art, body ornamentation, styles of material goods, and possibly burial practices. Organizational planning is evident in the development of specialized subsistence practices focusing on the exploitation of a single species, the appearance of specialized task groups, the ability to exploit new environments such as the highlands, and the ability to colonize new lands such as the Americas. This is a pattern of behavior that appears very close to that of contemporary groups of hunters and gatherers. Indeed, the evidence has frequently been interpreted in the light of archaeologists' own experiences with such groups, such as the studies of Lewis Binford among the Nunamiut. It is for this reason that it has been considered fully modern behavior. The appearance of anatomically modern humans and of different aspects of modern behavior are crucial to any discussion of the early Upper Paleolithic, and in particular, the timing of the beginning of the Upper Paleolithic. Genetic studies of modern human populations have been used to argue that anatomically modern humans evolved in Africa and then radiated outward to other parts of the world.
In so doing they replaced populations of anatomically pre-modern humans. This corresponds with the archaeological evidence from areas such as Europe, where there are indications of sharp discontinuity between the Middle and Upper Paleolithic records. There are those who argue, however, that in certain parts of the world, especially the Far East, sufficient skeletal similarities exist between anatomically pre-modern and anatomically modern humans to suggest that Homo sapiens sapiens evolved in situ from the local populations. While most scholars would not agree with this interpretation, such similarities raise the important question of the nature of the relationship between incoming modern populations and existing local populations. Was there rapid replacement in all areas, or were there periods of coexistence of varying duration? Within Africa, examples of anatomically modern humans have been dated to 130,000 years B.P. at the site of Omo in East Africa. In the Near East anatomically modern humans have been found at the sites of Skhul and Qafzeh dating to 92,000 years B.P., but pre-modern humans (Neanderthals) continued living in this region certainly until 60,000 years B.P. according to dates from the Kebara cave at Mt. Carmel. These early finds of anatomically modern humans have all been associated with a material culture that is no different from that of the contemporary pre-modern humans. Within Europe itself, the earliest archaeological evidence associated with incoming modern humans, the so-called Aurignacian, has been dated to 42,000 years B.P. in Bulgaria and to 40,000 years B.P. in Spain. The latest evidence for pre-modern humans (Neanderthals) dates to 36,000 years B.P. and is from the site of St. Césaire in France. Interestingly, the St. Césaire Neanderthal is associated with a material culture (the Chatelperronian) that has been described as fully Upper Paleolithic, including blade stone tools and ornaments: in other words, modern. There is, therefore, good evidence for the overlap of these populations and for abandoning the simple association between modern human behavior and anatomically modern humans. Another apparent feature of modern human behavior is the specialized animal hunting strategy. There are a number of sites where an analysis of the faunal remains suggests that a single animal species was hunted almost exclusively, and there are also sites that suggest the presence of small task groups, such as hunting parties exploiting the upland areas for mountain goats. In Europe, this evidence appears not with the first anatomically modern humans but at the time of the last glacial maximum (21,000–17,000 years B.P.) and provides a further indication of the gradual development of modern human behavior and not the appearance of a complete package. The appearance of aspects of fully modern behavior and of anatomically modern humans at different times in different parts of the world must inevitably force a reassessment of the timing for the beginning of the Upper Paleolithic and its redefinition in terms of the appearance of anatomically modern humans rather than aspects of recognizably modern human behavior. Within Western Europe the current date of 40,000 years B.P. would therefore stand, as it ties in with both the appearance of anatomically modern humans and modern human behavior. In Australia, however, an earlier date of perhaps 60,000 years B.P. would be more appropriate, as it seems likely that the first colonizers of the continent were anatomically modern humans. Perhaps the most interesting consequence of the appearance of anatomically modern humans is that the Upper Paleolithic becomes the first period in human history when we can recognize ourselves, modern humans, in the archaeological record. We can apply direct knowledge of our modern abilities. Our own survival and the demise of our nearest relatives are the ever-present context for the interpretation of the archaeology of the Upper Paleolithic. [See also Africa: Prehistory of Africa; Art; Australia and New Guinea: First Settlement of Sunda and Sahul; China: Stone Age Cultures of China; Cro-magnons; Europe: The European Paleolithic Period; Holocene: Introduction; Homo Sapiens, Archaic; Humans, Modern; Notation, Paleolithic; Rock Art: Paleolithic Art; Venus Figurines.]

Bibliography
Peter Ucko and Andrée Rosenfeld, Palaeolithic Cave Art (1968). Alexander Marshack, The Roots of Civilisation (1972). Lewis R. Binford, Nunamiut Ethnoarchaeology (1978). Douglas Price and James Brown, Complex Hunter-Gatherers: The Emergence of Social Complexity (1985). Olga Soffer, The Upper Paleolithic of the Central Russian Plain (1985). Clive Gamble, The Palaeolithic Settlement of Europe (1986). Paul Bahn and Jean Vertut, Images of the Ice Age (1988). Clive Gamble, Timewalkers: The Prehistory of Global Colonisation (1993).

Homo Erectus is a species of early human that appeared approximately 1.8 million years ago and survived until at least 250,000 years ago. It was the first early human to be found not only in Africa but also in eastern Asia and arguably in Europe. Homo erectus differed in a number of ways from its australopithecine antecedents. It was both heavier and taller than these earlier hominids and had a more linear body form. Its legs were longer in relation to its trunk length, which suggests, as do other aspects of its anatomy, that it was more efficient at walking on two legs. There was less sexual dimorphism, or difference in size between males and females, and its brain was also larger than the australopithecine brain. The average brain size, or cranial capacity, of Homo erectus was 50 cubic inches (820 cc), about midway between that of the gracile australopithecines (about 27 cubic inches, or 440 cc) and that of living modern humans (about 76 cubic inches, or 1,250 cc).

Homo erectus was also the first early human to have a projecting nose, which has been interpreted as a condenser to reclaim moisture from exhaled air. This would have been highly important in maintaining the water balance of these early humans under the relatively open, hot, and dry conditions of eastern Africa, where they are assumed to have evolved. Other Homo erectus features included a long, low skull with large brow ridges over the eyes and a sagittal ridge, or keel, on the top of the cranium. The face was larger and more projecting than that of modern humans, and there was no chin on the mandible. Many Homo erectus fossils have unusually thick bone not only in their skulls but also throughout the rest of their skeletons. Although they were fully adapted to upright walking, the Homo erectus pelvis and thigh bone (femur) were different enough from those of modern humans to suggest a form of bipedal locomotion that differed from what we see today.

Homo erectus in the Far East

The name Homo erectus did not come into use until the 1940s, when Ernst Mayr revised and simplified the classification of early humans. Prior to this time, fossils that we now recognize as Homo erectus were included in a number of taxa, among which the most important were Pithecanthropus erectus from Java and Sinanthropus pekinensis from China. Pithecanthropus erectus was the name given to the first discovered Homo erectus fossils, which were found by Eugene Dubois in 1891 and 1892 at the site of Trinil on the Solo River in Java. Further Javanese discoveries were made between 1936 and 1941 by G. H. R. von Koenigswald at the sites of Modjokerto in eastern Java and Sangiran near Trinil. Additional fossils came to light between 1952 and 1975 and have been reported by the Indonesian scientists S. Sartono and T. Jacob. These included a skull from the locality of Sambungmachan and another from Sangiran (Sangiran 17), one of the most complete Homo erectus skulls known. In 1993 another relatively complete skull (Skull IX) was recovered from Sangiran and reported by S. Sartono and two American anthropologists, Grover S. Krantz and Donald E. Tyler.

The age of the Javanese Homo erectus has always been uncertain. The material comes from two geological formations: the Kabuh Formation, which is believed to be between 0.5 and 0.7 million years old, and the Pucangan Formation, which underlies it and is older. Until recently, material from this underlying formation was assumed to be no older than 1 million years, but recent Potassium-argon Dating of deposits from the sites of Modjokerto and Sangiran suggests that some of the Homo erectus fossils may be 1.8 million years old. This is as old as the earliest known Homo erectus fossils from Africa and implies that early humans reached eastern Asia almost 1 million years earlier than previously thought. Homo erectus may also have persisted until relatively recent times in Java. The eleven Solo (or Ngandong) skulls, recovered by von Koenigswald between 1931 and 1933, come from the more recent Notopuro Formation and may be younger than 100,000 years old. If this date is correct, it suggests not only that Homo erectus existed for over 1.5 million years in Java but also that it was still extant when modern humans began to appear in the eastern Mediterranean (Skhul and Qafzeh in Israel) and possibly also in Africa.

Homo erectus fossils are also known from China, where they were originally assigned to the taxon Sinanthropus pekinensis. The first tooth was found at the site of Zhoukoudian (formerly spelled Choukoutien) in 1923 by the Austrian palaeontologist Otto Zdansky. In 1927 Davidson Black, a Canadian anatomist at the Peking Union Medical School, organized large-scale excavations at the site, first under the field direction of Birger Bohlin and then of W. C. Pei. By 1937 these excavations had yielded fossils of an estimated forty individuals. Black died in 1934 and was succeeded by the German anatomist Franz Weidenreich, who produced excellent plaster casts of the specimens and detailed anatomical descriptions. This was particularly fortunate, because all of the original Zhoukoudian fossils were lost during the Second World War. Locality 1, the source of the original Homo erectus fossils from Zhoukoudian, dates to between approximately 500,000 and 240,000 years ago and has produced additional fossils in more recent years. Since 1949, additional Homo erectus material has also been found at other Chinese sites, including Gongwangling (850,000–750,000 years ago), Chenjiawo (formerly Chenchiawo, dating between 590,000 and 500,000 years ago), and Hexian (200,000–150,000 years ago). Based on these presently accepted dates, Hexian is the most recent of the Chinese Homo erectus sites and suggests that Homo erectus lived at a time when more modern hominids were beginning to appear in China. These more modern archaic Homo sapiens include Jinniu Shan (300,000–210,000 years ago), Dali (230,000–180,000 years ago), and two skulls from Yunxian that are yet to be precisely dated. The dating evidence might imply that more modern hominids entered China from elsewhere, but the dates are close enough, and the errors in their determination large enough, to leave open the possibility that Homo erectus evolved into archaic Homo sapiens in the Far East.

Homo erectus in Africa

By far the most famous Homo erectus sites in sub-Saharan Africa are Olduvai Gorge, Tanzania, and Koobi Fora and Nariokotome in the Lake Turkana region of northern Kenya. In 1960, Louis and Mary Leakey discovered a well-preserved Homo erectus skull cap at Olduvai Gorge (Olduvai Hominid 9), which was followed in 1962 by a fragmentary cranium (Olduvai Hominid 12) and in 1970 by a partial pelvis and femur shaft (Olduvai Hominid 28). Olduvai Hominid 9, from Upper Bed II, is approximately 1.2 million years old, while Olduvai Hominids 12 and 28, from Upper Bed IV, are between 730,000 and 620,000 years old.

Between 1973 and 1975, Richard Leakey and his team uncovered a partial skeleton (KNM-ER 1808), two relatively complete skulls (KNM-ER 3733 and 3883), and other cranial, mandibular, and limb bones at the site of Koobi Fora on the eastern shore of Lake Turkana, northern Kenya. KNM-ER 3733 and 1808 are among the oldest of this material, dating to between about 1.8 and 1.7 million years ago. In 1984 Leakey and his team recovered a nearly complete skeleton of a Homo erectus youth (KNM-WT 15000) from the site of Nariokotome on the western shore of Lake Turkana. This specimen is about 1.6 million years old. Based on its dentition and stage of skeletal growth, it would have been under 15 years old at death, and more probably between about 11 and 13 years old. Its inferred stature at death would have been about 5 feet 3 inches (160 cm), and it would have been about 6 feet (185 cm) tall had it lived to adulthood. Juvenile and adult body mass estimates suggest that it would have had the lean body form characteristic of modern humans living in the hot and dry East African savannas.

Homo erectus fossils have also been recognized from other sites in both northern and sub-Saharan Africa. From the site of Ternifine, Algeria, in northern Africa, there are three mandibles and a skull fragment (originally called Atlanthropus mauritanicus) that probably date to between about 730,000 and 600,000 years ago. There are also mandibular fragments from Sidi Abderrahman, Morocco, and a mandible and cranial fragments from Thomas Quarries, Morocco, which are more recent, at about 500,000 years. From sub-Saharan Africa there is a cranial fragment from Gombore II (Melka Kunture), Ethiopia, dating to between 1.3 and 0.75 million years ago. There are also a parietal fragment and temporal fragments from Omo, Shungura Formation member K, Ethiopia (1.3–1.4 million years ago); teeth and a femoral fragment from Lainyamok, Kenya (700,000–560,000 years ago); and various bones from Swartkrans, South Africa (1.0–0.7 million years ago). It is perhaps significant that Homo erectus gives way to more advanced archaic Homo sapiens at least 250,000 years earlier in Africa than in the Far East.

Homo erectus in Europe
There is no direct fossil evidence that Homo erectus ever occupied Europe. The earliest specimen with affinities to Homo erectus is the mandible from Dmanisi in Georgia, found in 1991. The Dmanisi mandible is at least 900,000 years old and could be as old as 1.6 million years. But its location in Georgia, at the far eastern periphery of Europe, says nothing about human occupation in more western areas. The earliest fossil hominids from Europe date to 780,000 years ago and are from the Gran Dolina site at Atapuerca, Spain. There are also archaeological sites, unfortunately without fossil hominids, at Le Vallonnet Cave and Soleilhac in France, at Isernia La Pineta in Italy, and at Kärlich in Germany that may document human occupation in Europe at the beginning of the Middle Pleistocene and possibly much earlier. These sites are controversial in themselves, and there is no way of knowing whether the makers of the stone tools recovered from them were Homo erectus or another species of hominid.

Slightly more recent in time are a fragmentary tibia from Boxgrove, England, and a mandible from Mauer, Germany, both dating to approximately 500,000 years ago. The tibia is a massive bone from a relatively tall individual with an estimated body mass of about 176 pounds (80 kg), but it is undiagnostic as to species. The Mauer mandible is also large and has affinities both with Homo erectus and with archaic Homo sapiens. The remaining European fossils from the Middle Pleistocene come from sites that include Petralona in Greece, Arago in France, Vértesszöllös in Hungary, Bilzingsleben and Steinheim in Germany, Swanscombe in England, and Sima de los Huesos, Atapuerca, in Spain. All of these sites are more recent than Mauer or Boxgrove, dating to between about 400,000 and 200,000 years ago. Although some of the more fragmentary finds, such as the occipital bone from Vértesszöllös and a frontal bone from Bilzingsleben, have been claimed as Homo erectus in the past, this interpretation now seems unlikely. Over 700 specimens belonging to at least 24 individuals are currently known from the Sierra de Atapuerca. These fossils show a mixture of features, some found in Homo erectus and others in the more recent European Neanderthals. The degree of intrapopulation variation observed in these specimens suggests that the features found in the more fragmentary European Middle Pleistocene material can all be accounted for in one contemporaneous population that is more advanced than Homo erectus but still retains some Homo erectus features.

The Origin of Homo erectus

The earliest dates for Homo erectus are 1.8 million years ago, for the material from Koobi Fora, Kenya, and Modjokerto, Java. There is a rich hominid fossil record in Africa that currently extends back to 4.4 million years (Ardipithecus ramidus), but there are no known earlier hominids in the Far East. Because of this, it is most probable that Homo erectus evolved in Africa and migrated from there to the Far East. If the 1.8-million-year date for the infant's skull from Modjokerto proves to be correct, Homo erectus would have had to depart Africa shortly after its first appearance. At present, its most likely precursor in Africa is Homo habilis, a hominid with a brain size of about 30.5 cubic inches (500 cc) and an australopithecine-like skeleton with relatively short legs in relation to its arms and inferred body weight. But certain features of its skull, such as the form of its occipital region and of its brow ridges, foreshadow Homo erectus.

If Homo erectus did leave Africa sometime before 1.8 million years ago, this would explain one of the mysteries surrounding the distribution of the Acheulian, or hand axe, tool tradition. This tradition is associated with Homo erectus in Africa and is also found throughout Europe and as far east as India, but it is not found farther to the east in Asia. The Acheulian does not appear until 1.4 million years ago in Africa, so if Homo erectus left Africa before 1.8 million years ago, it would have done so before the appearance of this distinctive tool tradition. The fact that the Acheulian never spread to the Far East might then have one of two explanations. It is possible that once these hominids reached the Far East, there was minimal communication with hominid populations in more western areas. Alternatively, it is possible that tools equivalent to the distinctive Acheulian hand axe were made of other materials in the East, such as bamboo.

Questions about Homo erectus

There has been considerable debate in recent years over whether the African fossils should be included in the taxon Homo erectus or whether this taxon should be used to refer only to the fossil material from eastern Asia. Bernard Wood has recently suggested that the oldest African Homo erectus fossils, from Koobi Fora and Nariokotome, Kenya, should be placed in the taxon Homo ergaster rather than Homo erectus. He argues that although these fossils had reached the Homo erectus grade of evolution, they are very primitive in relation to the Asian Homo erectus fossils. The African fossils lack the very thick bone throughout the skeleton that characterizes the Asian forms, and they also lack certain details of the skull, such as thick brow ridges and an angular torus, that have been considered diagnostic of Homo erectus. Other palaeoanthropologists, such as Chris Stringer and Peter Andrews, have also argued that there are such fundamental distinctions between the Asian and African fossils that none of the African forms should be classified as Homo erectus; rather, they suggest that the African forms be called archaic Homo sapiens. This is a minority opinion, however, and many palaeoanthropologists follow Philip Rightmire in suggesting that the variation in cranial form between the Asian and African fossils is exactly what would be expected in a species with as broad a temporal and geographical distribution as Homo erectus. [See also Africa: Prehistory of Africa; Australopithecus and Homo Habilis; China: Stone Age Cultures of China; Europe, the First Colonization Of; Human Evolution: Fossil Evidence For Human Evolution; Humans, Modern: Peopling of the Globe; Paleolithic: Lower and Middle Paleolithic; Pleistocene.]

Bibliography
Richard G. Klein, The Human Career: Human Biological and Cultural Origins (1989). G. Philip Rightmire, Homo Erectus: Comparative Anatomical Studies of an Extinct Human Species (1990). G. Philip Rightmire, Homo erectus: Ancestor or Evolutionary Side Branch?, Evolutionary Anthropology 1 (1992): pp. 43–49. L. Tianyuan and D. A. Etler, New Middle Pleistocene Hominid Crania from Yunxian in China, Nature 357 (1992): pp. 404–407. B. A. Wood, Origin and Evolution of the Genus Homo, Nature 355 (1992): pp. 783–790. M. B. Roberts, C. B. Stringer, and S. A. Parfitt, A Hominid Tibia from Middle Pleistocene Sediments at Boxgrove, UK, Nature 369 (1994): pp. 311–313. C. C. Swisher, G. H. Curtis, T. Jacob, A. G. Getty, and A. Suprijo, Age of the Earliest Known Hominids in Java, Indonesia, Science 263 (1994): pp. 1118–1121.

Neanderthals The Neanderthals of the northwestern Old World are the best-known archaic human group from the Pleistocene. They are represented by the remains of hundreds of individuals and several dozen partial associated skeletons, from the last interglacial (ca. 100,000 B.P.) to the middle of the last glacial (ca. 30,000 B.P.). They immediately preceded, or may have coexisted with, early modern humans across their range. As a result, they provide us with a glimpse into both the biology and the behavior of Late Archaic humans and into the evolutionary processes associated with the emergence of modern humans.

Fossil human remains referable to the Neanderthals are currently known from across Europe and western Asia: from Gibraltar, southern Italy, and Israel in the south to Belgium and the Crimea in the north, and from the Atlantic littoral in the west to Uzbekistan in the east. They appear to have occupied most of the ecozones across this region, with the exception of deserts in the southeast and periglacial tundra to the north.

It is difficult to specify the age of the oldest Neanderthals, since they evolved gradually out of their predecessors across their geographical range. Their origin was therefore a matter of subtle shifts in the frequencies of traits we recognize as Neanderthal, most of which appeared during the later Middle Pleistocene (earlier than 130,000 B.P.). It was only toward the end of the last interglacial, between approximately 100,000 and 75,000 B.P., that these features reached sufficient frequency and coalesced into the anatomical pattern of the Neanderthals. Their disappearance was more rapid, occurring between roughly 50,000 B.P. in the Near East and 30,000 B.P. in Atlantic Europe.

Even though the term Neanderthal, or Neandertaloid, has been applied generally to Late Archaic humans, it is now restricted to the populations of Late Archaic humans from this geographical region of the northwestern Old World. Their Late Archaic relatives in Africa, eastern Asia, and Australasia represent a similar grade of human evolution, but they differ from the Neanderthals in the shape of the face, in features of the braincase, and (apparently) in bodily proportions.

Neanderthal Phylogenetic Status


Considerable attention continues to be devoted to sorting out the phylogenetic origins of early modern humans and the role of the Neanderthals in modern human ancestry. Indeed, the discussion has become inappropriately polarized into extreme Replacement versus Regional Continuity scenarios. In the former, the Neanderthals would have had little or no role in modern human ancestry, whereas in the latter most of them would have contributed to later human gene pools. A most probable scenario has emerged from current paleontological data indicating the degree of anatomical change between the various regional Late Archaic and early modern human groups and the time frame available for those changes, combined with the geographical patterns of variation among early and recent humans. It appears that early modern humans (robust versions of modern humanity) emerged from local Late Archaic humans somewhere outside the Neanderthal range, possibly in sub-Saharan Africa. Those early modern humans then spread geographically, mating with, absorbing, and occasionally displacing local populations of Late Archaic humans like the Neanderthals. It is possible that in some areas, such as the Levant and western Europe, the local Neanderthals died out largely without issue; in other regions, such as central Europe, they appear to have contributed significantly to the ancestry of early modern humans. Such a complex scenario would explain both the relatively rapid spread of early modern human anatomy across this range (within 15,000–20,000 years) and the current patterns of regional (racial) features, which are known to take long periods of geological time to become established. In other words, not all Neanderthal populations were ancestral to early modern humans across the northwestern Old World, but most modern people from that region have Neanderthals among their ancestors.

Neanderthal Biology and Behavior


The behavior and biology of the Neanderthals can be inferred from their fossil remains, combined with the associated Paleolithic archaeological remains. For most of their distribution in time and space, the Neanderthals were associated with a Middle Paleolithic (or Mousterian) technology and related archaeological materials. The most recent Neanderthals in western Europe, however, are found with early Upper Paleolithic (Châtelperronian) tools, and in the Near East the earliest modern humans were also associated with Middle Paleolithic technology. The comments here are therefore based on current knowledge of Neanderthal biology and its behavioral implications, combined mostly with our knowledge of the usually associated Middle Paleolithic.

Although the Neanderthals represent in many ways the most recent part of an archaic Homo lineage, leading from Homo habilis through Homo erectus to groups like the Neanderthals, they nonetheless had a number of important similarities to modern humans. First and foremost, the configurations of their trunks and limbs, and especially their hands and feet, indicate that they stood, walked, and manipulated objects in much the same way that we do. There is nothing in their vertebrae, joint structures, or feet to indicate anything but a fully upright, striding bipedal gait, and their hand joints, especially of the wrist and thumb, imply ranges of movement, and hence grip positions, comparable to ours. In addition, although we cannot determine the internal structures of their brains, the size and proportions of their endocranial cavities, as well as of their vertebral spinal canals, indicate the full range of cognitive and neuromuscular abilities known for recent humans. Indeed, it is with the Neanderthals that we see, for the first time, the full achievement of the degree of encephalization (brain-to-body size ratio) that characterizes modern humans. Given the developmental and energetic costs of such a relatively large brain, they must have been using those brains in a way that made them selectively advantageous.

Related to their large brains were the first signs of a more complex social network. Some of the earliest intentional human burials are of Neanderthals, even though most are little more than a body placed in a shallow grave. This indicates a social need for formal disposal of the dead. Although extremely rare, personal ornamentation, indicating intentional modification of one's social persona, appears in their sites. And even though we cannot prove its existence, these reflections of social behavior strongly imply the presence of human language, even if it was relatively rudimentary. There is certainly nothing in what can be discerned of their vocal tract anatomy that would preclude fully modern human language. This is especially likely since what matters most for language is cognitive associational skill and fine neurological control of the vocal tract, both of which were apparently present, given their modern human level of encephalization.

These apparent mental abilities are reflected as well in their Middle Paleolithic technology. Although mechanically less efficient than the composite-material tools of the Upper Paleolithic, Middle Paleolithic flint-knapping reduction sequences clearly illustrate the need for (and hence the presence of) complex multistep anticipation and planning. The Neanderthals were also the first humans to occupy midlatitude regions permanently through full glacial cold, indicating their ability to deal with the stresses of cold and with major seasonal fluctuations in resource availability.

Nonetheless, there were a number of biological and behavioral contrasts between these Late Archaic humans and their early modern human successors. Many of these are evident in the multiple contrasts between the Middle and Upper Paleolithic archaeological records (bearing in mind that the earliest modern humans were associated with Middle Paleolithic tools and that the latest Neanderthals made early Upper Paleolithic tool kits). There was a technological shift, with a major increase in standardized stone-tool blank forms (usually prismatic blades), which in turn permitted the elaboration of tools using composite materials. Bone and antler became standard raw materials for the first time, exploited for their particular mechanical attributes. All of this contributed to a tool kit that was mechanically more effective than the Middle Paleolithic one, with greater leverage, more task specificity (and hence job effectiveness), and the appearance of effective throwing projectiles (rather than just thrusting spears).

There was little change in diet and the range of animals eaten. Yet early modern humans appear to have been able to take game animals more effectively, with less risk of personal injury. This is reflected in part in a major drop in the frequency of traumatic injuries to the arms and head, injuries that would occur especially in close-quarter hunting with thrusting (rather than throwing) spears. Early modern humans were also more effective at competing with large carnivores for space and resources.

These technological and subsistence changes were associated with an explosion of social-role complexity. Personal ornamentation becomes ubiquitous. Burials become more complex, with frequent grave goods and some indication of differential social status. Art, consisting of representational and clearly symbolic forms, combined with numerous notations on bone, indicates a major increase in the amount of information being exchanged socially. This is combined with the exchange of raw and exotic materials over hundreds of miles, probably between sequences of neighboring groups. Clear differences in site size combine with this evidence to indicate division of labor according to season or task. It is at this time, with early modern humans, that the full complement of modern human social and organizational patterns appears to have emerged.

The contrasts reflected in the archaeological record have their parallels in human biology. The Neanderthals, like all archaic members of the genus Homo, were powerfully built. This is reflected in pronounced muscular markings from their necks to their shoulders and hands, and from their hips to their knees and feet. Their legs in particular show great strength and endurance, implying frequent and prolonged movement across the landscape carrying large burdens. Their arms and hands also had greater mechanical advantages for important muscles, with an emphasis on power. Their teeth, which were otherwise very similar to those of modern humans, show exceptionally rapid wear of the front teeth, down to their roots by the late thirties or early forties; they were accomplishing many holding and stripping tasks with their teeth and jaws rather than with their hands and associated tools. These patterns correspond well with the dearth of mechanically effective implements in their tool kits and the apparent rarity of organizational solutions to exploiting diverse resources in the landscape.

Early modern human limb bones were still, by the standards of living humans, exceptionally strong. Yet they had lost the domination of strength and mechanical advantage that influenced the skeletons of archaic humans like the Neanderthals. Still very active and strong, these early modern humans were nonetheless able to accomplish many more everyday tasks through technology and social organization than through brute strength and endurance.

These behavioral contrasts are reflected in the different levels of wear and tear on the bodies of the two groups. Among the Neanderthals, over seventy-five percent had experienced periods of severe stress during development, and all who had lived to forty years bore the scars of at least one physically traumatic experience. Indeed, few of them lived past the fourth decade of life. Their lifestyle and level of cultural elaboration clearly had their costs, in terms of stress and life expectancy. But the reason we know so much about them and their stress levels is that they survived many of their injuries, even severely debilitating ones, sometimes for several decades. Early modern humans experienced many of the same forms of stress, but the overall incidence of lesions was lower, and life expectancy appears to have increased markedly.

The Neanderthals therefore represent one regional group of Late Archaic humans. They carried on the pattern of strength and endurance established early in the genus Homo, adding to it more sophisticated tools, increased intelligence (and probably language), further social cohesion and role definition, and the exploitation of glacial ecozones. Yet a number of social, technological, and organizational changes allowed the pattern we associate with early modern humans and the Upper Paleolithic to become the dominant one in a relatively short period of time. Independent of the actual phylogenetic events responsible for the emergence and spread of early modern humans, their behavioral system and associated biological changes clearly contained a definite, if subtle, advantage over that of Late Archaic humans such as the Neanderthals. [See also Homo Sapiens, Archaic; Paleolithic, articles on Lower and Middle Paleolithic, Upper Paleolithic.]

Bibliography
Paul Mellars and Chris Stringer, eds., The Human Revolution (1989). Erik Trinkaus, ed., The Emergence of Modern Humans (1989). Chris Stringer and Clive Gamble, In Search of the Neanderthals (1993). Erik Trinkaus and Pat Shipman, The Neandertals: Changing the Image of Mankind (1993).

Cro-magnons are, in informal usage, a group among the late Ice Age peoples of Europe. The Cro-Magnons are identified with Homo sapiens sapiens of modern form, in the time range ca. 35,000–10,000 B.P., roughly corresponding with the period of the Upper Paleolithic in archaeology. The term Cro-Magnon has no formal taxonomic status, since it refers neither to a species or subspecies nor to an archaeological phase or culture. The name is not commonly encountered in modern professional literature in English, since authors prefer to speak more generally of anatomically modern humans. They thus avoid a certain ambiguity in the label Cro-Magnon, which is sometimes used to refer to all early moderns in Europe (as opposed to the preceding Neanderthals) and sometimes to a specific human group that can be distinguished from other Upper Paleolithic humans in the region. Nevertheless, the term Cro-Magnon is still very commonly used in popular texts, because it makes an obvious distinction with the Neanderthals and refers directly to people rather than to the complicated succession of archaeological phases that make up the Upper Paleolithic. This evident practical value has prevented archaeologists and human paleontologists, especially in continental Europe, from dispensing entirely with the idea of Cro-Magnons.

The Cro-Magnons take their name from a rock shelter in the Vezere Valley in the Dordogne, within the famous village of Les Eyzies de Tayac. When the railway was being constructed in 1868, parts of five skeletons were found there, sealed in Pleistocene deposits along with hearths and Aurignacian artifacts. Similar finds were subsequently made at sites such as Combe Capelle and Laugerie-Basse in the Dordogne, and Mentone and Grimaldi in Italy. Other specimens found earlier, such as those from Paviland in Britain and Engis in Belgium, could be set in the same group, and it became plain that their physical makeup contrasted sharply with that of the Neanderthals discovered at other sites. Sufficient data to build up this classic picture accumulated over a period of years, but it was brought into sharp focus following the find of a classic Neanderthal at La Chapelle in 1908. The early interpretations owe much to the French scholars Marcellin Boule and Henri Vallois. Later research has extended the geographical distribution of similar humans and has provided an absolute dating scale for them; it has, however, also raised many questions about the origins of the Cro-Magnons and their status as a coherent group.

Physical Characteristics and Adaptation


Cro-Magnons were closely similar to modern humans but more robust in some features, especially of the cranium. They meet the criteria listed by Michael Day and Chris Stringer for modern humans, such as a short, high cranium and a discontinuous supra-orbital torus (brow ridge). Many individuals were well above the present-day average in stature, often reaching around 75 inches (190 cm). Their limbs were long, especially in the forearms and lower legs, body proportions suggesting to some anthropologists that their origins lie in warm climes rather than in Ice Age Europe. Significant variability had already been recognized by Boule, who attributed Negroid characters to some specimens from Grimaldi (placing them in a separate race). A recent study has found that earlier specimens, such as those from Cro-Magnon and Mladec in the Czech Republic, are outside the modern human range, whereas specimens later than 26,000 B.P. generally fall within it. Emanuel Vlcek regards the Mladec I finds as Cro-Magnons but sees features related to the Neanderthals in the later Mladec II specimens, and he ascribes later specimens from Dolni Vestonice and Predmosti to a robust Brno Group. Such findings suggest that the original remains from Cro-Magnon are too distinctive to serve as a template of identification for a race found all over Europe. If any overall trend can be picked out, it is toward greater gracility as time progressed.

Chronology
Given the rarity of human remains, it is easier to date the onset of the Upper Paleolithic than the first appearance of people resembling the Cro-Magnons, which is not necessarily the same event. Nevertheless, dates around 40,000 B.P. seem highly likely, and it is certain that populations of Homo sapiens sapiens became established throughout Europe in far less than 10,000 years. Since the 1950s the chronology of these Late Pleistocene human populations has been derived principally from radiocarbon dating (a worked illustration of the method follows at the end of this section). A late Neanderthal found at St. Césaire in western France with a Châtelperronian (initial Upper Paleolithic) industry is dated to ca. 36,000 B.P. by thermoluminescence (TL), but the Upper Paleolithic Aurignacian appears earlier in northern Spain, at ca. 42,000–39,000 B.P., as shown by radiocarbon and uranium-series dating. It is widely assumed that the Aurignacian is associated with modern (i.e., Cro-Magnon-like) populations, and that the Châtelperronian, though associated with Neanderthals, may have been triggered by the cultural effects of a modern human presence elsewhere in the region (a so-called bow-wave phenomenon). Thereafter the Cro-Magnons were continuously represented in Europe for 20,000 years or more. It might be convenient to end the Cro-Magnons with the glacial maximum of 18,000 B.P., but in France their characteristics persist in Magdalenian populations through the later part of the glaciation, until about 12,000–10,000 B.P. At this stage human populations began to become more gracile.
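A brief worked illustration may help here; it is a standard formulation of the method, not a detail drawn from the entry itself. The conventional radiocarbon age of a sample follows from the fraction F of its original carbon-14 still surviving, measured against the modern standard, using the Libby mean life of 8,033 years:

\[ t = -8033 \, \ln F \qquad \text{(years B.P.)} \]

A sample retaining 1 percent of its carbon-14 (F = 0.01) thus dates to about 37,000 B.P., and at 40,000 B.P. less than 0.7 percent survives. This is why dates bracketing the Middle to Upper Paleolithic transition lie near the practical limits of the radiocarbon method, and why independent checks such as TL and uranium-series dating carry particular weight in this period.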

Geographical Distribution
Human remains are extremely scarce in relation to the number of archaeological sites. The earliest Upper Paleolithic in France is almost devoid of skeletal remains; finds such as Cro-Magnon, Abri Pataud, and Combe Capelle are probably several thousand years later. These represent a minimal sampling of a distribution that archaeological traces strongly suggest was much wider. Thus there are no early remains of Cro-Magnons from Spain, Greece, or Turkey, but populations were probably present. To the north, Upper Paleolithic human remains have been found in Britain, represented by Paviland and Kent's Cavern, and in Germany, represented by Hahnöfersand. Farther east, burials are well represented in the Upper Paleolithic records of the Czech Republic, and in Russia at Kostenki and Sunghir. In the south a number of finds are known from Italy.

Cultural Associations
Most of the Upper Paleolithic humans are found in deliberate burials, often single but sometimes in groups, and frequently associated with grave goods such as necklaces of pierced teeth. Such finds are known from a sequence of archaeological phases beginning with the Aurignacian (e.g., Combe Capelle or Mladec), but the succeeding Gravettian (ca. 29,000–20,000 B.P.) is richer in burials (e.g., those of Dolni Vestonice in the Czech Republic), though it has yielded fewer specimens in western Europe. In southwestern Europe the Solutrean phase is associated with similar populations, and they are found again in the Magdalenian or Epi-Gravettian. By this time preserved human remains are much more numerous, and they are known from most parts of Europe. Grave goods sometimes attest to highly developed artistic abilities. The Cro-Magnons were responsible for much art, but they rarely figured in their own work.

Relationship with the Neanderthals and Other Hominids


Recent work has shown that early modern humans (sometimes called Proto-Cro-Magnons) first appeared by at least 100,000 B.P. They are documented in Africa, but most specifically at the cave sites of Skhul and Qafzeh in Israel, in the period 100,000–90,000 B.P. The Cro-Magnon specimens of Europe must be derived ultimately from one of these ancestral populations, but the available finds show no continuity. Indeed, by 60,000 B.P. Neanderthals were present in the Middle East, and the Proto-Cro-Magnons may have been displaced to the south. It seems likely that they returned somewhere around 50,000 B.P. and flowed into Europe, although there is no documentation in the Middle East other than a burial at Ksar Akil in Lebanon. There is also, according to most authors, no close similarity between the Proto-Cro-Magnons and the Cro-Magnons. The simplicity of these hypotheses is belied by the complexity of the scarce data that we do have. Just as the St. Césaire find in France documented a late Neanderthal and placed constraints on our ideas about the distribution of the early Cro-Magnons, so one new early Cro-Magnon discovery could dramatically alter our view of their origins. [See also Humans, Modern, articles on Origins of Modern Humans, Peopling the Globe.]

Bibliography
Marcellin Boule and Henri Vallois, Fossil Men (1957). Paul Mellars and Chris Stringer, eds., The Human Revolution (1989). Paul Mellars, ed., The Emergence of Modern Humans (1990). Alan Bilsborough, Human Evolution (1992). Günter Bräuer and Fred H. Smith, eds., Continuity or Replacement: Controversies in Homo sapiens Evolution (1992). Martin J. Aitken, Christopher B. Stringer, and Paul A. Mellars, eds., The Origins of Modern Humans and the Impact of Chronometric Dating (1993). John A. J. Gowlett, Ascent to Civilization, 2nd ed. (1993). Chris Stringer and Clive Gamble, In Search of the Neanderthals (1993).

Olduvai Gorge is located in northern Tanzania, close to the Great Rift Valley. It was first explored scientifically by Hans Reck in 1913, although it was the work of Louis and Mary Leakey, especially during the 1950s and 1960s, that realized its archaeological potential. The gorge at Olduvai is ca. 9 miles (15 km) long and 330 feet (100 m) deep, presenting a series of deposits, derived from lake-basin sedimentation, that span a period of almost 2 million years. Olduvai provides the most complete sequence of Pleistocene materials in Africa and was responsible, through intensive dating studies, for the establishment of an African origin for humankind. The deposits at Olduvai represent a range of lakeside and streamside localities where hominids were living in the shadow of a volcano. This volcano erupted periodically and provided the materials for dating the sequence: Olduvai was the first site to be dated using the potassium-argon technique (the principle is sketched below). Although referred to as a site, Olduvai in fact comprises a large number of localities, each of which might be termed a site elsewhere; there are over seventy such localities in Beds I and II alone.
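A hedged sketch of the principle, not a detail given in the entry itself: potassium-argon dating exploits the decay of radioactive potassium-40 in volcanic minerals, a known fraction of which becomes argon-40 gas that is trapped once an eruption's products cool and crystallize. Using the standard decay constants (total λ ≈ 5.543 × 10⁻¹⁰ per year, of which λₑ ≈ 0.581 × 10⁻¹⁰ per year leads to argon), the age of a tuff or lava follows from the measured ratio of radiogenic argon-40 to remaining potassium-40:

\[ t = \frac{1}{\lambda} \ln\!\left( 1 + \frac{\lambda}{\lambda_e} \cdot \frac{^{40}\mathrm{Ar}^{*}}{^{40}\mathrm{K}} \right) \]

Because each eruption resets the argon clock to zero, every datable ash layer brackets the fossils and artifacts lying between it and the next. Under the constants above, an Ar*/K ratio of about 1.05 × 10⁻⁴ corresponds to roughly the 1.8-million-year age cited for Bed I.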

The sequence begins with Bed I at the base, dated to 1.8 million years ago. The archaeological evidence from this bed shows that Oldowan tools, made of cobbles and flakes, were in use and, importantly, that at the DK locality a stone hut structure was present. This evidence for human alteration of the environment is reinforced at FLK, where the distribution of tools and bones suggests the presence of a structure such as a windbreak.

The Oldowan tool-making tradition was defined on the basis of the finds at Olduvai. Four forms are known: Oldowan, Developed Oldowan A, Developed Oldowan B, and Developed Oldowan C, with some intermediate assemblages reported. The tradition is based on the direct knapping of river cobbles to form a variety of relatively undifferentiated pebble and flake tools; the later variants show an increase in tool types and more regular flaking through time. The importance of stone tools can be gauged from the fact that some materials were occasionally transported from sources approximately 6.2 miles (10 km) away. Acheulean industries appeared before 1.4 million years ago, in upper Bed II, but the Oldowan remained in use alongside them until ca. 0.8 million years ago, in Bed III.

The fossils found at Olduvai have also been important. The first hominid found was Zinjanthropus boisei (later renamed Australopithecus boisei), discovered by Mary Leakey in 1959 in Bed I at FLK; representatives of Homo habilis and Homo erectus have since been found. The Oldowan is generally associated with Homo habilis and the Acheulean with Homo erectus.

The Olduvai localities have been reconstructed as showing complex organization of hominid behavior across the landscape, with home bases set up along river edges and lake shores, and specialist camps for butchery, tool making, caches of materials, and the like placed elsewhere. This pattern was then used to argue for an early date for the sexual division of labor, with male provisioning of the home bases. Since its excavation and interpretation, however, debate has arisen over these reconstructions of hominid behavior at Olduvai. It has been questioned how far the associations of tools and bones are a product of hominid behavior, as opposed to natural processes such as material washing together in flash floods and carnivores accumulating bones from their kills. The debate also continues as to whether hominids were hunting game or scavenging the leftovers from carnivore kills; certainly, the role of meat in hominid diet and behavior is one factor that needs further work. There is little research activity at Olduvai at present, and attention has shifted to earlier sites elsewhere in East Africa. [See also Acheulean Tradition; Africa: Prehistory of Africa; Human Evolution, articles on Fossil Evidence For Human Evolution, The Archaeology of Human Origins; Koobi Fora.]

Bibliography
M. D. Leakey, Olduvai Gorge: Excavations in Beds I and II, 1960–1963 (1971). R. H. Tuttle, What's New in African Paleoanthropology, Annual Review of Anthropology 17 (1988): pp. 391–426.

Introduction to Evolution

Victorian biologist Charles Darwin pointed to Africa as the cradle of humankind, because the closest primate relatives of humans lived there. A century and a half of intensive palaeoanthropological research has shown he was right. The archaeological record of human activity is longer in tropical Africa than anywhere else in the world, extending back more than 2.5 million years.

At present, the evidence for very early human evolution comes from eastern and southern Africa. Tim White describes the earliest australopithecines and hominids from Ethiopia, Kenya, and Tanzania, an area where the increasingly diverse primate fossil record now extends back to 4 million years. Bipedalism dates back far earlier than the first appearance of stone artifacts and other protohuman culture, which first appear in archaeological sites like those at Koobi Fora, on the eastern shore of Lake Turkana in northern Kenya, about 2.5 million years ago. These earliest sites are little more than transitory scatters of crude stone artifacts and fractured animal bones, located in dry stream beds where there was shade and water. In this section, Nicholas Toth and Kathy Schick describe the stone technology behind this earliest of human tool kits, reconstructed from controlled experiments and replications of the first hominid stoneworking.

Much of the evidence for very early human behavior comes from the now-classic sites in Bed I at Olduvai Gorge in northern Tanzania, excavated by Louis and Mary Leakey. Dating to just under 2 million years ago, these small artifact and bone scatters have been the subject of much controversy, but they are now regarded not as campsites but as places where early hominids cached meat and ate flesh scavenged from predator kills. The earliest human lifeway was much more apelike than human, with Homo habilis, and probably other hominids, relying heavily on both edible plants and scavenged game meat.

Homo erectus, a more advanced human, seems to have evolved about 1.8 million years ago in Africa from earlier hominid stock. By that time, too, some Homo erectus populations were living in Southeast Asia. So if these archaic humans evolved in Africa, they must have radiated rapidly out of Africa into other tropical regions. Leslie Aiello analyzes what we know about Homo erectus from a very sketchy fossil record and shows that these humans evolved slowly toward more modern forms over a period of more than 1.5 million years. Africa provides good evidence for animal butchery and the domestication of fire by Homo erectus, especially by about 750,000 years ago, with some experts arguing that the domestication of fire originated on the East African savanna. To what extent Homo erectus relied on big-game hunting, as opposed to scavenging, for meat supplies is a matter of controversy. However, more diverse tool kits, some of them surprisingly lightweight, argue for improved hunting skills throughout Africa, at a time when humans were adapting to all manner of moist and arid tropical environments.

Most authorities also believe that anatomically modern humans evolved in Africa from a great diversity of archaic Homo sapiens forms, which in turn evolved from much earlier human populations. As Günter Bräuer points out, two main hypotheses pit those who believe Africa was the homeland of modern humans against those who argue for the evolution of Homo sapiens sapiens in Africa, Asia, and other regions more or less simultaneously. The evidence for an African origin is in large part derived from mitochondrial DNA, but the fossil record from Klasies River Cave, Omo, and other locations provides at least some evidence for anatomically modern humans appearing as early as, if not earlier than, in the Near East. According to the out-of-Africa hypothesis, modern humans evolved south of the Sahara, then radiated northward across the desert at a time when it was moister than today, appearing in the Near East at least 100,000 years ago. But while the case for an African origin of modern humans is compelling, the actual scientific evidence to support it is still inadequate.

During the last glaciation, the Sahara was extremely dry, effectively isolating the African tropics from the Mediterranean. Despite this isolation, Africans developed sophisticated foraging cultures, adapted not only to grassland and woodland savanna but also to dense rain forest and to semiarid and desert conditions. We know little of these adaptations, except from increasingly specialized tool kits, many of them based on small stone flakes and blades. The ultimate roots of the Stone Age foraging cultures of relatively recent millennia and centuries lie in the many Late Stone Age groups that flourished throughout tropical Africa for more than 10,000 years, while societies in the Near East, Europe, and Asia were experimenting with agriculture and animal domestication. Some of these Late Stone Age groups, especially the ancestors of the modern-day San of southern Africa, are celebrated for their lively cave paintings and engravings, which, as David Lewis-Williams tells us, have deep symbolic meaning.

As Steven Brandt and Andrew Smith recount, farming and animal domestication came to tropical Africa very late in prehistoric times. Cereal agriculture may have been introduced into the Nile Valley by 6000 B.C., or crops may have been domesticated there indigenously; the question is still unresolved. At the time, the Sahara Desert was still moister than today, supporting scattered groups of cattle herders by 5000 B.C. While ancient Egyptian civilization was based on the annual floods of the Nile River, the Saharans had no such dependable water supplies. As the desert dried up after 4000 B.C., they moved to the margins of the desert, into the Nile Valley, and onto the West African Sahel, where both cattle herding and the cultivation of summer-rainfall crops were well established by 2000 B.C. About this time, some pastoralist groups also penetrated the East African highlands. But the spread of agriculture and herding into tropical regions was inhibited by widespread tsetse-fly belts and, perhaps, by the lack of tough-edged axes for forest clearance. It was not until after 1000 B.C. that the new economies spread from northwest of the Zaire forest and from the southern Sahara into eastern, central, and southern Africa. These lifeway changes may have been connected with the introduction of ironworking technology, which was well established in West Africa in the first millennium B.C., having been introduced from either North Africa or the Nile along desert trade routes. Once ironworking spread, especially through the Zaire forest, agriculture spread rapidly. By A.D. 500, mixed farming cultures were well established throughout tropical Africa, except in areas like the Kalahari Desert, where any form of farming or herding was marginal. The rapid spread of farming may also have coincided, in general terms, with the spread of Bantu languages throughout tropical Africa from somewhere northwest of the Zaire forest.
With the spread of food production throughout tropical Africa, many general patterns of architecture; metal, wood, and clay technology; and subsistence were established south of the Sahara. These simple farming cultures achieved great elaboration during the ensuing two millennia, largely as a result of African responses to economic and political opportunities outside the continent.

Ancient Egyptian civilization was one of the earliest and most long-lived of all preindustrial civilizations. The Nile Valley from the Mediterranean Sea to the First Cataract at Aswan was unified under the pharaoh Narmer about 3100 B.C., in a state that had entirely indigenous roots, even if some innovations, like writing, may have arrived in Egypt from elsewhere in the Near East. There is no evidence that ancient Egypt was a black African civilization, as some scholars have claimed, even if there was constant interaction between the land of the pharaohs and Nubia, upstream of the First Cataract, for more than 3,000 years. The Old Kingdom pharaohs explored Nubian lands for their exotic raw materials. When the Egyptian state passed through a period of political weakness, Nubian leaders assumed greater control and power over the vital trade routes that passed through the Land of Kush. Middle and New Kingdom pharaohs conquered, garrisoned, then colonized Kush, which survived as a powerful kingdom in its own right after 1000 B.C., reaching the height of its power when Nubian kings briefly ruled over Egypt in the eighth and seventh centuries B.C. After being driven from Egypt and chivvied as far as their Napatan homeland, the Nubian kings withdrew far upstream to Meroe, where they founded an important kingdom at the crossroads between Saharan, Red Sea, and Nile trade routes. Meroe became an important trading center, especially with the domestication of the Camel in the late first millennium B.C., and a major center of ironworking; it went into decline in the fourth century A.D., when it was overthrown by the kings of the rival kingdom of Aksum in the Ethiopian highlands. Like Meroe, Aksum prospered off the Red Sea trade, emerging into prominence in trade with the Mediterranean and India, and it reached the height of its power after Christianity came to Ethiopia in the fourth century A.D.

Two developments had a profound effect on the course of tropical African history. The first was the domestication of the camel, which opened up the trade routes of the Sahara Desert. The second was the discovery by Greek navigators, about the time of Jesus, of the Monsoon winds of the Indian Ocean. These two developments brought Africa into the orbit of much larger, and rapidly developing, global economic systems, which were to link China, Southeast Asia, Africa, and the Mediterranean and European worlds into a giant web of interconnectedness.

Camels were not used for trans-Saharan travel from the Roman colonies of North Africa, although they may have penetrated south of the desert on several occasions. The Saharan camel trade in gold, salt, and other commodities developed in the first millennium A.D., especially after the spread of Islam into North Africa. Indigenous West African kingdoms developed in the Saharan Sahel, at the southern extremities of the caravan routes, as local leaders exercised close control over the mining and bartering of gold and other tropical products. By A.D. 1000, Islam was widespread in the Sahel, and the Sahara, the West African savanna, and the forests to the south were linked by close economic ties. Ghana, Mali, and Songhai in turn dominated the southern end of the Saharan trade between 900 and 1500, centuries during which most of Europe's gold came from West Africa.
Small kingdoms also developed in the West African forest, as the institution of kingship assumed great importance, associated as it was with long-distance trade, important ancestor cults, and indigenous terra-cotta and bronze sculpture and art traditions that flourished long after European contact in the late fifteenth century.

The monsoon winds linked not only the Red Sea and Arabia with India but also the Land of Zanj, on the East African coast. During the first millennium, Arabian merchants visited the villages and towns of the coast regularly, trading gold, ivory, hut poles, and other local products for textiles, porcelain, glass vessels, glass beads, and other exotic goods. By 1100, a series of small ports and towns dotted the coast from present-day Somalia to Kilwa in the south. This was a cosmopolitan African civilization, with strong indigenous roots and close ties to Arabia. Its merchants obtained gold, ivory, and other interior products from kingdoms far from the coast, notably from the Shona chiefdoms between the Limpopo and Zambezi Rivers in southern Africa. Archaeological evidence shows how a series of powerful cattle kingdoms developed in this highland region, kingdoms that prospered from their connections with the long-distance trade routes linking them to the port of Sofala on the Mozambique coast. During the fifteenth century, Great Zimbabwe, the seat of the Mutapa Dynasty, was at the height of its importance. Zimbabwe's imposing stone ruins are among Africa's most important archaeological sites, for the settlement was abandoned just before Europeans landed at the Cape of Good Hope.

African kingdoms developed out of indigenous roots, especially in areas where local leaders could control important resources such as grazing grass, salt sources, and copper or gold mines. A series of such chiefdoms flourished south of the Zaire forest, in the Kisale region, at the end of the first millennium. Richly adorned graves testify to the great economic power and far-flung trading contacts of the region. Cultural influences from these kingdoms spread far and wide over central and southern Africa before the fifteenth century.

A seminal event in African history came with the Portuguese capture of the important Islamic trading city of Ceuta in Morocco in 1415. In the 1430s and 1440s, Prince Henry the Navigator of Portugal sent ships on long journeys of exploration down the West African coast, trying to outflank the Islam-controlled Saharan gold routes. By 1480, the Portuguese were well established along the West African coast, and in 1497–1499 Vasco da Gama rounded the Cape of Good Hope, explored the East African towns, and crossed the Indian Ocean to Calicut, opening up a southern route for the spice trade. European contact with Africa brought new economic opportunities for Africans, who took full advantage of them. These opportunities were manifested in the Atlantic slave trade, which began early in the Portuguese exploration of the African coasts and reached a crescendo in the late eighteenth and early nineteenth centuries. Christopher DeCorse summarizes the emerging field of historical archaeology, which is documenting not only the European presence in Africa but also some of the cultural interactions resulting from the slave trade and other developments. [See also Afar; Africa, Origins of Food Production In; Antiquity of Humankind: Antiquity of Humankind In the Old World; Australopithecus and Homo Habilis; East Africa; Egypt and Africa; Holocene: Holocene Environments In Africa; Human Evolution, articles on Introduction, Fossil Evidence For Human Evolution, The Archaeology of Human Origins; Humans, Modern: Origins of Modern Humans; Hunter-gatherers, African; Nubia; Pastoralists, African; Rock Art: Rock Art of Southern Africa; Trade: African; West African Forest Kingdoms; West African Savanna Kingdoms; West African Sculpture.] Brian M. Fagan

Primate Ancestors of Humans Human beings belong in the superfamily
Hominoidea, and hominoid origins are generally considered to have been about 30 million years ago in Africa. Fossil apes are known in some abundance from shortly after this time until the time of human origins about 5 million years ago. The earliest putative human ancestor is Australopithecus ramidus, from 4.5-million-year-old deposits at Aramis, Ethiopia. Following is an account of the fossil apes that predate this find and may have some bearing on human origins. The earliest-known fossil apes are known from eastern Africa, spanning a period from 24 to 14 million years ago. This is the family Proconsulidae, and the first specimens of Proconsul were found by Arthur Tyndall Hopwood of the Natural History Museum, London, in 1933. He found just nine fossils from 19-million-year-old deposits at Koru, Kenya, but subsequently Louis Leakey, the well-known anthropologist from Kenya, found many hundreds of Proconsul specimens from sites such as Rusinga Island and Songhor, in western Kenya. He described these in collaboration with Wilfred Le Gros Clark of Oxford University. Leakey recognized the distinctiveness of these fossils, erecting the family Proconsulidae, but David Pilbeam, working twenty years later at Yale University, attempted to group the different Proconsul species into lineages leading to living apes. This grouping was discarded soon afterward, with the description of additional material from Kenya by Peter Andrews, when he put forward the now generally accepted view that the seven or eight species from the early Miocene deposits of Kenya belong to Leakey's family, Proconsulidae. These species lack most hominoid characters, but details of the morphology of the elbow region and the probable lack of a tail indicate that they were primitive apes. The earliest fossils that can be assigned to the family have been found recently by Meave Leakey at Lothidok in northern Kenya in deposits dated to about 24 to 25 million years ago, and the latest comes from Fort Ternan, also in Kenya, at about 14 million years ago. Slightly later in the early Miocene of East Africa, the recently described Afropithecus turkanensis comes from the site of Kalodirr in northern Kenya. It was found by Richard and Meave Leakey, and it differs from Proconsul by sharing more advanced hominoid characters with later apes. In particular, it had massive canines and premolars, and it had a more robustly built and longer face than was present in the earlier fossils. The closely related Heliopithecus leakeyi from Ad Dabtiyah, Saudi Arabia, shares many of the same characters. A small collection was made from this site by Roger Hamilton and Peter Whybrow of the Natural History Museum, London, and additional material is needed to fill in the gap between proconsulids and later fossil hominoids. Another fossil ape very similar to the afropithecines is the genus Kenyapithecus, an enigmatic and poorly known group of fossils from Middle Miocene deposits on Maboko Island and Fort Ternan in southern Kenya and Nachola in northern Kenya. Many of these specimens were again found by Louis Leakey, with additional material coming from excavations at Nachola and Maboko, and all are dated at between 15 and 14 million years ago. Interpretations range from grouping them all in the same species to putting them into different tribal groupings, but it is becoming accepted that they are distinct, either at the generic level or at a still higher taxonomic level. At this stage of the Middle Miocene, the earliest hominoids outside Africa are
encountered in Turkey, at the site of Pasalar, which is dated to about 15 million years ago. The huge collection of fossil hominoids from this site (well in excess of one thousand specimens) has been made by myself and Berna Alpagut, of the University of Ankara, and we conclude that the species Griphopithecus alpani is very similar to Kenyapithecus from Fort Ternan and that the two genera should be grouped taxonomically. After this period, fossil apes become abundant in Europe and Asia and extremely rare in Africa, although the reasons for this are not known. For instance, the later fossil record in Africa can be summarized as follows: a single tooth from 12-million-year-old deposits at Ngorora; another single tooth from Lukeino, dated at about 8 million years ago; and an upper jaw from Samburu Hills, not yet named but clearly a new genus and species probably related to the gorilla. The age of the Samburu Hills deposits is between 8 and 4 million years ago. One of the most common groups of fossil ape in Eurasia, from about 12 to 7 million years ago, is the lineage leading to the orang utan. Abundant fossils of Sivapithecus have been found by many field workers from Turkey in the west to China in the east, with the most notable collection being made by David Pilbeam and his colleagues, who have made large collections ranging from over 12 to about 7 million years ago in sediments on the Potwar Plateau in Pakistan. A nearly complete face complements a partial face from similar-aged deposits in Turkey, showing many similarities shared by Sivapithecus and the orang utan, so this lineage appears to have arisen some time before 12 million years ago. This branching point is frequently used today in calibrating molecular clocks, as is also the divergence date for the whole hominoid group from 30 million years ago. Many other fossils from Europe and Asia are now also put into the orang utan lineage, notably the large collection of skulls, mandibles, and teeth from Lufeng in southwestern China, Lufengpithecus. Two other groups have also been grouped with the orang utan by some researchers. The species Dryopithecus fontani was the first fossil ape ever described, in the publication of the first specimens, which shortly preceded Darwin's Origin of Species. The earliest finds came from 12-million-year-old deposits in France, but more recently better collections have been made in Spain by Miguel Crusafont and in Hungary by Miklos Kretzoi. The main Spanish site, Can Llobateres, is between 10 and 9 million years old, and it has more recently been excavated by Salvador Moya-Sola and Meike Kohler from the Crusafont Palaeontological Institute in Sabadell. They have found a skull and parts of the skeleton of a single individual of Dryopithecus that they claim shares characteristics with the orang utan, leading them to group the fossil ape with Sivapithecus in the orang utan lineage. This view is hotly contested by David Begun and Laszlo Kordos, who together are working on the new collections from Rudabanya, the Hungarian site. This is the same age as the Spanish sites, and the species of Dryopithecus found there has many similarities to the Spanish species, although clearly they are not the same species.
A skull has also been found recently at Rudabanya, and Begun and Kordos conclude that the morphology of this skull shows, first, that Dryopithecus is a member of the great ape and human clade, and, second, that it may have some affinities with the African apes as opposed to the orang utan. In both cases, there are morphological similarities justifying these opposing claims, and it is not clear at present which of these are based on homologous similarities and which are not. More detailed analysis of character polarity is necessary to resolve this conflict.

The last fossil to be considered is in some ways the most controversial of all. This is a collection from Greece, again from similar-aged deposits to Rudabanya and Can Llobateres, and the fossils have been named Ouranopithecus macedoniensis by Louis de Bonis of the University of Poitiers in France. They are similar to the prior-named Graecopithecus freybergi described by the late Ralph von Koenigswald, and there is some disagreement as to which name is correct. A skull has recently been described in addition to the abundant jaws and teeth, and this shows characters of the nasoalveolar region and the hafting of the face on the skull that are found in living forms only in the African apes and humans. The orang utan is distinct in these characters, so de Bonis's conclusion is that Ouranopithecus is part of the African ape and human lineage. He actually goes further and claims that it is directly ancestral to humans on the basis of reduction in size of the canine, but this conclusion is not justified by the slender evidence. In particular, it contrasts with the recently discovered Australopithecus ramidus in characters such as enamel thickness and relative canine size. [See also Australopithecus and Homo Habilis; Genetics In Archaeology.]

Bibliography
Frederick Szalay and Eric Delson, Evolutionary History of the Primates (1979). Russell Ciochon and Robert Corruccini, eds., New Interpretations of Ape and Human Ancestry (1983). Bernard Wood, Lawrence Martin, and Peter Andrews, Major Topics in Primate and Human Evolution (1986). John Fleagle, Primate Adaptation and Evolution (1988). Ian Tattersall, Eric Delson, and John van Couvering, Encyclopaedia of Human Evolution and Prehistory (1988). Peter Andrews and Christopher Stringer, Human Evolution, an Illustrated Guide (1989).

Fossil Evidence For Human Evolution


Although fossil evidence is our main source of information documenting the course of human evolution, it is not the only relevant evidence. This is particularly true in determining the antiquity of the human line. Analysis of DNA similarities and differences between humans and living apes suggests that humans separated from our closest living relatives, the African apes, sometime prior to about 6 million years ago. The earliest fossils that have been assigned to the human line are two small fragments of mandible dating shortly after this time from the sites of Lothagam (approximately 5.5 million years ago) and Tabarin (approximately 5.0 million years ago) in Kenya. The relationship of these fossils to the newly defined basal hominid Ardipithecus ramidus from Aramis, Ethiopia (approximately 4.4 million years ago), is as yet unclear. After 4.5 million years ago, fossils become more numerous and can be divided into two major groups, the australopithecines and early members of the genus Homo.
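Such divergence estimates rest on the molecular clock. As a simple illustration (a deliberately idealized sketch; real analyses correct for multiple substitutions at the same site and for variation in rate), if substitutions accumulate at a constant rate r per site per year along each lineage, then two lineages that diverged T years ago will differ at a fraction D of their sites:

```latex
\[
D = 2\,r\,T
\qquad\Longrightarrow\qquad
T = \frac{D}{2r}
\]
```

The factor of 2 appears because changes accumulate independently along both branches after the split; an observed divergence D and an independently calibrated rate r thus yield the separation time T.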

Australopithecines
The first australopithecine was discovered at the site of Taung in South Africa in 1924 and was named Australopithecus africanus by Raymond Dart. Subsequently, many other australopithecine fossils have been found at other sites in southern and eastern Africa. The genus Australopithecus lasts for over 4 million years. There are currently
six species recognized in the genus: A. anamensis, A. afarensis, A. africanus, A. aethiopicus, A. boisei, and A. robustus. The first three of these species are sometimes referred to as gracile australopithecines and the last three as robust australopithecines. Some researchers emphasize the difference between these two groups by putting the robust australopithecines in their own genus, Paranthropus. The oldest australopithecine taxon, Australopithecus anamensis, was established by Meave Leakey and her coworkers in 1995 for fossils from the Lake Turkana region of Kenya. This taxon is approximately 4.2 to 3.9 million years old and differs from the earlier Ardipithecus ramidus in many details of its anatomy, including the thick enamel on its teeth. Its teeth are similar in some features to those of earlier representatives of the slightly later australopithecine species A. afarensis, but its postcranial skeleton appears to be more modern in form. A. afarensis was established by Don Johanson in 1978 and now includes fossils from Hadar, the Middle Awash, and Omo in Ethiopia, Laetoli in Tanzania, and Koobi Fora in Kenya. A. afarensis spanned a period of almost a million years, from 3.9 to 3.0 million years ago, and occupied a variety of habitats, from relatively forested to open country. The best-known A. afarensis fossil is a partial skeleton called Lucy (AL 288-1) that was discovered at Hadar in 1974. It shows that these australopithecines had skeletons that were very different from our own, with short legs in relation to their inferred body sizes and features of the hands, arms, and chest that suggest they were adept at climbing in trees. But features of the pelvis and legs confirm that they were also capable of bipedal locomotion. The well-preserved 3.6-million-year-old footprint trail discovered by Mary Leakey at Laetoli, Tanzania, is also clear evidence that they walked on two legs when on the ground. The reasons for the evolution of bipedalism are still unclear. It has recently been suggested by Peter Wheeler of Liverpool John Moores University that bipedalism is a thermoregulatory device. By walking upright, the australopithecines would have absorbed 60 percent less heat from the sun during the midday hours. This would have helped them to keep down their core temperatures, allowing them to forage in open environments for longer periods of time, while other animals would have had to seek shade. Alternatively, it has also been suggested that bipedalism first evolved as a feeding adaptation in the forest. Whereas A. afarensis and A. anamensis are found only in eastern Africa, the third gracile australopithecine, A. africanus, is found only in southern Africa, at the sites of Sterkfontein, Makapansgat, Taung, and, more recently, Gladysvale. It is slightly more recent in age and dates to approximately 3 to 2.5 million years. A. africanus differs from A. afarensis in details of its skull, teeth, and feet; however, in other aspects they are so similar that Phillip Tobias of the University of the Witwatersrand has suggested in the past that they should only be separated at the subspecific level. The main difference between these gracile australopithecines and the robust australopithecines is the large size of the jaws and teeth in the robust species. Fossils that have since been assigned to Australopithecus robustus were discovered by Robert Broom in the 1930s at Kromdraai and Swartkrans in South Africa.
In 1959, fossils of a species with even larger teeth and jaws, Australopithecus boisei, were discovered by Louis and Mary Leakey at Olduvai Gorge, Tanzania, and have subsequently been
found at sites such as Lake Natron (Peninj) in Tanzania and Chemeron and Koobi Fora in Kenya. Both A. boisei and A. robustus first appear about 2 million years ago. The evolutionary relationships of the australopithecines are highly controversial. However, morphological features of A. aethiopicus, an early robust australopithecine discovered by Richard Leakey and his team on the western shore of Lake Turkana in Kenya in 1985 and dated to about 2.5 million years ago, suggest to some authorities an evolutionary link between the earlier A. afarensis and the later A. boisei in eastern Africa. Some would also see a link between A. afarensis, A. africanus, and A. robustus in southern Africa. The relationship between Ardipithecus ramidus and A. anamensis and these later species is currently unknown.

Early Homo
Early Homo, with larger brains and smaller teeth than the australopithecines, first appears at about 2.5 million years ago in Africa, about the same time as the first stone tools in Ethiopia and Kenya. Elizabeth Vrba of Yale University suggests that the appearance of early Homo at this time correlates with a period of worldwide climatic cooling. This is called the Turnover Pulse Hypothesis. She argues that as the climate changed, hominids as well as other animals would have had to adapt to increasingly arid and open conditions. Early Homo would have achieved this through the evolution of tool use, a larger brain, and an arguably more complicated social structure. The australopithecines would have adapted by alternative means, primarily through larger teeth and jaws that would have allowed them to process larger quantities of food of relatively low nutritional value. Although this idea is intriguing, John Kingston of Yale University has provided evidence based on stable carbon isotope analysis that the climate in Africa has not changed substantially throughout this period. Whatever the reasons behind the evolution of the genus Homo and the diversification of the australopithecines, it is important to realize that Homo coexisted with the australopithecines in Africa for a period of more than a million years. There are currently three recognized species of early Homo: Homo habilis, Homo rudolfensis, and Homo ergaster. H. habilis was the first of these species to be recognized and dates to between about 2 and 1.5 million years ago. This species was established for fossils from Olduvai Gorge in 1964 by Louis Leakey, John Napier, and Phillip Tobias, and is now recognized at both Olduvai Gorge and Koobi Fora. Homo habilis has the smallest brain size of any of the early members of the genus Homo and has a skeleton that resembles that of the australopithecines, with relatively short legs as well as other features suggesting it was still adept in the trees. Homo rudolfensis appears at about 2.5 million years ago. It is a larger-brained species and is currently recognized at Koobi Fora, Kenya, and Uraha, Malawi. Although it has a larger brain size than H. habilis, it also has larger teeth and a face that retains some australopithecine features. The third species of early Homo, Homo ergaster, is the only one that is indisputably advanced in its morphology over the other Plio-Pleistocene hominids. It appears about 1.8 million years ago and combines a large brain size with a humanlike skeleton that lacks any evidence of a continued life in the trees. The best-preserved specimen of this species is a remarkably complete skeleton (KNM-WT 15000) of a ten- to eleven-year-old youth that was found in 1984 at the site of Nariokotome on the western shore of Lake Turkana by Richard Leakey and his team. It clearly shows humanlike long legs relative to its inferred body weight and had body proportions similar to those of
modern people who live in hot, dry climates. Homo ergaster is, at present, the most probable ancestor of the hominids that left Africa and spread into Europe and Asia.

Homo erectus
The first fossils that are now recognized as Homo erectus, a skullcap and femur, were found in 1891 and 1892 by the Dutchman Eugene Dubois, at the site of Trinil in Java. He gave the name Pithecanthropus erectus to this material. The skull was thick boned, with a flat forehead and large brow ridges and a cranial capacity of about 55 cubic inches (900 cubic cm). Many more fossils have been recovered from Java in subsequent years at the sites of Mojokerto, Sangiran, and Sambungmachan. Carl Swisher has recently provided new absolute dates of 1.81 million years ago for an infant skullcap from Mojokerto and of 1.66 million years ago for two specimens from Sangiran. These dates suggest that Homo erectus or its ancestors first reached Java at about the same time that Homo ergaster appeared in Africa, almost 1 million years earlier than most scientists would have suggested. These new dates also help to put into context the controversial date of 1.4 million years for a Homo erectus mandible from the site of Dmanisi in Georgia, material that was previously considered to be from the oldest non-African hominid. In the 1920s and 1930s, fossils similar to Pithecanthropus were found at Zhoukoudian in China. These were originally given the name Sinanthropus pekinensis. Recent thermoluminescence dating has established that Zhoukoudian was occupied by hominids between about 400,000 and 250,000 years B.P. More recently, similar fossils have been found at a variety of sites throughout China, such as Chenjiawo, Gongwangling, Longtandong Cave in Hexian County, Yuanmou, and Jianshi. Fossils assigned to Homo erectus have also been found in northern Africa (Atlanthropus), South Africa (Telanthropus), Tanzania (Chellean Man), and Germany (Bilzingsleben). All of these fossils are highly variable, and some scientists have suggested that they are too variable to represent one interbreeding species. This interpretation argues that Homo erectus, defined as only those fossils from Java and China, represents a different species from the contemporaneous fossils known from Africa and Europe. The more common interpretation is that these fossils represent one species whose diversity can be explained by the great geographical and temporal distances separating them. Other more recently discovered fossils from China suggest that after 250,000 B.P., Homo erectus in the Far East begins to change into what is known as archaic Homo sapiens. These fossils, from the sites of Jinniu Shan (Liaoning Province), Yunxian (Hubei Province), and Dali (Shaanxi), show various combinations of thinner bone, larger brain sizes, more rounded skulls, and smaller, more modern faces. The fossils might indicate the movement of people from elsewhere in the world into China. Alternatively, they may indicate that Homo erectus in Asia evolved into archaic Homo sapiens with relatively little genetic contact with other areas in the world.

Archaic Homo sapiens


Archaeological and palaeontological evidence suggests that Europe was occupied by hominids at least by the beginning of the Middle Pleistocene (750,000 B.P.). The most informative European Middle Pleistocene site is Atapuerca in northern Spain. This site has yielded the earliest European fossils (750,000 years ago) as well as over 700 hominid fossils representing at least twenty-four individuals from later deposits dating to about 300,000 years ago. These fossils show an interesting mixture of features,
some of which are found in Homo erectus and others in the more recent European Neanderthals. They also show a large degree of intrapopulation variation, suggesting that the features found in other isolated European fossils from this period such as Steinheim (Germany), Swanscombe (England), Arago (France), and Petralona (Greece) can all be accounted for in one contemporaneous population. They also establish that evolution in Europe during the Middle Pleistocene was moving toward the Neanderthals. Anatomically modern Homo sapiens must have arisen elsewhere. The Neanderthals lived throughout Europe and the Levant from about 130,000 B.P. until about 30,000 years ago. They had squat bodies and short distal limb segments that can be interpreted as adaptations to cold, glacial conditions. One of the most complete Neanderthal skeletons was found at the site of Kebara, Israel, and this suggests that although Neanderthals walked on two legs, their type of bipedalism might have been different from that found in modern humans. The Kebara Neanderthal also has a fully modern hyoid bone, indicating that it most probably had a larynx (voice box) that was capable of producing the full range of modern speech sounds. One of the most recent Neanderthals comes from the site of Saint-Césaire in France and dates to about 36,000 years ago. Archaeological evidence from other sites such as El Castillo and L'Arbreda Caves (Spain) suggests that modern humans appeared in Europe before the Neanderthals disappeared and may have coexisted with them for perhaps 5,000 years. The fate of the Neanderthals is unknown, although it is likely that they were either replaced by or genetically absorbed into these modern populations. Europe is not the only place where the Neanderthals seem to have been contemporaneous with modern humans. In the Levant, anatomically modern humans first appear at about 100,000 B.P. and are known from the sites of Qafzeh and Skhul in Israel. Neanderthals are also known from this period or possibly earlier, from the site of Tabun, and are found, in addition, in more recent sites, such as Kebara and Amud in Israel and Shanidar in Iraq. There is considerable controversy over whether these fossils represent two separate and discrete, noninterbreeding species or whether they represent two populations of humans and show evidence of interbreeding. The only other area of the world where there are such early dates for modern humans is Africa. Here fossils from sites such as Omo in Ethiopia as well as Border Cave and Klasies River Cave in southern Africa are about 100,000 years old and have been interpreted as modern in form. These fossils appear to be the end of a continuum of evolution that begins with Homo erectus in the form of Olduvai Hominid 9 (Chellean Man), which dates to about 1.2 million years ago. By about 400,000 or 300,000 years ago, the African hominids are more modern than the earlier Homo erectus fossils, having larger brains and more rounded skulls. This trend continues through fossils such as Florisbad from southern Africa and Ngaloba from Tanzania, which may date between 200,000 and 100,000 years ago and up to the fully modern hominids. This sequence suggests that while Neanderthals were occupying Europe, modern humans were appearing in Africa and the Levant. The evidence seems to be clear that these modern humans ultimately spread into Europe to replace the Neanderthals. But what happened in the Far East?
It is possible that modern humans also spread eastward to replace Homo erectus in Java and China. Genetic evidence tends to support this idea. The apparent transitional fossils such as Jinniu Shan and Dali, which
date earlier than the appearance of modern humans in Africa and the Levant, however, may indicate that the story is more complicated, involving not only population movement and hybridization but also local continuity and selection throughout the later part of the Middle Pleistocene and the Late Pleistocene periods. [See also Australopithecus and Homo Habilis.]

Bibliography
R. G. Klein, The Human Career: Human Biological and Cultural Origins (1989). B. Wood, "Origin and Evolution of the Genus Homo," Nature 355 (1992): pp. 783–790. L. C. Aiello, "Human Origins: The Fossil Evidence," American Anthropologist 95 (1993): pp. 73–96. J.-L. Arsuaga, I. Martínez, A. Gracia, J.-M. Carretero, and E. Carbonell, "Three New Human Skulls from the Sima de los Huesos Middle Pleistocene Site in Sierra de Atapuerca, Spain," Nature 362 (1993): pp. 534–536. R. Lewin, Human Evolution: An Illustrated Introduction, 3rd ed. (1993). M. B. Roberts, C. B. Stringer, and S. A. Parfitt, "A Hominid Tibia from Middle Pleistocene Sediments at Boxgrove, UK," Nature 369 (1994): pp. 311–313. C. C. Swisher III, G. H. Curtis, T. Jacob, A. G. Getty, A. Suprijo, and Widiasmoro, "Age of the Earliest Known Hominids in Java, Indonesia," Science 263 (1994): pp. 1118–1121.

The Origins of Human Behavior While humans are distinct in the animal
kingdom through a number of anatomical characteristics, it is their behavior that is most distinctive and sets the species apart. To some extent there is a high degree of integration between the anatomical features and the behavioral ones. For example, bipedalism allows the hand to become a specialized and highly dexterous organ capable of very complex manipulation; the large brain allows for massive levels of information processing and a wide range of creative and logical thought processes. Furthermore, the development of biology, especially neurobiology, is increasingly showing the interactions between cognitive and psychological states and biochemical activity. Behavior, therefore, cannot be divorced from the rest of the evolutionary process. Such developments have important implications for the study of the evolution of human behavior. It is not the case that there is a replacement of biological evolution, focusing on hard anatomy, by cultural evolution, concerned with malleable behavior. Genes play a part in behavior, and therefore the operation of natural selection on behavior can be expected. The emergence of human behavior is an essential part of evolutionary biology. A key problem is gaining access to information about the evolution of behavior. The starting point should be the behavior of living apes and monkeys, and much insight has been gained in recent years by the study of primate behavior in the wild. The primary impact has been to close the apparent gap between human and nonhuman capacities. Whereas it was once generally held that humans were unique as tool makers, hunting primates, language users, and social animals, it is now clear that these characteristics occur in other animals. Chimpanzees are known to use tools: twigs for extracting termites from their nests, stone hammers for cracking open nuts.

They also hunt, both individually and cooperatively, probably obtaining more than five percent of their food from meat. All anthropoid primates are highly social, living in a variety of social systems, often held together by bonds of kinship. Their cognitive capacities vary between species and are difficult to assess, but studies and experiments have shown that some possess rudimentary language (meaning-specific sounds, context-dependent vocalizations, and, in chimpanzees, an ability to communicate grammatically using sign language). There also appear to be considerable abilities to employ innovative behavior, and both macaques and chimpanzees have been shown to possess cultural traditions within particular populations. The behavior of living primate species cannot be applied uncritically to hominids. It is probably the case that the detailed behavior of each species is particular to it, and many errors have been made in the past by applying single-species models to early hominids. Chimpanzees, on account of their close relationship to humans, and baboons, due to their assumed environmental similarity with the australopithecines, have been extensively used in this way. However, what these studies do provide is an idea of the baseline from which hominids have developed their own unique characters. The most important conclusion is that this baseline is not that of a simple, asocial, and instinctive organism but that of an already highly complex mammal. In particular, it is likely that the first hominids lived in social groups, hunted and scavenged, and used rudimentary tools. More specifically, on account of their relationships with the African apes, it is likely that they were male-kin bonded with already extensive patterns of parental care. Such inferences drawn from the primates need to be placed against the archaeological and fossil evidence, partly to determine the timing of events and partly to understand the reasons why certain characters evolved. The particular aspects described here are bipedalism, tool making, foraging behavior, and language. Bipedalism seems to be the fundamental characteristic of the hominids, occurring earlier than other traits. It is found in the earliest australopithecines and is probably the unifying feature of the Hominidae. A number of explanations for the evolution of bipedalism have been proposed. Freeing the hands for tool making was Darwin's original suggestion, but this seems unlikely in view of the later development of stone tool manufacture. More probable is that bipedalism is an energetically efficient response to the spread of nonforested environments between 10 and 5 million years ago. Apart from its locomotor efficiency in terrestrial environments, it has also been convincingly argued that it provides a number of clear thermoregulatory advantages in what would have been extremely hot environments. It is thus linked to other unique human traits such as copious sweating and loss of body hair. As described earlier, chimpanzees use and make tools in the wild, and have been shown in captivity to be capable of making stone flakes. However, the first clearly recognizable stone tools do not appear until shortly before 2 million years ago, around the time of the appearance of Homo. An implication is that the early australopithecines were not consistent manufacturers of stone tools.
The presence of stone tools provides considerable information about the capacities of the hominids: It has been suggested that they were predominantly right-handed and capable of sufficient forethought to locate, extract, and modify natural materials. It is apparent
that some at least were used in animal butchery. It is also the case that stone tool practices such as the Acheulean Tradition provide evidence for some form of cultural inheritance. In contrast, though, to these inferences of more human behavior, it should be noted that until the Upper Pleistocene, stone tool traditions show very little variation and are conservative over enormous geographical areas and time periods. Evidence for changes in foraging behavior comes primarily from the archaeological record, and is therefore dependent upon the presence of stone tools. Additional information also comes from tooth morphology and wear, and more recently from chemical analysis of hominid fossil bones. The question of early hominid foraging has been one of considerable controversy in recent years. On one side, it has been argued that stone tools, cut marks on bones, and the association of stones and bones together is evidence for well-developed hunting behavior from the origins of the genus Homo. This has been used to support a model for the early appearance of relatively modern behavior, with only gradual change during the course of the Pleistocene. On the other side, it has been claimed that such evidence is misleading, that only opportunistic scavenging and hunting of small prey occurred, and that it was only with the appearance of modern humans in the last 100,000 years that strategies akin to hunting and gathering were present. This interpretation has usually been associated with a model that contrasts markedly the behavior of all archaic hominids with that of modern humans, and proposes some form of human revolution in the Upper Pleistocene associated with cultural, symbolic, or linguistic abilities. A number of intermediate positions can be held, but it is probably the case that in the past there has been a tendency to overemphasize the humanness of the early hominids. Closely linked to this controversy has been the problem of language origins. Evidence for language has been inferred from basicranial anatomy, the structure of the brain in fossil endocasts, and archaeological evidence. Again, some have proposed that language can be traced back to earliest Homo, while others have argued that only Homo sapiens was capable of language, and that the explosion of art, tool making, and other cultural characteristics of the Upper Paleolithic is evidence for this. Drawing on genetic and modern linguistic evidence, it is probably the case that all known languages go back to a common stock around 100,000 years ago, and that this would be the origin of modern languages, but it does not follow that other hominids, such as Neanderthals, had no language or communicative skills. That such languages have not survived is not evidence that they never existed. Furthermore, it is clear from the enlargement of the brain that occurred from about 2 million years ago and accelerated from 400,000 years ago that archaic hominids were intelligent, social animals. As indicated earlier, the results of studying living primates show that the baseline for hominids was considerable. In the past, debates about the origins and evolution of human behavior tended to focus on alternative single-factor explanations: culture, language, tool making, and so on. More recently, and with much better chronological control, there has been a much more concerted effort to look at the interaction of several factors, and this has led to further controversy over the timing of particular events.
These developments have led to a more ethological approach, drawn from the study of animal behavior, with less emphasis on the anthropocentric concept of human culture. [See also Darwinian Theory; Genetics In Archaeology.]

Robert Foley

The Archaeology of Human Origins


By definition, the prehistoric archaeological record begins when the earliest artifacts (objects modified through manufacture or use) produced by humans or protohumans can be recognized. Although a range of organic materials, such as wood, bone, tooth, and horn, may have been used as tools by early hominids, it is difficult to identify such possible implements. Wood, for instance, rarely survives in the prehistoric record, and bone, tooth, and horn can be modified by a host of other non-hominid agencies, such as carnivore and rodent gnawing, trampling, and postdepositional breakage, making it very difficult to identify unambiguously bones that have been worked or used by early hominids in the early prehistoric record. Fortunately, hominid-modified stones are much easier to identify and tend to be fairly indestructible, and therefore serve as useful markers of early hominid behavior. At present the archaeology of human origins can be taken back to about 2.5 million years ago on the African continent.

Appearance of the First Hominids in Africa


Although the postulated time of divergence between the African apes and humans is estimated to be between six and ten million years ago, the earliest clear evidence for small-brained, bipedal hominids in the fossil record is between four and three million years ago and comes from East African localities such as Hadar and the Middle Awash in Ethiopia and Laetoli in Tanzania. These fossils are usually assigned to Australopithecus afarensis (although some scholars believe that the range of anatomical variation warrants at least two taxa). Although upright walkers, these creatures still exhibit apelike features such as relatively long arms and curved phalanges, interpreted by some as arboreal adaptations. Brain size is essentially that of modern African apes, around 18 to 24 cubic inches (300 to 400 cu cm). Although hand bones suggest that Australopithecus afarensis had a high degree of digit opposability, no recognizable stone tools are known from this period of time. Between 3 and 2.5 million years ago, two new taxa appear to have emerged from this ancestral bipedal stock: Australopithecus africanus in South Africa (known especially from the cave of Sterkfontein) and Australopithecus (Paranthropus) aethiopicus from West Turkana, Kenya. Again, no recognizable archaeological traces are associated with these forms. The robust Australopithecus aethiopicus skull exhibits a strong sagittal crest and enlarged premolars and molars that characterize later robust hominids of Africa between 2.5 and 1 million years ago: Australopithecus (Paranthropus) robustus in South Africa, found at the cave deposits of Swartkrans and Kromdraai, and Australopithecus (Paranthropus) boisei in East Africa, from localities such as East and West Turkana in Kenya and Olduvai Gorge in Tanzania. These robust australopithecines exhibit cranial capacities of between 25 and 34 cubic inches (400 and 550 cu cm). Between 2.5 and 1.8 million years ago, larger-brained gracile forms with cranial capacities of between 37 and 50 cubic inches (600 and 800 cu cm) are known from African localities such as East Turkana in Kenya and Olduvai Gorge in Tanzania. These forms have usually been assigned to Homo habilis (some anthropologists have
distinguished between those with a somewhat smaller body and brain size as Homo habilis, and those with a larger body and brain size as Homo rudolfensis). Beginning about 1.8 million years ago, a larger-brained hominid emerges in Africa, Homo erectus (some anthropologists call the earliest African forms Homo ergaster). With a cranial capacity of 50 to 55 cubic inches (800 to 900 cu cm) and a larger body size, similar to that of modern humans, Homo erectus appears to have spread out of Africa and into Eurasia sometime between 1.8 and 1 million years ago. Recently it has been suggested that some of the Java fossils of Homo erectus, notably one from Mojokerto, may be as old as 1.8 million years. If true, this suggests a migration out of Africa soon after the emergence of Homo erectus. The robust australopithecines of East and South Africa went extinct by one million years ago, leaving the genus Homo as the only hominid lineage to continue into the Middle Pleistocene.

Earliest Archaeological Sites


The earliest recognizable stone artifacts are in the form of simple flaked and battered rocks that characterize the Oldowan Industrial Tradition (named after the locality of Olduvai Gorge) or Mode 1 industries. The oldest of these sites appear to be about 2.5 million years old, and a range of sites exhibiting such a technological stage are known between 2.5 and 1.5 million years ago. Such sites include the Omo, Gona, Melka Kunture, and Gadeb in Ethiopia; West and East Turkana and Chesowanja in Kenya; Swartkrans and Sterkfontein in South Africa; and Ain Hanech in Algeria.

Technology
The majority of these early African stone assemblages were dominated by lava, quartz, quartzite, or limestone as principal raw materials, usually obtained in the form of water-worn cobbles or angular chunks. Hard-hammer percussion and sometimes the bipolar technique were the principal techniques used. Oldowan core forms are traditionally classified into types such as choppers, discoids, polyhedrons, and heavy-duty scrapers; retouched artifacts (light-duty tools) made on flakes include scrapers and awls. Artifacts showing signs of battering and pitting include hammerstones, spheroids and subspheroids, and anvils. Experiments have suggested that many of the Oldowan core or core-tool forms could simply be by-products of producing sharp, serviceable flakes, although some of these cores could have been used for wood chopping or shaping. At Olduvai Gorge, a number of sites in Bed II have higher proportions of light-duty tools and spheroids, and have been designated Developed Oldowan by Mary Leakey. Around 1.5 million years ago, after the emergence of Homo erectus in the fossil record, new elements appear in some stone artifact assemblages: largish hand axes, picks, and cleavers, often made on large flakes struck from boulder cores. These large bifacial forms were the hallmark of the Acheulean Tradition (Mode 2 technologies). Early Acheulean sites include EF-HR in Bed II at Olduvai Gorge and sites at Peninj, Lake Natron in Tanzania, and Konso Gardula in Ethiopia. Besides typological classificatory studies, a range of other methodological approaches have been applied to Early Stone Age sites. Experimental replicative and functional studies have been very useful in understanding why recurrent artifact forms are found at many archaeological sites, and how the rock type, shape, and size of a raw material can affect the resultant products. Microwear analysis of fresh, fine-grained siliceous
artifacts can yield valuable clues pertaining to artifact function. And refitting studies of flaked stone from early archaeological sites can help to show what stages of flaking are represented at an archaeological site, as well as giving a blow-by-blow sequence of flaking events for a given core or retouched form. Refitting studies can also help to assess whether a given site has been heavily disturbed by water action or vertical disturbance from such agencies as roots and burrowing animals.

Environmental Studies
Paleoenvironmental reconstruction at early archaeological sites has been approached through a range of methods, including evidence from fossil faunal remains, fossil plant remains (normally in the form of pollen and root casts), and geological and geochemical analysis (in particular carbon and oxygen isotope studies). Such evidence suggests that major drying/cooling phases on the African continent occurred several times, including one period about 2.5 million years ago. Some researchers have suggested that this climatic change may have led to many extinctions of animal forms as well as the emergence of new forms adapted to new conditions. The emergence of the genus Homo and Oldowan sites at about this time could have been a response to such changes.

Social Organization
There is little direct evidence to suggest what types of social organization characterized early hominids; patterns observed among nonhuman primates as well as modern human foragers have often served as partial models in attempts to interpret social organization and behavior. Prior to the emergence of Homo erectus, it would appear that early hominids were characterized by a high degree of sexual dimorphism, suggesting to many anthropologists that there was competition between males for access to females, and that some sort of nonmonogamous mating pattern existed. Homo erectus appears to have exhibited a reduced degree of sexual dimorphism, which may suggest less antagonism among males in competing for females and dominance. Homo erectus also was characterized by a larger brain and body size than earlier hominids, which may imply a larger home range as well. Limb bones of this taxon suggest that these creatures were more efficient at long-range bipedal walking than were earlier hominids, which may in part explain why Homo erectus is the first hominid form known to have migrated out of Africa.

Theories of Archaeological Site Formation


Both hominid and nonhominid forces are involved in the formation of Early Stone Age archaeological sites. Most of these sites are buried by sediments that have been carried by river and delta floods or lake transgressions, and such water action may have affected the distribution of prehistoric materials since the time of hominid occupation. Scavenging animals may have carried away or further modified bones at these sites, and trampling and bioturbation by roots or burrowing animals may have affected the vertical and horizontal distribution of the prehistoric materials. How did the concentrations of stone artifacts and sometimes animal bones form in this early archaeological period? Theories to explain how these archaeological sites formed have included a range of interpretations: hydrological jumbles of stone tools and animal bones swept downstream and reconcentrated, with little behavioral
integrity; palimpsests of hominid and nonhominid activities at focal points on the landscape over relatively long periods of time; central foraging places (home bases/camps) where early hominids carried out many subsistence and social activities (in some models, food sharing is an important part of this adaptive strategy); secondary stone caches collected on the landscape as an energy-saving strategy during daily foraging; scavenging stations where early hominids brought animal carcasses or parts of carcasses in order to process them safely with stone tools; and favored places where hominid individuals or groups repeatedly visited and carried out tool-making and tool-using activities. It is likely that a number of these scenarios were involved in the formation of early archaeological sites. One of the principal tasks for researchers interested in the archaeology of human origins is to build testable models that examine the hominid and nonhominid agencies that formed these sites.

Diet and Subsistence


The reconstruction of patterns of early hominid diet and subsistence is paramount to our understanding of the adaptation and evolution of these creatures. Based upon what is known among modern nonhuman primates as well as modern hunter-gatherers in tropical Africa, it is likely that early hominids in Africa had a diet that was predominantly plant foods, supplemented by animal food resources. Unfortunately, plant matter rarely survives in the early archaeological record; therefore, this important aspect of early hominid diet is largely conjectural. It is likely that a range of berries, nuts, seeds, underground plant foods (roots, tubers, corms), etc., were exploited, perhaps with the assistance of technological aids such as stone hammers and anvils to crack open nuts and hard-shelled fruits, digging sticks to uncover underground foodstuffs, and simple containers (bark tray, hide, tortoise or ostrich eggshell) to carry or store such foods. When fossil animal bones are found at early archaeological sites, they may bear patterns of modification that suggest hominids were feeding on these animals. An excellent example is the FLK Zinjanthropus site in Bed I of Olduvai Gorge (ca. 1.8 million years ago). Cut marks on bone surfaces can usually be distinguished from marks from nonhominid agencies (carnivore and rodent gnawing, root etching, etc.) and can show where hominids used stone knives for skinning, dismembering, and defleshing. Fracture patterns of long bones showing percussion flakes and scars, as well as abrasion to bone surfaces from hammerstones, are typical of hominid marrow processing. The interpretations of these patterns of animal-bone modification have varied widely, however. At one extreme, it has been argued that early hominids appear to have been efficient predators (or very efficient competitive scavengers), able to acquire the meaty remains of large mammals before carnivores had modified the bones to any significant degree; at the other extreme, others have argued that these modified animal bones represent marginal scavenging of carnivore leftovers in which only dried, relict meat and marrow were obtained. It is likely that a combination of strategies was employed by opportunistic early hominids, including small-scale predation and scavenging of larger mammalian taxa. Other clues that are useful in attempting to infer dietary patterns include wear studies
on fossil hominid teeth, which can indicate how hard or gritty the foods eaten were, and chemical analysis of fossil hominid bones (such as strontium/calcium ratios and carbon isotope ratios), which can indicate the relative abundance of meat in the diet and the herb and tree/grass ratios in plant foods. Paleopathologies on bones and teeth (e.g., hypoplasia, hypervitaminosis) may also suggest nutritional stresses or an overabundance of certain types of harmful dietary items.
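As background to the carbon isotope method mentioned above (the notation is a standard geochemical convention, not something particular to these studies), isotope ratios are reported as delta values, that is, as deviations from a reference standard expressed in parts per thousand (per mil):

```latex
\[
\delta^{13}\mathrm{C} \;=\;
\left(
  \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}
       {(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}}
  - 1
\right) \times 1000
\]
```

Plants using the C3 photosynthetic pathway (most trees, shrubs, and herbs) typically show δ13C values around −27 per mil, whereas C4 plants (chiefly tropical grasses) cluster around −13 per mil; because these signatures pass into consumer tissues with roughly constant offsets, values measured on fossil bone and tooth enamel can indicate the grass-derived component of early hominid diets.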

Conclusion
Future research into the archaeology of human origins will almost certainly focus upon behavioral and ecological issues such as cognitive capabilities, social organization, land-use patterns, biogeographical spread, behavioral site-formation processes, tool function, diet, and competition with other animal taxa. Explanatory models for the emergence of tool-using hominids, as well as technological, behavioral, and evolutionary changes in time and space, will be examined in the context of regional and global environmental changes. Refined or new dating techniques should also bring a higher resolution to the chronological placement of hominid fossils, archaeological occurrences, and other evolutionary events. [See also Australopithecus and Homo Habilis.]

Bibliography
Mary Leakey, Olduvai Gorge, Volume 3: Excavations in Beds I and II, 1960–1963 (1971). John W. K. Harris, "Cultural Beginnings: Plio-Pleistocene Archaeological Occurrences from the Afar, Ethiopia," The African Archaeological Review 1 (1983): pp. 3–31. Barbara Isaac, ed., The Archaeology of Human Origins: Papers by Glynn Isaac (1989). J. D. Clark, ed., Cultural Beginnings: Approaches to Understanding Early Hominid Life-ways in the African Savanna (1991). Kathy Schick and Nicholas Toth, Making Silent Stones Speak: Human Evolution and the Dawn of Technology (1993).
Nicholas Toth and Kathy Schick

Pleistocene The Pleistocene epoch spans approximately the last 2.4 million years of
geological time. It represents a time interval of great scientific interest owing to the numerous fluctuations in climate that took place and because it represents an important time in hominid evolution. The Pleistocene and Holocene epochs together comprise the Quaternary Period. The Pleistocene is formally regarded as having ended at 10,000 B.P. at the onset of the present Holocene interglacial. The Pleistocene is often subdivided into three sections: Early, Middle, and Late. The boundary between the Early and Middle Pleistocene is usually defined by the prominent Matuyama-Brunhes geomagnetic polarity reversal, considered to have occurred near 790,000 B.P. The boundary between the Middle and Late Pleistocene is generally regarded as equivalent to the beginning of oxygen isotope substage 5e, which represents the warmest phase of the last interglacial. The age of this boundary, on the basis of marine oxygen isotope stratigraphy, is considered to be approximately 130,000 B.P.
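Purely as an illustration of this scheme, the boundary ages quoted above can be expressed as a simple lookup. The following sketch is hypothetical code (the function name and the handling of the boundaries are illustrative choices, and other workers place the boundaries slightly differently):

```python
# Illustrative sketch only: boundary ages are those quoted in this entry;
# the function name and boundary handling are hypothetical conveniences.
def quaternary_subdivision(years_bp: int) -> str:
    """Return the subdivision for an age given in years before present."""
    if years_bp < 10_000:
        return "Holocene"            # the present interglacial
    if years_bp < 130_000:
        return "Late Pleistocene"    # from oxygen isotope substage 5e
    if years_bp < 790_000:
        return "Middle Pleistocene"  # from the Matuyama-Brunhes reversal
    if years_bp <= 2_400_000:
        return "Early Pleistocene"   # start of the Pleistocene as used here
    return "pre-Quaternary"

# Example: the last glacial maximum (ca. 18,000 B.P.) falls in the Late Pleistocene.
print(quaternary_subdivision(18_000))
```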

The beginning of the Pleistocene was marked by the growth of ice sheets in the Northern Hemisphere. The start of glaciation in the Southern Hemisphere took place much earlier, however, perhaps as early as twenty million years ago in the Antarctic. This early phase of cooling was followed by several important geological changes, including, for example, the uplift of the Tibetan plateau and the closure of the Isthmus of Panama, both of which exerted a strong influence on climate and ocean circulation. The progressive cooling of global climate finally gave way at the start of the Pleistocene to remarkable climatic instability. During the ensuing time period, ice sheets waxed and waned, triggering complex changes in sea level. Elsewhere, grassland replaced forest, tree lines were lowered, and arid conditions were widespread. These changes took place at the same time as the development of human culture that began with the use of primitive tools and fire and culminated in the sophisticated human achievements that have taken place during the present interglacial. If the development of human culture is stimulated by the existence of a stable climate, the appearance of people in the Pleistocene landscape could not have taken place at a more inopportune time.

Evidence for Pleistocene Climate Change from Ocean Sediments


The longest and most complete records of Pleistocene climate change are derived from studies of sediments deposited on the floors of the world's oceans. These sediments consist mostly of the skeletal remains of calcareous and siliceous microorganisms that have settled out of the water column. Evidence of the former conditions under which the calcareous microorganisms lived can be determined by the analysis of the stable isotope ratios of the oxygen in the carbonate skeletal remains. The derived oxygen isotope chronology is considered to indicate past fluctuations in global ice volume. Although oxygen isotope studies provide valuable information on the timing of past continental ice sheet growth and decay, they cannot provide any information on where the growth and decay of individual ice sheets took place. The most significant limitation of oxygen isotope analysis is caused by the activity of burrowing organisms (bioturbation) on the ocean floor that disturb surface sediments. These limit the accuracy with which sediments in individual cores can be dated to about ±500 years. Despite this constraint, Pleistocene oxygen isotope stratigraphy shows that during this period numerous high-magnitude fluctuations in climate took place and were associated with the growth and decay of major ice sheets on at least twenty occasions. Furthermore, many of the periods when global climate switched from glacial to interglacial appear to have been exceptionally rapid, although the rates at which these changes took place cannot be determined owing to the effects of sediment bioturbation. The oxygen isotope curves are also significant in that they demonstrate that most of the Pleistocene has been characterized by glacial age conditions. Only very rarely, for around 5 percent to 10 percent of Pleistocene time, have warm interglacial conditions prevailed. Furthermore, Pleistocene interglacials have rarely been associated with air temperatures significantly higher than present. The most important exception to this pattern was the last interglacial, which culminated approximately 130,000 to 115,000 years ago (oxygen isotope substage 5e). A popular view is that the Pleistocene glacial and interglacial sequence observed in the oxygen isotope record was caused principally by changes in the nature of the
Earth's orbit around the sun. The Milankovitch theory of climate changes is based on the assumption that there has been no absolute annual change in the amount of incoming solar radiation and that Pleistocene climate changes were the result of long-term cyclical changes in the distribution of insolation across both hemispheres. Indeed, many scientists believe that a well-defined 100,000-year glacial/interglacial cycle observed in the oxygen isotope record may be explained by cyclical changes in the eccentricity of the Earth's orbit. The link between oxygen isotope stratigraphy and ice volume has also permitted estimates to be made of past changes in global sea level, based on the inferred volumes of water stored in the world's oceans. For the Early and Middle Pleistocene, it is not possible to convert ocean water volumes to equivalent sea levels, since plate tectonic processes have caused long-term changes in the shape of ocean basins. Such processes are considered to have been negligible during the Late Pleistocene, however, and attempts have accordingly been made to use oxygen isotope analysis to produce sea-level curves for this time period. These investigations show that during the culmination of the last interglacial, sea level may have been several meters higher than present, and that the end of the interglacial was followed by a number of major climatic oscillations associated with the growth and melting of ice sheets and several major fluctuations in sea level. Sea level fell to its lowest position, about 394 feet (120 m) below present, during the culmination of the last glacial maximum ca. 18,000 B.P.
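For reference, the oxygen isotope ratios discussed above are conventionally reported in delta notation (a standard convention rather than anything specific to the studies cited here), as a deviation from a reference standard expressed in parts per thousand (per mil):

```latex
\[
\delta^{18}\mathrm{O} \;=\;
\left(
  \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
       {(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}}
  - 1
\right) \times 1000
\]
```

Because evaporation preferentially removes the lighter 16O, the water locked into growing ice sheets is isotopically light, leaving the oceans, and hence foraminiferal carbonate, enriched in 18O; high δ18O values in ocean cores therefore mark glacial intervals of large global ice volume.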

Evidence for Pleistocene Climate Change from Ice Cores


Detailed information on the nature of Late Pleistocene climate change has also been obtained through the study of oxygen isotope ratios in ice cores. The most significant ice cores are those that have been sampled from the Antarctic and Greenland ice sheets. In general, it is possible to calculate the age of ice by counting annual layers of ice; annual layers have now been counted as far back as ca. 14,000 years ago. In old ice, where annual layers are indistinct, the age of ice at any given depth is calculated through the use of mathematical models of former ice flow. Until recently, the longest ice core record of climate change was that sampled from Vostok, Antarctica, where a 150,000-year record had been obtained. This record provided, for the first time, a continuous record of past changes in Southern Hemisphere air temperature, together with records of past fluctuations in atmospheric CO2 concentrations and rates of snow and dust deposition. More recently, a 9,843-foot (3,000-m) ice core sequence has been drilled and sampled from the central Greenland ice sheet that appears to provide a continuous record of past climate change for the last 250,000 years. In polar glacier ice, the measured oxygen isotope ratios enable detailed estimates to be made of former air temperatures. Remarkably, the results of the Greenland Ice-Core Project (GRIP) show that the last 10,000 years of the Holocene interglacial have been characterized by sustained climate stability. By contrast, most of Late Pleistocene time as measured in the Greenland ice cores appears to have been characterized by high-magnitude climate oscillations that bear a striking similarity to those evident in the Vostok Antarctic record. The causes of the extreme climate changes that were a feature of the Late Pleistocene are not known. One of the most extreme and rapid climatic reversals took place during the Younger Dryas, between ca. 11,000 and 10,000 radiocarbon years B.P. The climatic
deterioration that accompanied the beginning of this period was associated with large-scale reorganization of Northern Hemisphere atmospheric circulation, dramatic changes in North Atlantic ocean circulation linked to the widespread development of sea ice, extensive ice accumulation in the Northern Hemisphere, and a marked decrease in air temperatures. The warming at the end of this period, at the Pleistocene–Holocene transition, was equally dramatic, with ice core studies indicating that in Greenland there may have been a seven-degree (Celsius) warming within about fifty years. The example of the Younger Dryas demonstrates clearly that during numerous critical periods of Pleistocene time, people may have had to adapt to extreme changes in climate within decades, rather than within millennia as has conventionally been believed. A very important discovery arising from the Greenland ice core research is that very rapid shifts in temperature also occurred during the last interglacial, between ca. 135,000 and 115,000 B.P., most probably reflecting large-scale atmospheric changes over the North Atlantic. The research demonstrates that temperatures may have fluctuated on several occasions from a warm interglacial state about two degrees (Celsius) higher than present to severe cold about ten degrees (Celsius) lower than present within several decades, perhaps even within a single decade, and all of this within a single interglacial! In other words, it is a mistake to use the present interglacial as a climatic analogue for previous Pleistocene interglacials, and it is also fallacious to consider interglacial periods as times of relatively uniform warmth and stable climate. It is a matter of conjecture how Homo habilis and Homo erectus may have responded to such rapid and high-magnitude climatic fluctuations. The Greenland and Antarctic ice cores also reveal the presence of volcanic ash at several discrete levels. The influence of large volcanic eruptions on Pleistocene climate has not been studied in any detail, although there is evidence that some of the largest eruptions may have led to global cooling. The most explosive of all Pleistocene eruptions took place at Toba, Sumatra, ca. 75,000 B.P., and it has been suggested that this eruption may have contributed significantly to the worldwide onset of a major period of Late Pleistocene glaciation.
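A rough indication of how the ice-flow dating mentioned above works is given by the simplest such model (often associated with Nye), which assumes a constant accumulation rate a and uniform vertical thinning of annual layers through an ice sheet of constant thickness H; the models actually used for the GRIP and Vostok chronologies are considerably more elaborate, but rest on the same principle. The annual layer thickness at height h above the bed is then \lambda(h) = a h / H, and integrating layer by layer gives the age of the ice at depth d below the surface:

t(d) = \frac{H}{a} \, \ln\!\left( \frac{H}{H - d} \right)

With illustrative values of H = 3,000 m and a = 0.25 m of ice per year, broadly plausible for central Greenland, ice from 2,000 m depth would be on the order of (3,000 / 0.25) × ln 3, roughly 13,000 years old.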

The Last Glacial Maximum: An Analogue for Pleistocene Ice Age Conditions
Numerous reconstructions have been made of the climatic conditions that prevailed during the last glaciation. The most unified attempt was that of the CLIMAP Project, in which a detailed reconstruction was made of the Earth's climate around 18,000 B.P. More recently, several numerical models have been developed to simulate the patterns of global atmospheric and oceanic circulation that existed during this period. These general circulation models (GCMs) represent simulations, based on geological evidence, of the response of the atmosphere to inferred distributions of sea surface temperature, the dimensions of former ice sheets, and the former distribution of lakes, sea ice cover, and the like. The models are then tested against other climatic parameters not used in the model (e.g., estimates of former land temperatures). The various models, when considered together with empirical evidence of past changes in climate, demonstrate complex regional responses to the onset of a major ice age. For example, it is now well known that most areas of tropical rain forest disappeared during the last glaciation. In Africa, a significantly weaker monsoonal circulation led to decreased rainfall in many regions, although in certain areas, decreased evaporation
rates led to the development of large lakes. A similar situation prevailed in South America, where a northward displacement of the Intertropical Convergence Zone led to the development of a semiarid environment throughout much of Amazonia. In Asia and the Indian subcontinent, glacial age conditions were principally influenced by atmospheric changes resulting from the development of the Eurasian ice sheet. Thus, Ice Age conditions were always associated with permanent high pressure over the continental interior and a weakening of the Indian monsoon. As a result, arid conditions prevailed throughout much of Southeast Asia. Farther east, in China, many areas were affected by increased aridity and by the deposition of large thicknesses of wind-blown loess as a result of the anchoring of a jet stream between the ice sheet to the north and the Tibetan plateau to the south. In the unglaciated areas south of the large ice sheets in North America, Europe, and Russia, the development of permafrost was very widespread. In the southwestern United States, lower temperatures, decreased evaporation, and the southward displacement of mid-latitude cyclones led to the development of many large lakes. Similar processes affected the Mediterranean region. Here, lowering of sea level led to the virtual separation of the eastern and western Mediterranean basins, while in the eastern Mediterranean, the Nile delta virtually disappeared as a result of diminished rainfall over eastern Africa. Remarkably, the Aegean Sea may have received much of the meltwater from the southern margin of the Eurasian ice sheet owing to the drainage of waters southward through the Caspian Sea, Black Sea, and Sea of Marmara.
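The physical reasoning behind such models can be illustrated, in grossly simplified form, by a zero-dimensional energy-balance calculation; the figures below are illustrative assumptions, not CLIMAP or GCM results. If the Earth absorbs solar radiation S(1 - \alpha)/4 per unit area (S the solar constant, about 1,361 W/m²; \alpha the planetary albedo) and radiates as a grey body \epsilon \sigma T^4 (\sigma being the Stefan–Boltzmann constant, 5.67 × 10⁻⁸ W m⁻² K⁻⁴), the equilibrium mean temperature is

T = \left( \frac{S (1 - \alpha)}{4 \epsilon \sigma} \right)^{1/4}

With \alpha = 0.30 and an effective emissivity \epsilon = 0.62, this gives T of about 287 K, close to the modern global mean; raising the albedo to 0.40, crudely mimicking expanded ice sheets and sea ice, lowers T to about 276 K. The roughly eleven-degree difference exaggerates the real glacial-interglacial contrast, but it shows why the ice-albedo feedback figures so prominently in simulations of glacial climates.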

Human Evolution and Pleistocene Climate Change


The study of Pleistocene hominid evolution has long focused on whether evolutionary change took place slowly or in a stepwise manner, with long periods of little change punctuated by short periods of very fast evolutionary development. The recently published ice core record for the last 250,000 years demonstrates very clearly that for this time interval, and probably for the whole of the Pleistocene, climate change was never slow and cyclical but instead was extremely fast and, in many cases, catastrophic. It has long been considered that the development of bipedalism in hominids was related to increased competition between species caused by the replacement during the Miocene and Pliocene of vast forested areas by savanna grasslands. We know also that in low-latitude areas during the last glacial maximum, most tropical rain forest disappeared, leaving only isolated refugia. If it is true that most of the Pleistocene was characterized by cold and cool climates, it may be reasonable to argue that during the Pleistocene most low-latitude environments were only rarely characterized by the expansion of rain forest and for the most part were typified instead by semiarid conditions. The recurrence of such climatic regimes, together with rapid sawtooth fluctuations in climate, supports the view that such time intervals may have been associated with considerable stresses and rapid adaptation of species. Thus it is probably not realistic to envisage hominid evolutionary stress and adaptation as having taken place against a background of several slow, cold/warm, interglacial/glacial cycles of climate change, but rather against a much harsher climatic background, the hallmarks of which were sudden and high-magnitude oscillations. A recurring feature that also emerges in any consideration of Pleistocene hominid evolution is that artifact assemblages often exhibit negligible change over long
periods of time during which major climatic fluctuations have taken place. The hand axes of Acheulean assemblages, for example, appear to have changed little during the last million years. Thus, there is great significance in the observation that whereas at one time the skeletal remains of fossil hominids and their artifacts were used to date Quaternary deposits and events, the reverse is now true. The study of Paleolithic archaeology has benefited greatly from recent improvements in dating techniques, and the new ice core data now provide a perspective from which to understand the great Late Pleistocene migrations of Homo sapiens sapiens and Homo sapiens neanderthalensis. It is now known, for example, that during the Late Pleistocene in North America and Russia, there may have been at least three major and well-dated periods of ice sheet glaciation separated by lengthy nonglacial intervals. These climatic events must have influenced profoundly the movement of people from eastern Asia into the Americas, although it is not clear what effect such climatic changes may have had on the great megafaunal extinctions that took place at the close of the Pleistocene. It has been observed that whereas in North America the extinctions coincide with the arrival of human groups, there is no such relationship for Australia, where human colonization at ca. 40,000 B.P. preceded the main extinctions between 26,000 and 15,000 B.P. It may be argued, nonetheless, that even if climatic and natural habitat changes were the predominant factors in the megafaunal extinctions at the end of the last glaciation, hominids may have helped deliver the final coup de grâce to particular species that had already been subject to environmental stresses. It should not be forgotten that our understanding of Pleistocene archaeology and climate change is ultimately dependent on the application of accurate dating techniques. At present, the accuracy of radiocarbon dating, commonly used in archaeology, is being called into question. Whereas the sidereal and radiocarbon time scales broadly correspond for the majority of the Holocene, the same is not true for the Late Pleistocene. As a result, we should not depend too much upon radiocarbon dating to provide all of the archaeological answers that we require.[See also Australopithecus and Homo Habilis; Cro-Magnons; Europe, the First Colonization Of; Holocene, articles on Holocene Environments In Europe, Holocene Environments In Africa, Holocene Environments In the Americas; Homo Erectus; Homo Sapiens, Archaic; Human Evolution: Introduction; Humans, Modern; Neanderthals; Paleolithic.]

Bibliography
CLIMAP Project Members, "The Surface of Ice-Age Earth," Science 191 (1976): pp. 1131–1137. David Q. Bowen, Quaternary Geology (1978). Nicholas J. Shackleton et al., "Oxygen Isotope Calibration of the Onset of Ice-Rafting and History of Glaciation in the North Atlantic Region," Nature 307 (1984): pp. 620–623. John E. Kutzbach and H. E. Wright, "Simulation of the Climate of 18,000 Years B.P.: Results for the North American/North Atlantic/European Sector and Comparison with the Geological Record of North America," Quaternary Science Reviews 4 (1985): pp. 147–187. Nicholas J. Shackleton, "Oxygen Isotopes, Ice Volume and Sea Level," Quaternary Science Reviews 6 (1987): pp. 183–190. Wallace S. Broecker and George H. Denton, "What Drives Glacial Cycles?" Scientific American (January 1990): pp. 43–50. Martin Bell and Michael J. C. Walker, Late Quaternary Environmental Change: Physical and Human Perspectives (1992). Alastair G. Dawson, Ice Age Earth: Late Quaternary Geology and Climate (1992). Willi Dansgaard et al., "Evidence for General Instability of Past Climate from a 250-kyr Ice-Core Record," Nature 364 (1993): pp. 218–220. Greenland Ice-Core Project (GRIP) Members, "Climate Instability During the Last Interglacial Period Recorded in the GRIP Ice Core," Nature 364 (1993): pp. 203–207.

Alastair G. Dawson

Australopithecus and Homo Habilis The African genus Australopithecus
includes several species of early human ancestors and collateral relatives. Sometime before two million years ago one of these species gave rise to humans via the earliest, most primitive species of our genus, Homo habilis.

History of Discovery and Interpretation


In 1925 Raymond Dart named a new genus and species, Australopithecus africanus. Dart proposed that the fossilized child's skull from Taung, South Africa, represented a bipedal species ancestral to later humans. Other authorities challenged this interpretation. More complete cranial and postcranial remains of Australopithecus africanus were found by paleontologist Robert Broom at Sterkfontein in the 1930s and 1940s. In 1938, Broom added another species to the genus, recovering a partial skull with a larger face and jaw from Kromdraai, South Africa. Broom named it Paranthropus robustus (most authorities now include this species in Australopithecus and have dropped the genus name Paranthropus). In 1948 Broom and John Robinson began to recover additional examples of Australopithecus robustus at nearby Swartkrans. For the first time, they demonstrated that two contemporary hominid species existed in Plio-Pleistocene times: early Homo and robust Australopithecus. Robinson's work on fossils of Australopithecus africanus and Australopithecus robustus led him to formulate a dietary hypothesis, whereby the former species had humanlike proportions of front and back teeth, indicating an omnivorous diet, while the robust species had huge teeth and jaws indicative of a vegetarian diet. In 1959, after an intermittent search of over twenty-five years, Mary Leakey, wife of Louis Leakey, found a hominid cranium at Olduvai Gorge in eastern Africa. Unlike the difficult-to-interpret, poorly dated Australopithecus-bearing cave infillings of South Africa, Olduvai's strata were arranged in an orderly fashion, with interbedded volcanic strata amenable to radiometric dating. Bed I, low in the gorge, yielded the massive, robust hominid cranium that Leakey named Zinjanthropus boisei. Most workers immediately recognized this specimen as a northern cousin of Australopithecus robustus, but Louis Leakey insisted that he had found a direct human ancestor dating to 1.8 million years ago. Further excavations in Bed I revealed the contemporary fragmentary remains of a juvenile hominid whose larger braincase and smaller teeth made it a better candidate for human ancestry. Leakey joined Phillip Tobias and John Napier in naming a new, initially disputed species, Homo habilis, in 1964, relegating Australopithecus boisei to a collateral position.

Beginning with Olduvai, much new work took place in eastern Africa's rift. During the late 1960s and early 1970s Clark Howell, Yves Coppens, and colleagues recovered hominid fossils spanning the period from 1 to 3.4 million years ago from the Omo Valley of southern Ethiopia. Richard Leakey's work at Koobi Fora in Kenya established the validity of Homo habilis as a taxon distinct from Australopithecus africanus. The work of Mary Leakey's team at Laetoli in Tanzania, and the efforts of Maurice Taieb, Don Johanson, and their colleagues at Hadar in Ethiopia's Afar Triangle, led to the discovery of even more ancient fossil hominids. These remains were attributed by Johanson, Tim White, and Coppens to a primitive species of Australopithecus, A. afarensis, in 1978. In the 1980s the validity of Australopithecus aethiopicus was confirmed at West Turkana in Kenya. Associated cranial and postcranial remains of Homo habilis were found at Olduvai. Excavations at Sterkfontein yielded a large sample of hominids and fauna. In the early 1990s continuing work by White and his colleague Desmond Clark in the Middle Awash resulted in the recovery of fossil hominids that predate 4 million years ago. These fossils, the earliest known hominid ancestors, were placed by T. White, Gen Suwa, and Berhane Asfaw into a new species of Australopithecus, Australopithecus ramidus, in 1994, and into a new genus, Ardipithecus, in 1995. The earliest species of Australopithecus is A. anamensis, named in 1995 by Meave Leakey and Alan Walker.

Australopithecus Today
Australopithecus appears in the record at about 4 million years ago, but slightly older jaw and limb fragments may also belong to the genus. The youngest Australopithecus specimens are from deposits a little more than one million years old and are contemporary with Homo erectus. Comparisons between the DNA of modern humans and the living great apes of Africa (the chimpanzees and the gorilla) have shown that these creatures, the pongids, are our closest living relatives. Australopithecus was neither an ape nor a human. All Australopithecus species had skeletons consistent with upright, striding bipedalism, and this unique hominid mode of locomotion is indicated in many parts of the skeleton. The fossilized Laetoli footprints attributed to the genus are consistent with this interpretation of the skeletal anatomy. The genus is therefore included in our own zoological family, the Hominidae. Virtually all bones of the body are known for A. afarensis and A. africanus, but skeletal parts for A. robustus and A. boisei are more poorly known, and A. aethiopicus is unknown below the cranium. There is much body size variation in all known species of Australopithecus, much of it probably attributable to sexual dimorphism. Some aspects of early Australopithecus postcranial skeletal anatomy, such as long, curved finger and toe phalanges, may be holdovers of primitive traits from an ape ancestor. Alternatively, some workers consider such traits to indicate a semiarboreal existence. The fundamental musculoskeletal differences between Australopithecus and pongids in the foot, the knee, and the pelvis, however, indicate abandonment of the arboreal substrate and commitment to terrestrial bipedalism. All Australopithecus species lacked the strongly projecting, pointed canines seen in great apes. Another generic trait is the large size of the teeth relative to body size, a phenomenon known as megadontia. This suggests that Australopithecus consumed low-quality foods requiring heavy chewing. This is particularly true of the extremely
megadont, specialized robust Australopithecus species. None of the Australopithecus species had substantially enlarged braincases, and most known cranial capacities lie between 25 and 37 cubic inches (400 and 600 cu cm), about a third as large as the braincases of modern people. Australopithecus is therefore a creature whose body had evolved toward the human condition considerably sooner than its brain did, a good example of mosaic evolution. The genus Australopithecus was exclusively African, and intermittent claims for its presence in China and Java have usually been based on fragmentary remains belonging to early Homo. Australopithecus fossils have been found only in eastern and southern Africa, but this is unlikely to be an accurate characterization of the distribution of the genus. Other parts of Africa have not been so well explored or do not have depositional environments conducive to the preservation of ancient skeletal remains. The earliest Australopithecus populations were ecologically widespread, from the dry, upland wooded savanna at Laetoli to the more bushy, highland lakeside environment at Hadar. Given these wide ecological tolerances, it is likely that even the earliest Australopithecus populations were very widespread in Africa. Possibly as a response to his critics' dismissal of Australopithecus africanus as a hominid ancestor, Raymond Dart spent much of his career investigating the bones found intermingled with hominid fossils in South African cave breccias. Concentrating on the Makapansgat site, Dart interpreted antelope bone disproportions and fragmentation as evidence for hominid modification. He imagined a pre-stone-tool culture for Australopithecus africanus in which bone, teeth, and horn were used for implements. Dart called this the osteodontokeratic culture, and he described Australopithecus africanus as an omnivorous hunting species. These ideas have been tested by actualistic research on modern human and nonhuman carnivore bone accumulations, and cast into doubt by C. K. Brain and others. Despite concentrated searches in appropriate contexts, no recognizable stone or bone implements, nor any other evidence of materially based cultural activity such as cut marks on bones, have yet been found with the earliest species of Australopithecus. Modern chimpanzees, however, make and use tools of perishable materials, and this suggests a sort of minimal baseline against which early Australopithecus cultural behaviors might be judged. Evidence for material culture in later Australopithecus is even more clouded by the presence of at least one other contemporary hominid lineage that evolves into humans. The earliest stone tools in the fossil record are Oldowan assemblages from about 2.6 million years ago. This is a period from which there is evidence of a robust Australopithecus as well as at least one other lineage leading to Homo habilis in eastern Africa. It is widely assumed that members of the latter lineage were the authors of the stone tools, but there is anatomically nothing that would have prevented members of both lineages from making and using stone tools. The earliest and most apelike of six widely recognized Australopithecus species is A. anamensis from Kenya. It was the ancestor of Australopithecus afarensis, which is, in turn, widely considered to be the ancestor of all later hominids. A. anamensis descended from Ardipithecus ramidus sometime after 4.4 million years ago. One evolving lineage in eastern Africa links A. afarensis with the descendant species A. aethiopicus (ca. 2.3–2.6 million years ago [Myr]) and A. boisei (ca. 1–2 Myr). The latter went extinct. Australopithecus africanus (ca. 2.5–2.8 Myr) may have been the
exclusive ancestor to either Homo habilis (ca. 1.7–2.3 Myr) or to A. robustus (ca. 1.8 Myr). Alternatively, it might have been a common ancestor to both, or an evolutionary dead end. Species distinctions and phylogenetic reconstructions within Australopithecus are based on comparisons of cranial and dental anatomy. Many controversial issues persist in the study of Australopithecus and Homo habilis. There is debate over whether the three species A. afarensis, A. africanus, and Homo habilis should each be broken into smaller species because of the large variation in size and morphology seen in each. There is debate over the mode and tempo of evolution in the various species, and there is heated controversy over the evolutionary relationships among them. Also unresolved is the question of which or how many of these taxa were responsible for manufacturing stone tools, and how the various species subsisted and moved about. Some of these problems, like species recognition, are intrinsic to the fossil record. Most of the problems and ongoing debates result from an inadequate fossil record, but accelerated fossil recovery has established the presence of Australopithecus in human ancestry and revealed a more complex and interesting picture of our origins and evolution than was once thought possible.[See also Africa: Prehistory of Africa; Genetics In Archaeology; Human Evolution.]

Bibliography
Lewis R. Binford, Bones: Ancient Men and Modern Myths (1981). Charles K. Brain, The Hunters or the Hunted? (1981). John Reader, Missing Links (1981). Eric Delson, ed., Ancestors: The Hard Evidence (1985). Roger Lewin, Bones of Contention (1987). Frederick E. Grine, Evolutionary History of the Robust Australopithecines (1988). Richard G. Klein, The Human Career (1989).

Cro-Magnons are, in informal usage, a group among the late Ice Age peoples of
Europe. The Cro-Magnons are identified with Homo sapiens sapiens of modern form, in the time range ca. 35,000–10,000 B.P., roughly corresponding with the period of the Upper Paleolithic in archaeology. The term Cro-Magnon has no formal taxonomic status, since it refers neither to a species or subspecies nor to an archaeological phase or culture. The name is not commonly encountered in modern professional literature in English, since authors prefer to talk more generally of anatomically modern humans. They thus avoid a certain ambiguity in the label Cro-Magnon, which is sometimes used to refer to all early moderns in Europe (as opposed to the preceding Neanderthals), and sometimes to refer to a specific human group that can be distinguished from other Upper Paleolithic humans in the region. Nevertheless, the term Cro-Magnon is still very commonly used in popular texts, because it makes an obvious distinction with the Neanderthals, and also refers directly to people, rather than to the complicated succession of archaeological phases that make up the Upper Paleolithic. This evident practical value has prevented archaeologists and human paleontologists, especially in continental Europe, from dispensing entirely with the idea of Cro-Magnons. The Cro-Magnons take their name from a rock shelter in the Vézère Valley in the
Dordogne, within the famous village of Les Eyzies de Tayac. When the railway was being constructed in 1868, parts of five skeletons were found sealed in Pleistocene deposits, along with hearths and Aurignacian artifacts. Subsequently similar finds were made at sites such as Combe Capelle and Laugerie-Basse in the Dordogne, and Mentone and Grimaldi in Italy. Other specimens found earlier, such as Paviland in Britain and Engis in Belgium, could be set in the same group, and it became plain that their physical makeup contrasted sharply with that of Neanderthals discovered in other sites. Sufficient data to build up this classic picture accumulated over a period, but it was brought into sharp focus following the find of a classic Neanderthal at La Chapelle in 1908. The early interpretations owe much to the French scholars Marcellin Boule and Henri Vallois. Later research has extended the geographical distribution of similar humans and has provided an absolute dating scale for them; however, it has also raised many questions about the origins of the Cro-Magnons and their status as a coherent group.

Physical Characteristics and Adaptation


Cro-Magnons were closely similar to modern humans, but more robust in some features, especially of the cranium. They meet criteria listed by Michael Day and Chris Stringer for modern humans, such as a short, high cranium and a discontinuous supra-orbital torus (brow ridge). Many individuals were well above present-day average in stature, often reaching around 75 inches (190 cm). Their limbs were long, especially in the forearms and lower legs, body proportions suggesting to some anthropologists that their origins lie in warm climes rather than Ice Age Europe. Significant variability had already been recognized by Boule, who attributed Negroid characters to some specimens from Grimaldi (placing them in a separate race). A recent study has found that earlier specimens, such as those from Cro-Magnon and Mladeč in the Czech Republic, fall outside the modern human range, whereas specimens later than 26,000 B.P. generally fall within it. Emanuel Vlček regards the Mladeč I finds as Cro-Magnons, but sees features related to the Neanderthals in the later Mladeč II specimens and ascribes later specimens from Dolní Věstonice and Předmostí to a robust Brno Group. Such findings suggest that the original remains from Cro-Magnon are too distinctive to serve as a template for identifying a single race all over Europe. If any overall trend can be picked out, it is toward greater gracility as time progressed.

Chronology
Given the rarity of human remains, it is easier to date the onset of the Upper Paleolithic than the first appearance of people resembling the Cro-Magnons, which is not necessarily the same event. Nevertheless, dates around 40,000 B.P. seem highly likely. It is certain that populations of Homo sapiens sapiens became established throughout Europe in far less than 10,000 years. Since the 1950s the chronology of these Late Pleistocene human populations has been derived principally from radiocarbon dating. A late Neanderthal found at St. Césaire in western France with a Châtelperronian (initial Upper Paleolithic) industry is dated to ca. 36,000 B.P. by thermoluminescence (TL), but the Upper Paleolithic Aurignacian appears earlier in northern Spain, at ca. 42,000–39,000 B.P., as shown by radiocarbon and uranium series dating. It is widely assumed that the Aurignacian is associated with modern (i.e., Cro-Magnon-like) populations, and that the Châtelperronian, though associated with Neanderthals, may have been triggered by the cultural effects of modern human
presence elsewhere in the region (a so-called bow-wave phenomenon). Thereafter the Cro-Magnons were continuously represented in Europe for 20,000 years or more. It might be convenient to end the Cro-Magnons at the glacial maximum of 18,000 B.P., but in France their characteristics persist in Magdalenian populations through the later part of the glaciation, until about 12,000–10,000 B.P. At this stage human populations began to become more gracile.
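In outline, the radiocarbon dates quoted here rest on simple exponential decay. With a 14C half-life T_{1/2} of about 5,730 years (conventional dates are in fact computed with the older Libby value of 5,568 years), a sample retaining a fraction N/N_0 of its original 14C has an age of

t = \frac{T_{1/2}}{\ln 2} \, \ln\!\left( \frac{N_0}{N} \right)

so that, for example, a bone retaining 25 percent of its original 14C is two half-lives, roughly 11,460 radiocarbon years, old. Because atmospheric 14C production has varied through time, radiocarbon years diverge from calendar years in the Late Pleistocene, which is why cross-checks against thermoluminescence and uranium series dates, as above, matter.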

Geographical Distribution
Human remains are extremely scarce in relation to the number of archaeological sites. The earliest Upper Paleolithic in France is almost devoid of skeletal remains; finds such as Cro-Magnon, Abri Pataud, and Combe Capelle are probably several thousand years later. These are a minimal sampling of a distribution that archaeological traces strongly suggest was much wider. Thus there are no early remains of Cro-Magnons from Spain, Greece, or Turkey, but populations were probably present. To the north, Upper Paleolithic human remains have been found in Britain, represented by Paviland and Kent's Cavern, and in Germany, by Hahnöfersand. Farther east, burials are well represented in the Upper Paleolithic records of the Czech Republic, and in Russia at Kostenki and Sunghir. In the south, numbers of finds are known from Italy.

Cultural Associations
Most of the Upper Paleolithic humans are found in deliberate burials, often single but sometimes in groups, and frequently associated with grave goods, such as necklaces of pierced teeth. Such finds are known from a sequence of archaeological phases beginning with the Aurignacian (e.g., Combe Capelle or Mladeč), but the succeeding Gravettian (ca. 29,000–20,000 B.P.) is richer in burials (e.g., those of Dolní Věstonice in the Czech Republic), although it has yielded fewer specimens in western Europe. In southwestern Europe the Solutrean phase is associated with similar populations. They are found again in the Magdalenian or Epi-Gravettian. By this time preserved human remains are much more numerous, and they are known from most parts of Europe. Grave goods sometimes attest to highly developed artistic abilities. The Cro-Magnons were responsible for much art, but rarely figured in their own work.

Relationship with the Neanderthals and Other Hominids


Recent work has shown that early modern humans (sometimes called Proto-Cro-Magnons) first appeared at least 100,000 years ago. They are documented in Africa, but most specifically at the cave sites of Skhul and Qafzeh in Israel, in the period 100,000–90,000 B.P. The Cro-Magnon specimens of Europe must be derived ultimately from one of these ancestral populations, but the available finds show no continuity. Indeed, by 60,000 B.P. Neanderthals were present in the Middle East, and the Proto-Cro-Magnons may have been displaced to the south. It seems likely that they returned somewhere around 50,000 B.P. and flowed into Europe, although there is no documentation in the Middle East other than a burial at Ksar Akil in Lebanon. There is also no close similarity, according to most authors, between the Proto-Cro-Magnons and the Cro-Magnons. The simplicity of these hypotheses is belied by the complexity of the scarce data that we do have. Just as the St. Césaire find in France documented a late Neanderthal and placed constraints on our ideas about the distribution of the early Cro-Magnons, so one new early Cro-Magnon discovery could dramatically alter our view of their origins.[See also Humans, Modern, articles on Origins of Modern Humans, Peopling the Globe.]

Bibliography
Marcellin Boule and Henri Vallois, Fossil Men (1957). Paul Mellars and Chris Stringer, eds., The Human Revolution (1989). Paul Mellars, ed., The Emergence of Modern Humans (1990). Alan Bilsborough, Human Evolution (1992). Günter Bräuer and Fred H. Smith, eds., Continuity or Replacement: Controversies in Homo sapiens Evolution (1992). Martin J. Aitken, Christopher B. Stringer, and Paul A. Mellars, eds., The Origins of Modern Humans and the Impact of Chronometric Dating (1993). John A. J. Gowlett, Ascent to Civilization, 2nd ed. (1993). Chris Stringer and Clive Gamble, In Search of the Neanderthals (1993).

Europe, the First Colonization Of In archaeology, it has been the traditional
view for decades that humankind arose in Africa, and only spread to other continents at a comparatively late stage. The human ancestor known as Homo habilis (handy man) evolved in Africa by about 2 million years ago but never spread elsewhere. It was the later species, Homo erectus (erect man), which also appeared first in Africa around 1.8 million years ago, that radiated out from that continent into Europe and elsewhere around 1 million years ago. This scenario has been challenged in recent years, not only by some (still contentious) new dates which place Homo erectus in Indonesia at 1.8 million years ago, and which assign similar ages to stone tools in Pakistan and Israel (Ubeidiyeh), but also by a series of finds and dates in Europe and Siberia that may yet lead to a complete revision of human prehistory. At present, the earliest known human remains in Europe are the recently discovered (though still somewhat controversial) series of fragments found in the Orce Basin, northeast of Granada in southeast Spain, which, together with stone tools from the area, are thought to date to about 1.6 million years ago. Likewise, some collapsed cave-sites in the Sierra de Atapuerca near Burgos, in northern Spain, have recently yielded one hundred fragments from five or six individual humans, dating to between 800,000 and 1 million years ago, as well as some crude stone tools (associated with animal bones) from a lower layer estimated to be a million years old. Such finds are far more ancient than what was previously thought to be Europe's oldest human bone, the Mauer jaw in Germany, dating to about 500,000 years ago. In sites lacking human remains, the clues to a human presence lie in stone tools. As in Africa, however, the earliest artifacts tend to be somewhat crude and rudimentary, making it often difficult to prove that they are indeed human-worked rather than products of nature. One site containing what are agreed to be definite tools is the cave of Vallonnet in the Alpes-Maritimes of southern France, which has yielded four flakes and five pebble tools of limestone and quartzite in its center, while animal bones were pushed against the walls. The cave had no traces of fire. Its occupation is dated to about 900,000 years ago.

Archaic pebble industries are also known from a number of areas, for example the high terraces of Catalonia and Roussillon (Spain and France) and those of the Somme (France), which are thought to date back to at least 800,000 years ago. These remains, however, are young when compared to the claims being made for a number of sites in the Massif Central, France, a crucial region for the investigation of early occupations, since its volcanic layers not only afford good conservation but also allow accurate dating. The best-known site is that of Chilhac, in the upper valley of the Allier, which is rich in early fauna but which has also yielded a very archaic pebble industry of choppers, cores, and flakes. Unfortunately its date of 1.9 million years has been obtained for the fauna rather than for the industry, but some researchers believe that the two are contemporaneous. More recently, claims have emerged from the region that Europe may have been occupied up to 2.5 million years ago. The principal proponent of this view, the French prehistorian Eugène Bonifay, has discovered what seem to be crude tools of quartz at a site called Saint Eble, located near Langeac at the foot of Mont Coupet, an extinct volcano in the Auvergne region of the Massif Central. Several hundred flakes and chunks of quartz have been recovered from deposits that lie beneath (and are therefore older than) animal fossils known to be around 2 million years old, and also beneath debris from the volcano of about the same age. The crucial question is whether the flakes and pebbles were worked artificially or are products of nature. Bonifay believes that at least five of the quartz pieces are of unquestioned human manufacture, though other specialists remain divided on the issue: some argue that the tools may in fact have been produced by volcanic eruption. If human remains were to be found with the tools, this would be a decisive factor, but meanwhile sites in other parts of the world are accepted by many specialists simply on the evidence of stone tools. In the absence of bones, speculation as to the identity of these first European tool-makers remains just that, though most assume it was some form of Homo erectus, or perhaps even Homo habilis for the earliest sites. Some support for the European claims has also been emerging in northern Asia in recent years. In 1991, the complete and very archaic lower jaw of an adult hominid was found in the republic of Georgia, in the city of Dmanisi. It has been dated to about 1.4 million years ago and assigned to an early Homo erectus or an even older hominid. It was found with archaic stone tools of volcanic tuff and some fractured faunal remains. Moreover, in the 1980s the Russian archaeologist Yuri Mochanov discovered a very early stone-tool industry at the site of Diring in Siberia, which he believes to be at least 1.8 million years old, and perhaps even 3.2 million, on the basis of palaeoenvironmental data. The pebble tools are claimed to resemble those from Olduvai Gorge, Tanzania, more closely than those from any other Early Pleistocene site, and have led him to resurrect the long-ignored theory of a nontropical origin for humankind. While few researchers agree with Mochanov's earliest date, he has recently found considerable support among American specialists not only for the claim that the industry is humanly made, but also for a date of at least 500,000 years
ago. In view of the Georgian jaw, a date of 1.8 million years no longer seems preposterous for a site in Siberia. In short, the traditional scenario of a rather late entrance of Homo erectus into Europe, no more than 900,000 years ago, is being gradually undermined, not only by discoveries of stone tools throughout Europe that may be at least twice as old, but also by the Dmanisi jaw in Georgia and the Diring finds in Siberia, which point to a human presence in northern Asia by at least 1.4 million B.P., and perhaps far earlier. The next few years will undoubtedly produce more such claims and, one hopes, further well-dated artifacts and hominid remains which will help clarify this new version of events. [See also Europe: The European Paleolithic Period; Humans, Modern: Peopling of the Globe; Paleolithic: Lower and Middle Paleolithic.]

Bibliography
Les Premiers Habitants de l'Europe: 1,500,000–100,000 ans (1982). E. Bonifay and B. Vandermeersch, eds., Les Premiers Européens, Actes du 114e Congrès national des Sociétés Savantes, Paris 1989 (1991). P. G. Bahn, "Treasure of the Sierra Atapuerca," Archaeology 49 (1) (1996): pp. 45–48.

HOLOCENE Introduction Victorian biologist Charles Darwin pointed to Africa as the cradle of
humankind, because the closest primate relatives of humans lived there. A century and a half of intensive palaeoanthropological research has shown he was right. The archaeological record of human activity is longer in tropical Africa than anywhere else in the world, extending back more than 2.5 million years. At present, the evidence for very early human evolution comes from eastern and southern Africa. Tim White describes the earliest Australopithecines and hominids from Ethiopia, Kenya, and Tanzania, an area where the increasingly diverse primate fossil record now extends back to 4 million years. Bipedalism dates back far earlier than the first appearance of stone artifacts and other protohuman culture, which first appear in archaeological sites like those at Koobi Fora on the eastern shore of Lake Turkana in northern Kenya about 2.5 million years ago. These earliest sites are little more than transitory scatters of crude stone artifacts and fractured animal bones, located in dry stream beds, where there was shade and water. In this section, Nicholas Toth and Kathy Schick describe the stone technology behind this earliest of human tool kits, reconstructed from controlled experiments and replications of the first hominid stoneworking. Much of the evidence for very early human behavior comes from the now-classic sites in Bed I at Olduvai Gorge in northern Tanzania, excavated by Louis and Mary Leakey. Dating to just under 2 million years ago, these small artifact and bone scatters have been the subject of much controversy, but they are now regarded not as campsites but as places where early hominids cached meat and ate flesh scavenged from predator kills. The earliest human lifeway was much more apelike than human, with Homo habilis, and probably other hominids, relying heavily on both edible plants and scavenged game meat. Homo erectus, a more advanced human, seems to have evolved about 1.8 million years ago in Africa from earlier hominid stock. By that time, too, some Homo erectus
populations were living in Southeast Asia. So if these archaic humans evolved in Africa, they must have radiated rapidly out of Africa into other tropical regions. Leslie Aiello analyzes what we know about Homo erectus from a very sketchy fossil record and shows that these humans evolved slowly toward more modern forms over a period of more than 1.5 million years. Africa provides good evidence for animal butchery and the domestication of fire by Homo erectus, especially by about 750,000 years ago, with some experts arguing that the use of fire originated on the East African savanna. To what extent Homo erectus relied on big-game hunting as opposed to scavenging for meat supplies is a matter for controversy. However, more diverse tool kits, some of them surprisingly lightweight, argue for improved hunting skills throughout Africa, at a time when humans were adapting to all manner of moist and arid tropical environments. Most authorities also believe that anatomically modern humans evolved in Africa from a great diversity of archaic Homo sapiens forms, which in turn evolved from much earlier human populations. As Günter Bräuer points out, two main hypotheses pit those who believe Africa was the homeland of modern humans against those who argue for the evolution of Homo sapiens sapiens in Africa, Asia, and other regions more or less simultaneously. The evidence for an African origin is in large part derived from mitochondrial DNA, but the fossil record from Klasies River Cave, Omo, and other locations provides at least some evidence for anatomically modern humans appearing as early as, if not earlier than, in the Near East. According to the out-of-Africa hypothesis, modern humans evolved south of the Sahara, then radiated northward across the desert at a time when it was moister than today, appearing in the Near East at least 100,000 years ago. But, while the case for an African origin for modern humans is compelling, the actual scientific evidence to support it is still inadequate. During the last glaciation, the Sahara was extremely dry, effectively isolating the African tropics from the Mediterranean. Despite this isolation, Africans developed sophisticated foraging cultures, adapted not only to grassland and woodland savanna but to dense rain forest and semiarid and desert conditions. We know little of these adaptations, except from increasingly specialized tool kits, many of them based on small stone flakes and blades. The ultimate roots of the Stone Age foraging cultures of relatively recent millennia and centuries lie in the many Late Stone Age groups that flourished throughout tropical Africa for more than 10,000 years, as societies in the Near East, Europe, and Asia were experimenting with agriculture and animal domestication. Some of these Late Stone Age groups, especially the ancestors of the modern-day San of southern Africa, are celebrated for their lively cave paintings and engravings, which, as David Lewis-Williams tells us, have deep symbolic meaning. As Steven Brandt and Andrew Smith recount, farming and animal domestication came to tropical Africa very late in prehistoric times. Cereal agriculture may have been introduced into the Nile Valley by 6000 B.C., or crops may have been domesticated there indigenously, but the question is still unresolved. At the time, the Sahara Desert was still moister than today, supporting scattered groups of cattle herders by 5000 B.C.
While ancient Egyptian civilization was based on the annual floods of the Nile River, the Saharans had no such dependable water supplies. As the desert dried up after 4000 B.C., they moved to the margins of the desert, into the Nile Valley, and onto the West African Sahel, where both cattle herding and the cultivation of summer rainfall crops were well established by 2000 B.C. About this time, some
pastoralist groups also penetrated the East African highlands. But the spread of agriculture and herding into tropical regions was inhibited by widespread tsetse fly belts and, perhaps, by the lack of tough-edged axes for forest clearance. It was not until after 1000 B.C. that the new economies spread from northwest of the Zaire forest and from the southern Sahara into eastern, central, and southern Africa. These lifeway changes may have been connected with the introduction of ironworking technology, which was well established in West Africa in the first millennium B.C., having been introduced from either North Africa or the Nile along desert trade routes. Once ironworking spread, especially through the Zaire forest, agriculture spread rapidly. By A.D. 500, mixed farming cultures were well established throughout tropical Africa, except in areas like the Kalahari Desert, where any form of farming or herding was marginal. The rapid spread of farming may also have coincided, in general terms, with the spread of Bantu languages throughout tropical Africa from somewhere northwest of the Zaire forest. With the spread of food production throughout tropical Africa, many general patterns of architecture; metal, wood, and clay technology; and subsistence were established south of the Sahara. These simple farming cultures achieved great elaboration during the ensuing two millennia, largely as a result of African responses to economic and political opportunities outside the continent. Ancient Egyptian civilization was one of the earliest and most long-lived of all preindustrial civilizations. The Nile Valley from the Mediterranean Sea to the First Cataract at Aswan was unified under the pharaoh Narmer about 3100 B.C., in a state that had entirely indigenous roots, even if some innovations, like writing, may have arrived in Egypt from elsewhere in the Near East. There is no evidence that ancient Egypt was a black African civilization, as some scholars have claimed, even if there was constant interaction between the land of the pharaohs and Nubia, upstream of the First Cataract, for more than 3,000 years. The Old Kingdom pharaohs explored Nubian lands for their exotic raw materials. When the Egyptian state passed through a period of political weakness, Nubian leaders assumed greater control and power over the vital trade routes that passed through the Land of Kush. Middle and New Kingdom pharaohs conquered, garrisoned, then colonized Kush, which survived as a powerful kingdom in its own right after 1000 B.C., reaching the height of its power when Nubian kings briefly ruled over Egypt in the seventh century B.C. After being driven from Egypt and chivied as far as their Napatan homeland, the Nubian kings withdrew far upstream to Meroe, where they founded an important kingdom at the crossroads between Saharan, Red Sea, and Nile trade routes. Meroe became an important trade center, especially with the domestication of the camel in the late first millennium B.C., and also a major center for ironworking, going into decline only in the fourth century A.D., when it was overthrown by the kings of the rival kingdom of Aksum in the Ethiopian highlands. Like Meroe, Aksum prospered off the Red Sea trade with the Mediterranean and India. It reached the height of its power in the centuries after Christianity reached Ethiopia in the fourth century A.D. Two developments had a profound effect on the course of tropical African history.
The first was the domestication of the camel, which opened up the trade routes of the Sahara Desert. The second was the discovery by Greek navigators, about the time of Jesus, of the monsoon winds of the Indian Ocean. These two developments brought Africa into the orbit of much larger, and rapidly developing, global economic
systems, which were to link China, Southeast Asia, Africa, and the Mediterranean and European worlds into a giant web of interconnectedness. Camels were not used for Saharan travel in the Roman colonies in North Africa, although they may have penetrated south of the desert on several occasions. The Saharan camel trade in gold, salt, and other commodities developed in the first millennium A.D., especially after the spread of Islam into North Africa. Indigenous West African kingdoms developed in the Saharan Sahel, at the southern extremities of the caravan routes, as local leaders exercised close control over the mining and bartering of gold and other tropical products. By A.D. 1000, Islam was widespread in the Sahel, and the Sahara, the West African savanna, and the forests to the south were linked by close economic ties. Ghana, Mali, and Songhai in turn dominated the southern end of the Saharan trade between 900 and 1500, during centuries when most of Europe's gold came from West Africa. Small kingdoms also developed in the West African forest, as the institution of kingship assumed great importance, associated as it was with long-distance trade, important ancestor cults, and indigenous terra-cotta and bronze sculpture and art traditions that flourished long after European contact in the late fifteenth century. The monsoon winds linked not only the Red Sea and Arabia with India, but the Land of Zanj, on the East African coast, as well. During the first millennium, Arabian merchants visited the villages and towns of the coast regularly, trading gold, ivory, hut poles, and other products for textiles, porcelain, glass vessels, glass beads, and other exotic products. By 1100, a series of small ports and towns dotted the coast from present-day Somalia to Kilwa in the south. This was a cosmopolitan African civilization, with strong indigenous roots and close ties to Arabia. Its merchants obtained gold, ivory, and other interior products from kingdoms far from the coast, notably from the Shona chiefdoms between the Limpopo and Zambezi Rivers in southern Africa. Archaeological evidence shows how a series of powerful cattle kingdoms developed in this highland region, kingdoms that prospered from their connections with long-distance trade routes that linked them with the port of Sofala on the Mozambique coast. During the fifteenth century, Great Zimbabwe, the seat of the Mutapa Dynasty, was at the height of its importance. Zimbabwe's imposing stone ruins are among Africa's most important archaeological sites, for the settlement was abandoned just before Europeans landed at the Cape of Good Hope. African kingdoms developed out of indigenous roots, especially in areas where local leaders could control important resources such as grazing grass, salt sources, and copper or gold mines. A series of such chiefdoms flourished south of the Zaire forest in the Kisale region at the end of the first millennium. Richly adorned graves testify to the great economic power and far-flung trading contacts in the region. Cultural influences from these kingdoms spread far and wide over central and southern Africa before the fifteenth century. A seminal event in African history came with the Portuguese capture of the important Islamic trading city of Ceuta in Morocco in 1415. In the 1430s and 1440s, Prince Henry the Navigator of Portugal sent ships on long journeys of exploration down the West African coast, trying to outflank the Islam-controlled Saharan gold routes. 
By 1480, the Portuguese were well established along the West African coast; in 1497–1498 Vasco da Gama rounded the Cape of Good Hope, explored the East African towns, and
crossed the Indian Ocean to India, opening up a southern route for the spice trade. European contact with Africa brought new economic opportunities for Africans, who took full advantage of them. These opportunities were manifested in the Atlantic slave trade, which began early in the Portuguese exploration of African coasts and reached a crescendo in the late eighteenth and early nineteenth centuries. Christopher DeCorse summarizes the emerging field of historical archaeology, which is documenting not only the European presence in Africa but some of the cultural interactions resulting from the slave trade and other developments.[See also Afar; Africa, Origins of Food Production In; Antiquity of Humankind: Antiquity of Humankind In the Old World; Australopithecus and Homo Habilis; East Africa; Egypt and Africa; Holocene: Holocene Environments In Africa; Human Evolution, articles on Introduction, Fossil Evidence For Human Evolution, The Archaeology of Human Origins; Humans, Modern: Origins of Modern Humans; Hunter-gatherers, African; Nubia; Pastoralists, African; Rock Art: Rock Art of Southern Africa; Trade: African; West African Forest Kingdoms; West African Savanna Kingdoms; West African Sculpture.] Brian M. Fagan

Holocene Environments In Africa African environments, ultimately, are
determined primarily by the local climates, in which there were many and profound changes during the Holocene. Climatic changes within and outside the tropics sometimes seem to have been out of phase, but this may in part result from the imprecision of the dating. Also, there were important local and regional anomalies within the overall climatic patterns. Since most of Africa lies at low latitudes, temperature changes have not been very marked. At the beginning of the Holocene, temperatures were still recovering from their minima during the Last Glaciation (overall, perhaps 9°F [5°C] lower than today). Southern Africa was warmer than at present between about 9000 and 4700 B.P., but was generally cooler both before and since. In eastern equatorial Africa, in contrast, modern temperatures were not reached until about 6700 B.P.; warmer temperatures are also detectable in western and northern Africa by 7000 B.P. Rainfall is the critical determinant of African environments. Much of the continent witnessed extreme aridity until about 12,500 B.P., when the tropics became much wetter. Lakes filled closed basins throughout the region, the larger lakes reaching higher levels (sometimes more than 330 feet [100 m] higher) than their modern successors, and some of them overflowing. There was a brief arid phase at about 10,500 B.P. (perhaps reflecting the glacial readvance at higher latitudes), but the lakes seem to have reached their maximal stands between 9500 and 8500 B.P. Lake Chad, which overflowed, stood 130 feet (40 m) higher than it does today and covered an area of some 135,000 square miles (350,000 sq km). The Nile, fed by increased rainfall in its headwaters, began to cut down into its floodplain at about 12,500 B.P., even though the level of the Mediterranean was rising, and continued down-cutting until about 6000 B.P. Reflecting the higher rainfall and temperatures, the rainforests began to expand at 13,000–12,000 B.P., and were at their maximal extent from 7000 to 3500 B.P. At this time,

they reached some 220 miles (350 km) north of their present limit, and the Dahomey Gap (the break in rainforest distribution in Togo and Benin) was probably closed. The Early Holocene wet phase also involved a northward expansion of the monsoon belt. Rains had reached the eastern Sahara by about 11,000 B.P., and by 9500 B.P. affected most of the modern Sahara and Sahel. Rainfall was not necessarily high: in the eastern Sahara, it may not have exceeded 4 inches (100 mm) a year, but this is a region where no rain had fallen for perhaps 50,000 years and where none falls today. The Sahelian environment expanded into what is now high desert, and intensive use of some of the Sahelian plants eventually domesticated in Africa, particularly sorghum, began at this time. At 9500 B.P., the Niger breached the dune barrier and flooded the Azawad delta 190 miles (300 km) northward. By 8300 B.P., there were permanent lakes in a steppe parkland all across the Sahel and Sahara up to 24 N, supporting groups of gatherers and fishers. The early northward expansion of the monsoon rains across the Sahara was probably not associated with a southward expansion of the Mediterranean winter rains. However, the monsoons reached as far as southern Israel, so that all of the eastern and central Sahara received rainfall. It is possible that northwestern Africa remained arid somewhat longer. The western Sahara was not populated until about 7000 B.P., when Mediterranean faunal elements indicate that the desert had finally, it temporarily, disappeared. Rainfall was not consistently high in the tropics. Most lake levels fell at about 7500 B.P.; they had recovered by 7000 B.P. but were not so high as before, and in both eastern and western Africa, rainfall became more seasonal. Aridity had begun to increase throughout the continent by 4500 B.P. The eastern Sahara had already been long abandoned, except for the massifs and the great oases, and even the western Sahara was unoccupied after 43000 B.P. The retreat of the rainforests before the encroaching savanna, beginning around 35003000 B.P., may have been a factor in the synchronous Bantu expansion. There have been later, more humid episodes, but they have been brief, localized, and comparatively minor. Environmental variations in southern Africa were initially the reverse of those farther north. Thus, after being more humid during the maximum cold of the Last Glaciation, southern Africa became drier at about 12,000 B.P., and the major Holocene wet phase was not established until about 9000 B.P. Thereafter, most of the southern part of the continent generally was in phase with the rest of Africa, the wet period ending by about 4000 B.P.[See also Megafaunal Extinction; Paleoenvironmental Reconstruction; Pleistocene.]

Bibliography
J. A. Allan, ed., The Sahara: Ecological Change and Early Economic History (1981). F. A. Street-Perrott and N. Roberts, "Fluctuations in Closed Basin Lakes as an Indicator of Past Atmospheric Circulation Patterns," in Variations in the Global Water Budget, ed. F. A. Street-Perrott, M. Beran, and R. Ratcliffe (1983), pp. 331-345. Richard G. Klein, ed., Southern African Prehistory and Palaeoenvironments (1984). P. D. Tyson, Climatic Change and Variability in Southern Africa (1986).

A. T. Grove, "Africa's Climate in the Holocene," in The Archaeology of Africa: Food, Metals and Towns, ed. Thurstan Shaw, Paul Sinclair, Bassey Andah, and Alex Okpoko (1993), pp. 32-42. J. Maley, "The Climatic and Vegetational History of the Equatorial Regions of Africa during the Upper Quaternary," in The Archaeology of Africa: Food, Metals and Towns, ed. Thurstan Shaw, Paul Sinclair, Bassey Andah, and Alex Okpoko (1993), pp. 43-52.

Holocene Environments In Europe The environment of Europe changed dramatically during the Holocene due to both natural and human factors, the relative importance of which varied through time. At the end of the last Ice Age, a large part of northern Europe was covered by ice sheets, while much of the area farther south experienced cold conditions and supported open herb-dominated vegetation or open woodlands of birch (Betula) and pine (Pinus). Sea level was below its present height because of water held in the ice sheets, and Britain was joined to the continent by a land bridge.

The rapid temperature rise at the onset of the Holocene (10,000 B.P.) enabled trees to spread northwards, leading to the development of dense woodland over much of Europe by ca. 8000 B.P. In northwestern and central Europe this woodland was a mixture of broad-leaved trees, including hazel (Corylus avellana), oak (Quercus), elm (Ulmus), lime (Tilia), and alder (Alnus), while pine, birch, and spruce (Picea) were dominant in Scandinavia and eastern Europe. Rising sea levels separated Britain from the continent by 8000 B.P., and the coastline of Europe resembled that of today by ca. 7000 B.P.

These changes presented Mesolithic peoples with a varying resource base, as plant and animal populations changed and the extent and distribution of coastal resources shifted. For example, Franchthi cave in southern Greece today lies a few meters above sea level on a rocky coast, but at the start of the Holocene the sea was up to 2 miles (2-3 km) away, separated from the cave by mudflats. This change in the position of the coastline is reflected in changes in the type of mollusk exploited by the inhabitants of the cave, from mudflat species to types characteristic of rocky shores at ca. 8000 B.P. Away from the coast, Mesolithic sites clustered around lakes and rivers, enhancing mobility and providing opportunities for fishing. The Early Mesolithic (ca. 9600 B.P.) site of Star Carr, northern England, was on the edge of a large lake surrounded by open birch woodland. Red deer (Cervus elaphus), roe deer (Capreolus capreolus), elk (Alces alces), aurochs (Bos primigenius), and pig (Sus scrofa) were hunted by the occupants of the site; birch trees were used for timber, and birch bark was collected, possibly for resin. Hazel was the dominant tree over much of northwest Europe between ca. 9500 and 7500 B.P., and at some Mesolithic sites hazelnuts seem to have been an important part of the diet. It has been suggested that the human population may have managed hazel by using fire to suppress its competitors, although pollen and charcoal analyses do not support this. Nevertheless, fire may have been used to create small clearings or to drive game, and there is evidence from some upland areas of northern England that burning was widespread in the later Mesolithic period.

Agriculture was first introduced into southeast Europe at ca. 9000-8000 B.P., and had spread to the northwest by 5500-5000 B.P. Cereal cultivation required clearance of woodland to create fields, and minor clearings, or landnam episodes, appear in pollen diagrams from this time. The overall extent of woodland was not greatly reduced, and it continued to be used as a resource. That some woodland may have been managed is suggested by the uniform poles of wood used in some of the Somerset Levels trackways in southwest England. Trees, particularly elm and lime, also provided a source of leaf fodder for cattle, as suggested by the find of leaf hay in byres at the Early Neolithic site of Thayngen-Weier in Switzerland. This type of exploitation may have been at least partially responsible for the decline of elm at about 5000 B.P., which is widely recorded in pollen diagrams from northern Europe.

From the Late Neolithic and Bronze Age, human impacts on the environment increased, as woodland was cleared to provide land for cultivation and pasture, often leading to the onset of soil erosion. In upland areas of northwest Europe, the onset of blanket peat formation seems to have resulted from changes in the water balance brought about by woodland clearance, while in areas of lower rainfall, heathland formed on nutrient-impoverished soils. The original woodland of the Mediterranean area was replaced by thorny grazing-resistant shrubs (macchia and garrigue). The last four thousand years, therefore, have witnessed a major change in the nature of European environments, from a substantially wooded landscape to one with a mosaic of vegetation types, most of which owe their character, directly or indirectly, to human activity.[See also Elm Decline, European; Megafaunal Extinction; Paleoenvironmental Reconstruction; Pleistocene.]

Bibliography
B. Huntley and H.J.B. Birks, An Atlas of Past and Present Pollen Maps for Europe: 0-13,000 Years Ago (1983). B. Huntley and T. Webb III, Vegetation History (1988). Neil Roberts, The Holocene (1989).

Holocene Environments In the Americas About 12,000 years ago, prior to the onset of the Holocene, there was a general warming trend throughout North America. This decreased the area and thickness of the Laurentide ice sheet. By 12,000 years ago, the anticyclonic winds that had probably dominated the climates of the western United States during the glacial maximum weakened, so that westerly winds began to blow more steadily across the United States. Reduction of the ice sheet and the accompanying warming trend reduced the spruce forests of the midwestern United States and eliminated spruce from southerly regions, such as Texas. In the western and southwestern United States, alpine woodlands that had extended their range downslope during the glacial maximum were now in full retreat and were soon restricted to higher elevations. In the same regions, shallow playa lakes dried, and surface levels in larger lakes, such as Lake Bonneville, were dropping. In the southeastern United States, mixed deciduous forests were expanding into regions that had once been dominated by conifer forests of spruce or fir.

By the beginning of the Holocene, at 10,000 years ago, an increase in summer insolation raised average temperatures by as much as 3.6 to 7.2°F (2 to 4°C) in much of North America, except in northeastern Canada, where a smaller and thinner remnant of the Laurentide ice sheet still existed. Nevertheless, the remaining ice sheet continued to influence the Early Holocene climate of North America. As the remaining ice sheet in North America and alpine glaciers in South America melted, sea levels rose, causing flooding in many low-lying coastal areas. Early Holocene precipitation levels in North America and in tropical regions of Central and South America were higher than present levels; however, by 9,000 years ago there was a reverse pattern in the southern temperate regions of South America, where rainfall levels decreased. A weak glacial anticyclonic wind pattern was still present in the eastern regions of North America. However, subtropical highs over the Pacific Ocean strengthened and created a dominant westerly wind pattern across the western and central portions of North America and throughout most of South America. By the Early Holocene, climates in most areas of North America were becoming similar to climates in those regions today. Exceptions were northeastern North America and continental alpine regions, where climatic conditions remained colder than they are at present due to the effects of remaining ice sheets or alpine glaciers. Southerly, warm, monsoonal winds began flowing northward from the Gulf of Mexico during the Early Holocene into adjacent regions of Central and North America. Overall, the improving climatic conditions throughout the Americas created biomes similar to those of the present by 8,000 years ago. The major exceptions were in some regions of South America, where mesic forests reached their greatest expansion.

In much of North and South America, the Early Holocene climatic conditions continued until around 6,000 years ago, when precipitation levels began to drop below present-day levels. Summer temperatures continued to rise in interior North America and soon reached a maximum average 3.6 to 7.2°F (2 to 4°C) higher than temperatures in those regions today. The effects of these climatic changes led traditional lowland biomes to shift upslope in mountainous regions, and led to the formation of dunes and to stream erosion in arid lowlands. Similar patterns occurred in South America, where lake levels dropped in response to warmer climates and higher evaporation rates. By 6,000 years ago, the southerly winds from the Gulf of Mexico strengthened, while in the west and midwest strong westerlies prevailed. The Middle Holocene rise in summer temperatures, coupled with increased evaporation caused by the hot, dry westerly winds, reduced the remaining forested and parkland areas in the central part of North America and allowed prairies to expand to their maximum size. In the southern areas of South America, warmer conditions during the Middle Holocene reduced the expanse of mesic forests and created an expansion of grasslands. In North America, the southern boundary of the spruce forests receded to its northernmost latitude of the Holocene, as did the treeline.

By the Late Holocene (ca. 3000 B.P.-present), summer temperatures and evaporation levels in the interior of North and South America gradually decreased from their Middle Holocene highs to the levels found in these regions today. During this period the westerlies weakened in North America, but they still blow from essentially the same direction today. Along the northern border of the United States, spruce forests began moving southward to their present boundary. The boundaries of the present biomes in the Western Hemisphere were in place by 1,000 years ago in almost all regions.[See also Megafaunal Extinction; Paleoenvironmental Reconstruction; Pleistocene.]
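The temperature shifts quoted in these Holocene entries are differences rather than absolute readings, so they convert between scales by the ratio of degree sizes alone, without the 32° offset used for absolute temperatures; hence the 5°C of glacial cooling noted in the Africa entry corresponds to 9°F, and the mid-Holocene warming of 2 to 4°C to 3.6 to 7.2°F:

\[
\Delta T_{\mathrm{F}} = \tfrac{9}{5}\,\Delta T_{\mathrm{C}}:\qquad
\Delta T_{\mathrm{C}} = 2^{\circ} \Rightarrow \Delta T_{\mathrm{F}} = 3.6^{\circ},\qquad
\Delta T_{\mathrm{C}} = 4^{\circ} \Rightarrow \Delta T_{\mathrm{F}} = 7.2^{\circ}.
\]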

Bibliography
J. Rabassa, Quaternary of South America and Antarctic Peninsula, vol. 1 (1983). V. M. Bryant and R. G. Holloway, Pollen Records of Late-Quaternary North American Sediments (1985). COHMAP Members, "Climatic Changes of the Last 18,000 Years: Observations and Model Simulations," Science 241 (1988): pp. 1043-1052.

Religion is a common if not universal feature of human societies, past and present.
Its remains, in the form of icons, shrines, temples, and churches, form a conspicuous part of the archaeological record. It is recognized as one of the most powerful forces operating on individuals and societies, one which can stimulate them to acts of great enterprise or great cruelty. Yet much of what we know of early religions is derived not from archaeology alone, but from the written documents associated with it. In a prehistoric context, where textual evidence is unavailable, the archaeologist faces the very difficult task of endeavoring to infer religious beliefs from material remains. Thus the archaeological study of religion may properly be divided into two parts: (1) the study of early historic societies, where archaeology provides valuable information that can be used in association with textual records to build a richly textured and multidimensional picture of religious beliefs and practices; and (2) the study of prehistoric contexts, where archaeology may be able to tell us relatively little about specific religious beliefs, but can document religious practices, insofar as these have left material traces.

Religion may be defined as the belief in, worship of, or obedience to a supernatural power or powers considered to be divine or to have control of human destiny. But while belief lies at the basis of religion, it is the institutionalized expression of that belief which gives religion its form and substance. Most religions involve ceremonies or rituals that take place in specified places, some of which may be specially constructed buildings, although natural features such as rocks and springs are also frequently endowed with religious significance. Wherever regular religious practices are performed, there is the potential for archaeologists to identify and interpret the traces of these activities. This is all the more the case where there are intentional modifications of the landscape such as rock paintings, monuments, or shrines.

Early Evidence for Religious Belief


Despite claims for linguistic competence in chimpanzees, there has been no serious suggestion that these or any other animal species hold beliefs that could be described as religious. The development of religious belief is most likely associated with the cognitive changes involved in the evolution of modern humans. It is difficult to locate the origin of religion in simple chronological terms. We do not know whether it emerged suddenly once a particular cognitive threshold was reached, or whether it formed through a more gradual process as human intellect developed stage by stage. The controversy here parallels the argument over the origin of language, where some argue for a sudden acquisition of language skills within the last 100,000 years, while others prefer to envisage a gradual refinement of modern human language from more primitive forms of communication used by early hominids.

The earliest hard information on the religious dimension of human behavior comes from two categories of archaeological evidence from the Middle and Upper Paleolithic: artistic representations and burial practices. Both are hazardous to interpret. Paleolithic art includes both portable objects, such as anthropomorphic figurines, and paintings or engravings on the walls of caves or rock shelters. There are claims for early examples of portable Paleolithic art dating back to before 100,000 years ago, but most of the evidence for both parietal (wall) and mobiliary (portable) art comes from within the last 50,000 years. Here are included the so-called Venus Figurines (female figurines) from Europe, painted stone plaques from southern Africa, early engravings at rock shelters in Australia, and the famous Paleolithic decorated caves of western Europe. It is widely accepted that much of this art has religious significance, but the precise nature of that significance is unclear. The Venus figurines have been interpreted as evidence of a cult of human fertility, although they depict the entire spectrum of female anatomical development, from young girls through puberty to pregnancy and beyond; only a few have the distended bellies that might be taken to indicate pregnant individuals. The cave art of western Europe has most frequently been viewed as a kind of hunting magic, since hunted animals such as bison and horses figure conspicuously among the subjects depicted; a few may even have missiles drawn on them, along with marks that may represent wounds. Others have interpreted the art in structuralist terms, the different species representing male and female principles, or as a metaphor for human social organization, with particular species serving as the symbols of individual social groups. Whether the European cave art was the focus for religious ceremonies is uncertain, though there is indeed some evidence which points in that direction. Studies have shown that the places chosen for the most vivid images were often those with particular acoustic properties, suggesting that music or chanting could have played a part in whatever rituals were practiced. On the other hand, it must be recognized that the art is also found in secluded niches and other more private places within the caves which would not have been suitable for group ceremonies.

The second category of Paleolithic evidence relating to religious belief is funerary. The earliest human burials are those from the Qafzeh cave in Israel, where at least three humans of modern type (Homo sapiens sapiens) were laid to rest in shallow pits around 100,000 years ago. Further burials are associated with Neanderthal remains in Europe and the Near East. The act of burial may in itself be considered to have religious significance, although it does not necessarily imply belief in a life after death. These early burials are all inhumations; the first cremation, from the southeastern Australian site of Lake Mungo, dates to around 26,000 B.C. In many cultures, cremation is believed to be necessary in order to free the soul of the deceased from the dead body. Whether the people who cremated their dead at Lake Mungo shared this belief we cannot determine.

Shamanism and Rock Art


Anthropology provides an alternative route for the study of early religious beliefs. The work of Mircea Eliade and others has shown that among hunter-gatherers, the commonest form of religious expression is shamanism. Shamans are ritual specialists who are possessed of special powers and who act as intermediaries between humans and the shadowy world of spirits and the supernatural. To communicate with the spirit world, the shaman has to enter a trance, sometimes induced by narcotics or hypnotic dance, in which he or she experiences visions or hallucinations. From ethnographic evidence we know that shamanism may sometimes be directly associated with rock art. The connection has been demonstrated clearly in the case of the rock art of western North America, where the motifs painted on rock faces can be matched specifically with shamanistic beliefs known from the ethnographic literature of the region. In cases such as this, the archaeologist can have reasonable confidence that there has been continuity in belief and religious practice over a period of hundreds, if not thousands, of years. Individual sites and objects can then be interpreted in the light of these ethnographically documented belief systems. A striking example is the appearance in Australian rock art of figures identifiable from current Aboriginal beliefs, such as the rainbow serpent Gorrondolmi and his wife, depicted at the 6,000-year-old site of Wirlin-Gunyang. The problem of interpretation becomes much more difficult where there is no directly connected ethnographic information. It has been argued, for instance, that the cave art of the western European Paleolithic also relates to a pattern of shamanistic beliefs concerned with hunting magic. This interpretation has not as yet found general acceptance among cave art specialists.

Shrines and Temples


Among the most obvious material traces of religious activity are the remains of shrines and other specially demarcated areas or buildings intended specifically for religious practice. Here we may distinguish between human-made structures and the concept of the numinous landscape, where natural features such as springs, trees, or rocks are held to have particular religious significance. Australian Aborigines, for example, regard particular natural features as some of their most sacred sites. It is only those who have knowledge of these features who can identify them; to those without this knowledge they appear simply as natural features. As an intermediate category between such sites and human-made shrines and temples we may consider natural features that have been intentionally modified as a result of their special significance. Famous examples include the natural cave beneath the Pyramid of the Sun at Teotihuacán in Mexico, which made the city a center of pilgrimage for the whole region, or the much more recent cave-shrine of Lourdes in the French Pyrenees, associated with the Christian cult of the Virgin Mary.

The oldest human-built structures intended for ritual activity date back less than 10,000 years. The recognition of individual ritual sites does, however, require two major assumptions on the part of the archaeologist: first, that we are correctly recognizing in these sites the traces of ritual behavior; and second, that the nature of these sites was such that they can justifiably be described as largely or exclusively ritual. The whole division of human behavior into a series of subsystems, such as economic, technological, or religious, is a heuristic device made by archaeologists for the sake of convenience. In everyday life, there are no sharp divisions between these different categories of behavior, and many ordinary dwelling houses will contain items or features relating to ritual or religious belief, such as statues or icons. Where we are faced with a major monument such as a Mesopotamian ziggurat or a Mesoamerican temple-pyramid, the identification as a shrine may be relatively straightforward. Where a less distinctive structure is concerned, however, it may be difficult to determine with confidence whether it is a shrine. A good example of this difficulty is provided by the site of Çatal Hüyük in southern Turkey. Excavations at this large Neolithic tell site, occupied around 6000 B.C., exposed a closely packed settlement in which houses were built up against each other, leaving only occasional courtyards open to the sky. Within the complex, almost one third of the rooms had mural decoration or benches suggestive of ritual function. One of the rooms had a wall painting showing a scene of vultures devouring headless corpses; in another was a mural sculpture of a goddess giving birth to a ram. These rooms may have been shrines, implying a very high percentage of ritual space in the settlement as a whole; or they may simply have been richly decorated domestic dwellings in which ritual played a prominent role.

With the emergence of state formations we are on somewhat firmer ground, since most state-level societies devoted considerable effort to the construction of impressive religious monuments. In early Mesopotamia, the city-states were each dedicated to a particular deity or pair of deities, under whose special protection the city was considered to lie. The city of Ur, for example, was dedicated to the moon god Nanna and his female consort, and it was to Nanna that the famous ziggurat built at Ur around 2100 B.C. was dedicated. The resources of the state lay behind great building projects such as this, and the temple itself was a wealthy and powerful institution in the early state societies of Mesoamerica, the Near East, and Egypt. The temple was closely associated with the secular power of the ruler, who was also usually high priest, and in some societies was himself regarded as a living god. Religious and secular authority were thus merged in a system in which each supported the other. State cults, as well as attracting popular adherents, can also be tools used by secular rulers for propaganda purposes.

Beneath the level of institutionalized state religion, however, lesser cults retained their popularity among the ordinary people, who might leave offerings at small wayside altars or domestic shrines. Excavations at ancient city sites have sometimes enabled archaeologists to document the popularity of these lesser cults in the form of figurines or other tokens. These may be related to beliefs or deities about which the ancient texts are silent. Archaeology is thus able to provide a counterbalance to the emphasis in surviving texts on established state-sponsored cults. The same is true of African religious beliefs, which survived, often in secrecy, alongside official Christianity on North American slave plantations.

Cosmology and Religions


In a world without electric lights, and in regions where cloudless skies are common, the night sky would have made a powerful impression on people's understanding of their place in the cosmos. This is borne out by evidence which shows that astronomical observations played a crucial part in the religious beliefs of many early societies. It has long been established that many important ritual or religious monuments were carefully aligned on solar, lunar, or stellar events, often to an astonishingly high degree of accuracy. The Pyramids of Giza in Egypt, for example, were carefully aligned on the cardinal points, since one version of Egyptian mythology held that the king ascended after death to the circumpolar stars; the shaft from the burial chamber exited in the middle of the pyramid's northern face. Claims for an important astronomical link have also been made with respect to the stone circles of northwestern Europe. According to Alexander Thom and his followers, these were carefully constructed so as to include alignments directed toward the rising and setting of the sun, moon, and major stars. Not everybody accepts the postulated lunar and stellar alignments, but the solar alignment is clearly in evidence at some sites. Even today, many people gather at Stonehenge in southern England at the time of the summer solstice to watch the dawn sun rising above the Heel Stone on the main axis of the stone circle.

Astronomical alignments such as these are probably evidence of a particular set of beliefs concerning human origins and the place of humankind in the whole order of things. This area of belief is closely akin to religion but usually goes under the name of cosmology. In archaeological contexts, cosmology can be inferred, with caution, from the orientation of buildings. This is seen very strikingly in the planning of historical Chinese cities, where rectangular plans were preferred, with the principal streets oriented according to the cardinal points. The Chinese view was that, properly organized, earth and heaven formed a geometric and harmonious whole.

Cosmological considerations were also very prominent in the urban societies of Central America, and especially in that of the Maya. It has long been known that Maya astronomy was highly sophisticated. Recent work has emphasized how closely this was linked with their mythology. Maya texts tell the mythical story of the creation in terms of stars and constellations visible in the night sky. For example, they believed that the Milky Way as it is visible on August 12 was a vast canoe, paddled by gods who used it to ferry First Father (the maize god) to the place where he would be reborn from the three hearthstones, which are popularly known today as Orion's Belt. Figures from this night-sky story were regularly depicted in Maya art, but it is only with the aid of the texts and an understanding of Maya cosmology that these can be understood.

Iconography
Religious observances frequently focus on an image or symbol of the supernatural power that is the subject of worship. Furthermore, religious mythology often features prominently in artistic depictions. Together, these tendencies may result in a rich corpus of religious iconography that is open to the archaeologist to study and interpret. Where texts are available, it may be possible to say which being or power is represented; it is most convenient of all where, as in many Egyptian scenes from tombs or temples, the name of the god or goddess is written alongside the depiction. In other cases, detailed studies of the iconography allow a pantheon of deities to be recognized, even though no names can be attached.

The discovery of religious iconography is one of the key categories of evidence that enables archaeologists to identify a room or building as a shrine or temple. In Europe and the Near East, the focus of worship was often a cult image that served as a substitute for the deity itself; one of the most splendid examples was the cult statue of Zeus made for the temple of the god at Olympia in Greece, home of the original Olympic Games. The seated statue, made of gold and ivory, rose to a height of 43 feet (13 m) in the shadowy rear part of the temple; light from the main door was reflected onto it by a shallow pool of oil at its foot. In other cases, it may not be so easy to distinguish between divine and human forms. At Tell Asmar in Iraq, a cache of ten alabaster statues was found beneath the floor of a temple. The statues were similar in style and manufacture, and most were interpreted as substitute-figures of worshipers, designed to remind the god of their needs even when they themselves were busy elsewhere. Two of the group were larger, however, and on grounds of their size and of the symbols carved on their bases it was suggested that they might be cult statues of the god and goddess themselves. Others have argued that these too are simply representations of worshipers. This case illustrates once again the difficulty of interpreting religious evidence. Nonetheless, in many religious scenes supernatural figures are carefully distinguished from ordinary mortals by size, by coloring, by the addition of special attributes (such as horns in Mesopotamia, or wings in the Christian tradition), or by their depiction in nonhuman or only partly human guise.

Funerary Beliefs
A final and highly important category of religious evidence is that from burials. One of the major themes of religion is the destiny of humans after death. Most societies possess some belief in a life after death, and this finds reflection in burial rites and cemetery evidence. An extreme case is that of ancient Egypt, where the literal nature of the belief led to extensive efforts to preserve the body of the deceased by mummification. If the body were not preserved, then, according to Egyptian belief, the chances of an afterlife were seriously impaired. The Egyptian evidence also shows clearly how the form of the burial monument itself can be a powerful religious symbol. The pyramid form beloved of Egyptian rulers during the Old and Middle Kingdoms is thought to have represented the slanting rays of the sun, and indicates the importance of the cult of the sun in the religious beliefs associated with kingship. The Pyramid Texts, magical spells inscribed on the walls of the later Old Kingdom pyramids, suggest that the pyramid was to be seen as a material representation of the sun's rays, on which the dead king would ascend to heaven.

A more contentious subject is the purpose of grave goods left with the dead. Where these take the form of food remains and feasting equipment, it can be argued that they were needed for the sustenance of the deceased after death. The same applies to burial places that were built to resemble houses and were presumably regarded as the dwelling place of the dead person's spirit. Many funerary mythologies incorporate the concept of a journey, and here again food or money might be left with the dead to support them on their way. A related practice is the placement in the grave of objects signifying the person's rank in society, the intention being to ensure their admission to the correct social rank in the afterlife. At this point, however, interpretation becomes hazardous, since death is an emotive event for those left alive, and showing respect to the dead need not imply belief in a life after death.

Conclusion
A short account such as this cannot fully do justice to the great diversity of religious belief and the many ways in which it may be manifest in the archaeological record. Where adequate information exists, religion is without doubt one of the most fascinating aspects of human behavior open to study. This is particularly the case for the more recent, historical periods, where texts and archaeology together form a powerful and mutually reinforcing combination for the study of religious beliefs and practices. It is only archaeology, however, that can document the silent millennia in the early history of religion, stretching back deep into the prehistoric past.[See also Art; Astronomy; Christianity, Early; Dead Sea Scrolls; Ideology and Archaeology; Inca Civilization: Inca Religion; Islamic Civilization; Maya Civilization: Maya Pyramids and Temples; Rock Art: Introduction; Venus Figurines.]

Bibliography
H. Frankfort, H. A. Frankfort, J. A. Wilson, and T. Jacobsen, Before Philosophy (1946). I.E.S. Edwards, The Pyramids of Egypt (1961). M. Eliade, Shamanism: Archaic Techniques of Ecstasy (1964). P. Wheatley, The Pivot of the Four Quarters (1971). L. E. Sullivan, Icanchu's Drum: An Orientation to Meaning in South American Religions (1988). D. S. Whitley, "Shamanism and Rock Art in Far Western North America," Cambridge Archaeological Journal 2 (1992): pp. 89-113. D. Friedel, L. Schele, and J. Parker, Maya Cosmos: Three Thousand Years on the Shaman's Path (1993). C. E. Orser, "The Archaeology of African-American Slave Religion in the Antebellum South," Cambridge Archaeological Journal 4 (1994): pp. 33-45.

Venus Figurines The class of artifacts known as Venus figurines comprises an extremely heterogeneous body of artifactual material from Eurasia, dating to the Upper Paleolithic Period. Venus figurines include, on the one hand, small and easily transportable three-dimensional artifacts or images incised on portable supports, and on the other hand, two-dimensional images deeply carved, lightly incised, and/or painted onto fixed surfaces such as cave or rock-shelter walls. They range in height from 3 cm to 40 cm or more. While some researchers include in this class abstract images (so-called vulvae and forms resembling an elongated S or upside-down P), these are not discussed here.

The best-studied and most oft-pictured specimens in the Venus figurine category are the realistically rendered and almost voluptuous images of the female body, but these are not representative of the class as a whole. There are also clear portrayals of the male body (e.g., from Brassempouy, Laussel, and Dolní Věstonice) as well as numerous generalized anthropomorphs (e.g., examples from Sireuil, Tursac, Grimaldi, Dolní Věstonice, and Mal'ta). Many specimens appear to be purposefully androgynous, and those with only faces cannot be sexed at all (e.g., specimens from Brassempouy, Mas d'Azil, Bédeilhac, and Dolní Věstonice). Some may be no more than incomplete rough-outs (or ébauches), and a rare few appear to be composite images of anthropomorphs and animals (i.e., Grimaldi, Laussel, Hohlenstein-Stadel). There are examples with detailed facial features (e.g., from Mal'ta, Buret', and Dolní Věstonice), pronounced coiffures but no facial details (e.g., Brassempouy and Willendorf), and many more with neither face, hair, hands, nor feet rendered in any detail. Some specimens, mostly from the Ukraine and Siberia, have body elaboration interpreted as clothing, belts, and/or tattoos (e.g., especially from the Kostenki group and Buret').

Beyond superficial morphology, these artifacts have been worked from many different raw materials, each possessing unique physical qualities, and the materials were likely selected for workability, availability, and/or overall surface appearance. Venus figurines were made from ivory, serpentine, schist, limestone, hematite, lignite, calcite, fired clay, and steatite, with a few of bone or antler. While they have been the subject of scholarly attention for more than a century, a detailed understanding of the sequence of techniques employed to fabricate them (in all their diversity) has been sorely lacking. Work has only recently begun on the relationship between raw materials, techniques of fabrication, morphological appearance, and prehistoric significance.

Most coffee-table art books and many well-known studies highlight only what are considered to be the most visually striking specimens. Yet Venus figurines include flat and apparently pre-pubescent female subjects, images interpreted to be in various stages of pregnancy or of the general female life cycle, as well as several obviously male specimens. The preference for allowing such a heterogeneous class of artifacts to be represented by the most voluptuous examples perhaps says more about the analysts than it does about the artifacts: it obscures their extraordinary diversity in morphology, raw materials, technologies of production, and archaeological contexts through time and space. Some theories advanced to explain their prehistoric significance are now questioned because they overemphasized specimens representing only one part of the morphological range.

Temporal and Spatial Distribution


Venus figurines date to three periods of the Upper Paleolithic. They appear in the archaeological record between approximately 31,000 and 9000 B.P., but chronometric dating is problematic on several counts, and their distribution through both time and space is episodic. In western Europe, the earliest examples date to the Gravettian (ca. 26,000 to 21,000 B.P.) and the latest to the Magdalenian (ca. 12,300 to 9000 B.P.), with most specimens associated with cave and rock-shelter sites. (The earliest renderings, from the French Aurignacian, ca. 31,000 to 28,000 B.P., are the problematic so-called vulvae forms not discussed here.) In central Europe, specimens are primarily associated with the Pavlovian (ca. 31,000 to 24,000 B.P.). In the Ukraine, anthropomorphic imagery is found throughout the Kostenki-Avdeevo culture period (ca. 26,000 to 12,500 B.P.) and comes almost exclusively from open-air occupation sites. Siberian images date to the so-called Eastern Gravettian (primarily ca. 21,000 to 19,000 B.P.). Significantly, some regions with well-established records of Upper Paleolithic human occupation have no evidence of anthropomorphic imagery, including the Cantabrian region of northern Spain and the Mediterranean region of southwestern Europe (with the sole exception of Italy).

Explanatory Theories
Most explanatory theories treat Venus figurines as a homogeneous class of data and collapse together more than 20,000 years of varied production. Portable and immobile specimens are lumped together, and what may be significant regional and temporal differences in technologies, raw materials, and styles are often ignored. Contextual differences between those specimens found at open-air sites, in caves and/or rock shelters, and in other geographic locales are typically underestimated.

Functionalist Accounts. Today it is generally thought that Upper Paleolithic visual imagery, including Venus figurines, transmitted through stylistic means ecological and/or social information necessary to group survival. One of the primary explanatory accounts for the appearance and geographic distribution of Venus figurines focuses on ecological stress associated with the ice sheets advancing well into northern Europe 20,000 to 16,000 years ago. According to this account, as resources became more difficult to obtain, areas remaining occupied would have been able to sustain only low population densities. Alliance networks forged by the exchange of marriage partners could have counterbalanced these problems, and some researchers believe that Venus figurines played an important role in symbolizing and communicating information related to mating alliances. The geographically widespread production of Venus figurines as part of a system of information exchange could have permitted small groups of prehistoric hunter-gatherers to remain in areas that, without alliance connections, they might otherwise have had to abandon.

A second and far more questionable set of functionalist interpretations derives from sociobiology. According to several authors, these are representations of female biology that were fabricated and used for erotic and sexual reasons by males and for male gratification and/or education. While some of these explanations highlight the sensuality of the voluptuous three-dimensional images and argue that they were used as prehistoric sex toys or educational aids, others suggest that they served as trophies to mark brave acts of rape, kidnapping, and possibly murder. A genetic (and thus evolutionary) advantage was supposedly conferred upon the makers/users, either by teaching and practicing lovemaking skills or by publicizing physical prowess and thereby gaining social advantage among one's peers. The inherently androcentric and heterosexist bias in the assumptions underpinning these accounts has now come under close scrutiny, and they are today considered far less plausible than when originally proposed.

Gynecological Accounts. According to some, different Venus figurines literally depict physiological processes associated with pregnancy and/or childbirth or else signify the entire female life cycle. Some researchers note that aspects of parturition are well represented, while still others stress that the subject matter is womanhood and not just motherhood. In some ways these (and other related) contemporary theories build on, but also challenge the simplicity of, turn-of-the-century notions that the figurines were symbols of female fertility and magic (hence, in part, their original appellation, Venus).

Future Directions for Research


The use of multiple lines of evidence is a time-honored way to understand the significance of prehistoric material culture. Attention to different kinds of site context, detailed understanding of various techniques of fabrication, recognition of their diverse morphologies and raw materials, site-specific spatial information, and consideration of the other classes of artifacts with which Venus figurines were discovered may all help turn attention away from what is compelling today and toward whatever might have made them compelling in prehistory.[See also Europe: The European Paleolithic Period; Paleolithic: Upper Paleolithic; Rock Art: Paleolithic Art.]

Bibliography
Z. A. Abramova, "Palaeolithic Art in the USSR," Arctic Anthropology 4 (1967): pp. 1-179. Desmond Collins and John Onians, "The Origins of Art," Art History 1 (1978): pp. 1-25. Randall Eaton, "The Evolution of Trophy Hunting," Carnivore 1 (1978): pp. 110-121. Randall Eaton, "Meditations on the Origin of Art as Trophyism," Carnivore 2 (1979): pp. 6-8. Patricia Rice, "Prehistoric Venuses: Symbols of Motherhood or Womanhood?" Journal of Anthropological Archaeology 37 (1981): pp. 402-414. Marija Gimbutas, The Goddesses and Gods of Old Europe (1982). R. Dale Guthrie, "Ethological Observations from Palaeolithic Art," in La Contribution de la Zoologie et de l'Ethologie à l'Interprétation de l'Art des Peuples Chasseurs Préhistoriques, eds. Hans-Georg Bandi et al. (1984), pp. 35-74. Mariana Gvozdover, "Female Imagery in the Palaeolithic," Soviet Anthropology and Archaeology 27 (1989): pp. 8-94. Sarah Nelson, "Diversity of the Upper Palaeolithic Venus Figurines and Archaeological Mythology," in Powers of Observation: Alternative Views in Archaeology, eds. Sarah Nelson and Alice Kehoe (1990), pp. 11-22. Clive Gamble, "The Social Context for European Paleolithic Art," Proceedings of the Prehistoric Society 57 (1991): pp. 3-15. Marcia-Anne Dobres, "Re-Presentations of Palaeolithic Visual Imagery: Simulacra and Their Alternatives," Kroeber Anthropological Society Papers 73-74 (1992): pp. 1-25. Marcia-Anne Dobres, "Reconsidering Venus Figurines: A Feminist-Inspired Reanalysis," in Ancient Images, Ancient Thought: The Archaeology of Ideology, eds. A. Sean Goldsmith et al. (1992), pp. 245-262. Henri Delporte, L'Image de la Femme dans l'Art Préhistorique, 2nd ed. (1993). Henri Delporte, "Gravettian Female Figurines: A Regional Survey," in Before Lascaux: The Complex Record of the Early Upper Palaeolithic, eds. Heidi Knecht et al. (1993), pp. 243-257. Jean-Pierre Duhard, Réalisme de l'Image Féminine Paléolithique (1993).

Cultural Ecology Theory has been a fundamental perspective of American archaeology and anthropology since World War II. Its origin and development are most directly associated with Julian Steward, an anthropologist whose interests incorporated both ethnography and archaeology. Steward succinctly defined cultural ecology as the study of the processes by which a society adapts to its environment (1968). The term environment here is conceived in its broadest sense, including, for example, other social groups. As the word adaptation implies, cultural ecology is related conceptually to cultural evolution, and specifically to Steward's own concept of multilinear evolution, which stressed the search for regularities in independent sequences of evolutionary change.

Central to cultural ecology was Steward's idea of the culture core, those cultural features that mediate most directly between humans and their environments and that are essential to subsistence and other basic economic activities. Such features might include technological, social, political, or ideological elements of culture. Core features are most heavily determined by environmental constraints and interactions, while others not as directly linked to the core are determined by cultural-historical factors such as Diffusion or random innovation. Human culture is thus inextricably linked to the larger systems of the natural world. Steward advocated cultural ecology both as a theory concerning the nature of Culture and its transformation and as a set of research methods for investigating cultural phenomena. Theoretically, the most powerful explanations of evolutionary change were to be found in the environment/culture core interaction; methodologically, research should identify and investigate core attributes of culture such as technology, subsistence, economy, the organization of work, landholding, and inheritance, since these are situated most directly at the interface between environment and culture. Steward believed that ecological analysis yielded the most powerful and straightforward results when applied to simple small-scale cultures that are technologically unsophisticated and not buffered from nature by complex supracommunity institutions. This conviction is reflected in his own predilection for the study of hunter-gatherers, particularly his classic The Economic and Social Basis of Primitive Bands (1936) and Basin-Plateau Aboriginal Sociopolitical Groups (1938).

The development of cultural ecology was partly a reaction against the atheoretical, particularistic, culturological, and cultural-historical approaches that dominated American anthropology and archaeology before World War II. While eschewing the environmental determinism and the equally sterile possibilism advocated by some human geographers, Steward championed the search for the causes of sociocultural phenomena, adopting an explicitly natural science perspective. He thus was among the very first materialists in American anthropology. His formulation of cultural ecology was also influenced by the work of Oswald Spengler, Max Weber, and Arnold Toynbee, as well as Karl Wittfogel, and in turn helped shape Wittfogel's theory of hydraulic civilization.

Cultural ecology had an enormous accelerating influence on archaeology beginning in the late 1940s. Until that time American archaeology had remained largely aloof from the strong tradition of ecological, Environmental, and Economic Archaeology long established in Europe. Linked much more closely to anthropology than its European counterparts, archaeology was seen by most anthropologists as the poor handmaiden of ethnography, incapable of a robust identity of its own. Ecological perspectives helped to alter this situation dramatically. By encouraging Gordon Willey to undertake the settlement pattern component of the archaeological investigation of the Virú Valley in 1946, Steward helped pioneer the emergence of a strong tradition of Settlement Archaeology that later included the work of William T. Sanders and Robert McCormick Adams.
This methodological innovation stimulated the extension of ecological and materialist perspectives to the comparative study of the evolution of complex societies. Partly under the stimulus of Steward's ideas, Robert Braidwood began his research into the origins of agriculture in the Near East, utilizing a team of natural scientists who could effectively augment the skills and interpretations of archaeologists. The issues of the agricultural transformation and the evolution of sociocultural complexity have since been dominant themes of American archaeological research. Cultural ecology also helped to forge strong linkages with scholars in related fields who developed their own interests in archaeology, most notably the geographer Karl Butzer.

Several of the basic precepts of the New Archaeology of the 1960s had roots in earlier formulations of cultural ecology. These include the idea of the fundamental adaptive, evolutionary functions of culture, the search for causation and explanation using overtly scientific research models, the interdependence of the archaeological and ethnographic records, and the relevance of biological anthropology. Ecological perspectives especially dominated American archaeology between 1955 and 1980, although they increasingly diverged from the original cultural ecology perspective in important ways. New elements included sophisticated quantification and the adoption of formal models from the biological sciences (e.g., energy flow) and human geography (e.g., locational analysis), as well as concern with agronomy, human fertility, demography, and nutrition. In addition, largely because of the explanatory power of settlement research, ecological investigations of complex societies have become commonplace. Steward himself had emphasized the cultural rather than the ecological dimensions of cultural ecology, but by the 1970s ecological perspectives and methods were much more prominent, and they remain so today.

Criticisms of cultural ecology focus both on Steward's original formulation and on its derived, more explicitly ecological approaches. Among the former are the charges that Steward emphasized qualitative rather than quantitative data, and that the culture core concept is a muddled reinvention or rediscovery of much older, more useful principles devised by Karl Marx. More generally, cultural ecological research is characterized as deterministic, overly reductionist, tautological, dehumanizing, and just plain boring. Such criticisms originate most frequently from structuralists, mentalists, humanists, culture historians, and post-processual archaeologists. None of these schools or approaches is necessarily antithetical to ecological perspectives. Revealingly, many of those who offer such criticisms themselves conduct research that has fundamental adaptive, evolutionary, and ecological implications.

Archaeology, particularly in the United States, has always been prone to intellectual fashion. Today, many scholars who would not characterize themselves as cultural ecologists in the Stewardian mold, or perhaps not even as ecologists or materialists at all, have nevertheless been heavily influenced by the cultural ecology tradition begun by Steward. Ecological perspectives continue to thrive, providing a strong theoretical, scientific, and methodological core of ongoing research. Seen as a pervasive and dynamic point of view rather than an identifiable discipline or school, cultural ecology's legacy includes the convictions that humans and their cultures are integral parts of larger, natural systems, that causal, scientific explanations of cultural phenomena are possible, and that the enterprise of archaeology requires strong linkages not only with the other subfields of anthropology, but with the hard sciences as well.[See also Critical Theory; Culture Historical Theory; General Systems Theory; Marxist Theory; Post-processual Theory; Processual Theory; Structuralism; Theory In Archaeology.]

Bibliography
Julian Steward, Cultural Ecology (1968).

Prehistory of Africa Africa occupies a unique place in world prehistory. Its archaeological sequence is of unparalleled length, for the simple reason that it was almost certainly in this continent that hominids and their distinctive behavior first evolved. In the sub-Saharan regions, because literacy has been restricted to the last few centuries, archaeology is a prime source of information about even comparatively recent periods. The great environmental diversity of the African continent, ranging from snow-capped glaciated mountains, to torrid rain forests, to arid deserts totally devoid of vegetation, provides an unparalleled opportunity to observe human ingenuity and adaptation through time. These environments have, in many instances, survived into modern times comparatively unmodified by large-scale industrialization or mechanized cultivation. Thus, continuing traditional African lifestyles can provide exceptionally informative guidelines for the interpretation of the archaeological record. The significance of African archaeology extends far beyond Africa, yet it is hardly surprising that research in this field can rarely be a high priority for the developing nations of that continent. Despite its huge potential and importance, archaeological research in many parts of Africa remains in its infancy. While intensive investigations have been carried out in several areas, major regions remain almost completely unexplored archaeologically.

Discoveries relating to the earliest periods of human activity have been made in eastern Africa (from Ethiopia southward to Tanzania and inland as far as the western branch of the Rift Valley) and in South Africa. Conditions in these areas have been favorable not only to the preservation of the earliest hominids' bones and of the stone tools that they made, but also to their subsequent exposure for discovery, whether by natural erosion or by quarrying. The concentrations of archaeological discoveries thus do not necessarily mean that the earliest hominids were restricted to these particular parts of Africa, and it seems likely that their ranges in the east and in the south were continuous. However, environmental conditions in this general region were probably better suited to these creatures' lives and activities than those farther to the west.

Recognition in the archaeological record of the earliest evidence for humanity involves a degree of necessarily arbitrary definition. In evolving populations whose scanty representations in the fossil record display a wide range of physical variation, where does one choose to recognize the transition to human status? Given the difficulties in interpreting the simplest and most ancient traces of technology, which survive only in the form of unstandardized stone tools, archaeologists have increasingly sought evidence for human behavioral traits, such as social cooperation, planning, and food sharing. Recent research, especially in East Africa, has made some progress in elucidating these aspects of the past.

More intensive use of particular foodstuffs, both plant and animal, led eventually to the seasonal exploitation of different environments. Such, indeed, may have been the practice in very early times, but its clear attestation in the archaeological record requires the preservation of organic material such as is provided by cave deposits and waterlogged occurrences. Study of prehistoric resource exploitation on a regional basis has enabled archaeologists to demonstrate shifting reliance on, for example, marine foods, plants, and land animals at differing seasons of the year.

In Africa, as in many other parts of the world, a tendency toward reduced tool size is apparent through all periods of prehistory before the invention of metallurgy. This led ultimately to the appearance in virtually all parts of the continent of microliths, implements so tiny that they must have been used hafted, often several together, as composite tools. A variety of cutting and scraping tools were formed in this way, but most characteristic were pointed and barbed arrows; probably the bow and arrow was an African invention. These microliths, with their characteristic steep retouch, were widespread in Africa by about 20,000 B.P., but had first appeared in South Africa significantly earlier, perhaps as much as 100,000 years ago. Two overall trends in stone-tool technology may thus be discerned through the immensely long duration of the African Stone Age: there was a progressive increase in specialization, indicated by the production of a wider range of standardized tools for particular purposes, and there was a steadily more economical use of more carefully selected raw materials.

The African microlithic industries were the work of people who were fully modern in the anatomical sense: Homo sapiens sapiens. Precisely where and when such people first evolved is not yet known, but it is significant that the oldest known fossils generally accepted as being of this type come from sites in South Africa, where they seem to date to about 100,000 years ago. These, if correctly attributed, are the most ancient remains of fully modern people anywhere in the world, and they support genetic evidence that, although controversial, suggests that it may have been in sub-Saharan Africa that Homo sapiens sapiens first developed.

As people became more adaptable and specialized, they were able to respond more readily to changing environmental opportunities. A particularly significant instance of this, and one which had far-reaching consequences, occurred in what is now the southern Sahara and in parts of East Africa during the period 10,000 to 6000 B.C. This period, which corresponded with the final retreat of the northern-hemisphere ice sheets and consequent worldwide adjustments in sea level, saw the establishment of lakes and rivers in a region that was previously (as again today) too arid to support regular human habitation. Beside these waters, previously nomadic groups established semipermanent habitations that were supported by the rich year-round supplies of fish that the lakes provided, supplemented by hunting for meat and by collecting wild vegetable foods. Sites of these settled peoples are characterized by the barbed bone heads of the harpoons with which they fished and by the pottery of which their settled lifestyle enabled them to make use.

Between 5000 and 3000 B.C. the climate in the southern Sahara once again deteriorated. Sources of fish became depleted, many wild animal herds moved southward to better-watered regions, and plant foods were fewer and less reliable. It was at this general time that we find the first evidence that people in this part of Africa were taking steps to control the plants and animals upon which they depended, steps that led ultimately to the development of farming. The extent to which the domestication of animals and plants was an indigenous African development, rather than one due to stimuli from outside that continent, has long been a matter of controversy. The question may be clarified, if not finally resolved, by considering the different species involved and the geographical distributions of their wild forms. Of the continent's most important domestic animals, sheep and goats are not known to have occurred wild in Africa, and they were presumably introduced, already domesticated, from the Near East. Wild cattle, on the other hand, were common in much of the Sahara during the period of high lake levels noted above. In the case of plants, a markedly contrasting situation is apparent. Wheat and barley, probably of ultimate Near Eastern origin, were grown in North Africa and Ethiopia, but virtually all the cereals traditionally cultivated south of the Sahara are of species that occur wild in what is now the southern Sahara and the Sahel. Other crops are of highland Ethiopian origin or, as in the case of yams, from the northern fringes of the equatorial forest.

Convincing archaeological evidence for the initial stages of African farming is scanty, but what there is tends to confirm the geographical conclusions summarized above. Rock paintings in the Sahara, tentatively dated between 7000 and 3000 B.C., provide numerous representations of domestic cattle, indicating, among other features, the importance that was attached to body markings and the configuration of horns. Later art in the Nile Valley, and undated examples in the eastern Sahara, show that attempts were made to constrain or tame many other species, including giraffe and ostrich, that were never successfully domesticated. Bones of domestic cattle come from several sites, notably in Libya, Algeria, Niger, and Sudan, dated mostly to the fifth or fourth millennia B.C. Firm data about the cultivation of plants are much more rarely available. Large numbers of heavily used grindstones on fourth-millennium-B.C. sites in the Sudanese Nile Valley probably indicate use of cereals, but actual remains of the grains themselves are rarely preserved, and the extent to which they were formally cultivated is still uncertain. However, by 1200 B.C., if not before, bulrush millet was intensively cultivated in the western Sahara of Mauritania. The initial stages of African farming development thus almost certainly took place in the same general area as was occupied by the harpoon fishers, and at the time when these peoples' established lifestyle was subject to considerable stress through the lowering of water levels. It is easy to visualize how, in such circumstances, settled people would have exercised control over the herds of formerly wild cattle and begun to protect, to care for, and then to cultivate plant foods in order to maintain their supplies in the face of reduced availability of fish.
Data are not yet available for Ethiopia and the forest fringes: we do not know whether farming began in these areas at the same general period as it did in the southern Sahara. It is, however, important to emphasize that, other than in a very restricted area of the East African highlands, there is no evidence from any part of Africa south of the equator for the practice of any form of farming prior to the start of ironworking late in the last millennium B.C. [See also Africa, Origins of Food Production In; Human Evolution: The Archaeology of Human Origins; Humans, Modern: Origins of Modern Humans.]

Bibliography
J. Desmond Clark, ed., The Cambridge History of Africa, Vol. 1 (1982). J. Desmond Clark and Steven A. Brandt, eds., From Hunters to Farmers: The Causes and Consequences of Food Production in Africa (1984). David W. Phillipson, African Archaeology (1985; 2nd ed., 1993).

Origins of Modern Humans In the context of the long course of human evolutionary history, the origin of modern humans is a relatively recent event. Fossils of modern humans first appear in Africa and the Levant between about 130,000 and 70,000 B.P. Important fossils are Omo I from the Omo Kibish Formation, Ethiopia (130,000 B.P.), Border Cave I from southern Africa (80,000–70,000 B.P.), the numerous fragmentary fossils from Klasies River Cave, southern Africa (the oldest of which date to greater than 90,000 B.P.), and the skeletons from Qafzeh and Skhul, Israel (about 100,000 B.P.). All of these fossils are controversial. Either the dating has been questioned (e.g., Border Cave, Omo I) or the fossils themselves have been interpreted as archaic rather than fully modern (e.g., Klasies River Cave, Qafzeh, Skhul). Furthermore, none of these early fossils has been found associated with the advanced stone tool traditions that occur with indisputably modern humans after about 40,000 B.P. In the Levant they are associated instead with the Mousterian tool tradition that is also found with the Neanderthals, while in Africa they are associated with the similar Middle Stone Age tradition that occurs with premodern African hominids (archaic Homo sapiens). It is only the later, more advanced traditions, the Upper Paleolithic in Europe and the Levant and the Late Stone Age in Africa, that have been interpreted to reflect fully developed modern human culture with cognition and symbolic language.

One of the main controversies surrounding the origin of modern humans is whether the earliest fossils in Africa and the Levant are fully modern and, if so, whether they indicate that modern humans first evolved in this region and then spread out from there, displacing the premodern indigenous populations in Europe and the Far East. This has come to be known as the Out-of-Africa, or African Replacement, Hypothesis, and is primarily associated with paleoanthropologist Chris Stringer and geneticists Rebecca Cann and Alan Wilson. The major alternative explanation, the Multiregional Hypothesis, denies the fully modern status of the controversial early fossils and suggests that premodern populations in Africa, as well as in Europe and Asia, evolved into modern humans in their specific geographic regions. An important corollary of this hypothesis is that hominids are not fully modern unless they are accompanied by archaeological remains that can also be interpreted as fully modern. The Multiregional Hypothesis grew out of the work of Franz Weidenreich in the 1930s and 1940s and is today associated primarily with the paleoanthropologists Milford Wolpoff and Alan Thorne. It argues that there was considerable gene flow between the major population groups in Africa, Europe, and Asia but denies that modern humans evolved earlier in any one of these areas than in the others.

These two major schools of thought use different evidence to support their opposing views of modern human evolution. The Multiregional Hypothesis is based primarily on the recognition of anatomical traits in the skulls of modern Australians and Chinese that are also found in the earlier Homo erectus populations of Java and China. These features, such as the form of the cheek bones or of the bridge of the nose, are interpreted to represent a direct genetic link between the fossil and modern populations. Opponents of the Multiregional Hypothesis argue that these features are inconclusive because (1) some of them are found with greater frequency in modern populations elsewhere in the world, (2) some merely reflect robusticity, and (3) early modern fossils in China, such as Upper Cave 1 from Zhoukoudian, lack any evidence of the continuity traits, thereby confounding the inferred genetic connection.

The Out-of-Africa Hypothesis is based on the argument that the controversial fossils from Africa and the Levant are anatomically modern and that they significantly predate similar modern people elsewhere in the world. There is no doubt that the early moderns from Omo, Klasies River Cave, Qafzeh, and Skhul variably retain primitive features such as brow ridges, relatively large teeth, and a considerable degree of size and robusticity dimorphism between males and females. However, supporters of the Out-of-Africa Hypothesis argue that they have modern features in the skull and also in the postcranial skeleton that fundamentally distinguish them from contemporary archaic humans elsewhere in the world and align them with living humans.

There is also genetic support for the Out-of-Africa Hypothesis, which derives primarily from the fact that both nuclear DNA and mitochondrial DNA show a greater diversity among living Africans than among human populations elsewhere in the world. If that diversity can be equated with antiquity, it would suggest that human populations have been evolving longer on the African continent than elsewhere in the world. This implies a greater antiquity for modern humans in Africa than elsewhere. Until recently it was also suggested that mitochondrial DNA indicated that all living humans could trace their ancestry to a single female who lived in Africa approximately 200,000 years ago. Although the analyses upon which this conclusion was based have been shown to be seriously flawed, new analyses of mtDNA diversity by Henry Harpending and his colleagues provide a model for the evolution of modern humans that is consistent with a single, localized origin. Arguing from the degree of mtDNA variation in people today, these authors suggest that the population giving rise to modern humans could not have been larger than 5,000–50,000 people (1,000–10,000 effective females). Because this number is significantly smaller than the total population size inferred for Homo sapiens in Africa, Europe, and Asia during the Middle Pleistocene, the magnitude of mtDNA diversity in people today would be incompatible with the Multiregional Hypothesis, which assumes that the total Homo erectus population was ancestral to modern humans. Furthermore, the pattern and magnitude of mtDNA diversity within and between living populations suggests two things. First, Europeans most probably arose from African ancestors.
Second, between-population diversity in mtDNA is greater than within-population diversity, suggesting an initial spread of people from a localized origin followed by a period of relative genetic isolation of the migrating people. This would allow the interpopulational variation to develop. Only later in time would there be a rapid increase in population size in the different geographical areas, followed by a higher level of gene flow between populations. This new genetic model for the origin of modern humans implies that the factors involved in the initial evolution and spread of modern human populations were not the same factors that were associated with the subsequent rapid increase in population numbers of these people. It also fits relatively well with what is known from the fossil and archaeological records. Modern human populations from Africa could have spread through the Levant and eastward into Asia sometime around 100,000 years ago, occupying these regions in relatively low density and perhaps interbreeding to some extent with the indigenous populations. At this stage their migration westward into Europe would have been blocked by the Neanderthals. The ultimate spread of modern humans into Europe correlates with the development of the Upper Paleolithic, which appears between about 45,000 and 40,000 years ago not only in Europe but also in the Levant and in Siberia. It would be fair to assume that the cultural advances represented by the Upper Paleolithic were fundamentally associated with the ability of modern humans to displace the Neanderthals and also with the rapid population increase experienced by modern humans in Europe and elsewhere.

It has been argued that some major biological change associated with the evolution of modern humans, such as the evolution of fully developed human cognition and symbolic language, underlies the development of the Upper Paleolithic in Eurasia and the Late Stone Age in Africa. However, there is minimal, if any, evidence for such a biological change. It is now accepted that Neanderthals were functionally capable of producing the full range of modern human speech sounds. Furthermore, there is also behavioral evidence suggesting that they had at least basic symbolic capacity. The important question is why Neanderthals and other premodern humans did not develop the Upper Paleolithic (or the Late Stone Age) and, conversely, why it took the early modern humans represented by Skhul, Qafzeh, Omo, and Klasies River Cave more than 50,000 years before they did so. The answer to this question may be simply that at this stage of evolution it was culture, rather than biological evolution involving intelligence or cognitive capability, that was the driving force of change. The factors underlying the evolution of human language, the rapid and virtually simultaneous appearance of the Upper Paleolithic and the Late Stone Age in Eurasia and Africa, and the apparently associated rapid expansion of human populations may better be seen to include fundamental changes in the social organization of the hominids, involving economic division of labor, food sharing, and greater paternal investment in offspring, as well as ritual behavior associated with these fundamental changes. [See also Cro-Magnons; Genetics In Archaeology.]

Bibliography
P. Mellars and C. Stringer, The Human Revolution: Behavioural and Biological Perspectives on the Origins of Modern Humans (1989). C. B. Stringer, "The Emergence of Modern Humans," Scientific American (December 1990): pp. 68–74. G. Bräuer and F. H. Smith, Continuity or Replacement? Controversies in Homo sapiens Evolution (1992). A. G. Thorne and M. H. Wolpoff, "The Multiregional Evolution of Humans," Scientific American (April 1992): pp. 28–33. A. C. Wilson and R. L. Cann, "The Recent African Genesis of Humans," Scientific American (April 1992): pp. 22–27. H. C. Harpending et al., "The Genetic Structure of Ancient Human Populations," Current Anthropology 34 (1993): pp. 483–496. M. Stoneking, "DNA and Recent Human Evolution," Evolutionary Anthropology 2 (1993): pp. 60–73.

Prehistory and Early History of South Asia The earliest settlement of humans in
South Asia is not well defined. It is known that important evidence of human evolution is documented by the appearance of large apelike (hominoid) creatures in the Miocene of the Siwalik Hills. For the past twenty years research has been conducted in Pakistan, Kashmir, and Haritalyangar. Fossil apes are of the genera Sivapithecus and Ramapithecus, a subgroup of the sivapithecines. They are closely related to the modern gibbon and can be considered a form of Dryopithecus. Gigantopithecus also occurs occasionally. These fossils date to a period between about 11.8 million years B.P. and 7.2 million years B.P. Pleistocene finds have been well summarized by K.A.R. Kennedy (1973). The best-documented fossil human is an archaic Homo sapiens from the bed of the Narmada River. A second hominid fossil, probably Homo sapiens sapiens, was found in Afghanistan at Dara-i Kur in association with a Middle Palaeolithic stone tool assemblage.

The abundance of Lower Palaeolithic artifacts in South Asia contrasts sharply with the spotty human fossil record. There are hundreds of sites that contain core bifaces, choppers, and chopping tools. Lower Palaeolithic tools are reported from most of the major regions of South Asia; the exceptions are southern Sind, Baluchistan, Bangladesh, and Sri Lanka. Tools dating to two million years B.P. are reported at Riwat on the Potwar Plateau, a claim being clarified by continuing research. Systematic excavations at Chirki and Paisra have unearthed, in situ, living and working floors. Research at Didwana in Rajasthan produced environmental data, plus masses of artifacts. Eastern India has evidence that Hoabinhian tool making extended into the Subcontinent.

The South Asian Middle Palaeolithic is abundantly documented by sites, but there is no association with human fossils, except for the Dara-i Kur find. This is a flake industry, with limited evidence for the Levallois technique. Middle Palaeolithic sites are known from most major regions of South Asia. The exceptions are Baluchistan and the eastern Indian states. There is considerable typological diversity in this body of material.

The Upper Palaeolithic of South Asia is not as well documented as the Middle Palaeolithic. Sites appear in some numbers in Gujarat, Rajasthan, the hilly tracts of central and eastern India, and on the Potwar Plateau of Pakistan, most notably at Sanghao Cave. Some contain long, narrow blades taken from prismatic cores. At other sites, Sanghao Cave for example, the artifacts consist of small irregular flakes. A series of radiocarbon dates from this cave, run by Oxford, yielded consistent results and dated the deeply stratified deposits between about 20,000 and 40,000 years B.P., which overlaps the dates for microlithic tool technology in Sri Lanka.

The South Asian Mesolithic has been documented best in Sri Lanka, where a series of thirty-two radiocarbon dates from the sites of Batadomba Lena Cave and Beli-lena Kitulgala date a microlithic chipped stone tool industry to around 28,500 to 10,000 B.P. The earliest anatomically modern Homo sapiens in South Asia comes from Batadomba Lena Cave and is associated with the levels dated to around 28,500 B.P. There are also early dates for human fossils from Fa Hien Cave in Sri Lanka that could push this date back to 31,000 B.P. The South Asian Mesolithic assemblages include microblades, lunates, crescents, triangles, trapezes, and the rest of the microlithic tool kit. Some come from sites of hunter-gatherer peoples, contemporary with the end of the last glacial period and the early Holocene. The microlithic tool kit, used by many peoples who were not just hunter-gatherers, has a long history in the Subcontinent. By the sixth millennium B.P. microlithic tool users were herding cattle, sheep, and goats. Adaptive strategies are apparent at sites with microlithic technology: herding, hunting and gathering, primitive cultivation, and the keeping of domesticated animals. There are assemblages that suggest a symbiotic relationship of South Asian hunter-gatherers with nearby settled, agricultural and herding communities. Not all sites with microlithic assemblages and adaptive strategies can be considered Mesolithic.

The Beginnings of Food Production
The food-producing economy associated with Pakistan and much of India today originated in the uplands of the Iranian Plateau and Afghanistan and is based on the wheat/barley and sheep/goat/cattle constellation of domesticated plants and animals. This is clearly related to the Near Eastern pattern of early food production. It was this complex of plants and animals on which the Harappan and Mesopotamian civilizations were based. The earliest manifestation of this tradition in South Asia comes from the site of Mehrgarh, on the Kachi Plains of the Indus Valley in Pakistan. Period IA at Mehrgarh is an aceramic Neolithic with mud-brick houses. There is a rich, complex collection of palaeobotanical remains, most of which comes from thousands of impressions in the mud bricks of the period. The dominant plant of Period I is domesticated naked six-row barley (Hordeum vulgare subspecies vulgare variety nudum), representing 90 percent of identified plant remains. Domesticated hulled six-row and two-row barley and domesticated einkorn, emmer, and hard wheat were also present. In Period IA the animal economy was dominated by twelve species of large ungulates: gazelle, swamp deer, nilgai, blackbuck, onager, chital, water buffalo, wild sheep, wild goat, wild cattle, wild pig, and elephant. Richard Meadow takes this to indicate that the first inhabitants of aceramic Mehrgarh I exploited the Kachi Plain and the surrounding hills. By the end of the aceramic period the faunal assemblage is different. Remains are of sheep, goats, and cattle, domestic animals of great importance in the Middle East and South Asia today. The radiocarbon determinations for Mehrgarh I are inconsistent. The best estimate for its beginnings is around 7000 B.C. Mehrgarh I compares well in cultural development with sites in the Near East.

There is a second food-producing cultural tradition in the northern regions of the Subcontinent that is called the Northern Neolithic, dating to around 3000 B.C. Most of the sites are found in Kashmir, but there are also settlements on the plains, as at Sarai Khola, near Taxila. These sites have cord-impressed pottery, ground stone knives and ring stones, a rich bone tool industry, dog burials, and semisubterranean houses. They represent the southernmost expression of a North Asian complex with a cultural tradition that has its roots in Inner Asia. A third cultural tradition of early food producers is found in eastern India, and probably Bangladesh. It relates to Southeast Asian traditions found at places like Spirit Cave in northwestern Thailand and the Padah-Lin Caves in Burma. These archaeological assemblages may be early, around 8000 to 7000 B.C. Finally, there is the Southern Neolithic of peninsular India. The antecedents of these farmers and herders are not entirely clear, but the sites appear to date to the late third and early second millennia B.C. These peoples used two forms of gram as well as millets, and were cattle herders. There is evidence for the keeping of domesticated sheep and goats, whose derivations are in the Indus Valley and the Iranian Plateau.

By about 2500 B.C. the Indus, or Harappan, civilization emerged from the food-producing communities of the Indus Valley and surrounding areas. Excavations at the great cities of Mohenjo-daro and Harappa have demonstrated that some of these people were literate, were craft specialists, lived in cities with a complex society, and engaged in long-distance trade. The Indus civilization covered an immense area: over one million square kilometers. The Mature, or Urban, Harappan lasted only about 500 years (ca. 2500–2000 B.C.). Although the Indus civilization was different from the archaic states of the Near East, there were shared traits. Plants and animals used in the subsistence systems are largely the same. They all made extensive use of brick in their buildings, most of which are rectilinear. Wheel-thrown pottery was made, usually fired in an oxidizing atmosphere that produced red to buff colors. A substantial portion was slipped or decorated with mineral paints applied prior to firing. Walter Fairservis (1961) noted that the Indus civilization reached the eastern limits for the practical cultivation of wheat and barley. These observations suggest that the Indus civilization is the easternmost expression of a very large, heterogeneous pattern of ancient urbanization that stretches from northwestern India through Pakistan to the Mediterranean Sea. An interaction sphere of considerable proportions, involving overland and maritime trade, commerce, and diplomacy, probably operated across the entire area.

The cities of the Indus civilization were abandoned as functioning urban centers around 2000 B.C. The reasons are unclear, but invading Aryan tribes or the natural damming of the Indus River were probably not factors. There is also cultural continuity between the Indus civilization and the succeeding Early Iron Age of the Painted Grey Ware in northern India, as documented at Bhagwanpura in Haryana and other sites in the region. There are still gaps in the sequences in Gujarat, Sind, and the West Punjab, which no doubt will be filled in once systematic exploration and excavation are completed.

Central India, in 2000 B.C., was home to diverse peoples, documented best by distinctive pottery styles such as Ahar, Kayatha, Malwa, and Jorwe. They were wheat and millet farmers who herded cattle, sheep, and goats. There is continuity in Central India and Southern India between the earliest farming/herding peoples (Malwa-Jorwe and the Southern Neolithic) and the succeeding Early Iron Age of the Peninsular Indian Megalithic Complex. There is evidence, though infrequent, for the occurrence of smelted iron in the South Asian Bronze Age at several sites. The widespread use of iron occurs in South Asia at about 1000 B.C., which is close to the beginnings of the Iron Age in a broad band of regions stretching from the Mediterranean Sea to Southeast Asia and China. There is considerable regionalization in the Early Iron Age in India.

The earliest texts in ancient India are the so-called Vedas, consisting of four books, each a collection of hymns. The first is the Rgveda, composed and codified to enable Vedic priests to perform the sacrificial rite necessary for the proper ordering of the life of the Aryan people. The other texts are the Samaveda, the Yajurveda, and the Atharvaveda. The date of the Rgveda is not clear, although the relative chronological sequence for the composition of the texts seems certain: Rgveda/Samaveda, the Yajurveda, and finally the Atharvaveda. The date is based on an analysis of the Sanskrit employed in the texts. The best estimate is that the Rgveda was codified between 1200 and 800 B.C. Most western scholars favor the later date. There is some geographical information in the Rgveda that has been relatively well studied. Rivers mentioned in the text can be equated with modern streams of the Punjab (Indus, Jhelum, Chenab, Ravi, Sutlej). Some, like the ancient Sarasvati, are now largely dry. The people who composed these hymns were familiar with the Punjab, which they called the "land of the seven rivers." This is the territory from the Indus in the west to the Yamuna in the east and from the mountains of the north to the Panjnad in the south. Sind, the Ganges Valley, and Peninsular India were almost unknown to them. References are made to gold, silver, lead, copper, and probably bronze, but not iron. However, iron was known by the time of the Atharvaveda, which suggests that the Rgveda was codified prior to the widespread use of iron in northern India and Pakistan and that the Atharvaveda was written after 1000 B.C. The peoples of the Rgveda were cattle pastoralists who engaged in some farming. They were a tribal people whose only specialists were chiefs and priests. There are no archaeological sites that can be linked to the Vedic texts, but if the dating is correct, the Painted Grey Ware sites of the Punjab and Haryana would have been occupied at about that time.

Other bodies of ancient writing, called the Brahmanas, Aryanakas, and Upanishads, form the balance of the literature that is called Vedic. These and other texts reveal a great deal about the growing complexity of ancient Indian society with the emergence of the Second Urbanization on the Plains of the Ganges by about 500 B.C. The geographical focus of the literature gradually moves east, down the Ganges, and there are increasing references to Sind in Pakistan and Peninsular India. Archaic states are mentioned: Vatsa, Avanti, Kalinga of modern Orissa, Kasi, now Benares or Varanasi, and Magadha in southern Bihar with its famous city of Pataliputra, modern Patna. The texts tell of kings, states with elaborate bureaucracies, armies, taxes, and law codes. In this context Buddhism and Jainism were born and spread into Southeast Asia, China, and Japan. Widespread writing returned to South Asia in the fourth century B.C., with the edicts of the first great king, Asoka. Two scripts were employed: Kharoshti in the west, a derivative of Aramaic writing, and Brahmi in the east, also thought to be related to Aramaic, though that association is more doubtful than Kharoshti's.

The archaeology of the second urbanization of northern South Asia is documented by excavations at a number of early cities: Taxila, Charsada, Hastinapura, Ahichchhatra, and Sisupalgarh. The subject has been brought together by Ghosh (1973), Erdosy (1987), and F. R. Allchin (1990). It is surprising that Sind and Baluchistan are mentioned infrequently in the ancient literature of South Asia, but the Deccan and South India were not neglected, although urbanization and sociocultural complexity were relatively late there. Literacy is associated with the so-called Tamil-Brahmi inscriptions of the early centuries B.C. These and other texts note kings, states, and political conflicts. The era coincides generally with the reopening of sea trade between South Asia and the west, and eventually Rome. There is an implication that this economic stimulation and competition was intimately involved with the growth of sociocultural complexity in South India. The archaeological study of South India at the time of Christ involves a consideration of the latest Iron Age Megalithic burials and settlements, and the great coastal ports like Arikamedu and Kaveripattinam. Roman coins in Megalithic burials and the presence of terra sigillata and amphorae in many sites make a strong case for commerce. There is also a robust set of documents that guide archaeological research, especially L. Casson's The Periplus Maris Erythraei (1989). [See also Anuradhapura; Asia, Origins of Food Production In: Origins of Food Production In South Asia; Nindowari; Vijayanagara.]

Bibliography
Walter A. Fairservis, Jr., "The Harappan Civilization: New Evidence and More Theory," Novitates No. 2055 (1961). Arthur L. Basham, The Wonder That Was India, 3rd ed. (1967). Chester Gorman, "Excavations at Spirit Cave, North Thailand," Asian Perspectives 13 (1970): pp. 79–107. A. Ghosh, The City in Early Historical India (1973). Kenneth A. R. Kennedy, "The Search for Fossil Man in India," in Physical Anthropology and Its Expanding Horizons: Professor S. S. Sarkar Memorial Volume, ed. A. Basu (1973): pp. 25–44. Walter A. Fairservis, Jr., The Roots of Ancient India, 2nd ed. (1975). Chester Gorman, "A Priori Models and Thai Prehistory: A Reconsideration of the Beginnings of Agriculture in Southeastern Asia," in Origins of Agriculture, ed. C. A. Reed (1977): pp. 321–355. Gregory L. Possehl and Kenneth A. R. Kennedy, "Hunter-gatherer/Agriculturalist Exchange in Prehistory: An Indian Example," Current Anthropology 20 (1979): pp. 592–593. Jim G. Shaffer, "Bronze Age Iron from Afghanistan: Its Implications for South Asian Protohistory," in Studies in the Archaeology and Palaeoanthropology of South Asia, eds. K.A.R. Kennedy and G. L. Possehl (1984): pp. 41–62. J. C. Barry, "A Review of the Chronology of the Siwalik Hominoids," in Primate Evolution, eds. J. G. Else and P. C. Lee (1986): pp. 93–105. George Erdosy, "Early Historic Cities in India," South Asian Studies 3 (1987): pp. 1–23. R. W. Dennell, H. Rendell, and E. Hailwood, "Early Tool-Making in Asia: Two-million-year-old Artefacts in Pakistan," Antiquity 62 (1988): pp. 98–104. L. Casson, The Periplus Maris Erythraei (1989). Kenneth A. R. Kennedy, "Fossil Remains of 28,000-year-old Hominids from Sri Lanka," Current Anthropology 30 (1989): pp. 394–399. F. R. Allchin, "Patterns of City Formation in Early Historic South Asia," South Asian Studies 6 (1990): pp. 163–174. Arthur L. Basham, The Origins and Development of Classical Hinduism (1990). Dilip K. Chakrabarti, The Early Use of Iron in India (1992).

Harappa, one of the best-known cities of the Indian Bronze Age, is located in
Punjab Province of Pakistan on the southern, or left, bank of the Ravi River. It is the type site of the Harappan, or Indus, civilization that flourished on the plains of Pakistan and western India from about 2500 to 2000 B.C. Mohenjo-daro lies 400 miles (645 km) to the southwest. These cities were once thought of as twin capitals of a vast Harappan empire, but that is no longer a valid perspective. The recent discovery of a third Harappan city at Ganweriwala in Cholistan, midway between Harappa and Mohenjo-daro, is the most powerful reason to reject that notion. No one knows how the Harappan polity operated or the role urban centers played.

Harappa was first recognized as an archaeological site by Charles Masson, a deserter from the British army, in 1826. It came under systematic excavation in the winter field season of 1920 to 1921 by Rai Bahadur Daya Ram Sahni of the Archaeological Survey of India. Excavation continued through the 1920s and 1930s. The key report for this work is M. S. Vats (1940). Sir Mortimer Wheeler conducted one season of work in 1946, and George F. Dales renewed work there in 1986. Work continues today under the direction of Richard Meadow.

The apparent size of Harappa, taken from the mounded area and associated artifact scatter, is approximately 250 acres (100 ha). But archaeological deposits dating to the Mature phase of the Harappan Culture Tradition have been found under alluvium around the city; no one is certain of its exact size, but it is perhaps as large as 495 acres (200 ha). With a population density of about 200 people per hectare, and all 250 acres (100 ha) settled at one time, the total population would have been about 20,000.
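The arithmetic behind that estimate is simple enough to check directly. Below is a minimal sketch in Python; the density of 200 people per hectare is the assumption quoted above, and the function name and the 200-ha alternative case are illustrative, not part of the excavators' method.

    # Illustrative back-of-the-envelope estimate, not the excavators' method.
    # The density (200 people/ha) and the fully settled area are the
    # assumptions quoted in the text above.

    def population_estimate(settled_hectares: float, people_per_hectare: float = 200.0) -> int:
        """Rough head count for a fully settled area at a given density."""
        return round(settled_hectares * people_per_hectare)

    # The visible mounded area, ~250 acres (100 ha), yields the figure of ~20,000.
    print(population_estimate(100))  # 20000
    # If buried deposits really extend the site to ~495 acres (200 ha),
    # the same density would double the estimate.
    print(population_estimate(200))  # 40000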

Recent work at the site has defined five phases of occupation: Period V (Cemetery H, Post-urban Harappan, ca. 1900–1500 B.C.); Period IV (Transition from Mature Harappan to Post-urban, ca. 2000–1900 B.C.); Period III (Mature Harappan, ca. 2500–2000 B.C.); Period II (Transition from Early to Mature Harappan, ca. 2600–2500 B.C.); and Period I (Early Harappan, ca. 3200–2600 B.C.). Periods have been defined stratigraphically. Absolute chronology is based on calibrated radiocarbon dates.

There is an imposing high area on the west surrounded by substantial brick walls. It is generally called the AB Mound. Wheeler labeled it a citadel, another archaic notion about the city. A large building on Mound F at the northern end has sets of parallel walls laid precisely on either side of a central road or corridor and is thought of as a granary, although this has never been confirmed by charred grain, storage vessels, or other collateral evidence. There are, however, a series of circular threshing platforms to the south of the granary building. Their function has been determined through careful excavation of the wooden mortars in their centers, associated with grain husks. There are two cemeteries: one, designated R-37, the largest known place of interment at that time, is associated with Period III; Cemetery H is a burial ground for Period V. There was diversity in the treatment of the dead, although skeletons have been found in an extended, supine position inside wooden coffins. Artifacts from the Mature Harappan period include the usual square stamp seals, black-on-red painted pottery, and carnelian beads, some of which were etched. There is extensive use of baked brick, a distinctive feature of the Harappan civilization. [See also Asia: Prehistory and Early History of South Asia.]

Bibliography
Madho Sarup Vats, Excavations at Harappa, 2 vols. (1940). R. Eric Mortimer Wheeler, "Harappa 1946: The Defenses and Cemetery R-37," Ancient India 3 (1947): pp. 58–130. Gregory L. Possehl, "Discovering Ancient India's Earliest Cities: The First Phase of Research," in Harappan Civilization: A Contemporary Perspective, ed. Gregory L. Possehl (1982): pp. 405–413. Jonathan Mark Kenoyer, "Urban Processes in the Indus Tradition: A Preliminary Model from Harappa," in Harappa Excavations 1986–1990: A Multidisciplinary Approach to Third Millennium Urbanism, ed. Richard H. Meadow (1991): pp. 29–60.

Indus Civilization The Indus, or Harappan, civilization rose on the plains of the
Greater Indus Valley of Pakistan and northwestern India in the middle of the third millennium B.C. The civilization is now dated to 2500 to 2000 B.C. This period is also called the Mature Harappan. It was the time when the great cities of Mohenjo-daro and Harappa were functioning urban centers. They were inhabited by a population acquainted with the art of writing, and there is abundant evidence for social stratification and craft and career specialization.

Unlike the Mesopotamian civilization and Dynastic Egypt, the Indus civilization was not part of the ancient literature of the Indian subcontinent. Evidence for both of the western civilizations was preserved in the Bible and other lore known to the scholarly world. The standing monuments of Dynastic Egypt that survive to the present day were also testimony to the Bronze Age civilization of northeastern Africa. But the Vedic texts of ancient India, the earliest of the subcontinent's historical literature, contain no direct reference to the Harappan civilization. The same is true for the Brahmanas, Aryanakas, and Upanishads, which form the balance of the large body of literature that is called Vedic. Archaeologists employed by the British colonial government of India had no hint that there was a vast civilization of the third millennium, and it was an act of pure archaeological discovery that brought it to light.

The story of discovery begins in the nineteenth century, when the distinctive square stamp seals were found at the site of Harappa on the banks of the Ravi River in the west Punjab of Pakistan. The writing on them was unknown to epigraphers of the age, which gave these objects an importance and the site a constant, but low-keyed, interest. For this reason Sir John Marshall, the Director General of the Archaeological Survey of India, sent his colleague Rai Bahadur Daya Ram Sahni to excavate there in the winter field season of 1920 to 1921. Sahni found more seals, but still had nothing to connect them to. A year earlier Rakhal Das Banerji, the Superintending Archaeologist of the Western Circle of the Archaeological Survey of India, had visited Mohenjo-daro, 400 miles (645 km) to the southwest of Harappa, on the banks of the great Indus River of Sind Province. The site had been first recorded in 1911–1912, but its significance had not been recognized. Banerji seems to have been a man with sharp intuition, and he conducted a small-scale excavation at Mohenjo-daro with his own modest field funds. His work also produced the square stamp seals with the unknown script on them, and he recognized the parallels with the seals published from Harappa. The next year (1923–1924) there were teams digging at both sites, with the full blessings of the Director General's office of the Survey. Marshall made a public announcement of the discovery of a new civilization in the same year. Archaeology in Egypt, the Near East, and South Asia was prospering in the 1920s. It was the era during which Tutankhamun's tomb was discovered, and the British Museum and The University Museum, Philadelphia, excavated at Ur, where they discovered the famous Royal Graves. Excavation at Mohenjo-daro continued on a very large scale until 1931, when the Great Depression forced the termination of work there.

The chronology of the Mature Harappan civilization is based on radiocarbon dates, with one reasonably good, if general, cross-tie to the Akkadian Period of Mesopotamia. There are 105 radiocarbon dates for the Mature Harappan that have a wide range but average around 2282 B.C. The best range for the Mature, Urban Harappan can be taken to be roughly 2500–2000 B.C. This correlates well with the date for the Akkadian Period in Mesopotamia. A fairly large number of Harappan artifacts, including Indus stamp seals, etched carnelian beads, and other iconography, have been found in these contexts. There are also Mesopotamian texts with the personal name Meluhha, which has been identified as the Mesopotamian name for the Indus civilization.
The objects, and the textual citations to people from ancient India and Pakistan resident in Mesopotamia, support the general dating of the Indus civilization very well.
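Because this entry moves between dates in B.P. and B.C., it may help to make the underlying arithmetic explicit. The sketch below is illustrative only: the determinations listed are hypothetical placeholders, not any of the actual 105 Mature Harappan dates, and real chronologies calibrate each radiocarbon date against a calibration curve before summarizing. The one fixed fact used here is that radiocarbon "present" is A.D. 1950, so a calendar age in years B.P. converts to B.C. by subtracting 1950.

    from statistics import mean

    def bp_to_bc(years_bp: float) -> float:
        """Convert a calendar age in years before A.D. 1950 to years B.C."""
        return years_bp - 1950

    # Hypothetical calibrated determinations in years B.P. (placeholders only).
    dates_bp = [4180, 4230, 4350, 4270, 4120]

    average_bc = mean(bp_to_bc(d) for d in dates_bp)
    print(f"average: about {average_bc:.0f} B.C.")  # about 2280 B.C. for these values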

Geography
Settlements of the Mature Harappan Period are found over a very large area, exceeding one million square kilometers. The most westerly of these settlements is a fortified site called Sutkagen-dor, near the border between Pakistan and Iran near the Arabian Sea. This was once thought to be a port, but that idea is no longer tenable. The site of Lothal in Gujarat anchors the southeastern point of the Mature Harappan. The northeastern sites are found in the upper portion of the Ganga-Yamuna River Doab, mostly in Saharanpur District of Uttar Pradesh. There are also settlements at Manda in Jammu (India) and near Ropar on the upper Sutlej River. The greatest concentration of sites is in Cholistan ("Desert Country") of Bahawalpur and Rahimyar Khan Districts of Pakistan, with one hundred eighty-five sites closely spaced around the terminal drainage of the ancient Sarasvati, or Hakra, River. The most northern of the sites is Shortughai on the Oxus River in northern Afghanistan. This was almost certainly a trading center. It is distantly removed from other Mature Harappan settlements and therefore not used in the calculation of the total area.

Urban Origins
The beginnings of the Mature Harappan are still not well understood. There was a widespread Early Harappan period with no cities, or even particularly large settlements. There is little sign of social stratification in the Early Harappan and craft specialization is not developed to a marked degree. There appears to have been a period of rapid culture change at about 2600 to 2500 B.C. during which most of the distinctly urban or complex sociocultural institutions of the Indus civilization came together.

Major Settlements
The major settlements of the Harappan civilization are Mohenjo-daro and Harappa. There is a third city, Ganweriwala in Cholistan, approximately 198 acres (80 ha) in size; it has not been excavated. Other major sites are Chanhu-daro, Lothal, Dholavira, and Kalibangan. Chanhu-daro, a Mature Harappan town located 70 miles (113 km) south of Mohenjo-daro, was important for the presence of a workshop and for understanding the stratigraphic relationship between the Mature Harappan and succeeding cultures. Lothal, located in Gujarat at the head of the Gulf of Khambhat (Cambay), provides an understanding of Mature Harappan trade and artisanry and of the stratigraphic relationship between the Mature Harappan and the Post-Mature Harappan in the region. The large brick-lined enclosure at the site, which was once described as a dockyard, probably does not fit that description. Dholavira, located in Kutch, midway between Chanhu-daro and Lothal, is a 145-acre (60 ha) site under excavation. It is fortified and has a stratigraphic succession from the Early or Pre-Harappan to the Mature Harappan and succeeding cultures. Kalibangan is very close to the present border between India and Pakistan on a now dry river, known today as the Ghaggar-Hakra and in antiquity as the Sarasvati. It is a good example of a regional center of the Indus civilization and has a well-documented stratigraphic succession from the Early to the Mature Harappan.

Writing
In spite of many claims to the contrary, the Indus script remains undeciphered.

Subsistence
The peoples of the Indus civilization were farmers and herders, with hunting, fishing, and gathering as subsidiary activities. The chief food grains were barley and wheat, in that order of importance. They also cultivated pulses and oilseeds: the chickpea, field pea, mustard, and sesame. The evidence for the cultivation of rice during the Mature Harappan is ambiguous, but rice use is possible. The Harappans used grapes, but their status as a domesticated plant is unknown. They engaged in some gathering of wild plants, the most common of which is the Indian jujube. Surkotada, a Mature Harappan site in Kutch District of India, has produced a diverse set of wild plants that all seem to have been gathered for their seeds. Twenty-five species or genera were found, including well-known plants such as Dichanthium, Panicum sp., Carex sp., Amaranthus sp., and Euphorbia sp. Date seeds are also part of the Mature Harappan palaeobotanical sample. The earliest cotton in the Old World was found at Mohenjo-daro. Several examples of this material exist, all preserved in the corrosion products of metallic objects. One patch of cloth was preserved from the bag used to hold a silver vessel hidden in a floor. A line used for fishing was preserved because it was wrapped around a copper hook. Cotton seeds may be present at the site of Mehrgarh in Period II.

The peoples of the Indus civilization were cattle keepers on a grand scale. One of the consistent patterns in Harappan archaeology is that cattle remains usually constitute above 50 percent of the faunal assemblage, often much more. This observation and the cattle imagery in art make it clear that this was the premier animal in their culture, and it is highly likely that cattle were the principal form of wealth. The Harappans also kept substantial numbers of water buffalo, sheep, goats, and pigs. They kept domesticated dogs, and the figurines show that some of them wore collars and that there were breeds: some with curved tails over the back, one that looked something like a bulldog, and a thinner, more gracile breed that resembles the modern Afghan hound. The chicken was domesticated from the wild Indian red jungle fowl, the earliest remains of which are found at Mohenjo-daro. These people were also fish eaters. Most river sites have the remains of the local freshwater fish, especially a variety of carp. Recent excavations at Harappa have revealed the presence of marine fish, indicating some form of commerce in dried or salted fish. The Mature Harappan occupation at Balakot, a site near the Arabian Sea just to the east of Karachi, had sufficient remains from marine animals, especially a grunt (a marine fish), to allow the rough estimate that maritime food resources contributed about half of the dietary intake from all fauna, with most of this coming from fish.

Trade and Crafts
The peoples of the Mature Harappan were wide-ranging traders, within their territories of Pakistan and northwestern India, and in more distant places, including Afghanistan, Central Asia, the Iranian Plateau, and Mesopotamia. The internal trade and commerce involved subsistence materials, such as the aforementioned fish. It extended to the following raw materials: copper (abundant resources in Baluchistan and the Khetri Belt of Rajasthan); gold (placer and dust available in the Indus River, Kashmir, and other places); silver (southern Khetri Belt, Kashmir); chert (abundant resources in the Rohri Hills of northern Sind); soft gray stone, or steatite (widely available in Baluchistan, the Northwest Frontier, and Rajasthan); chalcedony and other semiprecious stones (in Gujarat, Kutch, the Western Ghats, and Baluchistan); lapis lazuli, an important rich blue stone (available in the Chagai Hills of Baluchistan as well as the better-known Afghan source in Badakhshan); shell (the species used broadly available from the maritime coast); and timber (in the Himalayas).

The roots of artisanry for the Indus civilization are historically deep, going back to Period III at Mehrgarh, which dates to the middle of the fifth millennium B.C. It is in these contexts that one sees the beginnings of copper-based metallurgy, the development of wheel-turned pottery, and the firing of hard red wares, as well as the development of the bead-making technology for which the Harappans are so famous. The Harappan artisans took these materials and others and turned them into a wide range of products. Major craft centers have been found at the sites of Chanhu-daro, Lothal, and Mohenjo-daro.

Harappan foreign commerce is a vast topic. The trade and interaction with Central Asia is covered in works by Ahmad Hasan Dani (1990) and Henri-Paul Francfort (1992). This interaction has deep roots and began much earlier than the Mature Harappan, as demonstrated by an examination of the ceramics of the two regions, especially the so-called Quetta Ware, and female figurines with distinctive long, joined legs, stretching to the front. It seems to be rooted in the pastoral nomadism that links Central Asia with the Punjab and Indus Valley. The interaction with the Iranian Plateau is not well documented, but one of the best sources in English is Asko Parpola's piece in South Asian Archaeology 1981 (1984). The trade with Mesopotamia, which for the most part seems to have been maritime trade, is well documented, with book-length treatments by Shereen Ratnagar (1981) and Daniel T. Potts (1990). A hypothesis about the beginnings of this maritime trade appears in Possehl's Kulli (1986). The merchandise that appears most prominently in the textual references to products of Meluhha includes carnelian, lapis lazuli, pearls, a thorn tree of Meluhha, mesu wood, fresh dates, a bird of Meluhha (? as figurines), a dog of Meluhha, a cat of Meluhha, copper, and gold. The products traded to Meluhha are not as clearly documented, but they included food products, oils, cloth, and the like. There are many Harappan artifacts in Mesopotamia, including seals, etched carnelian beads, and ceramics. The Mesopotamian products found in Harappan contexts at Indus sites are very few, and there is a considerable disparity in the archaeological record of this subject.

Religion
The best discussion of the religion of the Harappan peoples remains the essay by Sir John Marshall (1931), although the notion that there is a Proto-Shiva on the famous seal from Mohenjo-daro is not part of contemporary thought. The central theme of Harappan religion as it comes from the archaeological record is the combined male-female deity, symbolized by animal horns and the broad, curving plant motifs. This is a very broadly defined set of images that reflects an equally broad set of ideas about the principal Harappan deity, although there seems to be no image of this entity. What is seen is a male, horned animal god, generally associated with the water buffalo, and a female plant deity represented as either a plant motif or a human figure standing in or under a plant. The imagery for these two gods is clearly cognate with the broad, sweeping curve of the buffalo horns found in the plant designs. This relationship was there for some purpose, and it is reasonable to speculate that it was to convey the sense that what one sees as two is, in fact, a single, unseen entity or idea. It would have been an androgynous being, combining the features of both male and female and obviously not sexually neuter. This is a feature of gods in the Hindu pantheon, as exemplified by Ardhanarisvara, the manifestation of Siva who is half man and half woman. It also carries a sense of synthesis for the dualism of what came to be called saktism that Sir John Marshall noted in his early essay on Harappan religion. This imagery is proposed here, but it should be considered a hypothetical extension of what is seen quite clearly: the dualism of male/female and animal/plant.

All of the plant and animal worship discussed by Marshall can be seen as specific aspects of the great duality of the Harappan Great Tradition. The multiheaded animals, unicorns with elephant trunks, perhaps unicorns themselves as in the terracotta figurines from Chanhu-daro, are all themes appropriate to zoolatry. So are the tigers with bull horns and the half-human, half-quadruped figures seen on the seals, such as on the cylinder from Kalibangan. These are proposed here to be only an elaboration of the animal themes supremely portrayed by the Buffalo Deity on seal number 420 and earlier on the Buffalo Deity pot from Kot Diji. The abundant plant motif, especially as pictured on sealings and painted designs on pottery, is an analogous elaboration of the principal theme seen, for example, on the seal of Divine Adoration (number 430), or in the painted motifs on pottery from Kalibangan or Mundigak.

The place of water in Indus ritual, and in the Harappan civilization generally, seems to have been prominent. There is abundant evidence that the Harappan affinity for cleanliness (household and civic drainage, bathing facilities in many if not most houses) was simply a microcosm of the Great Bath. Seen from this perspective, the Great Bath was the civic-level facility for the water ritual, bathing, and cleanliness that took place in the homes of ordinary citizens on a regular basis. The importance of water for the growth of plants and the crops that sustained the Harappan peoples would suggest that the water ritual was affiliated with the female/plant side of the duality in the Harappan Great Tradition. Fire worship, if the evidence from Kalibangan is accepted, is somewhat more difficult. It plays an important role, at times with water, in the religious life of many peoples and would not be unexpected in Harappan life. There is a temptation to see fire as the opposite of water and to place it on the male/animal side of the great Harappan duality. This should be considered a tentative suggestion, since it builds on logic that has already been qualified. Seen from this perspective, the religion of the Harappan civilization can be perceived as a single institution, with perhaps two different aspects in the personae of the male/animal deity and the female/plant goddess.
The possibility that various domains, settlements, or peoples claimed to be devotees of one or the other of these gods has been suggested, along with the thought that older, diverse parts of the Early Harappan systems of belief were practiced in Mature Harappan times as well. The parallels to these observations found in later Hinduism are in some respects quite striking.

The Eclipse of the Ancient Cities of the Indus


The excavations at Mohenjo-daro between 1922 and 1931 demonstrated conclusively that this ancient city was largely abandoned at the end of the Mature Harappan. There is a very small amount of pottery associated with the so-called Jhukar Culture. The distribution of this pottery at Mohenjo-daro is not well understood, nor is its development out of the wares of the Mature Harappan. It may indicate that there was a small community of people living in some probably restricted parts of Mohenjo-daro in the early second millennium. There is also a later Buddhist monastery and stupa on the Mound of the Great Bath. For all intents and purposes, it is fair to say that the city had been dead as an urban center since the opening decades of the second millennium B.C. Similar evidence was gained through the excavations at Harappa, although the early second millennium occupation there, called Cemetery H after the excavation area where it was first observed, is somewhat larger and more apparent than the Jhukar occupation of Mohenjo-daro. The situation at Ganweriwala is less clear, since there has been no excavation there, but surface prospecting indicates that the occupation was limited to Mature Harappan times.

The evidence from the three urban centers, as well as regional surveys, indicates that Sind and the west Punjab experienced the widespread abandonment of Mature Harappan settlements at the opening of the second millennium. There was either a migration out of these areas or a shift in the system of settlement and subsistence to one that left very little archaeological trace, since site counts drop in a precipitous way: from sixty-six Mature Harappan sites in Sind down to just nine in the Jhukar era, and from one hundred ninety Mature Harappan sites in the west Punjab down to forty-seven in Cemetery H times.

The same is not true everywhere. In Gujarat, site counts in the Post-Urban Phase of the second millennium are about half those of the Mature Harappan. Rojdi, an important site in central Saurashtra, underwent a major rebuilding in the opening centuries of the second millennium that expanded its total size by half. In the Indian Punjab, Haryana, northern Rajasthan, and western Uttar Pradesh, there are two hundred sixteen known Mature Harappan settlements. The opening centuries of the second millennium see this number increase to eight hundred fifty-nine, a fourfold rise. No one knows for sure the full meaning of these observations, but they do seem to indicate that the eclipse of the Mature Harappan was a regional phenomenon that did not strike all parts of the Harappan world in the same way.

Older theories holding that the cities and civilization were destroyed by invading Aryan tribes, as depicted in the Rig Veda, make very little sense. This is in part because there is no evidence for the sacking of any of the Mature Harappan settlements, nor is there chronological agreement between the date of the Vedic texts and the changes seen so graphically at Mohenjo-daro and Harappa.

The proposition that a natural dam formed across the Indus River in Sind and flooded out the civilization has been widely critiqued and is no longer considered viable.

Conclusion
There is still much to be learned about the Indus civilization. We know of the grandeur of its cities, with early grid town planning and a mastery of civic drainage, and of its wide-ranging contacts and technological sophistication. But many things about these people elude us: their social and political organization, the details of their religion, the manner in which their cities were governed, the nature of warfare (if present), and the place of writing in their culture. There is a great deal to be done to clarify and expand our knowledge of the ancient city dwellers of India and Pakistan. [See also Anuradhapura; Asia: Prehistory and Early History of South Asia; Nindowari; Vijayanagara.]

Bibliography
Sir John Marshall, Religion, in Mohenjo-daro and the Indus Civilization, 3 vols., ed. Sir John Marshall (1931): pp. 48–78. Sir Mortimer Wheeler, The Indus Civilization, 3rd ed. (1968). B. K. Thapar, New Traits of the Indus Civilization at Kalibangan: An Appraisal, in South Asian Archaeology, ed. Norman Hammond (1973): pp. 85–104. Walter A. Fairservis, Jr., The Roots of Ancient India, 2nd ed. (1975). Gregory L. Possehl, ed., Ancient Cities of the Indus (1979). Shereen Ratnagar, Encounters: The Westerly Trade of the Harappa Civilization (1981). Asko Parpola, New Correspondences Between Harappan and Near Eastern Glyptic Art, in South Asian Archaeology 1981, ed. Bridget Allchin (1984): pp. 176–95. Gregory L. Possehl, Kulli: An Exploration of Ancient Civilization in South Asia (1986). R. J. Wasson, The Sedimentological Basis of the Mohenjo-daro Flood Hypothesis: A Further Comment, Man and Environment 11 (1987): pp. 12–23. Ahmad Hasan Dani, Central Asia and Pakistan Through the Ages, Lahore Museum Journal 3:1 (1990): pp. 1–13. Gregory L. Possehl, Revolution in the Urban Revolution: The Emergence of Indus Urbanization, Annual Review of Anthropology 19 (1990): pp. 261–82. Daniel T. Potts, The Arabian Gulf in Antiquity, 2 vols. (1990). Henri-Paul Francfort, New Data Illustrating the Early Contact Between Central Asia and the North-West of the Subcontinent, in South Asian Archaeology 1989, ed. Catherine Jarrige (1992): pp. 97–102. Gregory L. Possehl, ed., Harappan Civilization: A Recent Perspective, 2nd rev. ed. (1993).

Mohenjo-daro, Mound of the Dead Men, is one of the most famous Bronze Age
cities of the world. It is located in Sind Province of Pakistan on the western, or right, bank of the Indus River at 27°18' north latitude, 67°07' east longitude. Mohenjo-daro and Harappa, 400 miles (643 km) to the northeast, are the two principal excavated cities of the Indus or Harappan civilization. They prospered on the plains of Pakistan and western India from around 2500 to 2000 B.C. Stuart Piggott (1950) proposed that they were twin capitals of a vast Harappan empire, but this is not part of current theory. The discovery of a Harappan city at Ganweriwala in Cholistan, midway between Harappa and Mohenjo-daro, is a strong reason to dismiss Piggott's twin-capitals notion. It is not known how the Harappan polity operated or what role was played by its large urban places.

Mohenjo-daro was first visited by an archaeologist, D. R. Bhandarkar, in 1911 to 1912. Excavation began in the winter field season of 1922 to 1923, just after excavation had taken place at Harappa. The similarity between the remains from these sites led archaeologists to continue to excavate, but the discovery of the Indus civilization was not announced until 1924. Intensive excavation continued at Mohenjo-daro until 1931, when the Great Depression forced the end of the large-scale work, which was published in two substantial reports (Marshall 1931 and Mackay 1937–38). Smaller excavations continued through the 1930s but not on a yearly basis. Sir Mortimer Wheeler undertook one season of work there in 1950 and George F. Dales did the same in 1964. A team of architects and archaeologists headed by Dr. Michael Jansen has been working at the site since 1979, mapping the remains, conducting intensive surface surveys, and producing general documentation.

Mohenjo-daro appears to have been occupied during the Mature, Urban Phase Harappan of the Harappan Cultural Tradition (2500–2000 B.C.). There is a later Buddhist stupa and associated monastery of the early centuries A.D., but it is small. The lowest levels of the site have never been revealed in a substantial way because of the high groundwater table of the Indus Valley, and the beginnings of the city are obscure. There is no evidence for an Early Harappan or pre-Urban Phase occupation. Michael Jansen has proposed that it may be a founder's city, planned in the Mature, Urban Phase Harappan prior to its construction, like the Alexandrias that were built by Alexander the Great.

There are two parts to the city. A high mound to the west, with the so-called Great Bath, is separated from the Lower Town to the east by an empty space that has been shown, through excavation, never to have been settled. The Mound of the Great Bath is the site of specialized architecture. The Great Bath itself is an open quadrangle with verandas on four sides. There is architectural evidence for a second floor. A long gallery on the south has a small room in each corner; on the east is a single range of small chambers, including one with a well. In the center of the enclosed quadrangle is a large bath approximately thirty-nine feet (12 m) long by twenty-three feet (7 m) wide and sunken eight feet (2.5 m) below the surrounding paving of the court. It has a flight of steps at either end. The bath was waterproofed with a lining of bitumen below the outer courses of baked brick. Its function, insofar as it contained water, is not in doubt. Directly adjacent to the bath is a series of brick foundations that Sir Mortimer Wheeler suggested was a granary; but, just as at Harappa, there is no collateral evidence for this function, making it doubtful. The so-called Assembly Hall and College of Priests are also areas of the Mound of the Great Bath that may not have served the functions their names imply. The Mound of the Great Bath as a whole is elevated and separated from the living area of the city, and because of this it seems likely to have been the abode of an elite segment of the population.

Mohenjo-daro is famous for many things.
The two most prominent are its grid plan and the extensive internal drainage system that was integrated into the town plan. The grid town plan has been proved through excavation, with two north-south streets (First and Second Streets) and two east-west thoroughfares (Central and East Streets). These divide the Lower Town into at least nine blocks, the internal structure of which does not necessarily conform to the grid town plan. Most houses were provided with trash chutes, sumps, and/or refuse collection bins. Some of these are integrated into a system of street drains, with sumps, manholes, and the like. These are not large-scale sewers, but functioned more like the jube, features of streets in Iran, Afghanistan, and parts of Pakistan. The rainfall around Mohenjo-daro is less than 4 inches (100 mm) a year. Since rainfall in the area is thought to be about the same today as in the third millennium, contrary to some opinion, the drainage system would not have been justified by the rainfall.

The combined size of the mounds at Mohenjo-daro is approximately 250 acres (100 ha), about the same as for Harappa. But extensive remains have been found under the alluvium to the north and east of the mounds, and the city might have been much larger, perhaps 990 or even 1,230 acres (400 or 500 ha). If all 250 acres (100 ha) of Mohenjo-daro were settled at one time, with a population density of approximately 200 people per 2.5 acres (1 ha), the population of the city would have been about 20,000.

The people of Mohenjo-daro seem to have been engaged in craft production: copper/bronze metallurgy, stone tool manufacture, faience production, shell working, bead manufacturing, seal production, and the like. They were also farmers (barley, some wheat, cotton, and a wide range of other plants) and herders. Cattle are especially prominent in the archaeological record, both as faunal remains and artifacts, and it is clear that the ownership of these animals was almost certainly a principal way in which wealth was expressed. There is abundant evidence for long-distance trade, with contacts reaching Mesopotamia via the sea lanes through the Arabian Gulf, as well as overland to northern Afghanistan, central Asia to the north, and peninsular India to the south and east.

Mohenjo-daro was abandoned at about 2000 B.C. The reasons for this are not yet known; however, it was not invading Aryans. Many other settlements in the area surrounding Mohenjo-daro in Sind were also abandoned at this time, and something similar took place at Harappa and in its hinterland. The other domains of the Indus civilization seem to have been unaffected, and there is abundant evidence for cultural continuity into the second millennium. In Gujarat and the Indian Punjab and Haryana there is even an increase in the number of settlements, some of which underwent extensive rebuilding at the point when Mohenjo-daro was being abandoned. There seems to be good reason to revise the notion of an eclipse or collapse of the Indus civilization other than in Sind and the west Punjab. There was certainly no discontinuity in the cultural tradition. Recent work by Jagat Pati Joshi of the Archaeological Survey of India has documented continuity from the Mature, Urban Phase Harappan into the Early Iron Age in Punjab and Haryana (Joshi 1978). [See also Asia: Prehistory and Early History of South Asia.]

Bibliography
Sir John Marshall, ed., Mohenjo-Daro and the Indus Civilization, 3 vols. (1931). Ernest J. H. Mackay, Further Excavations at Mohenjo-daro, 2 vols. (1937–1938).

Dan Stanislawski, The Origin and Spread of the Grid-pattern Town, Geographical Review 36 (1946): pp. 105–20. Stuart Piggott, Prehistoric India to 1000 B.C. (1950). Sir Mortimer Wheeler, The Indus Civilization, 3rd ed. (1968). Jagat Pati Joshi, Interlocking of Late Harappa Culture and Painted Grey Ware Culture in the Light of Recent Excavations, Man and Environment 2 (1978): pp. 98–101. Gregory L. Possehl, Discovering Ancient India's Earliest Cities: The First Phase of Research, in Harappan Civilization: A Contemporary Perspective, ed. Gregory L. Possehl (1982): pp. 405–413. Michael Jansen and Gunter Urban, eds., Reports on Field Work Carried Out at Mohenjo-Daro, Pakistan, 1982–86 by the Ismeo-Aachen University Mission: Interim Reports, vols. 1–2 (1983, 1987). George F. Dales and J. Mark Kenoyer, Excavations at Mohenjo Daro, Pakistan: The Pottery (1986). Michael Jansen and Maurizio Tosi, eds., Reports on Field Work Carried Out at Mohenjo-Daro, Pakistan, 1983–86 by the Ismeo-Aachen University Mission: Interim Reports, vol. 3 (1988). Gregory L. Possehl

Megalithic Tombs are one of the most widespread and conspicuous landscape
monuments of the western European Neolithic. The term megalithic itself is derived from the Greek words lithos, meaning stone, and megas, large. They are thus in essence large stone monuments, but by extension megalithic tomb is often used to refer to all Neolithic chambered tombs of western Europe, including those where construction was in dry-stone walling or timber. Recent excavations at Haddenham in Cambridgeshire showed that the timber elements in nonstone chambered tombs could themselves be of great size, and the term megaxylic (large timber) was proposed to refer to these, but so far this has not found general acceptance in the archaeological literature.

The variety of monuments comprised within the category of megalithic tombs is enormous, ranging from simple box-like burial chambers beneath small circular mounds to enormous mounds with multiple passages and chambers such as Knowth in Ireland or Barnenez in Brittany. Furthermore, the tombs form part of a larger tradition of western European prehistoric monuments, which also includes standing stones, stone circles, and, in Britain, henges and cursus monuments. This monumentalism is a key feature of the western European Neolithic and suggests some conscious attempt on the part of these early societies to create a cultural landscape of conspicuously visible, humanly made structures.

Among the immense variability of megalithic tombs a number of key types have been identified. One of the earliest and most widespread is the passage grave, where the burial chamber under its covering mound of earth or stones is reached by a passage starting from the edge of the mound. This design allowed continued access to the chamber long after the mound was completed, although in many cases the passage was low and narrow and could be negotiated only by crawling through it. Examples of the passage grave type are found in most of the regions where megalithic tombs were built, including Iberia, France, the British Isles, and southern Scandinavia, but in addition to the passage graves each region possesses other types of megalithic tomb. In France, there are the allées couvertes, or gallery graves, consisting of an elongated burial chamber reached by a short vestibule. In Ireland, there are court cairns, where long curved arms extend from one end of the mound to enclose an unroofed courtyard. In northern Europe, there are the dysser, in which the chamber is a simple stone compartment beneath the mound, without any means of entry from the outside. In most regions there are additionally other kinds of Neolithic mounded tomb, such as the unchambered long mound or round mound; these unchambered monuments, properly speaking, fall outside the category of megalithic tombs, although it is clear they are a related phenomenon.

One of the most interesting findings from work on megalithic tombs over the past fifty years has been the realization that most are not single-phase structures of unitary design but the result of many separate episodes of building, modification, and addition. The form of the monument as it appears today is often the final outcome of a process extending over several centuries. A good example of this is the tomb known as Wayland's Smithy in southern Britain. This is a megalithic tomb with a burial chamber of cruciform plan at one end of an elongated mound. The entrance to the passage leading to the burial chamber is in the center of one end of the mound, flanked by large upright stones that create a ceremonial facade. This chamber and its associated facade are the most conspicuous of the surviving structures but represent only the latest phase of the monument. The original structure consisted of a timber mortuary house containing the bones of fourteen to seventeen individuals. Subsequently, the mortuary house was allowed to decay and the remains were covered by an oval mound. At a later stage this was incorporated in the monument that we see today, the oval mound being entirely hidden within the long mound and a separate megalithic passage grave built at one end.

Origins and Chronology


Until the advent of radiocarbon dating in the 1950s, conventional wisdom placed most megalithic tombs in the late third or early second millennium B.C., or in some cases even later; in the 1920s, the allée couverte of Tressé in Normandy was attributed by its excavator to the Iron Age (first millennium B.C.). At that time, many prehistorians considered megalithic tombs to be derived from the eastern Mediterranean or Aegean region, and the corbel vaults of Newgrange in Ireland and Maes Howe in Scotland were traced back to Mycenaean forebears such as the famous Treasury of Atreus at Mycenae itself. The first radiocarbon dates quickly demonstrated that the western European megalithic tombs were much older than their supposed Aegean antecedents, and the hypothesis of an eastern Mediterranean origin was replaced by theories of independent development. These new dates placed the earliest megalithic tombs in the fourth millennium B.C., and with the calibration of the radiocarbon chronology the oldest dates have been pushed back to around 4800 B.C. in calendar years. This makes them the oldest monumental architecture in the world. Radiocarbon dates have also enabled the chronology of the different varieties of megalithic tomb to be fixed, and have shown that megalithic tombs were still being built and used around 2500 B.C. in Ireland and certain regions of France. The use of megalithic tombs has thus been shown to extend over a period of more than 2,000 years.

The earliest reliably dated tombs are the passage graves of northwestern France, although it is likely that megalithic tombs in certain regions of Portugal belong to approximately the same period. Most theories of origin place particular emphasis on the geographical distribution of the tombs, especially that of apparently early types such as passage graves. Their distribution along the Atlantic margin of Europe suggests that maritime contacts, perhaps between sea-fishing communities, may have played a part in the genesis and dissemination of the concept. This idea gains support from the discovery of collective graves containing the skeletons of up to six individuals in the Mesolithic shell middens of Téviec and Hoëdic off the southern coast of Brittany. The practice of collective burial, which is such a widespread feature of western European chambered tombs, could well have arisen from such modest Mesolithic origins. The concept of the mound may have been a response to the social changes connected with the adoption of a new economy or ideology at the beginning of the Neolithic Period. It has been argued that pressure from farming groups spreading across northern France from the east could, in turn, have led to pressure on land and resources in Brittany, stimulating the construction of monumental tombs that acted as territorial markers. Other arguments place the emphasis not on economic change but on the ideology of the longhouse. Longhouses of massive timber construction were a key feature of early farming communities in central Europe, and are thought to have been translated into long mounds for burials by the early farming communities of northern and northwestern Europe. Long mounds are found in northern France as far west as Brittany, and some have argued that it was from these long mounds that all other varieties of mounded tomb, including the passage graves, were derived. This hypothesis fails to account for the early development of megalithic tombs in Iberia, however, where neither long mounds nor longhouses were present. For this reason it remains probable that megalithic tombs derived their origin, in part at least, from local Mesolithic burial traditions.

Usage and Meaning


Megalithic tombs consist of two principal components: the burial chamber and the covering mound, or barrow. A third element sometimes found is a court or forecourt. There is some evidence to suggest how these elements were used, although usage must have varied considerably from generation to generation and from one region to another. The principal burial place was the chamber, although burials sometimes were also placed in the passage. At the Hazleton long mound in southern Britain, burials had been placed in the passage only after access to the chamber beyond had been blocked by collapse, so in this case the passage appears to have served as an overflow. The predominant practice in megalithic tombs was that of collective burial, in which remains of up to 350 individuals were placed together in the same tomb. Grave goods were usually few, and most of the bones had become disarticulated. In some tombs there was evidence that the bodies had first been buried or exposed elsewhere, and it was only the cleaned and disarticulated bones that were placed in the chamber; in other cases, entire bodies were introduced, and any disarticulation was the result of later disturbance after they had decomposed.

The presence of an entrance or passage was clearly designed to allow repeated access to the burial chamber over a period of decades or centuries, and evidence shows that earlier burials were sometimes displaced to make way for new interments. There are also indications that in some tombs the bones had been sorted into categories, such as long bones or skulls, which were grouped together in particular areas of the chamber. This suggests that not only may new burials have been introduced via the passage, but selected bones from existing interments may have been extracted for use in cults or ceremonies. Such ceremonies, perhaps involving offerings to the dead, may have taken place in the courts or forecourts.

The monumentality of the tombs suggests that the bodies placed in them were of great importance to the communities that built the tombs. A suggestion that has gained broad acceptance is that the tombs drew their significance from being the resting place of the ancestors. In many small-scale societies an individual derives the right to use of the land from his or her lineal descent from the ancestors. The burial mounds may therefore have symbolized ancestral right to land, and this line of reasoning can help to explain why the burial mound is often much larger than would be needed simply to cover the burial chamber itself.

Social Context
A number of exercises, both paper and practical, have attempted to calculate the work effort involved in the construction of a megalithic tomb. This includes quarrying and transport of the stone, construction of the chamber and other structures on site, and completion of the mound. These exercises have shown that it would have been within the capability of a small-scale community of some few dozen persons to build one of the smaller tombs, but that construction of a large tomb such as Knowth, in Ireland, where there are two long and heavily decorated passage graves beneath a mound over 200 feet (60 m) in diameter, would have required the cooperation of a large number of individuals from several different communities. The fact that, in general, the larger tombs belong to the later stages of megalithic tomb building could be related to the development of increasingly hierarchical societies, where power was concentrated more and more in the hands of a ruling elite. Thus what we may be witnessing is a transition from a landscape of relatively egalitarian communities, each with its ancestral monument, to a more hierarchical organization where burial mounds are fewer, larger, and concentrated in emerging centers of power, such as the Boyne Valley or the Orkney mainland. Not all regions exhibit such a hierarchical progression, however, and there is evidence that even in the third millennium B.C. some of the tombs were still being shared by a small number of families who chose to bury their dead in a communal burial place. The allée couverte of La Chaussée-Tirancourt in northeastern France contains two distinct layers of burial separated by an intentional deposit of chalk. Genetic abnormalities in the bones show that the same families were burying in particular areas within the tomb in both layers. This suggests that these families retained rights to their own specific part of the chamber throughout the life of the tomb.

Megalithic Art
An intriguing feature of some megalithic tombs is the presence of designs carved into the surface of the stones. These designs, known as Megalithic art, are found in tombs along the Atlantic margin of Europe from Iberia to the Orkney Islands but are especially common in Ireland. In the great Boyne Valley tombs such as Knowth and Newgrange, decorated stones occur both in the slab-built curbs that encircle the base of the burial mounds and on the stones of the passage and chamber. In addition to pecked designs, traces of painted decoration have been found on certain Portuguese tombs. It is unclear whether this kind of decoration is a local Portuguese phenomenon, or whether it was originally much more widespread and survived only in the warmer Portuguese climate.

A wide variety of motifs, both representational and abstract, are present in Megalithic art. They may be divided chronologically into three principal phases. In the first phase (ca. 4800–4000 B.C.), the art appears to be restricted to Brittany and consists of motifs that are schematic but representational, rather than purely abstract as in later phases. Motifs include axes, hafted axes, crooks, and crosses. They are found on menhirs and on simple passage graves. (See Statue-Menhirs.) The second phase coincides with the period when the classic passage graves were being built (4000–3500 B.C. or possibly as late as 3200 B.C.). The art is now more widespread, being found in Iberia, France, and the British Isles, and in contrast to the preceding period the principal art motifs are nonrepresentational, consisting of abstract curves, circles, spirals, and meanders, often in closely spaced concentric patterns. This kind of art is represented most spectacularly at Gavrinis, a passage grave on a small island in the Gulf of Morbihan in southern Brittany, but by far the greatest number of examples is found in the passage graves of the Boyne Valley. Finally, the third phase is marked by a return to greater regional variation in Megalithic art. The best-known examples are from northern France, where representational elements become dominant once more. Certain of the motifs seem to be anthropomorphic: necklaces and paired breasts in Brittany, and anthropomorphic outlines on the walls of rock-cut tombs in the Marne region. These might be the first representations of spirits or supernatural beings in northwestern Europe since the end of the last Ice Age.

The presence of Megalithic art in different regions suggests some measure of interregional contact and cultural sharing. Under certain circumstances, however, identical artistic motifs may be developed by different societies entirely in isolation. This is the alternative possibility presented by recent writers seeking to demonstrate the entoptic nature of the designs involved. Entoptic motifs are a universal product of the human psyche in certain altered states of consciousness, such as trances induced by narcotics or other intoxicants. The abstract patterns that are seen in these circumstances are the same irrespective of cultural or social background. If it is accepted that some Megalithic art consists of entoptic motifs, then we need not expect to find direct cultural contacts between the regions using this art. Any specific parallels would be indicative not of cultural contact between these regions but would stem instead from the origin of these motifs in universal characteristics of the human psyche. The possibility that trance-inducing substances were used in these societies is strengthened by the discovery in a number of French Neolithic burial chambers of fragments of ceramic incense burners. These may have been designed for the inhalation of a narcotic such as opium. Together with the evidence of sorting and manipulation of the bones, this provides tantalizing indications of the kinds of ritual practiced in and around megalithic tombs.

Megaliths Worldwide
Although the best-known megalithic tombs are those in Europe, it should be noted that monuments of a similar character and construction are found in other parts of the world, including southern India, the Caucasus, Madagascar, and parts of South America. The use of large stone blocks to create a tomb chamber thus appears to have been adopted independently by a number of human societies at different times in the past. [See also British Isles: Prehistory of the British Isles; Burial and Tombs; Europe: The European Neolithic Period; Stone Circles and Alignments.]

Dating the Past Central to the process of doing archaeology is the necessity of
understanding the chronological sequencing of archaeological entities and past events. Without a firm grasp of this sequencing, archaeologists would not be able to deal with issues of behavioral process and evolution. Archaeology as a discipline would be reduced to a dry cataloging of artifacts and monuments with little hope of understanding the mechanisms and rates of change in past human cultures. For this reason, dating the past has been one of the most crucial methodological problems facing archaeologists. Fortunately, the past hundred years' work on this problem has yielded a wide array of methods and techniques that allow archaeologists to extrapolate the fourth dimension (time) from the three physical dimensions (latitude, longitude, and elevation) of archaeological sites. These techniques fall into two categories: relative chronology and absolute chronology.

Relative chronology is based on the simple stratigraphic principle that older materials will be found lower in an archaeological deposit than newer materials: the law of superposition. For example, a stone tool dropped on a cave floor in 1000 B.C. will eventually be covered by deposits and possibly later human construction. Another stone tool dropped in that cave in A.D. 1000 will fall on a floor that is higher than the original floor. An archaeologist excavating that cave in 1995 will uncover the tool dropped in A.D. 1000 first, because it is higher in the stratigraphic sequence. Subsequent excavation will uncover the tool dropped in 1000 B.C. in a lower level. Simply on the basis of the vertical relationship between the two tools, the archaeologist could determine that the tool found on the lower level was deposited some time before the tool found on the upper level. The archaeologist would not know when either of the two tools was deposited, nor how much time elapsed between the deposition of the two. Nevertheless, the archaeologist would be able to develop a relative chronology of the cave deposits that would accurately portray the relative sequence of depositional events that occurred in the cave. This is the first, and simplest, tool that archaeologists have for determining the temporal relationships between occupation events in archaeological sites, and for many years it was the only tool available to them. There are, of course, a range of human and nonhuman factors and processes that can obscure and even reverse that simple relationship, and field archaeologists must be very careful to determine what postdepositional processes have affected their deposits and adjust their relative chronologies accordingly.

Another technique of relative dating is Seriation. Seriation is based on the principle that artifacts will change in decorative style and form over time and that each style or form will follow a similar trajectory of early limited use, acceptance and increased popularity, and eventual decline in popularity tapering to final disuse. A graphical representation of this trajectory, with popularity (as measured by the frequency of occurrences in a stratigraphic level) plotted as horizontal bars centered on a vertical axis representing time, is called a battleship curve. By plotting battleship curves for several artifact styles (usually, but not necessarily, pottery types) within a site, archaeologists can develop a relative chronology for the site. For many years prior to the development of techniques for absolute dating, seriation was the principal tool that archaeologists had for developing refined chronologies. The drawback, of course, was that this technique did not provide archaeologists with actual dates; nor did it allow archaeologists to know how long or short a period of time was represented by a battleship curve.
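The logic of the battleship curve is easy to demonstrate with a short sketch. The Python fragment below is purely illustrative: the style names and sherd counts are hypothetical, and the numpy and matplotlib libraries are assumed to be available. It converts the counts of three pottery styles in six stratigraphic levels into each style's percentage share of its level and plots those shares as horizontal bars centered on a vertical axis representing time.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sherd counts for three pottery styles in six levels,
# listed from the oldest level (bottom of the deposit) upward.
levels = ["1 (oldest)", "2", "3", "4", "5", "6 (youngest)"]
counts = {
    "Style A": [34, 21, 9, 2, 0, 0],
    "Style B": [5, 18, 30, 26, 12, 4],
    "Style C": [0, 1, 6, 15, 27, 33],
}

data = np.array(list(counts.values()), dtype=float)  # styles x levels
pct = 100 * data / data.sum(axis=0)                  # each style's share of its level

fig, axes = plt.subplots(1, len(counts), sharey=True)
y = np.arange(len(levels))
for ax, style, shares in zip(axes, counts, pct):
    # Bars centered on the vertical (time) axis give the lens-shaped outline.
    ax.barh(y, shares, left=-shares / 2)
    ax.set_title(style)
    ax.set_xlabel("% of level")
axes[0].set_yticks(y)
axes[0].set_yticklabels(levels)
axes[0].set_ylabel("Stratigraphic level")
plt.tight_layout()
plt.show()
```

Read from the oldest level upward, each style's bars swell and then taper, and the staggered peaks of the three curves suggest the relative order in which the styles, and therefore the levels, succeeded one another.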

The great breakthrough for archaeologists came with the development of techniques of absolute dating, which allow archaeologists to assign specific calendar dates to deposits within sites and, by extension, sites within regions. The simplest of these techniques uses artifacts of known age: artifacts that have a date inscribed on them, or artifacts for which historical records indicate the time period when they first came into use and eventually went out of use. While a valuable tool in areas such as the Classical Mediterranean world, where dated or datable coins, tokens, jewelry, and historical records were available, the technique was simply not applicable in most of the rest of the world.

In the early part of the twentieth century, in the American Southwest and later in northern Europe, archaeologists began exploring the use of tree rings to determine the age of site deposits. Thus was born the science of Dendrochronology, or tree-ring dating. Dendrochronology was a breakthrough for archaeologists working in the American Southwest, where wood was preserved by the aridity, and for those working in northern Europe, where wood was preserved in bogs and marshes, but it was of little or no use in most other parts of the world.

The real explosion in the development of techniques of absolute dating began in the 1950s and 1960s. In 1952 Radiocarbon Dating was developed, and for the first time a technique offered archaeologists in almost all parts of the world a way to accurately determine the actual age of the carbonized wood and bone in the deposits of their sites. Radiocarbon dating revolutionized archaeology worldwide and in large part made possible the new or processual archaeology of the 1960s and 1970s. Not only were archaeologists able to accurately date events, but they could also start looking at things like the rates of cultural change, and not just on a regional basis but on a global scale, because finally everyone was able to talk about time using the same calendar scale. Development of new techniques to address both the temporal limitations of radiocarbon dating and its inapplicability to certain areas or contexts blossomed in the 1960s and 1970s. Today archaeologists can look to Fission-track and Potassium-argon Dating for dating extremely old deposits (on the order of millions of years), and to Obsidian Hydration Dating, thermoluminescence dating, and archaeomagnetic dating for determining the age of deposits or sites where radiocarbon dating is not an option.

In addition to expanding the number of options that archaeologists have for dating their sites, techniques have also been developed for refining the precision of those date estimates. A variety of techniques for doing Seasonality Studies, discussed elsewhere, can allow the archaeologist to determine not only the approximate year or years that a site was occupied, but the actual season or seasons of occupation. Finally, all of these techniques can be used in conjunction through a technique known as cross-dating. In cross-dating, stratigraphic or assemblage similarities between sites within a region can be used (much in the same way that tree rings are matched) to extend known dates from one or more sites to sites where chronometric techniques might not work, allowing archaeologists to develop cohesive chronologies for exploring regional social and cultural evolution over time. George Michaels

Dendrochronology is the scientific study of the chronological and environmental information contained in the annual growth layers of trees. The method uses accurately dated tree-ring sequences for placing past events in time and for reconstructing environmental conditions that prevailed when the rings were grown. Both aspects of this science are relevant to archaeology, the first to the exact dating of archaeological features, the second to understanding the effects of environmental variability on human societies.

Dendrochronology was created early in the twentieth century by Andrew Ellicott Douglass, an astronomer with the Lowell Observatory in Flagstaff, Arizona, as an outgrowth of his study of the effects of sunspots on terrestrial climate. Lacking weather records long enough to be tested for correlation with the twenty-two-year sunspot cycle, Douglass turned to the rings of coniferous trees in this semiarid area as potential proxy climatic indicators that could be related to sunspot activity. Building on his discovery that these trees possessed identical sequences of wide and narrow rings, he developed a continuous 450-year record of the ring-width variability common to the trees of the area and demonstrated that this variability was highly correlated with the precipitation of the winter preceding the growth year.

Archaeologists quickly recognized the potential of Douglass's method for dating abundant wood and charcoal remains in the ruins of the Southwest. His discovery, in 1917, that archaeological samples exhibited common patterns of ring-width variability stimulated an intensive effort to link the undated prehistoric ring sequence with the dated living-tree sequence. Twelve years' work produced a 585-year prehistoric ring series that did not overlap with the dated sequence. In 1929, the rings in a charred log from a site near Show Low, Arizona, connected the two sequences and, for the first time in North American archaeology, allowed calendar dates to be assigned to prehistoric sites. Thus, dendrochronology became the first of many independent dating techniques used in archaeology. Since that time, nearly 50,000 tree-ring dates from nearly 5,000 sites in the Southwest have produced the finest prehistoric chronological controls available in the world.

Douglass's success sparked the immediate adoption of tree-ring dating in other regions, notably Alaska, the North American Great Plains, and southern Germany. The University of Arizona recognized Douglass's achievement by creating the Laboratory of Tree-Ring Research in 1937; it remains the world's largest and most comprehensive dendrochronological research and teaching facility. After 1960, tree-ring programs were begun in virtually every area of the globe. Archaeological dating is now widely practiced in North America and Europe, and other applications of the method are pursued throughout the world.

The fundamental principle of dendrochronology is crossdating, the matching of identical patterns of variation in ring morphology among trees in a particular area. Although several ring attributes (density, trace element content, stable isotope composition, intra-annual growth bands, and others) can be used for this purpose, crossdating most commonly is expressed in the covariation of ring widths. Whether established visually, graphically, or statistically, unequivocal crossdating is the essential element of dendrochronology. The size of the area encompassed by a particular crossdating pattern varies from hundreds to hundreds of thousands of square kilometers and must be determined empirically in each case.

Chronology building is the process of averaging the annual ring widths of many crossdated samples into composite sequences of ring-size variability, with each ring dated to the year in which it was grown. By incorporating overlapping ring records of varying lengths and ages, this procedure produces ring chronologies that are longer than any of their individual components. Thus, the chronology for the Southwest has been extended back to 322 B.C. by adding progressively older archaeological samples to the living-tree sequence. In addition, chronology building reduces individual tree effects and maximizes the variability common to all the trees, that is, the variability caused by large-scale external factors, primarily climate. Thousands of chronologies have been built in many regions of the world, the longest of which are an 8,700-year bristlecone pine sequence from California and a 10,000-year sequence from central Europe. Composite chronologies serve as standards for dating samples of unknown age, as records of past climatic variability, and as referents for calibrating Radiocarbon and other time scales.

Archaeological tree-ring collections yield three kinds of information: chronological, behavioral, and environmental. Dating remains dendrochronology's primary contribution to archaeology. A tree-ring date is determined by finding the unique point at which the ring-width sequence of a sample matches the pattern of a dated chronology. Tree-ring dates have two notable attributes: accuracy to the calendar year and no associated statistical error. When a sample's outer ring is the final ring grown by the tree, the date specifies the year in which the tree died, usually the year that the tree was cut for use by humans. When complicating factors can be controlled by evaluating detailed data on the provenance, function, and physical attributes of the wooden artifact from which the sample is taken, the date can be applied to the construction of features associated with the artifact. Analyses such as these produce unequaled levels of chronological control at site, locality, and regional scales.

Behavioral information results from treating tree-ring samples as artifacts rather than just sources of dates. Analyzing wooden elements in this fashion illuminates a prehistoric people's treatment of trees as a natural resource and wood as a raw material.
Information on the season of tree cutting, distance of wood transport, species preferences, tree-felling and woodworking tools and techniques, dead wood use, stockpiling, beam reuse, structure repair, element shaping, and other behaviors can be acquired in this way.
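The matching step at the heart of crossdating can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the procedure of any particular laboratory: the ring widths are hypothetical, and real crossdating adds visual verification and statistical safeguards against spurious matches. The function slides an undated ring-width series along a dated master chronology and reports the calendar placement with the highest correlation.

```python
import numpy as np

def crossdate(master, sample, first_year):
    """Find where an undated ring-width series best matches a dated master
    chronology, using Pearson correlation at every possible offset.
    first_year is the calendar year of master[0]; returns the year of the
    sample's innermost ring and the correlation at that placement."""
    n = len(sample)
    best_offset, best_r = 0, -1.0
    for offset in range(len(master) - n + 1):
        r = np.corrcoef(master[offset:offset + n], sample)[0, 1]
        if r > best_r:
            best_offset, best_r = offset, r
    return first_year + best_offset, best_r

# Hypothetical standardized ring widths; master[0] grew in A.D. 1200.
master = np.array([1.1, 0.8, 0.4, 1.3, 0.9, 1.6, 0.5, 1.0, 1.2, 0.7])
sample = np.array([1.25, 0.85, 1.55, 0.45])   # four rings from a beam fragment
print(crossdate(master, sample, 1200))        # -> (1203, r close to 1.0)
```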

Environmental information comes from two sources. When differential use of tree species by a site's inhabitants can be controlled, differences between the species assemblage of the site and the modern flora of the area can indicate major environmental changes since the site's occupation. The chief source of environmental information is variation in ring widths, which records several aspects of climatic variability.

Dendroclimatology is the branch of dendrochronology concerned with environment-tree growth relationships. Dendroclimatic reconstructions are produced by establishing mathematical relationships between ring widths and climate data for the period of overlap between these two records and then using the resulting equations to reconstruct past climatic variability from the longer tree-ring record. These operations reconstruct past climate in terms of standard measures, such as millimeters of precipitation or degrees of temperature, at time scales ranging from seasons to centuries and at spatial scales ranging from localities to continents. Dendroclimatic analyses of climate-sensitive archaeological tree-ring chronologies produce accurate reconstructions of prehistoric climatic variability that can be related to past human behavior. Combining high-frequency dendroclimatic reconstructions with other paleoenvironmental indicators reveals a broad spectrum of environmental variability that would have affected prehistoric and historic human populations.

Since its creation by Douglass, dendrochronology has made important contributions to archaeology in many areas of the world. It is safe to predict that, as global interest continues to grow, archaeological applications and the spatial coverage of the method will continue to expand. [See also Dating the Past; Paleoenvironmental Reconstruction; Radiocarbon Dating.]

Bibliography
Bryant Bannister, Dendrochronology, in Science in Archaeology, ed. Don Brothwell and Eric Higgs (1963): pp. 161–176. H. C. Fritts, Tree-Rings and Climate (1976). Martin R. Rose, Jeffrey S. Dean, and William J. Robinson, The Past Climate of Arroyo Hondo, New Mexico, Reconstructed from Tree-Rings, Arroyo Hondo Archaeological Series 4 (1981). M. G. L. Baillie, Tree-Ring Dating and Archaeology (1982). Jeffrey S. Dean, Dendrochronology, in Dating and Age Determination of Biological Materials, ed. Michael R. Zimmerman and J. Lawrence Angel (1986): pp. 126–165. Fritz Hans Schweingruber, Tree Rings: Basics and Applications of Dendrochronology (1988). Jeffrey S. Dean

Fission-track Dating is a method of absolute age determination based on the microscopic counting of micrometer-sized damage tracks that are created by the spontaneous fission of uranium (U238) atoms and that accumulate with time in minerals and glasses containing uranium in minor concentrations. The method was developed in 1963–1964 by three U.S. physicists (P. B. Price, R. M. Walker, and R. L. Fleischer). Observation of the tracks under an optical microscope is possible only after special preparation of the sample (polishing and etching). The number of tracks counted per unit of surface in a mineral or glass sample is a function of its age and uranium content. In order to determine the age, a determination of the uranium content is therefore also required. This is performed by irradiating the sample with a calibrated dose of slow neutrons in a nuclear reactor, an operation that induces new (U235) fission tracks, the number of which is proportional to the uranium content.

Fission tracks are thermally unstable, meaning that they fade, to disappear completely at high temperature, a process that is called track annealing. Different materials have different sensitivities with respect to track annealing, glass being more sensitive than minerals, and the annealing process depends not only upon the temperature but also upon the duration of heating. Partially annealed tracks are distinguished from fresh tracks by their smaller size.

In the geological sciences, fission-track dating has evolved into an acknowledged chronometer, applied not only to determine the age of minerals (and of the rocks of which they are constituents) but also, and even more often, to study their temperature evolution with time. In archaeology, fission-track dating has remained of rather limited importance. The limitations are mainly related to the low number of fission tracks accumulated in relatively young archaeological samples, given the half-life of 8.2 × 10¹⁵ years for U238 spontaneous fission. Samples of large size or relatively high uranium content are required, and one is often confronted with lengthy counting procedures of low surface track densities, with a considerable background of spurious tracklike etch pits, deteriorating both precision and accuracy. Fission-track dating can therefore not be considered competitive with radiocarbon or thermoluminescence dating. Nevertheless, the method has proved to be well suited for studying specific materials and problems.

One of the favorite materials fission-track dating has been applied to is obsidian. Artifacts such as knives and arrowheads made of natural obsidian glass found in Europe and South America can be dated if they were fired by ancient humans. The condition is that the heating was sufficiently strong to completely anneal all previously stored geological tracks, so that all tracks that are counted result from uranium fission reactions that took place after the firing. This can be checked by track size analysis. Fission-track age determinations on artifacts that were not heated normally yield the geological age of the lava flow from which the obsidian was extracted. A comparison of fission-track age determinations on obsidian tools found at different localities with those of known outcrops of obsidian lava flows in Italy, Greece, and Turkey allowed researchers to determine the geographic provenance of the tools and to reconstruct in this way the ancient obsidian trade routes in the Mediterranean world.

Fission-track dating has also been applied to man-made glass. Studies of this kind were performed on glaze covering 400- to 500-year-old Japanese bowls. Another example is a glass shard originating from a Gallo-Roman bath near Limoges, France. A correct result of A.D. 150 was found for the age of the bath, although with a precision as poor as twenty percent. The fluorescent green uranium-rich glassware produced in Bohemia (central Europe) during the nineteenth century has, on the other hand, been dated quite precisely.
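In schematic form, the age calculation that underlies all of these applications combines the spontaneous and induced track densities with the neutron fluence of the reactor irradiation. The Python sketch below uses the textbook form of the fission-track age equation with rounded physical constants; it omits the geometry and calibration factors that a real analysis must include, and the densities and fluence in the example are hypothetical.

```python
import math

# Rounded constants; the U238 spontaneous-fission half-life of 8.2e15
# years follows the figure given in this entry.
LAMBDA_D = 1.55e-10                 # total decay constant of U238, 1/yr
LAMBDA_F = math.log(2) / 8.2e15     # spontaneous-fission constant, 1/yr
U235_U238 = 7.25e-3                 # natural U235/U238 atom ratio
SIGMA = 5.8e-22                     # U235 thermal fission cross-section, cm^2

def fission_track_age(rho_s, rho_i, fluence):
    """Age in years from spontaneous (rho_s) and induced (rho_i) track
    densities (tracks/cm^2) and the reactor neutron fluence (n/cm^2)."""
    x = (LAMBDA_D / LAMBDA_F) * (rho_s / rho_i) * U235_U238 * SIGMA * fluence
    return math.log(1.0 + x) / LAMBDA_D

# A low spontaneous density against a high induced density, irradiated at
# a fluence of 1e15 n/cm^2, yields an age near 2 million years.
print(fission_track_age(rho_s=4.0e4, rho_i=1.0e6, fluence=1.0e15))
```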
Occasionally, pottery has been dated, if it contained suitable inclusions such as flakes of obsidian or zircon grains. Here too, all geological tracks are supposed to be erased in these inclusions during the baking process. Similar studies were carried out on fired stones and baked earth. Remarkable success was achieved in the age determination of Homo erectus pekinensis: based on ca. 100 suitable grains of sphene found in firing ashes in two layers containing human remains in the Zhoukoudian cave near Peking, fission-track ages of 306 ± 56 and 462 ± 45 thousand years B.P. were obtained. Some of the very early hominid sites, aged around 2 million years B.P., in eastern Africa (Tanzania, Ethiopia, Kenya) have also been dated with fission tracks, supplementing potassium-argon age determinations that were often found to be problematic. Use was made of glass shards or uranium-rich mineral grains, such as zircons, extracted from the volcanic tuff layers that are intercalated between the sedimentary sequences containing the hominid remains. [See also Archaeopaleomagnetic Dating; Dating the Past; Dendrochronology; Luminescence Dating; Obsidian Hydration Dating; Potassium-argon Dating; Radiocarbon Dating; Seriation; Stratigraphy.]

Bibliography
Robert L. Fleischer, P. Buford Price, and Robert M. Walker, Nuclear Tracks in Solids: Principles and Applications (1975). Günther A. Wagner, Archaeological Applications of Fission-Track Dating, Nuclear Track Detection 2 (1978): pp. 51–63. Günther A. Wagner and Peter Van den haute, Fission-Track Dating (1992). Frans De Corte and Peter Van den haute

Obsidian Hydration Dating In many regions of the ancient world, obsidian, a volcanic glass, was the preferred material for stone-tool production. Fracturing obsidian exposes fresh surfaces, on which hydration rinds may form. The thickness of a rind increases with the age of the artifact. Rind thicknesses, measured using powerful microscopes, can be used to date the production of artifacts.

The reactions involved in the production of a rind are complex. Recent studies indicate that four processes are involved: the leaching of alkali ions from the glass into solution, the replacement of these ions by H+ or H3O+, the surface dissolution of the silica network of the glass, and the precipitation of reaction products. Factors related to these reactions affect the rate at which rinds form on obsidian artifacts. These include the chemical composition of the glass and solution, effective hydration temperature (EHT), pH, relative humidity, artifact shape, solution-flow rate, and exposure time. Hydration measurements can be used for relative or absolute dating, with an accuracy dependent on control of these variables. Theoretically, hydration dating has no absolute temporal limitations, but rinds tend to crumble when they reach a thickness of 50 microns, making it difficult to date artifacts of great antiquity. Furthermore, several centuries must pass before a measurable rind forms. Although radiocarbon, thermoluminescence, and archaeomagnetic dating can be used to date Holocene sites, hydration dating is inexpensive and requires only rudimentary microscopy skills.

Hydration measurements are used in two fundamentally different ways to calculate the age of artifacts. The first method calibrates rind measurements with other temporal data, such as radiocarbon dates or even ceramic phases. Once a calibration curve is established, rind measurements from other contexts can be compared to the curve and absolute or relative dates can be determined. The advantage of this technique is that it is based on empirical in vivo measurements and does not require in vitro experiments under unnatural conditions. A disadvantage is that variation in local environmental conditions is not taken into account, increasing error.

Unlike the calibration approach, the induction method relies on laboratory experiments. Hydration is induced by exposing fresh obsidian to water vapor or liquid water. In order to increase the reaction rate, high temperatures and pressures are used. Under these artificial conditions, rind formation can be accurately modeled as a diffusion process, allowing the calculation of EHT-dependent hydration rates. In order to calculate absolute dates from obsidian artifacts, paleo-EHTs must be estimated. Contemporary EHTs can be measured using thermal cells or estimated using weather station data and extrapolated to the past. Although the experimental induction method is quite promising, it has serious flaws. First, in vivo rind formation is far more complex than laboratory-induced diffusion. Second, the equations used to model diffusion depend only on time and EHT, although field studies and induction experiments have demonstrated that relative humidity, pH, and other variables also affect rind-formation rates.

The earliest attempts to use hydration measurements to date artifacts were made by Irving Friedman and Robert Smith (A New Dating Method Using Obsidian: Part I, American Antiquity 25 (1960): pp. 476–522). Since then, hydration dating has been used widely, with particular success in California and the Great Basin of the United States. In this area, numerous regional chronologies have been constructed using the calibration method. In Mesoamerica, obsidian hydration dating has usually been used to supplement more traditional chronological data. Two important projects, however, have used hydration dating to form the backbone of the chronology. At Kaminaljuyu, Guatemala, Joseph Michels (The Pennsylvania State University Kaminaljuyu Project 1969–1970 Seasons, Part I: Mound Excavations, University Park, 1973) used the calibration method to produce 3,000 obsidian hydration dates that proved to be inaccurate. An unfortunate result has been that Maya archaeologists are now reluctant to use hydration dating. More recently, the experimental induction technique has been used in an attempt to fine-tune the chronology of Copán, a Classic Maya site in Honduras. Although most of the 2,200 dates are consistent with other temporal data, several hundred are very late. David Webster and AnnCorinne Freter (Settlement History and the Classic Collapse at Copan: A Redefined Chronological Perspective, Latin American Antiquity 1 (1990): pp. 66–85) have used these dates to argue that a substantial population continued to occupy Copán until A.D. 1150. Other archaeologists question this conclusion, because very few Postclassic ceramics have been found at the site. Furthermore, there are no radiocarbon or archaeomagnetic dates later than A.D. 950.
Until independent chronological evidence is found, it seems unlikely that a substantial Postclassic occupation will be accepted.
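
The arithmetic of the induction method can be made concrete. The sketch below, in Python, assumes the simple diffusion model (rind thickness squared equals rate times time) and an Arrhenius-type dependence of the hydration rate on EHT; the constants a and e are hypothetical placeholders, not published values for any real obsidian source.

    import math

    R = 8.314  # gas constant, J/(mol*K)

    def hydration_rate(eht_kelvin, a=4.5e12, e=8.3e4):
        # Arrhenius-type rate in square microns per year. The constants
        # a (pre-exponential factor) and e (activation energy, J/mol) are
        # hypothetical; real values come from induction experiments.
        return a * math.exp(-e / (R * eht_kelvin))

    def age_years(rind_microns, eht_kelvin):
        # Diffusion model: rind thickness x satisfies x**2 = k * t.
        return rind_microns ** 2 / hydration_rate(eht_kelvin)

    t1 = age_years(5.0, 290.0)        # roughly 5,000 years with these constants
    t2 = age_years(5.0, 291.0)        # same rind, EHT mis-estimated by just 1 K
    print(round(t1), round(t1 - t2))  # the 1 K shift moves the date by centuries

With these placeholder constants, a 5-micron rind at an EHT of 290 K yields a date near 5,000 years, and a 1 K error in the EHT estimate shifts that date by more than five centuries, which is the sensitivity discussed in the next paragraph.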

The Copán dates demonstrate that estimation error is a serious problem with the technique. Although current environmental conditions can be measured, an unmeasurable error is introduced when these conditions are extrapolated to the past. A shift in EHT of just 1 K, for example, can lead to dates that err by centuries. For this reason, hydration dating must still be considered a relatively inaccurate independent chronometric technique. [See also Archaeo-paleomagnetic Dating; Dating the Past; Dendrochronology; Fission-track Dating; Luminescence Dating; Potassium-argon Dating; Radiocarbon Dating; Seriation; Stratigraphy.]

Bibliography
R. E. Taylor, ed., Advances in Obsidian Glass Studies (1976). Clement W. Meighan and Janet L. Scalise, Obsidian Dates IV (1988). E. V. Sayre, P. Vandiver, J. Druzik, and C. Stevenson, eds., Materials Issues in Art and Archaeology (1988). J. J. Mazer, C. M. Stevenson, W. L. Ebert, and J. K. Bates, The Experimental Hydration of Obsidian as a Function of Relative Humidity and Temperature, American Antiquity 56 (1991): pp. 504–513. Rosanna Ridings, Obsidian Hydration Dating: The Effects of Mean Exponential Ground Temperature and Depth of Artifact Recovery, Journal of Field Archaeology 18 (1991): pp. 77–85. Geoffrey E. Braswell, Obsidian-Hydration Dating, the Coner Phase, and Revisionist Chronology at Copán, Honduras, Latin American Antiquity 3 (1992): pp. 130–147. Geoffrey E. Braswell

Potassium-argon Dating Geologists use the potassium-argon technique to date
rocks as much as 2 billion years old and as little as 50,000 years old. The potassium-argon method is one of the few viable ways of dating archaeological sites earlier than 100,000 years old, and it has allowed paleoanthropologists to develop an outline chronology for early human evolution and human origins. Potassium (K) is one of the most abundant elements in the earth's crust and is present in nearly every mineral. In its natural form, potassium contains a small proportion of radioactive potassium 40 atoms. For every hundred potassium 40 atoms that decay, eleven become argon 40, an inactive gas that easily escapes from the parent material by diffusion when lava and other igneous rocks are formed. As volcanic rock forms by crystallization, the concentration of argon 40 drops to almost nothing. But the regular decay of potassium 40 continues, with a half-life of 1.3 billion years. It is possible, then, to measure with a spectrometer the concentration of argon 40 that has accumulated since the rock formed. Because many archaeological sites were occupied during a period when extensive volcanic activity occurred, especially in East Africa, it is possible to date them by associations of lava with human settlements. Potassium-argon dates have been obtained from many igneous minerals, of which the most resistant to later argon diffusion are biotite, muscovite, and sanidine. Microscopic examination of the rock is essential to eliminate the possibility of contamination by recrystallization and other processes. In the standard technique, the samples are processed by crushing the rock, concentrating it, and treating it with
hydrofluoric acid to remove any atmospheric argon. The various gases are then removed from the sample, and the argon gas is isolated and subjected to mass spectrographic analysis. The age of the sample is then calculated from the argon 40 and potassium 40 content using a standard formula. The resulting date is quoted with a large standard deviation; for Lower Pleistocene sites, it is on the order of a quarter of a million years.

In recent years, computerized argon laser fusion has become the technique of choice. By steering a laser beam over a single irradiated grain of volcanic ash, a potassium-argon specialist can date a lake bed layer, and even a small scatter of tools and animal bones left by an early hominid. The grain glows white hot and gives up its gas, which is purified and then charged by an electron beam. A powerful magnet accelerates the charged gas and hurls it against a device that counts its argon atoms. By measuring the relative amounts of two isotopes of the element, researchers can calculate the amount of time that has elapsed since the lava cooled and the crystals formed.

Potassium-argon dates can be taken only from volcanic rocks, preferably from actual volcanic flows, so the geological associations of fossils and artifacts must be carefully recorded. Fortunately, many early human settlements in the Old World are found in volcanic areas, where such deposits as lava flows and tuffs occur in profusion. The first archaeological date obtained from this method came from Olduvai Gorge, Tanzania, where in 1959 Louis and Mary Leakey found a robust australopithecine skull, Zinjanthropus boisei, stone tools, and animal bones in a Lower Pleistocene lake bed of unknown age. Lava samples from the site were dated to about 1.75 million years, doubling the then-assumed date for early humans. Stone flakes and chopping tools of undoubted human manufacture have come from Koobi Fora in northern Kenya, dated to about 2.5 million years, the earliest date for human artifacts. Still earlier Australopithecus fossils have been dated at Hadar in Ethiopia to between 3 million and 4 million years ago. Potassium-argon samples have dated the appearance of Homo erectus in Africa to about 1.8 million years ago or even earlier. Until recently, paleoanthropologists believed H. erectus radiated out of Africa about a million to 700,000 years ago, but a team of Berkeley scientists has used laser fusion to date H. erectus-bearing levels at Modjokerto in Southeast Asia to 1.8 million years ago, pushing the radiation date back three quarters of a million years. [See also Dating the Past; Human Evolution: Fossil Evidence For Human Evolution.]
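
The standard formula mentioned above is the conventional potassium-argon age equation, t = (1/lambda) ln[1 + (lambda/lambda_e)(Ar40*/K40)], where lambda is the total decay constant of potassium 40 and lambda_e the constant for the branch that produces argon 40. A minimal Python sketch, using widely published decay constants and a made-up measurement:

    import math

    LAMBDA_TOTAL = 5.543e-10  # total decay constant of K-40, per year
    LAMBDA_EC = 0.581e-10     # electron-capture branch producing Ar-40, per year

    def k_ar_age(ar40_over_k40):
        # Conventional K-Ar age from the radiogenic Ar-40 / K-40 atomic ratio.
        return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40) / LAMBDA_TOTAL

    # A made-up ratio of one radiogenic Ar-40 atom per 10,000 K-40 atoms:
    print(round(k_ar_age(1.0e-4)))  # about 1.72 million years, Lower Pleistocene

Note that the branch ratio LAMBDA_EC / LAMBDA_TOTAL is about 0.105, matching the statement above that roughly eleven of every hundred decaying potassium 40 atoms become argon 40.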

Bibliography
G. B. Dalrymple and M. A. Lanphere, Potassium-Argon Dating (1970). J. W. Michels, Dating Methods in Archaeology (1973). S. J. Fleming, Dating in Archaeology (1976). Brian M. Fagan

Radiocarbon Dating is an isotopic or nuclear decay method of inferring age for
organic materials. The radiocarbon (C14) method provides a common chronometric time scale of worldwide applicability for the Late Pleistocene and Holocene. Radiocarbon measurements can be obtained on a wide spectrum of carbon-containing samples including charcoal, wood, marine and freshwater shell, bone and antler, peat
and organic-bearing sediments, as well as carbonate deposits such as marl, tufa, and caliche. With a half-life of approximately 5,700 years, the C14 method can be routinely employed in the age range of about 300 to between 40,000 and 50,000 years for sample sizes of 1–10 grams of carbon using conventional decay or beta counting. With isotopic enrichment and larger sample sizes, ages up to 75,000 years have been measured. Accelerator mass spectrometry (AMS) for direct or ion counting of C14 permits measurements to be obtained routinely on samples of 1–2 milligrams of carbon (and, with additional effort, on as little as 50–100 micrograms of carbon), with ages up to between 40,000 and 50,000 years. The use of AMS technology may in the future permit a significant extension of the C14 time frame to as much as 80,000 to 90,000 years if stringent requirements for the exclusion of microcontamination in samples can be achieved. The C14 dating technique was developed at the University of Chicago immediately following World War II by Willard F. Libby (1908–1980) and his collaborators James R. Arnold and Ernest C. Anderson. Libby received the Nobel Prize in chemistry in 1960 for the development of the method.

The natural production of C14 is a secondary effect of cosmic-ray bombardment in the upper atmosphere. Following production, C14 is oxidized to form C14O2. In this form, C14 is distributed throughout the earth's atmosphere. Most of it is absorbed in the oceans, while a small percentage becomes part of the terrestrial biosphere, primarily by means of photosynthesis combined with the distribution of carbon compounds through the different pathways of the carbon cycle. In living organisms, metabolic processes maintain the C14 content in equilibrium with atmospheric C14. However, once metabolic processes cease, as at the death of an animal or a plant, the amount of C14 will begin to decrease by nuclear decay at a rate measured by the C14 half-life. The radiocarbon age of a sample is based on measurement of its residual C14 content.

For a C14 age to be equivalent to its actual or calendar age at a reasonable level of precision, a set of assumptions must hold within relatively narrow limits. These assumptions include (1) the concentration of C14 in each carbon reservoir has remained essentially constant over the C14 time scale, (2) there has been complete and relatively rapid mixing of C14 throughout the various carbon reservoirs on a worldwide basis, (3) carbon isotope ratios in samples have not been altered except by C14 decay since these sample materials ceased to be an active part of one of the carbon reservoirs (as at the death of an organism), (4) the half-life of C14 is known with reasonable precision, and (5) natural levels of C14 can be measured to appropriate levels of accuracy and precision.

Radiocarbon age estimates are generally expressed in terms of a set of widely accepted parameters that define a conventional radiocarbon age. These parameters include (1) the use of 5,568 (5,570) years as the C14 half-life even though the actual value is probably closer to 5,730 years, (2) to define zero C14 age, the use of specially prepared oxalic acid or sucrose contemporary standards or a modern standard with a known relationship to the primary standards, (3) the use of A.D.
1950 as the zero point from which to count C14 time, (4) a normalization of C14 in all samples to a common C13/C12 value to compensate for fractionation effects, and (5) an assumption that C14 in all reservoirs has remained constant over the C14 time scale. Radiocarbon ages are typically cited in radiocarbon years BP where BP (or sometimes B.P.) indicates before present or more specifically before A.D. 1950. In
addition, a conventional understanding is that each C14 determination should be accompanied by an expression that provides an estimate of the experimental or analytical uncertainty. Since the statistical constraints associated with the measurement of C14 are usually the dominant component of the experimental uncertainty, this value is sometimes informally referred to as the statistical error. This error term is appended to all appropriately documented C14 age estimates. Typically, a laboratory sample number designation is also included when a C14 age is cited.

For most time periods, conventional radiocarbon ages deviate from real (that is, calendar, historical, or sidereal) time. A calibrated radiocarbon age takes into consideration the fact that C14 activity in living organisms has not remained constant over the C14 time scale. Tests of the validity of the assumption of constant C14 concentration in living organisms over time initially focused on the analyses of the C14 activity of a series of historically and dendrochronologically dated samples. Radiocarbon determinations on several species of tree-ring-dated wood from both North America and Europe have documented a long-term trend and shorter, high-frequency variations in C14 activity over time. For the Early and Middle Holocene, the amount of correction required to calibrate a C14 date, that is, to bring a conventional C14 age determination into alignment with calendar time, does not exceed 1,000 years. For the pre-Holocene period, radiocarbon ages compared with uranium-series ages from marine cores suggest deviations in C14 ages for the Late Pleistocene of as much as 3,000 years. The C14/tree-ring data also document shorter-term, higher-frequency variations in C14 activity superimposed on the long-term trend. These shorter-term variations, which appear as wiggles, kinks, or windings in the calibration curve, add further complexity to the process of calibrating the C14 time scale.

For samples from some carbon reservoirs, conventional contemporary standards may not define a zero C14 age. A reservoir-corrected radiocarbon age can sometimes be calculated by documenting the apparent age exhibited in known-age control samples and correcting for the observed deviation. Reservoir effects occur when initial C14 activities in samples of identical age but from different carbon reservoirs exhibit significantly different C14 concentrations. In some cases, living samples from certain environments exhibit apparent C14 ages because they do not draw all of their carbon directly from the atmosphere. Reservoir effects can occur in mollusk and other shell materials in both freshwater and marine environments. Other examples include wood and plant materials growing adjacent to active volcanic fumarole vents, and plants that derive all or most of their carbon from lake waters into which magmatic fossil CO2 is being injected. Reservoir effects can range from a few hundred to a few thousand years depending upon specific circumstances.

In the first decade following its introduction, C14 dating documented the geologically late beginning of the postglacial period at about 10,000 C14 years B.P. and the antiquity of agriculture and sedentary village societies in southwestern Asia in the eighth-seventh millennium B.C.
Applications of AMS C14 technology have permitted the dating of human skeletons from various sites in the Western Hemisphere that had previously been assigned ages in the range of 20,000–70,000 years on the basis of previous C14 analysis or the application of other dating techniques such as the amino
acid racemization method. AMS C14 results on well-characterized organic extracts indicate that the ages of all of the human skeletons examined to date do not exceed 11,000 C14 years. AMS C14 measurements have also been crucial in clarifying controversial age assignments of early domesticated or cultivated plants in both the Old and New Worlds, as well as in documenting that the Shroud of Turin was a medieval artifact. [See also Archaeo-paleomagnetic Dating; Dating the Past; Dendrochronology; Fission-track Dating; Luminescence Dating; Obsidian Hydration Dating; Potassium-argon Dating; Seriation; Stratigraphy.]
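
A conventional radiocarbon age follows directly from the parameters listed above: with the Libby half-life of 5,568 years, the age is t = -(5568/ln 2) ln(As/Aon), where As/Aon is the fractionation-normalized ratio of the sample's activity to the modern standard. A minimal Python sketch with made-up measurements:

    import math

    LIBBY_MEAN_LIFE = 5568.0 / math.log(2)  # about 8,033 years

    def conventional_c14_age(activity_ratio):
        # Conventional radiocarbon age in C14 years BP (before A.D. 1950),
        # from the normalized ratio of sample activity to the modern standard.
        return -LIBBY_MEAN_LIFE * math.log(activity_ratio)

    print(round(conventional_c14_age(0.5)))   # 5568 BP: one half-life
    print(round(conventional_c14_age(0.25)))  # 11136 BP: two half-lives

The result is a conventional age in radiocarbon years; converting it to calendar years requires calibration against the tree-ring curve, as described above.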

Bibliography
W. G. Mook and H. T. Waterbolk, eds., C14 and Archaeology (1983). R. Gillespie, Radiocarbon User's Handbook (1984). W. G. Mook and H. T. Waterbolk, Radiocarbon Dating, Handbooks for Archaeologists, No. 3 (1985). J. A. J. Gowlett and R. E. M. Hedges, eds., Archaeological Results from Accelerator Dating (1986). R. E. Taylor, Radiocarbon Dating: An Archaeological Perspective (1987). D. Polach, Radiocarbon Dating Literature: The First 21 Years, 1947–1968 (1988). Martin J. Aitken, Science-Based Dating in Archaeology, Chapters 3 and 4 (1990). S. Bowman, Radiocarbon Dating (1990). R. E. Taylor, A. Long, and R. S. Kra, eds., Radiocarbon After Four Decades: An Interdisciplinary Perspective (1992). R. E. Taylor

Seriation includes a number of relative dating techniques, the first of which was
developed in the early 1900s, before the advent of chronometric dating. These techniques are based on a reconstruction of typological or stylistic changes in material culture through time. Ceramics will be used to illustrate the techniques discussed here, but other classes of material can be used as well. To construct the seriation for an area, stratified sites usually are examined. By examining typological or stylistic shifts between the different strata, these changes can be placed in a relative chronological order. Once the seriation of an area is unraveled at one or several stratified sites, it can be used to place other sites into a regional temporal ordering through ceramic cross-dating.

The following hypothetical example uses three sites, the Deep site, the Shallow site, and the New site, to illustrate. The first site investigated in the region is the Deep site, which contains five strata, the lowest containing ceramic type A, the next type B, and so on to the highest level with type E. Next, the Shallow site is excavated. It has one stratum, containing ceramic type A. Based on our previous research we can say that the Shallow site is contemporaneous with the earliest occupation at the Deep site. Excavations at the New site reveal two strata, with ceramic type E in the lower level and F in the higher level. Because type E is found only at the highest levels of the Deep site, we can say that the New site was founded while the Deep site was still being occupied and continued to be occupied after the Deep site was abandoned. Additionally, the New site was occupied after the Shallow site. If chronometric dates can be obtained from the deposits containing ceramics at the Deep site, a date can be assigned to the
Shallow and New sites. For example, if the A stratum at the Deep site dates to A.D. 900, the Shallow site also dates to A.D. 900.

Seriation can also be undertaken using excavated or surface collections from single-component sites. This process assumes that the production of artifact types follows a battleship curve distribution through time. That is, the percentage of the assemblage a type represents is small at the beginning of its production span, widens in the middle as the type becomes popular, and is small again at the end as it loses popularity and is eclipsed by another type. The seriation can be determined using a graphical display. In the graph, each site is represented by a line, and the percentage of each ceramic type found at that site is represented by a scaled bar. The lines are rearranged until the bars form battleship curves for each type in the overall display.

In addition to the graphical method, a number of quantitative methods are used to seriate sites. One of the earliest used was the Brainerd-Robinson method, which computes indexes for each unit (either sites or features within a site) and then orders the units based on these indexes. The index is a measure of similarity between two units, computed as the sum of the absolute values of the differences in percentages of each type between the units. This figure is then subtracted from 200. The more similar the units, the smaller the differences in percentage of each type and the closer the index is to 200; the greater the difference between the units, the farther the index is from 200. A similarity matrix of the indexes is then constructed. The sites are reordered until the highest indexes are on the diagonal and the values decrease consistently with distance from the diagonal. The resulting ordering of the units reflects their chronological ordering.

Today, statistical techniques such as multidimensional scaling, factor analysis, and cluster analysis are used in conjunction with computers to determine the correct ordering of the units. Additionally, finer time-scale resolution has been achieved by examining shifts in stylistic elements and motifs in ceramic assemblages rather than types, resulting in a microseriation. The finer resolution of the microseriation is due to the fact that stylistic elements shift through time within a type and are, therefore, more sensitive indicators of short-term change. Both factor analysis and multidimensional scaling rely on similarity matrices based on the correlation of each ceramic type or stylistic element with the assemblage as a whole. These correlations are used to create new variables representing ceramic or stylistic complexes. Each unit is then scored on these new variables and ordered, with units having similar scores on the new variables being close to each other in the seriation. What these techniques allow the archaeologist to do is take a large group of temporally sensitive variables and reduce them to a small set of new variables (the typological or stylistic complexes) that represent the interaction of a number of the original variables. This reduction procedure simplifies the data into a few dimensions on which the units can be sorted.

Cluster analysis uses the similarities and differences between artifact assemblages to group units into chronological periods. Cluster analysis techniques treat each ceramic type or stylistic element as a dimension, and the number of sherds of that type, or possessing that element, present in the unit is a measurement on that dimension.
This is similar to the way distance is used as a measurement on the dimension of length. If
ten types are present, the assemblage can be seen in ten-dimensional space, just as length, width, and height represent three-dimensional space. The closer in space two units are, the more similar their artifact assemblages. Cluster analysis then groups the units that are closest together into temporal periods.

A number of factors can greatly hinder the seriation process, owing to their impact on the material culture on which the seriations are based. Aside from obvious problems presented by disturbance, deposit mixing, and inaccurate contextual information, aspects of prehistoric behavior can have an impact. One of the most devastating is a production curve that does not correspond to a battleship curve. If the production span of a ceramic type or style has more than one period of popularity, or mode on the curve, relative temporal ordering will be confused because it will be unclear to which mode a unit belongs. The more types or styles this multimodality is present in, the greater the chance of error. Three additional factors also can affect the seriation. First, if the units of time that correspond to shifts in ceramic production are of very different lengths, the seriation will suffer. Second, if the units are functionally different, the types of artifacts present may vary considerably; if this is the case, the ordering or grouping of sites may represent functional similarity rather than temporal relationships. Finally, if objects were kept as heirlooms over long periods of time, the periods in which the objects were produced and disposed of may not be correlated. If this is true, the similarity matrices and distances used to order or group the units may be inaccurate. [See also Archaeo-paleomagnetic Dating; Artifact Distribution Analysis; Dating the Past; Dendrochronology; Fission-track Dating; Luminescence Dating; Obsidian Hydration Dating; Radiocarbon Dating; Statistical Analysis; Stratigraphy; Typological Analysis.]
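
The Brainerd-Robinson index described above reduces to a few lines of code. A minimal Python sketch, with made-up type percentages for three hypothetical units:

    def brainerd_robinson(unit_a, unit_b):
        # Similarity index: 200 minus the summed absolute differences in the
        # percentage of each ceramic type. Identical assemblages score 200.
        types = set(unit_a) | set(unit_b)
        return 200.0 - sum(abs(unit_a.get(t, 0.0) - unit_b.get(t, 0.0)) for t in types)

    # Made-up assemblages (percentages of types A-C at three units):
    early = {"A": 70.0, "B": 30.0}
    middle = {"A": 30.0, "B": 60.0, "C": 10.0}
    late = {"B": 20.0, "C": 80.0}

    print(brainerd_robinson(early, middle))  # 120.0
    print(brainerd_robinson(middle, late))   # 60.0
    print(brainerd_robinson(early, late))    # 40.0

Arranging these units so that the highest indexes fall next to the diagonal of the similarity matrix yields the order early, middle, late, which is taken as the chronological ordering (or its reverse, since seriation alone does not indicate which end of the sequence is older).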

Bibliography
Robert Dunnell, Seriation Method and Its Evaluation, American Antiquity 35 (1970): pp. 305–319. Anna O. Shepard, Ceramics for the Archaeologist (1976). William Marquardt, Advances in Archaeological Seriation, in Advances in Archaeological Method and Theory, vol. 1, ed. Michael B. Schiffer (1978), pp. 257–314. Prudence Rice, Pottery Analysis: A Sourcebook (1987).

Stratigraphy Archaeological stratigraphy is the study of stratification, which is the
physical deposits and other stratigraphic events, such as pits or post holes, of which a site is composed through time. Stratification is an unintentional result of human behavior and thus an unbiased record of past activities. For societies without written records, the study of archaeological stratification, through the excavation, recording, and analysis of strata, features, and portable artifacts, is the only method by which their history can be recovered. Even for peoples with written records, the study of archaeological stratification provides a unique four-dimensional history of a site that cannot be obtained from documentary sources. Archaeologists have a great responsibility to decipher and record for posterity the latent history of each site as
encapsulated in its stratification. As the philosopher Voltaire once stated, "We owe the dead nothing but the truth." In archaeological excavation, the truth about the past can be obtained only by adherence to stratigraphic principles and methods. Archaeological stratigraphy evolved from geological practices in the nineteenth century, but was little refined for some time. The publication of archaeological textbooks by Dame Kathleen Kenyon and Sir Mortimer Wheeler in the early 1950s underlined the importance of stratigraphy in archaeology. The 1970s saw the establishment of the separate discipline of archaeological stratigraphy, since stratification made by people is different from that formed by natural forces. The first textbook on archaeological stratigraphy appeared in 1979.

Constructing the Stratigraphic Sequence


Several laws of archaeological stratigraphy were proposed as the discipline developed; the Law of Superposition is paramount. It states that in a series of layers and interfacial features, as originally created, the upper units of stratification are younger and the lower are older, for each must have been deposited on, or created by the removal of, a preexisting mass of archaeological stratification. This law gives a chronological direction to a body of stratification (generally, early at the bottom and late at the top), and it is the reason for the question always asked about any two contiguous stratigraphic units: Which came first? By attention to the Law of Superposition during an excavation, the units of stratification can be placed in sequential order in relative time, one after the other.

Using the stratigraphic method, a site is excavated by the removal of its deposits according to their unique shapes, and in the reverse order to that in which they were made. Each deposit is given a unique number, which is also assigned to all portable objects taken from it, be they coins, sherds of pottery, animal bones, or samples of soil for pollen analysis. It is axiomatic that each deposit, with its artifacts, is a unique capsule of chronological, cultural, and environmental data, and occupies a unique position in the stratigraphic sequence of a site. The archaeologist must consider both stratigraphic, or relative, time, by which one event gives way to another, and absolute, or calendar, time, which gives a date in years to stratigraphic data. Stratigraphic time can be ascertained by stratigraphic excavation and recording without any reference to artifacts: a site may contain no artifacts at all, but its stratigraphic sequence can be obtained nonetheless. The basic principles of archaeological stratigraphy are of universal application because they relate to the uniform characteristics of stratification and not to the cultural artifacts found within the deposits. The study of artifacts may assign a calendar date to stratification and thus fix its relative stratigraphic sequence in absolute time. Many artifact specialists will be needed to arrive at such conclusions, but it is the excavating archaeologist who bears the responsibility for the construction of the stratigraphic sequence of the site.

Stratification is a three-dimensional body of archaeological deposits and features, from which a fourth dimension of relative time may be inferred. A stratigraphic sequence is the order, in relative time, of the deposition of layers and the creation of interfacial features, such as pits, through the life of a site. To illustrate such a calendar of relative time, the stratigraphic data are translated into abstract diagrams, with each
unit shown in a standardized format. Each unit is placed in its stratigraphic position relative to the deposits above and below it, and the box for each unit is connected with lines indicating the order of superposition or correlation. This is the essence of the Harris Matrix system, introduced in 1973, by which the stratigraphic sequence of any site can be illustrated completely in a single diagram. Using this very simple method, which is of universal application, the stratigraphic sequence of any archaeological site can be developed during the course of excavation. The stratigraphic sequence, not the stratification, is the independent testing pattern against which other analyses of the site, from a reconstruction of its landscape to the study of pottery or pollen, must be proven.

Any site that can be excavated is stratified, and its stratigraphic sequence must be demonstrated by such a diagram, as it is not the same as the three-dimensional aspects of stratification shown in profile and plan drawings. The profile drawing is a plane view of the vertical dimensions of stratification; such sections illustrate the superimposed pattern of the stratification along the line at which the profile was cut. Plan drawings are records of the surfaces of the stratification and show the horizontal extent of each unit. Modern practice requires single-layer planning, by which each unit is drawn on a separate sheet of tracing paper. Used in conjunction with the stratigraphic sequence of the site, such single-layer plans can be laid down in their order of superimposition, and they form one of the most powerful analytical tools in archaeological stratigraphy. The site notes are another way in which the stratigraphic record of a site can be preserved in documentary form. Such entries record the stratigraphic relationships of each unit, the composition of its soil, and related data. Section drawings, plans, and the site notes are all complementary parts of the stratigraphic archive. Using them together with the stratigraphic sequence, the archaeologist is able to carry out the postexcavation analyses of the portable materials taken from the site.
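
The logic of the Harris Matrix lends itself to a directed-graph representation, in which each edge records that one unit lies above, and is therefore later than, another. A minimal Python sketch with hypothetical unit numbers; a topological sort recovers a valid order of deposition:

    from graphlib import TopologicalSorter

    # Hypothetical units of stratification. Each key lies directly above
    # (is later than) the units it maps to, per the Law of Superposition.
    relationships = {
        "001 topsoil": ["002 pit fill"],
        "002 pit fill": ["003 pit cut"],
        "003 pit cut": ["004 occupation layer"],
        "005 wall": ["004 occupation layer"],
        "004 occupation layer": ["006 subsoil"],
    }

    # static_order() lists predecessors first, so the sequence runs from
    # the earliest unit (bottom) to the latest (top). Units with no direct
    # stratigraphic link (here, 003 and 005) may appear in either order.
    print(list(TopologicalSorter(relationships).static_order()))

Units that share no superpositional relationship, like the pit cut and the wall in this sketch, cannot be ordered relative to one another by stratigraphy alone; that is precisely the gap that the artifact dating discussed below is meant to fill.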

Dating the Deposits


Stratification is made up partly of deposits containing objects that can be taken away for study and preservation. These objects help the archaeologist to fix the stratigraphic sequence in terms of years and centuries. Using the stratigraphic sequence as the testing framework, the objects found within each deposit are analyzed and a determination is made about the date at which the deposit was formed. Based upon the date of the latest object in the deposit, it is assumed that the stratum could not have been formed any earlier than that date. A date before which the deposit was made may be found by comparing the unit with others in stratigraphic order. Only when a consistent chronological order can be seen throughout the length of the stratigraphic sequence can a final determination of the date of each deposit be made. The analysis of the artifacts is of paramount importance in obtaining a date in years for units of stratification that are not in superposition, for such units cannot be chronologically associated by any other means. This is true not only within a site but also in comparisons between stratigraphic events at disparate sites, owing to the very limited area of most units of stratification.

Having carried out successful analyses of the artifacts, the archaeologist takes up the last stratigraphic task of any archaeological project. This is the reconstruction of the
development of the landscape of the site through the course of absolute, or calendar, time (see Landscape Archaeology). Having determined the stratigraphic sequence of the site, and knowing through artifact data which disparate units or groups of units may be associated, the archaeologist can rebuild the site, layer by layer, using the single-layer plans.
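
The dating rule described above is the terminus post quem: a deposit can be no earlier than the latest datable object it contains. A minimal Python sketch with made-up coin dates, including the consistency check across the sequence:

    def terminus_post_quem(object_dates):
        # The deposit cannot have formed before its latest object was made.
        return max(object_dates)

    # Made-up mint dates (A.D.) for coins from three superposed deposits,
    # listed from the bottom of the sequence to the top:
    deposits = [[802, 815], [870, 902, 911], [905, 1012]]
    tpqs = [terminus_post_quem(d) for d in deposits]
    print(tpqs)  # [815, 911, 1012]

    # A consistent chronology requires that dates do not decrease upward:
    print(all(a <= b for a, b in zip(tpqs, tpqs[1:])))  # True

If the check fails, the excavator must suspect disturbance, residual or heirloomed objects, or an error in the recorded stratigraphic relationships.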

The Dual Nature of Stratigraphy


This final process demonstrates the duality of archaeological stratification. Materials are made into deposits, which account for the physical accumulation of stratification on the site, an accretion best viewed in a section drawing. The deposits make surfaces on which people lived, while other surfaces, such as those of a ditch, were formed by destroying preexisting stratification, thereby significantly changing the stratigraphic sequence. Each deposit has a surface, but some surfaces are without deposits; thus the interfacial, or immaterial, aspects of stratification usually comprise more than half the stratigraphic record. Without the deposits, the surfaces could not be dated in absolute time. Without the surfaces, or breaks in the stratigraphic record, there would be no stratigraphic sequences of relative time on any archaeological site. By applying stratigraphic methods, the archaeologist recovers both aspects of the stratigraphic history of the site, from which the truth about some of its past may be ascertained. [See also Archaeo-paleomagnetic Dating; Dating the Past; Dendrochronology; Excavation: Introduction; Fission-track Dating; Luminescence Dating; Obsidian Hydration Dating; Potassium-argon Dating; Radiocarbon Dating; Seriation.]

Bibliography
Kathleen M. Kenyon, Beginning in Archaeology (1952). Mortimer Wheeler, Archaeology from the Earth (1954). Philip Barker, Techniques of Archaeological Excavation (1977). Edward C. Harris, Principles of Archaeological Stratigraphy (1979; 2nd ed., 1989). Michael B. Schiffer, Formation Processes of the Archaeological Record (1987). Edward C. Harris, Marley R. Brown III, and Gregory J. Brown, Practices of Archaeological Stratigraphy (1993). Edward Cecil Harris
