
Wealth

From Wikipedia, the free encyclopedia



For the business meaning, see Wealth (economics).
"Prosperity" redirects here. For other uses, see Prosperity (disambiguation).
"Affluent" redirects here. For type of river/stream, see Tributary.

Wealth derives from the Old English word "weal". The term was originally an adjective describing the possession of great qualities.

Contents

• 1 Definition
• 2 Anthropological views
o 2.1 The interpersonal concept
o 2.2 Accumulation of non-necessities
o 2.3 Control of arable land
o 2.4 The capitalist notion
• 3 Sociological view
o 3.1 The upper class
o 3.2 The middle class
o 3.3 The working class
o 3.4 The welfare class
• 4 Other concepts
o 4.1 Global wealth
o 4.2 Not a zero-sum game
o 4.3 The non-normative concept
o 4.4 Non-financial
o 4.5 Sustainable wealth as a measure of well-being
o 4.6 Sustainable wealth
o 4.7 Buckminster Fuller's Notion of Wealth
o 4.8 The limits to wealth creation
o 4.9 The difference between income and wealth
o 4.10 Wealth as measured by time
• 5 Distribution
• 6 Wealth in the form of land
• 7 See also
• 8 External links

• 9 References

Definition
Adam Smith, in his seminal work The Wealth of Nations, described wealth as "the annual produce of the
land and labour of the society". This "produce" is, at its simplest, that which satisfies human needs and
wants of utility. In popular usage, wealth can be described as an abundance of items of economic value, or
the state of controlling or possessing such items, usually in the form of money, real estate and personal
property. An individual who is considered wealthy, affluent, or rich is someone who has accumulated
substantial wealth relative to others in their society or reference group. In economics, net wealth refers to
the value of assets owned minus the value of liabilities owed at a point in time.[citation needed] Wealth can be
categorized into three principal categories: personal property, including homes or automobiles; monetary
savings, such as the accumulation of past income; and the capital wealth of income producing assets,
including real estate, stocks, and bonds.[citation needed] All these delineations make wealth an especially
important part of social stratification. Wealth provides a type of safety net of protection against an
unforeseen decline in one’s living standard in the event of job loss or other emergency and can be
transformed into home ownership, business ownership, or even a college education. [1][not in citation given]

'Wealth' refers to some accumulation of resources, whether abundant or not. 'Richness' refers to an
abundance of such resources. A wealthy (or rich) individual, community, or nation thus has more
resources than a poor one. Richness can also refer to a state in which at least basic needs are met and
abundance is widely shared. The opposite of wealth is destitution. The opposite of richness is poverty.

The term implies a social contract on establishing and maintaining ownership in relation to such items
which can be invoked with little or no effort and expense on the part of the owner (see means of
protection).

The concept of wealth is relative and not only varies between societies, but will often vary between
different sections or regions in the same society. A personal net worth of US $10,000 in most parts of the
United States would certainly not place a person among the wealthiest citizens of that locale. However,
such an amount would constitute an extraordinary amount of wealth in impoverished developing
countries.

Concepts of wealth also vary across time. Modern labor-saving inventions and the development of the
sciences have enabled the poorest sectors of today's society to enjoy a standard of living equivalent if not
superior to the wealthy of the not-too-distant past. This comparative wealth across time is also applicable
to the future; given this trend of human advancement, it is likely that the standard of living that the
wealthiest today enjoy will be considered rude poverty by future generations.

Some of the wealthiest countries in the world are the United States, the United Kingdom, the Republic of
Ireland, Norway, Japan, Kuwait, the United Arab Emirates, South Korea, Germany, the Netherlands,
Belgium, France, Israel, Taiwan, Australia, Singapore, the Philippines, Canada, Finland, Greece, Spain,
Portugal, Sweden, Italy, Denmark, New Zealand, Iceland, Monaco, Luxembourg, Liechtenstein and
Switzerland, the largest of which are members of the G8. All of the above countries, except the United
Arab Emirates and Kuwait, are considered developed countries.

Anthropological views


Anthropology characterizes societies, in part, based on a society's concept of wealth, and the institutional
structures and power used to protect this wealth.[citation needed] Several types are defined below. They can be
viewed as an evolutionary progression.

The interpersonal concept


Early hominids seem to have started with incipient ideas of wealth[citation needed], similar to those of the great
apes. But as tools, clothing, and other mobile infrastructural capital became important to survival
(especially in hostile biomes), ideas such as the inheritance of wealth, political positions, leadership, and
the ability to control group movements (perhaps to reinforce such power) emerged. Neanderthal societies
had pooled funerary rites and cave painting, which implies at least a notion of shared assets that could be
spent or preserved for social purposes. Wealth may have been collective.

Accumulation of non-necessities

Humans as far back as the Cro-Magnons seem to have had clearly defined rulers and status hierarchies.
Digs in Russia have revealed elaborate funeral clothing on a pair of children buried there over
35,000 years ago.[citation needed] This indicates a considerable accumulation of wealth by some individuals or
families. The high artisan skill also suggests the capacity to direct specialized labor to tasks that are not of
any obvious utility to the group's survival.

Control of arable land

The rise of irrigation and urbanization, especially in ancient Sumer and later Egypt, unified the ideas of
wealth and control of land and agriculture.[citation needed] Feeding a large, stable population made universal
cultivation and city-state protection both possible and necessary. The notion of the state and the notion
of war are said to have emerged at this time. Tribal cultures were formalized into what we would call
feudal systems, and many rights and obligations were assumed by the monarchy and related aristocracy.
Protection of infrastructural capital built up over generations became critical: city walls, irrigation
systems, sewage systems, aqueducts, buildings, all impossible to replace within a single generation, and
thus a matter of social survival to maintain. The social capital of entire societies was often defined in
terms of its relation to infrastructural capital (e.g. castles or forts or an allied monastery, cathedral or
temple), and natural capital, (i.e. the land that supplied locally grown food). Agricultural economics
continues these traditions in the analyses of modern agricultural policy and related ideas of wealth, e.g.
the ark of taste model of agricultural wealth.

The capitalist notion

Banknotes from all around the world donated by visitors to the British Museum, London.

Industrialization emphasized the role of technology. Many jobs were automated. Machines replaced some
workers while other workers became more specialized. Labour specialization became critical to economic
success. However, physical capital, as it came to be known, consisting of both the natural capital (raw
materials from nature) and the infrastructural capital (facilitating technology), became the focus of the
analysis of wealth. Adam Smith saw wealth creation as the combination of materials, labour, land, and
technology in such a way as to capture a profit (excess above the cost of production).[2] The theories of
David Ricardo, John Locke, John Stuart Mill and, later, Karl Marx, in the 18th and 19th centuries,
built on these views of wealth that we now call classical economics and Marxian economics (see labor
theory of value). Marx distinguishes in the Grundrisse between material wealth and human wealth,
defining human wealth as "wealth in human relations"; land and labour were the source of all material
wealth.

Sociological view


“Wealth provides an important mechanism of the intergenerational transmission of inequality.”[3]
Approximately half of the wealthiest people in America inherited family fortunes, but the effect of
inherited wealth can be seen on a more modest level as well. For example, a couple that buys a house with
financial help from their parents, or a student whose college education is paid for, benefits directly from
the accumulated wealth of previous generations. [4]

As a result of different conditions of life, members of different social classes view the world in many
different ways. This allows them to develop different “conceptions of social reality, different aspirations
and hopes and fears, different conceptions of the desirable.” [5] The way different classes in society view
wealth varies, and these diverse views are a fundamental dividing line among the classes. Today the
concentration of wealth in America is extremely skewed, even more so than that of income. [6] In 1996
the Fed survey reported that the net worth of the top 1 percent was approximately equal to that of the
bottom 90 percent. [7]

The upper class

Inheritance establishes different starting lines. The majority of those in the upper class have inherited their
wealth and place a greater emphasis on wealth than on income. Upper class children are taught about
investments and accumulation. They are trained and conditioned, technically and philosophically, to
handle the wealth that they will inherit and to earn more later in life. Wealth and membership in the upper
class require significant prior preparation and familiarization; if not trained correctly, children may easily
squander immense wealth, though this rarely happens. They use the power and freedom that come with
wealth to leverage opportunities, which allows them more flexibility in their lives and, as a result, fewer
worries.[8]

The accumulation of wealth fosters a growth of power, which in turn creates privileges conducive to more
wealth. Children of the upper class are socialized on how to manage this power and channel this privilege
in many different forms such as gaining access to others' capital and to critical information. It is by
accessing various edifices of information, associates, procedures and auspicious rules that the upper class
are able to maintain their wealth and pass it along, and not necessarily because of an extreme work ethic.
[9]

The middle class

There is a distinct difference in views about wealth among the middle class compared to those of the
upper class. Where upper class beliefs focus on wealth, the middle class places a greater emphasis on
income. The middle class views wealth as something for emergencies, more of a cushion. This class is
composed of people who were raised in families that typically owned their own home, planned ahead and
stressed the importance of education and achievement. They earn a significant amount of income and also
have significant amounts of consumption. However, savings (deferred consumption) and investments are
very limited, apart from retirement pensions and homeownership. They have been socialized to
accumulate wealth through structured, institutionalized arrangements. Without this set structure, asset
accumulation would likely not occur. [10]
The working class

The working class has fewer options for advancement and wealth accumulation than the upper and middle
classes. Its situation can be characterized by limited income, unstable employment and an insignificant
retirement pension account. Access to structured asset accumulation programs, such as retirement
pensions, is not readily available to those in this class, and as a result little of their earnings is actually
saved or invested. Consequently, there is a limited financial cushion available in times of hardship such as
a divorce or major illness. Like their parents, children who lack assets are less likely to plan for the
future. [11]

The welfare class

Those with the least amount of wealth are the welfare poor. Wealth accumulation for this class is to some
extent prohibited: people who receive AFDC transfers cannot own more than a trivial amount of assets if
they are to remain eligible for income transfers. Most of the institutions that the welfare poor encounter
discourage any accumulation of assets. [12]

Other concepts


Global wealth

Michel Foucault commented that the concept of Man as an aggregate did not exist before the 18th
century. The shift from the analysis of an individual's wealth to the concept of an aggregation of all men
is implied in the concepts of political economy and then economics. This transition took place as a result
of a cultural bias inherent in the Enlightenment. Wealth was seen as an objective fact of living as a human
being in a society.

Not a zero-sum game

Regardless of whether one defines wealth as the sum total of all currency, the M1 money supply, or a
broader measure which includes money, securities, and property, the supply of wealth, while limited, is
not fixed. Thus, there is room for people to gain wealth without taking from others, and wealth is not
necessarily a zero-sum game, though short-term effects and some economic situations may make it appear
to be so. Many things can affect the creation and destruction of wealth including size of the work force,
production efficiency, available resource endowments, inventions, innovations, and availability of capital.

However, at any given point in time, there is a limited amount of wealth which exists. That is to say, it is
fixed in the short term. People who study short term issues see wealth as a zero-sum game and
concentrate on the distribution of wealth, whereas people who study long-term issues see wealth as a non-
zero sum game and concentrate on wealth creation. Other people put equal emphasis on both the creation
and the distribution of wealth. It has been theorized, for example, by Robert Wright, among others, that
society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent.

One's attitude towards this issue affects the design of the social or economic system that one prefers.

The non-normative concept

Neoclassical economics tries to be non-normative for the most part, to be objective and free of value
statements. If it is successful, then wealth would be defined in such a way that it would not be
preconceived to be either positive or negative. This has not always been the case: in prior eras wealth was
assumed to be a set of means of persuasion.

It was often seen as self-interested arguments by the powerful explaining why they should remain in
power. In The Prince, Niccolò Machiavelli had commented in that earlier era on the prudent use of
wealth, and the need to tolerate some cruelty and vice in the use of it, in order to maintain appearances of
strength and power.

Jane Jacobs in the 1960s and 70s offered the observation that there were two different moral syndromes
that were common attitudes to wealth and power, and that the one more associated with guardianship did
in fact require a degree of ostentatious conspicuous consumption if only to impress others.

This logic is almost entirely absent from neoclassical economics, which in its extreme form argues for the
abolition of any political economy apart from the service markets wherein favours may be bought and
sold at will, including political ones - the so-called public choice theory popular in the U.S.A. While it
is entirely likely that such assumptions apply in the subcultures that dominate modern discourse on
technical economics and especially macroeconomics, the less technical notions of wealth and power that
are implied in the older theories of economics and ideas of wealth, still continue as daily facts of life.

Non-financial

The 21st century view is that many definitions of wealth can exist and continue to co-exist. Some people
talk about measuring the more general concept of well-being.[who?] This is a difficult process, but many
believe it possible - human development theory being devoted to this. Furthermore, Manoj Sharma, the
head of DifferWorld's faculty, makes a case for factoring in both financial wealth and non-financial
wealth as a measure of "True Wealth", defined as a combination of financial, mental, emotional, physical
and spiritual wealth, and how it is channeled towards the general good of humanity. Although these
alternative measures of wealth exist, they tend to be overshadowed and influenced by the dominant money
supply and banking system. For more on the modern notions of wealth and their interaction see the article
on political economy.

Sustainable wealth as a measure of well-being

Sustainable wealth is defined by the author of Creating Sustainable Wealth, Elizabeth M Parker, as
meeting the individual’s personal, social and environmental needs without compromising the ability of
future generations to meet their own needs. This definition of sustainable wealth marries sustainability, as
defined by the Brundtland Commission, with wealth defined as a measure of well-being.

Sustainable wealth

According to the author of Wealth Odyssey, Larry R. Frank Sr, wealth is what sustains you when you are
not working. It is net worth, not income, that matters when you retire or are unable to work (premature
loss of income due to injury or illness is actually a risk management issue). The key question is how long
a given amount of wealth would last. Ongoing withdrawal research puts sustainable withdrawal rates
anywhere between approximately 3 percent and 8 percent, depending on the research’s assumptions.
Time - how long wealth might last - is then a function of how many times the percentage withdrawal rate
divides into the whole. Example: withdrawing 3 percent a year makes assets last 100 / 3 = 33.3 years;
4 percent, 25 years; 8 percent, 12.5 years; and so on. This ignores any growth, which presumably would
be used to offset the effects of inflation. Growth greater than the withdrawal rate would extend the time
assets may last, while negative growth would reduce it. Clearly a lower withdrawal rate is more
conservative. Knowing this also helps you determine how much wealth you need. Example: if you know
you will need $40,000 a year and use a 4 percent withdrawal rate, then you need $40,000 / 0.04 =
$1,000,000; at a 5 percent rate you would need $800,000. This simple “wealth rule” helps you
estimate both the time and the amount.
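
As a rough illustration, the arithmetic of this “wealth rule” can be sketched in a few lines of Python (the
function names are illustrative, not from the source):

def years_wealth_lasts(withdrawal_rate):
    # Years a portfolio lasts at a flat withdrawal rate, ignoring growth.
    return 1.0 / withdrawal_rate

def wealth_needed(annual_spending, withdrawal_rate):
    # Wealth required to fund annual_spending at the given withdrawal rate.
    return annual_spending / withdrawal_rate

print(years_wealth_lasts(0.03))      # about 33.3 years
print(years_wealth_lasts(0.08))      # 12.5 years
print(wealth_needed(40_000, 0.04))   # 1000000.0
print(wealth_needed(40_000, 0.05))   # 800000.0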

Buckminster Fuller's Notion of Wealth

In section 1075.25 of Synergetics, Buckminster Fuller defined wealth as "the measurable degree of
established operative advantage." In Critical Path[13] Fuller described his notion as that which "realistically
protected, nurtured, and accommodated X numbers of human lives for Y number of forward days."
Philosophically, Fuller viewed "real wealth" as human know-how and know-what, which, he pointed out,
are always increasing.

The limits to wealth creation

There is a debate in the economic literature, usually referred to as the limits to growth debate, in which
ecological impact of growth and wealth creation is considered. Many of the wealth creating activities
mentioned above (cutting down trees, hunting, farming) have an impact on the environment around us.
Sometimes the impact is positive (for example, hunting when herd populations are high) and sometimes
the impact is negative (for example, hunting when herd populations are low).

Most researchers feel that sustained environmental impacts can have an effect on the whole ecosystem.
They claim that the accumulated impacts on the ecosystem put a theoretical limit on the amount of wealth
that can be created. They draw on archeology to cite examples of cultures that they claim have
disappeared because they grew beyond the ability of their ecosystems to support them.

Others are more optimistic (or, as the first group might claim, more naïve). They claim that although
unrestrained wealth-creating activities may have localized environmental impact, large scale ecological
effects are either minor or non-existent; or that even if global scale ecological effects exist, human
ingenuity will always find ways of adapting to them, so that there is no ecological limit to the amount of
growth or wealth that this planet will sustain[citation needed].

More fundamentally, the limited surface of Earth places limits on the space, population and natural
resources available to the human race, at least until such time as large-scale space travel is a realistic
proposition.

The difference between income and wealth

Wealth is a stock, which can be represented on an accounting balance sheet: a total accumulation over
time that can be seen in a snapshot. Income is a flow, a rate of change, as represented on an
income/expense or cash-flow statement. Income represents the increase in wealth (as can be quantified on
a cash-flow statement), expenses the decrease. If wealth is limited to net worth, then mathematically net
income (income minus expenses) can be thought of as the first derivative of wealth, representing the
change in wealth over a period of time.
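
The stock/flow relationship can be illustrated with a short Python sketch over discrete periods (all figures
are made up for illustration):

# Wealth is a stock; income and expenses are flows.
# Each period: wealth = wealth + (income - expenses)
incomes  = [5000, 5200, 5100]   # per-period income (flow)
expenses = [4500, 4800, 5300]   # per-period expenses (flow)

wealth = 100_000                # opening net worth (stock)
for income, expense in zip(incomes, expenses):
    net_income = income - expense   # the "first derivative" of wealth
    wealth += net_income
    print(wealth)                   # 100500, 100900, 100700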

Wealth as measured by time

Wealth has also been defined as "the amount of time an individual can maintain his current lifestyle for,
without any new income." For example, if a person has $1000 and their lifestyle dictates $1000 per week
of expenses, then their wealth is measured as 1 week. Under this definition, a person with $10,000 of
savings and expenses of $1000 per week (10 weeks of wealth) would be considered wealthier than a
person with $20,000 of savings and expenses of $5000 per week (4 weeks of wealth).
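
A minimal Python sketch of this time-based measure, using the article's own figures (the function name is
illustrative):

def weeks_of_wealth(savings, weekly_expenses):
    # Wealth measured as time: how long savings sustain the current lifestyle.
    return savings / weekly_expenses

print(weeks_of_wealth(10_000, 1_000))  # 10.0 weeks
print(weeks_of_wealth(20_000, 5_000))  # 4.0 weeks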

Distribution
Main article: Distribution of wealth

Capitalism asserts that all wealth is earned, not distributed; it can be distributed only after it is forcibly
seized from the earners (usually in the form of tax). This section is therefore concerned with the
anti-capitalist conception of wealth, namely that wealth is collective and is distributed among individuals.

Different societies have different opinions about wealth distribution and about the obligations related to
wealth, but from the era of the tribal society to the modern era, there have been means of moderating the
acquisition and use of wealth.

In ecologically rich areas such as those inhabited by the Haida in the Cascadia ecoregion, traditions like
potlatch kept wealth relatively evenly distributed, requiring leaders to buy continued status and respect
with giveaways of wealth to the poorer members of society. Such traditions make what are today often
seen as government responsibilities into matters of personal honour.

In modern societies, the tradition of philanthropy exists. Large donations from funds created by wealthy
individuals are highly visible, although small contributions by many people also offer a wide variety of
support within a society. The continued existence of organizations which survive on donations indicates
that modern Western society has at least some level of philanthropy.

Furthermore, in today's societies, much wealth distribution and redistribution is the result of government
policies and programs. Government policies like the progressivity or regressivity of the tax system can
redistribute wealth to the poor or the rich respectively. Government programs like “disaster relief”
transfer wealth to people who have suffered loss due to a natural disaster. Social security transfers wealth
from the young to the old. Fighting a war transfers wealth to certain sectors of society. Public education
transfers wealth to families with children in public schools. Public road construction transfers wealth from
people who do not use the roads to those who do (and to those who build the roads). Certain people
resent having to contribute to some or all of these programs, and disparagingly label them social
engineering.

Like all human activities, wealth redistribution cannot achieve 100% efficiency. The act of redistribution
itself has certain costs associated with it, due to the necessary maintenance of the infrastructure that is
required to collect the wealth in question and then redistribute it. Different people on different sides of the
political spectrum have different views on this issue. Some see it as unacceptable waste, while others see
it as a natural fact of life, which is inevitable in all kinds of inter-human relations.

Proponents of the supply-side theory of "trickle-down" economics claim that it is a form of time-deferred
philanthropy. The theory is that newly created wealth eventually "trickles down" to all strata of society.
The argument goes that although wealth is created primarily by the wealthy, they will tend to reinvest
their wealth, and this process will create even more wealth. As the economy grows, it is said that more
and more people will share in the newly created wealth. A similar argument can be made in the case of
Keynesian economics. According to this theory, government redistributions and expenditures have a
multiplier effect that stimulates the economy and creates wealth. Supply-siders claim that wealth is
created primarily by investment (supply), whereas Keynesians claim that wealth is driven by expenditure
(demand). Today most economists agree that growth can be stimulated by either the supply or demand
side, and some of them argue that these are really two sides of the same coin, in the sense that you seldom
get one without the other. Nevertheless, the dispute between supply-side and Keynesian economics is of
continuing interest.

Stresses within social distribution systems can be understood within a broad theory of political economy,
where tradeoffs between means of protection, persuasion and production, and valuations of different
styles of capital, are described. Simply put, if the rich do not at least once in a while give away, of their
own free will, at least a small part of their wealth to the poor, then the poor are much more likely to rebel
against the rich.

Wealth in the form of land


Many indigenous cultures, being either nomadic or communitarian in nature, rejected the notion of the
private ownership of land wealth. In the western tradition, the concepts of owning land and accumulating
wealth in the form of land were engendered in the rise of the first states, for a primary service and power
of government was, and is to this day, the awarding and adjudication of land use rights.

John Locke offered a justification of land ownership: because we admix our labour with the land, we
thereby deserve the right to control the use of the land and benefit from the product of that land (subject
to his Lockean proviso of "at least where there is enough, and as good left in common for others").

Additionally, in our post-agricultural society this argument has many critics (including those influenced
by Georgist and geolibertarian ideas) who argue that since land, by definition, is not a product of human
labor, any claim of private property in it is a form of theft; as David Lloyd George observed, "to prove a
legal title to land one must trace it back to the man who stole it."

Many older ideas have resurfaced in the modern notions of ecological stewardship, bioregionalism,
natural capital, and ecological economics.

See also


• Affluence in the United States


• Capital accumulation
• Distribution of wealth
• Household income in the United States
• Income in the United Kingdom
• Lists of billionaires
• Personal income in the United States
• Poverty
• Private banking
• Surplus product
• Value added
• Wealth (economics)
• Wealth condensation
• Wealth and religion

Property
From Wikipedia, the free encyclopedia



This article is about the legal or moral ownership rights. For other uses, see Property (disambiguation).


Property is any physical or virtual entity that is owned by an individual. An owner of property has the
right to consume, sell, mortgage, transfer and exchange his or her property.[1][2][3] Important types of
property include real property (land), personal property (other physical possessions), and arguably
intellectual property (rights over artistic creations, inventions, etc.). A title, or a right of ownership, is
associated with property that establishes the relation between the goods/services and other individuals or
groups, assuring the owner the right to dispense with the property in a manner he or she sees fit. Some
philosophers assert that property rights arise from social convention. Others find origins for them in
morality or natural law (e.g. Saint Irenaeus).

Contents

• 1 Use of the term


• 2 General characteristics
• 3 Theories of property
• 4 Property in philosophy
o 4.1 Ancient philosophy
o 4.2 Pre-industrial English philosophy
▪ 4.2.1 Thomas Hobbes (1600s)
▪ 4.2.2 James Harrington (1600s)
▪ 4.2.3 Robert Filmer (1600s)
▪ 4.2.4 John Locke (1600s)
▪ 4.2.5 William Blackstone (1700s)
▪ 4.2.6 David Hume (1700s)
o 4.3 Critique and response
▪ 4.3.1 Charles Comte - legitimate origin of property
▪ 4.3.2 Pierre Proudhon - property is theft
▪ 4.3.3 Frédéric Bastiat - property is value
o 4.4 Contemporary views
• 5 Types of property
• 6 What can be property?
o 6.1 Rights of use as property
• 7 Who can be an owner?
• 8 References
• 9 See also
• 10 References

• 11 External links and references

Use of the term


Various scholarly communities (e.g., law, economics, anthropology, sociology) may treat the concept
more systematically, but definitions vary within and between fields. Scholars in the social sciences
frequently conceive of property as a bundle of rights. They stress that property is not a relationship
between people and things, but a relationship between people with regard to things.

Public property is any property that is controlled by a state or by a whole community. Private property is
any property that is not public property. Private property may be under the control of a single individual
or of a group of individuals collectively.[4] Some philosophers, like Karl Marx, use the term to describe a
social relationship between those who sell their labor power and those who buy it.

General characteristics


Modern property rights conceive of ownership and possession as belonging to legal individuals, even if
the legal individual is not a real person. Corporations, for example, have legal rights similar to American
citizens, including many of their constitutional rights. Therefore, the corporation is a juristic person or
artificial legal entity, which some refer to as "corporate personhood".

Property rights are protected in the current laws of states usually found in the form of a Constitution or a
Bill of Rights. The fifth and the fourteenth amendments to the United States constitution, for example,
provide explicitly for the protection of private property:

The Fifth Amendment states:

Nor be deprived of life, liberty, or property, without due process of law; nor shall private property
be taken for public use, without just compensation.
The Fourteenth Amendment states:

No State shall make or enforce any law which shall abridge the privileges or immunities of
citizens of the United States; nor shall any State deprive any person of life, liberty, or property,
without due process of law.

Protection is also found in the United Nations' Universal Declaration of Human Rights, Article 17, and in
the French Declaration of the Rights of Man and of the Citizen, Article XVII, and in the European
Convention on Human Rights (ECHR), Protocol 1.

Property is usually thought of in terms of a bundle of rights as defined and protected by the local
sovereignty. Ownership, however, does not necessarily equate with sovereignty. If ownership gave
supreme authority it would be sovereignty, not ownership. These are two different concepts.

Traditional principles of property rights include:

1. control of the use of the property


2. the right to any benefit from the property (examples: mining rights and rent)
3. a right to transfer or sell the property
4. a right to exclude others from the property.

Traditional property rights do not include:

1. uses that unreasonably interfere with the property rights of another private party (the right of quiet
enjoyment). [See Nuisance]
2. uses that unreasonably interfere with public property rights, including uses that interfere with
public health, safety, peace or convenience. [See Public Nuisance, Police Power]

Legal systems have evolved to cover the transactions and disputes which arise over the possession, use,
transfer and disposal of property, most particularly involving contracts. Positive law defines such rights,
and a judiciary is used to adjudicate and to enforce.

In his classic text, "The Common Law", Oliver Wendell Holmes describes property as having two
fundamental aspects. The first is possession, which can be defined as control over a resource based on the
practical inability of another to contradict the ends of the possessor. The second is title, which is the
expectation that others will recognize rights to control the resource, even when it is not in possession. He
elaborates the differences between these two concepts, and proposes a history of how they came to be
attached to individuals, as opposed to families or entities such as the church.

According to Adam Smith, the expectation of profit from "improving one's stock of capital" rests on
private property rights. It is a belief central to capitalism that property rights encourage the property
holders to develop the property, generate wealth, and efficiently allocate resources based on the operation
of the market. From this evolved the modern conception of property as a right which is enforced by
positive law, in the expectation that this would produce more wealth and better standards of living.

• Classical liberals, Objectivists, and related traditions

"Just as man can't exist without his body, so no rights can exist without the right to translate one's
rights into reality, to think, to work and keep the results, which means: the right of property." (Ayn
Rand, Atlas Shrugged)
Most thinkers from these traditions subscribe to the labor theory of property. They hold that you
own your own life, and it follows that you must own the products of that life, and that those
products can be traded in free exchange with others.
"Every man has a property in his own person. This nobody has a right to, but himself." (John
Locke, Second Treatise on Civil Government)
"Life, liberty, and property do not exist because men have made laws. On the contrary, it was the
fact that life, liberty, and property existed beforehand that caused men to make laws in the first
place." (Frédéric Bastiat, The Law)
"The reason why men enter into society is the preservation of their property." (John Locke,
Second Treatise on Civil Government)

• Socialism's fundamental principles are centered on a critique of this concept, stating, among other
things, that the cost of defending property is higher than the returns from private property
ownership, and that even when property rights encourage the property-holder to develop his
property, generate wealth, etc., he will only do so for his own benefit, which may not coincide
with the benefit of other people or society at large.

• Libertarian socialism generally accepts property rights, but with a short abandonment time period.
In other words, a person must make (more or less) continuous use of the item or else he loses
ownership rights. This is usually referred to as "possession property" or "usufruct." Thus, in this
usufruct system, absentee ownership is illegitimate, and workers own the machines they work
with.

• Communism argues that only collective ownership of the means of production through a polity
(though not necessarily a state) will assure the minimization of unequal or unjust outcomes and the
maximization of benefits, and that therefore private property (which in communist theory is
limited to capital) should be abolished.

Both communism and some kinds of socialism have also upheld the notion that private property is
inherently illegitimate. This argument is centered mainly on the idea that the creation of private property
will always benefit one class over another, giving way to domination through the use of this private
property. Communists are naturally not opposed to personal property which is "Hard-won, self-acquired,
self-earned" (Communist Manifesto), by members of the proletariat.

Not every person, or entity, with an interest in a given piece of property may be able to exercise all of the
rights mentioned a few paragraphs above. For example, as a lessee of a particular piece of property, you
may not sell the property, because the tenant is only in possession, and does not have title to transfer.
Similarly, while you are a lessee, the owner cannot use his or her right to exclude to keep you from the
property. (Or, if he or she does, you may perhaps be entitled to stop paying rent or perhaps sue to regain
access.)

Further, property may be held in a number of forms, e.g. joint ownership, community property, sole
ownership, lease, etc. These different types of ownership may complicate an owner's ability to exercise
his or her rights unilaterally. For example if two people own a single piece of land as joint tenants, then
depending on the law in the jurisdiction, each may have limited recourse for the actions of the other. For
example, one of the owners might sell his or her interest in the property to a stranger that the other owner
does not particularly like.

Theories of property


Many theories of property exist. Perhaps one of the most popular is the natural rights definition of
property rights as advanced by John Locke. Locke advanced the theory that when one mixes one’s labor
with nature, one gains ownership of that part of nature with which the labor is mixed, subject to the
limitation that there should be "enough, and as good, left in common for others".

In Rerum novarum, Pope Leo XIII wrote "It is surely undeniable that, when a man engages
in remunerative labor, the impelling reason and motive of his work is to obtain property, and thereafter to
hold it as his very own."

Anthropology studies the diverse systems of ownership, rights of use and transfer, and possession[5] under
the term "theories of property". Western legal theory is based, as mentioned, on the owner of property
being a legal individual. However, not all property systems are founded on this basis.

In every culture studied, ownership and possession are the subject of custom and regulation, and "law"
where the term can meaningfully be applied. Many tribal cultures balance individual ownership with the
laws of collective groups: tribes, families, associations and nations. For example the 1839 Cherokee
Constitution frames the issue in these terms:

Sec. 2. The lands of the Cherokee Nation shall remain common property; but the improvements
made thereon, and in the possession of the citizens respectively who made, or may rightfully be in
possession of them: Provided, that the citizens of the Nation possessing exclusive and indefeasible
right to their improvements, as expressed in this article, shall possess no right or power to dispose
of their improvements, in any manner whatever, to the United States, individual States, or to
individual citizens thereof; and that, whenever any citizen shall remove with his effects out of the
limits of this Nation, and become a citizen of any other government, all his rights and privileges as
a citizen of this Nation shall cease: Provided, nevertheless, That the National Council shall have
power to re-admit, by law, to all the rights of citizenship, any such person or persons who may, at
any time, desire to return to the Nation, on memorializing the National Council for such
readmission.

Communal property systems describe ownership as belonging to the entire social and political unit, while
corporate systems describe ownership as being attached to an identifiable group with an identifiable
responsible individual. The Roman property law was based on such a corporate system.

Different societies may have different theories of property for differing types of ownership. Pauline Peters
argued that property systems are not isolable from the social fabric, and notions of property may not be
stated as such, but instead may be framed in negative terms: for example the taboo system among
Polynesian peoples.

Property in philosophy



In medieval and Renaissance Europe the term "property" essentially referred to land. Much rethinking
was necessary in order for land to come to be regarded as only a special case of the property genus. This
rethinking was inspired by at least three broad features of early modern Europe: the surge of commerce,
the breakdown of efforts to prohibit interest (then called "usury"), and the development of centralized
national monarchies.

Ancient philosophy

Urukagina, the king of the Sumerian city-state Lagash, established the first laws that forbade compelling
the sale of property. The Cyrus cylinder of Cyrus the Great, founder of the Achaemenid Persian Empire,
documents the protection of property rights.[6]

The Ten Commandments shown in Exodus 20:2-17 and Deuteronomy 5:6-21 stated that the Israelites
were not to steal. These texts, written in approximately 1300 B.C., were an early blanket protection of
private property.

Aristotle, in Politics, advocates "private property." In one of the first known expositions of the tragedy of
commons he says, "that which is common to the greatest number has the least care bestowed upon it.
Every one thinks chiefly of his own, hardly at all of the common interest; and only when he is himself
concerned as an individual." In addition he says that when property is common, there are natural problems
that arise due to differences in labor: "If they do not share equally enjoyments and toils, those who labor
much and get little will necessarily complain of those who labor little and receive or consume much. But
indeed there is always a difficulty in men living together and having all human relations in common, but
especially in their having common property." (Politics, 1261b34)

Pre-industrial English philosophy

Thomas Hobbes (1600s)

The principal writings of Thomas Hobbes appeared between 1640 and 1651—during and immediately
following the war between forces loyal to King Charles I and those loyal to Parliament. In his own words,
Hobbes' reflection began with the idea of "giving to every man his own," a phrase he drew from the
writings of Cicero. But he wondered: How can anybody call anything his own? In that unsettled time and
place it perhaps was natural that he would conclude: My own can only truly be mine if there is one
unambiguously strongest power in the realm, and that power treats it as mine, protecting its status as such.

James Harrington (1600s)

A contemporary of Hobbes, James Harrington, reacted differently to the same tumult; he considered
property natural but not inevitable. The author of Oceana, he may have been the first political theorist to
postulate that political power is a consequence, not the cause, of the distribution of property. He said that
the worst possible situation is one in which the commoners have half a nation's property, with crown and
nobility holding the other half—a circumstance fraught with instability and violence. A much better
situation (a stable republic) will exist once the commoners own most property, he suggested.

In later years, the ranks of Harrington's admirers would include American revolutionary and founder John
Adams.

Robert Filmer (1600s)

Another member of the Hobbes/Harrington generation, Sir Robert Filmer, reached conclusions much like
Hobbes', but through Biblical exegesis. Filmer said that the institution of kingship is analogous to that of
fatherhood, that subjects are but children, whether obedient or unruly, and that property rights are akin to
the household goods that a father may dole out among his children—his to take back and dispose of
according to his pleasure.

John Locke (1600s)

In the following generation, John Locke sought to answer Filmer, creating a rationale for a balanced
constitution in which the monarch would have a part to play, but not an overwhelming part. Since Filmer's
views essentially require that the Stuart family be uniquely descended from the patriarchs of the Bible,
and since even in the late seventeenth century that was a difficult view to uphold, Locke attacked Filmer's
views in his First Treatise on Government, freeing him to set out his own views in the Second Treatise on
Civil Government. Therein, Locke imagined a pre-social world, the unhappy residents of which create a
social contract. They would, he allowed, create a monarchy, but its task would be to execute the will of an
elected legislature.

"To this end" he wrote, meaning the end of their own long life and peace, "it is that men give up all their
natural power to the society they enter into, and the community put the legislative power into such hands
as they think fit, with this trust, that they shall be governed by declared laws, or else their peace, quiet,
and property will still be at the same uncertainty as it was in the state of nature."

Even when it keeps to proper legislative form, though, Locke held that there are limits to what a
government established by such a contract might rightly do.

"It cannot be supposed that [the hypothetical contractors] they should intend, had they a power so to do, to give any
one or more an absolute arbitrary power over their persons and estates, and put a force into the magistrate's hand to
execute his unlimited will arbitrarily upon them; this were to put themselves into a worse condition than the state of
nature, wherein they had a liberty to defend their right against the injuries of others, and were upon equal terms of
force to maintain it, whether invaded by a single man or many in combination. Whereas by supposing they have
given up themselves to the absolute arbitrary power and will of a legislator, they have disarmed themselves, and
armed him to make a prey of them when he pleases..."

Note that both "persons and estates" are to be protected from the arbitrary power of any magistrate,
inclusive of the "power and will of a legislator." In Lockean terms, depredations against an estate are just
as plausible a justification for resistance and revolution as are those against persons. In neither case are
subjects required to allow themselves to become prey.

To explain the ownership of property Locke advanced a labor theory of property.

William Blackstone (1700s)

In the 1760s, William Blackstone sought to codify the English common law. In his famous Commentaries
on the Laws of England he wrote that "every wanton and causeless restraint of the will of the subject,
whether produced by a monarch, a nobility, or a popular assembly is a degree of tyranny."

How should such tyranny be prevented or resisted? Through property rights, Blackstone thought, which is
why he emphasized that indemnification must be awarded a non-consenting owner whose property is
taken by eminent domain, and that a property owner is protected against physical invasion of his property
by the laws of trespass and nuisance. Indeed, he wrote that a landowner is free to kill any stranger on his
property between dusk and dawn, even an agent of the King, since it isn't reasonable to expect him to
recognize the King's agents in the dark.[citation needed]

David Hume (1700s)


In contrast to the figures discussed in this section thus far, David Hume lived a relatively quiet life in a
society that had settled down to a relatively stable social and political structure. He lived the life of a
solitary writer until 1763 when, at 52 years of age, he went off to Paris to work at the British embassy.

In contrast, one might think, to his outrage-generating works on religion and his skeptical views in
epistemology, Hume's views on law and property were quite conservative.

He did not believe in hypothetical contracts, or in the love of mankind in general, and sought to ground
politics upon actual human beings as one knows them. "In general," he wrote, "it may be affirmed that
there is no such passion in human mind, as the love of mankind, merely as such, independent of personal
qualities, or services, or of relation to ourselves." Existing customs should not lightly be disregarded,
because they have come to be what they are as a result of human nature. With this endorsement of custom
comes an endorsement of existing governments, because he conceived of the two as complementary: "A
regard for liberty, though a laudable passion, ought commonly to be subordinate to a reverence for
established government."

These views led to a position on property rights that might today be described as legal positivism: there
are property rights because of, and to the extent that, the existing law, supported by social customs,
secures them. He offered some practical home-spun advice on the general subject, though, as when he referred to
avarice as "the spur of industry," and expressed concern about excessive levels of taxation, which
"destroy industry, by engendering despair."

Critique and response

By the mid 19th century, the industrial revolution had transformed England and had begun in France. The
established conception of what constitutes property expanded beyond land to encompass scarce goods in
general. In France, the revolution of the 1790s had led to large-scale confiscation of land formerly owned
by church and king. The restoration of the monarchy led to claims by those dispossessed to have their
former lands returned. Furthermore, the labor theory of value popularized by classical economists such as
Adam Smith[citation needed] and David Ricardo was utilized by a new ideology called socialism to critique the
relations of property to other economic issues, such as profit, rent, interest, and wage-labor. Thus,
property was no longer an esoteric philosophical question, but a political issue of substantial concern.

Charles Comte - legitimate origin of property

Charles Comte, in Traité de la propriété (1834), attempted to justify the legitimacy of private property in
response to the Bourbon Restoration. According to David Hart, Comte had three main points: "firstly, that
interference by the state over the centuries in property ownership has had dire consequences for justice as
well as for economic productivity; secondly, that property is legitimate when it emerges in such a way as
not to harm anyone; and thirdly, that historically some, but by no means all, property which has evolved
has done so legitimately, with the implication that the present distribution of property is a complex
mixture of legitimately and illegitimately held titles." (The Radical Liberalism of Charles Comte and
Charles Dunoyer)

Comte, as Proudhon would later do, rejected Roman legal tradition with its toleration of slavery. He
posited a communal "national" property consisting of non-scarce goods, such as land in ancient hunter-
gatherer societies. Since agriculture was so much more efficient than hunting and gathering, private
property appropriated by someone for farming left remaining hunter-gatherers with more land per person,
and hence did not harm them. Thus this type of land appropriation did not violate the Lockean proviso -
there was "still enough, and as good left." Comte's analysis would be used by later theorists in response to
the socialist critique on property.
Pierre Proudhon - property is theft

Main articles: What is Property? and Property is theft!

In his 1840 treatise What is Property?, Pierre Proudhon answers with "Property is theft!" In natural
resources, he sees two types of property, de jure property (legal title) and de facto property (physical
possession), and argues that the former is illegitimate. Proudhon's conclusion is that "property, to be just
and possible, must necessarily have equality for its condition."

His analysis of the product of labor upon natural resources as property (usufruct) is more nuanced. He
asserts that land itself cannot be property, yet it should be held by individual possessors as stewards of
mankind with the product of labor being the property of the producer. Proudhon reasoned that any wealth
gained without labor was stolen from those who labored to create that wealth. Even a voluntary contract
to surrender the product of labor to an employer was theft, according to Proudhon, since the controller of
natural resources had no moral right to charge others for the use of that which he did not labor to create
and therefore did not own.

Proudhon's theory of property greatly influenced the budding socialist movement, inspiring anarchist
theorists such as Mikhail Bakunin who modified Proudhon's ideas, as well as antagonizing theorists like
Karl Marx.

Frédéric Bastiat - property is value

Frédéric Bastiat's main treatise on property can be found in chapter 8 of his book Economic Harmonies
(1850). In a radical departure from traditional property theory, he defines property not as a physical
object, but rather as a relationship between people with respect to an object. Thus, saying one owns a
glass of water is merely verbal shorthand for I may justly gift or trade this water to another person. In
essence, what one owns is not the object but the value of the object. By "value," Bastiat apparently means
market value; he emphasizes that this is quite different from utility. "In our relations with one another, we
are not owners of the utility of things, but of their value, and value is the appraisal made of reciprocal
services."

Strongly disputing Proudhon's equality-based argument, Bastiat theorizes that, as a result of technological
progress and the division of labor, the stock of communal wealth increases over time; that the hours of
work an unskilled laborer expends to buy e.g. 100 liters of wheat decreases over time, thus amounting to
"gratis" satisfaction. Thus, private property continually destroys itself, becoming transformed into
communal wealth. The increasing proportion of communal wealth to private property results in a
tendency toward equality of mankind. "Since the human race started from the point of greatest poverty,
that is, from the point where there were the most obstacles to be overcome, it is clear that all that has
been gained from one era to the next has been due to the spirit of property."

This transformation of private property into the communal domain, Bastiat points out, does not imply that
private property will ever totally disappear. This is because man, as he progresses, continually invents
new and more sophisticated needs and desires.

[edit] Contemporary views

Among contemporary political thinkers who believe that human individuals enjoy rights to own property
and to enter into contracts, there are two views about John Locke. On the one hand there are ardent Locke
admirers, such as W.H. Hutt (1956), who praised Locke for laying down the "quintessence of
individualism." On the other hand, there are those such as Richard Pipes who think that Locke's
arguments are weak, and that undue reliance thereon has weakened the cause of individualism in recent
times. Pipes has written that Locke's work "marked a regression because it rested on the concept of
Natural Law" rather than upon Harrington's sociological framework.

Hernando de Soto has argued that an important characteristic of capitalist market economy is the
functioning state protection of property rights in a formal property system where ownership and
transactions are clearly recorded. These property rights and the whole formal system of property make
possible:

• Greater independence for individuals from local community arrangements to protect their assets;
• Clear, provable, and protectable ownership;
• The standardization and integration of property rules and property information in the country as a
whole;
• Increased trust arising from a greater certainty of punishment for cheating in economic
transactions;
• More formal and complex written statements of ownership that permit the easier assumption of
shared risk and ownership in companies, and insurance against risk;
• Greater availability of loans for new projects, since more things could be used as collateral for the
loans;
• Easier access to and more reliable information regarding such things as credit history and the
worth of assets;
• Increased fungibility, standardization and transferability of statements documenting the ownership
of property, which paves the way for structures such as national markets for companies and the
easy transportation of property through complex networks of individuals and other entities;
• Greater protection of biodiversity due to minimizing of shifting agriculture practices.

All of the above enhance economic growth. [4]

[edit] Types of property

[Image: A sign declaring a parking lot to be "private property" illustrates one method of identifying and protecting property; note the citations to legal statutes.]

Most legal systems distinguish different types of property, especially between land (immovable property,
estate in land, real estate, real property) and all other forms of property - goods and chattels, movable
property or personal property. They often distinguish tangible and intangible property (see below).

One categorization scheme specifies three species of property: land, improvements (immovable man-made
things), and personal property (movable man-made things).

In common law, real property (immovable property) is the combination of interests in land and
improvements thereto and personal property is interest in movable property.

'Real property' rights are rights relating to the land. These rights include ownership and usage. Owners
can grant rights to persons and entities in the form of leases, licenses and easements.

Later, with the development of more complex forms of non-tangible property, personal property was
divided into tangible property (such as cars, clothing, animals) and intangible or abstract property (e.g.
financial instruments such as stocks and bonds, etc.), which includes intellectual property (patents,
copyrights, and trademarks).

[edit] What can be property?


The two major justifications given for original property, or homesteading, are effort and scarcity. John
Locke emphasized effort, "mixing your labor" with an object, or clearing and cultivating virgin land.
Benjamin Tucker preferred to look at the telos of property, i.e. What is the purpose of property? His
answer: to solve the scarcity problem. Only when items are relatively scarce with respect to people's
desires do they become property.[5] For example, hunter-gatherers did not consider land to be property,
since there was no shortage of land. Agrarian societies later made arable land property, as it was scarce.
For something to be economically scarce, it must necessarily have the exclusivity property - that use by
one person excludes others from using it. These two justifications lead to different conclusions on what
can be property. Intellectual property - non-corporeal things like ideas, plans, orderings and arrangements
(musical compositions, novels, computer programs) - is generally considered valid property by those
who support an effort justification, but invalid by those who support a scarcity justification (since it
lacks the exclusivity property). Thus even ardent propertarians may disagree about IP.[6] By either
standard, one's body is one's property.

From some anarchist points of view, the validity of property depends on whether the "property right"
requires enforcement by the state. Different forms of "property" require different amounts of enforcement:
intellectual property requires a great deal of state intervention to enforce, ownership of distant physical
property requires quite a lot, ownership of carried objects requires very little, while ownership of one's
own body requires absolutely no state intervention.

Many things have existed that did not have an owner, sometimes called the commons. The term
"commons," however, is also often used to mean something quite different: "general collective
ownership" - i.e. common ownership. Also, the same term is sometimes used by statists to mean
government-owned property that the general public is allowed to access. Law in all societies has tended to
develop towards reducing the number of things not having clear owners. Supporters of property rights
argue that this enables better protection of scarce resources, due to the tragedy of the commons, while
critics argue that it leads to the exploitation of those resources for personal gain and that it hinders taking
advantage of potential network effects. These arguments have differing validity for different types of
"property" -- things which are not scarce are, for instance, not subject to the tragedy of the commons.
Some apparent critics actually are advocating general collective ownership rather than ownerlessness.
Things today which do not have owners include: ideas (except for intellectual property), seawater (which
is, however, protected by anti-pollution laws), parts of the seafloor (see the United Nations Convention on
the Law of the Sea for restrictions), gasses in Earth's atmosphere, animals in the wild (though there may
be restrictions on hunting etc. -- and in some legal systems, such as that of New York, they are actually
treated as government property), celestial bodies and outer space, and land in Antarctica.

The nature of children under the age of majority is another contested issue here. In ancient societies
children were generally considered the property of their parents. Children in most modern societies
theoretically own their own bodies -- but they are considered incompetent to exercise their rights, and
their parents or guardians are given most of the actual rights of control over them.

Questions regarding the nature of ownership of the body also come up in the issue of abortion and drugs.

In many ancient legal systems (e.g. early Roman law), religious sites (e.g. temples) were considered
property of the God or gods they were devoted to. However, religious pluralism makes it more convenient
to have religious sites owned by the religious body that runs them.

Intellectual property and air (airspace, no-fly zone, pollution laws, which can include tradeable emissions
rights) can be property in some senses of the word.

[edit] Rights of use as property

Ownership of land can be held separately from the ownership of rights over that land, including sporting
rights[7], mineral rights, development rights, air rights, and such other rights as may be worth segregating
from simple land ownership.

[edit] Who can be an owner?


Ownership laws may vary widely among countries depending on the nature of the property of interest
(e.g. firearms, real property, personal property, animals). In some societies only adult men may own
property.[citation needed] In many societies legal entities, such as corporations, trusts, and nations (or
governments) own property.[citation needed]

In the Inca empire, the dead emperors, who were considered gods, still controlled property after death.[7]

[edit] References
1. ^ "property definition".
2. ^ "property", American Heritage Dictionary, http://www.bartleby.com/cgi-bin/texis/webinator/sitesearch?
FILTER=col61&query=property&x=0&y=0
3. ^ "property", WordNet, http://wordnet.princeton.edu/perl/webwn?
s=property&sub=Search+WordNet&o2=&o0=1&o7=&o5=&o1=1&o6=&o4=&o3=&h=
4. ^ Understanding Principles of Politics and the State, by John Schrems, PageFree Publishing (2004), page
234
5. ^ Hann, Chris A new double movement? Anthropological perspectives on property in the age of
neoliberalism Socio-Economic Review, Volume 5, Number 2, April 2007, pp. 287-318(32)
6. ^ Arthur Henry Robertson, John Graham Merrills (1996). Human Rights in the World: An Introduction to
the Study of the International Protection of Human Rights. Manchester University Press. ISBN
0719049237.
7. ^ Mckay, John P. , 2004, "A History of World Societes". Boston: Houghton Mifflin Company
[edit] See also
• Allemansrätten
• Anarchism
• Buying agent
• Capitalism
• Communism
• Homestead principle
• Immovable Property
• Inclusive Democracy
• Libertarian
• Lien
• Ownership society
• Patrimony
• Personal property
• Propertarian
• Property is theft
• Property law
• Property rights (economics)
• Labor theory of property
• Socialism
• Sovereignty

Property giving (legal)
• Charity
• Essenes
• Gift
• Kibbutz
• Monasticism
• Tithe, Zakat (modern sense)

Property taking (illegal)
• Theft
• Kleptocracy

Property taking (legal)
• Confiscation
• Eminent domain
• Fine
• Regulatory fees and costs
• Search and seizure
• Tariffs
• Tax
• Turf and twig (historical)
• Tithe, Zakat (historical sense)
• Zoning restrictions
• RS 2477

Property of either digital or virtual form
• Emerging Virtual Institutions

Property economists
• Armen Alchian
• Ronald Coase
• Hernando de Soto

Poverty
From Wikipedia, the free encyclopedia


Look up poverty in Wiktionary, the free dictionary.

Poverty is the deprivation of common necessities such as food, clothing, shelter and safe drinking water,
all of which determine our quality of life. It may also include the lack of access to opportunities such as
education and employment which aid the escape from poverty and/or allow one to enjoy the respect of
fellow citizens. According to Mollie Orshansky who developed the poverty measurements used by the
U.S. government, "to be poor is to be deprived of those goods and services and pleasures which others
around us take for granted."[1] Ongoing debates over causes, effects and best ways to measure poverty,
directly influence the design and implementation of poverty-reduction programs and are therefore relevant
to the fields of public administration and international development.

Although poverty is mainly considered to be undesirable due to the pain and suffering it may cause, in
certain spiritual contexts "voluntary poverty," involving the renunciation of material goods, is seen by
some as virtuous.
Poverty may affect individuals or groups, and is not confined to the developing nations. Poverty in
developed countries is manifest in a set of social problems including homelessness and the persistence of
"ghetto" housing clusters.[2]

Contents
[hide]

• 1 Etymology
• 2 Measuring poverty
o 2.1 Other aspects
• 3 Causes of poverty
o 3.1 Economics
o 3.2 Governance
o 3.3 Demographics and Social Factors
o 3.4 Health Care
o 3.5 Environmental Factors
• 4 Effects of poverty
• 5 Poverty reduction
o 5.1 Economic growth
o 5.2 Free market
o 5.3 Fair trade
o 5.4 Direct aid
o 5.5 Development aid
o 5.6 Improving the environment and access of the poor
o 5.7 Millennium Development Goals
o 5.8 Other approaches
• 6 Voluntary poverty
• 7 See also
o 7.1 Organizations and campaigns
• 8 References
• 9 Further reading

• 10 External links

[edit] Etymology
The words "poverty" and "poor" came from Latin pauper = "poor", which originally came from pau- and
the root of pario, i.e. "giving birth to not much" and referred to unproductive farmland or livestock.

[edit] Measuring poverty

[Figure: World map showing percentage of population suffering from hunger (World Food Programme, 2006)]

[Figure: World map showing percentage of population living on less than 1 dollar per day (UN estimates, 1990-2005)]

[Figure: CIA world map showing percentage of population living below their national poverty line]

[Figure: World map showing life expectancy]

[Figure: World map showing the Human Development Index]

[Figure: World map showing the Gini coefficient, a measure of income inequality]

[Figure: The percentage of the world's population living on less than $1 per day halved between 1981 and 2001, with most of the improvement in East and South Asia]

[Figure: Life expectancy, 1950-2005, increasing and converging for most of the world; Sub-Saharan Africa has recently seen a decline, partly related to the AIDS epidemic]

About half of the human population lives in poverty. Poverty can be measured in terms of absolute or
relative poverty. Absolute poverty refers to a set standard which is consistent over time and between
countries. An example of an absolute measurement would be the percentage of the population eating less
food than is required to sustain the human body (approximately 2000-2500 calories per day for an adult
male).

The World Bank defines extreme poverty as living on less than US$ (PPP) 1 per day, and moderate
poverty as less than $2 a day, estimating that "in 2001, 1.1 billion people had consumption levels below
$1 a day and 2.7 billion lived on less than $2 a day."[3] The proportion of the developing world's
population living in extreme economic poverty fell from 28 percent in 1990 to 21 percent in 2001.[3]
Looking at the period 1981-2001, the percentage of the world's population living on less than $1 per day
halved. (Note that this refers to the proportion of the population, not to the absolute number of people
living on less than $1 per day.)

Most of this improvement has occurred in East and South Asia.[4] In East Asia the World Bank reported
that "The poverty headcount rate at the $2-a-day level is estimated to have fallen to about 27 percent [in
2007], down from 29.5 percent in 2006 and 69 percent in 1990."[5]

In Sub-Saharan Africa extreme poverty rose from 41 percent in 1981 to 46 percent in 2001, which
combined with growing population increased the number of people living in poverty from 231 million to
318 million.[6]

Other regions have seen little change. In the early 1990s the transition economies of Eastern Europe and
Central Asia experienced a sharp drop in income. Poverty rates rose to 6 percent at the end of the decade
before beginning to recede.[7]

World Bank data shows that the percentage of the population living in households with consumption or
income per person below the poverty line has decreased in each region of the world since 1990:[8][9]

Region                            1990     2002     2004
East Asia and Pacific            15.40%   12.33%    9.07%
Europe and Central Asia           3.60%    1.28%    0.95%
Latin America and the Caribbean   9.62%    9.08%    8.64%
Middle East and North Africa      2.08%    1.69%    1.47%
South Asia                       35.04%   33.44%   30.84%
Sub-Saharan Africa               46.07%   42.63%   41.09%

There are various criticisms of these measurements.[10] Shaohua Chen and Martin Ravallion note that
"a clear trend decline in the percentage of people who are absolutely poor is evident, although with
uneven progress across regions", but that "the developing world outside China and India has seen little
or no sustained progress in reducing the number of poor".

Since the world's population is increasing, a constant number of people living in poverty would be
associated with a diminishing proportion. Excluding China and India, the percentage of the world's
population living on less than $1/day decreased from 31.35% to 20.70% between 1981 and 2004.[11]
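
A toy calculation makes the first point concrete (a minimal sketch in Python; the headcount and
population figures are invented round numbers, not World Bank data):

    # Illustrative only: hypothetical round numbers, not World Bank figures.
    poor = 1_000_000_000          # headcount held constant across both years
    pop_1981 = 4_500_000_000      # assumed world population, 1981
    pop_2004 = 6_400_000_000      # assumed world population, 2004

    share_1981 = poor / pop_1981  # ~22.2% of the world
    share_2004 = poor / pop_2004  # ~15.6% of the world
    print(f"{share_1981:.1%} -> {share_2004:.1%}")  # the share falls with no one lifted out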

Other human development indicators are also improving. Life expectancy has greatly increased in the
developing world since WWII and is starting to close the gap to the developed world where the
improvement has been smaller. Even in Sub-Saharan Africa, where most Least Developed Countries are
to be found, life expectancy increased from 30 years before World War II to a peak of about 50 years,
before the HIV pandemic and other diseases started to force it down to the current level of 47 years. Child
mortality has decreased in every developing region of the world[12]. The proportion of the world's
population living in countries where per-capita food supplies are less than 2,200 calories (9,200
kilojoules) per day decreased from 56% in the mid-1960s to below 10% by the 1990s. Between 1950 and
1999, global literacy increased from 52% to 81% of the world. Women closed much of the gap: female
literacy as a percentage of male literacy increased from 59% in 1970 to 80% in 2000. The percentage
of children not in the labor force has also risen to over 90% in 2000 from 76% in 1960. There are similar
trends for electric power, cars, radios, and telephones per capita, as well as the proportion of the
population with access to clean water.[13] The book The Improving State of the World finds that many
other indicators have also improved.

Relative poverty views poverty as socially defined and dependent on social context. Income inequality is
a relative measure of poverty. A relative measurement would be to compare the total wealth of the poorest
one-third of the population with the total wealth of the richest 1% of the population. There are several
different income inequality metrics. One example is the Gini coefficient.
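
As a sketch of how such a metric works, the Gini coefficient of a set of incomes can be computed as the
mean absolute difference between all pairs of incomes, divided by twice the mean income (a standard
formulation; the income lists below are invented for illustration):

    # Minimal Gini coefficient sketch; the income lists are invented examples.
    def gini(incomes):
        n = len(incomes)
        mean = sum(incomes) / n
        # Sum of absolute differences over all ordered pairs, normalized.
        diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
        return diff_sum / (2 * n * n * mean)

    print(gini([10, 10, 10, 10]))  # 0.0  -> perfect equality
    print(gini([0, 0, 0, 100]))    # 0.75 -> extreme inequality

A coefficient of 0 means everyone has the same income; values approach 1 as income concentrates in
fewer hands.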
Income inequality for the world as a whole is diminishing. A 2002 study by Xavier Sala-i-Martin finds
that this is driven mainly, but not fully, by the extraordinary growth rate of the incomes of the 1.2 billion
Chinese citizens. China, India, the OECD and the rest of middle-income and rich countries are likely to
increase their advantage relative to Africa unless it too achieves economic growth; global inequality may
rise. [14][15]

The 2007 World Bank report "Global Economic Prospects" predicts that in 2030 the number living on less
than the equivalent of $1 a day will fall by half, to about 550 million. An average resident of what we
used to call the Third World will live about as well as do residents of the Czech or Slovak republics today.
Much of Africa will have difficulty keeping pace with the rest of the developing world and even if
conditions there improve in absolute terms, the report warns, Africa in 2030 will be home to a larger
proportion of the world's poorest people than it is today.[16]

In many developed countries the official definition of poverty used for statistical purposes is based on
relative income. As such many critics argue that poverty statistics measure inequality rather than material
deprivation or hardship. For instance, according to the U.S. Census Bureau, 46% of those in "poverty" in
the U.S. own their own home (with the average poor person's home having three bedrooms, with one and
a half baths, and a garage).[17] Furthermore, the measurements are usually based on a person's yearly
income and frequently take no account of total wealth. The main poverty line used in the OECD and the
European Union is based on "economic distance", a level of income set at 50% of the median household
income. The US poverty line is more arbitrary. It was created in 1963-64 and was based on the dollar
costs of the United States Department of Agriculture's "economy food plan" multiplied by a factor of
three. The multiplier was based on research showing that food costs then accounted for about one third of
the total money income. This one-time calculation has since been annually updated for inflation.[18]
Others, such as economist Ellen Frank, argue that the poverty measure is too low as families spend much
less of their total budget on food than they did when the measure was established. Further, federal poverty
statistics do not account for the widely varying regional differences in non-food costs such as housing,
transport, and utilities. [19]
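
A rough sketch of the two constructions just described (all numbers here are hypothetical placeholders,
not official OECD or Census Bureau figures):

    # Hypothetical illustration of the two poverty-line constructions above.
    # OECD/EU style: 50% of median household income ("economic distance").
    def median(values):
        s = sorted(values)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

    incomes = [18_000, 25_000, 40_000, 52_000, 90_000]  # invented sample
    oecd_line = 0.5 * median(incomes)                   # 20,000 here

    # US style: 1963 "economy food plan" cost times three, carried forward by CPI.
    food_plan_1963 = 1_000                   # assumed annual food-plan cost
    base_threshold = 3 * food_plan_1963      # food taken as ~1/3 of the budget
    us_line = base_threshold * 215.0 / 30.6  # assumed current CPI vs. CPI in 1963

    print(oecd_line, round(us_line))  # 20000.0 21078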

[edit] Other aspects

Economic aspects of poverty focus on material needs, typically including the necessities
of daily living, such as food, clothing, shelter, or safe drinking water. Poverty in this sense may be
understood as a condition in which a person or community is lacking in the basic needs for a minimum
standard of well-being and life, particularly as a result of a persistent lack of income.

Analysis of social aspects of poverty links conditions of scarcity to aspects of the distribution of resources
and power in a society and recognizes that poverty may be a function of the diminished "capability" of
people to live the kinds of lives they value.[20] The social aspects of poverty may include lack of access to
information, education, health care, or political power.[21][22] Poverty may also be understood as an aspect
of unequal social status and inequitable social relationships, experienced as social exclusion, dependency,
and diminished capacity to participate, or to develop meaningful connections with other people in society.
[23][24][25]

The World Bank's "Voices of the Poor," based on research with over 20,000 poor people in 23 countries,
identifies a range of factors which poor people identify as part of poverty.[26] These include:

• precarious livelihoods
• excluded locations
• physical limitations
• gender relationships
• problems in social relationships
• lack of security
• abuse by those in power
• dis-empowering institutions
• limited capabilities, and
• weak community organizations.

David Moore, in his book The World Bank, argues that some analyses of poverty reflect pejorative,
sometimes racial, stereotypes of impoverished people as powerless victims and passive recipients of aid
programs.[27]

[edit] Causes of poverty

[Image: A starving female child during the Nigerian-Biafran war of the late 1960s; the abdomen is paradoxically swollen due to kwashiorkor, or severe protein malnutrition.]

[Image: Urban poverty is common in developing countries; shown here is Mumbai, India.]

Many different factors have been cited to explain why poverty occurs; no single explanation has gained
universal acceptance.

Possible factors include:

[edit] Economics
• Unemployment. Some countries' governments are believed to purposefully keep 2-10% of the
populace unemployed, to act as a 'replacement threat' to unskilled private sector workers and
thereby sustain a thriving service economy.
• As of late 2007, increased diversion of crops to biofuel production,[28] along with world oil prices at
nearly $130 a barrel,[29] has pushed up the price of grain.[30] Food riots have recently taken place in
many countries across the world.[31][32][33]
• Capital flight by which the wealthy in a society shift their assets to off-shore tax havens deprives
nations of revenue needed to break the vicious cycle of poverty. [34]
• Weakly entrenched formal systems of title to private property are seen by writers such as
Hernando de Soto as a limit to economic growth and therefore a cause of poverty. [35]
• Communists see the institution of property rights itself as a cause of poverty.[36]
• Unfair terms of trade, in particular, the very high subsidies to and protective tariffs for agriculture
in the developed world. This drains the taxed money and increases the prices for the consumers in
developed world; decreases competition and efficiency; prevents exports by more competitive
agricultural and other sectors in the developed world due to retaliatory trade barriers; and
undermines the very type of industry in which the developing countries do have comparative
advantages.[37]
• Tax havens which tax their own citizens and companies but not those from other nations and
refuse to disclose information necessary for foreign taxation. This enables large scale political
corruption, tax evasion, and organized crime in the foreign nations.[34]
• Unequal distribution of land. [38] Land reform is one solution.

[edit] Governance

• Lacking democracy in poor countries: "The records when we look at social dimensions of
development—access to drinking water, girls' literacy, health care—are even more starkly
divergent. For example, in terms of life expectancy, rich democracies typically enjoy life
expectancies that are nine years longer than poor autocracies. Opportunities of finishing secondary
school are 40 percent higher. Infant mortality rates are 25 percent lower. Agricultural yields are
about 25 percent higher, on average, in poor democracies than in poor autocracies—an important
fact, given that 70 percent of the population in poor countries is often rural-based." Furthermore,
"poor democracies don't spend any more on their health and education sectors as a percentage of
GDP than do poor autocracies, nor do they get higher levels of foreign assistance. They don't run
up higher levels of budget deficits. They simply manage the resources that they have more
effectively."[10]
• The effectiveness of governance has a major impact on the delivery of socioeconomic outcomes
for poor populations.[39]
• Weak rule of law can discourage investment and thus perpetuate poverty.[40]
• Poor management of resource revenues can mean that rather than lifting countries out of poverty,
revenues from such activities as oil production or gold mining actually lead to a resource curse.
• Failure by governments to provide essential infrastructure worsens poverty.[41][42]
• Poor access to affordable education traps individuals and countries in cycles of poverty.[41]
• High levels of corruption undermine efforts to make a sustainable impact on poverty. In Nigeria,
for example, more than $400 billion was stolen from the treasury by Nigeria's leaders between
1960 and 1999.[43][44]
[Image: Poverty in a developed nation, as seen in Harlem, New York, USA.]

[Image: Council houses in Seacroft, Leeds, UK, deserted due to poverty and high crime in a developed nation.]

• Welfare states have an effect on poverty reduction. Modern, expansive welfare states that ensure
economic opportunity, independence and security in a near universal manner are still the exclusive
domain of the developed nations,[45] commonly constituting at least 20% of GDP, with the largest
Scandinavian welfare states constituting over 40% of GDP.[46] These modern welfare states, which
largely arose in the late 19th and early 20th centuries and saw their greatest expansion in the mid
20th century, have proven highly effective in reducing relative as well as absolute poverty in all
analyzed high-income OECD countries.[47][48][49]

Country          Absolute poverty rate[47]        Relative poverty rate[48]
                 (threshold set at 40% of U.S.
                 median household income)
                 Pre-transfer   Post-transfer     Pre-transfer   Post-transfer
Sweden               23.7            5.8              14.8            4.8
Norway                9.2            1.7              12.4            4.0
Netherlands          22.1            7.3              18.5           11.5
Finland              11.9            3.7              12.4            3.1
Denmark              26.4            5.9              17.4            4.8
Germany              15.2            4.3               9.7            5.1
Switzerland          12.5            3.8              10.9            9.1
Canada               22.5            6.5              17.1           11.9
France               36.1            9.8              21.8            6.1
Belgium              26.8            6.0              19.5            4.1
Australia            23.3           11.9              16.2            9.2
United Kingdom       16.8            8.7              16.4            8.2
United States        21.0           11.7              17.2           15.1
Italy                30.7           14.3              19.7            9.1
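
One way to read the table is to compute, for each country, the share of pre-transfer poverty eliminated
by transfers, (pre - post) / pre. A minimal sketch using two rows from the table above:

    # Share of pre-transfer absolute poverty removed by transfers,
    # using the Sweden and United States rows of the table above.
    rows = {"Sweden": (23.7, 5.8), "United States": (21.0, 11.7)}
    for country, (pre, post) in rows.items():
        reduction = (pre - post) / pre
        print(f"{country}: {reduction:.0%} of pre-transfer poverty removed")
    # Sweden: 76%; United States: 44%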

[edit] Demographics and Social Factors


• Overpopulation and lack of access to birth control methods.[50][51] Note that population growth
slows or even becomes negative as poverty is reduced due to the demographic transition.[52]
• Crime, both white-collar crime and blue-collar crime, including violent gangs and drug cartels.[53]
[54][55]

• Historical factors, for example imperialism, colonialism[56][57][58] and Post-Communism (at least 50
million children in Eastern Europe and the former Soviet Union lived in poverty).[59][60]

[Image: Dalits in Jaipur, India.[61]]

• Brain drain
• Matthew effect: the phenomenon, widely observed across advanced welfare states, that the middle
classes tend to be the main beneficiaries of social benefits and services, even if these are primarily
targeted at the poor.
• Cultural causes, which attribute poverty to common patterns of life, learned or shared within a
community. For example, Max Weber argued that the Protestant work ethic contributed to
economic growth during the industrial revolution.
• War, including civil war, genocide, and democide.[62]
• Discrimination of various kinds, such as age discrimination, stereotyping,[63] gender
discrimination, racial discrimination, caste discrimination.[64]
• Individual beliefs, actions and choices.[65] For example, research by Isabel Sawhill, a respected
researcher at the Brookings Institution, indicates that, in the United States, if an individual
follows three rules, their chance of being in poverty shrinks to a statistically insignificant level: (1)
stay in school, don't drop out; (2) postpone bringing children into the world until marriage; (3)
work, don't quit, keep working, no matter how humble the job. Her research thus indicates that
most poverty is statistically associated with individuals who drop out of school, have children
outside of marriage, or do not hold a job for long; in short, it suggests that most poverty is
statistically associated with poor or unwise life choices.

[edit] Health Care

[Image: Hardwood surgical tables are commonplace in rural Nigerian clinics.]


• Poor access to affordable health care makes individuals less resilient to economic hardship and
more vulnerable to poverty.[41]

• Inadequate nutrition in childhood, itself an effect of poverty, undermines the ability of individuals
to develop their full human capabilities and thus makes them more vulnerable to poverty. Lack of
essential minerals such as iodine and iron can impair brain development. It is estimated that 2
billion people (one-third of the total global population) are affected by iodine deficiency, including
285 million 6- to 12-year-old children. In developing countries, it is estimated that 40% of
children aged 4 and under suffer from anemia because of insufficient iron in their diets. See also
Health and intelligence.[66]
• Disease, specifically diseases of poverty: AIDS,[67] malaria[68] and tuberculosis and others
overwhelmingly afflict developing nations, which perpetuate poverty by diverting individual,
community, and national health and economic resources from investment and productivity.[69]
Further, many tropical nations are affected by parasites like malaria, schistosomiasis, and
trypanosomiasis that are not present in temperate climates. The Tsetse fly makes it very difficult to
use many animals in agriculture in afflicted regions.
• Clinical depression undermines the resilience of individuals and when not properly treated makes
them vulnerable to poverty. [70]
• Similarly, substance abuse, including alcoholism and drug abuse, when not properly treated,
undermines resilience and can consign people to vicious poverty cycles.[71]

[edit] Environmental Factors

• Erosion. Intensive farming often leads to a vicious cycle of exhaustion of soil fertility and decline
of agricultural yields and hence, increased poverty.[72]
• Desertification and overgrazing.[73] Approximately 40% of the world's agricultural land is seriously
degraded.[74] In Africa, if current trends of soil degradation continue, the continent might be able to
feed just 25% of its population by 2025, according to UNU's Ghana-based Institute for Natural
Resources in Africa.[75]
• Deforestation as exemplified by the widespread rural poverty in China that began in the early 20th
century and is attributed to non-sustainable tree harvesting.[76]
• Natural factors, such as climate change[77] or environment.[78] Lower income families suffer the
most from climate change, yet on a per capita basis they contribute the least to climate change.[79]
• Geographic factors, for example access to fertile land, fresh water, minerals, energy, and other
natural resources, presence or absence of natural features helping or limiting communication, such
as mountains, deserts, navigable rivers, or coastline. Historically, geography has prevented or
slowed the spread of new technology to areas such as the Americas and Sub-Saharan Africa. The
climate also limits what crops and farm animals may be used on similarly fertile lands.[80]
• On the other hand, research on the resource curse has found that countries with an abundance of
natural resources creating quick wealth from exports tend to have less long-term prosperity than
countries with less of these natural resources.
• Drought and water crisis.[81][82][83]

[edit] Effects of poverty


The effects of poverty may also be causes, as listed above, thus creating a "poverty cycle" operating
at multiple levels: individual, local, national and global.

Those living in poverty and lacking access to essential health services, suffering hunger or even
starvation,[84] experience mental and physical health problems which make it harder for them to improve
their situation.[85] One third of deaths - some 18 million people a year or 50,000 per day - are due to
poverty-related causes: in total 270 million people, most of them women and children, have died as a
result of poverty since 1990.[86] Those living in poverty suffer lower life expectancy. Every year nearly 11
million children living in poverty die before their fifth birthday. Those living in poverty often suffer from
hunger.[87] 800 million people go to bed hungry every night.[88] Poverty increases the risk of homelessness.
[89]
There are over 100 million street children worldwide.[90] Increased risk of drug abuse may also be
associated with poverty.[91]

Diseases of poverty reflect the dynamic relationship between poverty and poor health; while such
infectious diseases result directly from poverty, they also perpetuate and deepen impoverishment by
sapping personal and national health and financial resources. For example, malaria decreases GDP growth
by up to 1.3% in some developing nations, and by killing tens of millions in sub-Saharan Africa, AIDS
alone threatens “the economies, social structures, and political stability of entire societies”.[92][93]
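
A quick compounding calculation shows why a growth penalty of that size matters (a minimal sketch;
the 4% baseline growth rate and 25-year horizon are assumptions chosen for illustration, not figures
from the cited studies):

    # Effect of shaving 1.3 percentage points off annual GDP growth.
    # Baseline growth rate and horizon are illustrative assumptions.
    baseline, penalty, years = 0.04, 0.013, 25
    with_malaria = (1 + baseline - penalty) ** years   # ~1.95x initial GDP
    without = (1 + baseline) ** years                  # ~2.67x initial GDP
    print(f"GDP ends up {1 - with_malaria / without:.0%} smaller")  # ~27% smaller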

Those living in poverty in the developed world may suffer social isolation. Rates of suicide may increase
in conditions of poverty. Death of a breadwinner may decrease a household's resilience to poverty
conditions and cause a dramatic worsening in their situation. Low income levels and poor employment
opportunities for adults in turn create the conditions where households can depend on the income of child
members. An estimated 218 million children aged 5 to 17 are in child labor worldwide, excluding child
domestic labor.[94] Lacking viable employment opportunities those living in poverty may also engage in
the informal economy, or in criminal activity, both of which may on a larger scale discourage investment
in the economy, further perpetuating conditions of poverty.

There is a high risk of educational underachievement for children who come from low-income
housing circumstances, a process that often begins in primary school. These children are at a higher
risk than other children of grade retention, special placements during school hours, and failure to
complete their high school education.[95] There are many explanations for why students drop out of
school. For children with low resources, the risk factors include juvenile delinquency, higher rates of
teenage pregnancy, and economic dependency upon their low-income parent or parents.[95]

Intellectual competence is key to educational attainment: more capable people, whether old or young,
tend to have more economic power, go further in their education, and often lead healthier, more
prestigious lives. Children with high abilities but low income must therefore be supported, and research
should identify methods for discovering and reaching these young students.[95] Early childhood
education and assisted learning in the home are methods supported by a large body of careful study,
observation and research.[95]

The Carolina Abecedarian Project began in 1972 at the Frank Porter Graham Child Development Center
of the University of North Carolina. The program was started to identify intervention methods "to
enhance the intellectual competence and academic achievement of children from socioeconomically
disadvantaged families".[95] It initially combined early childhood education, pediatric care and family
support services;[95] children then entered the school system as kindergarteners and remained in the
program for three years, after which an observational and analytical study of the children, by then
around eight years old, was carried out.[95]

Families and societies that invest little in the education and development of less fortunate children see
less favorable results for those children, including reduced parental employment and low wages later in
life. Higher rates of early childbearing, with all the connected risks to family, health and well-being, are
also important issues to address, since education from preschool through high school is identifiably
meaningful in a life.[95]
The racial and ethnic diversity of the United States is wide and has increased over the last two decades.
Hispanics and African Americans are the two largest ethnic groups living in poverty in the United States,
and minority groups as a whole form the highest percentage of the poor; Hispanics are about seventy
percent more likely than the general population to be poor.[96] Additionally, children living in single-
parent homes where the mother is the sole caregiver are about fifty percent more likely to be poor than
children of two-parent homes.[97]

While racial and ethnic diversity has increased, poverty rates have also changed. From 1993 to 2000,
poverty rates as a whole decreased from twenty-two percent to sixteen percent,[96] yet the number of
children living in poverty has drastically increased, leaving about one in five children in these
circumstances.[96]

Poverty often drastically affects children's success in school. A child's "home activities, preferences,
mannerisms" must align with the world of school, and when they do not, these students are at a
disadvantage in school and, most importantly, in the classroom.[96] Children who live at or below the
poverty level therefore tend to have far less educational success than children who live above the
poverty line. Poor children receive far less healthcare, which ultimately results in many absences during
the academic year. Additionally, poor children are much more likely to suffer from hunger, fatigue,
irritability, headaches, ear infections, and colds,[96] illnesses that can restrict a child's focus and
concentration.

The level of parental education in low socioeconomic households has a strong correlation with the
educational success of the children in the home, and in households with around four or more children
the effects on each child are greater still.[97] Similarly, in families where the child's first language is not
English, children have a higher chance of low educational attainment.[97]

Children are affected by poverty across the board when it comes to education, and literacy is a
particularly alarming issue. Some children are fortunate enough to have literate people in the home who
can read and write with them; others never see that opportunity for learning assistance at home, in
school, or in communication with teachers.[96] Even with support, many of these children remain
disadvantaged simply because of their home life. A child's way of interacting, responding, and
communicating helps determine how they deal with their peers and with adults.[96]

It is through literacy that unknown places, people and things are opened to many children.[96] Children
are able to explore their thoughts and understanding and to navigate new materials and ways of thinking.
Literacy is also vital to current and future leaders, who need to be able to communicate with people both
verbally and in print. Though levels differ, all children, whether from lower-, middle-, or high-income
homes, come into the schoolhouse with some level of literacy,[96] and just as literacy levels differ, so do
the strengths and weaknesses each child brings. When there is a concrete foundation for speaking, there
is a better chance for effective writing skills to be improved or even begun.[96]

Some may initially believe that children from disadvantaged homes do not have the ability to converse
with others; the real differences lie in the casualness of the language and in the richness and quality of
the communication.[97] For these children, knowing when to speak one way versus another is also
important: the relaxed language used at home is very different from the formal language expected in
school and outside the home. Though formal word choice and conversation are praised, it is important to
acknowledge that children from disadvantaged homes are often more skilled in nonverbal cues such as
hand movement, facial expressions and body language.[97] Effective teachers might attempt to build on
the oral communication strengths of each and every student, regardless of their current ability.

A child's knowledge of print depends to a large degree on their socioeconomic status. All children have
some knowledge of print, but its extent depends on the social, cultural and economic status of their
home life.[96] Notably, even where there is a great amount of print in the home, it is not nearly as
common for there to be much writing to accompany the print sources.[96] Children may be aware of
certain print images, words and signs yet still be unable to begin formulating much meaning from them.
[96] Many children see the need for writing, but many also lack a full and clear understanding of the
power in the meaning of what is written, or of the change of circumstance they could ultimately merit
because of writing.

Low income and wealth levels undermine the ability of governments to levy taxes for public service
provision, adding to the 'vicious circle' connecting the causes and effects of poverty. Lack of essential
infrastructure, poor education and health services, and poor sanitation contribute to the perpetuation of
poverty.[98] Poor access to affordable public education can lead to low levels of literacy, further
entrenching poverty. Weak public service provision and high levels of poverty can increase states'
vulnerability to natural disasters and make states more vulnerable to shocks in the international economy,
such as those associated with rising fuel prices, or declining commodity prices.[99][100]

Areas strongly affected by poverty tend to be more violent. In one survey, 67% of children from
disadvantaged inner cities said they had witnessed a serious assault, and 33% reported witnessing a
homicide.[101] 51% of fifth graders from New Orleans (median income for a household: $27,133) have
been found to be victims of violence, compared to 32% in Washington, DC (mean income for a
household: $40,127).[102]

The capacity of the state is further undermined by the problem that people living in poverty may be more
vulnerable to extremist political persuasion, and may feel less loyalty to a state unable to deliver basic
services. For these reasons conditions of poverty may increase the risk of political violence, terrorism, war
and genocide, and may make those living in poverty vulnerable to human trafficking, internal
displacement and exile as refugees. Countries suffering widespread poverty may experience loss of
population, particularly in high-skilled professions, through emigration, which may further undermine
their ability to improve their situation.

[edit] Poverty reduction


Main article: Poverty reduction

In politics, the fight against poverty is usually regarded as a social goal and many governments have
institutions or departments dedicated to tackling poverty. One of the main debates in the field of poverty
reduction is around the question of how actively the state should manage the economy and provide public
services to tackle the problem of poverty. In the nineties, international development policies focused on a
package of measures known and criticized as the "Washington Consensus" which involved reducing the
scope of state activities, and reducing state intervention in the economy, reducing trade barriers and
opening economies to foreign investment. Vigorous debate over these issues continues, and most poverty
reduction programs attempt to increase both the competitiveness of the economy and the viability of the
state.

[edit] Economic growth


[Figure: World GDP per capita rapidly increased beginning with the Industrial Revolution.]

The anti-poverty strategy of the World Bank depends heavily on reducing poverty through the promotion
of economic growth.[103] The World Bank argues that an overview of many studies shows that:

• Growth is fundamental for poverty reduction, and in principle growth as such does not affect
inequality.
• Growth accompanied by progressive distributional change is better than growth alone.
• High initial income inequality is a brake on poverty reduction.
• Poverty itself is also likely to be a barrier for poverty reduction; and wealth inequality seems to
predict lower future growth rates.[104]

[edit] Free market

Although the term 'free market' is essentially a misnomer, since all markets (whether national or
international) function only via shared public infrastructure and are, accordingly, regulated by
governments in a wide variety of ways, the rhetoric of 'free markets' and 'free enterprise' has won out in
the public media over time. What are frequently described as free market reforms represent one strategy
for reducing poverty, though not a strategy without its problems. For example, while the 20th century
saw notable reductions of poverty in India and China, both of those countries have also been sites of some
of the century's most horrific corporate-sponsored human rights abuses. So, while hundreds of millions of
people in the two countries 'grew out' of poverty (depending on how one measures poverty), mostly as a
result of the abandonment of collective farming in China and the cutting of government red tape in
India,[105] tragedies like the Bhopal disaster[106] and massive deforestation throughout much of India[107] have
more than tarnished such successes. Additionally, in China, the end of collective farming could not,
properly speaking, be described as a move toward a 'free market,' since land ownership remained a
question of state districting and management.[108] So, while shifts in market structure and values have
definitely played a role in fostering economic growth in India and China, that growth has often come with
serious, even shocking human and environmental costs.

Developing countries face a range of obstacles to trading competitively on international markets. Almost
half of the budget of the European Union, for example, is directed to agricultural subsidies, which
primarily benefit large multinational agribusinesses who form a powerful lobby.[109] Japan gave 47 billion
dollars in 2005 in subsidies to its agricultural sector,[110] nearly four times the amount it gave in total
foreign aid.[111] The US gives 3.9 billion dollars each year in subsidies to its cotton sector, including
25,000 growers, three times more in subsidies than the entire USAID budget for Africa, although America
contributes a sum far larger than the 3.9 billion dollars through other agencies.[112] Critics argue that
agricultural subsidies in the developed world drain taxation revenue, increase the end-prices paid by
consumers, and discourage efficiency improvements, while retaliatory trade barriers unfairly undermine
the competitiveness of agricultural and other exports in those industries in which developing countries
would otherwise have a significant comparative advantage.[37]

Bringing the market to remote, rural areas can bring farmers the information to produce more profitably.
For example, mobile phones could be used to do this, helping people in remote areas of the developing
world. Farmers receive market information sent directly to their phones.[113] In Ethiopia, for example,
remote farmers produce crops that may not bring the best profits. When they sell their products to a local
trader, who then sells to another trader, and another, the cost of the food rises before it finally reaches the
consumer in large cities. Economist Gabre-Madhin proposes warehouses where farmers could have
constant updates of the latest market prices, making the farmer think nationally, not locally. Each
warehouse would have an independent neutral party that would test and grade the farmer's harvest,
allowing traders in Addis Ababa, and potentially outside Ethiopia, to place bids on food, even if it is
unseen. Thus, a farmer offered five cents in one place might get three times that price by selling his harvest
in another part of the country where there may be a drought.[114] Such schemes, while attractive, again give
the lie to the term 'free market.' Gabre-Madhin's plan, for instance, is likely to require government support
of some sort, since independent neutral parties can be as hard to come by in Africa as anywhere else in the
world. Ultimately, as philosopher Noam Chomsky has argued, the idea of the 'free market' is something of
a fantasy, since markets tend to either depend on massive government subsidies of everything from raw
materials to transportation[115] or to consist largely of single corporations selling products to their own
overseas branches, without those products (or the jobs associated with making them) ever going to
citizens of poverty-stricken areas. In effect, this means that the term 'free market' acts as a sort of trick,
used to convince people to support government spending that mostly benefits the very wealthy and that
they would never otherwise support. It is for this reason that Chomsky has described free market
capitalism as "socialism for the rich."[115]

The Global Competitiveness Report, the Ease of Doing Business Index, and the Index of Economic
Freedom are annual reports, often used in academic research, ranking the world's nations on factors argued
to increase economic growth and reduce poverty. Again, though, factors that may increase economic
growth should neither be confused with factors that increase the freedom of markets nor simply assumed
to benefit those living in poverty. This becomes clear with a glance at one of the world's strongest
expressions of the 'free market': the United States health-care system, which functions with almost no
government oversight, and under which 45 million of the country's 301 million citizens are uninsured.[116]
Perhaps not surprisingly, the U.S., long one of the world's greatest proponents of 'free markets' in poverty-
stricken countries, itself has one of the worst records on domestic poverty among the industrialized
nations, with nearly 16 million of its citizens living in what is termed 'deep poverty': earning half or less
of the federal poverty line figure per year.[116]

One theory for reducing poverty suggests that raising tariffs and pursuing import substitution leads to
greater wealth by protecting the country from the deeper inequalities of what is called free trade. This
approach was widely practiced between the 1950s and 1970s, when it appeared to fail to develop wealth.
The theory assumes that without trade barriers, incoming (often highly subsidized) goods from wealthier
countries drive poverty, a view held by some economists[citation needed]. Most countries have some history of
import substitution and of direct government protection of, and investment in, local industries, although
that history is often troubled and difficulty-ridden. The theory claims that reducing tariffs removes a
major source of government revenue and spending, while raising tariffs may improve the terms of trade
for the poor.[117] In contrast, a WTO study has shown that in practice high tariffs often lead to stagnation
of economic growth and development, with the costs of the tariffs borne most heavily by the poor.[118]
The search for acceptable and appropriate market solutions to the problem of poverty continues, but one
thing at least is certain: there are no markets that can be truly described as 'free,' and many of the
markets described in this way leave untouched or actually worsen the conditions of poverty. At the very
least, many analysts agree, blind faith in the 'free market' must be called into question, prompting re-
examination of certain basic values.[119]

[edit] Fair trade

Further information: Fair trade

Another approach to alleviating poverty is to implement fair trade, which advocates the payment of a fair
price as well as social and environmental standards in areas related to the production of goods.

[edit] Direct aid


• The government can directly help those in need through cash transfers as a short term expedient,
especially for those most at risk, such as the elderly and people with disabilities. This has been
applied with mixed results in most Western societies during the 20th century, in what became
known as the welfare state.
• Private charity. Systems to encourage direct transfers to the poor by citizens organized into
voluntary or not-for-profit groupings are often encouraged by the state through charitable trusts
and tax deduction arrangements. International remittances sent by migrant workers to their
families in developing countries provide an important source of income. This form of direct aid is
around twice the size of official aid-related inflows.

[edit] Development aid

Most developed nations give development aid to developing countries. The UN target for development
aid is 0.7% of GDP; currently only a few nations achieve this. Some think tanks and NGOs have argued
that Western monetary aid often only serves to increase poverty and social inequality, either because it is
conditioned with the implementation of harmful economic policies in the recipient countries [120], or
because it's tied with the importing of products from the donor country over cheaper alternatives,[121] or
because foreign aid is seen to be serving the interests of the donor more than the recipient.[122] Critics also
argue that some of the foreign aid is stolen by corrupt governments and officials, and that higher aid
levels erode the quality of governance. Policy becomes much more oriented toward what will get more
aid money than it does towards meeting the needs of the people.[123] Viktor Bout, one of the world's most
notorious arms dealers, told the New York Times how he saw firsthand in Angola, Congo and elsewhere
"how Western donations to impoverished countries lead to the destruction of social and ecological
balance, mutual resentment and eventually war."[124] "Once countries give money, they control you," he
says.

Supporters argue that these problems may be solved with better auditing of how the aid is used.[123] Aid
from non-governmental organizations may be more effective than governmental aid; this may be because
it is better at reaching the poor and better controlled at the grassroots level.[125] As a point of comparison,
the annual world military spending is over $1 trillion.[126]

[edit] Improving the environment and access of the poor

Numerous methods have been proposed to improve the situation of those in poverty, some of them mutually contradictory. Some of these mechanisms are:

• Subsidized housing development.


• Education, especially that directed at assisting the poor to produce food in underdeveloped
countries.
• Family planning to limit the number of children born into poverty and to allow family incomes to better cover the existing family.
• Subsidized health care.
• Assistance in finding employment.
• Subsidized employment (see also Workfare).
• Encouragement of political participation and community organizing.
• Implementation of fair property rights laws.
• Reduction of regulatory burden and bureaucratic oversight.
• Reduction of taxation on income and capital.
• Reduction of government spending, including a reduction in borrowing and printing money.

[edit] Millennium Development Goals


Eradication of extreme poverty and hunger is the first Millennium Development Goal. One of the targets
within this goal is the halving of the proportion of people living in extreme poverty by 2015. In addition
to broader approaches, the Sachs Report (for the UN Millennium Project) [127] proposes a series of "quick
wins", approaches identified by development experts which would cost relatively little but could have a
major constructive effect on world poverty. The quick wins are:

• Directly assisting local entrepreneurs to grow their businesses and create jobs.
• Access to information on sexual and reproductive health.
• Action against domestic violence.
• Appointing government scientific advisors in every country.
• Deworming school children in affected areas.
• Drugs for AIDS, tuberculosis, and malaria.
• Eliminating school fees.
• Ending user fees for basic health care in developing countries.
• Free school meals for schoolchildren.
• Legislation for women’s rights, including rights to property.
• Planting trees.
• Providing soil nutrients to farmers in sub-Saharan Africa.
• Providing mosquito nets.
• Access to electricity, water and sanitation.
• Supporting breast-feeding.
• Training programs for community health in rural areas.
• Upgrading slums, and providing land for public housing.

[edit] Other approaches

The Copenhagen Consensus was an attempt to rank global welfare improvement programs in terms of
their urgency and cost-effectiveness; Direct Aid to combat HIV infection was determined to be the top
priority.

Some argue for a radical change of the economic system. There are several proposals for a fundamental
restructuring of existing economic relations, and many of their supporters argue that their ideas would
reduce or even eliminate poverty entirely if they were implemented. Such proposals have been put
forward by both left-wing and right-wing groups: socialism, communism, anarchism, libertarianism,
binary economics and participatory economics, among others.

Proponents of progressive taxation, wealth taxes, and inheritance taxes argue that such taxes can reduce absolute or relative poverty.

The IMF and member countries have produced Poverty Reduction Strategy papers or PRSPs.[128]

In his book The End of Poverty (ISBN 1594200459),[129] the economist Jeffrey Sachs laid out a plan to eradicate global poverty by the year 2025. Following his recommendations, international organizations are working to help eradicate poverty worldwide with interventions in the areas of housing, food, education, basic health, agricultural inputs, safe drinking water, transportation and communications.[130]

[edit] Voluntary poverty


See also: Simple living
St. Francis of Assisi renounces his worldly goods in a painting attributed to Giotto di Bondone.
'Tis the gift to be simple,
'tis the gift to be free,
'tis the gift to come down where you ought to be,
And when we find ourselves in the place just right,
It will be in the valley of love and delight.

—Shaker song.[131]

Among some individuals, such as ascetics, poverty is considered a necessary or desirable condition,
which must be embraced in order to reach certain spiritual, moral, or intellectual states. Poverty is often
understood to be an essential element of renunciation in religions such as Buddhism and Jainism, whilst in
Roman Catholicism it is one of the evangelical counsels. Certain religious orders also take a vow of
extreme poverty. For example, the Franciscan orders have traditionally forgone all individual and
corporate forms of ownership. While individual ownership of goods and wealth is forbidden for
Benedictines, following the Rule of St. Benedict, the monastery itself may possess both goods and money,
and throughout history some monasteries have become very rich indeed.[citation needed]

In this context of religious vows, poverty may be understood as a means of self-denial in order to place
oneself at the service of others; Pope Honorius III wrote in 1217 that the Dominicans "lived a life of
voluntary poverty, exposing themselves to innumerable dangers and sufferings, for the salvation of
others". Following Jesus' warning that riches can be like thorns that choke up the good seed of the word
(Matthew 13:22), voluntary poverty is often understood by Christians as being of benefit to the individual, a form of self-discipline by which one distances oneself from distractions from God.[citation needed]

[edit] See also


• List of countries by percentage of population living in poverty
• Countries by fertility rate
• List of countries by GDP (PPP) per capita
• Cycle of poverty
• Diseases of poverty
• Distribution of wealth
• Deprivation index
• Economic inequality
• Feminization of poverty
• Food security
• Food vs fuel
• Fuel poverty
• Global justice
• Green Revolution
• Hunger
• Impoverishment
• Income disparity
• International inequality
• International Development
• IQ and Global Inequality
• IQ and the Wealth of Nations
• Least Developed Countries
• Life expectancy
• Literacy
• Minimum wage
• Pauperism
• Population growth
• Poor Law
• Sustainable development portal
• Poverty threshold
• Poverty trap
• Rural ghetto
• Social exclusion
• Subsidized housing
• Street children
• Ten Threats identified by the United Nations
• Welfare
• Working poor
• Make Poverty History
• The Hunger Site
• List of famines
• 2007–2008 world food price crisis

[edit] Organizations and campaigns

• Abahlali baseMjondolo - South African shack dwellers' organisation
• Brooks World Poverty Institute
• Catholic Charities USA[132]
• Center for Global Development
• Child Poverty Action Group
• Compassion Canada
• Five Talents - gives poverty-stricken people another chance
• Free the Children
• Grameen Bank - a micro-lending bank for the poor
• Micah Challenge - halving global poverty by 2015
• Microgiving - direct charitable giving
• Global Call to Action Against Poverty (GCAP)
• International Food Policy Research Institute
• International Fund for Agricultural Development
• Southern Poverty Law Center
• The Make Poverty History campaign
• Mississippi Teacher Corps
• United Nations Millennium Campaign [133][134]
• World Bank
• World Food Day
• The Red Letters Campaign [135]
• Global Poverty Minimization [136]
• Eurodad
• ONE campaign [137]
• 17 October: UN International Day for the Eradication of Poverty (White Band Day 4)

[edit] References

Management
From Wikipedia, the free encyclopedia


For other uses, see Management (disambiguation).
Management in business and human organization activity is simply the act of getting people together to
accomplish desired goals. Management comprises planning, organizing, staffing, leading or directing, and
controlling an organization (a group of one or more people or entities) or effort for the purpose of
accomplishing a goal. Resourcing encompasses the deployment and manipulation of human resources,
financial resources, technological resources, and natural resources.

Management can also refer to the person or people who perform the act(s) of management.

Contents
[hide]

• 1 Etymology
• 2 Overview
o 2.1 Theoretical scope
o 2.2 Nature of managerial work
• 3 Historical development
o 3.1 Early writing
 3.1.1 Sun Tzu's The Art of War
 3.1.2 Niccolò Machiavelli's The Prince
 3.1.3 Adam Smith's The Wealth of Nations
o 3.2 19th century
o 3.3 20th century
o 3.4 21st century
• 4 Management topics
o 4.1 Basic functions of management
o 4.2 Formation of the business policy
 4.2.1 How to implement policies and strategies
 4.2.2 The development of policies and strategies
 4.2.3 Where policies and strategies fit into the planning process
o 4.3 Managerial levels and hierarchy
• 5 Areas and categories and implementations of management
• 6 See also
• 7 References

• 8 External links

[edit] Etymology
The verb manage comes from the Italian maneggiare (to handle — especially a horse), which in turn
derives from the Latin manus (hand). The French word mesnagement (later ménagement) influenced the
development in meaning of the English word management in the 17th and 18th centuries.[1]

[edit] Overview
[edit] Theoretical scope

Mary Parker Follett (1868–1933), who wrote on the topic in the early twentieth century, defined
management as "the art of getting things done through people".[2] One can also think of management
functionally, as the action of measuring a quantity on a regular basis and of adjusting some initial plan; or
as the actions taken to reach one's intended goal. This applies even in situations where planning does not
take place. From this perspective, Frenchman Henri Fayol[3] considers management to consist of seven
functions:

1. planning
2. organizing
3. leading
4. co-ordinating
5. controlling
6. staffing
7. motivating

Some people, however, find this definition, while useful, far too narrow. The phrase "management is what
managers do" occurs widely, suggesting the difficulty of defining management, the shifting nature of
definitions, and the connection of managerial practices with the existence of a managerial cadre or class.

One habit of thought regards management as equivalent to "business administration" and thus excludes
management in places outside commerce, as for example in charities and in the public sector. More
realistically, however, every organization must manage its work, people, processes, technology, etc. in
order to maximize its effectiveness. Nonetheless, many people refer to university departments which
teach management as "business schools." Some institutions (such as the Harvard Business School) use
that name while others (such as the Yale School of Management) employ the more inclusive term
"management."

English speakers may also use the term "management" or "the management" as a collective word
describing the managers of an organization, for example of a corporation. Historically this use of the term
was often contrasted with the term "Labor" referring to those being managed.

[edit] Nature of managerial work


In for-profit work, management has as its primary function the satisfaction of a range of stakeholders.
This typically involves making a profit (for the shareholders), creating valued products at a reasonable
cost (for customers), and providing rewarding employment opportunities (for employees). Nonprofit management adds the importance of keeping the faith of donors. In most models of
management/governance, shareholders vote for the board of directors, and the board then hires senior
management. Some organizations have experimented with other methods (such as employee-voting
models) of selecting or reviewing managers; but this occurs only very rarely.

In the public sector of countries constituted as representative democracies, voters elect politicians to
public office. Such politicians hire many managers and administrators, and in some countries like the
United States political appointees lose their jobs on the election of a new president/governor/mayor.

Public, private, and voluntary sectors place different demands on managers, but all must retain the faith of
those who select them (if they wish to retain their jobs), retain the faith of those people that fund the
organization, and retain the faith of those who work for the organization. If they fail to convince
employees of the advantages of staying rather than leaving, they may tip the organization into a
downward spiral of hiring, training, firing, and recruiting. Management also has the task of innovating
and of improving the functioning of organizations.
[edit] Historical development
Difficulties arise in tracing the history of management. Some see it (by definition) as a late modern (in the
sense of late modernity) conceptualization. On those terms it cannot have a pre-modern history, only
harbingers (such as stewards). Others, however, detect management-like thought back to Sumerian
traders and to the builders of the pyramids of ancient Egypt. Slave-owners through the centuries faced the
problems of exploiting/motivating a dependent but sometimes unenthusiastic or recalcitrant workforce,
but many pre-industrial enterprises, given their small scale, did not feel compelled to face the issues of
management systematically. However, innovations such as the spread of Arabic numerals (5th to 15th
centuries) and the codification of double-entry book-keeping (1494) provided tools for management
assessment, planning and control.
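The control that double-entry book-keeping affords can be illustrated with a minimal sketch in Python (the account names and amounts are hypothetical, chosen only to show the balancing rule; this is an illustration, not a claim about historical practice):

    # Every transaction posts equal debits and credits, so the ledger as a
    # whole always sums to zero; a nonzero sum exposes an omission or error.
    ledger = []

    def post(debit_account, credit_account, amount):
        ledger.append((debit_account, amount))    # debit entry, recorded as +
        ledger.append((credit_account, -amount))  # credit entry, recorded as -

    post("Materials", "Cash", 100)  # buy materials for cash
    post("Cash", "Sales", 150)      # sell goods for cash

    # Trial balance: total debits equal total credits.
    assert sum(amount for _, amount in ledger) == 0

This balancing invariant is what made the technique useful for assessment, planning and control: the books can be checked without re-tracing every transaction.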

Given the scale of most commercial operations and the lack of mechanized record-keeping and recording
before the industrial revolution, it made sense for most owners of enterprises in those times to carry out
management functions by and for themselves. But with growing size and complexity of organizations, the
split between owners (individuals, industrial dynasties or groups of shareholders) and day-to-day
managers (independent specialists in planning and control) gradually became more common.

[edit] Early writing

While management has been present for millennia, several writers have produced works that laid the groundwork for modern management theories.[4]

[edit] Sun Tzu's The Art of War

Written by Chinese general Sun Tzu in the 6th century BC, The Art of War is a military strategy book
that, for managerial purposes, recommends being aware of and acting on strengths and weaknesses of
both a manager's organization and a foe's.[4]

[edit] Niccolò Machiavelli's The Prince

Believing that people were motivated by self-interest, Niccolò Machiavelli wrote The Prince in 1513 as
advice for the leadership of Florence, Italy.[5] Machiavelli recommended that leaders use fear—but not
hatred—to maintain control.

[edit] Adam Smith's The Wealth of Nations

Written in 1776 by Adam Smith, a Scottish moral philosopher, The Wealth of Nations aims for efficient organization of work through the specialization of labor.[5] Smith described how changes in processes could boost productivity in the manufacture of pins: while individual workers could each produce 200 pins per day, Smith analyzed the steps involved in manufacture and found that 10 specialists, each performing a distinct step, could together produce 48,000 pins per day.[5]
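The figures Smith reports imply a striking productivity gain from the division of labor; a back-of-envelope calculation on the article's own numbers:

\[ \frac{48{,}000 \text{ pins/day}}{10 \text{ workers}} = 4{,}800 \text{ pins per worker per day}, \qquad \frac{4{,}800}{200} = 24, \]

that is, roughly a 24-fold increase in output per worker.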

[edit] 19th century

Classical economists such as Adam Smith (1723 - 1790) and John Stuart Mill (1806 - 1873) provided a
theoretical background to resource-allocation, production, and pricing issues. About the same time,
innovators like Eli Whitney (1765 - 1825), James Watt (1736 - 1819), and Matthew Boulton (1728 -
1809) developed elements of technical production such as standardization, quality-control procedures,
cost-accounting, interchangeability of parts, and work-planning. Many of these aspects of management
existed in the pre-1861 slave-based sector of the US economy. That environment saw 4 million people, as contemporary usage had it, "managed" in profitable quasi-mass production.

By the late 19th century, marginalist economists Alfred Marshall (1842 - 1924), Léon Walras (1834 - 1910),
and others introduced a new layer of complexity to the theoretical underpinnings of management. Joseph
Wharton offered the first tertiary-level course in management in 1881.

[edit] 20th century

By about 1900 one finds managers trying to place their theories on what they regarded as a thoroughly
scientific basis (see scientism for perceived limitations of this belief). Examples include Henry R.
Towne's Science of management in the 1890s, Frederick Winslow Taylor's The Principles of Scientific
Management (1911), Frank and Lillian Gilbreth's Applied motion study (1917), and Henry L. Gantt's
charts (1910s). J. Duncan wrote the first college management textbook in 1911. In 1912 Yoichi Ueno introduced Taylorism to Japan and became the first management consultant of the "Japanese management style". His son Ichiro Ueno pioneered Japanese quality assurance.

The first comprehensive theories of management appeared around 1920. The Harvard Business School
invented the Master of Business Administration degree (MBA) in 1921. People like Henri Fayol (1841 -
1925) and Alexander Church described the various branches of management and their inter-relationships.
In the early 20th century, people like Ordway Tead (1891 - 1973), Walter Scott and J. Mooney applied the
principles of psychology to management, while other writers, such as Elton Mayo (1880 - 1949), Mary
Parker Follett (1868 - 1933), Chester Barnard (1886 - 1961), Max Weber (1864 - 1920), Rensis Likert
(1903 - 1981), and Chris Argyris (1923 - ) approached the phenomenon of management from a
sociological perspective.

Peter Drucker (1909 – 2005) wrote one of the earliest books on applied management: Concept of the
Corporation (published in 1946). It resulted from Alfred Sloan (chairman of General Motors until 1956)
commissioning a study of the organisation. Drucker went on to write 39 books, many in the same vein.

H. Dodge, Ronald Fisher (1890 - 1962), and Thornton C. Fry introduced statistical techniques into
management-studies. In the 1940s, Patrick Blackett combined these statistical theories with
microeconomic theory and gave birth to the science of operations research. Operations research,
sometimes known as "management science" (but distinct from Taylor's scientific management), attempts
to take a scientific approach to solving management problems, particularly in the areas of logistics and
operations.

Some of the more recent developments include the Theory of Constraints, management by objectives,
reengineering, Six Sigma and various information-technology-driven theories such as agile software
development, as well as group management theories such as Cog's Ladder.

As the general recognition of managers as a class solidified during the 20th century and gave perceived practitioners of the art/science of management a certain amount of prestige, the way opened for popularisers of management ideas to peddle their wares. In this context many management fads may have had more to do with pop psychology than with scientific theories of management.

Towards the end of the 20th century, business management came to consist of six separate branches,
namely:

• Human resource management
• Operations management or production management
• Strategic management
• Marketing management
• Financial management
• Information technology management responsible for management information systems

[edit] 21st century

In the 21st century observers find it increasingly difficult to subdivide management into functional
categories in this way. More and more processes simultaneously involve several categories. Instead, one
tends to think in terms of the various processes, tasks, and objects subject to management.

Branches of management theory also exist relating to nonprofits and to government: such as public
administration, public management, and educational management. Further, management programs related
to civil-society organizations have also spawned programs in nonprofit management and social
entrepreneurship.

Note that many of the assumptions made by management have come under attack from business ethics
viewpoints, critical management studies, and anti-corporate activism.

As one consequence, workplace democracy has become both more common, and more advocated, in
some places distributing all management functions among the workers, each of whom takes on a portion
of the work. However, these models predate any current political issue, and may occur more naturally
than does a command hierarchy. All management to some degree embraces democratic principles in that
in the long term workers must give majority support to management; otherwise they leave to find other
work, or go on strike. Hence management has started to become less based on the conceptualisation of
classical military command-and-control, and more about facilitation and support of collaborative activity,
utilizing principles such as those of human interaction management to deal with the complexities of
human interaction. Indeed, the concept of Ubiquitous command-and-control posits such a transformation
for 21st century military management.

[edit] Management topics


[edit] Basic functions of management

Management operates through various functions, often classified as planning, organizing, leading/motivating, and controlling; a toy sketch of this cycle follows the list below.

• Planning: Deciding what needs to happen in the future (today, next week, next month, next year,
over the next 5 years, etc.) and generating plans for action.
• Organizing: (Implementation) making optimum use of the resources required to enable the
successful carrying out of plans.
• Staffing: Job analysis, recruitment, and hiring of individuals for appropriate jobs.
• Leading/Motivating: Exhibiting leadership and motivational skills in order to encourage others to
play an effective part in achieving plans and ensure willing participation in the organization on the
parts of workers.
• Controlling: Monitoring, checking progress against plans, which may need modification based
on feedback.
• Motivating: the process of stimulating an individual to take action that will accomplish a desired
goal.
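As promised above, the feedback character of these functions can be sketched as a toy model in Python (entirely illustrative; the quota, team output and function names are hypothetical, not drawn from the article):

    import random

    def management_cycle(target_units=100, days=10):
        # Planning: set a daily quota that would reach the target.
        quota = target_units / days
        produced = 0
        for day in range(1, days + 1):
            # Organizing/leading: the team carries out the day's work.
            produced += random.randint(8, 12)
            # Controlling: compare actual progress against the plan...
            if produced < quota * day and day < days:
                # ...and re-plan based on the feedback.
                quota = (target_units - produced) / (days - day)
        return produced

    print(management_cycle())  # output varies with the random draws

The point of the sketch is only that controlling feeds back into planning, which is what distinguishes a managed process from a one-shot plan.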

[edit] Formation of the business policy


• The mission of the business is its most obvious purpose, which may be, for example, to make soap.
• The vision of the business reflects its aspirations and specifies its intended direction or future
destination.
• The objectives of the business refer to the ends or activities at which a certain task is aimed.
• The business's policy is a guide that stipulates rules, regulations and objectives, and may be used
in the managers' decision-making. It must be flexible and easily interpreted and understood by all
employees.
• The business's strategy refers to the coordinated plan of action that it is going to take, as well as
the resources that it will use, to realize its vision and long-term objectives. It is a guideline to
managers, stipulating how they ought to allocate and utilize the factors of production to the
business's advantage. Initially, it could help the managers decide on what type of business they
want to form.

[edit] How to implement policies and strategies

• All policies and strategies must be discussed with all managerial personnel and staff.
• Managers must understand where and how they can implement their policies and strategies.
• A plan of action must be devised for each department.
• Policies and strategies must be reviewed regularly.
• Contingency plans must be devised in case the environment changes.
• Assessments of progress ought to be carried out regularly by top-level managers.
• A good environment is required within the business.

[edit] The development of policies and strategies

• The missions, objectives, strengths and weaknesses of each department must be analysed to
determine their roles in achieving the business's mission.
• The forecasting method develops a reliable picture of the business's future environment.
• A planning unit must be created to ensure that all plans are consistent and that policies and
strategies are aimed at achieving the same mission and objectives.
• Contingency plans must be developed, just in case.

All policies must be discussed with all managerial personnel and staff who are required to execute any departmental policy.

[edit] Where policies and strategies fit into the planning process

• They give mid- and lower-level managers a good idea of the future plans for each department.
• A framework is created whereby plans and decisions are made.
• Mid- and lower-level management may add their own plans to the business's strategic ones.

[edit] Managerial levels and hierarchy

The management of a large organization may have several levels:

1. Senior management (or "top management" or "upper management")
2. Middle management
3. Low-level management, such as supervisors or team-leaders
4. Foreman
5. Rank and File
Top-level management

• They require an extensive knowledge of management roles and skills.
• They have to be very aware of external factors such as markets.
• Their decisions are generally of a long-term nature.
• Their decisions are made using analytic, directive, conceptual and/or behavioral/participative processes.
• They are responsible for strategic decisions.
• They have to chalk out plans and see that the plans remain effective in the future.
• They are executive in nature.

Middle management

• Mid-level managers have a specialized understanding of certain managerial tasks.
• They are responsible for carrying out the decisions made by top-level management.

Lower management

• This level of management ensures that the decisions and plans taken by the other two are carried
out.
• Lower-level managers' decisions are generally short-term ones

Foreman / lead hand

• They are people who have direct supervision over the working force in an office, factory, sales field or other workgroup or area of activity.

Rank and File

• The responsibilities of the persons belonging to this group are even more restricted and more
specific than those of the foreman.

[edit] Areas and categories and implementations of management


• Accounting management
• Agile management
• Association management
• Capability Management
• Change management
• Commercial operations management
• Communication management
• Constraint management
• Cost management
• Crisis management
• Critical management studies
• Customer relationship management
• Decision making styles
• Design management
• Disaster management
• Earned value management
• Educational management
• Environmental management
• Facility management
• Financial management
• Forecasting
• Human resources management
• Hospital management
• Information technology management
• Innovation management
• Interim management
• Inventory management
• Knowledge management
• Land management
• Leadership management
• Lifecycle management
• Logistics management
• Management on demand
• Marketing management
• Materials management
• Office management
• Operations management
• Organization development
• Perception management
• Performance management
• Practice management
• Process management
• Product management
• Program management
• Project management
• Public administration
• Public management
• Quality management
• Records management
• Research management
• Resource management
• Risk management
• Skills management
• Social entrepreneurship
• Spend management
• Spiritual management
• Strategic management
• Stress management
• Supply chain management
• Systems management
• Talent management
• Time management
• Visual management

[edit] See also


Articles

• Adhocracy
• Administration
• Certified Business Manager
• Collaboration
• Collaborative method
• Corporate governance
• Design management
• Engineering management
• Evidence-based management
• Forecasting
• Futures studies
• Knowledge visualization
• Leadership
• Management consulting
• Management control
• Management cybernetics
• Management development
• Management fad
• Management science
• Management styles
• Management system
• Managerialism
• Micromanagement
• Macromanagement
• Middle management
• Music management
• Organizational Behavior Management
• Organizational studies
• Predictive analytics
• Project management
• Public administration
• Risk management
• Team building
• Scientific management
• Senior management
• Social entrepreneurship
• Virtual management
• Peter Drucker's management by objectives
• Eliyahu M. Goldratt's Theory of Constraints
• Pointy Haired Boss - a negative stereotype of managers

Lists

• List of basic management topics
• List of management topics
• List of marketing topics
• List of human resource management topics
• List of economics topics
• List of finance topics
• List of accounting topics
• List of information technology topics
• List of production topics
• List of business law topics
• List of business ethics, political economy, and philosophy of business topics
• List of business theorists
• List of economists
• List of corporate leaders
• Timeline of management techniques

[edit] References
History
From Wikipedia, the free encyclopedia


This article is about the social science. For a general history of Mankind, see History of the world. For
other uses, see History (disambiguation).

Historia (Allegory of History), by Nikolaos Gysis (1892).

History is the study of the past, particularly as recorded in written records. New technologies, such as photography and computer text files, now sometimes complement traditional archival sources. History is a field of
research producing a continuous narrative and a systematic analysis of past events of importance to the
human race.[1] Those who study history as a profession are called historians.

Contents
[hide]

• 1 Etymology
• 2 Description
• 3 History and prehistory
• 4 Historiography
• 5 Philosophy of history
• 6 Historical methods
• 7 Areas of study
o 7.1 Periods
o 7.2 Geographical locations
 7.2.1 World
 7.2.2 Regions
o 7.3 Military history
o 7.4 Social history
o 7.5 Cultural History
o 7.6 Diplomatic history
o 7.7 People's history
o 7.8 Gender history
• 8 Pseudohistory
• 9 See also
o 9.1 Lists
o 9.2 Methods and tools
o 9.3 Related disciplines
o 9.4 Other
• 10 References
o 10.1 Notes
o 10.2 Bibliography

o 10.3 External links

[edit] Etymology

The word history comes from Greek ἱστορία (historia), from the Proto-Indo-European *wid-tor-, from the
root *weid-, "to know, to see".[2] This root is also present in the English words wit, wise, wisdom, vision,
and idea, in the Sanskrit word veda,[3] and in the Slavic words videti and vedati, as well as others.[4] (The
asterisk before a word indicates that it is a hypothetical construction, not an attested form.)

The Ancient Greek word ἱστορία, historía, means "inquiry, knowledge acquired by investigation". It was
in that sense that Aristotle used the word in his Περί Τά Ζωα Ιστορία, Peri Ta Zoa Istória or, in Latinized
form, Historia Animalium.[5] The term is derived from ἵστωρ, hístōr meaning wise man, witness, or judge.
We can see early attestations of ἵστωρ in Homeric Hymns, Heraclitus, the Athenian ephebes' oath, and in
Boiotic inscriptions (in a legal sense, either "judge" or "witness," or similar). The spirant is problematic,
and not present in cognate Greek eídomai ("to appear"). The form historeîn, "to inquire", is an Ionic
derivation, which spread first in Classical Greece and ultimately over all of Hellenistic civilization.

It was still in the Greek sense that Francis Bacon used the term in the late 16th century, when he wrote
about "Natural History". For him, historia was "the knowledge of objects determined by space and time",
that sort of knowledge provided by memory (while science was provided by reason, and poetry was
provided by fantasy).

The word entered the English language in 1390 with the meaning of "relation of incidents, story". In
Middle English, the meaning was "story" in general. The restriction to the meaning "record of past
events" arises in the late 15th century. In German, French, and most Germanic and Romance languages,
the same word is still used to mean both "history" and "story". The adjective historical is attested from
1661, and historic from 1669.[6]

Historian in the sense of a "researcher of history" is attested from 1531. In all European languages, the substantive "history" is still used to mean both "what happened with men" and "the scholarly study of what happened", the latter sense sometimes distinguished with a capital letter, "History", or the word historiography.[5]

[edit] Description

The title page to The Historians' History of the World.

Since historians are simultaneously observers and participants, the historical works they produce are
written from the perspective of their own time and sometimes with due concern for possible lessons for
their own future. In the words of Benedetto Croce, "All history is contemporary history". History is
facilitated by the formation of a 'true discourse of past' through the production of narrative and analysis of
past events relating to the human race.[6] The modern discipline of history is dedicated to the institutional
production of this discourse.

All events that are remembered and preserved in some authentic form constitute the historical record.[1]
The task of historical discourse is to identify the sources which can most usefully contribute to the production of accurate accounts of the past. Therefore, the constitution of the historian's archive is a result of
circumscribing a more general archive by invalidating the usage of certain texts and documents (by
falsifying their claims to represent the 'true past').

The study of history has sometimes been classified as part of the humanities and at other times as part of the social sciences.[7] It can also be seen as a bridge between those two broad areas, incorporating
methodologies from both. Some individual historians strongly support one or the other classification.[8] In
modern academia, history is increasingly classified as a social science. In the 20th century, French
historian Fernand Braudel revolutionized the study of history, by using such outside disciplines as
economics, anthropology, and geography in the study of global history.

Traditionally, historians have recorded events of the past, either in writing or by passing on an oral
tradition, and have attempted to answer historical questions through the study of written documents and
oral accounts. From the beginning, historians have also used such sources as monuments, inscriptions, and
pictures. In general, the sources of historical knowledge can be separated into three categories: what is
written, what is said, and what is physically preserved, and historians often consult all three.[9] But writing
is the marker that separates history from what comes before.

Archaeology is a discipline that is especially helpful in dealing with buried sites and objects, which, once
unearthed, contribute to the study of history. But archaeology rarely stands alone. It uses narrative sources
to complement its discoveries. However, archaeology is constituted by a range of methodologies and
approaches which are independent from history; that is to say, archaeology does not "fill the gaps" within
textual sources. Indeed, Historical Archaeology is a specific branch of archaeology, often contrasting its
conclusions against those of contemporary textual sources. Mark Leone, the excavator and interpreter of historical Annapolis in America (an 18th-century town on the east coast), has sought to understand the contradiction between textual documents and the material record, demonstrating the possession of slaves and the inequalities of wealth apparent via the study of the total historical environment, despite the ideology of "liberty" inherent in written documents at this time.
There are a variety of ways in which history can be organized, including chronologically, culturally,
territorially, and thematically. These divisions are not mutually exclusive, and significant overlaps are
often present, as in "The International Women's Movement in an Age of Transition, 1800–1945." It is
possible for historians to concern themselves with both the very specific and the very general, although
the modern trend has been toward specialization. The area called Big History resists this specialization,
and searches for universal patterns or trends. History has often been studied with some practical or
theoretical aim, but also may be studied out of simple intellectual curiosity.[10]

[edit] History and prehistory


Human history

↑ before Homo (Pliocene)

Human prehistory
• Human evolution: Recent African origin of modern humans • Multiregional hypothesis
• Archaic Homo sapiens

Three-age system
• Stone Age: Paleolithic • Mesolithic • Neolithic
• Bronze Age: Near East • India • Europe • China • Korea
• Iron Age: Bronze Age collapse • Ancient Near East • India • Europe • China • Japan • Korea • Nigeria

History
• Cradle of civilization
• Antiquity
• Middle Ages
• Early Modern period
• Modern period (see also: Modernity, Futurology)

↓ Future
Further information: Protohistory
The history of the world is the memory of the past experience of Homo sapiens around the world, as that
experience has been preserved, largely in written records. By "prehistory", historians mean the recovery
of knowledge of the past in an area where no written records exist, or where the writing of a culture is not
understood. Human history is marked both by a gradual accretion of discoveries and inventions, as well as
by quantum leaps — paradigm shifts, revolutions — that comprise epochs in the material and spiritual
evolution of humankind. By studying painting, drawings, carvings, and other artifacts, some information
can be recovered even in the absence of a written record. Since the 20th century, the study of prehistory has been considered essential to avoid history's implicit exclusion of certain civilizations, such as those of Sub-Saharan Africa and pre-Columbian America. Historians in the West have been criticized for focusing
disproportionately on the Western world.[11] In 1961, British historian E. H. Carr wrote:

The line of demarcation between prehistoric and historical times is crossed when people cease to
live only in the present, and become consciously interested both in their past and in their future.
History begins with the handing down of tradition; and tradition means the carrying of the habits
and lessons of the past into the future. Records of the past begin to be kept for the benefit of future
generations.[12]

Such a definition would include within the scope of history peoples such as Australian Aboriginals and
New Zealand Maori who, before contact with Europeans, already possessed a strong interest in the past
and maintained oral records transmitted to succeeding generations.

[edit] Historiography
Main article: Historiography

Historiography has a number of related meanings. Firstly, it can refer to how history has been produced:
the story of the development of methodology and practices (for example, the move from short-term
biographical narrative towards long-term thematic analysis). Secondly, it can refer to what has been
produced: a specific body of historical writing (for example, "medieval historiography during the 1960s"
means "Works of medieval history written during the 1960s"). Thirdly, it may refer to why history is
produced: the Philosophy of history. As a meta-level analysis of descriptions of the past, this third
conception can relate to the first two in that the analysis usually focuses on the narratives, interpretations,
worldview, use of evidence, or method of presentation of other historians. Professional historians also
debate the question of whether history can be taught as a single coherent narrative or a series of
competing narratives.

[edit] Philosophy of history


History's philosophical questions

• What is the proper unit for the study of the human past: the individual? The polis? The civilization? The culture? Or the nation state?
• Are there broad patterns and progress? Are there cycles? Is human history random and devoid of any meaning?
Main article: Philosophy of history

Philosophy of history is an area of philosophy concerning the eventual significance, if any, of human
history. Furthermore, it speculates as to a possible teleological end to its development—that is, it asks if
there is a design, purpose, directive principle, or finality in the processes of human history. Philosophy of
history should not be confused with historiography, which is the study of history as an academic
discipline, and thus concerns its methods and practices, and its development as a discipline over time. Nor
should philosophy of history be confused with the history of philosophy, which is the study of the
development of philosophical ideas through time.

Professional historians debate the question of whether history is a science or a liberal art. The distinction
is artificial, as many view the field from more than one perspective.[13] Recent arguments in support of the transformation of history into a science have been made by Peter Turchin in an article titled "Arise Cliodynamics" in the journal Nature.[14][15]

[edit] Historical methods


Further information: Historical method

A depiction of the ancient Library of Alexandria.

Historical method basics

The following questions are used by historians in modern work:

1. When was the source, written or unwritten, produced (date)?
2. Where was it produced (localization)?
3. By whom was it produced (authorship)?
4. From what pre-existing material was it produced (analysis)?
5. In what original form was it produced (integrity)?
6. What is the evidential value of its contents (credibility)?

The first four are known as higher criticism; the fifth, lower criticism; and, together, external criticism. The sixth and final inquiry about a source is called internal criticism.

The historical method comprises the techniques and guidelines by which historians use primary sources
and other evidence to research and then to write history.

Herodotus of Halicarnassus (484 BC – ca. 425 BC)[16] has generally been acclaimed as the "father of history". However, his contemporary Thucydides (ca. 460 BC – ca. 400 BC) is credited with having begun the scientific approach to history in his work the History of the Peloponnesian War. Thucydides, unlike Herodotus and other historians of a religious bent, regarded history as the product of the choices and actions of human beings rather than the result of divine intervention, and looked at cause and effect.[16] In his historical method, Thucydides emphasized chronology, a neutral point of view, and the idea that the human world was the result of the actions of human beings. Greek historians also viewed history as cyclical, with events regularly recurring.[17]

There were historical traditions and sophisticated use of historical method in ancient and medieval China.
The groundwork for professional historiography in East Asia was established by the Han Dynasty court
historian known as Sima Qian (145–90 BC), author of the Shiji (Records of the Grand Historian). For the
quality of his timeless written work, Sima Qian is posthumously known as the Father of Chinese
Historiography. Chinese historians of subsequent dynastic periods in China used his Shiji as the official
format for historical texts, as well as for biographical literature.

Saint Augustine was influential in Christian and Western thought at the beginning of the medieval period.
Through the Medieval and Renaissance periods, history was often studied through a sacred or religious
perspective. Around 1800, German philosopher and historian Georg Wilhelm Friedrich Hegel brought philosophy and a more secular approach to historical study.[10]

In the preface to his book, the Muqaddimah (1377), the Arab historian and early sociologist, Ibn Khaldun,
warned of seven mistakes that he thought that historians regularly committed. In this criticism, he
approached the past as strange and in need of interpretation. The originality of Ibn Khaldun was to claim
that the cultural difference of another age must govern the evaluation of relevant historical material, to
distinguish the principles according to which it might be possible to attempt the evaluation, and lastly, to
feel the need for experience, in addition to rational principles, in order to assess a culture of the past. Ibn
Khaldun often criticized "idle superstition and uncritical acceptance of historical data." As a result, he
introduced a scientific method to the study of history, which was considered something "new to his age",
and he often referred to it as his "new science", now associated with historiography.[18] His historical
method also laid the groundwork for the observation of the role of state, communication, propaganda and
systematic bias in history,[19] and he is thus considered to be the "father of historiography"[20][21] or the
"father of the philosophy of history".[22]

Other historians of note who have advanced the historical methods of study include Leopold von Ranke,
Sir Lewis Bernstein Namier, Pieter Geyl, G. M. Trevelyan, Sir Geoffrey Elton, and A. J. P. Taylor. In the 20th century, historians shifted their focus from epic nationalistic narratives, which often tended to glorify the nation or individuals, to more objective analyses. A major trend of historical methodology in the 20th
century was a tendency to treat history more as a social science rather than as an art, which traditionally
had been the case. Some of the leading advocates of history as a social science were a diverse collection
of scholars which included Fernand Braudel, E. H. Carr, Fritz Fischer, Emmanuel Le Roy Ladurie, Hans-
Ulrich Wehler, Bruce Trigger, Marc Bloch, Karl Dietrich Bracher, Peter Gay, Robert Fogel, Lucien
Febvre and Lawrence Stone. Many of the advocates of history as a social science were or are noted for
their multi-disciplinary approach. Braudel combined history with geography, Bracher history with
political science, Fogel history with economics, Gay history with psychology, Trigger history with
archeology while Wehler, Bloch, Fischer, Stone, Febvre and Le Roy Ladurie have in varying and
differing ways amalgamated history with sociology, geography, anthropology, and economics. More
recently, the field of digital history has begun to address ways of using computer technology to pose new
questions to historical data and generate digital scholarship.

In opposition to the claims of history as a social science, historians such as Hugh Trevor-Roper, John
Lukacs, Donald Creighton, Gertrude Himmelfarb and Gerhard Ritter argued that the key to the historians’
work was the power of the imagination, and hence contended that history should be understood as an art.
French historians associated with the Annales School introduced quantitative history, using raw data to
track the lives of typical individuals, and were prominent in the establishment of cultural history (cf.
histoire des mentalités). Intellectual historians such as Herbert Butterfield, Ernst Nolte and George Mosse
have argued for the significance of ideas in history. American historians, motivated by the civil rights era,
focused on formerly overlooked ethnic, racial, and socio-economic groups. Another genre of social
history to emerge in the post-WWII era was Alltagsgeschichte (History of Everyday Life). Scholars such
as Martin Broszat, Ian Kershaw and Detlev Peukert sought to examine what everyday life was like for
ordinary people in 20th century Germany, especially in the Nazi period.

Marxist historians such as Eric Hobsbawm, E. P. Thompson, Rodney Hilton, Georges Lefebvre, Eugene
D. Genovese, Isaac Deutscher, C. L. R. James, Timothy Mason, Herbert Aptheker, Arno J. Mayer and
Christopher Hill have sought to validate Karl Marx's theories by analyzing history from a Marxist
perspective. In response to the Marxist interpretation of history, historians such as François Furet, Richard
Pipes, J. C. D. Clark, Roland Mousnier, Henry Ashby Turner and Robert Conquest have offered anti-
Marxist interpretations of history. Feminist historians such as Joan Wallach Scott, Claudia Koonz, Natalie
Zemon Davis, Sheila Rowbotham, Gisela Bock, Gerda Lerner, Elizabeth Fox-Genovese, and Lynn Hunt
have argued for the importance of studying the experience of women in the past. In recent years,
postmodernists have challenged the validity and need for the study of history on the basis that all history
is based on the personal interpretation of sources. In his 1997 book In Defence of History, Richard J.
Evans, a professor of modern history at Cambridge University, defended the worth of history. Another
defence of history from post-modernist criticism was the Australian historian Keith Windschuttle's 1994
book, The Killing of History.

[edit] Areas of study


Particular studies and fields

These are approaches to history; not listed are histories of other fields, such as history of science, history of mathematics and history of philosophy.

• Ancient history: the study of history from the beginning of human history until the Early Middle Ages.
• Art history: the study of changes in and the social context of art.
• Big History: the study of history on a large scale across long time frames and epochs through a multi-disciplinary approach.
• Chronology: the science of localizing historical events in time.
• Contemporary history: the study of historical events that are immediately relevant to the present time.
• Counterfactual history: the study of historical events as they might have happened in different causal circumstances.
• Cultural history: the study of culture in the past.
• Digital history: the use of computing technologies to produce digital scholarship.
• Economic history: the study of economies in the past.
• Futurology: the study of the future: researches the medium to long-term future of societies and of the physical world.
• Intellectual history: the study of ideas in the context of the cultures that produced them and their development over time.
• Maritime history: the study of maritime transport and all the connected subjects.
• Modern history: the study of Modern Times, the era after the Middle Ages.
• Military history: the study of warfare and wars in history; naval history is sometimes considered a sub-branch of military history.
• Natural history: the study of the development of the cosmos, the Earth, biology, and the interactions thereof.
• Paleography: the study of ancient texts.
• People's history: historical work from the perspective of common people.
• Political history: the study of politics in the past.
• Psychohistory: the study of the psychological motivations of historical events.
• Pseudohistory: study about the past that falls outside the domain of mainstream history (sometimes an equivalent of pseudoscience).
• Social history: the study of the process of social change throughout history.
• Universal history: basic to the Western tradition of historiography.
• Women's history: the history of female human beings; gender history is related and covers the perspective of gender.
• World history: the study of history from a global perspective.

[edit] Periods

Main article: Periodisation


Historical study often focuses on events and developments that occur in particular blocks of time. Historians give these periods of time names in order to allow the use of "organising ideas and classificatory generalisations".[23] The names given to a period can vary with geographical
location, as can the dates of the start and end of a particular period. Centuries and decades are commonly
used periods and the time they represent depends on the dating system used. Most periods are constructed
retrospectively and so reflect value judgments made about the past. The way periods are constructed and
the names given to them can affect the way they are viewed and studied.[24]

[edit] Geographical locations

Particular geographical locations can form the basis of historical study, for example, continents, countries
and cities.

[edit] World

Main article: History of the World

World history is the study of major civilizations over the last 3000 years or so. It has led to highly
controversial interpretations by Oswald Spengler and Arnold J. Toynbee, among others. World history is
especially important as a teaching field. It has increasingly entered the university curriculum in the U.S., in many cases replacing courses in Western Civilization, which had a focus on Europe and the U.S. World
history adds extensive new material on Asia, Africa and Latin America.

[edit] Regions

• History of Africa begins with the first emergence of modern human beings on the continent,
continuing into its modern present as a patchwork of diverse and politically developing nation
states.
• History of the Americas is the collective history of North and South America, including Central
America and the Caribbean.
o History of North America is the study of the past passed down from generation to
generation on the continent in the Earth's northern and western hemisphere.
o History of Central America is the study of the past passed down from generation to
generation on the continent in the Earth's western hemisphere.
o History of the Caribbean begins with the oldest evidence of human settlement in the region, where 7,000-year-old remains have been found.
o History of South America is the study of the past passed down from generation to
generation on the continent in the Earth's southern and western hemisphere.
• History of Antarctica emerges from early Western theories of a vast continent, known as Terra
Australis, believed to exist in the far south of the globe.
• History of Australia starts with the documentation of the Makassar trading with Indigenous Australians on Australia's north coast.
• History of New Zealand dates back at least 700 years to when it was discovered and settled by
Polynesians, who developed a distinct Māori culture centred on kinship links and land.
• History of the Pacific Islands covers the history of the islands in the Pacific Ocean.
• History of Eurasia is the collective history of several distinct peripheral coastal regions: the
Middle East, South Asia, East Asia, Southeast Asia, and Europe, linked by the interior mass of the
Eurasian steppe of Central Asia and Eastern Europe.
o History of Europe describes the passage of time from humans inhabiting the European
continent to the present day.
 History of Frisia is the study of the rich history and folklore of the Frisians and
their languages, battles, culture, cuisine, and so forth.
o History of Asia can be seen as the collective history of several distinct peripheral coastal
regions, East Asia, South Asia, and the Middle East linked by the interior mass of the
Eurasian steppe.
 History of East Asia is the study of the past passed down from generation to
generation in East Asia.
 History of the Middle East begins with the earliest civilizations in the region now
known as the Middle East that were established around 3000 BC, in Mesopotamia
(Iraq).
 History of South Asia is the study of the past passed down from generation to
generation in the Sub-Himalayan region.
 History of Southeast Asia has been characterized as interaction between regional
players and foreign powers.

[edit] Military history

Main article: Military history

Military history studies conflicts within human society usually concentrating on historical wars and
warfare including battles, military strategies and weaponry.[25] However, the subject may range from a
melee between two tribes to conflicts between proper militaries to a world war affecting the majority of
the human population. Military historians record the events of military history.

[edit] Social history

Main article: Social history

Social history is the study of how societies adapt and change over periods of time. Social history is an
area of historical study considered by some to be a social science that attempts to view historical evidence
from the point of view of developing social trends. In this view, it may include areas of economic history,
legal history and the analysis of other aspects of civil society that show the evolution of social norms,
behaviors and more.

[edit] Cultural history

Main article: Cultural history

Cultural history, as a discipline, at least in its common definition since the 1970s, often combines the
approaches of anthropology and history to look at popular cultural traditions and cultural interpretations
of historical experience. It examines the records and narrative descriptions of past knowledge, customs,
and arts of a group of people.

[edit] Diplomatic history

Main article: Diplomatic history

Diplomatic history, sometimes referred to as "Rankian History"[26] in honor of Leopold von Ranke,
focuses on politics, politicians and other high rulers and views them as being the driving force of
continuity and change in history. This type of political history is the study of the conduct of international
relations between states or across state boundaries over time. This is the most common form of history
and often corresponds to the classical and popular notion of what history should be.

[edit] People's history

Main article: People's history

A people's history is a type of historical work which attempts to account for historical events from the
perspective of common people. A people's history is the history of the world that is the story of mass
movements and of the outsiders. Individuals left out of other kinds of historical writing are this
approach's primary focus: the disenfranchised, the oppressed, the poor, the nonconformists, and the
otherwise forgotten. This approach also usually focuses on developments that emerge in the fullness of
time, when an overwhelming wave of smaller events causes them to occur.

[edit] Gender history

Main article: Gender history

Gender history is a sub-field of History and Gender studies, which looks at the past from the perspective
of gender. It is, in many ways, an outgrowth of women's history. Despite its relatively short life, Gender
History (and its forerunner Women's History) has had a rather significant effect on the general study of
history. Since the 1960s, when the initially small field first achieved a measure of acceptance, it has gone
through a number of different phases, each with its own challenges and outcomes. Although some of the
changes to the study of history have been quite obvious, such as increased numbers of books on famous
women or simply the admission of greater numbers of women into the historical profession, other
influences are more subtle.

[edit] Pseudohistory
Main article: Pseudohistory

Pseudohistory is a term applied to texts which purport to be historical in nature but which depart from
standard historiographical conventions in a way which undermines their conclusions. Works which draw
controversial conclusions from new, speculative or disputed historical evidence, particularly in the fields
of national, political, military and religious affairs, are often rejected as pseudohistory.

In many countries, such as Japan, Russia, and the United States, the subject taught in the primary and
secondary schools under the name "history" has at times been censored for political reasons. To give just
a few of many examples: in Japan, mention of the Nanking Massacre has been removed from textbooks;
in Russia under Stalin, history was rewritten to conform with communist party doctrine; and in the United
States the history of the American Civil War had been censored to avoid giving offense to white
Southerners.[27][28][29] This practice goes back to the earliest recorded times. In Book Three of The
Republic, Plato recommends that citizens be taught lies in order to instill patriotism.[30]

For more details on this topic, see political historical revisionism.

[edit] See also


History portal • Current events portal • WikiProject History

[edit] Related disciplines


• Historian: a person who studies and writes history.
• Archaeology: the systematic study of our human past, based on the investigation of material culture
and context, together forming the archaeological record.
• Archontology: the study of historical offices and important positions in state, international,
political, religious and other organizations and societies.

[edit] Lists

• List of centuries
• List of decades
• List of historians
• List of historians by area of study
• List of history journals
• List of history topics
• List of timelines (Timeline)

[edit] Methods and tools

• Contemporaneous corroboration: a method historians use to establish facts beyond their limited
lifespan.
• Prosopography: a methodological tool for the collection of all known information about individuals
within a given period.

[edit] Other

• Changelog: a log or record of changes made to a project, such as a website or software project.
• Historical drama film: the portrayal of history on film.
• Social change: changes in the nature, the social institutions, the social behavior, or the social
relations of a society or community of people.

[edit] References
Race (classification of human beings)
From Wikipedia, the free encyclopedia



This article may be too long to comfortably read and navigate. Please consider splitting content into
sub-articles and using this article for a summary of the key points of the subject.
For other uses, see Race.

The term race or racial group usually refers to the concept of categorizing humans into populations or
groups on the basis of various sets of characteristics.[1] The most widely used human racial categories are
based on visible traits (especially skin color, cranial or facial features and hair texture), and self-
identification.[1][2]

Conceptions of race, as well as specific ways of grouping races, vary by culture and over time, and are
often controversial for scientific as well as social and political reasons. The controversy ultimately
revolves around whether or not races are natural types or socially constructed, and the degree to which
perceived differences in ability and achievement, categorized on the basis of race, are a product of
inherited (i.e. genetic) traits or environmental, social and cultural factors.

Some argue that although race is a valid taxonomic concept in other species, it cannot be applied to
humans.[3] Many scientists have argued that race definitions are imprecise, arbitrary, derived from custom,
have many exceptions, have many gradations, and that the numbers of races delineated vary according to
the culture making the racial distinctions; thus they reject the notion that any definition of race pertaining
to humans can have taxonomic rigour and validity.[4] Today most scientists study human genotypic and
phenotypic variation using concepts such as "population" and "clinal gradation". Many contend that while
racial categorizations may be marked by phenotypic or genotypic traits, the idea of race itself, and actual
divisions of persons into races or racial groups, are social constructs.[5][6][7][8][9][10][11][12]

Contents
[hide]

• 1 History
o 1.1 In ancient civilizations
o 1.2 Age of Discovery
o 1.3 Scientific concepts
o 1.4 17th and 18th century
o 1.5 19th century
• 2 Modern debates
o 2.1 Models of human evolution
o 2.2 Race as subspecies
 2.2.1 Morphological subspecies
 2.2.2 Subspecies genetically differentiated populations
o 2.3 Population genetics: population and cline
 2.3.1 Clines
 2.3.2 Populations
o 2.4 Molecular genetics: lineages and clusters
 2.4.1 Molecular lineages, Y chromosomes and mitochondrial DNA
 2.4.2 How much are genes shared? Clustering analyses and what they tell us
o 2.5 Summary of different biological definitions of race
o 2.6 Current views across disciplines
o 2.7 Races as social constructions
 2.7.1 In the United States
 2.7.2 In Brazil
 2.7.3 Marketing of race: genetic lineages as social lineages
• 3 Political and practical uses
o 3.1 Racism
o 3.2 Race and intelligence
o 3.3 In biomedicine
o 3.4 In law enforcement
• 4 See also
• 5 Footnotes
• 6 Bibliography
• 7 External links
o 7.1 Official statements and standards
o 7.2 Popular press

o 7.3 Others

[edit] History
See also: Historical definitions of race

[edit] In ancient civilizations

See also: Ancient Egypt and race

Blue-eyed Central Asian (Tocharian?) and East-Asian Buddhist monks, Bezeklik, Eastern Tarim Basin,
9th-10th century.[13][14]

Given visually complex social relationships, humans presumably have always observed and speculated
about the physical differences among individuals and groups. But different societies have attributed
markedly different meanings to these distinctions. For example, the Ancient Egyptian sacred text called
Book of Gates identifies four categories that are now conventionally labeled "Egyptians", "Asiatics",
"Libyans", and "Nubians", but such distinctions tended to conflate differences as defined by physical
features such as skin tone, with tribal and national identity. Classical civilizations from Rome to China
tended to invest much more importance in familial or tribal affiliation than in one's physical appearance
(Dikötter 1992; Goldenberg 2003). Ancient Greek and Roman authors also attempted to explain and
categorize visible biological differences among peoples known to them. Such categories often also
included fantastical human-like beings that were supposed to exist in far-away lands. Some Roman
writers adhered to an environmental determinism in which climate could affect the appearance and
character of groups (Isaac 2004). In many ancient civilizations, individuals with widely varying physical
appearances became full members of a society by growing up within that society or by adopting that
society's cultural norms (Snowden 1983; Lewis 1990).

Julian the Apostate was an early observer of the differences in humans, based upon ethnic, cultural, and
geographic traits, but as the ideology of "race" had not yet been constructed, he believed that they were
the result of "Providence":

Come, tell me why it is that the Celts and the Germans are fierce, while the Hellenes and Romans are, generally
speaking, inclined to political life and humane, though at the same time unyielding and warlike? Why the Egyptians
are more intelligent and more given to crafts, and the Syrians unwarlike and effeminate, but at the same time
intelligent, hot-tempered, vain and quick to learn? For if there is anyone who does not discern a reason for these
differences among the nations, but rather declaims that all this so befell spontaneously, how, I ask, can he still
believe that the universe is administered by a providence? — Julian, the Apostate.[15]

Medieval models of "race" mixed Classical ideas with the notion that humanity as a whole was descended
from Shem, Ham and Japheth, the three sons of Noah, producing distinct Semitic (Asiatic), Hamitic
(African), and Japhetic (Indo-European) peoples. This theory dates back to the Judeo-Christian tradition,
as described in the Babylonian Talmud, which states that "the descendants of Ham are cursed by being
black, and [it] depicts Ham as a sinful man and his progeny as degenerates." In the 14th century, the
Islamic sociologist Ibn Khaldun, an adherent of environmental determinism, dismissed this theory as a
myth. He wrote that black skin was due to the hot climate of sub-Saharan Africa and not due to the
descendants of Ham being cursed.[16]

In the 9th century, Al-Jahiz, an Afro-Arab biologist and Islamic philosopher of East African descent, was
an early adherent of environmental determinism and explained how the environment can determine the
physical characteristics of the inhabitants of a certain community. He used his theories on the struggle for
existence and environmental determinism to explain the origins of different human skin colors,
particularly black skin, which he believed to be the result of the environment. He cited a stony region of
black basalt in the northern Najd as evidence for his theory:[17]

"[It] is so unusual that its gazelles and ostriches, its insects and flies, its foxes, sheep and asses, its horses and its
birds are all black. Blackness and whiteness are in fact caused by the properties of the region, as well as by the
God-given nature of water and soil and by the proximity or remoteness of the sun and the intensity or mildness of
its heat."

[edit] Age of Discovery

The word "race", along with many of the ideas now associated with the term, were products of European
imperialism and colonization during the age of exploration. (Smedley 1999) As Europeans encountered
people from different parts of the world, they speculated about the physical, social, and cultural
differences among various human groups. The rise of the Atlantic slave trade, which gradually displaced
an earlier trade in slaves from throughout the world, created a further incentive to categorize human
groups in order to justify the subordination of African slaves. (Meltzer 1993) Drawing on Classical
sources and upon their own internal interactions — for example, the hostility between the English and
Irish was a powerful influence on early thinking about the differences between people (Takaki 1993) —
Europeans began to sort themselves and others into groups associated with physical appearance and with
deeply ingrained behaviors and capacities. A set of folk beliefs took hold that linked inherited physical
differences between groups to inherited intellectual, behavioral, and moral qualities. (Banton 1977)
Although similar ideas can be found in other cultures (Lewis 1990; Dikötter 1992), they appear not to
have had as much influence upon their social structures as was found in Europe and the parts of the world
colonized by Europeans. However, often brutal conflicts between ethnic groups have existed throughout
history and across the world.

[edit] Scientific concepts

Further information: Race (historical definitions), Scientific racism, Craniofacial anthropometry

The first scientific attempts to classify humans by categories of race date from the 17th century, along
with the development of European imperialism and colonization around the world. The first post-
Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle
division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the
different species or races which inhabit it"), published in 1684.

[edit] 17th and 18th century

According to philosopher Michel Foucault, theories of both racial and class conflict can be traced to 17th
century political debates about innate differences among ethnicities. In England radicals such as John
Lilburne emphasised conflicts between Saxon and Norman peoples. In France Henri de Boulainvilliers
argued that the Germanic Franks possessed a natural right to leadership, in contrast to descendants of the
Gauls. In the 18th century, the differences among human groups became a focus of scientific investigation
(Todorov 1993). Initially, scholars focused on cataloguing and describing "The Natural Varieties of
Mankind," as Johann Friedrich Blumenbach entitled his 1775 text (which established the five major
divisions of humans still reflected in some racial classifications, i.e., the Caucasoid race, Mongoloid race,
Ethiopian race (later termed the Negroid race), American Indian race, and Malayan race). From the 17th
through the 19th centuries, the merging of folk beliefs about group differences with scientific
explanations of those differences produced what one scholar has called an "ideology of race" (Smedley
1999). According to this ideology, races are primordial, natural, enduring and distinct. It was further
argued that some groups may be the result of mixture between formerly distinct populations, but that
careful study could distinguish the ancestral races that had combined to produce admixed groups.

[edit] 19th century

The 19th century saw attempts to change race from a taxonomic to a biological concept. In the 19th
century a number of natural scientists wrote on race: Georges Cuvier, Charles Darwin, Alfred Wallace,
Francis Galton, James Cowles Pritchard, Louis Agassiz, Charles Pickering, and Johann Friedrich
Blumenbach. As the science of anthropology took shape in the 19th century, European and American
scientists increasingly sought explanations for the behavioral and cultural differences they attributed to
groups (Stanton 1960). For example, using anthropometrics, invented by Francis Galton and Alphonse
Bertillon, they measured the shapes and sizes of skulls and related the results to group differences in
intelligence or other attributes (Lieberman 2001).

These scientists made three claims about race: first, that races are objective, naturally occurring divisions
of humanity; second, that there is a strong relationship between biological races and other human
phenomena (such as forms of activity and interpersonal relations and culture, and by extension the relative
material success of cultures), thus biologizing the notion of "race", as Foucault demonstrated in his
historical analysis; third, that race is therefore a valid scientific category that can be used to explain and
predict individual and group behavior. Races were distinguished by skin color, facial type, cranial profile
and size, texture and color of hair. Moreover, races were almost universally considered to reflect group
differences in moral character and intelligence.

The eugenics movement of the late 19th and early 20th centuries, inspired by Arthur Gobineau's An Essay
on the Inequality of the Human Races (1853–1855) and Vacher de Lapouge's "anthroposociology",
asserted as self-evident the biological inferiority of particular groups (Kevles 1985). In many parts of the
world, the idea of race became a way of rigidly dividing groups by culture as well as by physical
appearances (Hannaford 1996). Campaigns of oppression and genocide were often motivated by supposed
racial differences (Horowitz 2001).

In his most controversial book, The Descent of Man, Charles Darwin made strong suggestions of racial
differences and European superiority. In Darwin's view, stronger tribes of humans always replaced
weaker tribes. As savage tribes came into conflict with civilized nations, such as England, the less advanced
people were destroyed.[18] Nevertheless, he also noted the great difficulty naturalists had in trying to
decide how many "races" there actually were (Darwin was himself a monogenist on the question of race,
believing that all humans were of the same species and finding "race" to be a somewhat arbitrary
distinction among some groups):

Man has been studied more carefully than any other animal, and yet there is the greatest possible diversity amongst
capable judges whether he should be classed as a single species or race, or as two (Virey), as three (Jacquinot), as
four (Kant), five (Blumenbach), six (Buffon), seven (Hunter), eight (Agassiz), eleven (Pickering), fifteen (Bory St.
Vincent), sixteen (Desmoulins), twenty-two (Morton), sixty (Crawfurd), or as sixty-three, according to Burke. This
diversity of judgment does not prove that the races ought not to be ranked as species, but it shews that they graduate
into each other, and that it is hardly possible to discover clear distinctive characters between them.[19]

[edit] Modern debates


[edit] Models of human evolution

See also: Multiregional hypothesis


See also: Recent single origin hypothesis

In a recent article, Leonard Lieberman and Fatimah Jackson have suggested that any new support for a
biological concept of race will likely come from another source, namely, the study of human evolution.
They therefore ask what, if any, implications current models of human evolution may have for any
biological conception of race.[20]

Today, all humans are classified as belonging to the species Homo sapiens and sub-species Homo sapiens
sapiens. However, this was not the first hominid species: the first species of the genus Homo, Homo habilis,
evolved in East Africa at least 2 million years ago, and members of this species populated different parts
of Africa in a relatively short time. Homo erectus evolved more than 1.8 million years ago, and by 1.5
million years ago had spread throughout Europe and Asia. Virtually all physical anthropologists agree that
Homo sapiens evolved out of Homo erectus. Anthropologists have been divided as to whether Homo
sapiens evolved as one interconnected species from H. erectus (called the Multiregional Model, or the
Regional Continuity Model), or evolved only in East Africa, and then migrated out of Africa and replaced
H. erectus populations throughout Europe and Asia (called the Out of Africa Model or the Complete
Replacement Model). Anthropologists continue to debate both possibilities, and the evidence is
technically ambiguous as to which model is correct, although most anthropologists currently favor the Out
of Africa model.
Lieberman and Jackson have argued that while advocates of both the Multiregional Model and the Out of
Africa Model use the word race and make racial assumptions, none define the term.[21] They conclude that
"Each model has implications that both magnify and minimize the differences between races. Yet each
model seems to take race and races as a conceptual reality. The net result is that those anthropologists
who prefer to view races as a reality are encouraged to do so" and conclude that students of human
evolution would be better off avoiding the word race, and instead describe genetic differences in terms of
populations and clinal gradations.[22]

[edit] Race as subspecies

Further information: Race (biology), Species, Subspecies, Systematics, Phylogenetics, Cladistics.

With the advent of the modern synthesis in the early 20th century, many biologists sought to use
evolutionary models and population genetics in an attempt to formalise taxonomy. The Biological
Species Concept (BSC) is the most widely used system for describing species; it defines a species as a
group of organisms that interbreed in their natural environment and produce viable offspring. In
practice, species are not classified according to the BSC but typologically, by the use of a holotype,
because of the difficulty of determining whether all members of a group of organisms do or potentially
can interbreed.[23] BSC species are routinely classified at a subspecific level, though this classification
is conducted differently for different taxa; for mammals, the normal taxonomic unit below the species
level is usually the subspecies.[24] More recently the Phylogenetic Species Concept (PSC) has gained a
substantial following. The PSC is based on the idea of a least-inclusive taxonomic unit (LITU): in
phylogenetic classification no subspecies can exist, because any monophyletic group would
automatically constitute a LITU. Technically, species cease to exist, as do all hierarchical taxa; a LITU
is effectively defined as any monophyletic taxon. Phylogenetics is strongly influenced by cladistics,
which classifies organisms based on evolutionary descent rather than on similarities between groups of
organisms.[23] In biology the term "race" is very rarely used because it is ambiguous: "'Race' is not
being defined or used consistently; its referents are varied and shift depending on context. The term is
often used colloquially to refer to a range of human groupings. Religious, cultural, social, national,
ethnic, linguistic, genetic, geographical and anatomical groups have been and sometimes still are called
'races'".[25] Generally, when it is used, it is synonymous with subspecies.[26][25][27] One of the main
obstacles to identifying subspecies is that, while it is a recognised taxonomic term, it has no precise
definition.[26]

Species of organisms that are monotypic (i.e. form a single subspecies) display at least one of these
properties:

• All members of the species are very similar and cannot be sensibly divided into biologically
significant subcategories.
• The individuals vary considerably but the variation is essentially random and largely meaningless
so far as genetic transmission of these variations is concerned (many plant species fit into this
category, which is why horticulturists interested in preserving, say, a particular flower color avoid
propagation from seed, and instead use vegetative methods like propagation from cuttings).
• The variation among individuals is noticeable and follows a pattern, but there are no clear dividing
lines among separate groups: they fade imperceptibly into one another. Such clinal variation
displays a lack of allopatric partition between groups (i.e. a clearly defined boundary demarcating
the subspecies), which is usually required before they are recognised as subspecies.[28]

A polytypic species has two or more subspecies. These are separate populations that are genetically
differentiated from one another and more reproductively isolated; gene flow between these populations
is much reduced, leading to genetic differentiation.
[edit] Morphological subspecies

Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations,[26]
or, to put it another way, "the designation 'subspecies' is used to indicate an objective degree of
microevolutionary divergence".[25] One objection to this idea is that it does not specify any degree of
differentiation; therefore any population that is somewhat biologically different could be considered a
subspecies, even down to the level of a local population. As a result it is necessary to impose a threshold on
the level of difference required for a population to be designated a subspecies.[26] This effectively
means that populations of organisms must have reached a certain measurable level of difference in order
to be recognised as subspecies. Dean Amadon proposed in 1949 that subspecies be defined
according to the seventy-five percent rule, which means that 75% of a population must lie outside 99% of
the range of other populations for a given defining morphological character or set of characters. The 75
percent rule still has defenders, but other scholars argue that it should be replaced with a 90 or 95 percent
rule.[29][30][31]
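
To make the rule concrete, here is a minimal sketch in Python of how the seventy-five percent rule could
be checked for a single quantitative trait. The populations, trait values and sample sizes are hypothetical,
and actual subspecies designation weighs multiple characters and many more lines of evidence.

```python
# A minimal sketch of Amadon's 75% rule for one quantitative trait.
# The populations and trait values are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
pop_a = rng.normal(loc=10.0, scale=1.0, size=5000)  # trait in population A
pop_b = rng.normal(loc=14.0, scale=1.0, size=5000)  # trait in population B

def satisfies_75_rule(focal, reference, frac=0.75, coverage=0.99):
    """True if at least `frac` of `focal` lies outside the central
    `coverage` interval of `reference` for this trait."""
    lo, hi = np.quantile(reference, [(1 - coverage) / 2, (1 + coverage) / 2])
    outside = np.mean((focal < lo) | (focal > hi))
    return outside >= frac

print(satisfies_75_rule(pop_a, pop_b))  # True: the two means are well separated
```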

In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the
world should, in general, be considered to be of different subspecies by the usual criterion that most
individuals of such populations can be allocated correctly by inspection. It does not require a trained
anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by
features, skin color, and type of hair in spite of so much variability within each of these groups that every
individual can easily be distinguished from every other. However, it is customary to use the term race
rather than subspecies for the major subdivisions of the human species as well as for minor ones.[32]

On the other hand, in practice subspecies are often defined by easily observable physical appearance, but
there is not necessarily any evolutionary significance to these observed differences, so this form of
classification has become less acceptable to evolutionary biologists.[26][25] Likewise, this typological
approach to "race" is generally regarded by biologists and anthropologists as discredited.

Because of the difficulty in classifying subspecies morphologically, many biologists reject the concept
altogether, citing problems such as:[25]

• Visible physical differences do not correlate with one another, leading to the possibility of
different classifications for the same individual organisms.[25]
• Parallel evolution can lead to the appearance of similarities between groups of
organisms that are not part of the same species.[25]
• The existence of isolated populations within previously designated subspecies.[25]
• That the criteria for classification are arbitrary.[25]

[edit] Subspecies genetically differentiated populations

Another way to look at differences between populations is to measure genetic differences rather than
physical differences; these should be less biased. Genetic differences between populations of organisms
can be measured using Sewall Wright's fixation index, often abbreviated to FST. This
statistic is used to compare differences between any two given populations and can be used to measure
genetic differences between populations for individual genes, or for many genes simultaneously.[33] For
example, it is often stated that the fixation index for humans is about 0.15. This means that about 85% of
the variation measured in the human population occurs within populations, and about 15% of the variation
occurs between populations; put another way, any two individuals from different populations are almost as
likely to be as similar to each other as two individuals drawn from the same population.[26][25] It is often
stated that human genetic variation is low compared to other mammalian species, and it has been claimed
that this should be taken as evidence that there is no natural subdivision of the human
population.[34][35][36][37][38] Wright himself believed that a value of 0.25 represented great genetic
variation and that an FST of 0.15-0.25 represented moderate variation. It should however be noted that
about 5% of human variation occurs between populations within continents, and therefore the FST between
continental groups of humans (or races) is as low as 0.1 (or possibly lower).[33]
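
As an illustration of the statistic itself, the following is a minimal sketch in Python of Wright's FST for a
single bi-allelic locus, computed from per-population allele frequencies as FST = (HT - HS) / HT. The
frequencies and the equal weighting of populations are hypothetical choices made for the example.

```python
# A minimal sketch of Wright's fixation index (FST) for one bi-allelic locus.
# The allele frequencies below are hypothetical.
import numpy as np

def fst(freqs, weights=None):
    """FST = (HT - HS) / HT. HS is the (weighted) mean expected
    heterozygosity within populations; HT is the expected heterozygosity
    of the pooled population. `freqs` holds one allele's frequency in
    each population."""
    freqs = np.asarray(freqs, dtype=float)
    w = np.full(len(freqs), 1 / len(freqs)) if weights is None else np.asarray(weights)
    hs = np.sum(w * 2 * freqs * (1 - freqs))  # mean within-population heterozygosity
    p_bar = np.sum(w * freqs)                 # pooled allele frequency
    ht = 2 * p_bar * (1 - p_bar)              # total expected heterozygosity
    return (ht - hs) / ht

print(round(fst([0.2, 0.5, 0.8]), 2))  # 0.24 for these illustrative frequencies
```

For the frequencies above, HS = 0.38 and HT = 0.5, so the sketch returns (0.5 - 0.38) / 0.5 = 0.24.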

In their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races"[39] Jeffrey Long
and Rick Kittles give a long critique of the application of FST to human populations. They find that the
figure of 85% is misleading because it implies that all human populations contain on average 85% of all
genetic diversity. This does not correctly reflect human population history, they claim, because it treats all
human groups as independent. A more realistic portrayal of the way human groups are related is to
understand that some human groups are parental to other groups and that these groups represent
paraphyletic groups to their descent groups. For example under the recent African origin theory the
human population in Africa is paraphyletic to all other human groups because it represents the ancestral
group from which all non-African populations derive, but more than that, non-African groups only derive
from a small non-representative sample of this African population. This means that all non-African
groups are more closely related to each other and to some African groups (probably east Africans) than
they are to others, and further that the migration out of Africa represented a genetic bottleneck, with a
great deal of the diversity that existed in Africa not being carried out of Africa by the emigrating groups.
This view produces a version of human population movements that does not result in all human populations
being independent, but rather produces a series of dilutions of diversity the further from Africa any
population lives, each founding event representing a genetic subset of its parental population.
Kittles find that rather than 85% of human genetic diversity existing in all human populations, about
100% of human diversity exists in a single African population, whereas only about 70% of human genetic
diversity exists in a population derived from New Guinea. Long and Kittles make the observation that this
still produces a global human population that is genetically homogeneous compared to other mammalian
populations.

Wright's F statistics are not used to determine whether a group can be described as a subspecies or not:
though the statistic is used to measure the degree of differentiation between populations, the degree of
genetic differentiation is not a marker of subspecies status.[33] Generally, taxonomists prefer to use
phylogenetic analysis to determine whether a population can be considered a subspecies. Phylogenetic
analysis relies on the concept of derived characteristics that are not shared between groups; this means
that these populations are usually allopatric and therefore discretely bounded, which makes subspecies,
evolutionarily speaking, monophyletic groups.[26] The clinality of human genetic variation in general rules
out any idea that human population groups can be considered monophyletic, as there appears always to
have been a great deal of gene flow between human populations.[26]

[edit] Population genetics: population and cline

At the beginning of the 20th century, anthropologists questioned, and eventually abandoned, the claim
that biologically distinct races are isomorphic with distinct linguistic, cultural, and social groups. Shortly
thereafter, the rise of population genetics provided scientists with a new understanding of the sources of
phenotypic variation. This new science has led many mainstream evolutionary scientists in anthropology
and biology to question the very validity of race as a scientific concept describing an objectively real
phenomenon. Those who came to reject the validity of the concept of race did so for four reasons:
empirical, definitional, the availability of alternative concepts, and ethical (Lieberman and Byrne 1993).

The first to challenge the concept of race on empirical grounds were anthropologists Franz Boas, who
demonstrated phenotypic plasticity due to environmental factors (Boas 1912), and Ashley Montagu
(1941, 1942), who relied on evidence from genetics. Zoologists Edward O. Wilson and W. Brown then
challenged the concept from the perspective of general animal systematics, and further rejected the claim
that "races" were equivalent to "subspecies" (Wilson and Brown 1953).
[edit] Clines

One of the crucial innovations in reconceptualizing genotypic and phenotypic variation was
anthropologist C. Loring Brace's observation that such variation, insofar as it is affected by natural
selection, migration, or genetic drift, is distributed along geographic gradations or clines (Brace 1964).
This point called attention to a problem common to phenotype-based descriptions of races (for example,
those based on hair texture and skin color): they ignore a host of other similarities and differences (for
example, blood type) that do not correlate highly with the markers for race. Thus anthropologist Frank
Livingstone concluded that, since clines cross racial boundaries, "there are no races, only clines"
(Livingstone 1962: 279).

In a response to Livingstone, Theodosius Dobzhansky argued that when talking about "race" one must be
attentive to how the term is being used: "I agree with Dr. Livingston that if races have to be 'discrete
units,' then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than
vice versa, then the explanation is invalid." He further argued that one could use the term race if one
distinguished between "race differences" and "the race concept." The former refers to any distinction in
gene frequencies between populations; the latter is "a matter of judgment." He further observed that even
when there is clinal variation, "Race differences are objectively ascertainable biological phenomena ....
but it does not follow that racially distinct populations must be given racial (or subspecific) labels."[40] In
short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also
agree that the use of the race concept to classify people, and how the race concept is used, is a matter of
social convention. They differ on whether the race concept remains a meaningful and useful social
convention.

In 1964, biologists Paul Ehrlich and Holm pointed out cases where two or more clines are distributed
discordantly—for example, melanin is distributed in a decreasing pattern from the equator north and
south; frequencies for the haplotype for beta-S hemoglobin, on the other hand, radiate out of specific
geographical points in Africa (Ehrlich and Holm 1964). As anthropologists Leonard Lieberman and
Fatimah Linda Jackson observe, "Discordant patterns of heterogeneity falsify any description of a
population as if it were genotypically or even phenotypically homogeneous" (Lieberman and Jackson
1995).

Patterns such as those seen in human physical and genetic variation, as described above, mean that the
number and geographic location of any described races is highly dependent on the importance attributed
to, and the quantity of, the traits considered. For example, if only skin colour and a "two race" system of
classification were used, then one might classify Indigenous Australians in the same "race" as Black
people, and Caucasians in the same "race" as East Asian people, but biologists and anthropologists would
dispute that these classifications have any scientific validity. On the other hand, the greater the number of
traits (or alleles) considered, the more subdivisions of humanity are detected, because traits and gene
frequencies do not always correspond to the same geographical location, or as Ossorio and Duster (2005)
put it:

Anthropologists long ago discovered that humans' physical traits vary gradually, with groups that are close
geographic neighbors being more similar than groups that are geographically separated. This pattern of variation,
known as clinal variation, is also observed for many alleles that vary from one human group to another. Another
observation is that traits or alleles that vary from one group to another do not vary at the same rate. This pattern is
referred to as nonconcordant variation. Because the variation of physical traits is clinal and nonconcordant,
anthropologists of the late 19th and early 20th centuries discovered that the more traits and the more human groups
they measured, the fewer discrete differences they observed among races and the more categories they had to create
to classify human beings. The number of races observed expanded to the 30s and 50s, and eventually
anthropologists concluded that there were no discrete races (Marks, 2002). Twentieth and 21st century biomedical
researchers have discovered this same feature when evaluating human variation at the level of alleles and allele
frequencies. Nature has not created four or five distinct, nonoverlapping genetic groups of people.[41]
[edit] Populations

Population geneticists have debated as to whether the concept of population can provide a basis for a new
conception of race. In order to do this a working definition of population must be found. Surprisingly
there is no generally accepted concept of population that biologists use. It has been pointed out that the
concept of population is central to ecology, evolutionary biology and conservation biology, but also that
most definitions of population rely on qualitative descriptions such as "a group of organisms of the same
species occupying a particular space at a particular time"[42] Waples and Gaggiotti identify two broad
types of definitions for populations, those that fall into an ecological paradigm and those that fall into an
evolutionary paradigm. Examples of such definitions are:

• Ecological paradigm: A group of individuals of the same species that co-occur in space and time
and have an opportunity to interact with each other.
• Evolutionary paradigm: A group of individuals of the same species living in close enough
proximity that any member of the group can potentially mate with any other member.[42]

Richard Lewontin, claiming that 85 percent of human variation occurs within populations and not among
populations, argued that neither "race" nor "subspecies" was an appropriate or useful way to describe
populations (Lewontin 1973). Nevertheless, barriers, which may be cultural or physical, between
populations can limit gene flow and increase genetic differences. Recent work by population geneticists
conducting research in Europe suggests that ethnic identity can be a barrier to gene flow.[43][44][45][46] Others,
such as Ernst Mayr, have argued for a notion of "geographic race" [4]. Some researchers report that the
variation between racial groups (measured by Sewall Wright's population structure statistic FST) accounts
for as little as 5% of human genetic variation. Sewall Wright himself commented that if differences this
large were seen in another species, they would be called subspecies.[47] In 2003 A. W. F. Edwards argued
that cluster analysis supersedes Lewontin's arguments (see below).

These empirical challenges to the concept of race forced evolutionary sciences to reconsider their
definition of race. Mid-century, anthropologist William Boyd defined race as:

A population which differs significantly from other populations in regard to the frequency of one
or more of the genes it possesses. It is an arbitrary matter which, and how many, gene loci we
choose to consider as a significant "constellation" (Boyd 1950).

Lieberman and Jackson (1994) have pointed out that "the weakness of this statement is that if one gene
can distinguish races then the number of races is as numerous as the number of human couples
reproducing." Moreover, anthropologist Stephen Molnar has suggested that the discordance of clines
inevitably results in a multiplication of races that renders the concept itself useless (Molnar 1992).

The distribution of many physical traits resembles the distribution of genetic variation within and between
human populations (American Association of Physical Anthropologists 1996; Keita and Kittles 1997). For
example, ~90% of the variation in human head shapes occurs within every human group, and ~10%
separates groups, with a greater variability of head shape among individuals with recent African ancestors
(Relethford 2002).

[edit] Molecular genetics: lineages and clusters

With the recent availability of large amounts of human genetic data from many geographically distant
human groups, scientists have again started to investigate the relationships between people from various
parts of the world. One method is to investigate DNA molecules that are passed down from mother to
child (mtDNA) or from father to son (Y chromosomes); these form molecular lineages and can be
informative regarding prehistoric population migrations. Alternatively, autosomal alleles are investigated
in an attempt to understand how much genetic material groups of people share. This work has led to a
debate amongst geneticists, molecular anthropologists and medical doctors as to the validity of concepts
such as "race". Some researchers insist that classifying people into groups based on ancestry may be
important from medical and social policy points of view, and claim to be able to do so accurately. Others
claim that individuals from different groups share far too much of their genetic material for group
membership to have any medical implications. This has reignited the scientific debate over the validity of
human classification and concepts of "race".

[edit] Molecular lineages, Y chromosomes and mitochondrial DNA

Further information: Human genetic variation

Mitochondria are intracellular organelles that contain DNA; this mitochondrial DNA (mtDNA) is passed
in a direct female line of descent from mother to child. Human Y chromosomes are male-specific sex
chromosomes; any human that possesses a Y chromosome will be morphologically male. Y chromosomes
are therefore passed from father to son. When a mutation arises in mtDNA or a Y chromosome, it is passed
down a specific maternal or paternal line, and because mutations accumulate on these molecules they can
be used to identify specific molecular lineages. These mutations derive from copying mistakes: when
DNA is copied, a single mistake may occur in the sequence, and these single mistakes are called single
nucleotide polymorphisms (SNPs).

[Collapsed figure: molecular lineages, showing an ancestral haplogroup giving rise to haplogroup A
(Hg A) and haplogroup B (Hg B).]

Mitochondrial DNA and Y chromosome research has produced three reproducible observations relevant
to race and human evolution.[48]

Firstly, all mtDNA and Y chromosome lineages derive from a common ancestral molecule. For mtDNA
this ancestor is estimated to have lived about 140,000-290,000 years ago (Mitochondrial Eve), while for Y
chromosomes the ancestor is estimated to have lived about 70,000 years ago (Y chromosome Adam).
These observations are robust, and the individuals that originally carried these ancestral molecules are the
direct female and male line most recent common ancestors of all extant anatomically modern humans.
The observation that these are the direct female line and male line ancestors of all living humans should
not be interpreted as meaning that either was the first anatomically modern human. Nor should we assume
that there were no other modern humans living concurrently with mitochondrial Eve or Y chromosome
Adam. A more reasonable explanation is that other humans who lived at the same time did indeed
reproduce and pass their genes down to extant humans, but that their mitochondrial and Y chromosomal
lineages have been lost over time, probably due to random events (e.g. producing only male or female
children). It is impossible to know to what extent these non-extant lineages have been lost, or how much
they differed from the mtDNA or Y chromosome of our maternal and paternal lineage MRCA. The
difference in dates between Y chromosome Adam and mitochondrial Eve is usually attributed to a higher
extinction rate for Y chromosomes. This is probably because a few very successful men produce a great
many children, while a larger number of less successful men will produce far fewer children.

Secondly, mtDNA and Y chromosome work supports a recent African origin for anatomically modern
humans, with the ancestors of all extant modern humans leaving Africa somewhere between 100,000 and
50,000 years ago.[48][49][50][51]
Thirdly, studies show that specific types (haplogroups) of mtDNA or Y chromosomes do not always
cluster by geography, ethnicity or race, implying multiple lineages are involved in founding modern
human populations, with many closely related lineages spread over large geographic areas, and many
populations containing distantly related lineages.[48] Keita et al. (2004) say, with reference to Y
chromosome and mtDNA studies and their relevance to concepts of "race":

Y-chromosome and mitochondrial DNA genealogies are especially interesting because they demonstrate the lack of
concordance of lineages with morphology and facilitate a phylogenetic analysis. Individuals with the same
morphology do not necessarily cluster with each other by lineage, and a given lineage does not include only
individuals with the same trait complex (or 'racial type'). Y-chromosome DNA from Africa alone suffices to make
this point. Africa contains populations whose members have a range of external phenotypes. This variation has
usually been described in terms of 'race' (Caucasoids, Pygmoids, Congoids, Khoisanoids). But the Y-chromosome
clade defined by the PN2 transition (PN2/M35, PN2/M2) [see haplogroup E3b and Haplogroup E3a] shatters the
boundaries of phenotypically defined races and true breeding populations across a great geographical expanse.
African peoples with a range of skin colors, hair forms and physiognomies have substantial percentages of males
whose Y chromosomes form closely related clades with each other, but not with others who are phenotypically
similar. The individuals in the morphologically or geographically defined 'races' are not characterized by 'private'
distinct lineages restricted to each of them.[52]

[edit] How much are genes shared? Clustering analyses and what they tell us

Further information: Human genetic variation

Human genetic variation is not distributed uniformly throughout the global population. The global range
of human habitation means that there are great distances between some human populations (e.g. between
South America and Southern Africa), and this reduces gene flow between them. On the other hand,
environmental selection is also likely to play a role in differences between human populations;
conversely, it is now believed that the majority of genetic differences between populations are selectively
neutral. The existence of differences between peoples from different regions of the world is relevant to
discussions about the concept of "race", and some biologists believe that the language of "race" is relevant
in describing human genetic variation. It is now possible to reasonably estimate the continents of origin of
an individual's ancestors based on genetic data.[53]

[Collapsed infobox: Multi Locus Allele Clusters.]

Richard Lewontin has claimed that "race" is a meaningless classification because the majority of human
variation is found within groups (~85%), and therefore two individuals from different "races" are almost
as likely to be as similar to each other as either is to someone from their own "race". In 2003 A. W. F.
Edwards rebutted this argument, claiming that Lewontin's conclusion ignores the fact that most of the
information that distinguishes populations is hidden in the correlation structure of the data and not simply
in the variation of the individual factors (see Infobox: Multi Locus Allele Clusters). Edwards concludes
that "It is not true that 'racial classification is ... of virtually no genetic or taxonomic significance' or that
'you can't predict someone’s race by their genes'."[54] Researchers such as Neil Risch and Noah Rosenberg
have argued that a person's biological and cultural background may have important implications for
medical treatment decisions, both for genetic and non-genetic reasons.[55][56][57]
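
The intuition behind Edwards' "correlation structure" point can be shown with a small simulation. The
sketch below, in Python, uses two hypothetical populations whose allele frequencies differ only slightly at
each locus, together with a simple nearest-centroid classifier; these are illustrative stand-ins rather than
Edwards' actual analysis. Any single locus barely separates the groups, but a few hundred loci considered
jointly classify individuals almost perfectly.

```python
# A minimal sketch of the multi-locus argument: loci that are individually
# weak classifiers become jointly decisive. Populations are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_loci, n_ind = 300, 500
p1 = rng.uniform(0.4, 0.6, n_loci)                        # pop 1 allele frequencies
p2 = np.clip(p1 + rng.choice([-0.1, 0.1], n_loci), 0, 1)  # pop 2 differs slightly

g1 = rng.binomial(2, p1, size=(n_ind, n_loci))  # genotypes: 0, 1 or 2 allele copies
g2 = rng.binomial(2, p2, size=(n_ind, n_loci))

def accuracy(k):
    """Nearest-centroid classification accuracy using the first k loci."""
    c1, c2 = g1[:, :k].mean(axis=0), g2[:, :k].mean(axis=0)
    def to_pop1(g):
        d1 = ((g[:, :k] - c1) ** 2).sum(axis=1)
        d2 = ((g[:, :k] - c2) ** 2).sum(axis=1)
        return d1 < d2  # True means "assigned to population 1"
    return (to_pop1(g1).mean() + (~to_pop1(g2)).mean()) / 2

for k in (1, 10, 300):
    print(k, round(accuracy(k), 2))  # accuracy rises toward 1.0 with more loci
```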

The results obtained by clustering analyses are dependent on several criteria:


• The clusters produced are relative clusters, not absolute clusters: each cluster is the product of
comparisons between the sets of data derived for the study, so results are highly influenced by
sampling strategies. (Edwards, 2003)
• The geographic distribution of the populations sampled: because human genetic diversity is
marked by isolation by distance, populations from geographically distant regions will form much
more discrete clusters than those from geographically close regions. (Kittles and Weiss, 2003)
• The number of genes used: the more genes used in a study, the greater the resolution produced and
therefore the greater the number of clusters identified. (Tang, 2005)

[Figure: a landmass with variation distributed along one dimension (west-east). Top: under a small island
model there are two "populations" with a narrow region of hybridisation where migration occurs; this
pattern is clustered. Bottom: under isolation by distance, all variation is gradual over the extent of the
landmass; this pattern is clinal.]

Rosenberg et al.'s (2002) paper "Genetic Structure of Human Populations" was taken up in particular by
Nicholas Wade in the New York Times as evidence that genetic studies supported the "popular
conception" of race.[58] However, Rosenberg's work used samples from the Human Genome
Diversity Project (HGDP), a project that has collected samples from individuals from 52 ethnic groups
from various locations around the world. The HGDP has itself been criticised for collecting samples on an
"ethnic group" basis, on the grounds that ethnic groups represent constructed categories rather than
categories which are solely natural or biological. Scientists such as the molecular anthropologist Jonathan
Marks, the geneticists David Serre, Svante Pääbo, Mary-Claire King and medical doctor Arno G.
Motulsky argue that this is a biased sampling strategy, and that human samples should have been
collected geographically, i.e. that samples should be collected from points on a grid overlaying a map of
the world, and maintain that human genetic variation is not partitioned into discrete racial groups
(clustered), but is spread in a clinal manner (isolation by distance) that is masked by this biased sampling
strategy.[59][60][61] The existence of allelic clines and the observation that the bulk of human variation is
continuously distributed, has led scientists such as Kittles and Weiss (2003) to conclude that any
categorization schema attempting to partition that variation meaningfully will necessarily create artificial
truncations.[62] It is for this reason, Reanne Frank argues, that attempts to allocate individuals into ancestry
groupings based on genetic information have yielded varying results that are highly dependent on
methodological design.[63]

In a follow-up paper, "Clines, Clusters, and the Effect of Study Design on the Inference of Human
Population Structure" (2005), Rosenberg et al. maintain that their clustering analysis is robust. But they
also agree that there is evidence for clinality (isolation by distance). Thirdly, they distance themselves
from the language of race, and do not use the term "race" in any of their publications: "The arguments
about the existence or nonexistence of 'biological races' in the absence of a specific context are largely
orthogonal to the question of scientific utility, and they should not obscure the fact that, ultimately, the
primary goals for studies of genetic variation in humans are to make inferences about human evolutionary
history, human biology, and the genetic causes of disease."[64]

One of the underlying questions regarding the distribution of human genetic diversity is related to the
degree to which genes are shared between the observed clusters, and therefore the extent to which
membership of a cluster can accurately predict an individual's genetic makeup or susceptibility to disease.
This is at the core of Lewontin's argument. Lewontin used Sewall Wright's fixation index (FST) to
estimate that on average 85% of human genetic diversity is contained within groups. Are members of the
same cluster always more genetically similar to each other than they are to members of a different cluster?
Lewontin's argument is that within-group differences are almost as large as between-group differences,
and therefore two individuals from different groups may often be more similar to each other than either is
to members of their own group. Can clusters correct for this finding? In 2004 Bamshad et al. used the data
from Rosenberg et al. (2002) to investigate the extent of genetic differences between individuals within
continental groups relative to genetic differences between individuals between continental groups. They
found that though these individuals could be classified very accurately to continental clusters, there was a
significant degree of genetic overlap on the individual level.[65]

Percentage similarity between two individuals from different clusters when 377 microsatellite
markers are considered.[65]

                       Africans   Europeans   Asians
Europeans              36.5       —           —
Asians                 35.5       38.3        —
Indigenous Americans   26.1       33.4        35

This question was addressed in more detail in a 2007 paper by Witherspoon et al. entitled "Genetic
Similarities Within and Between Human Populations",[66] in which they make the following observations:

• Genetic differences between human continental populations account for only a small fraction of
the differences between people.
• Multilocus clusters provide accurate and reproducible results for dividing people into the correct
populations.
• Two individuals from different populations are often more genetically similar to each other than they
are to individuals from their own population.

The paper states that "All three of the claims listed above appear in disputes over the significance of
human population variation and 'race'" and asks "If multilocus statistics are so powerful, then how are we
to understand this [last] finding?"

Witherspoon et al. (2007) attempt to reconcile these apparently contradictory findings, and show that the
observed clustering of human populations into relatively discrete groups is a product of using what they
call "population trait values". This means that each individual is compared to the "typical" trait for several
populations, and assigned to a population based on the individual's overall similarity to one of the
populations as a whole. They therefore claim that clustering analyses cannot necessarily be used to make
inferences regarding the similarity or dissimilarity of individuals between or within clusters, but only for
similarities or dissimilarities of individuals to the "trait values" of any given cluster. The paper measures
the rate of misclassification using these "trait values" and calls this the "population trait value
misclassification rate" (CT). The paper investigates the similarities between individuals by use of what
they term the "dissimilarity fraction" (ω): "the probability that a pair of individuals randomly chosen from
different populations is genetically more similar than an independent pair chosen from any single
population." Witherspoon et al. show that two individuals can be more genetically similar to each other
than to the typical genetic type of their own respective populations, and yet be correctly assigned to their
respective populations. An important observation is that the likelihood that two individuals from different
populations will be more similar to each other genetically than two individuals from the same population
depends on several criteria, most importantly the number of genes studied and the distinctiveness of the
populations under investigation. For example, when 10 loci are used to compare three geographically
disparate populations (sub-Saharan African, East Asian and European), individuals are more similar
to members of a different group about 30% of the time. If the number of loci is increased to 100,
individuals are more genetically similar to members of a different population ~20% of the time, and even
using 1,000 loci, ω ≈ 10%. They do state that for these very geographically separated populations it is
possible to reduce this statistic to 0% when tens of thousands of loci are used, meaning that individuals
would then always be more similar to members of their own population. But the paper notes that humans are not
distributed into geographically separated populations; omitting intermediate regions may therefore produce a false
impression of distinctiveness in human diversity. The paper supports the observation that "highly accurate
classification of individuals from continuously sampled (and therefore closely related) populations may be
impossible". Furthermore, the results indicate that clustering analyses and self-reported ethnicity may not
be good estimates of genetic susceptibility to disease risk. Witherspoon et al. conclude that:

[The fact that,] given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with
the observation that most human genetic variation is found within populations, not between them. It is also
compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are
used, individuals are frequently more similar to members of other populations than to members of their own
population.
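
The behaviour of the dissimilarity fraction ω as the number of loci grows can be illustrated with a small simulation. The following is a minimal sketch, not the paper's actual methodology: the population sizes, the allele-frequency divergence between the two simulated groups, the simple allele-count distance, and the Monte Carlo pair counts are all illustrative assumptions.

```python
# Minimal Monte Carlo sketch of the "dissimilarity fraction" (omega): the
# probability that a pair of individuals drawn from different populations is
# genetically more similar than a pair drawn from a single population.
# All parameters below are illustrative, not values from Witherspoon et al.
import numpy as np

rng = np.random.default_rng(0)

def simulate_population(freqs, n):
    # Diploid genotypes: 0, 1 or 2 copies of the reference allele at each locus.
    return rng.binomial(2, freqs, size=(n, len(freqs)))

def omega(pop_a, pop_b, n_pairs=20000):
    """Estimate omega by sampling random between- and within-population pairs."""
    hits = 0
    for _ in range(n_pairs):
        i, j = rng.integers(len(pop_a)), rng.integers(len(pop_b))
        u, v = rng.choice(len(pop_a), size=2, replace=False)
        between = np.abs(pop_a[i] - pop_b[j]).sum()  # simple allele-count distance
        within = np.abs(pop_a[u] - pop_a[v]).sum()
        hits += between < within                     # ties count as "not more similar"
    return hits / n_pairs

for n_loci in (10, 100, 1000):
    base = rng.uniform(0.1, 0.9, n_loci)    # shared ancestral frequencies
    shift = rng.normal(0, 0.1, n_loci)      # modest divergence between the groups
    pop1 = simulate_population(base, 100)
    pop2 = simulate_population(np.clip(base + shift, 0.01, 0.99), 100)
    print(n_loci, "loci: omega ~", round(omega(pop1, pop2), 3))
```

As in the paper, ω falls as loci are added, because between-population distances concentrate above within-population distances, yet it approaches zero only for well-separated reference populations.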

[edit] Summary of different biological definitions of race

Biological definitions of race (Long & Kittles, 2003):

• Essentialist — Hooton (1926): "A great division of mankind, characterized as a group by the sharing of a
certain combination of features, which have been derived from their common descent, and constitute a
vague physical background, usually more or less obscured by individual variations, and realized best in a
composite picture."

• Taxonomic — Mayr (1969): "An aggregate of phenotypically similar populations of a species, inhabiting a
geographic subdivision of the range of a species, and differing taxonomically from other populations of
the species."

• Population — Dobzhansky (1970): "Races are genetically distinct Mendelian populations. They are neither
individuals nor particular genotypes, they consist of individuals who differ genetically among
themselves."

• Lineage — Templeton (1998): "A subspecies (race) is a distinct evolutionary lineage within a species. This
definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that
have persisted for long periods of time; that is, the subspecies must have historical continuity in addition
to current genetic differentiation."

[edit] Current views across disciplines

One result of debates over the meaning and validity of the concept "race" is that the current literature
across different disciplines regarding human variation lacks consensus, though some individual fields, such as
biology, show strong internal agreement. Some studies use the word race in its early essentialist taxonomic
sense. Many others still use the term race, but use it to mean a population, clade, or haplogroup. Others
eschew the concept of race altogether, and use the concept of population as a less problematic unit of
analysis.

Since 1932, college textbooks introducing physical anthropology have increasingly come to reject
race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984,
thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race.
According to one academic journal entry, whereas 78 percent of the articles in the 1931 Journal of Physical
Anthropology employed terms reflecting a bio-race paradigm, only 36
percent did so in 1965, and just 28 percent did in 1996.[67] The American Anthropological Association,
drawing on biological research, currently holds that "The concept of race is a social and cultural
construction... . Race simply cannot be tested or proven scientifically," and that, "It is clear that human
populations are not unambiguous, clearly demarcated, biologically distinct groups. The concept of 'race'
has no validity ... in the human species".[8]

In an ongoing debate, some geneticists argue that race is neither a meaningful concept nor a useful
heuristic device,[68] and even that genetic differences among groups are biologically meaningless,[69] on the
grounds that more genetic variation exists within such races than among them, and that racial traits
overlap without discrete boundaries.[70] Other geneticists, in contrast, argue that categories of self-
identified race/ethnicity or biogeographic ancestry are both valid and useful,[71] that these categories
correspond with clusters inferred from multilocus genetic data,[72] and that this correspondence implies
that genetic factors might contribute to unexplained phenotypic variation between groups.[73]

In February, 2001, the editors of the medical journal Archives of Pediatrics and Adolescent Medicine
asked authors to no longer use "race" as an explanatory variable and not to use obsolescent terms. Some
other peer-reviewed journals, such as the New England Journal of Medicine and the American Journal of
Public Health, have made similar endeavours.[74] Furthermore, the National Institutes of Health recently
issued a program announcement for grant applications through February 1, 2006, specifically seeking
researchers who can investigate and publicize among primary care physicians the detrimental effects on
the nation's health of the practice of medical racial profiling using such terms. The program
announcement quoted the editors of one journal as saying that, "analysis by race and ethnicity has become
an analytical knee-jerk reflex."[75]

A survey taken in 1985 (Lieberman et al. 1992) asked 1,200 American anthropologists whether they
disagreed with the following proposition: "There are biological races in the species Homo sapiens." The
percentages disagreeing were:

• physical anthropologists 41%
• cultural anthropologists 53%[76]

The figure for physical anthropologists at PhD-granting departments was slightly higher, at 42%
disagreeing (with 50% agreeing). This survey, however, did not specify any particular definition of race
(although it did clearly specify biological race within the species Homo sapiens); it is difficult to say
whether those who supported the statement thought of race in taxonomic or population terms.

The same survey, taken in 1999,[77] showed markedly higher rates of disagreement among anthropologists:

• physical anthropologists 69%
• cultural anthropologists 80%

In Poland the race concept was rejected by only 25 percent of anthropologists in 2001, although: "Unlike
the U.S. anthropologists, Polish anthropologists tend to regard race as a term without taxonomic value,
often as a substitute for population."[78]

In the face of these issues, some evolutionary scientists have simply abandoned the concept of race in
favor of "population." What distinguishes population from previous groupings of humans by race is that it
refers to a breeding population (essential to genetic calculations) and not to a biological taxon. Other
evolutionary scientists have abandoned the concept of race in favor of cline (meaning, how the frequency
of a trait changes along a geographic gradient). (The concepts of population and cline are not, however,
mutually exclusive and both are used by many evolutionary scientists.)

According to Jonathan Marks,

By the 1970s, it had become clear that (1) most human differences were cultural; (2) what was not
cultural was principally polymorphic - that is to say, found in diverse groups of people at different
frequencies; (3) what was not cultural or polymorphic was principally clinal - that is to say,
gradually variable over geography; and (4) what was left - the component of human diversity that
was not cultural, polymorphic, or clinal - was very small.
A consensus consequently developed among anthropologists and geneticists that race as the
previous generation had known it - as largely discrete, geographically distinct, gene pools - did not
exist.[79]

In the face of this rejection of race by evolutionary scientists, many social scientists have replaced the
word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared
culture, ancestry and history. Alongside empirical and conceptual problems with "race," following the
Second World War, evolutionary and social scientists were acutely aware of how beliefs about race had
been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum
in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial
movements worldwide. They thus came to understand that these justifications, even when expressed in
language that sought to appear objective, were social constructs.[7]

[edit] Races as social constructions

Main articles: Social interpretations of race and Racialism

Even as the idea of "race" was becoming a powerful organizing principle in many societies, the
shortcomings of the concept were apparent. In Europe, the gradual transition in appearances from one
group to adjacent groups emphasized that "one variety of mankind does so sensibly pass into the other,
that you cannot mark out the limits between them," as Blumenbach observed in his writings on human
variation (Marks 1995, p. 54). As anthropologists and other evolutionary scientists have shifted away
from the language of race to the term population to talk about genetic differences, historians,
anthropologists and social scientists have re-conceptualized the term "race" as a cultural category or social
construct, in other words, as a particular way that some people have of talking about themselves and
others. As Stephan Palmie has recently summarized, race "is not a thing but a social relation";[9] or, in the
words of Katya Gibel Mevorach, "a metonym," "a human invention whose criteria for differentiation are
neither universal nor fixed but have always been used to manage difference."[10] As such it cannot be a
useful analytical concept; rather, the use of the term "race" itself must be analyzed. Moreover, they argue
that biology will not explain why or how people use the idea of race: history and social relationships will.
For example, the fact that to some in the United States, categories such as "Hispanic or Latino" are
viewed as constituting a race (instead of an ethnic group) reflects this new idea of "race as a social
construct". However, it may be in the interest of dominant groups to cluster Spanish speakers into a
single, isolated population, rather than recognizing the diverse racial identifications found in their countries of origin.

[edit] In the United States

Main article: Race in the United States


see also Admixture in the United States

The immigrants to the Americas came ultimately from every region of Europe, Africa, and Asia.
Throughout the Americas the immigrants mixed among themselves and with the indigenous
inhabitants. In the United States, for example, most people who self-identify as African American have
some European ancestors — in one analysis of genetic markers that have differing frequencies between
continents, European ancestry ranged from an estimated 7% for a sample of Jamaicans to ∼23% for a
sample of African Americans from New Orleans (Parra et al. 1998). Similarly, many people who identify
as European American have some African or Native American ancestors, either through openly interracial
marriages or through the gradual inclusion of people with mixed ancestry into the majority population. In
a survey of college students who self-identified as white in a northeastern U.S. university, ∼30% were
estimated to have less than 90% European ancestry.[80]

In the United States since its early history, Native Americans, African Americans and European
Americans were classified as belonging to different races. For nearly three centuries, the criteria for
membership in these groups were similar, comprising a person's appearance, his fraction of known non-
White ancestry, and his social circle. But the criteria for membership in these races diverged in the late
19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with
"one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this
notion of invisible blackness was made statutory in many states and widely adopted nationwide. In
contrast, Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood
quantum), due in large part to American slavery ethics. Finally, for the past century or so, to be White one
had to have perceived "pure" White ancestry.

Efforts to sort the increasingly mixed population of the United States into discrete categories generated
many difficulties (Spickard 1992). By the standards used in past censuses, many millions of children born
in the United States have belonged to a different race than one of their biological parents. Efforts to
track mixing between groups led to a proliferation of categories (such as "mulatto" and "octoroon") and
"blood quantum" distinctions that became increasingly untethered from self-reported ancestry. A person's
racial identity can change over time, and self-ascribed race can differ from assigned race (Kressin et al.
2003). Until the 2000 census, Latinos were required to identify with a single race despite the long history
of mixing in Latin America; partly as a result of the confusion generated by the distinction, 32.9% (U.S.
census records) of Latino respondents in the 2000 census ignored the specified racial categories and
checked "some other race". (Mays et al. 2003 claim a figure of 42%)

The difference between how Native American and Black identities are defined today (blood quantum
versus one-drop) has demanded explanation. According to anthropologists such as Gerald Sider, the goal
of such racial designations was to concentrate power, wealth, privilege and land in the hands of Whites in
a society of White hegemony and privilege (Sider 1996; see also Fields 1990). The differences have little
to do with biology and far more to do with the history of racism and specific forms of White supremacy
(the social, geopolitical and economic agendas of dominant Whites vis-à-vis subordinate Blacks and
Native Americans) especially the different roles Blacks and Amerindians occupied in White-dominated
19th century America. The theory suggests that the blood quantum definition of Native American identity
enabled Whites to acquire Amerindian lands, while the one-drop rule of Black identity enabled Whites to
preserve their agricultural labor force. The contrast presumably emerged because, as peoples transported
far from their lands and kinship ties on another continent, Blacks were relatively easy to control as laborers, and
were thus reduced to valuable commodities as agricultural workers. In contrast, Amerindian labor was more
difficult to control; moreover, Amerindians occupied large territories that became valuable as agricultural
lands, especially with the invention of new technologies such as railroads; thus, the blood quantum
definition enhanced White acquisition of Amerindian lands in a doctrine of Manifest Destiny that
subjected them to marginalization and multiple episodic localized campaigns of extermination.

The political economy of race had different consequences for the descendants of aboriginal Americans
and African slaves. The 19th century blood quantum rule meant that it was relatively easier for a person
of mixed Euro-Amerindian ancestry to be accepted as White. The offspring of only a few generations of
intermarriage between Amerindians and Whites likely would not have been considered Amerindian at all
(at least not in a legal sense). Amerindians could have treaty rights to land, but because an individual with
one Amerindian great-grandparent was no longer classified as Amerindian, such individuals lost any legal claim to
Amerindian land. According to the theory, this enabled Whites to acquire Amerindian lands. The irony is
that the same individuals who could be denied legal standing because they were "too White" to claim
property rights, might still be Amerindian enough to be considered as "breeds", stigmatized for their
Native American ancestry.

The 20th century one-drop rule, on the other hand, made it relatively difficult for anyone of known Black
ancestry to be accepted as White. The child of a Black sharecropper and a White person was considered
Black. And, significant in terms of the economics of sharecropping, such a person also would likely be a
sharecropper as well, thus adding to the employer's labor force.

In short, this theory suggests that in a 20th century economy that benefited from sharecropping, it was
useful to have as many Blacks as possible. Conversely, in a 19th century nation bent on westward
expansion, it was advantageous to diminish the numbers of those who could claim title to Amerindian
lands by simply defining them out of existence.

It must be mentioned, however, that although some scholars of the Jim Crow period agree that the 20th
century notion of invisible Blackness shifted the color line in the direction of paleness, thereby swelling
the labor force in response to Southern Blacks' great migration northwards, others (Joel Williamson, C.
Vann Woodward, George M. Fredrickson, Stetson Kennedy) see the one-drop rule as a simple
consequence of the need to define Whiteness as being pure, thus justifying White-on-Black oppression. In
any event, over the centuries when Whites wielded power over both Blacks and Amerindians and widely
believed in their inherent superiority over people of color, it is no coincidence that the hardest racial group
in which to prove membership was the White one.

In the United States, social and legal conventions developed over time that forced individuals of mixed
ancestry into simplified racial categories (Gossett 1997). An example is the "one-drop rule" implemented
in some state laws that treated anyone with a single known African American ancestor as black (Davis
2001). The decennial censuses conducted since 1790 in the United States also created an incentive to
establish racial categories and fit people into those categories (Nobles 2000). In other countries in the
Americas where mixing among groups was overtly more extensive, social categories have tended to be
more numerous and fluid, with people moving into or out of categories on the basis of a combination of
socioeconomic status, social class, ancestry, and appearance (Mörner 1967).
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers
from American Spanish-speaking countries to the United States. It includes people who had been
considered racially distinct (Black, White, Amerindian, Asian, and mixed groups) in their home countries.
Today, the word "Latino" is often used as a synonym for "Hispanic". In contrast to "Latino"´or "Hispanic"
"Anglo" is now used to refer to non-Hispanic White Americans or non-Hispanic European Americans,
most of whom speak the English language but are not necessarily of English descent.

[edit] In Brazil

Main article: Race in Brazil

Compared to 19th century United States, 20th century Brazil was characterized by a perceived relative
absence of sharply defined racial groups. According to anthropologist Marvin Harris (1989), this pattern
reflects a different history and different social relations. Basically, race in Brazil was "biologized," but in
a way that recognized the difference between ancestry (which determines genotype) and phenotypic
differences. There, racial identity was not governed by such a rigid descent rule as in the United States. A
Brazilian child was never automatically identified with the racial type of one or both parents, nor were
there only a very limited number of categories to choose from. Over a dozen racial categories would be
recognized in conformity with all the possible combinations of hair color, hair texture, eye color, and skin
color. These types grade into each other like the colors of the spectrum, and no one category stands
significantly isolated from the rest. That is, race referred preferentially to appearance, not heredity. The
complexity of racial classifications in Brazil reflects the extent of miscegenation in Brazilian
society, a society that remains highly, but not strictly, stratified along color lines. Hence the
Brazilian narrative of a perfect "post-racist" country must be met with caution, as sociologist Gilberto
Freyre demonstrated in 1933 in Casa Grande e Senzala.

[edit] Marketing of race: genetic lineages as social lineages

New research in molecular genetics, and the marketing of genetic identities through the analysis of one's
Y chromosome, mtDNA or autosomal DNA, has reignited the debate surrounding race. Most of the
controversy surrounds the question of how to interpret these new data, and whether conclusions based on
existing data are sound. Although the vast majority of researchers endorse the view that continental
groups do not constitute different subspecies, and molecular geneticists generally reject the identification
of mtDNA and Y chromosomal lineages or allele clusters with "races", some anthropologists have
suggested that the marketing of genetic analysis to the general public in the form of "Personalized Genetic
Histories" (PGH) is leading to a new social construction of race. See above sections Molecular lineages,
Y chromosomes and mitochondrial DNA and How much are genes shared? Clustering analyses and what
they tell us.

Typically, a consumer of a commercial PGH service sends in a sample of DNA, which is analyzed by
molecular biologists, and receives a report such as the following sample:

"African DNA Ancestry Report"

The subject's likely haplogroup L2 is associated with the so-called Bantu expansion from West and Central sub-
Saharan Africa east and south, dated 2,000-4,000 years ago .... Between the 15th and 19th centuries C.E, the
Atlantic slave trade resulted in the forced movement of approximately 13 million people from Africa, mainly to the
Americas. Only approximately 11 million survived the passage and many more died in the early years of captivity.
Many of these slaves were traded to the West African Cape Verde ports of embarkation through Portuguese and
Arab middlemen and came from as far south as Angola. Among the African tribal groups, all Bantu-speaking, in
which L2 is common are: Hausa, Kanuri, Fulfe, Songhai, Malunjin (Angola), Yoruba, Senegalese, Serer and Wolof.

Although no single sentence in such a report is technically wrong, anthropologists and others have argued
that, through the combination of these sentences, the report tells a story that connects a haplotype
with a language and a group of tribes. This story is generally rejected by research scientists for the simple
reason that an individual receives his or her Y chromosome or mtDNA from only one ancestor in every
generation; consequently, with every generation one goes back in time, the percentage of one's ancestors
it represents halves; if one goes back hundreds (let alone thousands) of years, it represents only a tiny
fragment of one's ancestry. As Mark Shriver and Rick Kittles recently remarked,

For many customers of lineage-based tests, there is a lack of understanding that their maternal and paternal lineages
do not necessarily represent their entire genetic make-up. For example, an individual might have more than 85%
Western European 'genomic' ancestry but still have a West African mtDNA or NRY lineage.

Nevertheless, they acknowledge, such stories are increasingly appealing to the general public.[81] Thus, in
his book Blood of the Isles (published in the US and Canada as Saxons, Vikings and Celts: The Genetic
Roots of Britain and Ireland), Bryan Sykes discusses how people who have been mtDNA tested
by his commercial laboratory and been found to belong to the same haplogroup have parties together
because they see this as some sort of "bond", even though these people may not actually share very much
ancestry.

Through these kinds of reports, new advances in molecular genetics are being used to create or confirm
stories people have about their social identities. Although these identities are not racial in the biological sense, they are
in the cultural sense, in that they link biological and cultural identities. Nadia Abu el-Haj has argued that
the significance of genetic lineages in popular conceptions of race owes to the perception that, while
genetic lineages, like older notions of race, suggest some idea of biological relatedness, unlike older
notions of race they are not directly connected to claims about human behaviour or character. Abu el-Haj
has thus argued that "postgenomics does seem to be giving race a new lease on life." Nevertheless, Abu
el-Haj argues that in order to understand what it means to think of race in terms of genetic lineages or
clusters, one must understand that

Race science was never just about classification. It presupposed a distinctive relationship between "nature" and
"culture," understanding the differences in the former to ground and to generate the different kinds of persons
("natural kinds") and the distinctive stages of cultures and civilizations that inhabit the world.

Abu el-Haj argues that genomics and the mapping of lineages and clusters liberates "the new racial
science from the older one by disentangling ancestry from culture and capacity." As an example, she
refers to recent work by Hammer et al., which aimed to test the claim that present-day Jews are more
closely related to one another than to neighbouring non-Jewish populations. Hammer et al. found that the
degree of genetic similarity among Jews shifted depending on the locus investigated, and suggested that
this was the result of natural selection acting on particular loci. They therefore focused on the non-
recombining Y chromosome to "circumvent some of the complications associated with selection".[82] As
another example she points to work by Thomas et al., who sought to distinguish between the Y
chromosomes of Jewish priests (in Judaism, membership in the priesthood is passed on through the
father's line) and the Y chromosomes of non-Jews.[83] Abu el-Haj concluded that this new "race science"
calls attention to the importance of "ancestry" (narrowly defined, as it does not include all ancestors) in
some religions and in popular culture, and peoples' desire to use science to confirm their claims about
ancestry; this "race science," she argues is fundamentally different from older notions of race that were
used to explain differences in human behaviour or social status:

As neutral markers, junk DNA cannot generate cultural, behavioural, or, for that matter, truly biological differences
between groups .... mtDNA and Y-chromosome markers relied on in such work are not "traits" or "qualities" in the
old racial sense. They do not render some populations more prone to violence, more likely to suffer psychiatric
disorders, or for that matter, incapable of being fully integrated - because of their lower evolutionary development -
into a European cultural world. Instead, they are "marks," signs of religious beliefs and practices .... it is via
biological noncoding genetic evidence that one can demonstrate that history itself is shared, that historical traditions
are (or might well be) true."[84]

On the other hand, there are tests that do not rely on molecular lineages, but rather on correlations
between allele frequencies; sets of correlated allele frequencies are often called clusters. Clustering
analyses are less powerful than lineage analyses because they cannot tell a historical story; they can only
estimate the proportion of a person's ancestry from any given large geographical region. These tests
use informative alleles called ancestry-informative markers (AIMs), which, although shared across all
human populations, vary a great deal in frequency between groups of people living in geographically
distant parts of the world. The tests use contemporary people sampled from certain parts of the world as
references to determine the likely proportion of ancestry for any given individual. In a recent Public
Broadcasting Service (PBS) programme on the subject of genetic ancestry testing, the academic Henry
Louis Gates "wasn’t thrilled with the results (it turns out that 50 percent of his ancestors are likely
European)".[63] Charles Rotimi, of Howard University's National Human Genome Center, is one of many
who have highlighted the methodological flaws in such research - that "the nature or appearance of
genetic clustering (grouping) of people is a function of how populations are sampled, of how criteria for
boundaries between clusters are set, and of the level of resolution used" all bias the results - and
concluded that people should be very cautious about relating genetic lineages or clusters to their own
sense of identity.[85] (see also above section How much are genes shared? Clustering analyses and what
they tell us)
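
How such an ancestry-proportion estimate can work is easiest to see in a toy model. The sketch below is illustrative only and is not any testing company's algorithm: it assumes just two reference populations with perfectly known allele frequencies at unlinked loci (all numbers are hypothetical), simulates one admixed individual, and recovers the mixing proportion by maximizing a binomial likelihood over a grid.

```python
# Toy maximum-likelihood admixture estimate from ancestry-informative markers.
# Assumes two reference populations with known (hypothetical) allele frequencies
# and unlinked loci; real panels and methods are considerably more involved.
import numpy as np

rng = np.random.default_rng(1)
n_loci = 200
p1 = rng.uniform(0.05, 0.95, n_loci)  # reference allele frequencies, population 1
p2 = rng.uniform(0.05, 0.95, n_loci)  # reference allele frequencies, population 2

true_m = 0.8                          # simulate an individual with 80/20 ancestry
genotypes = rng.binomial(2, true_m * p1 + (1 - true_m) * p2)

def log_likelihood(m):
    # The individual's expected allele frequency at each locus is the mixture
    # m*p1 + (1-m)*p2, and each diploid genotype is Binomial(2, p). The binomial
    # coefficient does not depend on m, so it is dropped from the log-likelihood.
    p = m * p1 + (1 - m) * p2
    return np.sum(genotypes * np.log(p) + (2 - genotypes) * np.log(1 - p))

grid = np.linspace(0.001, 0.999, 999)
estimate = grid[np.argmax([log_likelihood(m) for m in grid])]
print(f"estimated ancestry proportion: {estimate:.2f}")  # close to 0.80
```

Even in this idealized setting, the estimate depends entirely on which reference populations are chosen, which is the methodological point Rotimi raises about sampling and cluster boundaries.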

Thus, in analyses that assign individuals to groups it becomes less apparent that self-described racial
groups are reliable indicators of ancestry. One cause of the reduced power of the assignment of
individuals to groups is admixture. For example, self-described African Americans tend to have a mix of
West African and European ancestry. Shriver et al. (2003)[80] found that on average African Americans
have ~80% African ancestry. Also, in a survey of college students who self-identified as “white” in a
northeastern U.S. university, ~30% were estimated to have less than 90% European ancestry.[86]

Stephan Palmie has responded to Abu el-Haj's claim that genetic lineages make possible a new,
politically, economically, and socially benign notion of race and racial difference by suggesting that
efforts to link genetic history and personal identity will inevitably "ground present social arrangements in
a time-hallowed past," that is, use biology to explain cultural differences and social inequalities.[87]

[edit] Political and practical uses


[edit] Racism

Main articles: Racism and Racial segregation

[edit] Race and intelligence

Main article: Race and intelligence

Researchers have reported differences in the average IQ test scores of various ethnic groups. The
interpretation, causes, accuracy and reliability of these differences are highly controversial. Some
researchers, such as Arthur Jensen, Richard Herrnstein, and Richard Lynn have argued that such
differences are at least partially genetic. Others, for example Thomas Sowell, argue that the differences
largely owe to social and economic inequalities. Still others, such as Stephen Jay Gould and Richard
Lewontin, have argued that categories such as "race" and "intelligence" are cultural constructs that render
any attempt to explain such differences (whether genetically or sociologically) meaningless.

The Flynn effect is the rise of average Intelligence Quotient (IQ) test scores over time, an effect seen in most parts
of the world, although at varying rates. Scholars therefore believe that the rapid increases in average IQ seen
in many places are much too fast to be the result of changes in brain physiology and are more likely the
result of environmental changes. That environment has so significant an effect on IQ undermines the
case for using IQ data as a source of genetic information.[88][89]

[edit] In biomedicine

Main article: Race in biomedicine

There is an active debate among biomedical researchers about the meaning and importance of race in their
research. The primary impetus for considering race in biomedical research is the possibility of improving
the prevention and treatment of diseases by predicting hard-to-ascertain factors on the basis of more easily
ascertained characteristics. Some have argued that, in the absence of cheap and widespread genetic tests,
racial identification is the best way to predict risk for certain diseases, such as cystic fibrosis, lactose
intolerance, Tay-Sachs disease and sickle cell anemia, which are genetically linked and more prevalent in
some populations than others. The best-known examples of genetically determined disorders that
vary in incidence among populations are sickle cell disease, thalassaemia, and Tay-Sachs disease.

[Maps: distribution of the sickle cell trait; distribution of malaria]

There has been criticism of associating disorders with race. For example, in the United States sickle cell is
typically associated with black people, but this trait is also found in people of Mediterranean, Middle
Eastern or Indian ancestry.[90] The sickle cell trait offers some resistance to malaria. In regions where
malaria is present sickle cell has been positively selected and consequently the proportion of people with
it is greater. Therefore, it has been argued that sickle cell should not be associated with a particular race,
but rather with having ancestors who lived in a malaria-prone region. Africans living in areas where there
is no malaria, such as the East African highlands, have a prevalence of sickle cell as low as that found in
parts of Northern Europe.

Another example of the use of race in medicine is the recent U.S. FDA approval of BiDil, a medication
for congestive heart failure targeted at black people in the United States.[91] Several researchers have
questioned the scientific basis for arguing the merits of a medication based on race, however. As Stephan
Palmie has recently pointed out, black Americans were disproportionately affected by Hurricane Katrina,
but for social and not climatological reasons; similarly, certain diseases may disproportionately affect
different races, but not for biological reasons. Several researchers have suggested that BiDil was re-
designated as a medicine for a race-specific illness because its manufacturer, Nitromed, needed to propose
a new use for an existing medication in order to justify an extension of its patent and thus monopoly on
the medication,[92] not for pharmacological reasons.

Gene flow and intermixture also have an effect on predicting a relationship between race and "race linked
disorders". Multiple sclerosis is typically associated with people of European descent and is of low risk to
people of African descent. However, due to gene flow between the populations, African Americans have
elevated levels of MS relative to Africans.[93] Notable African Americans affected by MS include Richard
Pryor and Montel Williams. As populations continue to mix, the role of socially constructed races may
diminish in identifying diseases.

[edit] In law enforcement

In the U.S., the FBI identifies fugitives by categories it defines as sex, physical features, occupation,
nationality, and race. [Image caption: from left to right, the FBI assigns the pictured individuals to the following races:
White, Black, White (Hispanic), Asian. Top row males, bottom row females.][94]

In an attempt to provide general descriptions that may facilitate the job of law enforcement officers
seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general
appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of
individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it
is generally more important to arrive at a description that will readily suggest the general appearance of an
individual than to make a scientifically valid categorization by DNA or other such means. Thus in
addition to assigning a wanted individual to a racial category, such a description will include: height,
weight, eye color, scars and other distinguishing characteristics, etc. Scotland Yard uses a classification
based on the ethnic background of British society: W1 (White-British), W2 (White-Irish), W9 (Any other
white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and
Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-
Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any
other black background); O1 (Chinese), O9 (Any other). Some of the characteristics that constitute these
groupings are biological and some are learned (cultural, linguistic, etc.) traits that are easy to notice.

In many countries, such as France, the state is legally banned from maintaining data based on race, which
often makes the police issue wanted notices to the public that include labels like "dark skin complexion",
etc. One factor encouraging such circuitous wording is the controversy over the actual relationship
between crimes, their assigned punishments, and the division of people into so-called "races," which leads
officials to try to deemphasize the alleged race of suspects. In the United States, the practice of racial
profiling has been ruled both unconstitutional and a violation of civil rights. There is active debate
regarding the cause of the marked correlation between recorded crimes, the punishments meted out, and
the country's "racially divided" people. Many consider de facto racial profiling an example of institutional
racism in law enforcement. The history of misusing racial categories to disadvantage one or more groups,
or to protect and advantage another, has a clear impact on the debate over the government's legitimate use
of known phenotypical or genotypical characteristics tied to the presumed race of both victims and
perpetrators.

More recent work in racial taxonomy based on DNA cluster analysis (see Lewontin's Fallacy) has led law
enforcement to narrow their search for individuals based on a range of phenotypical characteristics found
consistent with DNA evidence.[95]

While controversial, DNA analysis has been successful in helping police identify both victims and
perpetrators by giving an indication of what phenotypical characteristics to look for and what community
the individual may have lived in. For example, in one case phenotypical characteristics suggested that the
friends and family of an unidentified victim would be found among the Asian community, but the DNA
evidence directed officials' attention to missing Native Americans, among whom her identity was eventually
confirmed.[96] In an attempt to avoid potentially misleading associations suggested by the word "race," this
classification is called "biogeographical ancestry" (BGA),[97] but the terms for the BGA categories are
similar to those used for race. The difference is that ancestry-informative DNA markers identify
continent-of-ancestry admixture, not ethnic self-identity, and provide a wide range of phenotypical
characteristics such that some people in a biogeographical category will not match the stereotypical image
of an individual belonging to the corresponding race. To facilitate the work of officials trying to find
individuals based on the evidence of their DNA traces, firms providing the genetic analyses also provide
photographs showing a full range of phenotypical characteristics of people in each biogeographical group.
Of special interest to officials trying to find individuals on the basis of DNA samples that indicate a
diverse genetic background is what range of phenotypical characteristics people with that general mixture
of genotypical characteristics may display.

Similarly, forensic anthropologists draw on highly heritable morphological features of human remains
(e.g. cranial measurements) in order to aid in the identification of the body, including in terms of race. In
a recent article, anthropologist Norman Sauer asked, "if races don't exist, why are forensic anthropologists
so good at identifying them?"[98] Sauer observed that the use of 19th century racial categories is
widespread among forensic anthropologists:

• "In many cases there is little doubt that an individual belonged to the Negro, Caucasian, or
Mongoloid racial stock."[99]
• "Thus the forensic anthropologist uses the term race in the very broad sense to differentiate what
are commonly known as white, black and yellow racial stocks."[100]
• "In estimating race forensically, we prefer to determine if the skeleton is Negroid, or Non-
Negroid. If findings favor Non-Negroid, then further study is necessary to rule out Mongoloid."[101]

According to Sauer, "The assessment of these categories is based upon copious amounts of research on
the relationship between biological characteristics of the living and their skeletons." Nevertheless, he
agrees with other anthropologists that race is not a valid biological taxonomic category, and that races are
socially constructed. He argued there is nevertheless a strong relationship between the phenotypic features
forensic anthropologists base their identifications on, and popular racial categories. Thus, he argued,
forensic anthropologists apply a racial label to human remains because their analysis of physical
morphology enables them to predict that when the person was alive, that particular racial label would
have been applied to them.[102]

[edit] See also


• Black Nationalism
• Breed
• Clan
• Cultural difference
• Ethnic nationalism
• Ethnicity
• Genetic averaging
• List of ethnic groups
• Multiracial
• Nationalism
• Political correctness
• Population genetics
• Pre-Adamite
• Race (fantasy)
• Race (historical definitions)
• Race (U.S. census)
• Race and genetics
• Race and health
• Race and intelligence
• Race baiting
• Race in biomedicine
• Racial discrimination
• Racial segregation
• Racial stereotypes
• Racial superiority
• Species
• Subspecies
• The Race Question
• The Race of the Future
• White Nationalism
• Whiteness studies

[edit] Footnotes

Day
From Wikipedia, the free encyclopedia




[Image: Water, Rabbit, and Deer, three of the 20 day symbols in the Aztec calendar, from the Aztec calendar stone.]
For other uses, see Day (disambiguation).

A day (symbol d) is a unit of time equivalent to approximately 24 hours. It is not an SI unit but it is
accepted for use with SI.[1] The SI unit of time is the second.

The word 'day' can also refer to the (roughly) half of the day that is not night, also known as 'daytime'.
Both refer to a length of time. Within these meanings, several definitions can be distinguished. 'Day' may
also refer to a 'point' in time, as in answer to the question "On which day?".

The term comes from the Old English dæg, with similar terms common in all other Indo-European
languages, such as Tag in German and dive in Sanskrit.

Contents
[hide]

• 1 International System of Units (SI)


• 2 Astronomy
• 3 Colloquial
• 4 Introduction
• 5 Civil day
• 6 Leap seconds
• 7 Astronomy
• 8 Boundaries of the day
• 9 Metaphorical days
• 10 24 hours vs daytime
• 11 See also
• 12 Notes and references

• 13 External links

[edit] International System of Units (SI)


A day is defined as 86,400 seconds. The International Bureau of Weights and Measures (BIPM) currently
defines a second as

… the duration of 9 192 631 770 periods of the radiation corresponding to the transition between two hyperfine
levels of the ground state of the caesium 133 atom.[2]
This makes the SI day last exactly 794,243,384,928,000 of those periods.
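
The figure follows directly from the two definitions, since a day is 86,400 seconds and each second comprises 9,192,631,770 periods:

\[ 86\,400 \;\tfrac{\text{s}}{\text{day}} \times 9\,192\,631\,770 \;\tfrac{\text{periods}}{\text{s}} = 794\,243\,384\,928\,000 \;\tfrac{\text{periods}}{\text{day}} \]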

In the 19th century it was also suggested that the base unit of time be a decimal fraction (1⁄10,000 or 1⁄100,000) of an
astronomical day. This was a vestige of decimal time and the decimal calendar, which had already been
abandoned.

[edit] Astronomy
A day of exactly 86,400 SI seconds is the fundamental unit of time in astronomy.

For a given planet, there are two types of day defined in astronomy:

• 1 apparent sidereal day - a single rotation of a planet with respect to the distant stars (for Earth it is
23.934 hours);
• 1 solar day - a single rotation of a planet with respect to its star.

[edit] Colloquial
The word refers to various relatedly defined ideas, including the following:

• the period of light when the Sun is above the local horizon (i.e., the time period from sunrise to
sunset);
• the full day covering a dark and a light period, beginning from the beginning of the dark period or
from a point near the middle of the dark period;
• a full dark and light period, sometimes called a nychthemeron in English, from the Greek for
night-day;
• the time period from 6:00 AM to 6:00 PM or 9:00 PM or some other fixed clock period
overlapping or set off from other time periods such as "morning", "evening", or "night".

[Image: Dagr, the Norse god of the day, rides his horse in this 19th century painting by Peter Nicolai Arbo.]

[edit] Introduction
The word day is used for several different units of time based on the rotation of the Earth around its axis.
The most important one follows the apparent motion of the Sun across the sky (solar day). The reason for
this apparent motion is the rotation of the Earth around its axis, as well as the revolution of the Earth in its
orbit around the Sun.
A day, as opposed to night, is commonly defined as the period during which sunlight directly reaches the
ground, assuming that there are no local obstacles. Two effects make days on average longer than nights.
The Sun is not a point, but has an apparent size of about 32 minutes of arc. Additionally, the atmosphere
refracts sunlight in such a way that some of it reaches the ground even when the Sun is below the horizon
by about 34 minutes of arc. So the first light reaches the ground when the centre of the Sun is still below
the horizon by about 50 minutes of arc: half the Sun's 32′ apparent diameter (16′) plus the 34′ of
refraction. The difference in time depends on the angle at which the Sun rises and sets (itself a function of
latitude), but amounts to at least about seven minutes.

Ancient custom has a new day start at either the rising or setting of the Sun on the local horizon (Italian
reckoning, for example). The exact moment of, and the interval between, two sunrises or two sunsets
depends on the geographical position (longitude as well as latitude), and the time of year. This is the time
as indicated by ancient hemispherical sundials.

A more constant day can be defined by the Sun passing through the local meridian, which happens at
local noon (upper culmination) or midnight (lower culmination). The exact moment is dependent on the
geographical longitude, and to a lesser extent on the time of the year. The length of such a day is nearly
constant (24 hours ± 30 seconds). This is the time as indicated by modern sundials.

A further improvement defines a fictitious mean Sun that moves with constant speed along the celestial
equator; the speed is the same as the average speed of the real Sun, but this removes the variation over a
year as the Earth moves along its orbit around the Sun (due to both its velocity and its axial tilt).

The Earth's day has increased in length over time. The original length of one day, when the Earth was new
about 4.5 billion years ago, was about six hours as determined by computer simulation. It was 21.9 hours
620 million years ago as recorded by rhythmites (alternating layers in sandstone). This phenomenon is
due to tides raised by the Moon which slow Earth's rotation. Because of the way the second is defined, the
mean length of a day is now about 86,400.002 seconds, and is increasing by about 1.7 milliseconds per
century (an average over the last 2,700 years). See tidal acceleration for details.

[edit] Civil day


For civil purposes a common clock time has been defined for an entire region based on the mean local
solar time at some central meridian. Such time zones began to be adopted about the middle of the 19th
century when railroads with regular schedules came into use, with most major countries having adopted
them by 1929. For the whole world, 40 such time zones are now in use. The main one is "world time" or
Coordinated Universal Time (UTC).

The present common convention has the civil day starting at midnight, which is near the time of the lower
culmination of the mean Sun on the central meridian of the time zone. A day is commonly divided into 24
hours of 60 minutes of 60 seconds each.

[edit] Leap seconds


In order to keep the civil day aligned with the apparent movement of the Sun, positive or negative leap
seconds may be inserted.

A civil clock day is typically 86,400 SI seconds long, but will be 86,401 s or 86,399 s long in the event of
a leap second.

Leap seconds are announced in advance by the International Earth Rotation and Reference Systems
Service which measures the Earth's rotation and determines whether a leap second is necessary. Leap
seconds occur only at the end of a UTC month, and have only ever been inserted at the end of June 30 or
December 31.

[edit] Astronomy
In astronomy, the sidereal day is also used; it is about 3 minutes 56 seconds shorter than the solar day, and
close to the actual rotation period of the Earth, as opposed to the Sun's apparent motion. In fact, the Earth
spins 366 times about its axis during a 365-day year, because the Earth's revolution about the Sun
removes one apparent turn of the Sun about the Earth.
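
The two day lengths are related by the ratio of rotations to solar days in a year; using roughly 366.25 rotations per 365.25-day year:

\[ \text{sidereal day} \approx 24\,\mathrm{h} \times \frac{365.25}{366.25} \approx 23\,\mathrm{h}\;56\,\mathrm{min}\;4\,\mathrm{s} \]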

[edit] Boundaries of the day


For most diurnal animals, the day naturally begins at dawn and ends at sunset. Humans, with our cultural
norms and scientific knowledge, have supplanted Nature with several different conceptions of the day's
boundaries. The Jewish day begins at either sunset or at nightfall (when three second-magnitude stars
appear). Medieval Europe followed this tradition, known as Florentine reckoning: in this system, a
reference like "two hours into the day" meant two hours after sunset and thus times during the evening
need to be shifted back one calendar day in modern reckoning. Days such as Christmas Eve, Halloween,
and the Eve of Saint Agnes are the remnants of the older pattern when holidays began the evening before.
Present common convention is for the civil day to begin at midnight, that is 00:00 (inclusive), and last a
full twenty-four hours until 24:00 (exclusive).

In ancient Egypt, the day was reckoned from sunrise to sunrise. Muslims fast from daybreak to sunset
each day of the month of Ramadan. The "Damascus Document", copies of which were also found among
the Dead Sea scrolls, states regarding Sabbath observance that "No one is to do any work on Friday from
the moment that the sun's disk stands distant from the horizon by the length of its own diameter,"
presumably indicating that the monastic community responsible for producing this work counted the day
as ending shortly before the sun had begun to set.

In the United States, nights are named after the previous day, e.g. "Friday night" usually means the entire
night between Friday and Saturday. This is the opposite of the Jewish pattern. This difference from the
civil day often leads to confusion. Events starting at midnight are often announced as occurring the day
before. TV guides tend to list nightly programs under the previous day, although programming a VCR
requires the strict logic of starting the new day at 00:00 (to further confuse the issue, VCRs set to the 12-
hour clock notation will label this "12:00 AM"). Expressions like "today", "yesterday" and "tomorrow"
become ambiguous during the night.

Validity of tickets, passes, etc., for a day or a number of days may end at midnight, or closing time, when
that is earlier. However, if a service (e.g. public transport) operates from e.g. 6:00 to 1:00 the next day
(which may be noted as 25:00), the last hour may well count as being part of the previous day (also for the
arrangement of the timetable). For services depending on the day ("closed on Sundays", "does not run on
Fridays", etc.) there is a risk of ambiguity. As an example, for the Nederlandse Spoorwegen (Dutch
Railways), a day ticket is valid 28 hours, from 0:00 to 28:00 (i.e. 4:00 the next day). To give another
example, the validity of a pass on London Regional Transport services is until the end of the "transport
day" -- that is to say, until 4:30 am on the day after the "expiry" date stamped on the pass.

[edit] Metaphorical days


In the Bible, as a way to describe that time is immaterial to God, one day is described as being like one
thousand years (Psalms 90:4, 2 Peter 3:8) to him. Also in 2 Peter 3:8, one thousand years is described as
being like one day. However, some Bible experts interpret this more literally, as a way to understand some
prophecies like those in the Book of Daniel and others (like the Book of Revelation) where days are
mentioned in the form of weeks and years.

[edit] 24 hours vs daytime


To distinguish between a full day and daytime, the word nychthemeron may be used for the former, or
more colloquially the term '24 hours'. In other languages, the latter is also often used. Some languages
have a separate word for a full day, such as 'etmaal' in Dutch and 'сутки' in Russian. German and French
do not have similar words. In Spanish, 'singladura' is used, but only as a marine unit of length, being the
distance covered in 24 hours [1].

[edit] See also


• 1 E4 s, Times from 10 kiloseconds to 100 kiloseconds
• Calculating the day of the week
• Dagr
• Daylight
• Daylight saving time
• Season, for a discussion of daylight and darkness near the poles and the equator and places in-
between
• Week

[edit] Notes and references


Final goods
From Wikipedia, the free encyclopedia

(Redirected from Consumption goods)


Jump to: navigation, search
"Consumer goods" redirects here. For the band, see The Consumer Goods.

In economics final goods are goods that are ultimately consumed rather than used in the production of
another good. For example, a car sold to a consumer is a final good; the components such as tires sold to
the car manufacturer are not; they are intermediate goods used to make the final good.

When used in measures of national income and output, the term "final goods" includes only new goods. For
instance, GDP excludes items counted in an earlier year, to prevent the double counting of production
based on resales of the same item second- and third-hand. In this context the economic definition of goods
includes what are commonly known as services.
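
A stylized example of the double-counting rule, with purely illustrative prices: if a manufacturer buys tires for $1,000 and sells the finished car to a consumer for $30,000, national accounts record only the final sale, because the tires' value is already embedded in the car's price:

\[ \text{GDP contribution} = \$30{,}000, \quad \text{not} \quad \$30{,}000 + \$1{,}000 \]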

Consumer goods are final goods specifically intended for the mass market. For instance, consumer goods
do not include investment assets, like precious antiques, even though these antiques are final goods.
Manufactured goods are goods that have been processed by way of machinery. As such, they are the
opposite of raw materials, but include intermediate goods as well as final goods.


Flower
From Wikipedia, the free encyclopedia

Jump to: navigation, search

For other uses, see Flower (disambiguation).


A poster with twelve species of flowers or clusters of flowers of different families

A flower, sometimes known as a bloom or blossom, is the reproductive structure found in flowering
plants (plants of the division Magnoliophyta, also called angiosperms). The biological function of a
flower is to mediate the union of male sperm with female ovum in order to produce seeds. The process
begins with pollination, is followed by fertilization, and leads to the formation and dispersal of the seeds.
For the higher plants, seeds are the next generation, and serve as the primary means by which individuals
of a species are dispersed across the landscape. The grouping of flowers on a plant is called the inflorescence.

In addition to serving as the reproductive organs of flowering plants, flowers have long been admired and
used by humans, mainly to beautify their environment but also as a source of food.

Contents
[hide]

• 1 Flower specialization and pollination


• 2 Morphology
o 2.1 Floral formula
• 3 Development
o 3.1 Flowering transition
o 3.2 Organ Development
• 4 Pollination
o 4.1 Attraction methods
o 4.2 Pollination mechanism
o 4.3 Flower-pollinator relationships
• 5 Fertilization and dispersal
• 6 Evolution
• 7 Symbolism
• 8 Usage
• 9 See also
• 10 References

• 11 External links

Flower specialization and pollination


Each flower has a specific design which best encourages the transfer of its pollen. Cleistogamous flowers are self-pollinated, after which they may or may not open. Many Viola and some Salvia species are known to have these types of flowers.

Entomophilous flowers attract and use insects, bats, birds or other animals to transfer pollen from one
flower to the next. Flowers commonly have glands called nectaries on their various parts that attract these
animals. Some flowers have patterns, called nectar guides, that show pollinators where to look for nectar.
Flowers also attract pollinators by scent and color. Still other flowers use mimicry to attract pollinators.
Some species of orchids, for example, produce flowers resembling female bees in color, shape, and scent.
Flowers are also specialized in shape and have an arrangement of the stamens that ensures that pollen
grains are transferred to the bodies of the pollinator when it lands in search of its attractant (such as
nectar, pollen, or a mate). In pursuing this attractant from many flowers of the same species, the pollinator
transfers pollen to the stigmas—arranged with equally pointed precision—of all of the flowers it visits.

Callistemon citrinus flower.

Anemophilous flowers use the wind to move pollen from one flower to the next, examples include the
grasses, Birch trees, Ragweed and Maples. They have no need to attract pollinators and therefore tend not
to be "showy" flowers. Male and female reproductive organs are generally found in separate flowers, the
male flowers having a number of long filaments terminating in exposed stamens, and the female flowers
having long, feather-like stigmas. Whereas the pollen of entomophilous flowers tends to be large-grained,
sticky, and rich in protein (another "reward" for pollinators), anemophilous flower pollen is usually small-
grained, very light, and of little nutritional value to insects.

Morphology
Flowering plants are heterosporangiate, producing two types of reproductive spores. The pollen (male
spores) and ovules (female spores) are produced in different organs, but the typical flower is a
bisporangiate strobilus in that it contains both organs.

A flower is regarded as a modified stem with shortened internodes and bearing, at its nodes, structures
that may be highly modified leaves.[1] In essence, a flower structure forms on a modified shoot or axis
with an apical meristem that does not grow continuously (growth is determinate). Flowers may be
attached to the plant in a few ways. If the flower has no stem but forms in the axil of a leaf, it is called
sessile. When one flower is produced, the stem holding the flower is called a peduncle. If the peduncle
ends with groups of flowers, each stem that holds a flower is called a pedicel. The flowering stem forms a
terminal end which is called the torus or receptacle. The parts of a flower are arranged in whorls on the
torus. The four main parts or whorls (starting from the base of the flower or lowest node and working
upwards) are as follows:

Diagram showing the main parts of a mature flower

An example of a perfect flower, this Crateva religiosa flower has both stamens (outer ring) and a pistil
(center).

• Calyx: the outer whorl of sepals; typically these are green, but are petal-like in some species.
• Corolla: the whorl of petals, which are usually thin, soft and colored to attract insects that help the
process of pollination.
• Androecium (from Greek andros oikia: man's house): one or two whorls of stamens, each a
filament topped by an anther where pollen is produced. Pollen contains the male gametes.
• Gynoecium (from Greek gynaikos oikia: woman's house): one or more pistils. The female
reproductive organ is the carpel: this contains an ovary with ovules (which contain female
gametes). A pistil may consist of a number of carpels merged together, in which case there is only
one pistil to each flower, or of a single individual carpel (the flower is then called apocarpous).
The sticky tip of the pistil, the stigma, is the receptor of pollen. The supportive stalk, the style, becomes the pathway for pollen tubes to grow from pollen grains adhering to the stigma to the ovules, carrying the reproductive material.

Although the floral structure described above is considered the "typical" structural plan, plant species
show a wide variety of modifications from this plan. These modifications have significance in the
evolution of flowering plants and are used extensively by botanists to establish relationships among plant
species. For example, the two subclasses of flowering plants may be distinguished by the number of floral
organs in each whorl: dicotyledons typically having 4 or 5 organs (or a multiple of 4 or 5) in each whorl
and monocotyledons having three or some multiple of three. The number of carpels in a compound pistil
may be only two, or otherwise not related to the above generalization for monocots and dicots.

In the majority of species individual flowers have both pistils and stamens as described above. These
flowers are described by botanists as being perfect, bisexual, or hermaphrodite. However, in some species
of plants the flowers are imperfect or unisexual: having only either male (stamens) or female (pistil) parts.
In the latter case, if an individual plant is either female or male the species is regarded as dioecious.
However, where unisexual male and female flowers appear on the same plant, the species is considered
monoecious.
Additional discussions on floral modifications from the basic plan are presented in the articles on each of
the basic parts of the flower. In those species that have more than one flower on an axis—so-called
composite flowers—the collection of flowers is termed an inflorescence; this term can also refer to the
specific arrangements of flowers on a stem. In this regard, care must be exercised in considering what a
‘‘flower’’ is. In botanical terminology, a single daisy or sunflower for example, is not a flower but a
flower head—an inflorescence composed of numerous tiny flowers (sometimes called florets). Each of
these flowers may be anatomically as described above. Many flowers have a symmetry: if the perianth is bisected through the central axis from any point and symmetrical halves are produced, the flower is called regular or actinomorphic, e.g. rose or trillium. When bisection produces symmetrical halves along only one line, the flower is said to be irregular or zygomorphic, e.g. snapdragon or most orchids.

Floral formula

A floral formula is a way to represent the structure of a flower using specific letters, numbers, and
symbols. Typically, a general formula will be used to represent the flower structure of a plant family
rather than a particular species. The following representations are used:

Ca = calyx (sepal whorl; e.g. Ca5 = 5 sepals)
Co = corolla (petal whorl; e.g. Co3(x) = petals in some multiple of three)
Z = added if zygomorphic (e.g. CoZ6 = zygomorphic with 6 petals)
A = androecium (whorl of stamens; e.g. A∞ = many stamens)
G = gynoecium (carpel or carpels; e.g. G1 = monocarpous)
x = a variable number
∞ = many

A floral formula would appear something like this:

Ca5Co5A10–∞G1

Several additional symbols are sometimes used (see Key to Floral Formulas).
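Because the notation is built up mechanically from whorl counts, it can be illustrated with a short sketch. The following Python function is invented for this example (it covers only plain counts and the ∞ symbol, not extensions such as Co3(x)):

INFINITY = float("inf")  # stands in for the ∞ symbol ("many")

def floral_formula(sepals, petals, stamens, carpels, zygomorphic=False):
    # Assemble a simple floral-formula string from whorl counts.
    def count(n):
        return "∞" if n == INFINITY else str(n)
    corolla = ("CoZ" if zygomorphic else "Co") + count(petals)
    return "Ca" + count(sepals) + corolla + "A" + count(stamens) + "G" + count(carpels)

print(floral_formula(5, 5, INFINITY, 1))  # Ca5Co5A∞G1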

Development
Flowering transition

The transition to flowering is one of the major phase changes that a plant makes during its life cycle. The
transition must take place at a time that will ensure maximal reproductive success. To meet these needs a
plant is able to interpret important endogenous and environmental cues such as changes in levels of plant hormones and seasonal changes in temperature and photoperiod. Many perennial and most biennial plants
require vernalization to flower. The molecular interpretation of these signals through genes such as
CONSTANS and FLC ensures that flowering occurs at a time that is favorable for fertilization and the
formation of seeds.[2] Flower formation is initiated at the ends of stems, and involves a number of different
physiological and morphological changes. The first step is the transformation of the vegetative stem
primordia into floral primordia. This occurs as biochemical changes take place to change cellular
differentiation of leaf, bud and stem tissues into tissue that will grow into the reproductive organs. Growth
of the central part of the stem tip stops or flattens out and the sides develop protuberances in a whorled or
spiral fashion around the outside of the stem end. These protuberances develop into the sepals, petals,
stamens, and carpels. Once this process begins, in most plants it cannot be reversed: the stems develop flowers even if the initial start of flower formation depended on some environmental cue, and even if that cue is subsequently removed.[3]

Organ Development

The ABC model of flower development.

The molecular control of floral organ identity determination is fairly well understood. In a simple model,
three gene activities interact in a combinatorial manner to determine the developmental identities of the
organ primordia within the floral meristem. These gene functions are called A, B and C-gene functions. In
the first floral whorl only A-genes are expressed, leading to the formation of sepals. In the second whorl
both A- and B-genes are expressed, leading to the formation of petals. In the third whorl, B and C genes
interact to form stamens and in the center of the flower C-genes alone give rise to carpels. The model is
based upon studies of homeotic mutants in Arabidopsis thaliana and snapdragon, Antirrhinum majus. For
example, when there is a loss of B-gene function, mutant flowers are produced with sepals in the first
whorl as usual, but also in the second whorl instead of the normal petal formation. In the third whorl the
lack of B function but presence of C-function mimics the fourth whorl, leading to the formation of carpels
also in the third whorl. See also The ABC Model of Flower Development.

Most genes central in this model belong to the MADS-box genes and are transcription factors that
regulate the expression of the genes specific for each floral organ.
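Because the model is purely combinatorial, it can be rendered as a small lookup in which each whorl's organ identity follows from the set of gene classes active there. The Python sketch below is a toy illustration of the model as described above, not an implementation from the literature:

# Organ identity as a function of which gene classes are active in a whorl.
ORGAN_BY_GENES = {
    frozenset("A"): "sepal",
    frozenset("AB"): "petal",
    frozenset("BC"): "stamen",
    frozenset("C"): "carpel",
}

def organs(whorl_gene_activity):
    return [ORGAN_BY_GENES[frozenset(genes)] for genes in whorl_gene_activity]

wild_type = ["A", "AB", "BC", "C"]  # gene activity in whorls 1-4
b_mutant = ["A", "A", "C", "C"]     # loss of B function, as described above

print(organs(wild_type))  # ['sepal', 'petal', 'stamen', 'carpel']
print(organs(b_mutant))   # ['sepal', 'sepal', 'carpel', 'carpel']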

Pollination

Grains of pollen sticking to this bee will be transferred to the next flower it visits
Main article: pollination

The primary purpose of a flower is reproduction. Since flowers are the reproductive organs of the plant, they mediate the joining of the sperm, contained within pollen, to the ovules, contained in the ovary. Pollination is the movement of pollen from the anthers to the stigma. The joining of the sperm to the ovules is called fertilization. Normally pollen is moved from one plant to another, but many plants are able to self-pollinate. The fertilized ovules produce seeds that are the next generation. Sexual reproduction produces genetically unique offspring, allowing for adaptation. Flowers have specific designs which encourage the transfer of pollen from one plant to another of the same species. Many plants are dependent upon external factors for pollination, including wind and animals, especially insects. Even large animals such as birds, bats, and pygmy possums can be employed. The period of time during which this process can take place (the flower is fully expanded and functional) is called anthesis.
Attraction methods

The bee orchid evolved to mimic a female bee to attract male bee pollinators

Plants cannot move from one location to another, thus many flowers have evolved to attract animals to transfer pollen between individuals in dispersed populations. Flowers that are insect-pollinated are called entomophilous, literally "insect-loving". They can be highly modified along with the pollinating insects by co-evolution. Flowers commonly have glands called nectaries on various parts that attract animals looking for nutritious nectar. Birds and bees have color vision, enabling them to seek out "colorful" flowers. Some flowers have patterns, called nectar guides, that show pollinators where to look for nectar; they may be visible only under ultraviolet light, which is visible to bees and some other insects. Flowers also attract pollinators by scent, and some of those scents are pleasant to our sense of smell. Not all flower scents are appealing to humans, however; a number of flowers are pollinated by insects that are attracted to rotten flesh, and have flowers that smell like dead animals. These are often called carrion flowers, and include Rafflesia, the titan arum, and the North American pawpaw (Asimina triloba). Flowers pollinated by night visitors, including bats and moths, are likely to concentrate on scent to attract pollinators; most such flowers are white.

Still other flowers use mimicry to attract pollinators. Some species of orchids, for example, produce
flowers resembling female bees in color, shape, and scent. Male bees move from one such flower to
another in search of a mate.

Pollination mechanism

The pollination mechanism employed by a plant depends on which method of pollination it uses.

Most flowers can be divided between two broad groups of pollination methods:

Entomophilous: flowers attract and use insects, bats, birds or other animals to transfer pollen from one
flower to the next. Often they are specialized in shape and have an arrangement of the stamens that
ensures that pollen grains are transferred to the bodies of the pollinator when it lands in search of its
attractant (such as nectar, pollen, or a mate). In pursuing this attractant from many flowers of the same
species, the pollinator transfers pollen to the stigmas—arranged with equally pointed precision—of all of
the flowers it visits. Many flowers rely on simple proximity between flower parts to ensure pollination.
Others, such as the Sarracenia or lady-slipper orchids, have elaborate designs to ensure pollination while
preventing self-pollination.
Anthers detached from a Meadow Foxtail flower.

A grass flower head (Meadow Foxtail) showing the plain coloured flowers with large anthers.

Anemophilous: flowers use the wind to move pollen from one flower to the next, examples include the
grasses, Birch trees, Ragweed and Maples. They have no need to attract pollinators and therefore tend not
to be "showy" flowers. Whereas the pollen of entomophilous flowers tends to be large-grained, sticky,
and rich in protein (another "reward" for pollinators), anemophilous flower pollen is usually small-
grained, very light, and of little nutritional value to insects, though it may still be gathered in times of
dearth. Honeybees and bumblebees actively gather anemophilous corn (maize) pollen, though it is of little
value to them.

Some flowers are self-pollinated; their flowers may never open, or may self-pollinate before the flowers open. Such flowers are called cleistogamous. Many Viola species and some Salvia have these types of flowers.

Flower-pollinator relationships

Many flowers have close relationships with one or a few specific pollinating organisms. Many flowers,
for example, attract only one specific species of insect, and therefore rely on that insect for successful
reproduction. This close relationship is often given as an example of coevolution, as the flower and
pollinator are thought to have developed together over a long period of time to match each other's needs.

This close relationship compounds the negative effects of extinction. The extinction of either member in
such a relationship would mean almost certain extinction of the other member as well. Some endangered
plant species are so because of shrinking pollinator populations.

Fertilization and dispersal


Main article: biological dispersal
Crocosmia flowers. In this picture the stamens of the flower are clearly visible.

Some flowers with both stamens and a pistil are capable of self-fertilization, which does increase the
chance of producing seeds but limits genetic variation. The extreme case of self-fertilization occurs in
flowers that always self-fertilize, such as many dandelions. Conversely, many species of plants have ways
of preventing self-fertilization. Unisexual male and female flowers on the same plant may not appear or
mature at the same time, or pollen from the same plant may be incapable of fertilizing its ovules. The
latter flower types, which have chemical barriers to their own pollen, are referred to as self-sterile or self-
incompatible (see also: Plant sexuality).

Evolution
Further information: Evolutionary history of plants#Evolution of flowers

Hydrangea flowers in Kamakura, Kanagawa, Japan

While land plants have existed for about 425 million years, the first ones reproduced by a simple
adaptation of their aquatic counterparts: spores. In the sea, plants -- and some animals -- can simply
scatter out genetic clones of themselves to float away and grow elsewhere. This is how early plants
reproduced. But plants soon evolved methods of protecting these copies to deal with drying out and other
abuse which is even more likely on land than in the sea. The protection became the seed, though it had not
yet evolved the flower. Early seed-bearing plants include the ginkgo and conifers. The earliest fossil of a
flowering plant, Archaefructus liaoningensis, is dated about 125 million years old.[4] Several groups of
extinct gymnosperms, particularly seed ferns, have been proposed as the ancestors of flowering plants but
there is no continuous fossil evidence showing exactly how flowers evolved. The apparently sudden
appearance of relatively modern flowers in the fossil record posed such a problem for the theory of
evolution that it was called an "abominable mystery" by Charles Darwin. Recently discovered angiosperm
fossils such as Archaefructus, along with further discoveries of fossil gymnosperms, suggest how
angiosperm characteristics may have been acquired in a series of steps.

Recent DNA analysis (molecular systematics)[5][6] shows that Amborella trichopoda, found on the Pacific island of New Caledonia, is the sister group to the rest of the flowering plants, and morphological studies[7] suggest that it has features which may have been characteristic of the earliest flowering plants.
A Syrphid fly on a Grape hyacinth

The general assumption is that the function of flowers, from the start, was to involve other animals in the
reproduction process. Pollen can be scattered without bright colors and obvious shapes, which would
therefore be a liability, using the plant's resources, unless they provide some other benefit. One proposed
reason for the sudden, fully developed appearance of flowers is that they evolved in an isolated setting
like an island, or chain of islands, where the plants bearing them were able to develop a highly specialized
relationship with some specific animal (a wasp, for example), the way many island species develop today.
This symbiotic relationship, with a hypothetical wasp bearing pollen from one plant to another much the
way fig wasps do today, could have eventually resulted in both the plant(s) and their partners developing a
high degree of specialization. Island genetics is believed to be a common source of speciation, especially
when it comes to radical adaptations which seem to have required inferior transitional forms. Note that the
wasp example is not incidental; bees, apparently evolved specifically for symbiotic plant relationships, are
descended from wasps.

Likewise, most fruit used in plant reproduction comes from the enlargement of parts of the flower. This
fruit is frequently a tool which depends upon animals wishing to eat it, and thus scattering the seeds it
contains.

While many such symbiotic relationships remain too fragile to survive competition with mainland animals
and spread, flowers proved to be an unusually effective means of production, spreading (whatever their
actual origin) to become the dominant form of land plant life.

While there is only hard proof of such flowers existing about 130 million years ago, there is some circumstantial evidence that they existed up to 250 million years ago. A chemical used by plants to defend their flowers, oleanane, has been detected in fossil plants that old, including gigantopterids[8], which evolved at that time and bear many of the traits of modern flowering plants, though they are not known to be flowering plants themselves, because only their stems and prickles have been preserved in detail (one of the earliest examples of petrification).

The similarity in leaf and stem structure can be very important, because flowers are genetically just an
adaptation of normal leaf and stem components on plants, a combination of genes normally responsible
for forming new shoots.[9] The most primitive flowers are thought to have had a variable number of flower
parts, often separate from (but in contact with) each other. The flowers would have tended to grow in a
spiral pattern, to be bisexual (in plants, this means both male and female parts on the same flower), and to
be dominated by the ovary (female part). As flowers grew more advanced, some variations developed
parts fused together, with a much more specific number and design, and with either specific sexes per
flower or plant, or at least "ovary inferior".
Flower evolution continues to the present day; modern flowers have been so profoundly influenced by
humans that many of them cannot be pollinated in nature. Many modern, domesticated flowers used to be
simple weeds, which only sprouted when the ground was disturbed. Some of them tended to grow with
human crops, and the prettiest did not get plucked because of their beauty, developing a dependence upon
and special adaptation to human affection.[10]

Symbolism
Lilies are often used to denote life or resurrection

Flowers inspire decorative motifs

Flowers are common subjects of still life paintings, such as this one by Ambrosius Bosschaert the Elder

Chinese Jade ornament with flower design, Jin Dynasty (1115-1234 AD), Shanghai Museum.

Flowers are beloved for their various fragrances


Many flowers have important symbolic meanings in Western culture. The practice of assigning meanings
to flowers is known as floriography. Some of the more common examples include:

• Red roses are given as a symbol of love, beauty, and passion.


• Poppies are a symbol of consolation in time of death. In the UK, New Zealand, Australia and
Canada, red poppies are worn to commemorate soldiers who have died in times of war.
• Irises/lilies are used in burials as a symbol referring to "resurrection/life". They are also associated with stars (the sun) and their blooming/shining petals.
• Daisies are a symbol of innocence.

Flowers within art are also representative of the female genitalia, as seen in the works of artists such as Georgia O'Keeffe, Imogen Cunningham, Veronica Ruiz de Velasco, and Judy Chicago, as well as in Asian and Western classical art. Many cultures around the world have a marked tendency to associate flowers with femininity.

The great variety of delicate and beautiful flowers has inspired the works of numerous poets, especially
from the 18th-19th century Romantic era. Famous examples include William Wordsworth's I Wandered
Lonely as a Cloud and William Blake's Ah! Sun-Flower.

Because of their varied and colorful appearance, flowers have long been a favorite subject of visual artists
as well. Some of the most celebrated paintings from well-known painters are of flowers, such as Van
Gogh's sunflowers series or Monet's water lilies. Flowers are also dried, freeze dried and pressed in order
to create permanent, three-dimensional pieces of flower art.

The Roman goddess of flowers, gardens, and the season of Spring is Flora. The Greek goddess of spring,
flowers and nature is Chloris.

In Hindu mythology, flowers have a significant status. Vishnu, one of the three major gods in the Hindu
system, is often depicted standing straight on a lotus flower.[11] Apart from the association with Vishnu,
the Hindu tradition also considers the lotus to have spiritual significance.[12] For example, it figures in the
Hindu stories of creation.[13]

Usage
In modern times, people have sought ways to cultivate, buy, wear, or otherwise be around flowers and
blooming plants, partly because of their agreeable appearance and smell. Around the world, people use
flowers for a wide range of events and functions that, cumulatively, encompass one's lifetime:

• For new births or Christenings


• As a corsage or boutonniere to be worn at social functions or for holidays
• As tokens of love or esteem
• For wedding flowers for the bridal party, and decorations for the hall
• As brightening decorations within the home
• As a gift of remembrance for bon voyage parties, welcome home parties, and "thinking of you"
gifts
• For funeral flowers and expressions of sympathy for the grieving

People therefore grow flowers around their homes, dedicate entire parts of their living space to flower
gardens, pick wildflowers, or buy flowers from florists who depend on an entire network of commercial
growers and shippers to support their trade.
Flowers provide less food than other major plant parts (seeds, fruits, roots, stems and leaves), but they
provide several important foods and spices. Flower vegetables include broccoli, cauliflower and
artichoke. The most expensive spice, saffron, consists of dried stigmas of a crocus. Other flower spices
are cloves and capers. Hops flowers are used to flavor beer. Marigold flowers are fed to chickens to give
their egg yolks a golden yellow color, which consumers find more desirable. Dandelion flowers are often
made into wine. Bee pollen, pollen collected from bees, is considered a health food by some people.
Honey consists of bee-processed flower nectar and is often named for the type of flower, e.g. orange
blossom honey, clover honey and tupelo honey.

Hundreds of fresh flowers are edible but few are widely marketed as food. They are often used to add
color and flavor to salads. Squash flowers are dipped in breadcrumbs and fried. Edible flowers include
nasturtium, chrysanthemum, carnation, cattail, honeysuckle, chicory, cornflower, Canna, and sunflower.
Some edible flowers are sometimes candied, such as daisy and rose (one may also come across a candied pansy).

Flowers can also be made into herbal teas. Dried flowers such as chrysanthemum, rose, jasmine, and camomile are infused into tea, both for their fragrance and for their medicinal properties. Sometimes they are also mixed with tea leaves for the added fragrance.

See also
• Plants
• List of garden plants
• Plant sexuality
• Garden
• Gardening
• Sowing
• Evolution of plants
• Plant evolutionary developmental biology

References
Religion
From Wikipedia, the free encyclopedia

Jump to: navigation, search

A religion is a way of life based on tenets (or a belief system) about the ultimate power. It is generally expressed through conduct such as prayers, rituals, or other practices, often centered upon specific supernatural and moral claims about reality (the cosmos and human nature) which may yield a set of religious laws. Religion also encompasses ancestral or cultural traditions, writings, history, and mythology, as well as personal faith and religious experience.

The term "religion" refers both to the personal practices related to communal faith and to group rituals and communication stemming from shared conviction. "Religion" is sometimes used interchangeably with "faith" or "belief system,"[1] but it is more socially defined than personal conviction, and it entails specific behaviors.

The development of religion has taken many forms in various cultures. Its study considers the psychological and social roots of religion, along with its origins and historical development.

In the frame of western religious thought,[2] religions present a common quality, the "hallmark of
patriarchal religious thought": the division of the world in two comprehensive domains, one sacred, the
other profane.[3] Religion is often described as a communal system for the coherence of belief focusing on
a system of thought, unseen being, person, or object, that is considered to be supernatural, sacred, divine,
or of the highest truth. Moral codes, practices, values, institutions, tradition, rituals, and scriptures are
often traditionally associated with the core belief, and these may have some overlap with concepts in
secular philosophy. Religion is also often described as a "way of life" or a life stance.

Contents
[hide]

• 1 Etymology
• 2 Definitions of religion
• 3 Religion and superstition
• 4 History
o 4.1 Development of religion
o 4.2 The "Axial Age"
o 4.3 Middle Ages
o 4.4 Modern period
• 5 Classification
• 6 Religious belief
• 7 Related forms of thought
o 7.1 Religion and science
o 7.2 Religion, metaphysics, and cosmology
o 7.3 Mysticism and esotericism
o 7.4 Spirituality
o 7.5 Myth
o 7.6 Cosmology
• 8 Criticism
• 9 See also
• 10 Notes
• 11 References

• 12 External links

Etymology
The English word religion has been in use since the 13th century, loaned from Anglo-French religiun
(11th century), ultimately from the Latin religio, "reverence for God or the gods, careful pondering of
divine things, piety, the res divinae".[4]

The ultimate origins of Latin religio are obscure. It is usually accepted to derive from ligare "bind,
connect"; likely from a prefixed re-ligare, i.e. re (again) + ligare or "to reconnect." This interpretation is
favoured by modern scholars such as Tom Harpur and Joseph Campbell, but was made prominent by St.
Augustine, following the interpretation of Lactantius. Another possibility is derivation from a
reduplicated *le-ligare. A historical interpretation due to Cicero on the other hand connects lego "read",
i.e. re (again) + lego in the sense of "choose", "go over again" or "consider carefully".[5] It may also be
from Latin religiō, religiōn-, perhaps from religāre, to tie fast.[6]

Definitions of religion
Further information: Sociology of Religion, Transcendence, Theism, Sacred (comparative
religion), Religion and mythology, and Myth and ritual

Confucianism, Taoism, and Buddhism are one, a painting in the litang style portraying three men laughing
by a river stream, 12th century, Song Dynasty.

Religion has been defined in a wide variety of ways. Most definitions attempt to find a balance
somewhere between overly sharp definition and meaningless generalities. Some sources have tried to use
formalistic, doctrinal definitions while others have emphasized experiential, emotive, intuitive,
valuational and ethical factors. Definitions mostly include:

• a notion of the transcendent or numinous, often, but not always, in the form of theism
• a cultural or behavioural aspect of ritual, liturgy and organized worship, often involving a
priesthood, and societal norms of morality (ethos) and virtue (arete)
• a set of myths or sacred truths held in reverence or believed by adherents

Sociologists and anthropologists tend to see religion as an abstract set of ideas, values, or experiences
developed as part of a cultural matrix. For example, in Lindbeck's Nature of Doctrine, religion does not
refer to belief in "God" or a transcendent Absolute. Instead, Lindbeck defines religion as, "a kind of
cultural and/or linguistic framework or medium that shapes the entirety of life and thought… it is similar
to an idiom that makes possible the description of realities, the formulation of beliefs, and the
experiencing of inner attitudes, feelings, and sentiments."[7] According to this definition, religion refers to
one's primary worldview and how this dictates one's thoughts and actions.

There is a tendency in the sociology of religion to emphasize the problems of any definition of religion.
Talal Asad has gone so far as to say "there cannot be a universal definition of religion … because that definition is itself the historical product of discursive processes".[8]

Other religious scholars have put forward a definition of religion that avoids the reductionism of the
various sociological and psychological disciplines that reduce religion to its component factors. Religion
may be defined as the presence of a belief in the sacred or the holy. For example Rudolf Otto's "The Idea
of the Holy," formulated in 1917, defines the essence of religious awareness as awe, a unique blend of
fear and fascination before the divine. Friedrich Schleiermacher in the late 18th century defined religion
as a "feeling of absolute dependence."

The Encyclopedia of Religion defines religion this way:[9]

In summary, it may be said that almost every known culture involves the religious in the above sense of a depth
dimension in cultural experiences at all levels — a push, whether ill-defined or conscious, toward some sort of
ultimacy and transcendence that will provide norms and power for the rest of life. When more or less distinct
patterns of behaviour are built around this depth dimension in a culture, this structure constitutes religion in its
historically recognizable form. Religion is the organization of life around the depth dimensions of experience —
varied in form, completeness, and clarity in accordance with the environing culture.
Other encyclopedic definitions include: "A general term used... to designate all concepts concerning the
belief in god(s) and goddess(es) as well as other spiritual beings or transcendental ultimate concerns"[10]
and "human beings' relation to that which they regard as holy, sacred, spiritual, or divine."[11]

Religion and superstition


Further information: Superstition, Magical thinking, and Magic and religion

While superstitions and magical thinking refer to nonscientific causal reasoning, applied to specific things
or actions, a religion is a more complex system about general or ultimate things, involving morality,
history and community. Because religions may include and exploit certain superstitions or make use of
magical thinking, while mixing them with broader considerations, the division between superstition and
religious faith is hard to specify and subjective. Religious believers have often seen other religions as
superstition.[12] Likewise, some atheists, agnostics, deists, and skeptics regard religious belief as
superstition. Religious practices are most likely to be labeled "superstitious" by outsiders when they
include belief in extraordinary events (miracles), an afterlife, supernatural interventions, apparitions or the
efficacy of prayer, charms, incantations, the meaningfulness of omens, and prognostications.

Greek and Roman pagans, who modeled their relations with the gods on political and social terms, scorned
the man who constantly trembled with fear at the thought of the gods, as a slave feared a cruel and
capricious master. Such fear of the gods (deisidaimonia) was what the Romans meant by superstitio
(Veyne 1987, p 211). Early Christianity was outlawed as a superstitio Iudaica, a "Jewish superstition", by
Domitian in the 80s AD, and by AD 425, Theodosius II outlawed pagan traditions as superstitious.

The Roman Catholic Church considers superstition to be sinful in the sense that it denotes a lack of trust
in the divine providence of God and, as such, is a violation of the first of the Ten Commandments. The
Catechism of the Catholic Church states superstition "in some sense represents a perverse excess of
religion" (para. #2110).

Superstition is a deviation of religious feeling and of the practices this feeling imposes. It can even affect the
worship we offer the true God, e.g., when one attributes an importance in some way magical to certain practices
otherwise lawful or necessary. To attribute the efficacy of prayers or of sacramental signs to their mere external
performance, apart from the interior dispositions that they demand is to fall into superstition. Cf. Matthew 23:16-22
(para. #2111)

History
Main articles: History of religion and Timeline of religion

Detail from Religion, Charles Sprague Pearce (1896). Library of Congress Thomas Jefferson Building,
Washington, D.C.
The history of religion refers to the written record of human religious experiences and ideas. This period
of religious history typically begins with the invention of writing about 5,000 years ago (3,000 BCE) in
the Near East.

Development of religion

Main articles: Evolutionary origin of religions, Development of religion, Anthropology of


religion, and Prehistoric religion

There are a number of models regarding the ways in which religions come into being and develop.
Broadly speaking, these models fall into three categories:

• Models which see religions as social constructions;


• Models which see religions as progressing toward higher, objective truth;
• Models which see a particular religion as absolutely true.

In pre-modern (pre-urban) societies, religion is one defining factor of ethnicity, along with language,
regional customs, national costume, etc. As Xenophanes famously comments:

Men make gods in their own image; those of the Ethiopians are black and snub-nosed, those of the
Thracians have blue eyes and red hair.

Ethnic religions may include officially sanctioned and organized civil religions with an organized clergy,
but they are characterized in that adherents generally are defined by their ethnicity, and conversion
essentially equates to cultural assimilation to the people in question. The notion of gentiles ("nations") in Judaism reflects this state of affairs, with the implicit assumption that each nation will have its own religion.
Historical examples include Germanic polytheism, Celtic polytheism, Slavic polytheism and pre-
Hellenistic Greek religion.

The "Axial Age"

Main article: Axial Age

Karl Jaspers, in his Vom Ursprung und Ziel der Geschichte (The Origin and Goal of History), identified a
number of key Axial Age thinkers as having had a profound influence on future philosophy and religion,
and identified characteristics common to each area from which those thinkers emerged. Jaspers saw in
these developments in religion and philosophy a striking parallel without any obvious direct transmission
of ideas from one region to the other, having found very little recorded proof of extensive inter-
communication between the ancient Near East, Greece, India and China. Jaspers held up this age as
unique, one against which the rest of the history of human thought might be compared. Jaspers' approach to the
culture of the middle of the first millennium BCE has been adopted by other scholars and academics, and
has become a point of discussion in the history of religion.

In its later part, the "Axial Age" culminated in the development of monism and monotheism, notably of
Platonic realism and Neoplatonism in Hellenistic philosophy, the notion of atman in Vedanta Hindu
philosophy, and the notion of Tao in Taoism.
Central Asian (Tocharian?) and East-Asian Buddhist monks, Bezeklik, Eastern Tarim Basin, 9th-10th
century.

Middle Ages

Newer present-day world religions established themselves throughout Eurasia during the Middle Ages by:
Christianization of the Western world; Buddhist missions to East Asia; the decline of Buddhism in the
Indian subcontinent; and the spread of Islam throughout the Middle East, Central Asia, North Africa and
parts of Europe and India.

During the Middle Ages, Muslims were in conflict with Zoroastrians during the Islamic conquest of
Persia; Christians were in conflict with Muslims during the Byzantine-Arab Wars, Crusades, Reconquista
and Ottoman wars in Europe; Christians were in conflict with Jews during the Crusades, Reconquista and
Inquisition; Shamans were in conflict with Buddhists, Taoists, Muslims and Christians during the Mongol
invasions; and Muslims were in conflict with Hindus and Sikhs during Muslim conquest in the Indian
subcontinent.

Many medieval religious movements emphasized mysticism, such as the Cathars and related movements
in the West, the Bhakti movement in India and Sufism in Islam. Monotheism reached definite forms in
Christian Christology and in Islamic Tawhid. Hindu monotheist notions of Brahman likewise reached
their classical form with the teaching of Adi Shankara.

Modern period

European colonisation during the 15th to 19th centuries resulted in the spread of Christianity to Sub-
Saharan Africa, the Americas, Australia and the Philippines. The 18th century saw the beginning of
secularisation in Europe, rising to notability in the wake of the French Revolution.

In the 20th century, the regimes of Communist Eastern Europe and Communist China were explicitly
anti-religious. A great variety of new religious movements originated in the 20th century, many proposing
syncretism of elements of established religions. Adherence to such new movements is limited, however,
remaining below 2% worldwide in the 2000s. Adherents of the classical world religions account for more
than 75% of the world's population, while adherence to indigenous tribal religions has fallen to 4%. As of
2005, an estimated 14% of the world's population identifies as nonreligious.

Classification
Main article: Major religious groups
Further information: Comparative religion and Sociological classifications of religious
movements

Religious traditions fall into super-groups in comparative religion, arranged by historical origin and
mutual influence. Abrahamic religions originate in the Middle East, Indian religions in India and Far
Eastern religions in East Asia. Another group with supra-regional influence are African diasporic
religions, which have their origins in Central and West Africa.

Major religious groups as a percentage of the world population in 2005 (Encyclopaedia Britannica).

The main Religions of the World, mapped without denominations.


In summary, religious adherence of the world's population is as follows: "Abrahamic": 53.5%, "Indian": 19.7%, irreligious: 14.3%, "Far Eastern": 6.5%, tribal religions: 4.0%, new religious movements: 2.0%.

• Abrahamic religions are by far the largest group, and these consist primarily of Christianity, Islam
and Judaism (sometimes the Bahá'í Faith is also included). They are named for the patriarch
Abraham, and are unified by the practice of monotheism. Today, around 3.4 billion people are
followers of Abrahamic religions and are spread widely around the world apart from the regions
around South-East Asia. Several Abrahamic organizations are vigorous proselytizers.[13]
• Indian religions originated in Greater India and tend to share a number of key concepts, such as
dharma and karma. They are of the most influence across the Indian subcontinent, East Asia,
South East Asia, as well as isolated parts of Russia. The main Indian religions are Hinduism,
Buddhism, Sikhism, and Jainism. Indian religions mutually influenced each other. Sikhism was
also influenced by the Abrahamic tradition of Sufism.
• Far Eastern religions consist of several East Asian religions which make use of the concept of Tao
(in Chinese) or Do (in Japanese or Korean). They include Taoism, Shinto, Chondogyo, Caodaism,
and Yiguandao. Far Eastern Buddhism (in which the group overlaps with the "Indian" group) and
Confucianism (which by some categorizations is not a religion) are also included.
• Iranic religions originated in Iran and include Zoroastrianism, Yazdanism and historical traditions of Gnosticism (Mandaeanism, Manichaeism). They have significant overlaps with Abrahamic traditions, e.g. in Sufism and in recent movements such as Bábísm and the Bahá'í Faith.
• African diasporic religions are practiced in the Americas, imported as a result of the Atlantic slave trade of the 16th to 18th centuries, and build on the traditional religions of Central and West Africa.
• Indigenous tribal religions, formerly found on every continent, now marginalized by the major
organized faiths, but persisting as undercurrents of folk religion. Includes African traditional
religions, Asian Shamanism, Native American religions, Austronesian and Australian Aboriginal
traditions and arguably Chinese folk religion (overlaps with Far Eastern religions). Under more
traditional listings, this has been referred to as "Paganism" along with historical polytheism.
• New religious movements, a heterogeneous group of religious faiths emerging since the 19th
century, often syncretizing, re-interpreting or reviving aspects of older traditions (Bahá'í, Hindu
revivalism, Ayyavazhi, Pentecostalism, polytheistic reconstructionism), some inspired by science-
fiction (UFO religions). See List of new religious movements, list of groups referred to as cults.

Demographic distribution of the major super-groupings mentioned is shown below (name of religion: number of followers; date of origin; main regions covered):

Abrahamic religions (about 3.6 billion followers in total)
• Christianity: 2.1 billion; 1st c.; worldwide except Northwest Africa, the Arabian Peninsula, and parts of Central, East, and Southeast Asia.
• Islam: 1.5 billion; 7th c.; Middle East, Northern Africa, Central Asia, South Asia, Western Africa, Indian subcontinent and Malay Archipelago, with large population centers in Eastern Africa, the Balkan Peninsula, Russia, Europe and China.
• Judaism: 14 million; 1300 BCE; Israel and the Jewish diaspora (mostly in the USA, Canada, and Europe).
• Bahá'í Faith: 5 million; 19th c.; dispersed worldwide with no major population centers.
• Rastafarianism: 600,000; 1930s; Jamaica, Caribbean, Africa.

Indian religions (about 1.4 billion followers in total)
• Hinduism: 900 million; no founder; Indian subcontinent, Fiji, Guyana and Mauritius.
• Buddhism: 376 million; Iron Age (1200–300 BCE); Indian subcontinent, East Asia, Indochina, regions of Russia.
• Sikhism: 25.8 million; 15th c.; India, Pakistan, Africa, Canada, USA, United Kingdom.
• Jainism: 4.2 million; Iron Age (1200–300 BCE); India and East Africa.

Far Eastern religions (about 500 million followers in total)
• Taoism: unknown; Spring and Autumn Period (722–481 BC); China and the Chinese diaspora.
• Confucianism: unknown; Spring and Autumn Period (722–481 BC); China, Korea, Vietnam and the Chinese and Vietnamese diasporas.
• Shinto: 4 million; no founder; Japan.
• Caodaism: 1–2 million; 1925; Vietnam.
• Chondogyo: 1.13 million; 1812; Korea.
• Yiguandao: 1–2 million; c. 1900; Taiwan.
• Chinese folk religion: 394 million; no founder (a combination of Taoism, Confucianism and Buddhism); China.

Ethnic/tribal religions (about 400 million followers in total)
• Primal indigenous: 300 million; no founder; India, Asia.
• African traditional and diasporic: 100 million; no founder; Africa, Americas.

Other (each over 500 thousand followers)
• Juche: 19 million; 1955; North Korea.
• Spiritism: 15 million; 19th century; Brazil, Europe, North America.
• Neopaganism: 1 million; 20th century; Europe, United States.
• Ahl-e Haqq: 1 million; ancient; Iraq, Iran.
• Yazidism: 800,000–1,000,000; ancient; mainly Iraq.
• Unitarian-Universalism: 800,000; 1961; United States, Europe.
• Scientology: 500,000; 1952; United States, Europe.

Religious belief
Main article: Religious belief

Religious belief usually relates to the existence, nature and worship of a deity or deities and divine
involvement in the universe and human life. Alternately, it may also relate to values and practices
transmitted by a spiritual leader. Unlike other belief systems, which may be passed on orally, religious
belief tends to be codified in literate societies (religion in non-literate societies is still largely passed on
orally[14]). In some religions, like the Abrahamic religions, it is held that most of the core beliefs have been
divinely revealed.

Related forms of thought


Religion and science

Main article: Relationship between religion and science

Religious knowledge, according to religious practitioners, may be gained from religious leaders, sacred
texts (scriptures), and/or personal revelation. Some religions view such knowledge as unlimited in scope
and suitable to answer any question; others see religious knowledge as playing a more restricted role,
often as a complement to knowledge gained through physical observation. Some religious people
maintain that religious knowledge obtained in this way is absolute and infallible (religious cosmology).

The scientific method gains knowledge by testing hypotheses to develop theories through elucidation of
facts or evaluation by experiments and thus only answers cosmological questions about the physical
universe. It develops theories of the world which best fit physically observed evidence. All scientific
knowledge is subject to later refinement in the face of additional evidence. Scientific theories that have an
overwhelming preponderance of favorable evidence are often treated as facts (such as the theories of
gravity or evolution).

Early science such as geometry and astronomy was connected to the divine for most medieval scholars.
The compass in this 13th century manuscript is a symbol of God's act of creation.

Many scientists held strong religious beliefs (see List of Christian thinkers in science) and worked to
harmonize science and religion. Isaac Newton, for example, believed that gravity caused the planets to
revolve about the Sun, and credited God with the design. In the concluding General Scholium to the
Philosophiae Naturalis Principia Mathematica, he wrote: "This most beautiful System of the Sun, Planets
and Comets, could only proceed from the counsel and dominion of an intelligent and powerful being."
Nevertheless, conflict arose between religious organizations and individuals who propagated scientific
theories which were deemed unacceptable by the organizations. The Roman Catholic Church, for
example, has in the past[15] reserved to itself the right to decide which scientific theories were acceptable
and which were unacceptable. In the 17th century, Galileo was tried and forced to recant the heliocentric
theory based on the medieval church's stance that the Greek Hellenistic system of astronomy was the
correct one.[16][17]

Many theories exist as to why religions sometimes seem to conflict with scientific knowledge. In the case
of Christianity, a relevant factor may be that it was among Christians that science in the modern sense was
developed. Unlike other religious groups, as early as the 17th century the Christian churches had to deal
directly with this new way to investigate nature and seek truth.

The perceived conflict between science and Christianity may also be partially explained by a literal
interpretation of the Bible adhered to by many Christians, both currently and historically. The Catholic
Church has always held with Augustine of Hippo, who explicitly opposed a literal interpretation of the Bible whenever the Bible conflicted with science. The literal way of reading the sacred texts became
especially prevalent after the rise of the Protestant reformation, with its emphasis on the Bible as the only
authoritative source concerning the ultimate reality.[18] This view is often shunned by both religious
leaders (who regard literally believing it as petty and look for greater meaning instead) and scientists who
regard it as an impossibility.

Some Christians have disagreed or are still disagreeing with scientists in areas such as the validity of
Keplerian astronomy, the theory of evolution, the method of creation of the universe and the Earth, and
the origins of life. On the other hand, scholars such as Stanley Jaki have suggested that Christianity and its
particular worldview was a crucial factor for the emergence of modern science. In fact, most of today's
historians are moving away from the view of the relationship between Christianity and science as one of
"conflict" - a perspective commonly called the conflict thesis.[19][20] Gary Ferngren in his historical volume
about Science & Religion states:

While some historians had always regarded the [conflict] thesis as oversimplifying and distorting a complex
relationship, in the late twentieth century it underwent a more systematic reevaluation. The result is the growing
recognition among historians of science that the relationship of religion and science has been much more positive
than is sometimes thought. Although popular images of controversy continue to exemplify the supposed hostility of
Christianity to new scientific theories, studies have shown that Christianity has often nurtured and encouraged
scientific endeavour, while at other times the two have co-existed without either tension or attempts at
harmonization. If Galileo and the Scopes trial come to mind as examples of conflict, they were the exceptions
rather than the rule.[21]

In the Bahá'í Faith, the harmony of science and religion is a central tenet.[22] The principle states that
truth is one, and therefore true science and true religion must be in harmony, thus rejecting the view that
science and religion are in conflict.[22] `Abdu'l-Bahá, the son of the founder of the religion, asserted that
science and religion cannot be opposed because they are aspects of the same truth; he also affirmed that
reasoning powers are required to understand the truths of religion and that religious teachings which are at
variance with science should not be accepted; he explained that religion has to be reasonable since God
endowed humankind with reason so that they can discover truth.[23] Shoghi Effendi, the Guardian of the
Bahá'í Faith, described science and religion as "the two most potent forces in human life."[24]

Proponents of Hinduism claim that Hinduism is not afraid of scientific explorations, nor of the
technological progress of mankind. According to them, there is a comprehensive scope and opportunity
for Hinduism to mold itself according to the demands and aspirations of the modern world; it has the
ability to align itself with both science and spiritualism. This religion uses some modern examples to
explain its ancient theories and reinforce its own beliefs. For example, some Hindu thinkers have used the
terminology of quantum physics to explain some basic concepts of Hinduism such as Maya or the illusory
and impermanent nature of our existence.

The philosophical approach known as pragmatism, as propounded by the American philosopher William
James, has been used to reconcile scientific with religious knowledge. Pragmatism, simplistically, holds
that the truth of a set of beliefs can be indicated by its usefulness in helping people cope with a particular
context of life. Thus, the fact that scientific beliefs are useful in predicting observations in the physical
world can indicate a certain truth for scientific theories; the fact that religious beliefs can be useful in
helping people cope with difficult emotions or moral decisions can indicate a certain truth for those
beliefs. (For a similar postmodern view, see grand narrative).

Religion, metaphysics, and cosmology


As both are forms of belief system, religion and philosophy meet in several areas - notably in the study of
metaphysics and cosmology. In particular, a distinct set of religious beliefs will often entail a specific
metaphysics and cosmology. That is, a religion will generally have answers to metaphysical and
cosmological questions about the nature of being, of the universe, humanity, and the divine.

Mysticism and esotericism


Mysticism focuses on methods for gaining enlightenment other than logic, though (in the case of esoteric
mysticism) it does not necessarily exclude logic. Meditative and contemplative practices such as Vipassanā
and yoga, physical disciplines such as stringent fasting and whirling (in the case of the Sufi dervishes), or
the use of psychoactive drugs such as LSD lead to altered states of consciousness that logic can never
hope to grasp. However, regarding the use of drugs, the mysticism prevalent in the 'great' religions (the
monotheisms and henotheisms, themselves perhaps relatively recent) includes systems of discipline that
forbid drugs that can damage the body, including the nervous system.

Mysticism (from a Greek root meaning "to initiate") is the pursuit of communion with, or conscious awareness of, ultimate reality, the
divine, spiritual truth, or Deity through direct, personal experience (intuition or insight) rather than
rational thought. Mystics speak of the existence of realities behind external perception or intellectual
apprehension that are central to being and directly accessible through personal experience. They say that
such experience is a genuine and important source of knowledge.

Esotericism is often spiritual (and thus religious) but can be non-religious or non-spiritual. It uses
intellectual understanding and reasoning, intuition and inspiration (higher noetic and spiritual reasoning),
but not necessarily faith (except often as a virtue), and it is philosophical in its emphasis on techniques of
psycho-spiritual transformation (esoteric cosmology). Esotericism refers to "hidden" knowledge available
only to the advanced, privileged, or initiated, as opposed to exoteric knowledge, which is public. All
religions are probably somewhat exoteric, but many traditions of ancient civilizations, such as the Yoga of
India and the mystery religions of ancient Egypt, Israel (Kabbalah), and Greece, are examples of ones that
are also esoteric.

Spirituality

Main article: Spirituality


A sadhu performing namaste in Madurai, India.

Members of an organized religion may not see any significant difference between religion and spirituality.
Or they may see a distinction between the mundane, earthly aspects of their religion and its spiritual
dimension.

Some individuals draw a strong distinction between religion and spirituality. They may see spirituality as
a belief in ideas of religious significance (such as God, the Soul, or Heaven), but not feel bound to the
bureaucratic structure and creeds of a particular organized religion. They choose the term spirituality
rather than religion to describe their form of belief, perhaps reflecting a disillusionment with organized
religion (see Major religious groups), and a movement towards a more "modern" — more tolerant, and
more intuitive — form of religion. These individuals may reject organized religion because of historical
acts by religious organizations, such as Christian Crusades and Islamic Jihad, the marginalisation and
persecution of various minorities or the Spanish Inquisition. The basic precept of the ancient spiritual
tradition of India, the Vedas, is the inner reality of existence, which is essentially a spiritual approach to
being.

Myth

Main article: Mythology

The word myth has several meanings.

1. A traditional story of ostensibly historical events that serves to unfold part of the world view of a
people or explain a practice, belief, or natural phenomenon;
2. A person or thing having only an imaginary or unverifiable existence; or
3. A metaphor for the spiritual potentiality in the human being. [25]

Ancient polytheistic religions, such as those of Greece, Rome, and Scandinavia, are usually categorized
under the heading of mythology. Religions of pre-industrial peoples, or cultures in development, are
similarly called "myths" in the anthropology of religion. The term "myth" can be used pejoratively by
both religious and non-religious people. By defining another person's religious stories and beliefs as
mythology, one implies that they are less real or true than one's own religious stories and beliefs. Joseph
Campbell remarked, "Mythology is often thought of as other people's religions, and religion can be
defined as mis-interpreted mythology."[26]

In sociology, however, the term myth has a non-pejorative meaning. There, myth is defined as a story that
is important for the group whether or not it is objectively or provably true. Examples include the death
and resurrection of Jesus, which, to Christians, explains the means by which they are freed from sin and is
also ostensibly a historical event. But from a mythological outlook, whether or not the event actually
occurred is unimportant. Instead, the symbolism of the death of an old "life" and the start of a new "life"
is what is most significant.
Urarina shaman, 1988

Cosmology

Main articles: Religious cosmology, Philosophy, Metaphysics, Esotericism, Mysticism, Spirituality,
Mythology, and Philosophy of religion

Humans have developed many different methods that attempt to answer fundamental questions about the
nature of the universe and our place in it (cosmology). Religion is only one of the methods for trying to
answer one or more of these questions. Other methods include science, philosophy, metaphysics, astrology,
esotericism, mysticism, and forms of shamanism, such as the sacred consumption of ayahuasca among the
Urarina of Peruvian Amazonia. The Urarina have an elaborate animistic cosmological system,[27] which
informs their mythology, religious orientation and daily existence. In many cases, the distinction between
these means is not clear. For example, Buddhism and Taoism have been regarded as schools of philosophy
as well as religions.

Given widespread discontent with modernity, consumerism, over-consumption, violence and anomie,
many people in the so-called industrial or post-industrial West rely on a number of distinctive religious
worldviews. This in turn has given rise to increased religious pluralism, as well as to what are commonly
known in the academic literature as new religious movements, which are gaining ground across the globe.

Criticism
Main articles: Criticism of Religion, Antireligion, Secularism, Agnosticism, and Atheism

The Canadian scholar of comparative religion Wilfred Cantwell Smith argued that religion, rather than
being a universally valid category, as is generally supposed, is a peculiarly European concept of
comparatively recent origin.

Most Western criticism of religious constructs and their social consequences has come, however, from
atheists and agnostics. Anti-religious sentiment first gathered force during the 18th century European
Enlightenment, although pioneering critics such as Voltaire and his fellow Encyclopedists were for the
most part deists. The French Revolution then instituted what later became known as secularism, a
constitutional declaration of the separation of church and state. As well as being adopted by the new
French and U.S. republics, secularism soon came to be adopted by a number of nation states, both
revolutionary and post-colonial. Marx famously declared religion to be the "opium of the people," a
statement the implications of which were applied with an iron fist in social systems inspired by his
writings, most notably in the Soviet Union and China and, most notoriously, in Cambodia. The possible
implications of the rest of Marx's celebrated sentence - that religion is "the heart of a heartless world" -
were left stubbornly unconsidered. Systematic criticism of the philosophical underpinnings of religion had
paralleled the upsurge of scientific discourse within industrial society: T.H. Huxley had in 1869 coined
the term "agnostic," a baton taken up with alacrity by such figures as Robert Ingersoll. Later, Bertrand
Russell told the world Why I am not a Christian.

Many contemporary critics consider religion irrational by definition.[28][29][30] Some assert that dogmatic
religions are in effect morally deficient, elevating to moral status ancient, arbitrary, and ill-informed rules
- taboos on eating pork, for example, as well as dress codes and sexual practices[31] - possibly designed for
reasons of hygiene or even mere politics in a bygone era.

In North America and Western Europe the social fallout of the 9/11 attacks has fertilized a flurry of
secularist tracts with titles such as The God Delusion, The End of Faith and God is not Great: How
Religion Poisons Everything. This criticism is mostly focused on the monotheistic Abrahamic traditions.

See also

Main lists: List of basic religious topics and List of religious topics

• International Association for the Scientific Study of Religion
• Code of Hammurabi
• List of religious populations
• Religions by country
• Wealth and religion
• Religion and happiness
• Religious conversion

Notes

Terrorism
From Wikipedia, the free encyclopedia


"Terrorist" redirects here. For other uses, see Terrorist (disambiguation).



Terrorism is the systematic use of terror, especially as a means of coercion.[1] There is no internationally
agreed definition of terrorism.[2][3] Most common definitions of terrorism include only those acts which are
intended to create fear (terror), are perpetrated for an ideological goal (as opposed to a lone attack), and
deliberately target or disregard the safety of non-combatants.

Some definitions also include acts of unlawful violence and war. The history of terrorist organizations
suggests that they do not select terrorism for its political effectiveness.[4] Individual terrorists tend to be
motivated more by a desire for social solidarity with other members of their organization than by political
platforms or strategic objectives, which are often murky and undefined.[4] The word "terrorism" is
politically and emotionally charged,[5] and this greatly compounds the difficulty of providing a precise
definition. One 1988 study by the US Army found that over 100 definitions of the word "terrorism" have
been used.[6] A person who practices terrorism is a terrorist. The concept of terrorism is itself
controversial because it is often used by states to delegitimize political opponents, and thus legitimize the
state's own use of terror against those opponents.

Terrorism has been used by a broad array of political organizations in furthering their objectives:
right-wing and left-wing political parties, nationalist and religious groups, revolutionaries and ruling
governments.[7] The presence of non-state actors in widespread armed conflict has created controversy
regarding the application of the laws of war.

While acts of terrorism are criminal acts under United Nations Security Council Resolution 1373 and the
domestic jurisprudence of almost all countries in the world, "terrorism" refers to a broader phenomenon,
including the actual acts, the perpetrators of acts of terrorism themselves and their motives.

Contents

• 1 Origin of term
• 2 Key criteria
• 3 Pejorative use
• 4 Definition in international law
• 5 Types
o 5.1 Democracy and domestic terrorism
• 6 Perpetrators
o 6.1 Terrorist groups
o 6.2 State sponsors
o 6.3 State terrorism
• 7 Tactics
• 8 Responses
• 9 Mass media
• 10 History
• 11 See also
• 12 Further reading
o 12.1 UN conventions
o 12.2 News monitoring websites specializing on articles on terrorism
o 12.3 Papers and articles on global terrorism
o 12.4 Papers and articles on terrorism and the United States
o 12.5 Papers and articles on terrorism and Israel
o 12.6 Muslim public opinion from the World Values Survey
o 12.7 Other

• 13 Footnotes

Origin of term


Main article: Definition of terrorism
See also: State terrorism

United Nations Security Council Resolution 1566 (2004) describes terrorism as criminal acts, including
against civilians, committed with the intent to cause death or serious bodily injury, with the purpose of
provoking a state of terror in the general public. "Terror" comes from a Latin word meaning "to
frighten." The terror cimbricus was a panic and state of emergency in Rome in response to the approach
of warriors of the Cimbri tribe in 105 BC. The Jacobins cited this precedent when imposing a Reign of
Terror during the French Revolution. After the Jacobins lost power, "terrorist" became a term of abuse.
Although the Reign of Terror was imposed by a government, in modern times "terrorism" usually refers
to the killing of innocent people by a private group in such a way as to create a media spectacle. This
meaning can be traced back to Sergey Nechayev, who described himself as a "terrorist."[8] Nechayev
founded the Russian terrorist group People's Retribution (Народная расправа) in 1869.

In November 2004, a United Nations Security Council report described terrorism as any act "intended to
cause death or serious bodily harm to civilians or non-combatants with the purpose of intimidating a
population or compelling a government or an international organization to do or abstain from doing any
act." (Note that this report does not constitute international law.)[9]

In many countries, acts of terrorism are legally distinguished from criminal acts done for other purposes,
and "terrorism" is defined by statute; see definition of terrorism for particular definitions. Common
principles among legal definitions of terrorism provide an emerging consensus as to meaning and also
foster cooperation between law enforcement personnel in different countries. Among these definitions
there are several that do not recognize the possibility of legitimate use of violence by civilians against an
invader in an occupied country, and would thus label all resistance movements as terrorist groups. Others
make a distinction between lawful and unlawful use of violence. Ultimately, the distinction is a political
judgment.[10]
Key criteria
Official definitions determine counter-terrorism policy and are often developed to serve it. Most
government definitions outline the following key criteria: target, objective, motive, perpetrator, and
legitimacy or legality of the act. Terrorism is also often recognizable by a subsequent statement from the
perpetrators claiming responsibility.

Violence – According to Walter Laqueur of the Center for Strategic and International Studies, "the only
general characteristic of terrorism generally agreed upon is that terrorism involves violence and the threat
of violence." However, the criterion of violence alone does not produce a useful definition, as it includes
many acts not usually considered terrorism: war, riot, organized crime, or even a simple assault. Property
destruction that does not endanger life is not usually considered a violent crime, but some have described
property destruction by the Earth Liberation Front and Animal Liberation Front as violence and terrorism;
see eco-terrorism.

Psychological impact and fear – The attack was carried out in such a way as to maximize the severity
and length of the psychological impact. Each act of terrorism is a “performance,” devised to have an
impact on many large audiences. Terrorists also attack national symbols to show power and to attempt to
shake the foundation of the country or society they are opposed to. This may negatively affect a
government, while increasing the prestige of the given terrorist organization and/or ideology behind a
terrorist act.[11]

Perpetrated for a political goal – Something many terrorist attacks have in common is their perpetration
for a political purpose. Terrorism is a political tactic, not unlike letter writing or protesting, that is used by
activists when they believe no other means will effect the kind of change they desire. The change is
desired so badly that failure is seen as a worse outcome than the deaths of civilians. This is often where
the interrelationship between terrorism and religion occurs. When a political struggle is integrated into the
framework of a religious or "cosmic"[12] struggle, such as over the control of an ancestral homeland or
holy site such as Israel and Jerusalem, failing in the political goal (nationalism) becomes equated with
spiritual failure, which, for the highly committed, is worse than their own death or the deaths of innocent
civilians.

Deliberate targeting of non-combatants – It is commonly held that the distinctive nature of terrorism
lies in its intentional and specific selection of civilians as direct targets. Specifically, the criminal intent is
shown when babies, children, mothers, and the elderly are murdered, or injured, and put in harm's way.
Much of the time, the victims of terrorism are targeted not because they are threats, but because they are
specific "symbols, tools, animals or corrupt beings" that tie into a specific view of the world that the
terrorist possess. Their suffering accomplishes the terrorists' goals of instilling fear, getting a message out
to an audience, or otherwise accomplishing their often radical religious and political ends.[13]

Disguise – Terrorists almost invariably pretend to be non-combatants, hide among non-combatants, fight
from in the midst of non-combatants, and, when they can, strive to mislead and provoke government
soldiers into attacking the wrong people, so that the government may be blamed for it. When an enemy is
identifiable as a combatant, the word terrorism is rarely used.[citation needed]

Unlawfulness or illegitimacy – Some official (notably government) definitions of terrorism add a
criterion of illegitimacy or unlawfulness[14] to distinguish between actions authorized by a government
(and thus "lawful") and those of other actors, including individuals and small groups. Using this criterion,
actions that would otherwise qualify as terrorism would not be considered terrorism if they were
government sanctioned. For example, firebombing a city, which is designed to affect civilian support for a
cause, would not be considered terrorism if it were authorized by a government. This criterion is
inherently problematic and is not universally accepted, because: it denies the existence of state terrorism;
the same act may or may not be classed as terrorism depending on whether its sponsorship is traced to a
"legitimate" government; "legitimacy" and "lawfulness" are subjective, depending on the perspective of
one government or another; and it diverges from the historically accepted meaning and origin of the
term.[15][16][17][18] Most dictionary definitions of the term do not include this criterion.

Pejorative use


The terms "terrorism" and "terrorist" (someone who engages in terrorism) carry strong negative
connotations. These terms are often used as political labels to condemn violence or threat of violence by
certain actors as immoral, indiscriminate, unjustified or to condemn an entire segment of a population.[19]
Those labeled "terrorists" rarely identify themselves as such, and typically use other euphemistic terms or
terms specific to their situation, such as: separatist, freedom fighter, liberator, revolutionary, vigilante,
militant, paramilitary, guerrilla, rebel, or any similar-meaning word in other languages and cultures.
Jihadi, mujaheddin, and fedayeen are similar Arabic words that have entered the English lexicon.

On the question of whether particular terrorist acts, such as murder, can be justified as the lesser evil in a
particular circumstance, philosophers have expressed different views: While, according to David Rodin,
utilitarian philosophers can in theory conceive of cases in which evil of terrorism is outweighed by goods
that can be achieved in no morally less costly way, in practice utilitarians almost universally reject
terrorism because it is very dubious that acts of terrorism achieve important goods in a utility efficient
manner, or that the "harmful effects of undermining the convention of non-combatant immunity is thought
to outweigh the goods that may be achieved by particular acts of terrorism."[20] Among the non-utilitarian
philosophers, Michael Walzer argued that terrorism is always morally wrong but at the same time those
who engaged in terrorism can be morally justified in one specific case: when "a nation or community
faces the extreme threat of complete destruction and the only way it can preserve itself is by intentionally
targeting non-combatants, then it is morally entitled to do so."[20]

In the first chapter ("Defining Terrorism") of his book Inside Terrorism, Bruce Hoffman wrote:

"On one point, at least, everyone agrees: terrorism is a pejorative term. It is a word with
intrinsically negative connotations that is generally applied to one's enemies and opponents, or to
those with whom one disagrees and would otherwise prefer to ignore. 'What is called terrorism,'
Brian Jenkins has written, 'thus seems to depend on one's point of view. Use of the term implies a
moral judgment; and if one party can successfully attach the label terrorist to its opponent, then it
has indirectly persuaded others to adopt its moral viewpoint.' Hence the decision to call someone
or label some organization 'terrorist' becomes almost unavoidably subjective, depending largely
on whether one sympathizes with or opposes the person/group/cause concerned. If one identifies
with the victim of the violence, for example, then the act is terrorism. If, however, one identifies
with the perpetrator, the violent act is regarded in a more sympathetic, if not positive (or, at the
worst, an ambivalent) light; and it is not terrorism."[5]

The pejorative connotations of the word can be summed up in the aphorism, "One man's terrorist is
another man's freedom fighter." This is exemplified when a group that uses irregular military methods is
an ally of a State against a mutual enemy, but later falls out with the State and starts to use the same
methods against its former ally. During World War II, the Malayan People’s Anti-Japanese Army was
allied with the British, but during the Malayan Emergency, members of its successor, the Malayan Races
Liberation Army, were branded terrorists by the British.[21][22] More recently, Ronald Reagan and others in
the American administration frequently called the Afghan Mujahideen freedom fighters during their war
against the Soviet Union,[23] yet twenty years later when a new generation of Afghan men are fighting
against what they perceive to be a regime installed by foreign powers, their attacks are labelled terrorism
by George W. Bush.[24][25] Groups accused of terrorism usually prefer terms that reflect legitimate military
or ideological action.[26][27][28] Leading terrorism researcher Professor Martin Rudner, director of the
Canadian Centre of Intelligence and Security Studies at Ottawa's Carleton University, defines "terrorist
acts" as attacks against civilians for political or other ideological goals, and goes on to say:

"There is the famous statement: 'One man's terrorist is another man's freedom fighter.' But that is
grossly misleading. It assesses the validity of the cause when terrorism is an act. One can have a
perfectly beautiful cause and yet if one commits terrorist acts, it is terrorism regardless."[29]

Some groups, when involved in a "liberation" struggle, have been called terrorists by the Western
governments or media. Later, these same persons, as leaders of the liberated nations, are called statesmen
by similar organizations. Two examples of this phenomenon are the Nobel Peace Prize laureates
Menachem Begin and Nelson Mandela.[30][31][32][33][34][35][36]

Sometimes states that are close allies, for reasons of history, culture and politics, can disagree over
whether members of a certain organization are terrorists. For example, for many years some branches of
the United States government refused to label members of the Irish Republican Army (IRA) as terrorists,
while the IRA was using methods against one of the United States' closest allies (Britain) that Britain branded as
terrorist attacks. This was highlighted by the Quinn v. Robinson case.[37][38]

The terms "terrorism" and "extremism" are often used interchangeably. However, there is a significant
difference between the two. Terrorism is essentially the threat or act of physical violence; extremism
involves using non-physical instruments to mobilise minds to achieve political or ideological ends. For
instance, Al Qaeda is involved in terrorism, while the Iranian revolution of 1979 is cited as a case of
extremism.[citation needed] A global research report, An Inclusive World (2007), asserts that extremism
poses a more serious threat than terrorism in the decades to come.

For these and other reasons, media outlets wishing to preserve a reputation for impartiality are extremely
careful in their use of the term.[39][40]

Definition in international law


There are several International conventions on terrorism with somewhat different definitions.[41] The
United Nations sees this lack of agreement as a serious problem.[41]

Types
In the spring of 1975, the Law Enforcement Assistance Administration in the United States formed the
National Advisory Committee on Criminal Justice Standards and Goals. One of the five volumes that the
committee wrote was entitled Disorders and Terrorism, produced by the Task Force on Disorders and
Terrorism under the direction of H.H.A. Cooper, Director of the Task Force staff.[42] The Task Force classified
terrorism into six categories.

• Civil Disorders – A form of collective violence interfering with the peace, security, and normal
functioning of the community.
• Political Terrorism – Violent criminal behaviour designed primarily to generate fear in the
community, or substantial segment of it, for political purposes.
• Non-Political Terrorism – Terrorism that is not aimed at political purposes but which exhibits
“conscious design to create and maintain high degree of fear for coercive purposes, but the end is
individual or collective gain rather than the achievement of a political objective.”
• Quasi-Terrorism – The activities incidental to the commission of crimes of violence that are
similar in form and method to genuine terrorism but which nevertheless lack its essential
ingredient. It is not the main purpose of the quasi-terrorists to induce terror in the immediate
victim as in the case of genuine terrorism, but the quasi-terrorist uses the modalities and
techniques of the genuine terrorist and produces similar consequences and reaction. For example,
the fleeing felon who takes hostages is a quasi-terrorist, whose methods are similar to those of the
genuine terrorist but whose purposes are quite different.
• Limited Political Terrorism – Genuine political terrorism is characterized by a revolutionary
approach; limited political terrorism refers to "acts of terrorism which are committed for
ideological or political motives but which are not part of a concerted campaign to capture control
of the State."
• Official or State Terrorism – referring to "nations whose rule is based upon fear and oppression
that reach similar to terrorism or such proportions." It may also be referred to as Structural
Terrorism, defined broadly as terrorist acts carried out by governments in pursuit of political
objectives, often as part of their foreign policy.

In an analysis prepared for U.S. Intelligence,[43] four typologies are mentioned:

• Nationalist-Separatist
• Religious Fundamentalist
• New Religious
• Social Revolutionary

Democracy and domestic terrorism

The relationship between domestic terrorism and democracy is complex. Such terrorism is most common
in nations with intermediate political freedom, and the nations with the least terrorism are the most
democratic.[44][45][46][47] However, one study suggests that suicide terrorism may be an exception to
this general rule. Evidence regarding this particular method of terrorism reveals that every modern suicide
campaign has targeted a democracy – a state with a considerable degree of political freedom. The study
suggests that concessions awarded to terrorists during the 1980s and 1990s for suicide attacks increased
their frequency.[48]

Some examples of "terrorism" in non-democracies include ETA in Spain under Francisco Franco, the
Shining Path in Peru under Alberto Fujimori, the Kurdistan Workers Party when Turkey was ruled by
military leaders and the ANC in South Africa. Democracies, such as the United States, Israel, and the
Philippines, also have experienced domestic terrorism.

While a democratic nation espousing civil liberties may claim a sense of higher moral ground than other
regimes, an act of terrorism within such a state may cause a perceived dilemma: whether to maintain its
civil liberties and thus risk being perceived as ineffective in dealing with the problem; or alternatively to
restrict its civil liberties and thus risk delegitimizing its claim of supporting civil liberties. This dilemma,
some social theorists would conclude, may very well play into the initial plans of the acting terrorist(s);
namely, to delegitimize the state.[49]

Perpetrators
Acts of terrorism can be carried out by individuals, groups, or states. According to some definitions,
clandestine or semi-clandestine state actors may also carry out terrorist acts outside the framework of a
state of war. However, the most common image of terrorism is that it is carried out by small and secretive
cells, highly motivated to serve a particular cause. Many of the most deadly operations in recent times,
such as 9/11, the London underground bombings, and the 2002 Bali bombing, were planned and carried
out by a close clique composed of close friends, family members and other strong social networks. These
groups benefited from the free flow of information and efficient telecommunications to succeed where
others had failed.[50] Over the years, many people have attempted to come up with a terrorist profile to
attempt to explain these individuals' actions through their psychology and social circumstances. Others,
like Roderick Hindery, have sought to discern profiles in the propaganda tactics used by terrorists.

It has been found that a "terrorist" will look, dress, and behave like a normal person, such as a university
student, until he or she executes the assigned mission. Terrorist profiling based on personality, physical,
or sociological traits would not appear to be particularly useful. The physical and behavioral description
of the terrorist could describe almost any normal young person.[51]

Terrorist groups

Main articles: List of designated terrorist organizations and Lone wolf (terrorism)

State sponsors

Main article: State-sponsored terrorism

A state can sponsor terrorism by funding or harboring a terrorist organization. Opinions as to which acts
of violence by states constitute state-sponsored terrorism vary widely. When states provide funding
for groups considered by some to be terrorist, they rarely acknowledge them as such.

State terrorism

Main article: State terrorism


"Civilization is based on a clearly defined and widely accepted yet often unarticulated
hierarchy. Violence done by those higher on the hierarchy to those lower is nearly always
invisible, that is, unnoticed. When it is noticed, it is fully rationalized. Violence done by those
lower on the hierarchy to those higher is unthinkable, and when it does occur is regarded with
shock, horror, and the fetishization of the victims."
— Derrick Jensen [52]

The concept of state terrorism is controversial.[53] Military actions by states during war are usually not
considered terrorism, even when they involve significant civilian casualties.[citation needed] The Chairman of
the United Nations Counter-Terrorism Committee has stated that the Committee was conscious of the 12
international Conventions on the subject, and none of them referred to State terrorism, which was not an
international legal concept. If States abused their power, they should be judged against international
conventions dealing with war crimes, international human rights and international humanitarian law.[4]
Former United Nations Secretary-General Kofi Annan has said that it is "time to set aside debates on so-
called 'state terrorism'. The use of force by states is already thoroughly regulated under international
law"[54] However, he also made clear that, "...regardless of the differences between governments on the
question of definition of terrorism, what is clear and what we can all agree on is any deliberate attack on
innocent civilians, regardless of one's cause, is unacceptable and fits into the definition of terrorism."[55]

State terrorism has been used to refer to terrorist acts committed by governmental agents or forces. This
involves the use of state resources employed by a state's foreign policy, such as using its military to
directly perform acts considered to be state terrorism. Professor of political science Michael Stohl cites
examples that include Germany's bombing of London and the U.S. atomic destruction of Hiroshima during
World War II. He argues that "the use of terror tactics is common in international relations and the
state has been and remains a more likely employer of terrorism within the international system than
insurgents." He also cites the first-strike option as an example of the "terror of coercive diplomacy", a
form of state terrorism which holds the world "hostage" with the implied threat of using nuclear weapons
in "crisis management." He argues that institutionalized forms of terrorism occurred as a result of changes
that took place following World War II. In this analysis, state terrorism exhibited as a form of foreign
policy was shaped by the presence and use of weapons of mass destruction, and the legitimizing of such
violent behavior led to an increasingly accepted form of this state behavior. (Michael Stohl, "The
Superpowers and International Terror," paper presented at the Annual Meeting of the International Studies
Association, Atlanta, March 27-April 1, 1984; "Terrible beyond Endurance? The Foreign Policy of State
Terrorism," 1988; The State as Terrorist: The Dynamics of Governmental Violence and Repression, 1984,
p. 49.)

State terrorism has also been used to describe peacetime actions by governmental agents or forces,
such as the bombing of Pan Am Flight 103. Charles Stewart Parnell described William Gladstone's
Irish Coercion Act as terrorism in his "No Rent Manifesto" in 1881, during the Irish Land War.[5] The
concept is also used to describe political repression by governments against their own civilian populations
with the purpose of inciting fear. For example, taking and executing civilian hostages or extrajudicial
elimination campaigns are commonly considered "terror" or terrorism, as during the Red Terror or the
Great Terror.[56] Such actions are often also described as democide, which has been argued to be
equivalent to state terrorism.[57] Empirical studies on this have found that democracies have little
democide.[58][59]

Tactics
Main article: Tactics of terrorism

Terrorism is a form of asymmetric warfare, and is more common when direct conventional warfare either
cannot be waged (due to differences in available forces) or is not being used to resolve the underlying
conflict.

The context in which terrorist tactics are used is often a large-scale, unresolved political conflict. The type
of conflict varies widely; historical examples include:

• Secession of a territory to form a new sovereign state
• Dominance of territory or resources by various ethnic groups
• Imposition of a particular form of government
• Economic deprivation of a population
• Opposition to a domestic government or occupying army

Terrorist attacks are often targeted to maximize fear and publicity. They usually use explosives or poison,
but there is also concern about terrorist attacks using weapons of mass destruction. Terrorist organizations
usually methodically plan attacks in advance, and may train participants, plant "undercover" agents, and
raise money from supporters or through organized crime. Communication may occur through modern
telecommunications, or through old-fashioned methods such as couriers.

Responses
Main article: Responses to terrorism
Responses to terrorism are broad in scope. They can include re-alignments of the political spectrum and
reassessments of fundamental values. The term counter-terrorism has a narrower connotation, implying
that it is directed at terrorist actors.

Specific types of responses include:

• Targeted laws, criminal procedures, deportations, and enhanced police powers
• Target hardening, such as locking doors or adding traffic barriers
• Pre-emptive or reactive military action
• Increased intelligence and surveillance activities
• Pre-emptive humanitarian activities
• More permissive interrogation and detention policies
• Official acceptance of torture as a valid tool

Mass media

Media exposure may be a primary goal of those carrying out terrorism, to expose issues that would
otherwise be ignored by the media. Some consider this to be manipulation and exploitation of the
media.[60] Others consider terrorism itself to be a symptom of a highly controlled mass media, which does
not otherwise give voice to alternative viewpoints, a view expressed by Paul Watson, who has stated that
controlled media is responsible for terrorism because "you cannot get your information across any other
way". Paul Watson's organization Sea Shepherd has itself been branded "eco-terrorist", although it claims
not to have caused any casualties.

The mass media will often censor organizations involved in terrorism (through self-restraint or regulation)
to discourage further terrorism. However, this may encourage organizations to perform more extreme acts
of terrorism to be shown in the mass media.

There is always a point at which the terrorist ceases to manipulate the media gestalt. A point at which the violence
may well escalate, but beyond which the terrorist has become symptomatic of the media gestalt itself. Terrorism as
we ordinarily understand it is innately media-related.

—Novelist William Gibson[61]

History
Main article: History of terrorism

The term "terrorism" was originally used to describe the actions of the Jacobin Club during the "Reign of
Terror" in the French Revolution. "Terror is nothing other than justice, prompt, severe, inflexible," said
Jacobin leader Maximilien Robespierre. In 1795, Edmund Burke denounced the Jacobins for letting
"thousands of those hell hounds called terrorists" loose upon the people of France.

In January 1858, Italian patriot Felice Orsini threw three bombs in an attempt to assassinate French
Emperor Napoleon III.[62] Eight bystanders were killed and 142 injured.[62] The incident played a crucial
role as an inspiration for the development of the early Russian terrorist groups.[62] Russian Sergey
Nechayev, who founded People's Retribution in 1869, described himself as a "terrorist", an early example
of the term being employed in its modern meaning.[8] Nechayev's story is told in fictionalized form by
Fyodor Dostoevsky in the novel The Possessed. German anarchist writer Johann Most dispensed "advice
for terrorists" in the 1880s.[63]
See also
• List of terrorist incidents
• List of terrorist organisations
• 9/11
• 7/7
• Abortion clinic bombers
• Agent provocateur
• Christian Terrorism
• Colombian Armed Conflict (1960s–present)
• Communist Terrorism
• Conspiracy theory
• Counter-terrorism
• Cyber-terrorism
• Destructive cult
• Domestic terrorist (United States)
• Eco-terrorism
• False flag operations
• Hate crime
• Hate group
• Hirabah
• Indoctrination
• Islamic Terrorism
• Middle east
• Narcoterrorism
• Northern Ireland
• Nuclear 9/11
• Propaganda
• Sikh Extremism
• Strategy of tension
• Suicide attack
• Symbionese Liberation Army
• Ten Threats identified by the United Nations
• Terror bombing
• Terrorism insurance
• Terrorist Screening Center
• Unconventional warfare
• Weather Underground
• World Trade Center

Further reading


Night
From Wikipedia, the free encyclopedia



This article is about the time of day. For other uses, see Night (disambiguation).

A composite satellite image of the earth at night.

Night or nighttime is the period of time when the sun is below the horizon. The opposite of night is day
(or "daytime" to distinguish it from "day" as used for a 24-hour period). The times of nightfall and dawn
vary based on factors such as season, latitude, longitude and time zone.

Contents

• 1 Duration and geography


• 2 On other celestial bodies
• 3 Impact on life
• 4 Humans and the night
o 4.1 Social and economic factors
o 4.2 Cultural aspects
• 5 See also

• 6 References

Duration and geography


Nights are shorter than days on average due to two factors. One, the sun is not a point, but has an apparent
size of about 32 minutes of arc. Two, the atmosphere refracts sunlight so that some of it reaches the
ground even when the sun is below the horizon by about 34 minutes of arc. The combination of these two
factors means that light reaches the ground when the centre of the sun is below the horizon by about 50
minutes of arc. Without these effects, day and night would be the same length at the autumnal
(autumn/fall) and vernal (spring) equinoxes, the moments when the sun passes over the equator. In reality,
around the equinoxes the day is almost 14 minutes longer than the night at the equator, and the difference
is even larger closer to the poles. The summer and winter solstices mark the shortest night and the longest
night, respectively.

The closer a location is to the North or South Pole, the larger the range of variation in the night's length.
Although equinoxes occur with a day and night close to equal length, before and after an equinox the ratio
of night to day changes more rapidly in locations near the poles than in locations between the Tropic of
Cancer and the Tropic of Capricorn. In the Northern Hemisphere, Denmark has shorter nights in June
than India has. In the Southern Hemisphere, Antarctica has longer nights in June than Chile has. The
Northern and Southern Hemispheres of the world experience the same patterns of night length at the same
latitudes, but the cycles are 6 months apart so that one hemisphere experiences long nights (winter) while
the other is experiencing short nights (summer).

Between the polar circle and the pole, the variation in daylight hours is so extreme that for a portion of
the summer there is no intervening night between consecutive days, and in the winter there is a period
during which there is no intervening day between consecutive nights.

On other celestial bodies


The phenomenon of day and night is due to the rotation of a celestial body about its axis, creating the
illusion of the sun rising and setting. Different bodies spin at very different rates, however. Some may
spin much faster than Earth, while others spin extremely slowly, leading to very long days and nights. The
planet Venus rotates once every 243 days – by far the slowest rotation period of any of the major
planets. In contrast, the gas giant Jupiter's sidereal day is only 9 hours and 56 minutes.[1] A planet may
experience large temperature variations between day and night, such as Mercury, the closest planet to the
sun. This is one consideration in terms of planetary habitability or the possibility of extraterrestrial life.

Impact on life

Bats are just one of the thousands of species of animals that are active during the night

The disappearance of sunlight, the primary energy source for life on Earth, has dramatic impacts on the
morphology, physiology and behavior of almost every organism. Some animals sleep during the night,
while other nocturnal animals, including moths and crickets, are active during this time. The effects of day
and night are not seen in the animal kingdom alone; plants have also evolved adaptations to cope best
with the lack of sunlight during this time. For example, crassulacean acid metabolism is a unique type of
carbon fixation which allows photosynthetic plants to store carbon dioxide in their tissues as organic acids
during the night, which can then be used during the day to synthesize carbohydrates. This allows them to
keep their stomata closed during the daytime, preventing transpiration of precious water.

Humans and the night


Social and economic factors
A busy street at nighttime

Throughout most of history, night has primarily been a time of resting and sleep for humans, since little
work or labor can be done in the dark. On the other hand, clandestine activities such as romance, sex,
prostitution, and criminal and police activity flourish at night.

As artificial lighting has improved, especially after the Industrial Revolution, night-time activity has
increased and become a significant part of the economy in most places. Many establishments, such as
nightclubs, bars, convenience stores, fast-food restaurants, gas stations, distribution facilities, and police
stations now operate 24 hours a day or stay open as late as 1 or 2 a.m. Even without artificial light,
moonlight sometimes makes it possible to travel or work outdoors at night. The phrase "The night is
young" refers to the period when the sun is below the horizon and not the period before midnight.

Cultural aspects


Nótt, the personification of night in Norse mythology, rides her horse in this 19th century painting by
Peter Nicolai Arbo.

Night is often associated with danger and evil, because bandits and dangerous animals can be concealed
by darkness. The belief in magic often includes the idea that magic and magicians are more powerful at
night. Similarly, mythical and folkloric creatures such as vampires and werewolves are thought to be more
active at night. Ghosts are believed to wander around almost exclusively during night-time. In almost all
cultures, there exist stories and legends warning of the dangers of night-time. In fact, the Saxons called
the darkness of night the 'death mist'.[citation needed]
See also

• Earth clock
• Midnight
• Night sky
• Nightlife
• Nocturne
• Olbers' paradox

References

Day
From Wikipedia, the free encyclopedia



Water, Rabbit, and Deer: three of the 20 day symbols in the Aztec calendar, from the Aztec calendar
stone.
For other uses, see Day (disambiguation).

A day (symbol d) is a unit of time equivalent to approximately 24 hours. It is not an SI unit but it is
accepted for use with SI.[1] The SI unit of time is the second.

The word 'day' can also refer to the (roughly) half of the day that is not night, also known as 'daytime'.
Both refer to a length of time. Within these meanings, several definitions can be distinguished. 'Day' may
also refer to a 'point' in time, as in answer to the question "On which day?".
The term comes from the Old English dæg, with similar terms common in all other Indo-European
languages, such as Tag in German and dive in Sanskrit.

Contents

• 1 International System of Units (SI)


• 2 Astronomy
• 3 Colloquial
• 4 Introduction
• 5 Civil day
• 6 Leap seconds
• 7 Astronomy
• 8 Boundaries of the day
• 9 Metaphorical days
• 10 24 hours vs daytime
• 11 See also
• 12 Notes and references

• 13 External links

International System of Units (SI)


A day is defined as 86,400 seconds. The International Bureau of Weights and Measures (BIPM) currently
defines a second as

… the duration of 9 192 631 770 periods of the radiation corresponding to the transition between two hyperfine
levels of the ground state of the caesium 133 atom.[2]

This makes the SI day last exactly 794,243,384,928,000 of those periods.
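
That figure is easy to verify with a one-line arithmetic check (a trivial sketch):

    # One SI day is 86,400 s, and one second is 9,192,631,770 caesium-133
    # periods of radiation, so:
    print(9_192_631_770 * 86_400)  # 794243384928000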

In the 19th century it was also suggested to make a decimal fraction (1⁄10,000 or 1⁄100,000) of an
astronomical day the base unit of time. This was a holdover from decimal time and the decimal calendar,
which had already been abandoned.

Astronomy
A day of exactly 86,400 SI seconds is the fundamental unit of time in astronomy.

For a given planet, there are two types of day defined in astronomy:

• 1 apparent sidereal day - a single rotation of a planet with respect to the distant stars (for Earth it is
23.934 hours);
• 1 solar day - a single rotation of a planet with respect to its star.

Colloquial
The word refers to various relatedly defined ideas, including the following:
• the period of light when the Sun is above the local horizon (i.e., the time period from sunrise to
sunset);
• the full day covering a dark and a light period, beginning from the beginning of the dark period or
from a point near the middle of the dark period;
• a full dark and light period, sometimes called a nychthemeron in English, from the Greek for
night-day;
• the time period from 6:00 AM to 6:00 PM or 9:00 PM or some other fixed clock period
overlapping or set off from other time periods such as "morning", "evening", or "night".

Dagr, the Norse god of the day, rides his horse in this 19th century painting by Peter Nicolai Arbo.

Introduction
The word day is used for several different units of time based on the rotation of the Earth around its axis.
The most important one follows the apparent motion of the Sun across the sky (solar day). The reason for
this apparent motion is the rotation of the Earth around its axis, as well as the revolution of the Earth in its
orbit around the Sun.

A day, as opposed to night, is commonly defined as the period during which sunlight directly reaches the
ground, assuming that there are no local obstacles. Two effects make days on average longer than nights.
The Sun is not a point, but has an apparent size of about 32 minutes of arc. Additionally, the atmosphere
refracts sunlight in such a way that some of it reaches the ground even when the Sun is below the horizon
by about 34 minutes of arc. So the first light reaches the ground when the centre of the Sun is still below
the horizon by about 50 minutes of arc. The difference in time depends on the angle at which the Sun rises
and sets (itself a function of latitude), but amounts to at least about seven minutes.

Ancient custom has a new day start at either the rising or setting of the Sun on the local horizon (Italian
reckoning, for example). The exact moment of, and the interval between, two sunrises or two sunsets
depends on the geographical position (longitude as well as latitude), and the time of year. This is the time
as indicated by ancient hemispherical sundials.

A more constant day can be defined by the Sun passing through the local meridian, which happens at
local noon (upper culmination) or midnight (lower culmination). The exact moment is dependent on the
geographical longitude, and to a lesser extent on the time of the year. The length of such a day is nearly
constant (24 hours ± 30 seconds). This is the time as indicated by modern sundials.
A further improvement defines a fictitious mean Sun that moves with constant speed along the celestial
equator; the speed is the same as the average speed of the real Sun, but this removes the variation over a
year as the Earth moves along its orbit around the Sun (due to both its velocity and its axial tilt).

The Earth's day has increased in length over time. The original length of one day, when the Earth was new
about 4.5 billion years ago, was about six hours as determined by computer simulation. It was 21.9 hours
620 million years ago as recorded by rhythmites (alternating layers in sandstone). This phenomenon is
due to tides raised by the Moon which slow Earth's rotation. Because of the way the second is defined, the
mean length of a day is now about 86,400.002 seconds, and is increasing by about 1.7 milliseconds per
century (an average over the last 2,700 years). See tidal acceleration for details.

Civil day


For civil purposes a common clock time has been defined for an entire region based on the mean local
solar time at some central meridian. Such time zones began to be adopted about the middle of the 19th
century when railroads with regular schedules came into use, with most major countries having adopted
them by 1929. For the whole world, 40 such time zones are now in use. The main one is "world time" or
Coordinated Universal Time (UTC).

The present common convention has the civil day starting at midnight, which is near the time of the lower
culmination of the mean Sun on the central meridian of the time zone. A day is commonly divided into 24
hours of 60 minutes of 60 seconds each.

Leap seconds


In order to keep the civil day aligned with the apparent movement of the Sun, positive or negative leap
seconds may be inserted.

A civil clock day is typically 86,400 SI seconds long, but will be 86,401 s or 86,399 s long in the event of
a leap second.

Leap seconds are announced in advance by the International Earth Rotation and Reference Systems
Service which measures the Earth's rotation and determines whether a leap second is necessary. Leap
seconds occur only at the end of a UTC month, and have only ever been inserted at the end of June 30 or
December 31.

Astronomy
In astronomy, the sidereal day is also used; it is about 3 minutes 56 seconds shorter than the solar day, and
close to the actual rotation period of the Earth, as opposed to the Sun's apparent motion. In fact, the Earth
spins 366 times about its axis during a 365-day year, because the Earth's revolution about the Sun
removes one apparent turn of the Sun about the Earth.
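
The arithmetic behind this can be checked directly. In the sketch below (our own illustration, using the
approximate tropical-year length of 365.2422 solar days), the one extra rotation per year shortens each
day by just under four minutes:

    # The Earth makes about 366.2422 rotations during a tropical year of
    # about 365.2422 solar days; the extra turn is contributed by the orbit.
    solar_day_hours = 24.0
    sidereal_day_hours = solar_day_hours * 365.2422 / 366.2422
    print(sidereal_day_hours)                               # ~23.9345 h, i.e. 23 h 56 min 4 s
    print((solar_day_hours - sidereal_day_hours) * 3600.0)  # ~236 s, about 3 min 56 s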

Boundaries of the day


For most diurnal animals, the day naturally begins at dawn and ends at sunset. Humans, with our cultural
norms and scientific knowledge, have supplanted Nature with several different conceptions of the day's
boundaries. The Jewish day begins at either sunset or at nightfall (when three second-magnitude stars
appear). Medieval Europe followed this tradition, known as Florentine reckoning: in this system, a
reference like "two hours into the day" meant two hours after sunset and thus times during the evening
need to be shifted back one calendar day in modern reckoning. Days such as Christmas Eve, Halloween,
and the Eve of Saint Agnes are the remnants of the older pattern when holidays began the evening before.
Present common convention is for the civil day to begin at midnight, that is 00:00 (inclusive), and last a
full twenty-four hours until 24:00 (exclusive).

In ancient Egypt, the day was reckoned from sunrise to sunrise. Muslims fast from daybreak to sunset
each day of the month of Ramadan. The "Damascus Document", copies of which were also found among
the Dead Sea scrolls, states regarding Sabbath observance that "No one is to do any work on Friday from
the moment that the sun's disk stands distant from the horizon by the length of its own diameter,"
presumably indicating that the monastic community responsible for producing this work counted the day
as ending shortly before the sun had begun to set.

In the United States, nights are named after the previous day, e.g. "Friday night" usually means the entire
night between Friday and Saturday. This is the opposite of the Jewish pattern. This difference from the
civil day often leads to confusion. Events starting at midnight are often announced as occurring the day
before. TV guides tend to list nightly programs under the previous day, although programming a VCR
requires the strict logic of starting the new day at 00:00 (to confuse the issue further, VCRs set to the 12-
hour clock notation will label this "12:00 AM"). Expressions like "today", "yesterday" and "tomorrow"
become ambiguous during the night.

Validity of tickets, passes, etc., for a day or a number of days may end at midnight, or closing time, when
that is earlier. However, if a service (e.g. public transport) operates from e.g. 6:00 to 1:00 the next day
(which may be noted as 25:00), the last hour may well count as being part of the previous day (also for the
arrangement of the timetable). For services depending on the day ("closed on Sundays", "does not run on
Fridays", etc.) there is a risk of ambiguity. As an example, for the Nederlandse Spoorwegen (Dutch
Railways), a day ticket is valid 28 hours, from 0:00 to 28:00 (i.e. 4:00 the next day). To give another
example, the validity of a pass on London Regional Transport services is until the end of the "transport
day", that is to say, until 4:30 am on the day after the "expiry" date stamped on the pass.

[edit] Metaphorical days


In the Bible, as a way to describe that time is immaterial to God, one day is described as being like one
thousand years (Psalms 90:4, 2 Peter 3:8) to him. Also in 2 Peter 3:8, one thousand years is described as
being like one day. However, some Bible experts interpret this more literally, as a key to understanding
prophecies such as those in the Book of Daniel and the Book of Revelation, in which days stand for
weeks or years.

[edit] 24 hours vs daytime


To distinguish between a full day and daytime, the word nychthemeron may be used for the former, or
more colloquially the term '24 hours'. In other languages, '24 hours' is also often used. Some languages
have a separate word for a full day, such as 'etmaal' in Dutch and 'сутки' in Russian. German and French
have no comparable single word. In Spanish, 'singladura' is used, but only as a nautical unit: the
distance sailed in 24 hours.[1]

[edit] See also


• 1 E4 s, Times from 10 kiloseconds to 100 kiloseconds
• Calculating the day of the week
• Dagr
• Daylight
• Daylight saving time
• Season, for a discussion of daylight and darkness near the poles and the equator and places in-
between
• Week

[edit] Notes and references


Calendar

From Wikipedia, the free encyclopedia

A calendar is a system of organizing days for a social, religious, commercial or administrative purpose.
This organization is done by giving names to periods of time – typically days, weeks, months and years.
The name given to each day is known as a date. Periods in a calendar (such as years and months) are
usually, though not necessarily, synchronized with the cycles of some astronomical phenomenon, such as
the cycle of the sun, or the moon. Many civilizations and societies have devised a calendar, usually
derived from other calendars on which they model their systems, suited to their particular needs.

A calendar is also a physical device (often paper). This is the most common usage of the word. Other
similar types of calendars can include computerized systems, which can be set to remind the user of
upcoming events and appointments.

As a subset, calendar is also used to denote a list of a particular set of planned events (for example, a
court calendar).

The English word calendar is derived from the Latin word kalendae, which was the Latin name of the
first day of every month.[1]

Contents
[hide]

• 1 Calendar systems
o 1.1 Solar calendars
 1.1.1 Days used by solar calendars
 1.1.2 Calendar reform
o 1.2 Lunar calendars
• 2 Calendar subdivisions
• 3 Other calendar types
o 3.1 Arithmetic and astronomical calendars
o 3.2 Complete and incomplete calendars
• 4 Uses
• 5 Currently used calendars
o 5.1 Fiscal calendars
• 6 Gregorian calendar with Easter Sunday
• 7 Physical calendars
• 8 Legal
• 9 Calendars in computing
o 9.1 Layout
• 10 See also
o 10.1 List of calendars
• 11 Sources
• 12 References

• 13 External links

[edit] Calendar systems


A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a
full calendar system; neither is a system to name the days within a year without a system for identifying
the years.

The simplest calendar system just counts time periods from a reference date. This applies to the Julian
day. Virtually the only possible variation is using a different reference date, in particular one less distant
in the past to make the numbers smaller. Computations in these systems are just a matter of addition and
subtraction.
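
A standard conversion from a Gregorian calendar date to a Julian Day Number illustrates this: once dates are plain counts, intervals are simple subtraction. A minimal sketch using one widely published integer algorithm, valid for dates in the (proleptic) Gregorian calendar:

def jdn(year, month, day):
    """Julian Day Number for a proleptic Gregorian date."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

print(jdn(2000, 1, 1))                    # 2451545
print(jdn(2000, 3, 1) - jdn(2000, 1, 1))  # 60 days between the two dates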

Other calendars have one (or multiple) larger units of time.

Calendars that contain one level of cycles:

• week and weekday – this system (without year, the week number keeps on increasing) is not very
common
• year and ordinal date within the year, e.g. the ISO 8601 ordinal date system

Calendars with two levels of cycles:

• year, month, and day – most systems, including the Gregorian calendar (and its very similar
predecessor, the Julian calendar), the Islamic calendar, and the Hebrew calendar
• year, week, and weekday – e.g. the ISO week date
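
As a concrete illustration of the ordinal-date and ISO week-date systems listed above, Python's standard library can produce both identifications of the same day (datetime implements the proleptic Gregorian calendar and the ISO 8601 week rules):

from datetime import date

d = date(2005, 1, 1)           # a Saturday
print(d.timetuple().tm_yday)   # 1 -> ordinal date 2005-001
print(tuple(d.isocalendar()))  # (2004, 53, 6) -> ISO week date 2004-W53-6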

Cycles can be synchronized with periodic phenomena:

• A lunar calendar is synchronized to the motion of the Moon (lunar phases); an example is the
Islamic calendar.
• A solar calendar is based on perceived seasonal changes synchronized to the apparent motion of
the Sun; an example is the Persian calendar.
• There are some calendars that appear to be synchronized to the motion of Venus, such as some of
the ancient Egyptian calendars; synchronization to Venus appears to occur primarily in
civilizations near the Equator.
• The week cycle is an example of one that is not synchronized to any external phenomenon
(although it may have been derived from lunar phases, beginning anew every month).

Very commonly a calendar includes more than one type of cycle, or has both cyclic and acyclic elements.
A lunisolar calendar is synchronized both to the motion of the moon and to the apparent motion of the
sun; an example is the Hebrew calendar.

Many calendars incorporate simpler calendars as elements. For example, the rules of the Hebrew calendar
depend on the seven-day week cycle (a very simple calendar), so the week is one of the cycles of the
Hebrew calendar. It is also common to operate two calendars simultaneously, usually providing unrelated
cycles, and the result may also be considered a more complex calendar. For example, the Gregorian
calendar has no inherent dependence on the seven-day week, but in Western society the two are used
together, and calendar tools indicate both the Gregorian date and the day of week.[2]

The week cycle is shared by various calendar systems (although the significance of special days such as
Friday, Saturday, and Sunday varies). Systems of leap days usually do not affect the week cycle. The
week cycle was not even interrupted when 10, 11, 12, or 13 dates were skipped when the Julian calendar
was replaced by the Gregorian calendar by various countries.

[edit] Solar calendars

Main article: Solar calendar

[edit] Days used by solar calendars

Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and
sunset, with a following period of night, or it may be a period between successive events such as two
sunsets. The length of the interval between two such successive events may be allowed to vary slightly
during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar
day.

[edit] Calendar reform

Main article: Calendar reform

There have been a number of proposals for reform of the calendar, such as the World Calendar,
International Fixed Calendar and Holocene calendar. The United Nations considered adopting such a
reformed calendar for a while in the 1950s, but these proposals have lost most of their popularity.

[edit] Lunar calendars

Main article: Lunar calendar

Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within
each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the
tropical year, a purely lunar calendar quickly drifts against the seasons, which don't vary much near the
equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the
Islamic calendar.
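
The size of the drift is easy to estimate; a minimal sketch using approximate mean values for the synodic month and tropical year:

synodic_month = 29.53059  # mean lunar phase cycle, in days
tropical_year = 365.2422  # mean cycle of the seasons, in days

lunar_year = 12 * synodic_month      # ~354.37 days
drift = tropical_year - lunar_year   # ~10.88 days of drift per year
print(drift, tropical_year / drift)  # cycles through all seasons in ~33.6 years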

A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign
the months with the seasons. An example is the Hebrew calendar which uses a 19-year cycle.
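
The 19-year (Metonic) cycle works because 19 tropical years and 235 synodic months are almost exactly equal in length, as a quick check shows:

print(19 * 365.2422)   # 6939.60 days in 19 tropical years
print(235 * 29.53059)  # 6939.69 days in 235 synodic months
# The ~0.09-day (about 2-hour) mismatch per cycle is why even a
# lunisolar calendar slowly drifts over many centuries.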

Lunar calendars are believed to be the oldest calendars invented by mankind. Cro-Magnon people are
claimed to have invented one around 32,000 BC.

[edit] Calendar subdivisions


Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar
calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of
seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the
month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods
such as the week.

Because the number of days in the tropical year is not a whole number, a solar calendar must have a
different number of days in different years. This may be handled, for example, by adding an extra day (29
February) in leap years. The same applies to months in a lunar calendar and also the number of months in
a year in a lunisolar calendar. This is generally known as intercalation. Even when a calendar is solar
rather than lunar, the year cannot be divided entirely into months that never vary in length.
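
The Gregorian intercalation rule is a compact example: a year gets the extra day if it is divisible by 4, except century years, which must also be divisible by 400. A minimal sketch:

def is_leap(year):
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2000), is_leap(1900), is_leap(2004))  # True False True
# Mean year: 365 + 1/4 - 1/100 + 1/400 = 365.2425 days,
# close to the tropical year of ~365.2422 days.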

Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities
that do not easily coincide with months or years. Many cultures use different baselines for their calendars'
starting years. For example, the year in Japan is based on the reign of the current emperor: 2006 was Year
18 of the Emperor Akihito.

See Decade, Century, Millennium

[edit] Other calendar types


[edit] Arithmetic and astronomical calendars

An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar
and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to
as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually
accurate. The disadvantage is that working out when a particular date would occur is difficult.

An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish
calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is
the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy.
Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to
changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand
years. After that, the rules would need to be revised on the basis of observations made since the
calendar was devised.

[edit] Complete and incomplete calendars

Calendars may be either complete or incomplete. Complete calendars provide a way of naming each
consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of
designating the days of the winter months other than to lump them together as "winter", is an example of
an incomplete calendar, while the Gregorian calendar is an example of a complete calendar.

[edit] Uses
The primary practical use of a calendar is to identify days: to be informed about and/or to agree on a
future event and to record an event that has happened. Days may be significant for civil, religious or
social reasons. For example, a calendar provides a way to determine which days are religious or civil
holidays, which days mark the beginning and end of business accounting periods, and which days have
legal significance, such as the day taxes are due or a contract expires. Also a calendar may, by identifying
a day, provide other useful information about the day such as its season.

Calendars are also used to help people manage their personal schedules, time and activities, particularly
when individuals have numerous work, school, and family commitments. People frequently use multiple
systems, and may keep both a business and family calendar to help prevent them from overcommitting
their time.

Calendars are also used as part of a complete timekeeping system: date and time of day together specify a
moment in time. In the modern world, written calendars are no longer an essential part of such systems, as
the advent of accurate clocks has made it possible to record time independently of astronomical events.

[edit] Currently used calendars


Calendars in widespread use today include the Gregorian calendar, which is the de facto international
standard, and is used almost everywhere in the world for civil purposes, including in the People's
Republic of China and India (along with the Indian national calendar). Due to the Gregorian calendar's
obvious connotations of Western Christianity, non-Christians and even some Christians sometimes justify
its use by replacing the traditional era notations "AD" and "BC" ("Anno Domini" and "Before Christ")
with "CE" and "BCE" ("Common Era" and "Before Common Era"). The Hindu calendars are some of the
most ancient calendars of the world. Eastern Christians of eastern Europe and western Asia, in countries
such as Russia, long used the Julian calendar, that of the old Orthodox Church. Westerners, too, used the
Julian calendar for over 1,500 years.

While the Gregorian calendar is widely used in Israel's business and day-to-day affairs, the Hebrew
calendar, used by Jews worldwide for religious and cultural affairs, also influences civil matters in Israel
(such as national holidays) and can be used there for business dealings (such as for the dating of checks).

The Iranian (Persian) calendar is used in Iran and Afghanistan. The Islamic calendar is used by most non-
Iranian Muslims worldwide. The Chinese, Hebrew, Hindu, and Julian calendars are widely used for
religious and/or social purposes. The Ethiopian calendar or Ethiopic calendar is the principal calendar
used in Ethiopia and Eritrea. In Thailand, where the Thai solar calendar is used, the months and days have
adopted the western standard, although the years are still based on the traditional Buddhist calendar.

Even where there is a commonly used calendar such as the Gregorian calendar, alternate calendars may
also be used, such as a fiscal calendar or the astronomical year numbering system[3].

[edit] Fiscal calendars

Main article: Fiscal calendar

A fiscal calendar (such as a 5/4/4 calendar) fixes each month at a specific number of weeks to facilitate
comparisons from month to month and year to year. January always has exactly 5 weeks (Sunday through
Saturday), February has 4 weeks, March has 4 weeks, etc. Note that this calendar will normally need to
add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending
on how the organization uses those dates. There exists an international standard way to do this (the ISO
week). The ISO week starts on a Monday, and ends on a Sunday. Week 1 is always the week that contains
4 January in the Gregorian calendar.
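
For illustration, both the ISO week-1 rule and the occasional 53rd week can be seen with Python's standard library:

from datetime import date

# Week 1 is the week containing 4 January:
print(tuple(date(2005, 1, 4).isocalendar()))  # (2005, 1, 2) -> Tuesday of week 1
# ...so 1 January can still belong to the previous ISO year's 53rd week:
print(tuple(date(2005, 1, 1).isocalendar()))  # (2004, 53, 6)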

[edit] Gregorian calendar with Easter Sunday


Calculating the calendar of a previous year (for the Gregorian calendar taking account of the week) is a
relatively easy matter when Easter Sunday is not included on the calendar. However, calculating for
Easter Sunday is difficult because the calculation requires the knowledge of the full moon cycle. Easter
Sunday is on the first Sunday after the first full moon after the Vernal Equinox according to the computus.
So, this makes an additional calculation necessary on top of the normal calculation for January 1st and the
calculation of whether or not the year is a leap year.
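
For illustration, one widely reproduced arithmetic form of the Gregorian computus (the "anonymous" or Meeus/Jones/Butcher algorithm) is sketched below; it folds the full-moon and Sunday calculations into pure integer arithmetic:

def easter(year):
    """Date of Easter Sunday (Gregorian), via the anonymous Gregorian computus."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(2008))  # (3, 23) -> 23 March 2008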
There are only 14 different calendars when Easter Sunday is not involved. Each calendar is determined by
the day of the week January 1st falls on and whether or not the year is a leap year. However, when Easter
Sunday is included, there are 70 different calendars (two for each date of Easter).

[edit] Physical calendars

[Image: At-A-Glance 2004-2005 calendar]

A calendar is also a physical device, often paper, such as a desktop calendar or a wall calendar; this is
the most common usage of the word. In a paper calendar one or two sheets can show a single day, a week,
a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is
for multiple days, it shows a conversion table to convert from weekday to date and back. With a special
pointing device, or by crossing out past days, it may indicate the current date and weekday.

The sale of physical calendars has been restricted in some countries, and given as a monopoly to
universities and national academies. Examples include the Prussian Academy of Sciences and the
University of Helsinki, which had a monopoly on the sale of calendars in Finland until the 1990s.

[edit] Legal
Main article: Docket (court)

For lawyers and judges, the calendar is the docket used by the court to schedule the order of hearings or
trials. This is especially used in a criminal calendar. A paralegal or court officer may actually keep track
of the cases on the calendar or docket, by use of docketing software or law practice management software.

[edit] Calendars in computing


• Category:Calendaring standards
• Electronic calendar

[edit] Layout

There are different layouts for calendars.


[Image: A table for each week]
[Image: A calendar with a different month on each page; the page shown is August]

[edit] See also


• Calendar reform
• Calendrical calculation
• Real-Time Clock (RTC), which underlies the Calendar software on modern computers.
• Time for divisions smaller than one day

[edit] List of calendars

Main article: List of calendars

[edit] Sources
