
A Short History of Probability

From Calculus, Volume II by Tom M. Apostol (2nd edition, John Wiley & Sons, 1969):

"A gambler's dispute in 1654 led to the creation of a mathematical theory of probability
by two famous French mathematicians, Blaise Pascal and Pierre de Fermat. Antoine
Gombaud, Chevalier de Méré, a French nobleman with an interest in gaming and
gambling questions, called Pascal's attention to an apparent contradiction concerning a
popular dice game. The game consisted in throwing a pair of dice 24 times; the problem
was to decide whether or not to bet even money on the occurrence of at least one "double
six" during the 24 throws. A seemingly well-established gambling rule led de Méré to
believe that betting on a double six in 24 throws would be profitable, but his own
calculations indicated just the opposite. This problem and others posed by de Méré led to
an exchange of letters between Pascal and Fermat in which the fundamental principles of
probability theory were formulated for the first time. Although a few special problems on
games of chance had been solved by some Italian mathematicians in the 15th and 16th
centuries, no general theory was developed before this famous correspondence. The
Dutch scientist Christian Huygens, a teacher of Leibniz, learned of this correspondence
and shortly thereafter (in 1657) published the first book on probability; entitled De
Ratiociniis in Ludo Aleae, it was a treatise on problems associated with gambling.
Because of the inherent appeal of games of chance, probability theory soon became
popular, and the subject developed rapidly during the 18th century. The major
contributors during this period were Jakob Bernoulli (1654-1705) and Abraham de
Moivre (1667-1754). In 1812 Pierre de Laplace (1749-1827) introduced a host of new
ideas and mathematical techniques in his book, Théorie Analytique des Probabilités.
Before Laplace, probability theory was solely concerned with developing a mathematical
analysis of games of chance. Laplace applied probabilistic ideas to many scientific and
practical problems. The theory of errors, actuarial mathematics, and statistical mechanics
are examples of some of the important applications of probability theory developed in the
19th century. Like so many other branches of mathematics, the development of
probability theory has been stimulated by the variety of its applications. Conversely, each
advance in the theory has enlarged the scope of its influence. Mathematical statistics is
one important branch of applied probability; other applications occur in such widely
different fields as genetics, psychology, economics, and engineering. Many workers have
contributed to the theory since Laplace's time; among the most important are Chebyshev,
Markov, von Mises, and Kolmogorov. One of the difficulties in developing a
mathematical theory of probability has been to arrive at a definition of probability that is
precise enough for use in mathematics, yet comprehensive enough to be applicable to a
wide range of phenomena. The search for a widely acceptable definition took nearly three
centuries and was marked by much controversy. The matter was finally resolved in the
20th century by treating probability theory on an axiomatic basis. In 1933 a monograph
by a Russian mathematician A. Kolmogorov outlined an axiomatic approach that forms
the basis for the modern theory. (Kolmogorov's monograph is available in English
translation as Foundations of Probability Theory, Chelsea, New York, 1950.) Since then
the ideas have been refined somewhat and probability theory is now part of a more
general discipline known as measure theory."
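
De Méré's calculation is easy to reproduce. A pair of fair dice shows a double six on a single throw with probability 1/36, so the chance of at least one double six in 24 throws is 1 - (35/36)^24, roughly 0.491, just under one half; an even-money bet is therefore slightly unfavorable, which is what his own figures indicated. A minimal check in Python:

    from fractions import Fraction

    # P(no double six in one throw) = 35/36; the 24 throws are independent.
    p_none = Fraction(35, 36) ** 24
    p_at_least_one = 1 - p_none
    print(float(p_at_least_one))  # about 0.4914, so betting even money loses on average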
A Brief History of Probability
Posted by Bill Abrams, Coeditor SecondMoment

Adapted from An Introduction to Mathematical Statistics and its Applications by Richard J. Larsen and Morris
L. Marx

No one knows where or when the notion of chance first arose. Nevertheless, evidence linking early humans
with devices for generating random events is plentiful. Archaeological digs throughout the ancient world
consistently turn up a curious overabundance of astragali, the heel bones of sheep and other vertebrates.
Why should the frequencies of these bones be so disproportionately high? One could hypothesize that our
forebears were fanatical foot fetishists, but two other explanations seem more plausible—the bones were
used for religious ceremonies and for gambling. Astragali have six sides but are not symmetrical. Those
found in excavations typically have their sides numbered or engraved. For many ancient civilizations,
astragali were the primary mechanism through which oracles solicited the opinions of their gods. In Asia
Minor, for example, it was customary in divination rites to roll, or cast, five astragali. Each possible
configuration was associated with the name of a god and carried with it the sought-after advice. An outcome
of (1,3,3,4,4), for instance, was said to be the throw of the savior Zeus, and was taken as a sign of
encouragement. A (4,4,4,6,6), on the other hand, the throw of the child-eating Cronos, would send everyone
scurrying for cover. Gradually, over thousands of years, astragali were replaced by dice, and the latter
became the most common means of generating random events. Pottery dice have been found in Egyptian
tombs built before 2000 B.C., and by the time Greek civilization was in full flower, dice were everywhere.
(Loaded dice have also been found from antiquity. While mastering the mathematics of probability would
prove to be a formidable task for our ancestors, they quickly learned how to cheat.) The historical record
blurs the distinctions initially drawn between divination ceremonies and recreational gaming. Among more
recent societies, though, gambling emerged as a distinct entity, and its popularity was irrefutable. The
Greeks and Romans were consummate gamblers, as were the early Christians. Rules for many of the Greek
and Roman games have been lost, but we can recognize the lineage of certain modern diversions in what
was played during the Middle Ages. The most popular dice game of that period was called hazard, the name
deriving from the Arabic al zhar, which means “a die.” Hazard is thought to have been brought to Europe by
soldiers returning from the Crusades, and its rules are much like those of our modern-day craps. Cards were
first introduced in the fourteenth century and immediately gave rise to a game known as Primero, an early
form of poker. Board games such as backgammon were also popular during this period. Given this rich
tapestry of games and the obsession with gambling that characterized so much of the Western World, it may
seem more than a little puzzling that a formal study of probability was not undertaken sooner than it was.
The first instance of anyone conceptualizing probability in terms of a mathematical model occurred in the
sixteenth century, which means that more than 2000 years of dice games, card games, and board games
passed by before someone finally had the insight to write down even the simplest probabilistic abstractions.

Greek Philosophy and Early Christian Theology

Historians generally agree that, as a subject, probability
got off to a rocky start because of its incompatibility with two of the most dominant forces in the evolution of
our Western culture, Greek philosophy and early Christian theology. The Greeks were comfortable with the
notion of chance, but it went against their nature to suppose that random events could be quantified in any
useful fashion. They believed that any attempt to reconcile mathematically what did happen with what
should have happened was, in their phraseology, an improper juxtaposition of the “earthly plane” with the
“heavenly plane.” Making matters worse was the antiempiricism that permeated Greek thinking. Knowledge,
to them, was not something that should be derived by experimentation. It was better to reason out a
question logically than to search for its explanation in a set of numerical observations. Together, these two
attitudes had a deadening effect. The Greeks had no motivation to think about probability in any abstract
sense, nor were they faced with the problems of interpreting data that might have pointed them in the
direction of a probability calculus. If the prospects for the study of probability were dim under the Greeks,
they became even worse when Christianity broadened its sphere of influence. The Greeks and Romans at
least accepted the existence of chance. They believed their gods to be either unable or unwilling to get
involved in matters so mundane as the outcome of the roll of a die. For the early Christians, though, there
was no such thing as chance. Every event, no matter how trivial, was perceived to be a direct manifestation
of God’s deliberate intervention. In the words of St. Augustine: “We say that those causes that are said to be
by chance are not nonexistent but are hidden, and we attribute them to the will of the true God…” Taking
Augustine’s position makes the study of probability moot, and it makes a probabilist a heretic. Not
surprisingly, nothing of significance was accomplished in the subject for the next fifteen hundred years.
The Renaissance

It was in the sixteenth century that probability, like a mathematical Lazarus, arose from the dead. Orchestrating its resurrection was one of the most eccentric figures in the entire history of
mathematics, Gerolamo Cardano. By his own admission, Cardano personified the best and worst of the
Renaissance man. He was born in Pavia in 1501, but facts about his personal life are difficult to verify. He
wrote an autobiography, but his penchant for lying raises doubts about much of what he says. Cardano was
formally trained in medicine, but his interest in probability derived from his addiction to gambling. His love of
dice and cards was so all-consuming that he is said to have once sold all his wife’s possessions just to get
table stakes! Fortunately, something positive came out of Cardano’s obsession. He began looking for a
mathematical model that would describe in an abstract way the outcome of a random event. What he
eventually formalized is now called the classical definition of probability: If the total number of possible
outcomes, all equally likely, associated with some action is n, and if m of those n result in the occurrence of
some given event, then the probability of that event is m/n. Put another way, if a fair die is rolled, there are n
= 6 possible outcomes. If the event “outcome is greater than or equal to 5” is the one in which we are
interested, then m = 2 (the outcomes 5 and 6) and the probability of the event is 2/6, or 1/3. Cardano had
tapped into the most basic principle in probability. The model he discovered may seem trivial in retrospect,
but it represented a giant step forward. His was the first recorded instance of anyone computing a
theoretical, as opposed to an empirical, probability. Still, the actual impact of Cardano’s work was minimal.
He wrote a book in 1525, but it was not published until 1663, and by then, the focus of the Renaissance, as
well as interest in probability, had shifted from Italy to France.
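
Cardano's classical definition, counting m favorable outcomes against n equally likely ones, translates directly into a short computation. A sketch in Python of the die example above (the variable names are just illustrative):

    from fractions import Fraction

    outcomes = range(1, 7)                       # the n = 6 equally likely faces of a fair die
    favorable = [x for x in outcomes if x >= 5]  # m = 2: the outcomes 5 and 6
    print(Fraction(len(favorable), len(outcomes)))  # 1/3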

More Gambling, or The Problem of Points

The date cited by many historians as the “beginning” of probability is 1654. In Paris a well-to-do gambler, the Chevalier de Méré, asked several prominent mathematicians, including Blaise Pascal, a series of questions, the best known of which was the problem of points: Two people, A and B, agree to play a series of fair games until one person has won six games. They
each have wagered the same amount of money, the intention being that the winner will be awarded the
entire pot. But suppose, for whatever reason, the series is prematurely terminated, at which point A has won
five games and B three. How should the stakes be divided? [The correct answer is that A should receive seven-eighths of the total amount wagered. (Hint: suppose the contest were resumed; what scenarios would lead to A being the first person to win six games?)] Pascal was intrigued by de Méré’s questions and shared
his thoughts with Pierre Fermat, a Toulouse civil servant and probably the most brilliant mathematician in
Europe. Fermat graciously replied, and from the now famous Pascal-Fermat correspondence came not only
the solution to the problem of points but the foundation for more general results. More significantly, news of
what Pascal and Fermat were working on spread quickly. Others got involved, of whom the best known was
the Dutch scientist and mathematician Christiaan Huygens. The delays and the indifference that plagued
Cardano a century earlier were not going to happen again. Best remembered for his work in optics and
astronomy, Huygens, early in his career, was intrigued by the problem of points. In 1657 he published De Ratiociniis in Ludo Aleae (Calculations in Games of Chance), a very significant work, far more
comprehensive than anything Pascal and Fermat had done. For almost 50 years it was the standard
“textbook” in the theory of probability. Huygens, of course, has supporters who feel that he should be credited as the founder of probability. Even so, almost all the mathematics of probability was still waiting to be
discovered. What Huygens wrote was only the humblest of beginnings, a set of 14 Propositions bearing little
resemblance to the topics we teach today. But the foundation was there. The mathematics of probability was
finally on firm footing.
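
The seven-eighths division claimed for the problem of points can be verified by imagining the series continued: with the score at five games to three in a race to six, B takes the pot only by winning three fair games in a row, an event of probability 1/8, so A's fair share is 7/8. A small recursive sketch (the function name is illustrative, and fair, independent games are assumed):

    from fractions import Fraction

    def prob_a_wins(a_needs, b_needs, p=Fraction(1, 2)):
        # Probability that A wins a_needs more games before B wins b_needs more,
        # with A winning each independent game with probability p.
        if a_needs == 0:
            return Fraction(1)
        if b_needs == 0:
            return Fraction(0)
        return p * prob_a_wins(a_needs - 1, b_needs, p) + (1 - p) * prob_a_wins(a_needs, b_needs - 1, p)

    print(prob_a_wins(1, 3))  # A needs 1 more win, B needs 3 more: 7/8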

The History of Probability (article 2 of 3)

It is generally agreed that the origins of probability lie in the seventeenth century, specifically in 1654. In that year Blaise Pascal and Pierre de Fermat exchanged a series of letters that would eventually lead to the publication, in 1657, of a text based on their correspondence. This text, written by Christiaan Huygens, explored the concept of probability.

Over the remainder of the seventeenth century, numerous texts dealing with probability were published by such scientists as John Graunt, Franciscus van Schooten, and John Arbuthnot. In 1710, Arbuthnot made a presentation on the ratio of male births to female births, finding that the numbers were nearly equal. In 1712, his work was improved upon by Willem Jacob 'sGravesande.

Over the rest of the eighteenth century, papers continued to appear on the relatively new theory of probability. Abraham de Moivre and Pierre Simon Laplace are two of the scientists who wrote articles on the new science. In 1733, de Moivre published a paper that would later become the universal description of the distribution of observational errors. A theorem by Thomas Bayes was published posthumously in 1763, but attracted little notice until 1780.

Carl Friedrich Gauss recognized de Moivre's paper as the universal description of the distribution of observational errors, as mentioned above, in 1809. His logic, unfortunately, was rather circular in nature. One year later, Laplace improved upon Gauss's argument, removing its circularity.

Adolphe Quetelet, Siméon Denis Poisson, Pafnuty Chebyshev, Francis Galton, and Karl Pearson presented various ideas and expansions based on the theory of probability over the course of the nineteenth century, including the standard deviation, regression, and the "law of large numbers."

In the twentieth century, Pearson, Charles Spearman, William Gosset, and Andrei Kolmogorov (among others) presented new ideas on probability that take us to the point where we are today. One of the more noteworthy of these ideas is factor analysis, begun by Spearman in 1904 and completed in 1912. Today, probability is used in many fields, such as sociology, agriculture, and the study of genetics.

Using probability concepts to resolve power quality complaints


Lim, P.K.S.; Wyatt, T.E.; Bray, C.W.
Industrial and Commercial Power Systems Technical Conference, 1997: Conference Record, Papers Presented at the 1997 Annual Meeting, IEEE, 11-16 May 1997, pp. 41-46
Summary: Power quality problems and customer demands for their resolution are expected to increase as
nonlinear loads and sensitive electronic equipment proliferate throughout power systems. Resolving these
problems often requires the application of power quality monitors to determine waveform quality and aid in problem
diagnosis. Investigators must optimize limited personnel and equipment resources while resolving problems
expeditiously. This paper presents a mathematical approach to use as a basis for efficient utilization of resources
such as power monitors and personnel. For example, consider the nuisance tripping of an adjustable speed drive
at a certain time of day and on certain days of the week. In such an instance, the investigator may attempt to
establish the period between occurrences and use this information in pursuing the cause. The determination of
how to use this information may be based upon experience without any realization that an application of probability
concepts and rules-of-thumb is being used. At any given time the investigator will know the number of complaints
needing attention. Based upon the observed periodicity of the occurrences, informed decisions may be made to
allocate equipment and personnel to resolve the maximum number of problems in a minimum time. In an attempt
to validate this mathematical approach, existing power quality case histories are examined.
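
The summary does not give the paper's formulas, but one simple illustration of "establishing the period between occurrences" is to estimate the average gap between logged trip times and use it to decide when a monitor should be in place. A small sketch in Python, using a purely hypothetical complaint log:

    from datetime import datetime

    # Hypothetical timestamps of nuisance trips taken from a complaint log.
    trips = [
        datetime(1997, 5, 12, 14, 5),
        datetime(1997, 5, 13, 14, 20),
        datetime(1997, 5, 14, 13, 55),
    ]

    gaps_hours = [(b - a).total_seconds() / 3600 for a, b in zip(trips, trips[1:])]
    mean_period = sum(gaps_hours) / len(gaps_hours)
    next_expected = trips[-1] + (trips[-1] - trips[0]) / len(gaps_hours)
    print(f"Mean period between trips: {mean_period:.1f} h; next expected around {next_expected}")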
