History of probability facts for kids
Probability is all about how likely something is to happen. It helps us understand events that are uncertain. For example, what are the chances of flipping a coin and getting heads? Or what's the likelihood that it will rain tomorrow?
Probability has two main parts. One part looks at how likely an idea or hypothesis is, based on the evidence we have. The other part studies how random events work, like throwing dice or flipping coins. People have thought about the first part for a long time, especially in law. But the mathematical study of dice and games started later. Important thinkers like Gerolamo Cardano, Blaise Pascal, Pierre de Fermat, and Christiaan Huygens worked on this in the 1500s and 1600s.
Probability helps us understand random experiments when we know how they usually turn out. Statistics uses data from these experiments to figure out things we don't know yet.
Where Did the Word "Probability" Come From?
The words "probable" and "probability" come from the Latin word probabilis. This word was used by the ancient Roman writer Cicero. It meant something that was "plausible" or "generally accepted."
The mathematical meaning of "probability" started being used around 1718. In the 1700s, people also used the word "chance" to mean probability. The study of probability was even called the "Doctrine of Chances." The word "chance" comes from a Latin word meaning "a fall" or "a case."
The English word "likely" comes from the Old Norse word likligr. It first meant "having a similar appearance." Later, around the 1400s, it started to mean "probably." The word "likelihood" also came to mean "probability" around the same time.
How the Study of Probability Began
For a long time, people tried to figure out how likely things were. Ancient and medieval laws, for example, had different levels of "proof" to deal with uncertain evidence in court.
During the Renaissance, people talked about betting using "odds," like "ten to one." Ships were insured based on how risky a journey seemed. But there was no real math to figure out these odds or insurance costs.
The first mathematical ideas about probability came from Gerolamo Cardano in the 1560s. His work wasn't published until 100 years later. Then, Pierre de Fermat and Blaise Pascal wrote letters to each other in 1654. They discussed problems like how to fairly split the money in a game if it had to stop early. Christiaan Huygens then wrote a complete book on the topic in 1657.
Games of Chance in Ancient Times
People in ancient times played games using astragali, which are ankle bones from animals. Pottery from ancient Greece shows that people would toss these bones into a circle drawn on the floor, much like playing marbles today.
In Egypt, archaeologists found a game called "Hounds and Jackals." This game is very similar to the modern game "Snakes and Ladders." These early games seem to be the first steps toward creating dice.
Early Dice Games
The first dice game mentioned in Christian writings was called Hazard. It was played with two or three dice. People think knights brought this game to Europe after returning from the Crusades.
The famous writer Dante Alighieri (1265-1321) mentioned this game, and scholars who studied his work later looked at the numbers more closely. With three dice, the lowest total you can get is three, and there is only one way to roll it: a one on each die. A total of four can be made with a two on one die and ones on the other two. But the two can land on any of the three dice, so there are three ways to roll a four. That makes a four three times as likely as a three.
Cardano and Galileo's Dice Discoveries
Gerolamo Cardano also thought about the sum of three dice. He noticed something interesting: if you only list the combinations of numbers, ignoring which die shows which number, there are just as many combinations that add up to 9 as add up to 10, six of each.
For example, to get 9, you could have: (6,2,1), (5,3,1), (5,2,2), (4,4,1), (4,3,2), (3,3,3). To get 10, you could have: (6,3,1), (6,2,2), (5,4,1), (5,3,2), (4,4,2), (4,3,3).
However, Cardano realized that some combinations are more likely than others. For example, there's only one way to get (3,3,3) (each die must be a 3). But there are six ways to get (6,2,1) because the numbers can be in any order (1,2,6; 1,6,2; 2,1,6; 2,6,1; 6,1,2; 6,2,1).
When you count all the possible ordered ways (or "permutations"), there are 27 ways to get a total of 10, but only 25 ways to get a total of 9. From this, Cardano figured out that throwing a 9 is less likely than throwing a 10. He also showed that odds compare the number of good outcomes to the number of bad outcomes. The probability of an event, in turn, is the number of good outcomes divided by the total number of equally likely outcomes.
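If you want to check Cardano's count yourself, here is a minimal Python sketch (the code is just for illustration; Cardano, of course, counted by hand). It lists every ordered roll of three dice and counts the totals:

```python
# List every ordered roll of three six-sided dice: 6*6*6 = 216 rolls.
from itertools import product

rolls = list(product(range(1, 7), repeat=3))

ways_9 = sum(1 for roll in rolls if sum(roll) == 9)    # should be 25
ways_10 = sum(1 for roll in rolls if sum(roll) == 10)  # should be 27

print(ways_9, ways_10)                 # 25 27
print(round(ways_9 / len(rolls), 3))   # probability of a 9: about 0.116
print(round(ways_10 / len(rolls), 3))  # probability of a 10: about 0.125
```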
Galileo also wrote about dice throwing between 1613 and 1623. He came to a similar conclusion as Cardano. He said that certain numbers are more likely to be rolled because there are more ways to create those numbers.
Probability in the 1700s
In the 1700s, probability became a strong field of mathematics. Jacob Bernoulli's book Ars Conjectandi (published after he died, in 1713) and Abraham de Moivre's The Doctrine of Chances (1718) were very important. They showed how to calculate many complex probabilities.
Bernoulli proved a version of the law of large numbers. This law says that if you repeat a random experiment many, many times, the average of the results will likely be very close to what you expect. For example, if you flip a fair coin 1000 times, you'll probably get close to 500 heads. And the more times you flip it, the closer the fraction of heads will be to one half.
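Here is a minimal Python sketch of this idea (the seed and flip counts are just example choices). It flips a simulated coin and prints the fraction of heads as the number of flips grows:

```python
# Simulate coin flips and watch the fraction of heads approach 1/2.
import random

random.seed(1)  # fixed seed so the run is repeatable
heads = 0
for flips in range(1, 100_001):
    heads += random.randint(0, 1)  # 1 counts as heads, 0 as tails
    if flips in (10, 100, 1_000, 10_000, 100_000):
        print(f"{flips:>7} flips: fraction of heads = {heads / flips:.4f}")
```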
Probability in the 1800s
The power of probability to handle uncertainty was shown by Carl Friedrich Gauss. He used it to figure out the path of the dwarf planet Ceres from just a few observations. The theory of errors used the method of least squares to correct observations that might contain mistakes. This was especially useful in astronomy. It assumed that errors usually follow a normal distribution, and used that to find the most likely true value.
In 1812, Pierre-Simon Laplace published his book Théorie analytique des probabilités. In it, he brought together many key ideas in probability and statistics.
Later in the 1800s, statistical mechanics became a big success. Scientists like Ludwig Boltzmann and J. Willard Gibbs used probability to explain properties like the temperature of gases. They showed that these properties come from the random movements of many tiny particles.
The history of probability itself was studied by Isaac Todhunter. His huge book, A History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace (1865), is a key work.
Probability in the 1900s
In the 1900s, probability and statistics became very closely linked. This happened through the work on hypothesis testing by Ronald Fisher and Jerzy Neyman. This method is now used a lot in science, like in biology and psychology experiments. It's also used in clinical trials for new medicines and in economics.
Here's how hypothesis testing works: You start with an idea, or "hypothesis," like "this new medicine usually works." This idea suggests what kind of results you would expect to see if it were true. If your observations roughly match what the hypothesis predicts, then the idea is supported. If they don't match, the hypothesis is rejected.
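To make this concrete, here is a minimal Python sketch of the testing idea, using made-up numbers (a coin instead of a medicine). We assume the hypothesis "the coin is fair," then check how often pure chance would give a result as extreme as the one we observed:

```python
# Test the hypothesis "this coin is fair" against an observed result.
import random

random.seed(1)
n_flips = 100          # hypothetical experiment: 100 flips...
observed_heads = 60    # ...which came up heads 60 times
n_trials = 10_000      # number of simulated experiments

# Count how often a genuinely fair coin does at least as well.
as_extreme = 0
for _ in range(n_trials):
    heads = sum(random.randint(0, 1) for _ in range(n_flips))
    if heads >= observed_heads:
        as_extreme += 1

print(f"chance of >= {observed_heads} heads from a fair coin: "
      f"{as_extreme / n_trials:.3f}")
# If this chance is very small (often, below 0.05), the "fair coin"
# hypothesis is rejected.
```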
The study of stochastic processes grew to include areas like Markov processes and Brownian motion. Brownian motion describes the random jiggling movement of tiny particles floating in a liquid. This idea became a model for understanding random changes in stock markets. This led to using complex probability models in mathematical finance. A famous example is the Black–Scholes formula, which helps figure out the value of stock options.
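As a rough picture of the random-walk idea behind these models, here is a minimal Python sketch with made-up parameters (it is a toy, not the Black–Scholes model itself). Each day the simulated price moves by a small random percentage:

```python
# A toy random walk for a price: each day it changes by a small
# random percentage, like the jiggling of a particle in a liquid.
import random

random.seed(1)
price = 100.0  # hypothetical starting price
for day in range(1, 11):
    price *= 1 + random.gauss(0, 0.02)  # random daily change, about 2%
    print(f"day {day:>2}: price = {price:.2f}")
```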
The 1900s also saw long discussions about how to understand probability. In the middle of the century, many believed in frequentism. This idea says that probability means how often something happens over a very large number of trials. By the end of the century, there was renewed interest in the Bayesian view. This view says that probability measures how well the evidence supports an idea.
The mathematical study of probability, especially when there are infinitely many possible outcomes, was put on a firm foundation by Kolmogorov's axioms in 1933.