# Richard Feynman on probability fluctuations

The Feynman Lectures on Physics are available online courtesy of Caltech. Here’s an elegant description of probability in the real world:

By chance, we mean something like a guess. Why do we make guesses? We make guesses when we wish to make a judgment but have incomplete information or uncertain knowledge. We want to make a guess as to what things are, or what things are likely to happen. Often we wish to make a guess because we have to make a decision. For example: Shall I take my raincoat with me tomorrow? For what earth movement should I design a new building? Shall I build myself a fallout shelter? Shall I change my stand in international negotiations? Shall I go to class today?

Sometimes we make guesses because we wish, with our limited knowledge, to say as much as we can about some situation. Really, any generalization is in the nature of a guess. Any physical theory is a kind of guesswork. There are good guesses and there are bad guesses. The theory of probability is a system for making better guesses. The language of probability allows us to speak quantitatively about some situation which may be highly variable, but which does have some consistent average behavior.

Let us consider the flipping of a coin. If the toss—and the coin—are “honest,” we have no way of knowing what to expect for the outcome of any particular toss. Yet we would feel that in a large number of tosses there should be about equal numbers of heads and tails. We say: “The probability that a toss will land heads is 0.5.”

We speak of probability only for observations that we contemplate being made in the future. By the “probability” of a particular outcome of an observation we mean our estimate for the most likely fraction of a number of repeated observations that will yield that particular outcome.
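Feynman's frequency reading of probability can be tried out directly in code. The sketch below is mine, not the lecture's: it estimates P(heads) as the observed fraction of simulated tosses that land heads, with the trial count and seed as arbitrary choices.

```python
import random

def estimate_probability(n_trials: int, seed: int = 0) -> float:
    """Estimate P(heads) as the fraction of simulated fair tosses landing heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

# The estimate hovers near 0.5 and typically tightens as n_trials grows.
print(estimate_probability(10_000))
```

Any single run gives only an estimate; as Feynman stresses, the probability is the *most likely* fraction over repeated observations, not a guarantee about any one of them.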

Our definition requires several comments. First of all, we may speak of a probability of something happening only if the occurrence is a possible outcome of some repeatable observation. It is not clear that it would make any sense to ask: “What is the probability that there is a ghost in that house?”

You may object that no situation is exactly repeatable. That is right. Every different observation must at least be at a different time or place. All we can say is that the “repeated” observations should, for our intended purposes, appear to be equivalent. We should assume, at least, that each observation was made from an equivalently prepared situation, and especially with the same degree of ignorance at the start. (If we sneak a look at an opponent’s hand in a card game, our estimate of our chances of winning is different than if we do not!)

You may have noticed another rather “subjective” aspect of our definition of probability. We have referred to N(A) as “our estimate of the most likely number [of event A occurring]” in N repeated observations; that is, N(A) = N·P(A). We do not mean that we expect to observe exactly N(A), but that we expect a number near N(A), and that the number N(A) is more likely than any other number in the vicinity. If we toss a coin, say, 30 times, we should expect that the number of heads would not be very likely to be exactly 15, but rather only some number near to 15, say 12, 13, 14, 15, 16, or 17. However, if we must choose, we would decide that 15 heads is more likely than any other number. We would write P(heads)=0.5.
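The claim that 15 heads is more likely than any other count can be checked exactly rather than by guessing. This short sketch (mine, not Feynman's) computes the binomial probability of k heads in 30 fair tosses, C(30, k)/2³⁰, and confirms that the mode sits at k = 15:

```python
from math import comb

def prob_heads(k: int, n: int = 30) -> float:
    """Probability of exactly k heads in n fair coin tosses: C(n, k) / 2^n."""
    return comb(n, k) / 2 ** n

probs = {k: prob_heads(k) for k in range(31)}
mode = max(probs, key=probs.get)
print(mode, round(probs[mode], 3))  # prints "15 0.144"
```

So even the single most likely outcome occurs only about one time in seven, which is why a handful of games can easily miss 15 entirely.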

We would like now to use our ideas about probability to consider in some greater detail the question: “How many heads do I really expect to get if I toss a coin N times?” Before answering the question, however, let us look at what does happen in such an “experiment.” Figure 6–1 shows the results obtained in the first three “runs” of such an experiment in which N=30. The sequences of “heads” and “tails” are shown just as they were obtained. The first game gave 11 heads; the second also 11; the third 16. In three trials we did not once get 15 heads. Should we begin to suspect the coin? Or were we wrong in thinking that the most likely number of “heads” in such a game is 15? Ninety-seven more runs were made to obtain a total of 100 experiments of 30 tosses each. The results of the experiments are given in Table 6–1.

Looking at the numbers in Table 6–1, we see that most of the results are “near” 15, in that they are between 12 and 18. We can get a better feeling for the details of these results if we plot a graph of the distribution of the results. We count the number of games in which a score of k was obtained, and plot this number for each k. Such a graph is shown in Fig. 6–2. A score of 15 heads was obtained in 13 games. A score of 14 heads was also obtained 13 times. Scores of 16 and 17 were each obtained more than 13 times. Are we to conclude that there is some bias toward heads? Was our “best estimate” not good enough? Should we conclude now that the “most likely” score for a run of 30 tosses is really 16 heads? But wait! In all the games taken together, there were 3000 tosses. And the total number of heads obtained was 1493. The fraction of tosses that gave heads is 0.498, very nearly, but slightly less than half. We should certainly not assume that the probability of throwing heads is greater than 0.5! The fact that one particular set of observations gave 16 heads most often is a fluctuation. We still expect that the most likely number of heads is 15.
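Feynman's 100-game experiment is easy to re-run in simulation. The sketch below is a modern reconstruction, not the original data: it plays 100 games of 30 tosses each, tallies how many games produced each score, and computes the overall fraction of heads. Change the (arbitrary) seed and the tally's peak will wander among 14, 15, and 16, exactly the kind of fluctuation the text describes, while the overall fraction stays close to 0.5.

```python
import random
from collections import Counter

def run_experiment(games: int = 100, tosses: int = 30, seed: int = 6) -> Counter:
    """Tally how many games produced each number of heads."""
    rng = random.Random(seed)
    scores = Counter()
    for _ in range(games):
        heads = sum(rng.random() < 0.5 for _ in range(tosses))
        scores[heads] += 1
    return scores

tally = run_experiment()
total_heads = sum(score * count for score, count in tally.items())
print(dict(sorted(tally.items())))
print(total_heads / (100 * 30))  # close to 0.5, though any single tally fluctuates
```

The lesson survives the rerun: the score that happens to top one particular tally is a fluctuation, while the pooled fraction of heads is the quantity that settles down.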

Richard Feynman – Lecture 6: Probability