I totally made that up, but it could happen–many people use a coin toss to help make “fair” decisions. That’s an example of statistical decision theory, but it’s not the purpose of this post.
How many times does a test pilot need to see a dangerous flight condition in an experimental airplane to believe it’s dangerous?
This question is the application at the heart of the following flipping penny example. Imagine that I hand you a penny. Abe Lincoln is on one side, and who knows what is on the other–perhaps the Lincoln Memorial or something more modern, like the shield pictured here.
Is it a fair penny? How many times will you have to flip the penny to determine whether or not it is fair? How much “statistical” certainty will you have when you’ve flipped it that many times?
It might surprise you to hear this from a statistician, but I suggest this:
You don’t need statistics to determine if the penny is fair.
It goes back to the theme this month, about assumptions…
If you don’t know where you are starting from (your assumptions), you could follow the correct directions and end up somewhere totally wrong. Or you could arrive at the right place even though you started at the wrong place.
What assumptions are present in this application? A few more questions might illuminate the problem better or help us figure out how to answer the question, “how many flips?”
For example, you could examine the penny under a microscope, weigh it, or even cut a microscopic piece of it off for analysis in a mass spectrometer. You could measure its circumference, measure its thickness, and even compare it side by side with other pennies.
After you’ve done all that, consider this: How many times would we expect this penny–which we now believe to be fair–to come up heads if we toss it ten times? The binomial distribution would explain how to predict that, but that’s not even statistics.
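To make that prediction concrete, here is a minimal sketch of the binomial calculation for a coin we believe is fair: the probability of each possible number of heads in ten tosses, using only the Python standard library. The function name and the choice of n = 10, p = 0.5 are just illustrative.

```python
# Binomial prediction for a penny we believe is fair:
# P(exactly k heads in n independent tosses with heads-probability p).
from math import comb

n, p = 10, 0.5  # ten tosses of a fair penny

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n independent tosses."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

expected_heads = n * p  # mean of a binomial distribution is n * p
most_likely = max(range(n + 1), key=lambda k: binom_pmf(k, n, p))

print(f"Expected heads: {expected_heads}")              # 5.0
print(f"P(exactly 5 heads): {binom_pmf(5, n, p):.3f}")  # 0.246
print(f"Most likely count: {most_likely}")              # 5
```

Note that even a fair penny lands on exactly five heads only about a quarter of the time; the prediction is a distribution over outcomes, not a single guaranteed count.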
If you thought that you needed statistics to prove the penny’s fairness, was there some particular assumption that you started from to arrive at that conclusion? Could you frame “data” questions from a different perspective in your everyday life?
What kinds of problems do need statistics?
In aerospace, we begin with the assumption that the airplane is trying to kill us–if we see any kind of dangerous situation developing, we take it as evidence that supports our assumption. That’s the safest thing to do.
You can believe that your penny is fair and use ATOMs to help you show that it is, or you can believe that your penny is not fair and use ATOMs to show the opposite. The point is, before we even decide to use statistics, you have a lot of engineering judgment to apply to the situation.
Are you using yours?
How do we find our way then, when we are exploring the unknown, blazing a trail into uncharted territory? How do we apply elementary statistical principles to transform uncertainty into decisive action? What is to prevent us from making a preposterous application of ATOMs when we deal with very complex situations, those in which our intuition fails?
These questions are not much different from those faced by Chuck Yeager before he ever broke the sound barrier or Neil Armstrong as he took that first step on the moon. Neither of these men, nor anyone around them–with hundreds or thousands of highly educated, very scientific people on these teams–knew what to expect. Or did they…?
ATOMs is a monthly column that introduces analytical tools of mathematics and statistics and illustrates their application. To read more about ATOMs, you can read Where Do We Go From Here, or view the online workbook here.