# Posts from the ‘Uncategorized’ Category

## #48 My favorite problem, part 2

Now we continue with the analysis of my favorite problem, which I call “choosing the best,” also known as the “secretary problem.”  This problem entails hiring a job candidate according to a strict set of rules.  The most difficult rules are that you can only assess candidates’ quality after you have interviewed them, you must decide on the spot whether or not to hire someone and can never reconsider someone that you interviewed previously, and you must hire the very best candidate or else you have failed in your task.

Here’s a reminder of the outline for this three-part series:

In the previous post (here), we analyzed some small cases by hand and achieved the Key Insight that led to the general form of the optimal solution: Let a certain number of candidates go by, and then hire the first candidate you see who is the best so far.  The question now is how many candidates to let pass before you begin to consider hiring one.  We’ll tackle the general case of that question in this post, and we’ll consider cases as large as 5000 candidates.

I tell students that the derivation of the probability function in section 4 is the most mathematically challenging section of this presentation.  But even if they struggle to follow that section, they should be able to understand the analysis in sections 6 and 7.  This will provide a strong hint of the Remarkable Result that we’ll confirm in the next post.

Before we jump back in, let me ask you to make predictions for the probability of successfully choosing the best candidate, using the optimal strategy, for the numbers of candidates listed in the table (recall that the last number is the approximate population of the world):

As always, questions that I pose to students appear in italics.

4. Deriving the probability function

We need to figure out, for a given number of candidates, how many candidates you should let pass before you actually consider hiring one.  This is where the math will get a bit messy.  Let’s introduce some symbols to help keep things straight:

• Let n represent the number of candidates.
• Let i denote the position in line of the best candidate.
• Let r be the position of the first “contender” that we actually consider hiring.
• The strategy is to let the first (r – 1) candidates go by, before you genuinely consider hiring one.

We will express the probability of successfully choosing the best candidate as a function of both n and r.  After we have done that, then for any value of n, we can evaluate this probability for all possible values of r to determine the value that maximizes the probability.

First we will determine conditional probabilities for three different cases.  To see why breaking this problem into cases is helpful, let’s reconsider the n = 4 situation that we analyzed in the previous post (here).  We determined that the “let 1 go by” strategy is optimal, leading to success with 11 of the 24 possible orderings.  What value of r does this optimal strategy correspond to?  Letting 1 go by means that r = 2 maximizes the probability of success when n = 4.

These 24 orderings are shown below.  The ones that lead to successfully choosing the best with the “let 1 go by” strategy are displayed in green:

Looking more closely at our analysis of the 24 orderings with the “let 1 go by” (r = 2) strategy, we can identify different cases for how the position of the best candidate (i) compares to the value of r.  I’ve tried to use cute names (in italics below) to help with explaining what happens in each case:

• Case 1, Too soon (i < r): The best candidate appears first in line.  Because our strategy is to let the first candidate go by, we do not succeed for these orderings.  Which orderings are these?  A, B, C, D, E, and F.
• Case 2, Just right (i = r): The best candidate appears in the first position that we genuinely consider hiring, namely second in line.  We always succeed in choosing the best candidate in this circumstance.  Which orderings are these?  G, H, M, N, S, and T.
• Case 3a, Got fooled (i > r): The best candidate appears after the first spot at which we consider hiring someone.  But before we get to the best candidate, we get fooled into hiring someone else who is the best we’ve seen so far.  Which orderings are these?  O, P, R, U, V, W, and X.
• Case 3b, Patience pays off (also i > r): Again the best candidate appears after the first spot at which we consider hiring someone.  But now we do not get fooled by anyone else and so we succeed in choosing the best.  Which orderings are these?  I, J, K, L, and Q.

As we move now from the specific n = 4 case to the general case for any given value of n, we will consider the analogous three cases for how the position of the best candidate (i) compares to the value of r:

• Case 1: i < r, so the best candidate is among the first (r – 1) in line.  What is the probability that you successfully choose the best candidate in this case?  Remember that the strategy is to let the first (r – 1) go by, so the probability of success equals zero.  In other words, this is the unlucky situation in which the best candidate arrives while you are still screening candidates solely to gain information about quality.
• Case 2: i = r.  What is the probability that you successfully choose the best candidate in this case?  When the best candidate is in position r, that candidate will certainly be better than the previous ones you have seen, so the probability of success equals one.  This is the ideal situation, because the best candidate is the first one that you actually consider hiring.
• Case 3: i > r, so the best candidate arrives after you have started to consider hiring candidates*.  What is the probability that you successfully choose the best candidate in this case?  This is the most complicated of the three situations by far.  The outcome is not certain: you might succeed in choosing the best, but you also might not.  Remember from our brute-force analyses by enumeration that the danger is that you might get fooled into hiring someone who is the best you’ve seen but not the overall best.  What has to happen in order for you to get fooled like this?  You will be fooled precisely when the best of the first (i – 1) candidates occurs after position (r – 1), because that candidate will be the best you have seen so far, and you will hire him or her before ever reaching the overall best.  In other words, you will succeed when the best among the first (i – 1) candidates occurs among the first (r – 1) that you let go by.  Because we’re assuming that all possible orderings are equally likely, the probability of success in this situation is therefore (r – 1) / (i – 1).

* This is the hardest piece for students to follow in the entire three-part post.  I always encourage them to take a deep breath here.  I also reassure them that they can follow along again after this piece, even if they do not understand this part completely.
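For students who would rather trust a computer than the argument above, the (r – 1) / (i – 1) probability can be confirmed by brute force.  Here is a quick sketch in Python (my own code and names, not part of the original presentation; the post’s linked code is in R) that enumerates every ordering with the best candidate fixed in position i:

```python
from fractions import Fraction
from itertools import permutations

def hires_best(order, r):
    """Let the first r - 1 candidates go by, then hire the first candidate
    who is the best seen so far; return True if that hire is the overall best."""
    threshold = max(order[:r - 1])        # best quality among those let go by
    for quality in order[r - 1:]:
        if quality > threshold:           # first best-so-far candidate: hire
            return quality == max(order)
    return False                          # no later candidate beat the early ones

# Case 3 check with n = 6, r = 3, and the best candidate fixed in position i = 5
n, r, i = 6, 3, 5
wins = total = 0
for order in permutations(range(1, n + 1)):
    if order[i - 1] == n:                 # best candidate sits in position i
        total += 1
        wins += hires_best(order, r)

assert Fraction(wins, total) == Fraction(r - 1, i - 1)   # 2/4 = 1/2
```

The assertion passes: among the 120 orderings with the best candidate fifth in line, exactly half lead to success, matching (r – 1) / (i – 1) = 2/4.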

The following diagrams may help students to think through these three cases.  The * symbol reveals the position of the best candidate.  The red region indicates candidates among the first (r – 1) who are not considered for hiring.  The blue region for case 3 contains candidates who could be hired even though they are not the very best candidate.

How do we combine these conditional probabilities to determine the overall probability of success?  When students need a hint, I remind them that candidates arrive in random order, so the best candidate is equally likely to be in any of the n positions.  This means that we simply need to take the average* of these conditional probabilities:

* This is equivalent to using the law of total probability.

This expression simplifies to [(r – 1) / n] × [1/(r – 1) + 1/r + … + 1/(n – 1)].

The above works for values of r ≥ 2.  The r = 1 situation means hiring the first candidate in line, so the probability of success is 1/n when r = 1.  Using S to denote the event that you successfully choose the best candidate, the probability function can therefore be written in general as:

Our task is now clear: For a given value of n, we evaluate this function for all possible values of r (from 1 to n).  Then we determine the value of r that maximizes this probability.  Simple, right?  The only problem is that those sums are going to be very tedious to calculate.  How can we calculate those sums, and determine the optimal value, efficiently?  Students realize that computers are very good (and fast) at calculating things over and over and keeping track of the results.  We just need to tell the computer what to do.

5. Coding the probability function

If your students have programming experience, you could ask them to write the code for this task themselves.  I often give students my code after I first ask some questions to get them thinking about what the code needs to do: How many loops do we need?  Do we need for loops or while loops?  What vectors do we need, and how long are the vectors?

We need two for loops, an outer one that will work through values of r, and an inner one that will calculate the sum term in the probability function.  We also need a vector in which to store the success probabilities for the various values of r; that vector will have length n.

I also emphasize that this is a different use of computing than we use in much of the course.  Throughout my class, we make use of computers to perform simulations.  In other words, we use computers to generate random data according to a particular model or process.  But that’s not what we’re doing here.  Now we are simply using the computer to speed up a very long calculation, and then produce a graph of the results, and finally pick out the maximum value in a list.

Here is some R code* that accomplishes this task:

* A link to a file containing this code appears at the end of this post.
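For readers who prefer another language, here is a sketch of the same computation in Python (the linked file is R; these function and variable names are my own).  It has exactly the structure described above: an outer loop over r, an inner loop for the sum term, and a vector of length n to store the success probabilities:

```python
def success_probs(n):
    """probs[r - 1] = probability that the let-(r - 1)-go-by strategy
    hires the best of n candidates."""
    probs = [0.0] * n                  # vector of length n, one entry per r
    probs[0] = 1.0 / n                 # r = 1: simply hire the first in line
    for r in range(2, n + 1):          # outer loop over values of r
        total = 0.0
        for i in range(r, n + 1):      # inner loop: the sum in the formula
            total += 1.0 / (i - 1)
        probs[r - 1] = (r - 1) / n * total
    return probs

probs = success_probs(4)
best_r = probs.index(max(probs)) + 1   # optimal r for n = 4
```

For n = 4 this produces probabilities 0.25, 11/24 ≈ 0.4583, 10/24 ≈ 0.4167, and 0.25 for r = 1 through 4, with the maximum at r = 2.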

We figured out the n = 4 case by analyzing all 24 orderings in the last post (here), so let’s first test this code for that situation.  Here’s the resulting output:

Explain how this graph is consistent with what we learned previously.  We compared the “let 1 go by” and “let 2 go by” strategies.  We determined the probabilities of successfully choosing the best to be 11/24 ≈ 0.4583 and 10/24 ≈ 0.4167, respectively.  The “let 1 go by” strategy corresponds to r = 2, and “let 2 go by” means r = 3.  Sure enough, the probabilities shown in the graph for r = 2 and r = 3 look to be consistent with these probabilities.  Why does it make sense that r = 1 and r = 4 give success probabilities of 0.25?  Setting r = 1 means always hiring the first candidate in line.  That person will be the best with probability 1/4.  Similarly, r = 4 means always hiring the last of the four candidates in line, so this also has a success probability of 1/4.

Let’s do one more test, this time with n = 5 candidates, which I mentioned near the end of the previous post.  Here’s the output:

Based on this output, describe the optimal strategy with 5 candidates.  Is this consistent with what I mentioned previously?  Now the value that maximizes the success probability is r = 3.  The optimal strategy is to let the first two candidates go by*; then starting with the third candidate, hire the first one you encounter who is the best so far.  This output for the n = 5 case is consistent with what I mentioned near the end of the previous post.  The success probabilities are 24/120, 50/120, 52/120, 42/120, and 24/120 for r = 1, 2, 3, 4, and 5, respectively.

* Even though I keep using the phrase “go by,” this means that you assess their quality when you interview those candidates, because later you have to decide whether a candidate is the best you’ve seen so far.
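Both of these test cases can also be confirmed exactly, with no rounding, using rational arithmetic.  Here is a quick check (a Python sketch of my own; the post’s code is in R) using the simplified form of the probability function:

```python
from fractions import Fraction

def p_success(n, r):
    """Exact probability that the let-(r - 1)-go-by strategy chooses the best."""
    if r == 1:
        return Fraction(1, n)          # always hire the first candidate in line
    return Fraction(r - 1, n) * sum(Fraction(1, i - 1) for i in range(r, n + 1))

# n = 4: "let 1 go by" (r = 2) beats "let 2 go by" (r = 3)
assert p_success(4, 2) == Fraction(11, 24)
assert p_success(4, 3) == Fraction(10, 24)

# n = 5: success probabilities 24/120, 50/120, 52/120, 42/120, 24/120
fifths = [p_success(5, r) for r in range(1, 6)]
assert fifths == [Fraction(k, 120) for k in (24, 50, 52, 42, 24)]
```

All of the assertions pass, reproducing the fractions from the enumeration analysis.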

6. Practice with a particular case

Now consider the case of n = 12 candidates, which produces this output:

Describe the optimal strategy.  The value r = 5 maximizes the success probability when n = 12.  The optimal strategy is therefore to let the first 4 candidates go by and then hire the first one you find who is the best so far.  What percentage of the time will this strategy succeed in choosing the best?  The success probability with this strategy is 0.3955, so this strategy will succeed 39.55% of the time in the long run.  How does this probability compare to the n = 5 case?  This probability (of successfully choosing the best) continues to get smaller as the number of candidates increases.  But the probability has dropped by less than 4 percentage points (from 43.33% to 39.55%) as the number of candidates increased from 5 to 12.
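These n = 12 numbers are easy to verify exactly.  The sketch below (Python, with my own names; the post’s code is in R) evaluates the probability function with rational arithmetic and picks out the maximum:

```python
from fractions import Fraction

def p_success(n, r):
    """Exact success probability for the let-(r - 1)-go-by strategy."""
    if r == 1:
        return Fraction(1, n)
    return Fraction(r - 1, n) * sum(Fraction(1, i - 1) for i in range(r, n + 1))

probs_12 = [p_success(12, r) for r in range(1, 13)]
best_r = probs_12.index(max(probs_12)) + 1     # r = 5: let the first 4 go by
```

The maximum occurs at r = 5 with probability ≈ 0.3955, compared with 52/120 ≈ 0.4333 for the optimal strategy with n = 5, a drop of less than four percentage points.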

To make sure that students understand how the optimal strategy works, I ask them to apply the strategy to the following 25 randomly generated orderings (from the population of 12! = 479,001,600 different orderings with 12 candidates).  This exercise can also be helpful for understanding the three cases that we analyzed in deriving the probability function above.  For each ordering, determine whether or not the optimal strategy succeeds in choosing the best candidate.

I typically give students 5-10 minutes or so to work on this, and I encourage them to work in groups.  Sometimes we work through several orderings together to make sure that they get off to a good start.  With ordering A, the best candidate appears in position 3, so our let-4-go-by strategy means that we’ll miss out on choosing the best.  The same is true for ordering B, for which the best candidate is in position 2.  With ordering C, we’re fooled into hiring the second-best candidate sitting in position 6, and we never get to the best candidate, who is back in position 10.  Orderings D and E are both ideal, because the best candidate is sitting in the prime spot of position 5, the very first candidate that we actually consider hiring.  Ordering F is another unlucky one in which the best candidate appears early, while we are still letting all candidates go by.

I like to go slowly through ordering G with students.  Is ordering G a winner or a loser?  It’s a winner!  Why is it a winner when the best candidate is the very last one in line?  Because we got lucky with the second-best candidate showing up among the first four, which means that nobody other than the very best would be tempting enough to hire.

The following table shows the results of this exercise.  The numbers in bold color indicate which candidate would be hired.  Green letters and numbers reveal which orderings lead to successfully choosing the best.  Red letters and numbers indicate orderings that are not successful.

In what proportion of the 25 random orderings does the optimal strategy succeed in choosing the best candidate?  Is this close to the long-run probability for this strategy?  The optimal strategy resulted in success for 10 of these 25 orderings.  This proportion of 0.40 is very close to the long-run probability of 0.3955 for the n = 12 case from the R output.

7. Analyzing graphs, with a hint of the Remarkable Result

Now let’s run the code to analyze the probability function, and determine the optimal strategy, for larger numbers of candidates.  Here’s the output for n = 50 candidates:

Describe the optimal strategy.  What is its probability of success?  How has this changed from having only 12 candidates?  The output reveals that the optimal value is r = 19, so the optimal strategy is to let the first 18 candidates go by and then hire the first one who is the best so far.  The probability of success is 0.3743, which is only about two percentage points smaller than when there were only 12 candidates.  How were your initial guesses for this probability?  Most students find that their initial guesses were considerably lower than the actual probability of success with the optimal strategy.

The graph on the left below shows how the optimal value of r changes as the number of candidates ranges from 1 to 50, and the graph on the right reveals how the optimal probability of success changes:

Describe what each graph reveals.  The optimal value of r increases roughly linearly with the number of candidates n.  This optimal value always stays the same for two or three values of n before increasing by one.  As we expected, the probability of success with the optimal strategy decreases as the number of candidates increases.  But this decrease is very gradual, much slower than most people expect.  Increasing the number of candidates from 12 to 50 only decreases the probability of success from 0.3955 to 0.3743, barely more than two percentage points.
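The near-linear pattern in the left graph is easy to reproduce.  Here is an efficient sketch (Python, names my own; it maintains a running harmonic sum so that each n takes only linear time) that computes the optimal value of r for each number of candidates:

```python
def optimal_r(n):
    """Value of r that maximizes the success probability for n candidates."""
    h = sum(1.0 / j for j in range(1, n))   # sum of 1/(i-1) for i = 2, ..., n
    best_r, best_p = 1, 1.0 / n             # start with r = 1: hire the first
    for r in range(2, n + 1):
        p = (r - 1) / n * h                 # h holds the sum of 1/(i-1) for i = r, ..., n
        if p > best_p:
            best_r, best_p = r, p
        h -= 1.0 / (r - 1)                  # shrink the sum for the next r
    return best_r

# the optimal r creeps up roughly linearly with n
rs = [optimal_r(n) for n in range(2, 51)]
```

This reproduces the staircase pattern: the optimal r never decreases as n grows, holding steady for a few values of n before stepping up by one, and it equals 2, 3, 5, and 19 for n = 4, 5, 12, and 50.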

Now consider output for the n = 500 (on the left) and n = 5000 (on the right) cases*:

* The n = 5000 case takes only one second on my laptop.

What do these functions have in common?  All of these functions have a similar shape, concave-down and slightly asymmetric with a longer tail to the high end.   How do the optimal values of r compare?  The optimal values of r are 185 when n = 500 and 1840 when n = 5000.  By increasing the number of candidates tenfold, the optimal value of r increases almost tenfold.  In both cases, the optimal strategy is to let approximately 37% of the candidates go by, and then hire the first you see who is the best so far.  How quickly is the optimal probability of success decreasing?  This probability is decreasing very, very slowly.  Increasing the number of candidates from 50 to 500 to 5000 only results in the success probability (to three decimal places) falling from 0.374 to 0.369 to 0.368.
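If you would like to check these larger cases yourself, the following sketch (Python, my own names; the running-sum trick keeps even n = 5000 essentially instantaneous) returns both the optimal r and its success probability:

```python
def best_strategy(n):
    """Return (optimal r, success probability) for n candidates."""
    h = sum(1.0 / j for j in range(1, n))   # harmonic sum H_{n-1}
    best_r, best_p = 1, 1.0 / n
    for r in range(2, n + 1):
        p = (r - 1) / n * h                 # h = sum of 1/(i-1) for i = r, ..., n
        if p > best_p:
            best_r, best_p = r, p
        h -= 1.0 / (r - 1)
    return best_r, best_p

for n in (50, 500, 5000):
    r, p = best_strategy(n)
    print(n, r, round(p, 3))
```

This reproduces the values above: optimal r of 19, 185, and 1840, with success probabilities of 0.374, 0.369, and 0.368 to three decimal places.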

The following graphs display the optimal value of r, and the probability of success with the optimal strategy, as functions of the number of candidates:

Describe what these graphs reveal.  As we noticed when we examined the graph up to 50 candidates, the optimal value of r continues to increase roughly linearly.  The probability of success with the optimal strategy continues to decrease at a very, very slow rate.

Some students focus so intently on answering my questions that they miss the forest for the trees, so I ask: Do you see anything remarkable here?  Yes!  What’s so remarkable?  The decrease in probability is so gradual that it’s hard to see with the naked eye in this graph.  Moreover, if 5000 candidates apply for your job, and your hiring process has to decide on the spot about each candidate that you interview, with no opportunity to ever go back and consider someone that you previously passed on, you can still achieve a 36.8% chance of choosing the very best candidate in the entire 5000-person applicant pool.

How were your guesses, both at the start of the previous post and the start of this one?

Let’s revisit the following table again, this time with probabilities filled in through 5000 candidates.  There’s only one guess left to make.  Even though the probability of choosing the best has decreased only slightly as we increase the number of candidates from 50 to 500 to 5000 candidates, there’s still a very long way to go from 5000 to almost 7.8 billion candidates!  Make your guess for the probability of successfully choosing the best if every person in the world applies for this job.

In the next post we will determine what happens as the number of candidates approaches infinity.  This problem provides a wonderful opportunity to apply some ideas and tools from single-variable calculus.  We will also discuss some other applications, including how you can amaze your friends and, much more importantly, find your soulmate in life!

P.S.  Here is a link to a file with the R code for evaluating the probability function:

## #47 My favorite problem, part 1

I described my favorite question in post #2 (here) and my favorite theorem in post #10 (here). Now I present my favorite problem and describe how I present it to students.  I have presented this to statistics and mathematics majors in a probability course, as a colloquium for math and stat majors at other institutions, and for high school students in a problem-solving course or math club.  I admit that the problem is not especially important or even realistic, but it has a lot of virtues: 1) easy to understand the problem and follow along to a Key Insight, 2) produces a Remarkable Result, 3) demonstrates problem-solving under uncertainty, and 4) allows me to convey my enthusiasm for probability and decision-making.  Mostly, though, this problem is: 5) a lot of fun!

I take about 50 minutes to present this problem to students.  For the purpose of this blog, I will split this into a three-part series.  To enable you to keep track of where we’ve been and where we’re going, here’s an outline:

I believe that students at all levels, including middle school, can follow all of part 1, and the Key Insight that emerges in section 3 is not to be missed.  The derivation in section 4 gets fairly math-y, so some students might want to skim or skip that section.  But then sections 6 and 7 are widely accessible, providing students with practice applying the optimal strategy and giving a hint at the Remarkable Result to come.  Sections 8 and 9 require some calculus, both derivatives and integrals.  Students who have not studied calculus could skip ahead to the confirmation of the Remarkable Result at the end of section 9.

As always, questions that I pose to students appear in italics.

1. A personal story

Before we jump in, I’ll ask for your indulgence as I begin with an autobiographical digression*.

* Am I using this word correctly here – is it possible to digress even before the story begins?  Hmm, I should look into that.  But I digress …

In the fall of 1984, I was a first-year graduate student in the Statistics Department at Carnegie Mellon University.  My professors and classmates were so brilliant, and the coursework was so demanding, that I felt under-prepared and overwhelmed.  I was questioning whether I had made the right decision in going to graduate school.  I even felt intimidated in the one course that was meant to be a cakewalk: Stat 705, Perspectives on Statistics.  This course consisted of faculty talking informally to new graduate students about interesting problems or projects that they were working on, but I was dismayed that even these talks went over my head.  I was especially dreading going to class on the day that the most renowned faculty member in the department, Morrie DeGroot, was scheduled to speak.  He presented a problem that he called “choosing the best,” which is more commonly known as the “secretary problem.”  I thought it was a fascinating problem with an ingenious solution.  Even better, I understood it!  Morrie’s talk went a long way in convincing me that I was in the right place after all.

When I began looking for undergraduate teaching positions several years later, I used Morrie’s “choosing the best” problem for my teaching demonstration during job interviews.  It didn’t go very well.  One reason is that I did not think carefully enough about how to adapt the problem for presenting to undergraduates.  Another reason is that, being a novice teacher, I had not yet come to realize the importance of structuring my presentation around … (wait for it) … asking good questions!

A few years later, I revised my “choosing the best” presentation to make it accessible and (I hope) engaging for students at both undergraduate and high school levels.  Since then, I have enjoyed giving this talk to many groups of students.  This is my first attempt to put this presentation in writing.

2. The problem statement, and making predictions

Here’s the background of the problem: Your task is to hire a new employee for your company.  Your supervisor imposes the following restrictions on the hiring process:

1. You know how many candidates have applied for the position.
2. The candidates arrive to be interviewed in random order.
3. You interview candidates one at a time.
4. You can rank the candidates that you have interviewed from best to worst, but you have no prior knowledge about the quality of the candidates.  In other words, after you’ve interviewed one person, you have no idea whether she is a good candidate or not.  After you’ve interviewed two people, you know who is better and who is worse (ties are not allowed), but you do not know how they compare to the candidates yet to be interviewed.  And so on …
5. Once you have interviewed a candidate, you must decide immediately whether to hire that person.  If you decide to hire, the process ends, and all of the other candidates are sent home.  If you opt not to hire, the process continues, but you can no longer consider any candidates that you have previously interviewed.  (You might assume that some other company has snatched up the candidates that you decided to pass on.)
6. Your supervisor will be satisfied only if you hire the best candidate.  Hiring the second best candidate is no better than hiring the very worst.

The first three of these conditions seem very reasonable.  The fourth one is a bit limiting, but the last two are incredibly restrictive!  You have to make a decision immediately after seeing each candidate?  You can never go back and reconsider a candidate that you’ve seen earlier?  You’ve failed if you don’t hire the very best candidate?  How can you have any chance of succeeding at this seemingly impossible task?  That’s what we’re about to find out.

To prompt students to think about how daunting this task is, I start by asking: For each of the numbers of candidates given in the table, make a guess for the optimal probability that you will succeed at hiring the best candidate.

Many students look at me blankly when I first ask for their guesses.  I explain that the first entry means that only two people apply for the job.  Make a guess for the probability that you successfully select the best candidate, according to the rules described above.  Then make a guess for this probability when four people apply.  Then increase the applicant pool to 12 people, and then 24 people.  Think about whether you expect this probability to increase, decrease, or stay the same as the number of candidates increases.  Then what if 50, or 500, or 5000 people apply – how likely are you to select the very best applicant, subject to the harsh rules we’ve discussed?  Finally, the last entry is an estimate of the total number of people in the world (obtained here on May 24, 2020).  What’s your guess for the probability of selecting the very best candidate if every single person on the planet applies for the job?

I hope that students guess around 0.5 for the first probability and then make smaller probability guesses as the number of candidates increases.  I expect pretty small guesses with 24 candidates, extremely small guesses with 500 candidates, and incredibly small guesses with about 7.78 billion candidates*.

* With my students, I try to play up the idea of how small these probabilities must be, but some of them are perceptive enough to realize that this would not be my favorite problem unless it turns out that we can do much, much better than most people expect.

We’ll start by using brute-force enumeration to analyze this problem for small numbers of candidates.

Suppose that only one candidate applies: What will you do?  What is your probability of choosing the best candidate?

This is a great situation, right?  You have no choice but to hire this person, and they are certainly the best candidate among those who applied, so your probability of successfully choosing the best is 1!*

* I joke with students that the exclamation point here really does mean one-factorial.  I have to admit that in a typical class of 35 or so students, the number who appreciate this joke is usually no larger than 1!

Now suppose that two candidates apply.  In how many different orderings can the two candidates arrive?  There are two possible orderings: A) The better candidate comes first and the worse one second, or B) the worse candidate comes first and the better one second.  Let me use the notation 12 for ordering A and 21 for ordering B.

What are your options for your decision-making process here?  Well, you can hire the first person in line, or you can hire the second person in line.  Remember that rule #4 means that after you have interviewed the first candidate, you have no idea as to whether the candidate was a strong or weak one.  So, you really do not gain any helpful information upon interviewing the first candidate.

What are the probabilities of choosing the best candidate with these options?  There’s nothing clever or complicated here.  You succeed if you hire the first person with ordering A, and you succeed if you hire the second person with ordering B.  These two orderings are equally likely, so your probability of choosing the best is 0.5 for either option.

I understand that we’re not off to an exciting start.  But stay tuned, because we’re about to discover the Key Insight that will ratchet up the excitement level.

Now suppose that three candidates apply.  How many different orderings of the three candidates are possible?  Here are the six possible orderings:

What should your hiring strategy be?  One thought is to hire the first person in line.  Then what’s your probability of choosing the best?  Orderings A and B lead to success, and the others do not, so your probability of choosing the best is 2/6, also known as 1/3.  What if you decide to hire the second person in line?  Same thing: orderings C and E produce success, so the probability of choosing the best is again 2/6.  Okay, then how about deciding to hire the last person in line?  Again the same: 2/6 probability of success (D and F produce success).

Well, that’s pretty boring.  At this point you’re probably wondering why in the world this is my favorite problem.  But I’ll let you in on a little secret: We can do better.  We can adopt a more clever strategy that achieves a higher success probability than one-third.  Perhaps you’ve already had the Key Insight.

Let’s think through this hiring process one step at a time.  Imagine yourself sitting at your desk, waiting to interview the three candidates who have lined up in the hallway.  You interview the first candidate.  Should you hire that person?  Definitely not, because you’re stuck with that 1/3 probability of success if you do that.  So, you should thank the first candidate but say that you will continue looking.  Move on to interview the second candidate.

The optimal answer to whether you should hire the second candidate consists of two words: It depends.  On what does it depend?  On whether the second candidate is better or worse than the first one.  If the second person is better than the first one, should you hire that person?  Sure, go ahead.  But if the second person is worse than the first one, should you hire that person?  Absolutely not!  In this case, you know for sure that you’re not choosing the best if you hire the second person knowing that the first one was better.  The only sensible decision is to take your chances with the third candidate.

You caught that, right?  That was the Key Insight I’ve been promising.  You learn something by interviewing the first candidate, because that enables you to discern whether the second candidate is better or worse than the first.  You can use this knowledge to increase your probability of choosing the best.

To make sure that we’re all clear about this, let me summarize the strategy: Interview the first candidate but do not hire her.  Then if the second candidate is better than the first, hire the second candidate.  But if the second candidate is worse than the first, hire the third candidate.

Determine the probability of successfully choosing the best with this strategy.  For students who need a hint: For each of the six possible orderings, determine whether or not this strategy succeeds at choosing the best.

First notice that orderings A (123) and B (132) do not lead to success, because the best candidate is first in line.  But ordering C (213) is a winner: The second candidate is better than the first, so you hire her, and she is in fact the best.  Ordering D (231) takes advantage of the key insight: The second candidate is worse than the first, so you keep going and hire the third candidate, who is indeed the best.  Ordering E (312) is also a winner.  But with ordering F (321), you hire the second person, because she is better than the first person, not knowing that the best candidate is still waiting in the wings.  The orderings for which you succeed in choosing the best are shown with + in bold green here:

The probability of successfully choosing the best is therefore 3/6 = 0.5.  Increasing the number of candidates from 2 to 3 does not reduce the probability of choosing the best, as long as you use the strategy based on the Key Insight.
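If you (or your students) would like to verify this count with a computer rather than by hand, here is a short Python sketch, not part of the original activity, that enumerates all six orderings and applies the strategy exactly as stated (I use rank 1 to denote the best candidate):

```python
from itertools import permutations

def three_candidate_strategy_wins(ordering):
    # ordering lists candidate ranks in arrival order; rank 1 = best.
    # Strategy: never hire the first candidate; hire the second if she
    # beats the first, otherwise hire the third.
    first, second, third = ordering
    hired = second if second < first else third
    return hired == 1  # success means we hired the very best

wins = sum(three_candidate_strategy_wins(p) for p in permutations((1, 2, 3)))
print(wins, "successes out of 6")  # prints: 3 successes out of 6
```

The three winners are exactly orderings C, D, and E from the table.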

Now let’s consider the case with 4 candidates.  How many different orderings are possible?  The answer is 4! = 24, as shown here:

Again we’ll make use of the Key Insight.  You should certainly not hire the first candidate in line.  Instead use the knowledge gained from interviewing that candidate to assess whether subsequent candidates are better or worse.  Whenever you find a candidate who is the best that you have encountered, hire her. We still need to decide between these two hiring strategies:

• Let the first candidate go by.  Then hire the next candidate you see who is the best so far.
• Let the first two candidates go by.  Then hire the next candidate you see who is the best so far.

How can we decide between these two hiring strategies?  For students who need a hint, I offer: Make use of the list of 24 orderings.  We’re again going to use a brute force analysis here, nothing clever.  For each of the two strategies, we’ll go through all 24 orderings and figure out which lead to successfully choosing the best.  Then we’ll count how many orderings produce winners for the first strategy and how many do so for the second strategy.

Go ahead and do this.  At this point I encourage students to work in groups and give them 5-10 minutes to conduct this analysis.  I ask them to mark the ordering with $ if it produces a success with the first strategy and with a # if it leads to success with the second strategy.  After a minute or two, to make sure that we’re all on the same page, I ask: What do you notice about the first row of orderings?  A student will point out that the best candidate always arrives first in that row, which means that you never succeed in choosing the best with either of these strategies.  We can effectively start with the second row.

Many students ask about ordering L (2431), wondering whether either strategy calls for hiring the third candidate because she is better than the second one.  I respond by asking whether the third candidate is the best that you have seen so far.  The answer is no, because the first candidate was better.  Both strategies say to keep going until you find a candidate who is better than all that you have seen before that point.

When most of the student groups have finished, I go through the orderings one at a time and ask them to tell me whether or not each results in success for the “let 1 go by” strategy.  As we’ve already discussed, the first row, in which the best candidate arrives first, does not produce any successes.  But the second row tells a very different story.  All six orderings in the second row produce success for the “let 1 go by” strategy.  Because the second-best candidate arrives first in the second row, this strategy guarantees that you’ll keep looking until you find the very best candidate.  The third row is a mixed bag.  Orderings M and N are winners because the best candidate is second in line.  Orderings O and P are instructive, because we are fooled into hiring the second-best candidate and leave the best waiting in the wings.  Ordering Q produces success but R does not.  In the fourth row, the first two orderings are winners but the rest are not.  Here’s the table, with successes marked by $ in bold green:

How about the “let 2 go by” strategy?  Again the first row produces no successes.  The first two columns are also unlucky, because the best candidate was second in line and therefore passed over.  Among the orderings that are left, all produce successes except R and X, where we are fooled into hiring the second-best candidate.  Orderings O, P, U, V, and W are worth noting, because they lead to success for the “let 2 go by” strategy but not for “let 1 go by.”  Here’s the table for the “let 2 go by” strategy, with successes marked by # in bold green:

So, which strategy does better?  It’s a close call, but we see 11 successes with “let 1 go by” (marked with $) and 10 successes with “let 2 go by” (indicated by #).  The probability of choosing the best is therefore 11/24 ≈ 0.4583 by using the optimal (let 1 go by) strategy with 4 candidates.

How does this probability compare to the optimal strategy with 3 candidates?  The probability has decreased a bit, from 0.5 to 0.4583.  This is not surprising; we knew that the task gets more challenging as the number of candidates increases.  What is surprising is that the decrease in this probability has been so small as we moved from 2 to 3 to 4 candidates.  How does this probability compare to the naïve strategy of hiring the first person in line with 4 candidates?  We’re doing a lot better than that, because 45.83% is a much higher success rate than 25%.

These examples with very small numbers of candidates suggest the general form of the optimal* strategy:

• Let a certain number of candidates go by.
• Then hire the first candidate you see who is the best among all you have seen thus far.

* I admit to mathematically inclined students that I have not formally proven that this strategy is optimal.  For a proof, see Morrie DeGroot’s classic book Optimal Statistical Decisions.

Ready for one more? Now suppose that there are 5 candidates.  What’s your guess for the optimal strategy – let 1 go by, or let 2 go by, or let 3 go by?  In other words, the question is whether we want to garner information from just one candidate before we seriously consider hiring, or if it’s better to learn from two candidates before we get serious, or perhaps it’s best to take a look at three candidates.  I don’t care what students guess, but I do want them to reflect on the Key Insight underlying this question before they proceed.  How many possible orderings are there?  There are now 5! = 120 possible orderings.  Do you want to spend your time analyzing these 120 orderings by brute force, as we did with 24 orderings in the case of 4 candidates?  I am not disappointed when students answer no, because I hope this daunting task motivates them to want to analyze the general case mathematically.  Just for fun, let me show the 120 orderings:

We could go through all 120 orderings one at a time. For each one, we could figure out whether it’s a winner or a loser with the “let 1 go by” strategy, and then repeat for “let 2 go by,” and then again for “let 3 go by.”  I do not ask my students to perform such a tedious task, and I’m not asking you to do that either.  How about if I just tell you how this turns out?  The “let 1 go by” strategy produces a successful outcome for 50 of the orderings, compared to 52 orderings for “let 2 go by” and 42 orderings for “let 3 go by.”
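Rather than slogging through 120 orderings by hand, a short script can do the brute-force work.  This sketch (the function names are mine, not part of the activity) encodes the general “let r go by” strategy and reproduces both the 4-candidate counts and the counts just quoted:

```python
from itertools import permutations

def let_r_go_by_wins(ordering, r):
    # Hire the first candidate after the initial r who is the best so far;
    # rank 1 = best.  If the best candidate was among the first r, no later
    # candidate ever qualifies, and the strategy fails.
    best_passed = min(ordering[:r])
    for rank in ordering[r:]:
        if rank < best_passed:
            return rank == 1
    return False

def count_wins(n, r):
    return sum(let_r_go_by_wins(p, r) for p in permutations(range(1, n + 1)))

print([count_wins(4, r) for r in (1, 2)])     # [11, 10], as counted by hand
print([count_wins(5, r) for r in (1, 2, 3)])  # [50, 52, 42]
```

The same function works for any number of candidates, which is handy for exploring larger cases once the general analysis arrives in the next post.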

Describe the optimal strategy with 5 candidates.  Let the first 2 candidates go by.  Then hire the first candidate you see who is the best you’ve seen to that point.  What is the probability of success with that strategy?  This probability is 52/120 ≈ 0.4333.  Interpret this probability.  If you were to use the optimal strategy with 5 candidates over and over and over again, you would successfully choose the best candidate in about 43.33% of those situations.  Has this probability decreased from the case with 4 candidates?  Yes, but only slightly, from 45.83% to 43.33%.  Is this probability larger than a naïve approach of hiring the first candidate?  Yes, a 43.33% chance is much greater than a 1/5 = 20% chance.

We’ve accomplished a good bit, thanks to the Key Insight that we discovered in the case with three candidates.  Here is a graph of the probability of choosing the best with the optimal strategy, as a function of the number of candidates:

Sure enough, this probability is getting smaller as the number of candidates increases.  But it’s getting smaller at a much slower pace than most people expect.  What do you think will happen as we increase the number of candidates?  I’ll ask you to revise your guesses from the beginning of this activity, based on what we have learned thus far.  Please make new guesses for the remaining values in the table:

I hope you’re intrigued to explore more about this probability function.  We can’t rely on a brute force analysis any further, so we’ll do some math to figure out the general case in the next post.  We’ll also practice applying the optimal strategy on the 12-candidate case, and we’ll extend this probability function as far as 5000 candidates.  This will provide a strong hint of the Remarkable Result to come.

## #46 How confident are you? Part 3

How confident are you that your students can explain:

• Why do we use a t-distribution (rather than the standard normal z-distribution) to produce a confidence interval for a population mean?
• Why do we check a normality condition, when we have a small sample size, before calculating a t-interval for a population mean?
• Why do we need a large enough sample size to calculate a normal-based confidence interval for a population proportion?

I suspect that my students think we invent these additional complications – t instead of z, check normality, check sample size – just to torment them.  It’s hard enough to understand what 95% confidence means (as I discussed in post #14 here), and that a confidence interval for a mean is not a prediction interval for a single observation (see post #15 here).

These questions boil down to asking: What goes wrong if we use a confidence interval formula when the conditions are not satisfied?  If nothing bad happens when the conditions are not met, then why do we bother checking conditions?  Well, something bad does happen.  That’s what we’ll explore in this post.  Once again we’ll use simulation as our tool.  In particular, we’ll return to an applet called Simulating Confidence Intervals (here).  As always, questions for students appear in italics.

1. Why do we use a t-distribution, rather than a z-distribution, to calculate a confidence interval for a population mean?

It would be a lot easier, and would seem to make considerable sense, just to plug in a z-value, like this*: x-bar ± z* × (s / √n).

* I am using standard notation: x-bar for sample mean, s for sample standard deviation, n for sample size, and z* for a critical value from a standard normal distribution.  I often give a follow-up group quiz in which I simply ask students to describe what each of these four symbols means, along with μ.

Instead we tell students that we need to use a different multiplier, which comes from a completely different probability distribution, like so: x-bar ± t* × (s / √n).

Many students believe that we do this just to make their statistics course more difficult.  Other students accept that this adjustment is necessary for some reason, but they figure that they are incapable of understanding why.

We can inspire better reactions than these.  We can lead students to explore what goes wrong if we use the z-interval and how the t-interval solves the problem.  As we saw in post #14 (here), the key is to use simulation to explore how confidence intervals behave when we randomly generate lots and lots of them (using the applet here).

To conduct this simulation, we need to assume what the population distribution looks like.  For now let’s assume that the population has a normal distribution with mean 50 and standard deviation 10.  We’ll use a very small sample size of 5, a confidence level of 95%, and we’ll simulate selecting 500 random samples from the population.  Using the first formula above (“z with s”), the applet produces output like this:

The applet reports that 440 of these 500 intervals (88.0%, the ones colored green) succeed in capturing the population mean.  The success percentage settles down to about 87.8% after generating many thousands more of these intervals.  I ask students:

• What problem with the “z with s” confidence interval procedure does this simulation analysis reveal?  A confidence level of 95% is supposed to mean that 95% of the confidence intervals generated with the procedure succeed in capturing the population parameter, but the simulation analysis reveals that this “z with s” procedure is only succeeding about 88% of the time.
• In order to solve this problem, do we need the intervals to get a bit narrower or wider?  We need the intervals to get a bit wider, so some of the intervals that (barely) fail to include the parameter value of 50 will include it.
• Which of the four terms in the formula – x-bar, z*, s, or n – can we alter to produce a wider interval?  In other words, which one does not depend on the data?  The sample mean, sample standard deviation, and sample size all depend on the data.  We need to use a different multiplier than z* to improve this confidence interval procedure.
• Do we want to use a larger or smaller multiplier than z*?  We need a slightly larger multiplier, in order to make the intervals a bit wider.
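You can also mimic the applet’s simulation with a few lines of your own code.  The following Python sketch (my own, not the applet’s code) simulates the “z with s” procedure with samples of size 5 from a normal population with mean 50 and standard deviation 10; the seed choice is arbitrary:

```python
import math
import random
import statistics

random.seed(1)  # arbitrary seed, for reproducibility
MU, SIGMA, N, Z_STAR = 50, 10, 5, 1.96
REPS = 100_000

hits = 0
for _ in range(REPS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)                # sample standard deviation
    half_width = Z_STAR * s / math.sqrt(N)      # the "z with s" interval
    hits += (xbar - half_width <= MU <= xbar + half_width)

z_with_s_coverage = hits / REPS
print(f"coverage: {z_with_s_coverage:.3f}")     # roughly 0.878, well below 0.95
```

The simulated coverage lands near 88%, matching what the applet shows.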

At this point I tell students that a statistician named Gosset, who worked for Guinness brewery, determined the appropriate multiplier, based on what we call the t-distribution.  I also say that:

• The t-distribution is symmetric about zero and bell-shaped, just like the standard normal distribution.
• The t-distribution has heavier tails (i.e., more area in the tails) than the standard normal distribution.
• The t-distribution is actually an entire family of distributions, characterized by a number called its degrees of freedom (df).
• As the df gets larger and larger, the t-distribution gets closer and closer to the standard normal distribution.
• For a confidence interval for a population mean, the degrees of freedom is one less than the sample size: n – 1.

The following graph displays the standard normal distribution (in black) and a t-distribution with 4 degrees of freedom (in blue).  Notice that the blue curve has heavier tails than the black one, so capturing the middle 95% of the distribution requires a larger critical value.

With a sample size of 5 and 95% confidence, the critical value turns out to be t* = 2.776, based on 4 degrees of freedom.  How does this compare to the value of z* for 95% confidence?  Students know that z* = 1.96, so the new t* multiplier is considerably larger, which will produce wider intervals, which means that a larger percentage of intervals will succeed in capturing the value of the population mean.

That’s great that the new t* multiplier produces wider intervals, but: How can we tell whether this t* adjustment is the right amount to produce 95% confidence?  That’s easy: Simulate!  Here is the result of taking the same 500 samples as above, but using the t-interval rather than the z-interval:

How do these intervals compare to the previous ones?  We can see that these intervals are wider.  Do more of them succeed in capturing the parameter value?  Yes, more are green, and so fewer are red, than before.  In fact, 94.6% of these 500 intervals succeed in capturing the value of 50 that we set for the population mean.  Generating many thousands more samples and intervals reveals that the long-run success rate is very close to 95.0%.

What happens with larger sample sizes?  Ask students to explore this with the applet.  They’ll find that the percentage of successful intervals using the “z with s” method increases with the sample size but remains below 95%.  The coverage success percentages increase to approximately 93.5% with a sample size of n = 20, 94.3% with n = 40, and 94.7% with n = 100.  With the t-method, these percentages hover near 95.0% for all sample sizes.
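If you would rather script this exploration than use the applet, a sketch like the following compares the two multipliers across sample sizes.  The t* values are standard 97.5th-percentile table values with df = n − 1, and the code is my own illustration, not the applet’s:

```python
import math
import random
import statistics

random.seed(2)  # arbitrary seed
MU, SIGMA, REPS = 50, 10, 20_000
# 95% critical values: z* = 1.96; t* values are t-quantiles (0.975), df = n - 1
T_STAR = {5: 2.776, 20: 2.093, 40: 2.023, 100: 1.984}

def coverage(n, multiplier):
    hits = 0
    for _ in range(REPS):
        sample = [random.gauss(MU, SIGMA) for _ in range(n)]
        xbar, s = statistics.mean(sample), statistics.stdev(sample)
        hw = multiplier * s / math.sqrt(n)
        hits += (xbar - hw <= MU <= xbar + hw)
    return hits / REPS

z_cov = {n: coverage(n, 1.96) for n in T_STAR}
t_cov = {n: coverage(n, t) for n, t in T_STAR.items()}
for n in T_STAR:
    print(f"n = {n:3d}:  z with s: {z_cov[n]:.3f}   t: {t_cov[n]:.3f}")
```

The z-with-s coverage climbs toward 95% as n grows but never quite reaches it, while the t coverage sits near 95% throughout.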

Does t* work equally well with other confidence levels?  You can ask students to investigate this with simulation also.  They’ll find that the answer is yes.

By the way, why do the widths of these intervals vary from sample to sample?  I like this question as a check on whether students understand what the applet is doing and how these confidence interval procedures work.  The intervals have different widths because the value of the sample standard deviation (s in the formulas above) varies from sample to sample.

Remember that this analysis has been based on sampling from a normally distributed population.  What if the population follows a different distribution?  That’s what we’ll explore next …

2. What goes wrong, with a small sample size, if the normality condition is not satisfied?

Students again suspect that we want them to check this normality condition just to torment them.  It’s very reasonable for them to ask what bad thing would happen if they (gasp!) use a procedure even when the conditions are not satisfied.  Our strategy for investigating this will come as no surprise: simulation!  We’ll simulate selecting samples, and calculating confidence intervals for a population mean, from two different population distributions: uniform and exponential.  A uniform distribution is symmetric, like a normal distribution, but is flat rather than bell-shaped.  In contrast, an exponential distribution is sharply skewed to the right.  Here are graphs of these two probability distributions (uniform in black, exponential in blue), both with a mean of 50:

The output below displays the resulting t-intervals from simulating 500 samples from a uniform distribution with sample sizes of 5 on the left, 20 on the right:

For these 500 intervals, the percentages that succeed are 92.8% on the left, 94.4% on the right.  Remind me: What does “succeed” mean here?  I like to ask this now and then, to make sure students understand that success means capturing the actual value (50, in this case) of the population mean.  I went on to use R to simulate one million samples from a uniform distribution with these sample sizes.  I found success rates of 93.4% with n = 5 and 94.8% with n = 20.  What do these percentages suggest?  The t-interval procedure works well for data from a uniform population even with samples as small as n = 20 and not badly even with sample sizes as small as n = 5, thanks largely to the symmetry of the uniform distribution.

Sampling from the highly-skewed exponential distribution reveals a different story.  The following output comes from sample sizes (from left to right) of 5, 20, 40, and 100:

The rates of successful coverage in these graphs (again from left to right) are 87.8%, 92.2%, 93.4%, and 94.2%.  The long-run coverage rates are approximately 88.3%, 91.9%, 93.2%, and 94.2%.  With sample data from a very skewed population, the t-interval gets better and better with larger sample sizes, but still fails to achieve its nominal (meaning “in name only”) confidence level even with a sample size as large as 100.
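A similar sketch, sampling instead from an exponential population with mean 50, reproduces this pattern of coverage rates.  Again this is my own illustrative code with an arbitrary seed, using the same table t* values as before:

```python
import math
import random
import statistics

random.seed(3)  # arbitrary seed
MU, REPS = 50, 20_000
T_STAR = {5: 2.776, 20: 2.093, 40: 2.023, 100: 1.984}  # df = n - 1

def t_coverage_exponential(n):
    hits = 0
    for _ in range(REPS):
        # expovariate(1/50) has mean 50 and is sharply right-skewed
        sample = [random.expovariate(1 / MU) for _ in range(n)]
        xbar, s = statistics.mean(sample), statistics.stdev(sample)
        hw = T_STAR[n] * s / math.sqrt(n)
        hits += (xbar - hw <= MU <= xbar + hw)
    return hits / REPS

exp_cov = {n: t_coverage_exponential(n) for n in (5, 20, 40, 100)}
for n, c in exp_cov.items():
    print(f"n = {n:3d}: coverage {c:.3f}")  # creeps toward, but stays below, 0.95
```

Swapping `expovariate` for a uniform generator such as `random.uniform(0, 100)` lets students check the uniform-population results the same way.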

The bottom line, once again, is that when the conditions for a confidence interval procedure are not satisfied, that procedure will successfully capture the parameter values less often than its nominal confidence level.  How much less often depends on the sample size (smaller is worse) and population distribution (more skewed is worse).

Also note that there’s nothing magical about the number 30 that is often cited for a large enough sample size.  A sample size of 5 from a uniform distribution works as well as a sample size of 40 from an exponential distribution, and a sample size of 20 from a uniform distribution is comparable to a sample size of 100 from an exponential distribution.

Next we’ll shift gears to explore a confidence interval for a population proportion rather than a population mean …

3. What goes wrong when the sample size conditions are not satisfied for a confidence interval for a population proportion?

The conventional method for estimating a population proportion π is*: p-hat ± z* × √(p-hat × (1 − p-hat) / n).

* I adhere to the convention of using Greek letters for parameter values, so I use π (pi) for a population proportion.

We advise students not to use this procedure with a small sample size, or when the sample proportion is close to zero or one.  A typical check is that the sample must include at least 10 “successes” and 10 “failures.”  Can students explain why this check is necessary?  In other words, what goes wrong if you use this procedure when the condition is not satisfied?  Yet again we can use simulation to come up with an answer.

Let’s return to the applet (here).  Now we’ll select Proportions, Binomial, and the Wald method (which is one of the names for the conventional method above).  Let’s use a sample size of n = 15 and a population proportion of π = 0.1.  Here is some output for 500 simulated samples and the resulting confidence intervals:

Something weird is happening here.  I only see two red intervals among the 500, yet the applet reports that only 78.6% of these intervals succeeded in capturing the value of the population proportion (0.1).  How do you explain this?  When students are stymied, I direct their attention to the graph of the 500 simulated sample proportions that also appears in the applet:

For students who need another hint: What does the red bar at zero mean?  Those are simulated samples for which there were zero successes.  The resulting confidence “interval” from those samples consists only of the value zero.  Those “intervals” obviously do not succeed in capturing the value of the population proportion, which we stipulated to be 0.1 for the purpose of this simulation.  Because those “intervals” consist of a single value, they cannot be seen in the graph of the 500 confidence intervals.

Setting aside the oddity, the important point here is that less than 80% of the allegedly 95% confidence intervals succeeded in capturing the value of the population parameter: That is what goes wrong with this procedure when the sample size condition is not satisfied.  It turns out that the long-run proportion* of intervals that would succeed, with n = 15 and π = 0.1, is about 79.2%, far less than the nominal 95% confidence level.

* You could ask mathematically inclined students to verify this from the binomial distribution.
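For anyone who wants to carry out that verification, here is one way to do the exact calculation in Python (my own sketch, using z* = 1.96 for 95% confidence): sum, over the binomial distribution of the number of successes, the probabilities of the outcomes whose Wald interval captures π.

```python
from math import comb, sqrt

def wald_coverage(n, pi, z_star=1.96):
    # Exact probability that the Wald interval captures pi, summed over
    # the binomial distribution of the number of successes x.
    total = 0.0
    for x in range(n + 1):
        p_hat = x / n
        hw = z_star * sqrt(p_hat * (1 - p_hat) / n)
        if p_hat - hw <= pi <= p_hat + hw:
            total += comb(n, x) * pi**x * (1 - pi)**(n - x)
    return total

print(f"{wald_coverage(15, 0.1):.3f}")  # prints 0.792, far below the nominal 0.95
```

Only the outcomes with 1 through 5 successes produce intervals that capture 0.1, which is where the 79.2% figure comes from.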

Fortunately, we can introduce students to a simple alternative procedure, known as “plus-four,” that works remarkably well.  The idea of the plus-four interval is to pretend that the sample contained two more “successes” and two more “failures” than it actually did, and then carry on like always.  The plus-four 95% confidence interval* is therefore: p-tilde ± 1.96 × √(p-tilde × (1 − p-tilde) / (n + 4)).

The p-tilde symbol here represents the modified sample proportion, after including the fictional successes and failures.  In other words, if x represents the number of successes, then p-tilde = (x + 2) / (n + 4).

How does p-tilde compare to p-hat?  Often a student will say that p-tilde is larger than p-hat, or smaller than p-hat.  Then I respond with a hint: What if p-hat is less than 0.5, or equal to 0.5, or greater than 0.5?  At this point, some students realize that p-tilde is closer to 0.5 than p-hat, or equal to 0.5 if p-hat was already equal to 0.5.

Does this fairly simple plus-four adjustment really fix the problem?  Let’s find out with … simulation!  Here are the results for the same 500 simulated samples that we looked at above:

Sure enough, this plus-four method generated a 93.8% success rate among these 500 intervals.  In the long run (with this case of n = 15 and π = 0.1), the success rate approaches 94.4%.  This is very close to the nominal confidence level of 95%, vastly better than the 79.2% success rate with the conventional (Wald) method.  The graph of the distribution of 500 simulated p-tilde values on the right above reveals the cause for the improvement: The plus-four procedure now succeeds when there are 0 successes in the sample, producing a p-tilde value of 2/19 ≈ 0.105, and this procedure fails only with 4 or more successes in the sample.
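The same exact-binomial calculation works for the plus-four procedure; this sketch (again my own code, with z* = 1.96) confirms the long-run rates for both n = 15 and n = 16:

```python
from math import comb, sqrt

def plus_four_coverage(n, pi, z_star=1.96):
    # Exact coverage of the plus-four interval: pretend the sample
    # contained two extra "successes" and two extra "failures."
    total = 0.0
    for x in range(n + 1):
        p_tilde = (x + 2) / (n + 4)
        hw = z_star * sqrt(p_tilde * (1 - p_tilde) / (n + 4))
        if p_tilde - hw <= pi <= p_tilde + hw:
            total += comb(n, x) * pi**x * (1 - pi)**(n - x)
    return total

print(f"n = 15: {plus_four_coverage(15, 0.1):.3f}")  # prints 0.944
print(f"n = 16: {plus_four_coverage(16, 0.1):.3f}")  # prints 0.983
```

With n = 15 and π = 0.1, the interval succeeds exactly when the sample contains 0 through 3 successes, which sums to the 94.4% coverage quoted above.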

Because of the discreteness of a binomial distribution with a small sample size, the coverage probability is very sensitive to small changes.  For example, increasing the sample size from n = 15 to n = 16, with a population proportion of π = 0.1, increases the coverage rate with the 95% plus-four procedure from 94.4% to 98.3%.  Having a larger coverage rate than the nominal confidence level is better than having a smaller one, but notice that the n = 16 rate misses the target value of 95% by more than the n = 15 case.  Still, the plus-four method produces a coverage rate much closer to the nominal confidence level than the conventional method for all small sample sizes.

Let’s practice applying this plus-four method to sample data from the blindsight study that I described in post #12 (Simulation-based inference, part 1, here).  A patient who suffered brain damage that caused vision loss on the left side of her visual field was shown 17 pairs of house drawings.  For each pair, one of the houses was shown with flames coming out of the left side.  The woman said that the houses looked identical for all 17 pairs.  But when she was asked which house she would prefer to live in, she selected the non-burning house in 14 of the 17 pairs.

The population proportion π to be estimated here is the long-run proportion of pairs for which the patient would select the non-burning house, if she were to be shown these pairs over and over.  Is the sample size condition for the conventional (Wald) confidence interval procedure satisfied?  No, because the sample consists of only 3 “failures,” which is considerably less than 10.  Calculate the point estimate for the plus-four procedure.  We pretend that the sample consisted of two additional “successes” and two additional “failures.”  This gives us p-tilde = (14 + 2) / (17 + 4) = 16/21 ≈ 0.762.  How does this compare to the sample proportion?  The sample proportion (of pairs for which she chose the non-burning house) is p-hat = 14/17 ≈ 0.824.  The plus-four estimate is smaller, as it is closer to one-half.  Use the plus-four method to determine a 95% confidence interval for the population proportion.  This confidence interval is: 0.762 ± 1.96×sqrt(0.762×0.238/21), which is 0.762 ± 0.182, which is the interval (0.580, 0.944).  Interpret this interval.  We can be 95% confident that in the long run, the patient would identify the non-burning house for between 58.0% and 94.4% of all showings.  This interval lies entirely above 0.5, so the data provide strong evidence that the patient does better than randomly guessing between the two drawings.  Why is this interval so wide?  The very small sample size, even after adding four hypothetical responses, accounts for the wide interval.  Is this interval valid, despite the small sample size?  Yes, the plus-four procedure compensates for the small sample size.
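The arithmetic in that calculation is easy to check in a few lines of Python (my own sketch of the plus-four computation):

```python
from math import sqrt

successes, n = 14, 17                    # non-burning house chosen in 14 of 17 pairs
p_tilde = (successes + 2) / (n + 4)      # 16/21, the plus-four estimate
half_width = 1.96 * sqrt(p_tilde * (1 - p_tilde) / (n + 4))
lower, upper = p_tilde - half_width, p_tilde + half_width
print(f"p-tilde = {p_tilde:.3f}, 95% CI: ({lower:.3f}, {upper:.3f})")
# prints: p-tilde = 0.762, 95% CI: (0.580, 0.944)
```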

We have tackled three different “what would go wrong if a condition was not satisfied?” questions and found the same answer every time: A (nominal) 95% confidence interval would succeed in capturing the actual parameter value less than 95% of the time, sometimes considerably less.  I trust that this realization helps to dispel the conspiracy theory among students that we introduce such complications only to torment them.  On the contrary, our goal is to use procedures that actually succeed 95% of the time when that’s how often they claim to succeed.

As a wrap-up question for students on this topic, I suggest asking once again: What does the word “succeed” mean when we speak of a confidence interval procedure succeeding 95% of the time?  I hope they realize that “succeed” here means that the interval includes the actual (but unknown in real life, as opposed to a simulation) value of the population parameter.  I frequently remind students to think about the green intervals, as opposed to the red ones, produced by the applet simulation, and I ask them to remind me how the applet decided whether to color the interval as green or red.

## #45 Simulation-based inference, part 3

I’m a big believer in introducing students to concepts of statistical inference through simulation-based inference (SBI).  I described activities for introducing students to the concepts of p-value and strength of evidence in posts #12 (here) and #27 (here).  The examples in both of these previous posts concerned categorical variables.  Now I will describe an activity for leading students to use SBI to compare two groups with a numerical response.  As always, questions that I pose to students appear in italics.

Here’s the context for the activity: Researchers randomly assigned 14 male volunteers with high blood pressure to one of two diet supplements – fish oil or regular oil.  The subjects’ diastolic blood pressure was measured at the beginning of the study and again after two weeks.  Prior to conducting the study, researchers conjectured that those with the fish oil supplement would tend to experience greater reductions in blood pressure than those with the regular oil supplement*.

a) Identify the explanatory and response variables.  Also classify each as categorical or numerical.

I routinely ask this question of my students at the start of each activity (see post #11, Repeat after me, here).  The explanatory variable is type of diet supplement, which is categorical and binary.  The response variable is reduction in diastolic blood pressure, which is numerical.

b) Is this a randomized experiment or an observational study?  Explain.

My students know to expect this question also.  This is a randomized experiment, because researchers assigned each participant to a particular diet supplement.

c) State the hypotheses to be tested, both in words and in symbols.

I frequently remind my students that the null hypothesis is typically a statement of no difference or no effect.  In this case, the null hypothesis stipulates that there’s no difference in blood pressure reductions, on average, between those given a fish oil supplement and those given a regular oil supplement.  The null hypothesis can also be expressed as specifying that the type of diet supplement has no effect on blood pressure reduction.  Because of the researchers’ prior conjecture, the alternative hypothesis is one-sided: Those with a fish oil supplement experience greater reduction in blood pressure, on average, than those with a regular oil supplement.

In symbols, these hypotheses can be expressed as H0: μ_fish = μ_reg vs. Ha: μ_fish > μ_reg.  Some students use x-bar symbols rather than mu in the hypotheses, which gives me an opportunity to remind them that hypotheses concern population parameters, not sample statistics.

I try to impress upon students that hypotheses can and should be determined before the study is conducted, prior to seeing the data.  I like to reinforce this point by asking them to state the hypotheses before I show them the data.

Here are dotplots showing the sample data on reductions in diastolic blood pressure (measured in millimeters of mercury) for these two groups (all data values are integers):

d) Calculate the average blood pressure reduction in each group. What symbols do we use for these averages?  Also calculate the difference in these group means (fish oil group minus regular oil group).  Are the sample data consistent with the researchers’ conjecture?  Explain.

The group means turn out to be: x-bar_fish = 46/7 ≈ 6.571 mm for the fish oil group and x-bar_reg = −8/7 ≈ −1.143 mm for the regular oil group.  The difference in group means is 54/7 ≈ 7.714 mm.  The data are consistent with the researchers’ conjecture, because the average reduction was greater with fish oil than with regular oil.

e) Is it possible that there’s really no effect of the fish oil diet supplement, and random chance alone produced the observed differences in means between these two groups?

I remind students that they’ve seen this question, or at least its very close cousin, before.  We asked this same question about the results of the blindsight study, in which the patient identified the non-burning house in 14 of 17 trials (see post #12, here).  We also asked this about the results of the penguin study, in which penguins with a metal band were 30 percentage points more likely to die than penguins without a metal band (see post #27, here).  My students know that the answer I’m looking for has four letters: Sure, it’s possible.

But my students also know that the much more important question is: How likely is it?  At this point in class I upbraid myself for using the vague word “likely” and ask: What does that mean here?  I’m very happy when a student explains that I mean to ask how likely it is to obtain sample mean reductions at least 7.714 mm apart, favoring fish oil, if the type of diet supplement actually has no effect on blood pressure reduction.

f) How can we investigate how surprising it would be to obtain results as extreme as this study’s, if in fact there were no difference between the effects of fish oil and regular oil supplements on blood pressure reduction?

Students have seen different versions of this question before also.  The one-word answer I’m hoping for is: Simulate!

g) Describe (in detail) how to conduct the simulation analysis to investigate the question in part f).

Most students have caught on to the principle of simulation at this point, but providing a detailed description in this new scenario, with a numerical response variable, can be challenging.  I follow up with: Can we simply toss a coin as we did with the blindsight study?  Clearly not.  We do not have a single yes/no variable.  Can we shuffle and deal out cards with two colors?  Again, no.  The two colors represented success and failure, but we now have numerical responses.  How can we use cards to conduct this simulation?  Some students have figured out that we can write the numerical responses from the study onto cards.  What does each card represent?  One of the participants in the study.  How many cards do we need?  Fourteen, one for each participant.  What do we do with the cards?  Shuffle them.  And then what?  Separate them into two groups of 7 cards each.  What does this represent?  Random assignment of the 14 subjects into one of the two diet supplement groups.  Then what?  Calculate the average of the response values in each group.  And then?  Calculate the difference in those two averages, being careful to subtract in the same order that we did before: fish oil group minus regular oil group.  Great, what next?  This one often stumps students, until they remember that we need to repeat this process, over and over again, until we’ve completed a large number of repetitions.
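The shuffle-and-deal procedure that students just described translates almost line for line into code.  Here is a minimal sketch, not the applet’s actual implementation; the data values shown are integers consistent with the group means reported above (46/7 and −8/7 mm), with the definitive values available in the linked datafile.

```python
import random
from statistics import mean

# Reductions in systolic blood pressure (mm Hg); values consistent with
# the group means reported above (46/7 for fish oil, -8/7 for regular oil)
fish = [8, 12, 10, 14, 2, 0, 0]
regular = [-6, 0, 1, 2, -3, -4, 2]
observed_diff = mean(fish) - mean(regular)  # 54/7, about 7.714 mm

cards = fish + regular  # write all 14 responses on cards and pool them
diffs = []
for _ in range(1000):
    random.shuffle(cards)                    # shuffle the cards
    new_fish, new_regular = cards[:7], cards[7:]  # deal into two groups of 7
    diffs.append(mean(new_fish) - mean(new_regular))

# Approximate p-value: proportion of shuffles at least as extreme as observed
p_value = sum(d >= observed_diff for d in diffs) / len(diffs)
```

Each pass through the loop is one repetition of the card-shuffling process: pool the 14 unchanging responses, re-randomize them into two groups of 7, and record the difference in group means.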

Before we actually conduct this simulation, I ask:

h) Which hypothesis are we assuming to be true as we conduct this simulation?

This gives students pause, until they remember that we always assume the null hypothesis to be true when we conduct a significance test.  They can also state this in the context of the current study: that there’s no difference, on average, between the blood pressure reductions that would be achieved with a fish oil supplement versus a regular oil supplement.  I also want them to think about how it applies in this case: How does this assumption manifest itself in our simulation process?  This is a hard question.  I try to tease out the idea that we’re assuming the 14 participants were going to experience whatever blood pressure reduction they did no matter which group they had been assigned to.

Now, finally, having answered all of these preliminary questions, we’re ready to do something.  Sometimes I provide index cards to students and ask them to conduct a repetition or two of this simulation analysis by hand.  But I often skip this part* and proceed directly to conduct the simulation with a computer.

* I never skip the by-hand simulation with coins in the blindsight study or with playing cards in the penguin study, because I think the tactile aspect helps students to understand what the computer does.  But the by-hand simulation takes considerably more time in this situation, with students first writing the 14 response values on 14 index cards and later having to calculate two averages.  My students have already conducted tactile simulations with the previous examples, so I trust that they can understand what the computer does here.

I especially like that this applet (here), designed by Beth Chance, illustrates the process of pooling the 14 response values and then re-randomly assigning them between the two groups.  The first steps in using the applet are to clear the default dataset and enter (or paste) the data for this study.  (Be sure to click on “Use Data” after entering the data.)  The left side of the screen displays the distributions and summary statistics.  Then clicking on “Show Shuffle Options” initiates simulation capabilities on the right side of the screen.  I advise students to begin with the “Plot” view rather than the “Data” view.

i) Click on “Shuffle Responses” to conduct one repetition of the simulation.  Describe what happens to the 14 response values in the dotplots.  Also report the resulting value of the difference in group means (again taking the fish oil group minus the regular oil group).

This question tries to focus students’ attention on the fact that the applet is doing precisely what we described for the simulation process: pooling all 14 (unchanging) response values together and then re-randomizing them into two groups of 7.

j) Continue to click on “Shuffle responses” for a total of 10 repetitions.  Did we obtain the same result (for the difference in group means) every time?  Are any of the differences in group means as large as the value observed in the actual study: 7.714 mm?

Perhaps it’s obvious that the re-randomizing does not produce the same result every time, but I think this is worth emphasizing.  I also like to keep students’ attention on the key question of how often the simulation produces a result as extreme as the actual study.

k) Now enter 990 for the number of shuffles, which will produce a total of 1000 repetitions.  Consider the resulting distribution of the 1000 simulated differences in group means.  Is the center where you would expect?  Does the shape have a recognizable pattern?  Explain.

Here is some output from this simulation analysis:

The mean is very close to zero.  Why does this make sense?  The assumption behind the simulation is that type of diet supplement has no effect on blood pressure reduction, so we expect the difference in group means (always subtracting in the same order: fish oil group minus regular oil group) to include about half positive values and half negative values, centered around zero.  The shape of this distribution is very recognizable at this point of the course: approximately normal.

l) Use the Count Samples feature of the applet to determine the approximate p-value, based on the simulation results.  Also describe how you determine this.

The applet does not have a “Calculate Approximate P-value” button.  That would have been easy to include, of course, but the goal is for students to think through how to determine this for themselves.  Students must realize that the approximate p-value is the proportion of the 1000 simulated differences in group means that are 7.714 or larger.  They need to enter the value 7.714 in the box* next to “Count Samples Greater Than” and then click on “Count.”  The following output shows an approximate p-value of 0.006:

* If a student enters a different value here, the applet provides a warning that this might not be the correct value, but it proceeds to do the count.

m) Interpret what this (approximate) p-value means.

This is usually a very challenging question.  But with simulation-based inference, students need not memorize this interpretation of a p-value.  Instead, they simply have to describe what’s going on in the graph of simulation results: If there were no effect of diet supplement on blood pressure reductions, then about 0.6% of random assignments would produce a difference in sample means, favoring the fish oil group, of 7.714 or greater.  I also like to model conveying this idea with a different sentence structure, such as: About 0.6% of random assignments would produce a difference in sample means, favoring the fish oil group, of 7.714 or greater, assuming that there were no effect of diet supplement on blood pressure reductions.  The hardest part of this for most students is remembering to include the if or assuming part of this sentence.

Now we are ready to draw some conclusions.

n) Based on this simulation analysis, do the researchers’ data provide strong evidence that the fish oil supplement produces a greater reduction in blood pressure, on average, than the regular oil supplement?  Also explain the reasoning process by which your conclusion follows from the simulation analysis.

The short answer is yes, the data do provide strong evidence that the fish oil supplement is more helpful for reducing blood pressure than the regular oil supplement.  I hope students answer yes because they understand the reasoning process, not because they’ve memorized that a small p-value means strong evidence of …  I do not consider “because the p-value is small” to be an adequate explanation of the reasoning process.  I’m looking for something such as: “It would be very unlikely to obtain a difference in group mean blood pressure reductions of 7.714mm or greater, if fish oil were no better than regular oil.  But this experiment did find a difference in group means of 7.714mm.  Therefore, we have strong evidence against the hypothesis of no effect, in favor of concluding that fish oil does have a beneficial effect on blood pressure reduction.”

At this point I make a show of pointing out that I just used the important word effect, so I then ask:

o) Is it legitimate to draw a cause-and-effect conclusion between the fish oil diet and greater blood pressure reductions?  Justify your answer.

Yes, a cause-and-effect conclusion is warranted here, because this was a randomized experiment and the observed difference in group means is unlikely to occur by random assignment alone if there were no effect of diet supplement type on blood pressure reduction.

p) To what population is it reasonable to generalize the results of this study?

Because the study included only men, it seems unwise to conclude that women would necessarily respond to a fish oil diet supplement in the same way.  Also, the men in this study were all volunteers who suffered from high blood pressure.  It’s probably best to generalize only to men with high blood pressure who are similar to those in this study.

Whew, that was a lot of questions*!  I pause here to give students a chance to ask questions and reflect on this process.  I also reinforce the idea, over and over, that this is the same reasoning process they’ve seen before, with the blindsight study for a single proportion and with the penguin study for comparing proportions.  The only difference now is that we have a numerical response, so we’re looking at the difference in means rather than proportions.  But the reasoning process is the same as always, and the interpretation of p-value is the same as always, and the way we assess strength of evidence is the same as always.

* We didn’t make it to part (z) this time, but this post is not finished yet …

Now I want to suggest three extensions that you could consider, either in class or on assignments, depending on your student audience, course goals, and time constraints.  You could pursue any or all of these, in any order.

Extension 1: Two-sample t-test

q) Conduct a two-sample t-test of the relevant hypotheses.  Report the value of the test statistic and p-value.  Also summarize your conclusion.

The two-sample (unpooled) test statistic turns out to be t = 3.06, with a (one-sided) p-value of approximately 0.007*.  Based on this small p-value, we conclude that the sample data provide strong evidence that fish oil reduced blood pressure more, on average, than regular oil.
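If you’d like to verify the test statistic in code, here is a sketch of the unpooled (Welch) calculation.  As before, the data values are ones consistent with the summary statistics reported above; the actual values are in the linked datafile.

```python
from math import sqrt
from statistics import mean, variance

# Blood pressure reductions (mm Hg), consistent with the reported group means
fish = [8, 12, 10, 14, 2, 0, 0]
regular = [-6, 0, 1, 2, -3, -4, 2]

# Unpooled (Welch) two-sample t statistic: difference in means divided by
# the standard error computed from each group's own sample variance
n1, n2 = len(fish), len(regular)
se = sqrt(variance(fish) / n1 + variance(regular) / n2)
t = (mean(fish) - mean(regular)) / se  # about 3.06
```

Note that `statistics.variance` computes the sample variance (dividing by n − 1), which is what the unpooled standard error requires.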

* Whenever this fortunate occurrence happens, I tell students that this is a p-value of which James Bond would be proud!

r) How does the result of the t-test compare to that of the simulation analysis?

The results are very similar.  The approximate p-value from the simulation analysis above was 0.006, and the t-test gave an approximate p-value of 0.007.

Considering how similar these results are, you might be wondering why I recommend bothering with the simulation analysis at all.  The most compelling reason is that the simulation analysis shows students what a p-value is: the probability of obtaining such a large (or even larger) difference in group means, favoring the fish oil group, if there were really no difference between the treatments.  I think this difficult idea comes across clearly in the graph of simulated results that we discussed above.  I don’t think calculating a p-value from a t-distribution helps to illuminate this concept.

Extension 2: Comparing medians

Another advantage of simulation-based inference is that it provides considerable flexibility with regard to the choice of statistic to analyze.  For example, could we compare the medians of the two groups instead of their means?  From the simulation-based perspective: Sure!  Do we need to change the analysis considerably?  Not at all!  Using the applet, we simply select the difference in medians rather than the difference in means from the pull-down list of statistic options on the left side.  If we were writing our own code, we would simply replace mean with median.
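That swap really is just a change of summary function.  Here is a minimal sketch of the randomization simulation using medians; the data values are ones consistent with the group statistics reported in this post (the actual values are in the linked datafile).

```python
import random
from statistics import median

# Blood pressure reductions (mm Hg), consistent with the reported statistics
fish = [8, 12, 10, 14, 2, 0, 0]
regular = [-6, 0, 1, 2, -3, -4, 2]
observed_diff = median(fish) - median(regular)

cards = fish + regular  # pool all 14 responses
diffs = []
for _ in range(1000):
    random.shuffle(cards)
    # identical to the analysis of means, with median in place of mean
    diffs.append(median(cards[:7]) - median(cards[7:]))

p_value = sum(d >= observed_diff for d in diffs) / len(diffs)
```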

s) Before we conduct a simulation analysis of the difference in median blood pressure reductions between the two groups, first predict what the distribution of 1000 simulated differences in medians will look like, including the center and shape of the distribution.

One of these is much easier to anticipate than the other: We can expect that the center will again be near zero, again because the simulation operates under the assumption of no difference between the treatments.  But medians often do not follow a predictable, bell-shaped curve the way means do, especially with sample sizes as small as 7 per group.

t) Use the applet to conduct a simulation analysis with 1000 repetitions, examining the difference in medians between the groups.  Describe the resulting distribution of the 1000 simulated differences in medians.

Here is some output:

The center is indeed close to zero.  The shape of this distribution is fairly symmetric but very irregular.  This oddness is due to the very small sample sizes and the many duplicate data values.  In fact, there are only eight possible values for the difference in medians: ±8, ±7, ±2, and ±1.

u) How do we determine the approximate p-value from this simulation analysis?  Go ahead and calculate this.

This question makes students stop and think.  I really want them to be able to answer this correctly, because they’re not really understanding simulation-based inference if they can’t.  I offer a hint: Do we plug in 7.714 again and count beyond that value?  Most students realize that the answer is no, because 7.714 was the difference in group means, not medians, in the actual study.  Then where do we count?  Many students see that we need to count how often the simulation gave a result as extreme as the difference in medians in the actual study, which was 8mm.

Here’s the same graph, with results for which the difference in sample medians is 8 or greater colored in red:

v) Compare the results of analyzing medians rather than means.

We obtained a much smaller p-value when comparing means (0.006) than when comparing medians (0.029).  In both cases, we have reasonably strong evidence that fish oil is better than regular oil for reducing blood pressure, but we have stronger evidence based on means than on medians.

Extension 3: Exact randomization test

What we’ve simulated above is often called a randomization test.  Could we determine the p-value for the randomization test exactly rather than approximately with simulation?  Yes, in principle, but this would involve examining all possible ways to randomly assign subjects between the treatment groups.  In most studies, there are too many combinations to analyze efficiently.  In this study, however, the number of participants is small enough that we can determine the exact randomization distribution of the statistic.  I only ask the following questions in courses for mathematically inclined students.

w) In how many ways can 14 people be assigned to two groups of 7 people each?

This is what the combination (also called a binomial coefficient) 14-choose-7 tells us.  It is calculated as 14! / (7! × 7!) = 3432.  That’s certainly too many to list out by hand, but it’s a pretty small number to tackle with some code.
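For readers who want to check the arithmetic in code, Python’s standard library computes this count directly:

```python
import math

# Number of ways to choose which 7 of the 14 participants form the
# fish oil group (the remaining 7 form the regular oil group)
print(math.comb(14, 7))                                # prints 3432
print(math.factorial(14) // (math.factorial(7) ** 2))  # same count, 14!/(7!*7!)
```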

x) Describe what to do, in principle, to determine the exact randomization distribution.

We continue to assume that the 14 participants were going to obtain the same blood pressure reduction values that they did, regardless of which diet supplement group they had been assigned to.  For each of these 3432 ways to split the 14 participants into two groups of 7 each, we calculate the mean/median of data values in each group, and then we calculate the difference in means/medians (fish oil group minus regular oil group).  I’ll spare you the coding details.  Here’s what we get, with difference in means on the left, difference in medians on the right:

y) How would you calculate the exact p-values?

For the difference in means, we need to count how many of the 3432 possible random assignments produce a difference in means of 7.714 or greater.  It turns out that only 31 give such an extreme difference, so the exact p-value is 31/3432 ≈ 0.009.

If we instead compare medians, it turns out that exactly 100 of the 3432 random assignments produce a difference in medians of 8 or greater, for a p-value of 100/3432 ≈ 0.029.  Interestingly, 8 is the largest possible difference in medians, but there are 100 different ways to achieve this value from the 14 data values.
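The full enumeration takes only a few lines of code.  This is a sketch rather than the code behind the graphs above; it uses data values consistent with the group statistics reported in this post (see the linked datafile), in which case it reproduces the counts of 31 and 100.

```python
from itertools import combinations
from statistics import mean, median

# Response values (mm Hg) consistent with the reported group statistics
pooled = [8, 12, 10, 14, 2, 0, 0, -6, 0, 1, 2, -3, -4, 2]
obs_mean_diff = 54 / 7   # observed difference in means
obs_median_diff = 8      # observed difference in medians

count_means = 0
count_medians = 0
for idx in combinations(range(14), 7):  # all 3432 possible fish oil groups
    fish = [pooled[i] for i in idx]
    regular = [pooled[i] for i in range(14) if i not in idx]
    # small tolerance guards against floating-point error in the means
    if mean(fish) - mean(regular) >= obs_mean_diff - 1e-9:
        count_means += 1
    if median(fish) - median(regular) >= obs_median_diff:
        count_medians += 1

exact_p_means = count_means / 3432      # 31/3432, about 0.009
exact_p_medians = count_medians / 3432  # 100/3432, about 0.029
```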

z) Did the simulation results come close to the exact p-values?

Yes.  The approximate p-value based on comparing means was 0.006, very close to the exact p-value of 0.009.  Similarly, the approximate p-value based on comparing medians was 0.029, the same (to three decimal places) as the exact p-value.

If you’re intrigued by simulation-based inference but reluctant to redesign your entire course around this idea, I recommend sprinkling a bit of SBI into your course.  Depending on how many class sessions you can devote to this, I recommend these sprinkles in this order:

1. Inference for a single proportion with a 50/50 null, as with the blindsight study of post #12 (here)
2. Comparing two proportions, as with the penguin study of post #27 (here)
3. Comparing two means or medians, as with the fish oil study in this post
4. Inference for correlation, as with the draft lottery toward the end of post #9 (here)

For each of these scenarios, I strongly suggest that you introduce the simulation-based approach before the conventional method.  This can help students to understand the logic of statistical inference before getting into the details.  I also recommend emphasizing that the reasoning process is the same throughout these scenarios.  After leading students through the simulation-based approach, you can impress upon students that the conventional methods are merely shortcuts that predict what the simulation results would look like without bothering to conduct the simulation.

P.S. Here is a link to the datafile for this activity:

P.P.S. I provided a list of textbooks that prominently include simulation-based inference at the end of post #12 (here).

P.P.P.S. I dedicate this post to George Cobb, who passed away in the last week.  George had a tremendous impact on my life and career through his insightful and thought-provoking writings and also his kind mentoring and friendship.

George’s after-dinner address at the inaugural U.S. Conference on Teaching Statistics in 2005 inspired many to pursue simulation-based inference for teaching introductory statistics.  His highly influential article based on this talk, titled “The Introductory Statistics Course: A Ptolemaic Curriculum?,” appeared in the inaugural issue of Technology Innovations in Statistics Education (here).  George wrote: “Before computers statisticians had no choice. These days we have no excuse. Randomization-based inference makes a direct connection between data production and the logic of inference that deserves to be at the core of every introductory course.”

George’s writings contributed greatly as my Ask Good Questions teaching philosophy emerged.  At the beginning of my career, I read his masterful article “Introductory Textbooks: A Framework for Evaluation,” in which he simultaneously reviewed 16 textbooks for the Journal of the American Statistical Association (here).  Throughout this review George repeated the following mantra over and over: Judge a textbook by its exercises, and you cannot go far wrong.  This sentence influenced me not only for its substance – what teachers ask students to do is more important than what teachers tell students – but also for its style – repeating a pithy phrase can leave a lasting impression.

Another of my favorite sentences from George, which has stayed in my mind and influenced my teaching for decades, is: Shorn of all subtlety and led naked out of the protective fold of education research literature, there comes a sheepish little fact: lectures don’t work nearly as well as many of us would like to think (here).

I had the privilege of interviewing George a few years ago for the Journal of Statistics Education (here).  His wisdom, humility, insights, and humor shine throughout his responses to my questions.

## #44 Confounding, part 2

Many introductory statistics students find the topic of confounding to be one of the most confounding topics in the course.  In the previous post (here), I presented two extended examples that introduce students to this concept and the related principle that association does not imply causation.  Here I will present two more examples that highlight confounding and scope of conclusions.  As always, this post presents many questions for posing to students, which appear in italics.

3. A psychology professor at a liberal arts college recruited undergraduate students to participate in a study (here).  Students indicated whether they had engaged in a single night of total sleep deprivation (i.e., “pulling an all-nighter”) during the term.  The professor then compared the grade point averages (GPAs) of students who had and who had not pulled an all-nighter.  She calculated the following statistics and determined that the difference in the group means is statistically significant (p-value < 0.025):

a) Identify the observational units and variables.  What kinds of variables are these?  Which is explanatory, and which is response?

My students know to expect these questions at the outset of every example, to the point that they sometimes groan.  The observational units are the 120 students.  The explanatory variable is whether or not the student pulled at least one all-nighter in the term, which is categorical.  The response variable is the student’s grade point average (GPA), which is numerical.

b) Is this a randomized experiment or an observational study?  Explain how you can tell.

My students realize that this is an observational study, because the students decided for themselves whether to pull an all-nighter.  They were not assigned, randomly or otherwise, to pull an all-nighter or not.

c) Is it appropriate to draw a cause-and-effect conclusion between pulling an all-nighter and having a lower GPA?  Explain why or why not.

Most students give a two-letter answer followed by a two-word explanation here.  The correct answer is no.  Their follow-up explanation can be observational study or confounding variables.  I respond that this explanation is a good start but would be much stronger if it went on to describe a potential confounding variable, ideally with a description of how the confounding variable provides an alternative explanation for the observed association.  The following question asks for this specifically.

d) Identify a (potential) confounding variable in this study.  Describe how it could provide an alternative explanation for why students who pulled an all-nighter have a smaller mean GPA than students who have not.

Students know this context very well, so they are quick to propose many good explanations.  The most common suggestion is that the student’s study skills constitute a confounding variable.  Perhaps students with poor study skills resort to all-nighters, and their low grades are a consequence of their poor study skills rather than the all-nighters.  Another common response is coursework difficulty, the argument being that more difficult coursework forces students to pull all-nighters and also leads to lower grades.  Despite having many good ideas here, some students struggle to express the confounding variable as a variable.  Another common error is to describe the link between their proposed confounding variable and the explanatory variable, neglecting to describe a link with the response.

e) Is it appropriate to rule out a cause-and-effect relationship between pulling an all-nighter and having a lower GPA?  Explain why or why not.

This may seem like a silly question, but I think it’s worth asking.  Some students go too far and think that not drawing a cause-and-effect conclusion is equivalent to drawing a no-cause-and-effect conclusion.  The answer to this question is: Of course not!  It’s quite possible that pulling an all-nighter is harmful to a student’s academic performance, even though we cannot conclude that from this study.

f) Describe how (in principle) you could design a new study to examine whether pulling an all-nighter has a negative impact on academic performance (as measured by grades).

Many students give the answer I’m looking for: Conduct a randomized experiment.  Then I press for more details: What would a randomized experiment involve?  The students in the study would need to be randomly assigned to pull an all-nighter or not.

g) How would your proposed study control for potential confounding variables?

I often need to expand on this question to prompt students to respond: How would a randomized experiment account for the fact that some students have better study skills than others, or are more organized than others, or have more time for studying than others?  Some students realize that this is what random assignment achieves.  The purpose of random assignment is to balance out potential confounding variables between the groups.  In principle, students with very good study skills should be balanced out between the all-nighter and no-all-nighter groups, just as students with poor study skills should be similarly balanced out.  The explanatory variable imposed by the researcher should then constitute the only difference between the groups.  Therefore, if the experiment ends up with a significant difference in mean GPAs between the groups, we can attribute that difference to the explanatory variable: whether or not the student pulled an all-nighter.

I end this example there, but you could return to this study later in the course.  You could ask students to conduct a significance test to compare the two groups and calculate a confidence interval for the difference in population means.  At that point, I strongly recommend asking about causation once again.  Some students seem to think that inference procedures overcome concerns from earlier in the course about confounding variables.  I think we do our students a valuable service by reminding them* about issues such as confounding even after they have moved on to study statistical inference.

* Even better than reminding them is asking questions that prompt students to remind you about these issues.

4. Researchers interviewed parents of 479 children who were seen at a university pediatric ophthalmology clinic.  They asked parents whether the child slept primarily in room light, darkness, or with a night light before age 2.  They also asked about the child’s eyesight diagnosis (near-sighted, far-sighted, or normal vision) from their most recent examination.

a) What are the observational units and variables in this study?  Which is explanatory, and which is response?  What kind of variables are they?

You knew this question was coming first, right?  The observational units are the 479 children.  The explanatory variable is the amount of lighting in the child’s room before age 2.  The response variable is the child’s eyesight diagnosis.  Both variables are categorical, but neither is binary.

b) Is this an observational study or a randomized experiment?  Explain how you can tell.

Students also know to expect this question at this point.  This is an observational study.  Researchers did not assign the children to the amount of light in their rooms.  They merely recorded this information.

The article describing this study (here) included a graph similar to this:

c) Does the graph reveal an association between amount of lighting and eyesight diagnosis?  If so, describe the association.

Yes, the percentage of children who are near-sighted increases as the amount of lighting increases.  Among children who slept in darkness, about 10% were near-sighted, compared to about 34% among those who slept with a night light and about 55% among those who slept with room light.  On the other hand, the percentage with normal vision decreases as the amount of light increases, from approximately 65% to 50% to 30%.

Here is the two-way table of counts:

d) Were most children who slept in room light near-sighted?  Did most near-sighted children sleep in room light?  For each of these questions, provide a calculation to support your answer.

Some students struggle to recognize how these questions differ.  The answer is yes to the first question, because 41/75 ≈ 0.547 of those who slept in room light were near-sighted.  For the second question, the answer is no, because only 41/137 ≈ 0.299 of those who were near-sighted slept in room light.

e) Is it appropriate to conclude that light in a child’s room causes near-sightedness?  Explain your answer.

No.  Some students reflexively say observational study for their explanation.  Others simply say confounding variables.  These responses are fine, as far as they go, but the next question prompts students to think harder and explain more fully.

f) Some have proposed that parents’ eyesight might be a confounding variable in this study.  How would that explain the observed association between the bedroom lighting condition and the child’s eyesight?

Asking about this specific confounding variable frees students to concentrate on how to explain the confounding.  Most students point out that eyesight is hereditary, so near-sighted parents tend to have near-sighted children.  Unfortunately, many students stop there.  But this falls short of explaining the observed association, because it says nothing about the lighting in the child’s room.  Completing the explanation requires adding that near-sighted parents may tend to use more light in the child’s room than other parents, perhaps so they can more easily check on the child during the night.

The next set of questions continues this example by asking about how one could (potentially) draw a cause-and-effect conclusion on this topic.

g) What would conducting a randomized experiment to study this issue entail?

Children would need to be randomly assigned to have a certain amount of light (none, night light, or full room light) in their bedroom before the age of 2.

h) How would a randomized experiment control for parents’ eyesight?

This question tries to help students focus on the goal of random assignment: to balance out all other characteristics of the children among the three groups.  For example, children with near-sighted parents should be (approximately) distributed equally among the three groups, as should children of far-sighted parents and children of parents with normal vision.  Even better, we also expect random assignment to balance out factors that we might not think of in advance, or might not be able to observe or measure, that might be related to the child’s eyesight.

i) What would be the advantage of conducting a randomized experiment to study this issue?

If data from a randomized experiment show strong evidence of an association between a child’s bedroom light and near-sightedness, then we can legitimately conclude that the light causes an increased likelihood of near-sightedness.  This cause-and-effect conclusion would be warranted because random assignment would (in principle) account for other potential explanations.

j) Would conducting such a randomized experiment be feasible in this situation?  Would it be ethical?

To make this feasible, parents would need to be recruited who would agree to allow random assignment to determine how much light (if any) to use in their child’s bedroom.  It might be hard to recruit parents who would give up this control over their child’s environment.  This experiment would be ethical as long as parents were fully informed and consented to this agreement.

You can return to this example, and the observational data from above, later in the course to give students practice with conducting a chi-square test.  This provides another opportunity to ask them about the scope of conclusions they can draw.

l) Conduct a chi-square test.  Report the test statistic and p-value.  Summarize your conclusion.

The test statistic turns out to be approximately 56.5.  With 4 degrees of freedom, the p-value is extremely close to zero, about 7.6×10^(-12).  The data provide overwhelming evidence against the null hypothesis of no association, in favor of the alternative that there is an association between amount of light in the child’s room before age 2 and eyesight diagnosis later in childhood.
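For students working in software, converting the reported test statistic into a p-value can be sketched with scipy’s chi-square distribution (this is only the final step; the full table of counts from the study would be needed to compute the statistic itself):

```python
from scipy.stats import chi2

# Reported chi-square statistic and degrees of freedom for the
# 3x3 table of lighting condition vs. eyesight diagnosis
test_statistic = 56.5
df = 4  # (3 - 1) x (3 - 1)

# Survival function: P(chi-square with 4 df >= observed statistic)
p_value = chi2.sf(test_statistic, df)
print(f"p-value = {p_value:.1e}")  # extremely close to zero
```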

m) In light of the very large test statistic and extremely small p-value, is it reasonable to conclude that light in a child’s room causes an increased chance of the child becoming near-sighted?

I think it’s very important to ask this again after conducting the hypothesis test.  Some students mistakenly think that hypothesis tests are so advanced that they can override what they learned earlier in the course.  The extremely small p-value in no way compensates for the observational nature of these data and the possibility of confounding variables.  A cause-and-effect conclusion between bedroom light and near-sightedness still cannot be drawn.

n) Why do you think the researchers bothered to collect and analyze these data, considering that no causal conclusion can be drawn?

Some students believe that a cause-and-effect conclusion is the only kind worth drawing. I ask this question to help them realize that establishing evidence of association can be a worthy goal in its own right, apart from the question of causation.

o) Is it reasonable to generalize this study’s finding about an association between room lighting and near-sightedness to the population of all children in the United States?  Explain.

Most students realize that the correct answer is no, but many mistakenly attribute this to the observational nature of the data.  With regard to generalizability, the key point is that the children in this study were not randomly selected from any population.  They were all patients at a university pediatric ophthalmology clinic, so they are not likely to be representative of all U.S. children with regard to issues involving eyesight.  The finding of an association between increased bedroom light and near-sightedness may or may not hold in the larger population of U.S. children in general.

Asking this question can help students who confuse bias and confounding, or who believe that bias and confounding are the same idea.  This can also remind students of the important distinction between random sampling and random assignment, which I discussed in posts #19 and #20 (Lincoln and Mandela, here and here).

Observational studies abound in many fields.  They often produce intriguing results that are discussed in news media.  Accordingly, it’s important for students to understand the topic of confounding and especially how confounding affects the scope of conclusions that can be drawn from observational studies.  The four examples in this two-part series introduce students to these ideas.  They also provide an opportunity to make connections among different parts of the course, spanning topics of data exploration and statistical inference as well as design of studies and scope of conclusions.

P.S. The topic of drawing cause-and-effect conclusions legitimately from observational studies has become widely studied.  I confess that I do not address this topic in my introductory statistics courses, but some argue strongly that I am doing my students a disservice in this regard.  After all, the most important causal conclusion of the twentieth century may have been that smoking causes cancer, which was not determined by randomly assigning humans to smoke or not.

One of the most prominent advocates for causal inference is Judea Pearl, who has co-authored a general-audience book titled The Book of Why: The New Science of Cause and Effect (information and excerpts can be found here).  Statistics educators who argue for including this topic prominently include Milo Schield (here), Danny Kaplan (here), and Jeff Witmer (here).  A recent article in the Journal of Statistics Education by Cummiskey et al (here) also makes this case.

P.P.S. for teachers of AP Statistics: I’ll be conducting some one-hour sessions via zoom in which I lead students through the first five questions on the 2011 exam, discussing what graders looked for and highlighting common student errors.  I hope this provides some helpful practice and preparation for the upcoming 2020 AP Statistics exam.  Please contact me (allanjrossman@gmail.com) if you would like to invite your students to attend one of these sessions.

## #43 Confounding, part 1

The topic of confounding is high on the list of most confounding topics in introductory statistics.  Dictionary.com provides these definitions of confound (here):

1. to perplex or amaze, especially by a sudden disturbance or surprise; bewilder; confuse: The complicated directions confounded him.
2. to throw into confusion or disorder: The revolution confounded the people.
3. to throw into increased confusion or disorder
4. to treat or regard erroneously as identical; mix or associate by mistake: Truth confounded with error.
5. to mingle so that the elements cannot be distinguished or separated
6. to damn (used in mild imprecations): Confound it!

Definition #5 comes closest to how we use the term in statistics.  Unfortunately, definitions #1, #2, and #3 describe what the topic does to many students, some of whom respond in a manner that illustrates definition #6.

In this post I will present two activities that introduce students to this important but difficult concept, along with some follow-up questions for assessing their understanding.  One example will involve two categorical variables, and the other will feature two numerical variables.  As always, questions that I pose to students appear in italics.

I have used a variation of the following example, which I updated for this post, for many years.  I hold off on defining the term confounding until students have anticipated the idea for themselves.  Even students who do not care about sports and know nothing about basketball can follow along.

1. During the 2018-19 National Basketball Association season, the Sacramento Kings won 13 home games and lost 16 when they had a sell-out crowd, compared to 11 home wins and 1 loss when they had a smaller crowd.

a) Identify the observational units, explanatory variable, and response variable in this study.  Also classify each variable as categorical or numerical.

As I argued in post #11 (Repeat after me, here), I think these questions are important to ask at the start of nearly every activity, to orient students to the context and the type of analysis required.  The observational units are games, more specifically home games of the Sacramento Kings in the 2018-19 season.  The explanatory variable is crowd size, and the response variable is game outcome.  As presented here, both variables are categorical (and binary).  Crowd size could be studied as a numerical variable, but that information is presented here as whether the crowd was a sell-out or smaller.

b) Organize the data into a table of counts, with the explanatory variable groups in columns.

First we set up the table as follows:

| | Smaller crowd | Sell-out crowd | Total |
| --- | --- | --- | --- |
| Win | | | |
| Loss | | | |
| Total | | | |

Then I suggest to students that we work with each number as we encounter it in the sentence above, so I first ask where the number 2018 should go in the table.  This usually produces more groans than laughs, and then we proceed to fill in the table as follows:

| | Smaller crowd | Sell-out crowd | Total |
| --- | --- | --- | --- |
| Win | 11 | 13 | 24 |
| Loss | 1 | 16 | 17 |
| Total | 12 | 29 | 41 |

Some optional questions for sports fans: Does the number 41 make sense in this context?  Basketball fans nod their heads, knowing that an NBA team plays an 82-game season, with half of the games played at home.  Did the Kings win more than half of their home games?  Yes, they won 24 of 41 home games, which is 58.5%.  Does this mean that the Kings were an above-average team in that season?  No.  In fact, after including data from their games away from home, they won only 39 of 82 games (47.6%) overall.

c) Calculate the proportion of wins for each crowd size group.  Do these proportions suggest an association (relationship) between the explanatory and response variables?  Explain.

The Kings won 11/12 (.917, or 91.7%) of games with a smaller crowd.  They won 13/29 (.448, or 44.8%) of games with a sell-out crowd.  This seems like a substantial difference (almost 48 percentage points), which suggests that there is an association between crowd size and game outcome.  The Kings had a much higher winning percentage with a smaller crowd than with a sell-out crowd.
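These conditional proportions can also be computed with a short Python sketch, using the counts given above:

```python
# Counts from the 2018-19 Kings home games described above
wins = {"smaller": 11, "sell-out": 13}
losses = {"smaller": 1, "sell-out": 16}

win_props = {}
for crowd in ["smaller", "sell-out"]:
    total = wins[crowd] + losses[crowd]
    win_props[crowd] = wins[crowd] / total
    print(f"{crowd} crowd: won {wins[crowd]} of {total} ({win_props[crowd]:.1%})")
```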

d) Produce a well-labeled segmented bar graph to display these proportions.

Here’s a graph generated by Excel:

e) Is it reasonable to conclude that a sell-out crowd caused the team to play worse?  If not, provide an alternative explanation that plausibly explains the observed association.

This is the key question of the entire activity.  I always find that some students have been anticipating this question and are eager to respond: Of course not!  These students explain that the Kings are more likely to have a sell-out crowd when they’re playing against a good team with superstar players, such as the Golden State Warriors with Steph Curry.  I often have to prod students to supply the rest of the explanation: What else is true about the good teams that they play against?  The Kings are naturally less likely to win against such strong teams.

At this point I introduce the term confounding variable as one whose potential effects on a response variable cannot be distinguished from those of the explanatory variable.  I also point out that a confounding variable must be related to both the explanatory and response variable.  Finally, I emphasize that because of the potential for confounding variables, one cannot legitimately draw cause-and-effect conclusions from observational studies.

f) Identify a confounding variable in this study, and explain how this confounding variable is related to both the explanatory and response variable.

This is very similar to question (e), now asking students to express their explanation with this new terminology.  Some students who provide the alternative explanation well nevertheless struggle to specify a confounding variable clearly.  A good description of the proposed confounding variable is: strength of opponent.  It seems reasonable to think that a stronger opponent is more likely to generate a sell-out crowd, and a stronger opponent also makes the game less likely to result in a win for the home team.

I usually stop this in-class activity there, but you could ask students to dig deeper in a homework assignment or quiz.  For example, we can look at more data to explore whether our conjectures about strength of opponent hold true.

It seems reasonable to use the opposing team’s percentage of games won in that season as a measure of its strength.  Let’s continue to work with categorical variables by classifying teams with a winning percentage of 40% or below as weak, between 40% and 60% as moderate, and 60% or above as strong.  This leads to the following tables of counts:

Do these data support the two conjectures about how strength of opponent relates to crowd size and to game outcome?  Support your answer with appropriate calculations and graphs.

The first conjecture was that stronger opponents are more likely to generate a sell-out crowd.  This is supported by the data, as we see that 100% (10/10) of strong opponents produced a sell-out crowd, compared to 61.9% (13/21) of moderate opponents and 60% (6/10) of weak opponents.  These percentages are shown in this segmented bar graph:

The second conjecture was that stronger opponents are less likely to produce a win by the home team.  This is clearly supported by the data.  The home team won 100% (10/10) of games against weak opponents, which falls to 57.1% (12/21) of games against moderate teams, and only 20% (2/10) of games against strong teams.  These percentages are shown in this segmented bar graph:

Here’s a quiz question based on a different candidate for a confounding variable.  It also seems reasonable to think that games played on weekends (let’s include Fridays along with Saturdays and Sundays) are more likely to attract a sell-out crowd than games played on weekdays.  What else would have to be true about the weekend/weekday breakdown in order for that to be a confounding variable for the observed association between crowd size and game outcome?  What remains is for students to mention a connection with the response variable: Weekend games would need to be less likely to produce a win for the home team, as compared to weekday games.

Again we can look at the data on this question.  Consider the following tables of counts:

Do the data support the argument for the weekday vs. weekend variable as a confounding variable?  Cite relevant calculations to support your response.  Only half of the argument is supported by the data.  Weekend games were slightly more likely to produce a sell-out crowd than a weekday game (13/17 ≈ 0.765 vs. 16/24 ≈ 0.667).  But weekend games were not less likely to produce a home team win than weekday games (11/17 ≈ 0.647 vs. 13/24 ≈ 0.542).  Therefore, the day of week variable does not provide an alternative explanation for why sell-out crowds are less likely to see a win by the home team than a smaller crowd.
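Both halves of this argument can be checked with a quick Python sketch, using the counts above:

```python
# Counts by day type: sell-out crowds, home wins, and total games
sellouts = {"weekend": 13, "weekday": 16}
wins = {"weekend": 11, "weekday": 13}
totals = {"weekend": 17, "weekday": 24}

sellout_rate = {day: sellouts[day] / totals[day] for day in totals}
win_rate = {day: wins[day] / totals[day] for day in totals}

# First half of the argument holds: weekend games sell out more often
print(sellout_rate)
# Second half fails: weekend games were MORE likely to be home wins
print(win_rate)
```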

Students could explore much more with these data*.  For example, they could analyze opponent’s strength as a numerical variable rather than collapsing it into three categories as I did above.

* I provide a link to the datafile at the end of this post.

The second example is based on an activity that I have used for more than 25 years.  My first contribution to the Journal of Statistics Education, from 1994 (here), presented an example for distinguishing association from causation based on the relationship between a country’s life expectancy and its number of people per television.  In updating the example for this post, I chose a different variable and used data as of 2017 and 2018 from the World Bank (here and here)*.

* Again, a link to the datafile appears at the end of this post.

2. The following table lists the life expectancy (in years) and the number of automatic teller machines (ATMs) per 100,000 adults in 24 countries around the world:

a) Identify the observational units and variables.  What types of variables are these?  Which is explanatory and which is response?

Yes, I start with these fundamental questions yet again.  The observational units are countries, the explanatory variable is number of ATMs per 100,000 adults, and the response is life expectancy.  Both variables are numerical.

b) Which of the countries listed has the fewest ATMs per 100,000 adults?  Which has the most?

This question is unnecessary, I suppose, but I think it helps students to engage with the data and context.  Haiti has the fewest ATMs: about 2 per 100,000 adults.  The United States has the most: about 174 ATMs per 100,000 adults.

c) Produce a scatterplot of the data, with the response variable on the vertical axis.

Here’s the scatterplot:

d) Does the scatterplot indicate an association between life expectancy and number of ATMs?  Describe its direction, strength, and form.

Yes, the scatterplot reveals a positive association between a country’s life expectancy and its number of ATMs per 100,000 adults.  This association is moderately strong but not linear.  The form follows a curved pattern.

e) Do you believe that installing more ATMs in countries such as Haiti, Bangladesh, Algeria, and Kenya would cause their inhabitants to live longer?  If not, provide a more plausible, alternative (to cause-and-effect) explanation for the observed association.

This is the key question in the activity, just as with the question in the previous activity about whether sell-out crowds cause the home team to play worse.  Students realize that the answer here is a resounding no.  It’s ridiculous to think that installing more ATMs would cause Haitians to live longer.  Students can tell you the principle that association is not causation.

Students can also suggest a more plausible explanation for the observed association.  They talk about how life expectancy and number of ATMs are both related to the overall wealth, or technological sophistication, of a country.

f) Identify a (potential) confounding variable, and explain how it might relate to the explanatory and response variables.

This is very similar to the previous question.  Here I want students to use the term confounding variable and to express their suggestion as a variable.  Reasonable answers include measures of a country’s wealth or technological sophistication.

This completes the main goal for this activity.  At the risk of detracting from this goal, I often ask an additional question:

g) Would knowing a country’s number of ATMs per 100,000 adults be helpful information for predicting the life expectancy of the country?  Explain.

The point of this question is much harder for students to grasp than the preceding ones.  I often follow up with this hint: Would you make different life expectancy predictions depending on whether a country has 10 vs. 100 ATMs per 100,000 adults?  Students confidently answer yes to this one, so they gradually come to realize that they should also answer yes to the larger question: Knowing a country’s number of ATMs per 100,000 adults is helpful for predicting life expectancy.  I try to convince them that the association is real despite the lack of a cause-and-effect connection.  Therefore, predictions can be enhanced from additional data even without a causal* relationship.

* I greatly regret that the word causal looks so much like the word casual.  To avoid this potential confusion, I say cause-and-effect much more than causal.  But I had just used cause-and-effect in the previous sentence, so that caused me to switch to causal in the last sentence of the paragraph.

This example also leads to extensions that work well on assignments.  For example, I ask students to:

• take a log transformation of the number of ATMs per 100,000 adults,
• describe the resulting scatterplot of life expectancy vs. this transformed variable,
• fit a least squares line to the (transformed) data,
• interpret the value of r^2,
• interpret the slope coefficient, and
• use the line to predict the life expectancy of a country that was not included in the original list.

Here is a scatterplot of life expectancy vs. log (base 10) of number of ATMs per 100,000 adults, with the least squares line:

The relationship between life expectancy and this transformed variable is positive, moderately strong, and fairly linear.  With this log transformation, knowing a country’s number of ATMs per 100,000 adults explains 46.7% of the variability in countries’ life expectancy values.  The slope coefficient of 9.356 means that the model predicts an increase of 9.356 years in life expectancy for a tenfold increase in number of ATMs per 100,000 adults.  Using this line to predict the life expectancy of Costa Rica, which has 74.41 ATMs per 100,000 adults, produces: predicted life expectancy = 60.51 + 9.356×log(74.41) ≈ 60.51 + 9.356×1.87 ≈ 78.02 years.  The actual life expectancy reported for Costa Rica in 2018 is 80.10 years, so the prediction underestimated the actual value by only 2.08 years.
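The prediction calculation can be sketched in a few lines of Python, using the fitted intercept and slope reported above (the helper function name is mine, not part of the original analysis):

```python
import math

# Fitted least squares coefficients reported above
intercept, slope = 60.51, 9.356

def predicted_life_expectancy(atms_per_100k):
    """Predicted life expectancy (years) from ATMs per 100,000 adults."""
    return intercept + slope * math.log10(atms_per_100k)

costa_rica = predicted_life_expectancy(74.41)
print(f"predicted: {costa_rica:.2f} years")   # approximately 78.02
print(f"residual: {80.10 - costa_rica:.2f} years")
```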

Two earlier posts that focused on multivariable thinking also concerned confounding variables.  In post #3 (here), the graduate program was a confounding variable between an applicant’s gender and the admission decision.  Similarly, in post #35 (here), age was a confounding variable between a person’s smoking status and their lung capacity.

In next week’s second part of this two-part series, I will address more fully the issue of drawing causal conclusions.  Along the way I will present two more examples that involve confounding variables, with connections to data exploration and statistical inference.  I hope these questions can lead students to be less confounded by this occasionally vexing* and perplexing topic.

* I doubt that the term vexing variable will catch on, but it does have a nice ring to it!

## #42 Hardest topic, part 2

In last week’s post (here), I suggested that sampling distributions constitute the hardest topic to teach in introductory statistics.  I provided five recommendations for teaching this challenging topic, including an exhortation to hold off on using the term sampling distribution until students understand the basic idea.  I also gave many examples of questions that can help students to develop their understanding of this concept.

In this post I present five more suggestions for teaching the topic of sampling distributions, along with many more examples of questions for posing to students.  As always, such questions appear in italics.  Let’s continue the list …

6. Pay attention to the center of a sampling distribution as well as its shape and variability.

We teachers understandably devote a lot of attention to the shape and variability of a sampling distribution*.  I think we may neglect to emphasize center as much as we should.  With a sample proportion or a sample mean, the mean of its sampling distribution is the population proportion or population mean.  Maybe we do not make a big deal of this result because it comes as no surprise.  But this is the very definition of unbiasedness, which is worth drawing students’ attention to.

* I’ll say more about these aspects in upcoming suggestions.

We can express the unbiasedness of a sample mean mathematically as: E(X-bar) = μ.

As I have argued before (in post #19, Lincoln and Mandela, part 1, here), this seemingly simple equation is much more challenging to understand than it appears.  The three symbols in this equation all stand for a different mean.  Ask students: Express what this equation says in a sentence.  This is not easy, so I lead my students through this one symbol at a time: The mean of the sample means is the population mean.  A fuller explanation requires some more words: If we repeatedly take random samples from the population, then the mean of the sample means equals the population mean.  This is what it means* to say that the sample mean is an unbiased estimator of the population mean.

* Oops, sorry for throwing another mean at you!

I emphasize to students that this result is true regardless of the population distribution and also for any sample size.  The result is straightforward to derive from properties of expected values.  I show students this derivation in courses for mathematically inclined students but not in a typical Stat 101 course, where I rely on simulations to convince students that the result is believable.
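Here is a minimal numpy sketch of such a simulation, assuming a normal population with μ = 100 and σ = 25 (the unbiasedness result holds for any population shape):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# 200,000 random samples of size 10 from a population with mu = 100, sigma = 25
sample_means = rng.normal(loc=100, scale=25, size=(200_000, 10)).mean(axis=1)

# The mean of the sample means should land very close to the population mean
print(round(float(sample_means.mean()), 2))
```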

I suspect that we take unbiasedness of a sample proportion and sample mean for granted, but you don’t have to study obscure statistics in order to discover one that is not unbiased.  For example, the sample standard deviation is not unbiased when sampling from a normal distribution*.

* The sample variance is unbiased in this case, but the unbiasedness does not survive taking the square root.

The following graph of sample standard deviations came from simulating 1,000,000 random samples of size 10 from a normal distribution with mean 100 and standard deviation 25:

What aspect of this distribution reveals that the sample standard deviation is not an unbiased estimator of the population standard deviation?  Many students are tempted to point out the slight skew to the right in this distribution.  That’s worth noting, but shape is not relevant to bias.  We need to notice that the mean of these sample standard deviations (≈ 24.32) is not equal to the value that we used for the population standard deviation (σ = 25). Granted, this is not a large amount of bias, but this difference (24.32 vs. 25) is much more than you would expect from simulation variability with one million repetitions*.

* Here’s an extra credit question for students: Use the simulation results to determine a 95% confidence interval for the expected value of the sample standard deviation, E(S).  This confidence interval turns out to be approximately (24.31, 24.33), an extremely narrow interval thanks to the very large number of repetitions.
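A smaller-scale numpy sketch of the simulation described above (100,000 repetitions rather than one million) should likewise produce an average sample SD noticeably below σ = 25:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# 100,000 samples of size 10 from a normal population (mu = 100, sigma = 25)
samples = rng.normal(loc=100, scale=25, size=(100_000, 10))
sample_sds = samples.std(axis=1, ddof=1)  # usual sample standard deviation

# The average sample SD falls noticeably below sigma = 25, revealing the bias
print(round(float(sample_sds.mean()), 2))
```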

7. Emphasize the impact of sample size on sampling variability.

Under suggestion #1 in the previous post (here), I emphasized the key idea that averages vary less than individual values.  The corollary to this is that averages based on larger samples vary less than averages based on smaller samples.  You don’t need to tell students this; you can lead them to tell you by asking them to … (wait for it) … simulate!  Returning to the context of sampling Reese’s Pieces candies, consider these two graphs from simulation analyses (using the applet here), based on a sample size of 25 candies on the left, 100 candies on the right:

What’s the most striking difference between these two distributions?  Some students comment that the distribution on the right is more “filled in” than the one on the left.  I respond that this is a good observation, but I think there’s a more important difference.  Then I encourage students to focus on the different axis scales between the graphs.  Most students recognize that the graph on the right has much less variability in sample proportions than the one on the left.  How do the standard deviations (of the sample proportions) compare between the two graphs?  Students respond that the standard deviation is smaller on the right.  How many times larger is the standard deviation on the left than the one on the right?  Students reply that the standard deviation is about twice as big on the left as on the right.  By how many times must the sample size increase in order to cut the standard deviation of the sample proportion in half?  Recalling that the sample sizes were 25 and 100, students realize that they need to quadruple the sample size in order to cut this standard deviation in half.
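The quadrupling effect can be checked with a quick simulation sketch; here I assume, hypothetically, a population proportion of 0.5:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
p = 0.5  # hypothetical population proportion (of orange candies, say)

# Simulated sample proportions for sample sizes 25 and 100
phat_25 = rng.binomial(n=25, p=p, size=100_000) / 25
phat_100 = rng.binomial(n=100, p=p, size=100_000) / 100

# Quadrupling the sample size should cut the SD roughly in half
ratio = phat_25.std() / phat_100.std()
print(round(float(ratio), 2))
```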

I lead students through a similar set of questions based on simulating the sampling distribution of a sample mean.  Students again come to realize that the standard deviation of a sample mean decreases as the sample size increases, and also that a four-fold increase in sample size cuts this standard deviation in half.  This leads us to the result: SD(X-bar) = σ/√n, where n represents the sample size.

I follow up by asking: Explain the difference between SD(X-bar) and σ.  Even students who somewhat understand the idea can have difficulty with expressing this well.  The key is that σ represents the standard deviation of the individual values in the population (penny ages, or word lengths, or weights, or whatever), but SD(X-bar) is the standard deviation of the sample means (averages) that would result from repeatedly taking random samples from the population.

Here’s an assessment question* about the impact of sample size on a sampling distribution: Suppose that a region has two hospitals.  Hospital A has about 10 births per day, and hospital B has about 50 births per day.  About 50% of all babies are boys, but the percentage who are boys varies at each hospital from day to day.  Over the course of a year, which hospital will have more days on which 60% or more of the births are boys – A, B, or negligible difference between A and B?

* This is a variation of a classic question posed by psychologists Kahneman and Tversky, described here.

Selecting the correct answer requires thinking about sampling variability.  The smaller hospital will have more variability in the percentage of boys born on a day, so Hospital A will have more days on which 60% or more of the births are boys.  Many students struggle with this question, not recognizing the important role of sample size on sampling variability.
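One way to check the answer, modeling each birth as an independent 50/50 chance of a boy, is to compute the exact binomial probabilities (a sketch using scipy):

```python
from scipy.stats import binom

# P(60% or more boys on a given day), modeling births as fair coin flips
p_A = binom.sf(5, n=10, p=0.5)    # P(6 or more boys out of 10)
p_B = binom.sf(29, n=50, p=0.5)   # P(30 or more boys out of 50)

print(f"Hospital A: {p_A:.3f} per day, about {365 * p_A:.0f} days per year")
print(f"Hospital B: {p_B:.3f} per day, about {365 * p_B:.0f} days per year")
```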

This principle that the variability of a sample statistic decreases as sample size increases applies to many other statistics, as well.  For example, I ask students to think about the sampling distribution of the inter-quartile range (IQR), comparing sample sizes of 10 and 40, under random sampling from a normally distributed population.  How could you investigate this sampling distribution?  Duh, with simulation!  Describe how you would conduct this simulation.  Generate a random sample of 10 values from a normal distribution.  Calculate the IQR of the 10 sample values.  Repeat this for a large number of repetitions.  Produce a graph and summary statistics of the simulated sample IQR values.  Then repeat all these steps with a sample size of 40 instead of 10.

I used R to conduct such a simulation analysis with 1,000,000 repetitions. Using a normally distributed population with mean 100 and standard deviation 25, I obtained the following graphs (sample size of 10 on the left, 40 on the right):

Compare the variability of the sample IQR with these two sample sizes.  Just as with a sample mean, the variability of the sample IQR is smaller with the larger sample size.  Does the sampling variability of the sample IQR decrease as much by quadrupling the sample size as with the sample mean?  No.  We know that the SD of the sample mean is cut in half by quadrupling the sample size.  But the SD of the sample IQR decreases from about 10.57 to 5.96, which is a decrease of 43.6%, a bit less than 50%.
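The simulation steps above can be sketched in Python with numpy (a smaller analogue of the R analysis; the default quantile method may differ slightly from other software):

```python
import numpy as np

rng = np.random.default_rng(seed=4)

def sd_of_sample_iqrs(sample_size, reps=100_000):
    """SD of sample IQRs across many samples from a normal(100, 25) population."""
    samples = rng.normal(loc=100, scale=25, size=(reps, sample_size))
    q1, q3 = np.percentile(samples, [25, 75], axis=1)
    return float((q3 - q1).std())

sd_10 = sd_of_sample_iqrs(10)
sd_40 = sd_of_sample_iqrs(40)
print(sd_10, sd_40)  # the larger sample size produces less variable IQRs
```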

8. Note that population size does not matter (much).

As long as the population size is considerably larger than the sample size, the population size has a negligible impact on the sampling distribution.  This revelation runs counter to most students’ intuition, so I think it fails to sink in for many students.  This minimal role of population size also stands in stark contrast to the important role of sample size described under the previous suggestion.

How can we help students to appreciate this point?  Simulation, of course.  In post #19 (Lincoln and Mandela, part 1, here), I described a sampling activity using the 268 words in the Gettysburg Address as the population.  The graph on the left below displays the distribution of word lengths (number of letters) in this population (obtained from the applet here).  For the graph on the right, the population has been expanded to include 40 copies of the Gettysburg Address, producing a population size of 268×40 = 10,720 words.

How do these two population distributions compare?  These distributions are identical, except for the population sizes.  The proportions of words at each length value are the same, so the population means and standard deviations are also the same.  The counts on the vertical axis are the only difference in the two graphs.

Now let’s use the applet to select 10,000 samples, with a sample size of 10 words per sample, from each of these two populations.  The graphs below display the resulting distributions of sample means, on the left from the original population and on the right from the 40-times-larger population:

How do these two distributions of sample means compare?  These two sampling distributions are essentially the same.  They both have a very slight skew to the right.  Both means are very close to the population mean of 4.295 letters per word.  The standard deviations of the sample means are very similar in the two sampling distributions, with a slightly smaller standard deviation from the smaller population.  Here’s the bottom-line question: Did the very different population sizes have much impact on the distribution of the sample means?   No, not much impact at all.

Would the variability in a sample mean or a sample proportion differ considerably, depending on whether you were selecting a random sample of 1000 people in California (about 40 million residents) or Montana (about 1 million residents)?  Once again, the population size barely matters, so the (probably surprising) answer is no.

Speaking of large populations, you might also let students know that sampling from a probability distribution is equivalent to sampling from an infinite population.  This is a subtle point, tricky for many students to follow.  You could introduce this idea of sampling from an infinite process with the Reese’s Pieces applet (here).

Depending on your student audience, you could use this as an opening to discuss the finite population correction factor, given by the expression √((N – n) / (N – 1)), where n represents the sample size and N the population size.

This is the factor by which the standard deviation of the sampling distribution should be adjusted when sampling from a finite population, rather than from an infinite process represented by a probability distribution.  When the population size N is considerably larger than the sample size n, this factor is very close to 1, so the adjustment is typically ignored.  A common guideline is that the population size should be at least 20 (some say 10) times larger than the sample size in order to ignore this adjustment.
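A few lines of code can make this guideline concrete.  Here is a quick sketch (in Python, though any software works; the illustrative values of n and N are mine) that evaluates the correction factor:

```python
import math

def fpc(n, N):
    """Finite population correction factor: sqrt((N - n) / (N - 1))."""
    return math.sqrt((N - n) / (N - 1))

# When the population is at least 20 times the sample size, the
# correction is very close to 1, so it is typically ignored.
print(round(fpc(1000, 20_000), 4))  # 0.9747
print(round(fpc(1000, 10_000), 4))  # 0.9487
# With a population only twice the sample size, the correction matters:
print(round(fpc(1000, 2_000), 4))   # 0.7073
```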

9. Celebrate the wonder!

Sampling variability means that the value of a sample statistic varies from sample to sample.  But a sampling distribution reveals a very predictable pattern to that variation.  We should not be shy about conveying to students how remarkable this is!

Consider three populations represented by the following probability distributions:

Are these three probability distributions similar?  Certainly not.  On the left is a normal distribution, in the middle a shifted exponential distribution, and on the right a discrete distribution with five equally spaced values.  These distributions are not similar in the least, except that I selected these populations to have two characteristics in common: They all have mean 100 and standard deviation 20.

Now let’s use software (R, in this case) to select 100,000 random samples of n = 40 from each population, calculating the sample mean for each sample.  Here are the resulting distributions of 100,000 sample means:
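For instructors who want to reproduce this experiment, here is a sketch in Python rather than R.  The shifted exponential and five-value discrete populations below are plausible stand-ins constructed to have mean 100 and SD 20; they are not necessarily the exact populations graphed:

```python
import math
import random
import statistics

random.seed(1)
REPS, N = 10_000, 40  # fewer repetitions than the post's 100,000, for speed

d = math.sqrt(200)  # spacing that gives the five-value distribution an SD of 20
populations = {
    "normal":              lambda: random.gauss(100, 20),
    "shifted exponential": lambda: 80 + random.expovariate(1 / 20),  # mean 100, SD 20
    "five-value discrete": lambda: 100 + d * random.choice((-2, -1, 0, 1, 2)),
}

results = {}
for name, draw in populations.items():
    xbars = [statistics.fmean(draw() for _ in range(N)) for _ in range(REPS)]
    results[name] = (statistics.fmean(xbars), statistics.stdev(xbars))
    # Each distribution of sample means centers near 100 with SD near 20/sqrt(40), about 3.16
    print(name, round(results[name][0], 2), round(results[name][1], 2))
```

Despite the very different population shapes, all three distributions of sample means come out approximately normal with nearly identical centers and variability.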

That example is very abstract, though, so many students do not share my enthusiasm for how remarkable that result is.  Here’s a more specific example: In post #36 (Nearly normal, here), I mentioned that birthweights of babies in the U.S. can be modelled by a normal distribution with mean 3300 grams and standard deviation 500 grams.  Consider selecting a random sample of 400 newborns from this population.  Which is larger: the probability that a single randomly selected newborn weighs between 3200 and 3400 grams, or the probability that the sample mean birthweight in the random sample of 400 newborns is between 3200 and 3400 grams?  Explain your answer.

The second probability is much larger than the first.  The distribution of sample means is much less variable than the distribution of individual birthweights.  Therefore, a sample mean birthweight is much more likely to be within ±100 grams of the mean than an individual birthweight.  These probabilities turn out to be about 0.1585 (based on z-scores of ±0.2) for an individual baby, compared to 0.9999 (based on z-scores of ±4.0) for the sample mean birthweight.
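These two probabilities are straightforward to verify with software.  Here is a Python sketch, where norm_cdf is a small helper built from the error function, not a library routine:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma, n = 3300, 500, 400
se = sigma / sqrt(n)  # SD of the sample mean: 500/20 = 25 grams

p_individual = norm_cdf(3400, mu, sigma) - norm_cdf(3200, mu, sigma)
p_mean = norm_cdf(3400, mu, se) - norm_cdf(3200, mu, se)
print(round(p_individual, 4))  # about 0.1585 (z-scores of +/-0.2)
print(round(p_mean, 4))        # about 0.9999 (z-scores of +/-4.0)
```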

I think this is remarkable too: Even when we cannot predict an individual value well at all, we can nevertheless predict a sample average very accurately.

Now let’s work with a categorical variable.  Here is the distribution of sample proportions that results from simulating 1,000,000 samples of 1000 observations each, assuming that the population proportion with the characteristic is 0.4 (using Minitab software this time):

What’s remarkable here?  Well, for one thing, this does look amazingly like a bell-shaped curve.  More importantly, let me ask: About what percentage of the sample proportions are within ±0.03 of the assumed population proportion?  The answer is very close to 95%.  So what?  Why is this remarkable?  Well, let’s make the context the proportion of eligible voters in the United States who prefer a particular candidate in an election.  There’s about a 95% chance that the sample proportion preferring that candidate would be within ±0.03 of the population proportion with that preference.  Even though there are more than 250 million eligible voters in the U.S., we can estimate the proportion who prefer a particular candidate very accurately (to within ±0.03 with 95% confidence) based on a random* sample of only 1000 people!  Isn’t this remarkable?!

* I hasten to add that random is a very important word in this statement. Selecting a random sample of people is much harder to achieve than many people believe.
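Where does the ±0.03 figure come from?  It is roughly twice the SD of a sample proportion with p = 0.4 and n = 1000, which a few lines of Python can confirm (the post itself uses applets and Minitab for such calculations):

```python
from math import sqrt

p, n = 0.4, 1000
sd = sqrt(p * (1 - p) / n)  # SD of the sample proportion
margin = 1.96 * sd          # half-width capturing about 95% of sample proportions
print(round(sd, 4))         # 0.0155
print(round(margin, 4))     # 0.0304, roughly the +/-0.03 in the text
# Note that the population size never appears in this calculation,
# which is the point of suggestion #8 above.
```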

10. Don’t overdo it.

I stated at the outset of this two-part series that sampling distributions comprise the hardest topic to teach in introductory statistics.  But I’m not saying that this is the most important topic to teach.  I think many teachers succumb to the temptation to spend more time on this topic than is necessary*.

* No doubt I have over-done it myself in this long, two-part series.

Sampling distributions lie at the heart of fundamental concepts of statistical inference, namely p-values and confidence intervals.  But we can lead students to explore and understand these concepts* without teaching sampling distributions for their own sake, and without dwelling on mathematical aspects of sampling distributions.

* Please see previous posts for ideas and examples. Posts #12, #13, and #27 (here, here, and here) use simulation-based inference to introduce p-values. Posts #14 and #15 (here and here) discuss properties of confidence intervals.

This lengthy pair of posts began when I answered a student’s question about the hardest topic to teach in introductory statistics by saying: how the value of a sample statistic varies from sample to sample, if we were to repeatedly take random samples from a population. I conclude by restating my ten suggestions for teaching this challenging topic:

1. Emphasize that the value of a statistic varies from sample to sample.
2. Hold off on using the term sampling distribution, and then always add of what.
3. Simulate!
4. Start with the sampling distribution of a sample proportion, then a sample mean.
5. Emphasize the distinctions among three different distributions: population distribution, sample distribution, sampling distribution.
6. Pay attention to the center of a sampling distribution as well as its shape and variability.
7. Emphasize the impact of sample size on sampling variability.
8. Note that population size does not matter (much).
9. Celebrate the wonder!
10. Don’t overdo it.

## #41 Hardest topic, part 1

As I recounted in post #38 (here), a student recently asked what I think is the hardest topic to teach in an introductory statistics course.  My response was: how the value of a sample statistic varies from sample to sample, if we were to repeatedly take random samples from a population.  As you no doubt realize, I could have answered much more succinctly: sampling distributions.

Now I will offer suggestions for helping students to learn about this most challenging topic.  Along the way, in keeping with the name and spirit of this blog, I will sprinkle in many questions for posing to students, as always in italics.

1. Emphasize that the value of a statistic varies from sample to sample.

Just as you can’t run before you can walk, you also can’t understand the long-run pattern of variation in a statistic until you first realize that the value of a statistic varies from sample to sample.  I think many teachers consider sampling variability to be so obvious that it does not warrant mentioning.  But have you heard the expression, widely but mistakenly attributed to Einstein*, that “the definition of insanity is doing the same thing over and over and expecting different results”?  Well, if you take a random sample of 10 Reese’s Pieces candies from a large bag, and then do that over and over again, is it crazy to expect to obtain different values for the sample proportions of candies that are orange?  Of course not!  In fact, you would be quite mistaken to expect to see the same result every time.

I think this is a key idea worth emphasizing.  One way to do that is to give students samples of Reese’s Pieces candies*, ask them to calculate the proportion that are orange in their sample, and produce a dotplot on the board to display the variability in these sample proportions.

* Just for fun, I often ask my students: In what famous movie from the 1980s did Reese’s Pieces play a role in the plot?  Apparently the Mars company that makes M&Ms passed on this opportunity, and Hershey Foods jumped at the chance to showcase its lesser-known Reese’s Pieces**.  The answer is E.T. the Extra-Terrestrial.

** See here for a discussion of this famous product-placement story.

As we study sampling variability, I also ask students: Which do you suspect varies less: averages or individual values?  This question is vague and abstract, so I proceed to make it more concrete: Suppose that every class on campus calculates the average height of students in the class.  Which would vary less: the heights of individual students on campus, or the average heights in these classes?  Explain your answer.

I encourage students to discuss this in groups, and they usually arrive at the correct answer: Averages vary less than individual values.  I want students to understand this fundamental property of sampling variability before we embark on the study of sampling distributions.

2. Hold off on using the term sampling distribution, and then always add of what.

The term sampling distribution is handy shorthand for people who already understand the idea*.  But I fear that using this term when students first begin to study the concept is unhelpful, quite possibly harmful to their learning.

* For this reason, I will not hesitate to use the term throughout this post.

I suggest that we keep students’ attention on the big idea: how the value of a sample statistic would vary from sample to sample, if random samples were randomly selected over and over from a population.  That’s quite a mouthful, consisting of 25 words with a total of 118 letters.  It’s a lot easier to say sampling distribution, with only 2 words and 20 letters.  But the two-word phrase does not convey meaning unless you already understand, whereas the 25-word description reveals what we’re studying.  I’ll also point out that the 25 words are mostly short, with an average length of only 4.72 letters per word, compared to an average length of 10.0 letters per word in the two-word phrase*.

* I’m going to resist the urge to determine the number of Scrabble points in these words.  See post #37 (What’s in a name, here) if that appeals to you.

I don’t recommend withholding the term sampling distribution from students forever.  But for additional clarity, I do suggest that we always add of what.  For example, we should say sampling distribution of the sample mean, or of the sample proportion, or of the chi-square test statistic, rather than expecting students to figure out what we intend from the context.

3. Simulate!

Sampling distributions address a hypothetical question: what would happen if …  This hypothetical-ness is what makes the topic so challenging to understand.  I realize, of course, that the mathematics of random variables provides one approach to studying sampling distributions, but I think the core idea of what would happen if … comes alive for students with simulation.  We can simulate taking thousands of samples from a population to see what the resulting distribution of the sample statistic looks like.

What do I recommend next, after you and your students have performed such a simulation?  That’s easy: Simulate again.  What next?  Simulate again, this time perhaps by changing a parameter value, asking students to predict what will change, and then running the simulation to see what does change in the distribution of the sample statistics.  Then what?  Simulate some more!  Now change the sample size, ask students to predict what will change in the sampling distribution, and then examine the results.

I hope that students eventually see so many common features in simulation results that they start to wonder if there’s a way to predict the distribution of a sample statistic in advance, without needing to run the simulation.  At this point, we teachers can play the hero’s role by presenting the mathematical results about approximate normality.  This is also a good time, after students have explored lots of simulation analyses of how a sample statistic varies from sample to sample, to introduce the term sampling distribution.

I think simulation is our best vehicle for helping students to visualize the very challenging concept of what would happen if …  But I hasten to add that simulation is not a panacea.  Even extensive use of simulation does not alter my belief that sampling distributions are the hardest topic in Stat 101.

How can we maximize the effectiveness of simulation for student learning of this topic?  One answer is to make the simulation as visual as possible.  For example, my colleague Beth Chance designed an applet (here) that simulates random selection of Reese’s Pieces by showing candies emerging from a machine:

Students see the candies coming out of the machine and the resulting value of the sample proportion that are orange.  Then they see the graph of sample proportions on the right being generated sample-by-sample as the candy machine dispenses more and more samples.

Another way to make sure that simulation is effective for student learning is to ask (good) questions that help students to understand what’s going on with the simulation.  For example, about the Reese’s Pieces applet: What are the observational units in a single sample?  What is the variable, and what kind of variable is it?  What are the observational units in the graph on the right?  What is the variable, and what kind of variable is it?  In a single sample, the observational units are the individual pieces of candy, and the variable is color, which is categorical.  About the graph on the right, I used only 100 samples in the simulation above so we can see individual dots.  For a student who has trouble identifying the observational units, I give a hint by asking: What does each of the 100 dots represent?  The observational units are the samples of 25 candies, and the variable is the sample proportion that are orange, which is numerical.  These questions can help students to focus on this important distinction between a single sample and a sampling distribution of a statistic.

What do you expect to change in the graph when we change the population proportion (probability) from 0.4 to 0.7?  Most students correctly predict that the entire distribution of sample proportions will shift to the right, centering around 0.7.  Then changing the input value and clicking on “Draw Samples” confirms this prediction.  What do you expect to change in the graph when we change the sample size from 25 to 100?  This is a harder question, but many students have the correct intuition that this change reduces the variability in the distribution of sample proportions.

Here’s another question that tries to draw students’ attention to how simulation works: Which of the inputs has changed between the graph on the left and the graph on the right below – probability, sample size, or number of samples?  What is the impact of that change?

A hint for students who do not spot the correct answer immediately: Do these distributions differ much in their centers or their variability?  The answer here is no, based on both the graph and the means and standard deviations.  (Some students need to be convinced that the difference between the standard deviations here – 0.100 vs. 0.098 – is negligible and unimportant.)  This suggests that the population proportion (probability) and sample size did not change.  The only input value that remains is the correct answer: number of samples.  The scale on the vertical axis makes clear that the graph on the right was based on a larger number of samples than the graph on the left.  This is a subtle issue, the point being that the number of samples, or repetitions, in a simulation analysis is not very important.  It simply needs to be a large number in order to display the long-run pattern as clearly as possible.  The graph on the right is based on 10,000 samples, compared to 1000 samples for the graph on the left.
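A bare-bones version of what the applet simulates can be sketched in Python: sample proportions of orange candies, with an assumed population proportion of 0.4, for two different sample sizes.  The function name sim_props is mine:

```python
import random
import statistics

random.seed(2)
p, reps = 0.4, 10_000  # population proportion orange; number of simulated samples

def sim_props(n):
    """Simulate `reps` sample proportions of orange candies in samples of size n."""
    return [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]

sds = {}
for n in (25, 100):
    props = sim_props(n)
    sds[n] = statistics.stdev(props)
    # theory: SD of a sample proportion is sqrt(p(1-p)/n), about 0.098 for n=25 and 0.049 for n=100
    print(f"n={n:3d}  mean {statistics.fmean(props):.3f}  SD {sds[n]:.3f}")
```

Quadrupling the sample size from 25 to 100 cuts the SD of the sample proportions roughly in half, matching the intuition described above.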

4. Start with the sampling distribution of a sample proportion, then a sample mean.

Simulating a sampling distribution requires specifying the population from which the random samples are to be selected.  This need to specify the population is a very difficult idea for students to understand.  In practice, we do not know the population.  In fact, the reason for taking a sample is to learn about the population.  But we need to specify a population to sample from in order to examine the crucial question of what would happen if … When studying a yes/no variable and therefore a sample proportion, you only need to specify one number in order to describe the entire population: the population proportion.  Specifying the population is more complicated when studying a sample mean of a numerical variable, because you need to think about the shape and variability of the distribution for that population.  This relative simplicity of the proportion case is why I prefer to study the sampling distribution of a sample proportion before moving to the sampling distribution of a sample mean.

5. Emphasize the distinctions among three different distributions: population distribution, sample distribution, sampling distribution*.

* It’s very unfortunate that those last two sound so similar, but that’s one of the reasons for suggestion #2, that we avoid using the term sampling distribution until students understand the basic idea.

The best way to emphasize these distinctions is to display graphs of these three distributions side-by-side-by-side.  For example, the following graphs, generated from the applet here, show three distributions:

• ages (in years) in a population of 1000 pennies
• ages in a random sample of 25 pennies
• sample mean ages for 10,000 random samples of 25 pennies each

Which of these graphs has different observational units and variables from the other two graphs?  The graph on the right is the odd one out.  The observational units on the right are not pennies but samples of 25 pennies.  The variable on the right is sample mean age, not individual age.  Identify the number of observational units in each of these graphs.  I admit that this is not a particularly important question, but I want students to notice that the population (on the left) consists of 1000 pennies, the sample (in the middle) has 25 pennies, and the distribution of sample means (on the right) is based on 10,000 samples of 25 pennies each.

Which of the following aspects of a distribution do the three graphs have in common – shape, center, or variability?  The similar mean values indicate that the three graphs have center in common.  Describe how the graphs differ on the other two aspects.  The distribution of sample means on the right has much less variability than the distributions of penny ages on the left and in the middle, again illustrating the principle that averages vary less than individual values.  The distribution of sample means on the right is also quite symmetric and bell-shaped, as compared to the skewed-right distributions of penny ages in the other two graphs.

This issue reminds me of an assessment question that I discussed in post #16 (Questions about cats, here): Which is larger – the standard deviation of the weights of 1000 randomly selected people, or the standard deviation of the weights of 10 randomly selected cats?  This question is not asking about the mean weight of a sample.  It’s simply asking about the standard deviation of individual weights, so the sample size is not relevant.  Nevertheless, many students mistakenly respond that cats’ weights have a larger standard deviation than people’s weights.

Here’s a two-part assessment question that addresses this issue: Suppose that body lengths of domestic housecats (not including the tail) have mean 18 inches and standard deviation 3 inches.  a) Which would be larger – the probability that the length of a randomly selected cat is longer than 20 inches, or the probability that the average length in a random sample of 50 cats is longer than 20 inches, or are these probabilities the same?  b) Which would be larger – the probability that the length of a randomly selected cat is between 17 and 19 inches, or the probability that the average length in a random sample of 50 cats is between 17 and 19 inches, or are these probabilities the same?  To answer these questions correctly, students need to remember that averages vary less than individual values.  So, because a length of 20 inches is greater than the mean, the probability of exceeding 20 inches is greater for an individual cat than for a sample average.  Similarly, the probability of being between 17 and 19 inches is greater for a sample average than for an individual cat, because this interval is centered on the population mean.
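Although the question requires only the variability principle, the probabilities can be calculated if we additionally assume that cat lengths are roughly normally distributed (an assumption of mine, not stated in the question).  A Python sketch:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma, n = 18, 3, 50
se = sigma / sqrt(n)  # SD of the sample mean length, about 0.42 inches

# (a) P(length > 20 inches): individual cat vs. average of 50 cats
p_a_individual = 1 - norm_cdf(20, mu, sigma)  # about 0.2525
p_a_mean = 1 - norm_cdf(20, mu, se)           # essentially 0
# (b) P(17 < length < 19 inches): individual cat vs. average of 50 cats
p_b_individual = norm_cdf(19, mu, sigma) - norm_cdf(17, mu, sigma)  # about 0.2611
p_b_mean = norm_cdf(19, mu, se) - norm_cdf(17, mu, se)              # about 0.9816
print(p_a_individual, p_a_mean, p_b_individual, p_b_mean)
```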

I find that I have more to say about teaching what I consider to be the hardest topic in an introductory statistics course, but this post is already on the long side.  I will provide five more suggestions and several more examples about teaching sampling distributions next week.

## #40 Back to normal

I presented some questions for helping students to understand concepts related to normal distributions in post #36 (here).  I return to normal distributions* in this post by presenting an extended activity (or assignment) that introduces the topic of classification and the concept of trade-offs in error probabilities.  This activity also gives students additional practice with calculating probabilities and percentiles from normal distributions.  As always, questions that I pose to students appear in italics.

* I came up with the “back to normal” title of this post many weeks ago, before so much of daily life was turned upside down by the coronavirus pandemic.  I realize that everyday life will not return to normal soon, but I decided to continue with the title and topic for this post.

Suppose that a bank uses an applicant’s score on some criteria to decide whether or not to approve a loan for the applicant.  Suppose for now that these scores follow normal distributions, both for people who would repay the loan and for those who would not.  Those who would repay the loan have a mean of 70 and standard deviation of 8; those who would not repay the loan have a mean of 30 and standard deviation of 8.

• a) Draw sketches of these two normal curves on the same axis.
• b) Write a sentence or two comparing and contrasting these distributions.
• c) Suggest a decision rule, based on an applicant’s score, for deciding whether or not to give a loan to the applicant.
• d) Describe the two kinds of classification errors that could be made in this situation.
• e) Determine the probabilities of the two kinds of error with this rule.

a) Below is a graph, generated with R, of these two normal distributions.  The red curve on the left pertains to people who would not repay the loan; the green curve on the right is for those who would repay the loan:

b) The two distributions have the same shape and variability.  But their centers differ considerably, with a much larger center for those who would repay the loan.  The scores show very little overlap between the two groups.

c) Most students have the reasonable thought to use the midpoint of the two means (namely, 50) as the cutoff value for a decision rule.  Some students need some help to understand how to express the decision rule: Approve the loan for those with a score of 50 or higher, and deny the loan to those with a score below 50.

d) This is the key question that sets up the entire activity.  Students need to recognize and remember that there are two distinct issues (variables) here: 1) whether or not the applicant would in fact repay the loan, and 2) whether the loan application is approved or denied.  Keeping these straight in one’s mind is crucial to understanding and completing this activity.  I find myself reminding students of this distinction often.

With these two variables in mind, the two kinds of errors are:

• Denying the loan to an applicant who would repay
• Approving the loan for an applicant who would not repay

e) The z-scores are (50 – 70) / 8 = -2.50 for one kind of error and (50 – 30) / 8 = 2.50 for the other.  Both probabilities are approximately 0.006.  At this point I prefer that students use software* for these calculations, so they can focus on the concepts of classification and error probability trade-offs.  These probabilities are shown (but hard to see, because they are so small) in the shaded areas of the following graph, with cyan for the first kind of error and pink for the other:

* Software options include applets (such as here), R, Minitab, Excel, …
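For part (e), the calculation can be sketched in Python, where norm_cdf is a small helper built from the error function rather than a library routine:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

cutoff = 50
p_deny_good = norm_cdf(cutoff, 70, 8)        # would repay, but score < 50: z = -2.5
p_approve_bad = 1 - norm_cdf(cutoff, 30, 8)  # would not repay, but score >= 50: z = +2.5
print(round(p_deny_good, 4))    # about 0.0062
print(round(p_approve_bad, 4))  # about 0.0062, equal by symmetry
```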

More interesting questions arise when the two score distributions are not separated so clearly.

Now suppose that the credit scores are normally distributed with mean 60 and standard deviation 8 among those who would repay the loan, as compared to mean 40 and standard deviation 12 among those who would not repay the loan.

• f) Draw sketches of these two normal curves on the same axis.
• g) Describe how this scenario differs from the previous one.
• h) Determine the probabilities of the two kinds of error (using the decision rule based on a cut-off value of 50).
• i) Write a sentence or two to interpret the two error probabilities in context.

f) Here is the new graph:

g) The primary change is that the centers of these score distributions are much closer than before, which means that the distributions have much more overlap than before.  This will make it harder to distinguish between people who would repay their loan and those who would not.  A smaller difference is that the variability now differs in the two score distributions, with slightly less variability in the scores of those who would repay the loan.

h) These error probabilities turn out to be approximately 0.106 for the probability that an applicant who would repay the loan is denied (shown in cyan in the graph below), 0.202 for the probability that an applicant who would not repay is approved (shown in pink):

i) I think this question is important for assessing whether students truly understand, and can successfully communicate, what they have calculated.  There’s a 10.6% chance that an applicant who would repay the loan is denied the loan.  There’s a 20.2% chance that an applicant who would not repay the loan is approved.

Now let’s change the cutoff value in order to decrease one of the error probabilities to a more acceptable level.

• j) In which direction – smaller or larger – would you need to change the decision rule’s cutoff value in order to decrease the probability that an applicant who would repay the loan is denied?
• k) How would the probability of the other kind of error – approving a loan for an applicant who would not repay it – change with this new cutoff value?
• l) Determine the cutoff value needed to decrease the error probability in (j) to .05.  Does this confirm your answer to (j)?
• m) Determine the other error probability with this new cut-off rule.  Does this confirm your answer to (k)?
• n) Write a sentence or two to interpret the two error probabilities in context.

j) This question prompts students to think about the goal before doing the calculation.  This kind of error occurs when the score is less than the cutoff value, and we need the error probability to decrease from 0.106 to 0.050.  Therefore, we need a smaller cutoff value, less than the previous cutoff of 50.  Here is a graph of the situation, with the cyan-colored area reduced to 0.05:

k) Using a smaller cutoff value will produce a larger area above that value under the curve for people who would not repay the loan, as shown in pink in the graph above.  Therefore, the second error probability will increase as the first one decreases.

l) Students need to calculate a percentile here.  Specifically, they need to determine the 5th percentile of a normal distribution with mean 60 and standard deviation 8.  They could use software to determine this, or they could realize that the z-score for the 5th percentile is -1.645.  The new cutoff value needs to be 1.645 standard deviations below the mean: 60 – 1.645×8 = 46.84.  This is indeed smaller than the previous cutoff value of 50.  When students mistakenly add 1.645 standard deviations to the mean, I hope that they realize their error by recalling their correct intuition that the cutoff value should be smaller than before.

m) This probability turns out to be approximately 0.284, which is indeed larger than with the previous cutoff (0.202).

n) Now there’s a 5% chance that an applicant who would repay the loan is denied, because that’s how we determined the cutoff value for the decision rule.  This rule produces a 28.4% chance that an applicant who would not repay the loan is approved.

Now let’s reduce the probability of the other kind of error.

• o) Repeat parts (j) – (n) with the goal of decreasing the probability that an applicant who would not repay the loan is approved to 0.05.

o) For this goal, the cutoff value needs to become larger than 50, which increases the probability that an applicant who would repay the loan is denied.  The cut-off value is now 1.645 standard deviations above the mean: 40 + 1.645×12 = 59.74.  This increases the other error probability to approximately 0.487.  This means that 48.7% of those who would repay the loan are denied, and 5% of those who would not repay are approved, as depicted in the following graph:
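The percentile calculations in parts (l) and (o), along with their companion error probabilities, can be verified with an inverse-normal function.  Here is a sketch using Python’s statistics.NormalDist (the variable names are mine):

```python
from statistics import NormalDist

repay = NormalDist(60, 8)    # scores of those who would repay
default = NormalDist(40, 12) # scores of those who would not repay

# (l)-(m): cap the deny-a-good-applicant error at 0.05
cut1 = repay.inv_cdf(0.05)              # 5th percentile: 60 - 1.645*8
print(round(cut1, 2))                   # about 46.84
print(round(1 - default.cdf(cut1), 3))  # other error grows to about 0.284

# (o): cap the approve-a-bad-applicant error at 0.05
cut2 = default.inv_cdf(0.95)            # 95th percentile: 40 + 1.645*12
print(round(cut2, 2))                   # about 59.74
print(round(repay.cdf(cut2), 3))        # other error grows to about 0.487
```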

Now that we have come up with three different decision rules, I ask students to think about how we might compare them.

• p) If you consider the two kinds of errors to be equally serious, how might you decide which of the three decision rules considered thus far is the best?

This open-ended question is a tough one for students.  I give them a hint to think about the “equally serious” suggestion, and some suggest looking at the average (or sum) of the two error probabilities.

• q) Calculate the average of the two error probabilities for the three cutoff values that we have considered.
• r) Which cutoff value is the best, according to this criterion, among these three options?

We can organize our previous calculations in a table:

According to this criterion, the best cutoff value among these three options is 50, because that produces the smallest average error probability.  But of course, these three values are not the only possible choices for the cutoff criterion.  I suggest to students that we could write some code to calculate the two error probabilities, and their average, for a large number of possible cutoff values.  In some courses, I ask them to write this code for themselves; in other courses I provide them with the following R code:

• s) Explain what each line of code does.
• t) Run the code and describe the resulting graph.
• u) Report the optimal cutoff value and its error probabilities.
• v) Write a sentence describing the optimal decision rule.

Asking students to explain what code does is no substitute for asking them to write their own code, but it can assess some of their understanding:

• The first line creates a vector of cutoff values from 30 to 70.
• The second line calculates the probability that an applicant who would repay the loan has a score below the cutoff value and so would mistakenly be denied.
• The third line calculates the probability that an applicant who would not repay the loan has a score above the cutoff value and so would mistakenly be approved.
• The fourth line calculates the average of these two error probabilities.
• The fifth line produces a graph of average error probability as a function of cutoff value.
• The sixth line determines the optimal cutoff value by identifying the cutoff value that minimizes the average error probability.

Here is the resulting graph:

This graph shows that cutoff values in the neighborhood of 50 are much better (in terms of minimizing average error probability) than cutoff values less than 40 or greater than 60.  The minimum value of average error probability appears to be close to 0.15, achieved at a cutoff value slightly above 50.

The R output reveals that the optimal cutoff value is 50.14, very close to the first cutoff value that we analyzed.  With this cutoff value, the probability of denying an applicant who would repay the loan is 0.109, and the probability of approving an applicant who would not repay is 0.199.  The average error probability with this cutoff value is 0.154.

The optimal decision rule, for minimizing the average of the two error probabilities, is to approve a loan for those with a score of 50.14 or greater, and deny a loan to those with a score of less than 50.14.

• w) Now suppose that you consider denying an applicant who would repay the loan to be three times worse than approving an applicant who would not repay the loan.  What criterion might you minimize in this case?
• x) With this new criterion, would you expect the optimal cutoff value to be larger or smaller than before?  Explain.
• y) Describe how you would modify the code to minimize the appropriate weighted average of the error probabilities.
• z) Run the modified code.  Report the optimal cutoff value and its error probabilities.  Also write a sentence describing the optimal decision rule.

We can take the relative importance of the two kinds of errors into account by choosing the cutoff value that minimizes a weighted average of the two error probabilities.  Because we consider the probability of denying an applicant who would repay to be the more serious error, we need to reduce that probability, which means using a smaller cutoff value.

We do not need to change the first three lines of code.  The key change comes in the fourth line, where we must calculate a weighted average instead of an ordinary average.  Then we need to remember to use the weighted average vector in the fifth and sixth lines.  Here is the modified R code:
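A Python sketch of the modified computation (again assuming the inferred N(60, 8) repayer and N(40, 12) non-repayer score distributions); the one substantive change is the weighted-average line:

```python
import math

def Phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

cutoffs = [30 + 0.01 * i for i in range(4001)]                  # cutoff values 30.00 to 70.00
deny_repayer = [Phi((c - 60) / 8) for c in cutoffs]             # P(repayer denied)
approve_nonrepayer = [1 - Phi((c - 40) / 12) for c in cutoffs]  # P(non-repayer approved)

# The key change: a 3-to-1 weighted average, because denying an applicant
# who would repay is considered three times as serious.
weighted_error = [(3 * d + a) / 4 for d, a in zip(deny_repayer, approve_nonrepayer)]

best = min(range(len(cutoffs)), key=weighted_error.__getitem__)
print(f"optimal cutoff {cutoffs[best]:.2f}, weighted average error {weighted_error[best]:.3f}")
```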

The graph produced by this code follows:

We see from the graph that the weighted average of error probabilities is minimized with a cutoff value near 45.  The R output reveals the optimal cutoff value to be 45.62.  The associated error probabilities are 0.036 for denying an applicant who would repay, 0.320 for approving an applicant who would not repay, and 0.107 for the weighted average.  The optimal decision rule for this situation is to approve applicants with a score of 45.62 or higher, and deny applicants with a score of less than 45.62.

Whew, I have reached the end of the alphabet*, so I’d better stop there!

* You may have noticed that I had to squeeze a few questions into part (z) to keep from running out of letters.

Most teachers like to give their students an opportunity for lots of practice with normal distribution calculations.  With this activity, I have tried to show that you can provide such practice opportunities while also introducing students to ideas such as classification and error probability trade-offs.

P.S. I have used a version of this activity for many years, but I modified the context for this blog post after watching a session at the RStudio conference held in San Francisco at the end of January.  Martin Wattenberg and Fernanda Viegas gave a very compelling presentation (a recording of which is available here) in which they described an interactive visualization tool (available here) that allows students to explore how different cutoff values affect error probabilities.  Their tool addresses issues of algorithmic fairness vs. bias by examining the impact of different criteria on two populations – labeled as blue and orange people.

P.P.S. I was also motivated to develop this activity into a blog post by a presentation that I saw from Chris Franklin in Atlanta in early February.  Chris presented some activities described in the revised GAISE report for PreK-12 (the updated 2020 version will appear here later this year), including one that introduces the topic of classification.

## #39 Batch testing

One of my favorite examples for studying discrete random variables and expected values involves batch testing for a disease.  I would not call this a classic probability problem, but it’s a fairly common problem that appears in many probability courses and textbooks.  I did not intend to write a blog post about this, but I recently read (here) that the Nebraska Public Health Lab has implemented this idea for coronavirus testing.  I hope this topic is timely and relevant, as so many teachers meet with their students remotely in these extraordinary circumstances.  As always, questions that I pose to students appear in italics.

Here are the background and assumptions: The idea of batch testing is that specimens from a group of people are pooled together into one batch, which then undergoes one test.  If none of the people has the disease, then the batch test result will be negative, and no further tests are required.  But if at least one person has the disease, then the batch test result will be positive, and then each person must be tested individually.  Let the random variable X represent the total number of tests that are conducted.  Let’s start with a disease probability of p = 0.1 and a sample size of n = 8.  Assume that whether or not a person has the disease is independent from person to person.

a) What are the possible values of X?  When students need a hint, I say that there are only two possible values.  If they need more of a hint, I ask about what happens if nobody in the sample has the disease, and what happens if at least one person in the sample has the disease.  If nobody has the disease, then the process ends after that 1 test. But if at least one person has the disease, then all 8 people need to undergo individual tests.  The possible values of X are therefore 1 and 9.

b) Determine the probability that only one test is needed.  For students who do not know where to start, I ask: What must be true in order that only one test is needed?  They should recognize that only one test is needed when nobody has the disease.  Because we’re assuming independence, we calculate the probability that nobody has the disease by multiplying each person’s probability of not having the disease.  Each person has probability 0.9 of not having the disease, so the probability that nobody has the disease is (0.9)^8 ≈ 0.430.

c) Determine the probability for the other possible value of X.  Because there are only two possible values, we can simply subtract the other probability from 1, giving 1 – (0.9)^8 ≈ 0.570.  I point out to students that this is the probability that at least one person in the sample has the disease. I also note that it’s often simplest to calculate such a probability with the complement rule: Pr(at least one) = 1 – Pr(none).

d) Interpret these probabilities with sentences that begin “There’s about a _____ % chance that __________ .”  I like to give students practice with expressing probabilities in sentence form: There’s about a 43% chance that only one test is needed, and about a 57% chance that nine tests are needed.

e) Display the probability distribution of X in a table.  For a discrete random variable, a probability distribution consists of its possible values and their probabilities.  We can display this probability distribution as follows:

f) Determine the expected value of the number of tests that will be conducted.  With only two possible values, this is a very straightforward calculation: E(X) = 1×[(.9)^8] + 9×[1–(.9)^8] = 9 – 8×[(.9)^8] ≈ 5.556 tests.

g) Interpret what this expected value means.  In post #18 (What do you expect, here), I argued that we should adopt the term long-run average in place of expected value.  The interpretation is that if we were to repeat this batch testing process for a large number of repetitions, the long-run average number of tests that we would need would be very close to 5.556 tests.

h) Which is more likely – that the batch procedure will require one test or nine tests?  This is meant to be an easy one: It’s more likely, by a 57% to 43% margin, that the procedure will require nine tests.

i) In what sense is batch testing better than simply testing each individual at the outset?  This is the key question, isn’t it?  Part (h) suggests that perhaps batch testing is not helpful, because in any one situation you’re more likely to need more tests with batch testing than you would with individual testing from the outset.  But I point students who need a hint back to part (g): In the long run, you’ll only need an average of 5.556 tests with batch testing, which is fewer than the 8 tests you would always need with individual testing.  If you need to test a large number of people, and if tests are expensive or in limited supply, then batch testing provides some savings on the number of tests needed.

The questions above used particular values for the number of people (n) and the probability that an individual has the disease (p).  Next I ask students to repeat their analysis for the general case.

j) Specify the probability distribution of X, in terms of n and p.  If students need a hint, I remind them that there are still only two possible values of X.  If nobody has the disease, only 1 test is needed.  If at least one person has the disease, then (n+1) tests are needed.  The probability that only 1 test is needed is the product of each individual’s probability of not having the disease: (1–p)^n.  Then the complement rule establishes that the probability of needing (n+1) tests is: 1–(1–p)^n.  The probability distribution of X is shown in the table:

k) Determine the expected value of the number of tests, as a function of n and p.  The algebra gets a bit messy, but setting this up is straightforward: E(X) = 1×[(1–p)^n] + (n+1)×[1–(1-p)^n], which simplifies to n+1–n×[(1–p)^n].

l) Verify that this function produces the expected value that you calculated above when n = 8 and p = 0.1.  I want students to develop the habit of mind to check their work like this on their own, but I can model this practice by asking this question explicitly.  Sure enough, plugging in n = 8 and p = 0.1 produces E(X) = 5.556 tests.
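The formula from part (k) and the check in part (l) can be captured in a couple of lines of Python:

```python
def expected_tests(n, p):
    """Expected number of tests with one batch of n people,
    each independently having the disease with probability p."""
    return n + 1 - n * (1 - p) ** n

print(round(expected_tests(8, 0.1), 3))  # → 5.556, matching the earlier calculation
```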

m) Graph E(X) as a function of n, for values from 2 to 50, with a fixed value of p = 0.1.  Students can use whatever software they like to produce this graph, including Excel:

n) Describe the behavior of this function.  This is an increasing function.  This makes sense because having more people produces a greater chance that at least one person has the disease, so this increases the expected number of tests.  The behavior of the function is most interesting with a small sample size.  The function is slightly concave up for sample sizes less than 10, and then close to linear for larger sample sizes.

o) Determine the values of n for which batch testing is advantageous compared to individual testing, in terms of producing a smaller expected value for the number of tests.  Here’s the key question again.  We are looking in the graph for values of n (number of people) for which the expected number of tests (represented by the dots) is less than the value of n.  The gray 45-degree line in the following graph makes this comparison easier to see:

From this graph, we see that the expected number of tests with 25 people is a bit less than 25, and the expected number of tests with 35 people is slightly greater than 35, but it’s hard to tell from the graph with 30 people.  We can zoom in on some values to see where the expected number of tests begins to exceed the sample size:

This zoomed-in table reveals that the expected number of tests is smaller with batch testing, as compared to individual testing, when there are 33 or fewer people.  (Remember that we have assumed that the disease probability is p = 0.1 here.)
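The same crossover can be found by scanning sample sizes directly, since batch testing helps exactly when E(X) < n.  A short Python sketch, with p = 0.1 as above:

```python
def expected_tests(n, p):
    """Expected number of tests with one batch of n people."""
    return n + 1 - n * (1 - p) ** n

p = 0.1
# Sample sizes for which the expected number of tests is smaller with batching:
advantageous = [n for n in range(2, 51) if expected_tests(n, p) < n]
print(max(advantageous))  # largest sample size for which batching helps at p = 0.1
```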

p) Now graph E(X) as a function of p, for values from 0.01 to 0.50 in multiples of 0.01, with a fixed value of n = 8.  Here is what Excel produces:

q) Describe the behavior of this function.  This function is also increasing, indicating that we expect to need more tests as the probability of an individual having the disease increases.  The rate of increase diminishes gradually as the probability increases, approaching a limit of 9 tests.

r) Determine the values of p for which batch testing is advantageous compared to individual testing.  Looking at the graph, we see that the expected number of tests is less than 8 for values of p less than 0.2.  We also see that the exact cutoff value is a bit larger than 0.2, but we need to perform some algebra to solve the inequality: batch testing helps when E(X) = 9 – 8×(1–p)^8 < 8, which requires (1–p)^8 > 1/8, and therefore p < 1 – (1/8)^(1/8) ≈ 0.2289.

s) Express your finding from the previous question in a sentence.  I ask this question because I worry that students become so immersed with calculations and derivations that they lose sight of the big picture.  I hope they’ll say something like: With a sample size of 8 people, the expected number of tests with batch testing is less than for individual testing whenever the probability that an individual has the disease is less than approximately 0.2289.
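A quick numerical check of that cutoff in Python:

```python
# Batch testing with n = 8 beats individual testing when 9 - 8(1-p)^8 < 8,
# which solves to p < 1 - (1/8)^(1/8).
p_star = 1 - (1 / 8) ** (1 / 8)
print(round(p_star, 4))  # → 0.2289
```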

Here’s a quiz question that I like to ask following this example, to assess whether students understood the main idea: The following table shows the expected value of the number of tests with batch testing, for several values of n and p:

a) Show how the value 47.15 was calculated.  b) Circle all values in the table for which batch testing is advantageous compared to individual testing.

Students should answer (a) by plugging n = 50 and p = 0.05 into the expected value formula that we derived earlier: 50 + 1 – 50×[(1–0.05)^50] ≈ 47.15.  To answer part (b), students should circle the values in bold below, because the expected number of tests is less than n, the number of people who need testing:

Here is an extension of this example that I like to use on assignments and exams: Suppose that 8 people to be tested are randomly split into two groups of 4 people each.  Within each group of 4 people, specimens are combined into a single batch to be tested.  If anyone in the batch has the disease, then the batch test will be positive, and those 4 people will need to be tested individually.  Assume that each person has probability 0.1 of having the disease, independently from person to person.  a) Determine the probability distribution of Y, the total number of tests needed.  b) Calculate and interpret E(Y).  c) Is this procedure better than batch-testing all 8 people in this case?  Justify your answer.

Some students struggle with the most basic step here, recognizing that the possible values for the total number of tests are 2, 6, and 10.  The total number of tests will be just 2 if nobody has the disease.  If one batch has nobody with the disease and the other batch has at least one person with the disease, then 4 additional tests are needed, making a total of 6 tests.  If both batches have at least one person with the disease, then 8 additional tests are needed, which produces a total of 10 tests.

The easiest probability to calculate is the best-case scenario Pr(Y = 2), because this requires that none of the 8 people have the disease: (.9)^8 ≈ 0.430.  Now students do not have the luxury of simply subtracting this from one, so they must calculate at least one of the other probabilities.  Let’s calculate the worst-case scenario Pr(Y = 10) next, which means that at least one person in each batch has the disease: (1–.9^4)×(1–.9^4) ≈ 0.118.

At this point students can determine the remaining probability by subtracting the sum of the other two probabilities from one: Pr(Y = 6) = 1 – Pr(Y = 2) – Pr(Y = 10) ≈ 0.452.  For students who adopt the good habit of solving such problems in multiple ways as a check on their calculations, they could also calculate Pr(Y = 6) as: 2×(.9^4)×(1–.9^4).  It’s easy to forget the 2 here, which is necessary because either of the two batches could be the one with the disease.

The following table summarizes these calculations to display the probability distribution of Y:

The expected value turns out to be: E(Y) = 2×0.430 + 6×0.452 + 10×0.118 ≈ 4.751 tests*.  If we were to repeat this testing procedure a large number of times, then the long-run average number of tests needed would be very close to 4.751.  This is smaller than the expected value of 5.556 tests when all eight specimens are batched together.  This two-batch strategy is better than the one-batch plan, and also better than simply conducting individual tests.  In the long run, the average number of tests is smallest with the two-batch plan.

* An alternative method for calculating this expected value is to double the expected number of tests with 4 people from our earlier derivation: 2×[4+1–4×(.9^4)] ≈ 4.751 tests.
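A short Python sketch confirms both the probability distribution of Y and the expected value:

```python
q = 0.9 ** 4                      # P(a batch of 4 people is disease-free)
pr = {2: q * q,                   # both batches clean: 2 tests
      6: 2 * q * (1 - q),         # exactly one batch positive: 6 tests
      10: (1 - q) ** 2}           # both batches positive: 10 tests
ey = sum(y * p for y, p in pr.items())
print(round(ey, 3))  # → 4.751
```

The doubling shortcut in the footnote gives the same answer: 2×[4 + 1 – 4×(0.9^4)] also equals 10 – 8×(0.9^4).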

This is a fairly challenging exam question, so I give generous partial credit.  For example, I make part (a) worth 6 points, and students earn 3 points for correctly stating the three possible values.  They earn 1 point for any one correct probability, and they also earn a point if their probabilities sum to one.  Part (b) is worth 2 points.  Students can earn full credit on part (b) by showing how to calculate an expected value correctly, even if their part (a) is incorrect.  An exception is that I deduct a point if their expected value is beyond what I consider reasonable in this context.  Part (c) is also worth 2 points, and students can again earn full credit regardless of whether their answer to part (b) is correct, by comparing their expected value to 5.556 and making the appropriate decision.

As I conclude this post, let me emphasize that I am not qualified to address how practical (or impractical) batch testing might be in our current situation with coronavirus.  My point here is that students can learn that probabilistic thinking can sometimes produce effective strategies for overcoming problems.  More specifically, the batch testing example can help students to deepen their understanding of probability rules, discrete random variables, and expected values.

This example also provides an opportunity to discuss timely and complex issues about testing for a disease when tests are scarce or expensive.  One issue is the difficulty of estimating the value of p, the probability that an individual to be tested has the disease.  In the rapidly evolving case of coronavirus, this probability varies considerably by place, time, and health status of the people to be tested.  Here are some data about estimating the probability that an individual to be tested has the disease:

• The COVID Tracking Project (here) reports that as of March 29, the United States has seen 139,061 positive results in 831,351 coronavirus tests, for a percentage of 16.7%.  The vast majority who have taken a test thus far have displayed symptoms or been in contact with others who have tested positive, so this should not be regarded as an estimate of the prevalence of the disease in the general public.  State-by-state data can be found here.
• Also as of the afternoon of March 29, the San Luis Obispo County (where I live) Public Health Department has tested 404 people and obtained 33 positive results (8.2%).  Another 38 positive test results in SLO County have been reported by private labs, but no public information has been released about the number of tests conducted by these private labs.  Information for SLO is updated daily here.
• Iceland has conducted tests much more broadly than most countries, including individuals who do not have symptoms (see here).  As of March 29, Iceland’s Directorate of Health is reporting (here) that 1020 of 15,484 people (6.6%) have tested positive for coronavirus.

Also note that the assumption of independence in the batch testing example is unreasonable if the people to be tested have been in contact with each other.  In the early days of this pandemic, one criterion for being tested has been proximity to others who have tested positive.  Another note is that the batch testing analysis does not take into account that test results may not always be correct.

Like everyone, I hope that more and more tests for coronavirus become widely available in the very near future.

P.S. For statistics teachers who are making an abrupt transition to teaching remotely, I recommend the StatTLC (Statistics Teaching and Learning Corner) blog (here), which has recently published several posts with helpful advice on this very timely topic.