#64 My first week

Many thanks to all who sent encouragement in response to last week’s post (here) about my harrowing experience with creating my first video for my students.  I’m happy to report that my first-ever week of remote teaching went well.  I promise not to turn this blog into a personal diary, but I’d like to share some reflections based on this past week.


I woke up last Monday excited and nervous for the first day of the school year.  That was a good and familiar, even comforting, feeling.  Some unfamiliar feelings followed for the rest of the day.  It was very strange not to leave my house for the first day of school, and it was also weird to realize at the end of the day that I had not changed out of my sweat pants.

I was very glad that many students showed up for my first live Zoom session at 8am on Monday.  I also appreciated that many of them turned their cameras on, so I could see their faces on the screen.  A large majority of my students are beginning their first term at Cal Poly, and they seemed eager to get started.  I was excited that these students were beginning the academic coursework of their college experience with me.

One fun thing is that the very first student to join the Zoom session turned out to have her birthday on that day.  I know this because we worked through the infamous draft lottery example (see post #9, here), in which I asked students to find their own birthday’s draft number.  This student’s birthday had draft number 1, which meant that she was born on September 14, last Monday.


I have used three different Zoom tools to interact with students:

  1. Breakout rooms provide an opportunity for students to discuss questions with each other.  For example, we used breakout rooms at the beginning of the first session for groups of 5-6 students to introduce themselves to each other.  Then we used the same breakout rooms later for students to discuss possible explanations for the apparent paradox with the famous Berkeley graduate admissions data (see post #3 here).
  2. Polls provide immediate feedback on students’ understanding (see Roxy Peck’s guest post #55 about clicker questions here).  For example, I used polls to ask students to identify variables as categorical or numerical and to indicate whether a number was a parameter or a statistic.
  3. Chat allows students to ask questions of me, and I’ve also asked them to type in responses to some questions in the chat window.  For example, students determined the median draft number for their birth month and typed their finding into the chat.

During Friday’s live Zoom session, we studied ideas related to sampling, and we worked through the Gettysburg Address activity (see post #19, Lincoln and Mandela, part 1, here).  I was apprehensive about how this activity would work remotely, but I was pleasantly surprised that it went smoothly.  I prepared a Google form in advance and pasted a link in the chat window, through which students entered the average word length in their self-selected sample of ten words from the speech.  This allowed me to see their responses in real time and paste the results into an applet (here), so we could examine a dotplot of the distribution of their sample averages.  Because a large majority of the students’ sample averages exceeded the population average of 4.3 letters per word, the resulting graph illustrated sampling bias:

I also created videos for students who could not attend the optional live sessions.  I’m even getting slightly more comfortable with making videos.  But making corrections to the auto-captioning takes a while, perhaps because the software has trouble transcribing words spoken in my peculiar voice.  Some unfortunate mis-transcriptions of what I have said include:

  • “grandmother” for “parameter”
  • “in America” for “a numerical variable”
  • “selected a tree” for “selected at random”
  • “once upon a time” for “one sample at a time”
  • “sample beans” for “sample means”

I have already given many quizzes to my students, even after just one week.  I give a quiz based on each handout, just to make sure that they are paying attention as they work through the examples, either in a live session with me or on their own or by watching a video.  I also assign an application quiz for each handout, in which students apply what they have learned to a new context.  I have also asked students to complete several miscellaneous quizzes, for example by answering questions about a Hans Rosling video on visualizing human progress (here) that I asked them to watch.  I regard these quizzes as low-stakes assessments, and I encourage students to work together on them.


I conclude this brief post by offering five take-aways from my first week of remote teaching.  I realize that none of these is the least bit original, and I suspect that none will provide any insights for those who taught remotely in the spring or started remote teaching in the fall earlier than I did.

  1. Remote teaching can be gratifying.  Rather than thinking about how much I would prefer to be in a classroom with my students and down the hall from my colleagues, I hope to concentrate on my happy discovery that interacting with students virtually can be fun.
  2. Remote teaching can be engaging.  I greatly appreciate my students’ being such good sports about answering my questions and participating in activities.  (See Kelly Spoon’s guest post #60, here, for several ideas about connecting with students online.)
  3. Asking good questions is central to helping students learn*, remotely as well as in-person.
  4. Remote teaching requires considerable preparation**.  For me, some of this preparation has involved planning when to use breakout rooms, polls, and chat.  Collecting data from students also requires more preparation than simply asking students to put their results on the board.  Writing quizzes also requires crafting the questions in the first place and then entering them into the learning management system.
  5. Remote teaching is very tiring.***  I have found the combination of having to prepare so extensively, integrate different technologies at the same time, and stare at a screen for many hours per day to be exhausting!

* You did not see this one coming, did you?

** But on the positive side of the ledger, my commute time has been reduced by nearly 100%.

*** Of course, perhaps age is a confounding variable that explains my fatigue.  Never before have I started a new school year at such an advanced age.

Here’s one more takeaway, one that I regret: I have much less time and thought to devote to this blog than I had last year.  That’s why this post is so brief and perhaps unhelpful.  As always, thanks for reading and bearing with me.

#63 My first video

I recently endured a harrowing, horrifying, humbling, even humiliating experience.  That’s right: I recorded my first video.

My first-ever online teaching experience begins today, September 14*.  In preparation, I thought I’d record a brief video to introduce myself to my students, hoping to begin the process of establishing a bit of a connection even though I’ll probably never meet these students in person.  I wanted the video to be brief, about five minutes or so.  I’ve never followed a script in class, so I did not write a script for the video, hoping that non-scripted spontaneity would make it more appealing.  But I did prepare some PowerPoint slides, partly to remember what I wanted to say, and also so the slides would occupy most of the screen with my face appearing only in a small corner.  I wanted to use Zoom to make the video, just because I like to keep things simple.  I’ve already used Zoom a bit, and I’ll be using Zoom for live sessions with my students this fall.

* This is the same date that was selected first and received draft number 1 in the infamous 1970 draft lottery.  In post #9 (here), I describe a class activity that illustrates statistical thinking by analyzing those lottery results.

So, I entered the room that now serves as my home office, started my computer, opened Zoom, launched a new meeting, shared my screen, put my PowerPoint file in presentation mode, looked into the camera, pressed the record button, and started talking to myself …

I finished about seven-and-a-half minutes later, only 50% beyond my target time of five minutes*.  I waited for Zoom to produce the recording, and then I eagerly pressed the play button.  This is when the experience turned harrowing.

* Post #28 (here) pertains to my pervasive pet-peeve involving student misunderstandings of percentage differences.


I really don’t like watching myself on a screen, but I understand that many people feel this way about themselves, and Zoom use over the past six months has somewhat inured me to this unpleasant feeling.  That wasn’t the harrowing part.

Those of you who know me, or have heard me give presentations, can probably anticipate that I found the horrifying part to be listening to my voice.  For those of you who have never heard me: I have a very unusual and peculiar* speaking voice.  It doesn’t sound nearly as odd to me in real life as it does on a recording.  After listening to just the first few seconds of the Zoom recording, I was overcome by a desire to apologize to everyone who’s ever had to listen to me – students, colleagues, friends, wife, cats, …  I only hope that this is something that you get used to and barely notice after a while.

* Friends use the word distinctive here to spare my feelings.

To be more specific, my voice tends to rise rather than fall at the end of sentences.  This vocal pattern is sometimes referred to as “upspeak.”  This is apparently a serious topic of research inquiry among linguists, and a Google search will provide a lot of information, references, and advice about upspeak.  My favorite anecdote about this phenomenon is that novelist Richard Russo invested one of his characters with upspeak in his delightful satire of academic life Straight Man.  Russo’s main character, the reluctant chairman of a college’s English department, describes the speaking voice of the department secretary as follows: Most of Rachel’s statements sound like questions.  Her inability to let her voice fall is related to her own terrible insecurity and lack of self-esteem.  To emphasize this aspect of her speaking voice, Russo uses a question mark at the end of Rachel’s sentences throughout the book?*

* Yes, I used that punctuation on purpose to demonstrate Russo’s technique.

In case you’re wondering whether I’m exaggerating about my own upspeaking, I’ll point out that during conference and workshop presentations, I often ask those in attendance to guess where I’m from.  Just asking the question is usually good for a laugh, as people realize that I am acknowledging my unusual vocal inflections, and they’re often curious to know the answer.  Common guesses often include Ireland, Scotland, Scandinavia, Canada, and the upper Midwest.  None of those is correct*.  I believe that my peculiar voice is more of an individual quirk than a regional dialect.

* I will reveal the answer later in this post.


After I overcame my revulsion at hearing my own voice enough to get back to work on my first video, I made and discarded several attempts due to mis-speakings and awkward pauses and the like.  Then as I went through the fifth take, I thought I had a keeper.  I successfully avoided the mis-speakings and pauses.  I was saying what I wanted to say in a reasonable manner.  As I got to the end, I was almost looking forward to playing it back to confirm that this would be the final take, the one to be posted for my students.  It probably would have been, except for one flaw: I realized to my horror that I had been sharing and recording the wrong screen!  I was sharing and recording my laptop screen rather than my monitor screen*, which was the one with the PowerPoint presentation!

* I’ve actually used just a laptop for the past 20 years until recently.  Seeing that I would need to teach online in the fall, my wife very kindly bought me a new monitor a few months ago.  As this story reveals, I’m still getting used to it.

A few takes later, I again thought I had a keeper, and I was certain that I had shared and recorded the correct screen this time.  I was feeling very proud of myself, downright excited as I got to the last slide, in which I thanked students for taking the time to watch my first video.  But then …  My brain completely froze, and I couldn’t find the button to stop the recording!  I don’t know whether the Zoom control bar was hidden behind the PowerPoint presentation or behind some other application or what, but I flailed about for a full 30 seconds, muttering to myself (and, of course, to the microphone) the whole time.  I know this should be no big deal; it can’t be hard to edit out those last 30 seconds, but I didn’t know how to do that*!

* Now I wish that I had kept all of these outtakes.  But I didn’t realize at the time that there would be so many, or that the experience would make such an impact on me that I would write a full, self-indulgent blog post about it.

I know that none of this was Zoom’s fault, but at this point I decided to learn the basics and record the next few takes with Screencast-o-matic.  These actually went fairly well, and it only took a few more takes to end up with the final version that I posted for my students.  Altogether, I spent many, many hours making a 7.5-minute video.


Just for fun, let me show you some of the slides from my first video presentation.  I start by telling students where I’m from and pointing out that I slowly ventured a bit farther from home as I went to college and then graduate school and then my first teaching position:

I also wanted to let students know that while I am a very experienced teacher of statistics, I am a complete novice when it comes to teaching online courses:

To reveal a more personal side, I told students about some of my hobbies, along with some photos:

I have mentioned before (see posts #25 and #26 here and here) that I give lots of quizzes to my students.  I plan to do that again with my online course this fall.  In fact, I suspect that very frequent quizzes will be all the more useful in an online setting for helping to keep students on task, indicating what they should be learning, and providing them with feedback on their progress.  I even decided to give them a quiz based on my self-introduction video.  This is an auto-graded, multiple-choice quiz administered in our course management system Canvas.  I expect this quiz to provide students with easy points to earn, because all of the answers appear in the video, and they can re-watch the video after they see the quiz questions.  Here are the questions:

  1. In which state did I live for the first 39 years of my life? [Options: Arizona, California, Hawaii, Mississippi, Pennsylvania]
  2. How many states have I been in? [Options: 1, 13, 47, 50]
  3. What kind of pets have I had? [Options: Birds, Cats, Dogs, Fish, Snakes]
  4. Which of the following is NOT the name of one of my pets? [Options: Cosette, Eponine, Punxsutawney Phil, Puti]
  5. What is the name of my fantasy sports teams? [Options: Cache Cows, Domestic Shorthairs, Markov Fielders, Netminders, Sun Cats]
  6. For how many years have I been at Cal Poly? [Options: 2, 19, 31, 58]
  7. How much experience do I have with online teaching? [Options: None, A little, A lot]
  8. What was my primary project while on leave from Cal Poly for the past academic year? [Options: Playing online games, Proving mathematical theorems, Reading mystery novels, Starting a business, Writing a blog]
  9. What is my teaching philosophy? [Options: Ask good questions, Insist on perfection, Learn by viewing, Rely on luck]
  10. Am I funny? [Option: Well I try to be but I may not succeed often]

So, how did you do?  The correct answers are: Pennsylvania, 47 (all but Arkansas, Mississippi, North Dakota), Cats, Punxsutawney Phil, Domestic Shorthairs, 19, None, Writing a blog, Ask good questions, Well I try to be but I may not succeed often.


P.S. If you would like to watch my first video for yourself, please bear in mind my warning about the peculiarity of my speaking voice.  But if that does not dissuade you, the video can be found here.

#62 Moral of a silly old joke

I have always liked this silly old joke, which I first heard decades ago:

A man takes his dog to see a talent scout, proudly claiming that his dog can talk.  Of course, the talent scout is very skeptical. To convince her, the man asks the dog: What’s on top of a house?  The dog eagerly responds: “Roof, roof!”  The unimpressed talent scout rolls her eyes and tells the man to leave.  The man seizes a second chance and asks the dog: How does sandpaper feel?  The dog gleefully responds: “Rough, rough!”  The scout gets out of her chair and moves to escort the man out of her office.  Begging for one last chance, the man asks the dog: Who was the greatest baseball player of all time?  The dog enthusiastically responds: “Ruth, Ruth!”  The fed-up talent scout removes the man and dog from her office.  Out in the hallway, looking up at the man with a confused and crestfallen expression on his face, the dog says: “DiMaggio?”

Part of why I like this joke is that “DiMaggio?” strikes me as the perfect punch line.  I have seen versions of the joke in which the dog says: “Maybe I should have said DiMaggio?”  I don’t think that’s as funny as the single-word response.  I also don’t think the joke would work nearly as well with Mays* or Aaron or Williams or Trout as the punch line, because those names are so much easier to pronounce than DiMaggio**.

* Joe Posnanski, from whom I have copied this footnoting technique that he calls a Pos-terisk, ranks Willie Mays as the only baseball player better than Babe Ruth (here).

** A name that works nearly as well is Clemente.  Having grown up in western Pennsylvania in the 1960s and 1970s, my favorite baseball player will always be Roberto Clemente.


What in the world does this have to do with teaching statistics, which is the whole point of this blog?!

Please forgive me, as I’m a bit out of practice with writing blog posts*.  Now I will try to connect this silly old joke to the whole point of this blog.

* I again thank the nine guest bloggers who contributed posts during my hiatus in July and August.  If you missed any of these posts, please check them out from the list here.

Please consider: What is the moral of this joke?  Let me rephrase that: What did the man do wrong?  Or, to put this in a more positive light: What should the man have done differently?

I’ll give you a hint, as I often do with my own students: The answer that I’m fishing for contains three words.  Want another hint?  Those three words contain a total of 16 letters.  One more hint? The first word has the fewest letters (3), and the last word has the most letters (9).

All right, I’ve dragged this on long enough.  I suspect that you’ve figured out what I think the moral of this silly old joke is.  In order to achieve his (and his dog’s) lifelong dream, all the man needed to do was: Ask good questions.

That’s where the man messed up, right?  His obvious mistake was asking questions for which the answers correspond so well with sounds that an ordinary dog makes.  The man’s incredibly poor choice of questions prevented the dog from demonstrating his remarkable ability.


I repeat: What does this have to do with teaching statistics?!  I suspect that my moral is abundantly clear at this point, but please allow me to summarize:

  • To help our students learn, we need to ask good questions. 
  • To enable our students to demonstrate what they can do, we need to ask good questions. 
  • To empower our students to achieve their potential, we need to ask good questions.

I said in my very first post (see question #8 here) that these three words capture whatever wisdom I may have to offer for teachers of statistics: Ask good questions.  I tried to provide many specific examples over the next 51 posts (here).  That is the whole point of this blog.  I think that’s how we teachers should focus most of our time, effort, and creativity.  Whenever I start to forget this, for example when I momentarily succumb to the temptation to believe that it’s more important to master intricacies of Canvas or Zoom or Powerpoint or Camtasia or Flipgrid or Discord or LockDown Browser or Github or even R, I remember the moral of a silly old joke.


P.S. My professional leave for the 2019-2020 academic year has come to an end, and I am preparing to return to my full-time teaching role*.  I’m hoping to find time to resume writing weekly blog posts, because I greatly enjoy this and hope that these essays have some value.  But I won’t have nearly as much time to devote to blogging for the next nine months, so I’ll need to make the essays shorter or fewer.  Please stick around, and we’ll see how it goes.  For the month of September, I ask for your indulgence as I write some short and unusual blog posts that are less directly applicable to teaching statistics than my typical essays.  As always, thanks very much for reading!

* Our fall classes at Cal Poly will begin on Monday, September 14.  I’ll be teaching online for the first time in my 30+-year career.  Wish me luck!

P.P.S. Thanks to Julie Clark for providing a photo of her dog Tukey. As far as I know, this Tukey cannot talk, but I would not bet against him being able to draw boxplots.

#61 Text as data

This guest post has been contributed by Dennis Sun.  You can contact Dennis at dsun09@calpoly.edu

Dennis Sun is a colleague of mine in the Statistics Department at Cal Poly. He teaches courses in our undergraduate program in data science* as well as statistics. Dennis also works part-time as a data scientist for Google. Dennis is a terrific and creative teacher with many thought-provoking ideas. I am very glad that he agreed to write this guest post about one aspect of his introductory course in data science that distinguishes it from most introductory courses in statistics.

* My other department colleague who has taught for our data science program is Hunter Glanz, who has teamed with Jo Hardin and Nick Horton to write a blog about teaching data science (here).


I teach an “Introduction to Data Science” class at Cal Poly for statistics and computer science majors. Students in my class are typically sophomores who have at least one statistics course and one computer science course under their belt. In other words, my students arrive in my class with some idea of what statistics can do and the programming chops to execute those ideas. However, many of them have never written code to analyze data. My course tries to bring these two strands of their education together.

Of course, many statisticians write code to analyze data. What makes data science different? In my opinion, one of the most important aspects is the variety of data. Most statistics textbooks start by assuming that the data is already in tabular form, where each row is an observation and each column is a variable. However, data in the real world comes in all shapes and sizes. For example, an audio file of someone speaking is data. So is a photograph or the text of a book. These types of data are not in the ready-made tabular form that is often assumed in statistics textbooks. In my experience, there is too much overhead involved to teach students how to work with audio or image data in an introductory course, so most of my non-standard data examples come from the world of textual data.


I like to surprise students with my first example of textual data: Dr. Seuss books. Observations in this “dataset” include:

  1. “I am Sam. I am Sam. Sam I am….”
  2. “One fish, two fish, red fish, blue fish….”
  3. “Every Who down in Whoville liked Christmas a lot….”

and so on. To analyze this data using techniques they learned in statistics class, it first must be converted into tabular form. But how?

One simple approach is a bag of words. In the bag of words representation, each row is a book (or, more generally, a “document”), and each column is a word (or, more generally, a “term”). Each entry in the table is a frequency representing the number of times a term appears in a document. This table, called the “term-frequency matrix,” is illustrated below:

The resulting table is very wide, with many more columns than rows and most entries equal to 0. Can we use this representation of the data to figure out which documents are most similar? This sparks a class discussion about how and why a data scientist would do this.
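Here is a minimal sketch, in Python (the language used later in this post for the Federalist Papers lab), of how such a term-frequency matrix can be built from the three snippets above. The document titles, helper function, and variable names are illustrative only, not code from the course:

```python
from collections import Counter

import pandas as pd

# The three "documents" quoted above
documents = {
    "Green Eggs and Ham": "I am Sam. I am Sam. Sam I am.",
    "One Fish Two Fish": "One fish, two fish, red fish, blue fish.",
    "Grinch": "Every Who down in Whoville liked Christmas a lot.",
}

def bag_of_words(text):
    """Lowercase the text, strip punctuation, and count how often each word appears."""
    letters_only = "".join(ch if ch.isalpha() or ch.isspace() else " " for ch in text.lower())
    return Counter(letters_only.split())

# Term-frequency matrix: one row per document, one column per word, 0 where a word is absent
tf = pd.DataFrame({title: bag_of_words(text) for title, text in documents.items()}).T
tf = tf.fillna(0).astype(int)
print(tf)
```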

How might we quantify how similar two documents are? Students usually first propose calculating some variation of Euclidean distance. If xi represents the vector of counts in document i, then the Euclidean distance between two documents i and j is defined as the square root of the sum of squared differences in their term counts: d(xi, xj) = √[(xi1 − xj1)² + (xi2 − xj2)² + … + (xim − xjm)²], where m is the number of terms (columns).

This is just the formula for the distance between two points that students learn in their algebra class (and is essentially the Pythagorean theorem), but the formula is intimidating to some students, so I try to explain what is going on using pictures. If we think of xi and xj as vectors, then d(xi, xj) measures the distance between the tips of the arrows.

For example, suppose that the two documents are:

  1. “I am Sam. I am Sam. Sam I am.”
  2. “Why do I like to hop, hop, hop? I do not know. Go ask your Pop.”

and the words of interest are “Sam” and “I.” Then the two vectors are x1 = (3,3) and x2 = (0,2), because the first document contains 3 of each word, and the second includes no “Sam”s and two “I”s.  These two vectors, and the distance between them, are shown here:
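As a quick check of the arithmetic behind that picture, here is a tiny sketch (my own illustration of the example above) computing the distance between the two count vectors:

```python
import numpy as np

x1 = np.array([3, 3])   # "I am Sam. I am Sam. Sam I am."  -> 3 "Sam"s, 3 "I"s
x2 = np.array([0, 2])   # the hop/Pop document             -> 0 "Sam"s, 2 "I"s

# d(x1, x2) = sqrt((3 - 0)^2 + (3 - 2)^2) = sqrt(10)
print(np.sqrt(np.sum((x1 - x2) ** 2)))   # 3.162...
```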

At this point, a student will usually observe that the frequencies scale in proportion to the length of the document. For example, the following documents are qualitatively similar:

  1. “I am Sam.”
  2. “I am Sam. I am Sam. Sam I am.”

yet their vectors are not particularly close, since one vector is three times the length of the other:

How could we fix this problem?  There are several ways. Some students propose making the vectors the same length before comparing them, while others suggest measuring the angles between the vectors. What I like about this discussion is that students are essentially invoking ideas from linear algebra without realizing it or using any of the jargon. In fact, many of my students have not taken linear algebra yet at this point in their education. It is helpful for them to see vectors, norms, and dot products in a concrete application, where they arise naturally.
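To make the two student suggestions concrete, here is a short sketch (again my own illustration, using the "I am Sam" documents above) comparing the raw distance, the distance after rescaling each vector to length one, and the cosine of the angle between the vectors:

```python
import numpy as np

x1 = np.array([1, 1])   # "I am Sam."                      -> 1 "Sam", 1 "I"
x2 = np.array([3, 3])   # "I am Sam. I am Sam. Sam I am."  -> 3 "Sam"s, 3 "I"s

# Raw Euclidean distance is large even though the documents are qualitatively similar
print(np.linalg.norm(x1 - x2))                              # 2.83

# Fix 1: rescale each vector to length 1 before measuring the distance
u1 = x1 / np.linalg.norm(x1)
u2 = x2 / np.linalg.norm(x2)
print(np.linalg.norm(u1 - u2))                              # 0.0 -- same direction

# Fix 2: measure the angle between the vectors (via the cosine of the angle)
print(x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2)))  # 1.0 -- angle of 0 degrees
```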

Why would anyone want to know how similar two documents are? Students usually see that such a system could be used to recommend books: “If you liked this, you might also like….”* Students also suggest that it might be used to cluster documents into groups**. However, rarely does anyone suggest the application that I assign as a lab.

* This is called a “recommender system” in commercial applications.

** Indeed, a method of clustering called “hierarchical clustering” is based on distances between observations.


We can use similarity between documents to resolve authorship disputes. The most celebrated example concerns the Federalist Papers, first analyzed by statisticians Frederick Mosteller and David Wallace in the early 1960s (see here). Yes, even though the term “data science” has only become popular in the last 10 years, many of the ideas and methods are not new, dating back over 50 years. However, whereas Mosteller and Wallace did quite a bit of probability modeling, our approach is simpler and more direct.

The Federalist Papers are a collection of 85 essays penned by three Founding Fathers (Alexander Hamilton, John Jay, and James Madison) to drum up support for the new U.S. Constitution.* However, the essays were published under a pseudonym “Publius.” The authors of 70 of the essays have since been conclusively identified, but there are still 15 papers whose authorship is disputed.

* When I first started using this example in my class, few students were familiar with the Federalist Papers. However, the situation has greatly improved with the immense popularity of the musical Hamilton.

I give my students the texts of all 85 Federalist papers (here), along with the authors of the 70 undisputed essays:

Their task is to determine, for each of the 15 disputed essays, the most similar undisputed essays. The known authorships of these essays are then used to “vote” on the authorship of the disputed essay.

After writing some boilerplate code to read in and clean up the texts of the 85 papers, we split each document into a list of words and count up the number of times each word appears in each document. My students implement this in the programming language Python, which is a general-purpose language that is particularly convenient for text processing, but the task could be carried out in any language, including R.

Rare context-specific words, like “trembling,” are less likely to be a marker of a writer’s style than general words like “which” or “as,” so we restrict attention to the 30 most common words. We also normalize the vectors to have the same length so that distances are invariant to the length of the document. We end up with a table like the following:
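A rough sketch of those preprocessing steps in Python is below. The file layout (one plain-text file per paper) and the variable names are illustrative assumptions, not the actual lab code:

```python
import glob
from collections import Counter

import numpy as np
import pandas as pd

# Read the texts, assuming one plain-text file per paper, e.g. federalist/paper_18.txt
texts = {}
for path in sorted(glob.glob("federalist/paper_*.txt")):
    number = int(path.split("_")[-1].split(".")[0])
    with open(path, encoding="utf-8") as f:
        texts[number] = f.read()

def word_counts(text):
    """Split a document into lowercase words and count how often each appears."""
    cleaned = "".join(ch if ch.isalpha() or ch.isspace() else " " for ch in text.lower())
    return Counter(cleaned.split())

# Term-frequency matrix: one row per paper, one column per word
counts = pd.DataFrame({n: word_counts(t) for n, t in texts.items()}).T.fillna(0)

# Keep only the 30 most common words overall (style markers like "which" and "as")
top30 = counts.sum().nlargest(30).index
tf = counts[top30]

# Rescale each row to length 1 so distances don't depend on document length
tf = tf.div(np.sqrt((tf ** 2).sum(axis=1)), axis=0)
```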

Now, let’s look at one of the disputed papers: Federalist Paper #18. We calculate the Euclidean distance between this document and every other document:

Of course, the paper that is most similar to Paper #18 is … itself. But the next few papers should give us some useful information. Let’s grab the authors of these most similar papers:

Although the second closest paper, Paper #19, is also disputed (which is why its author is given as the missing value NaN), the third closest paper was definitively written by Madison. If we look at the 3 closest papers with known authorship, 2 were written by Madison. This suggests attributing Paper #18 to Madison.
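In code, continuing with the hypothetical tf table from the sketch above (and a Series of known attributions read from an illustrative file), the nearest-neighbor step looks roughly like this:

```python
# Known attributions: hypothetical CSV with columns "paper" and "author",
# containing only the 70 undisputed papers
authors = pd.read_csv("federalist_authors.csv", index_col="paper")["author"]

# Euclidean distance from Paper #18 to every paper, sorted from closest to farthest
distances = np.sqrt(((tf - tf.loc[18]) ** 2).sum(axis=1)).sort_values()

# Drop Paper #18 itself, keep the closest papers whose author is known, and vote
neighbors = distances.drop(18).index
known = [n for n in neighbors if n in authors.index][:3]
print(authors[known].mode().iloc[0])   # majority author among the 3 nearest undisputed papers
```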

What the students just did is machine learning—training a K=3-nearest neighbors classifier on the 70 undisputed essays to predict the authorship of Paper #18 — although we do not use any of that terminology. I find that students rarely have trouble understanding conceptually what needs to be done in this concrete problem, even if they struggle to grasp more abstract machine learning ideas such as training and test sets. Thus, I have started using this lab as a teaser for machine learning, which we study later in the course.


Next I ask students: How could you validate whether these predictions are any good? Of course, we have no way of knowing who actually wrote the disputed Federalist Papers, so any validation method has to be based on the 70 papers whose authorship is known.

After a few iterations, students come up with some variant of the following: for each of these 70 papers, we can find the 3 closest papers among the other 69 papers. Then, we can compare the prediction based on these 3 closest papers against the known author of the paper, producing a table like the following:
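A sketch of that validation loop, still using the illustrative tf and authors objects from the earlier sketches:

```python
predictions = {}

for paper in authors.index:                  # the 70 papers with known authors
    others = authors.index.drop(paper)       # leave the paper itself out
    dists = np.sqrt(((tf.loc[others] - tf.loc[paper]) ** 2).sum(axis=1))
    nearest3 = dists.nsmallest(3).index
    predictions[paper] = authors[nearest3].mode().iloc[0]   # vote of the 3 nearest

predictions = pd.Series(predictions)

# Confusion matrix: actual author (rows) versus predicted author (columns)
print(pd.crosstab(authors, predictions, rownames=["actual"], colnames=["predicted"]))
```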

In machine learning, this table is known as a “confusion matrix.” From the confusion matrix, we try to answer questions like:

  1. How accurate is this method overall?
  2. How accurate is this method for predicting documents written by Madison?

Most students assess the method overall by calculating the percentage of correct (or incorrect) predictions, obtaining an accuracy of 67/70 ≈ 96%.

However, I usually get two different answers to the second question:

  • The method predicted 15 documents to be written by Madison, but only 13 were. So the “accuracy for predicting Madison” is 13/15 ≈ 87%.
  • Madison actually wrote 14 of the documents, of which 13 were identified correctly. So the “accuracy for predicting Madison” is 13/14 ≈ 93%.

Which answer is right? Of course, both are perfectly valid answers to the question. These two different interpretations of the question are called “precision” and “recall” in machine learning, and both are important considerations.
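Both quantities can be read directly off the confusion matrix; in code (continuing the sketch above), they differ only in the denominator:

```python
cm = pd.crosstab(authors, predictions)
madison_correct = cm.loc["Madison", "Madison"]

precision = madison_correct / cm["Madison"].sum()      # divide by papers *predicted* as Madison (13/15 in the table above)
recall    = madison_correct / cm.loc["Madison"].sum()  # divide by papers *actually by* Madison  (13/14 in the table above)
```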

One common mistake that students make is to include paper i itself as one of the three closest papers to paper i. They realize immediately why this is wrong when it is pointed out. If we think of our validation process as an exam, it is like giving away the answer key! This provides an opportunity to discuss ideas such as overfitting and cross-validation, again at an intuitive level, without using jargon.*

* The approach of finding the closest papers among the other 69 papers is formally known as “leave-one-out cross validation.”


I have several more labs in my data science class involving textual data. For example, I have students verify Zipf’s Law (learn about this from the video here) for different documents. A student favorite, which I adapted from my colleague Brian Granger (follow him on twitter here), is the “Song Lyrics Generator” lab, where students scrape their favorite artist’s song lyrics from the web, train a Markov chain on the lyrics, and use the Markov chain to generate new songs by that artist. One of my students even wrote a Medium post (here) about this lab.
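For readers curious what “training a Markov chain on the lyrics” can amount to at this level, here is a bare-bones word-level sketch (my own illustration, not the lab code): each new word is drawn at random from the words that followed the current word in the training text.

```python
import random
from collections import defaultdict

def train_markov_chain(lyrics):
    """Map each word to the list of words that follow it in the training text."""
    words = lyrics.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=20):
    """Generate text by repeatedly sampling a follower of the current word."""
    word, output = start, [start]
    for _ in range(length - 1):
        if not followers[word]:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

chain = train_markov_chain("I am Sam I am Sam Sam I am")   # toy "lyrics"
print(generate(chain, start="I"))
```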

Although I am not an expert in natural language processing, I use textual data often in my data science class, because it is both rich and concrete. It has just enough complexity to stretch students’ imaginations about what data is and can do, but not so much that it is overwhelming to students with limited programming experience. The Federalist Papers lab in particular intersects with many technical aspects of data science, including linear algebra and machine learning, but the concreteness of the task allows us to discuss key ideas (such as vector norms and cross-validation) at an intuitive level, without using jargon. It also touches upon non-technical aspects of data science, including the emphasis on prediction (note the conspicuous absence of probability in this blog post) and the need for computing (the texts are long enough that the term frequencies are not feasible to count by hand). For students who know a bit of programming, this provides them with an end-to-end example of how to use data to solve real problems.

#60 Reaching students online

This guest post has been contributed by Kelly Spoon.  You can contact her at kspoon@sdccd.edu.

Kelly Spoon teaches statistics in San Diego, at a two-year college (San Diego Mesa College) and for an AP Statistics class (Torah High School of San Diego).  I met Kelly through twitter (@KellyMSpoon), where she shares lots of ideas about teaching statistics and mathematics, and at the AMATYC conference in Milwaukee last fall.  Kelly has since hosted me to give a workshop for her colleagues in San Diego and to conduct a review session for her AP Statistics students via zoom.  Kelly is very passionate about teaching statistics, dedicated to helping all students succeed, and knowledgeable about content and pedagogy.  I am very glad that she agreed to contribute this guest blog post on the very timely topic of teaching statistics (and reaching students) online*.

* Speaking of timely, my first taste of online teaching will begin three weeks from today.


When Allan asked if I would write a guest blog post, I didn’t hesitate to email back with an emphatic yes. Not only because I owe him for presenting to faculty at my college AND doing a review for my AP Statistics students, but because I’m always excited to share my passion for teaching statistics.

Then the actual writing started, and I immediately regretted this decision. There’s just too much to share in such a short space. In the end, I wrote an entirely too long blog post for which Allan suggested some minor edits to fit a theme of fearlessness. I asked myself: What does it mean to teach fearlessly?

To me, the broadest definition is a willingness to thoughtfully try new things – whether tools, policies, assessments, or formats. And at this point, most of us fit that definition by the circumstances of distance learning that have been thrust upon us. Now that I’m a week into a new completely online semester, my previous draft felt like it was missing what most of us want to know right now: How do we teach statistics online?

After having a successful first week of the new fall term that mostly gave me energy rather than leaving me feeling drained (as most of last spring’s emergency remote classes did), I thought I’d share some insights as to how I made that first week work for me. To keep with the theme of this blog, these insights are presented as answers to questions that you might ask yourself as you’re designing your online statistics course. I hope these questions are generic enough to stand the test of time to remain relevant when we’re back in a classroom.


1. Cultivating curiosity

Knowing where you want to end up (your desired outcomes) is crucial when designing a course or individual lesson, but the starting point is sometimes overlooked. As you think about your course, whether you’re meeting in person, on Zoom, or you don’t have scheduled meetings, ask yourself: Does my lesson plan make students want to learn more?

This is where Allan’s blog comes in handy. He has many great examples of good questions that truly spark curiosity, often without requiring a deep understanding of the subject matter to start. However, simply including good questions in a lecture allows students to opt out and wait for the professor or another student to do the thinking for them. Simulation-based inference and the many awesome applets that exist in that same vein are one great way to build curiosity for theory-based inference. Regardless of class modality, one of my favorite tools for sparking curiosity is the activity builder in Desmos.

If you haven’t tried out the Desmos Activity Builder (here), you’re missing out. This one tool can answer questions such as: How do I do activities if I’m teaching online? What if I want to assign activities as homework? What if I don’t want to buy Hershey’s Kisses to make confidence intervals for the proportion that land on their base? The Desmos Activity Builder allows you to add math, graphs, tables, video, images, and text to slides for students to work through. You can have students input math, graphs, tables, and text; answer multiple-choice and multiple-selection questions; reorder selections; and even doodle on a graph or image. That was quite the list. See the image below for a visual of all the things you can add to an activity in Desmos:

On the instructor end, you can see exactly where students are (so it’s great to use if you’re meeting with students at a particular time, which we all now know is called synchronous) – I use this to pause the activity and debrief when most students have reached a particular point, or to nudge those students who seem to be stalled. You can also see student work in real time and provide them feedback directly in the activity. And many activities have been designed to combine data from across the entire class, allowing you to recreate some favorite in-person activities in an online space.

Here are a few Desmos activities that I’ve created, used, or plan to use to build curiosity:

a) Reading Graphs (here)

This activity was inspired by a workshop on culturally responsive teaching. These graphs and questions appear in my lecture notes before we discuss displays for data. Typically, I have students work in groups of four to answer all of the questions for their graph. Then we do a numbered-head protocol (they number themselves 1-4, and I use a random number generator on the projector to choose a victim to report out) to debrief the activity.  I show them that they already know most everything in that section of the lecture notes, with the added bonus of being able to bring in topical graphs*, including ones on social justice issues. For my asynchronous classes, students go through this activity on their own but can see other student responses once they share. For my synchronous class, I occasionally “pause” the activity to discuss some of the responses to a particular graph.  For instance, the following bar chart of children in poor families leads to so many more questions than answers: What defines a family as poor? Are the observational units the children or the families? Does it matter? What if the parents have different education levels? Where are the other 8%?!

* Please ignore the Titanic mosaic plot; I really haven’t found a better one.

b) Skew the Script – Lesson 1.1 (here)

I just found this activity, despite being a longtime fangirl of @AnkerMath on twitter. Skew the Script (here) has a great curriculum created for AP Statistics with student and instructor resources that strive to make the content relevant. It focuses on using real-world examples and creating equity-driven lessons. This particular exercise has students analyze and describe the problems with a lot of bad graphs. I plan on starting off the 2nd week with this one! I’ll tweet how it goes.

c) Does Beyoncé Write Her Own Songs? (here)

This activity is taken entirely from StatsMedic (here) and adapted for Desmos by Richard Hung (find him on twitter here). StatsMedic is built on a framework of “experience first, formalize later” (EFFL), so their entire curriculum – which they provide for free on their site – is inherently designed to build curiosity. For this particular activity, I’ve edited it a bit to bring in some Allan-inspired questions, like identifying observational units and variables (see post #11, Repeat after me, here). This activity is a variation of Allan’s Gettysburg Address activity (see post #19, Lincoln and Mandela, part 1, here) or the Random Rectangles* activity, and is great for building understanding of sampling bias, random sampling, and sampling distributions.

* I first did the Random Rectangles activity in a workshop conducted by Roxy Peck; it apparently originated in Activity-Based Statistics by Scheaffer et al.

I believe lectures inherently kill curiosity – even a lecture with questions interspersed for this purpose. Students know that eventually you will tell them the answer, and many will sit and wait until someone else does the work. At least in my flipped classroom, these types of activities incentivize my students to go watch those lectures by making them curious enough to want to know more. As a bonus, I can keep referring back to that tangible activity: Remember when you placed your magnet on the class dotplot in the Random Rectangles activity?


2. Building a collaborative and safe learning environment

So, we can present good questions or well-designed activities to ignite that sense of wonder in our students, but we also need the students to feel connected to each other and to us as educators, especially in an online environment. That brings me to my next question: Am I providing opportunities for students to connect with and learn from one another?

In a traditional classroom, these opportunities may happen organically. Students may chat before class or set up study groups, even if our classes don’t explicitly carve out time for collaboration. In an online class, these moments need to be constructed and provided for students. 

Using Google Slides with breakout rooms in Zoom is my go-to for collaboration between students in an online environment. For those of you unfamiliar with Google Slides, they are essentially Google’s version of PowerPoint. The bonus is that you can get a shareable link that allows anyone to edit the slides – even if they don’t have a Google account! They just have to click the link, and then they are editing simultaneously. My typical setup is to create a slide for each group within one shared presentation. The slides contain the instructions about what the students should add to the slide to complete the activity. Here are a few of the activities I’ve already used in class:

a) Personality Coordinates

This activity is an ice-breaker – before you roll your eyes, let me finish! – where students put their names on four points and then have to work together to label the X and Y axes. I personally can tolerate this particular ice-breaker because it serves as a needed review of a coordinate plane that I can reference again when we start looking at scatterplots. You can read more about this activity where I originally found out about it on Dan Meyer’s blog (here).

In the image below, you’ll see circles representing students on the slides of the presentation, and the highlighted areas are what students are working on.  Slides make it easy to check at a glance that students are making progress and let you know which groups you should check in on.  There’s even a comment feature, so you can provide quick feedback without being too intrusive.  If you want to know more about how I ran this activity, check out this twitter thread (here), where I provide the links to the slidedeck and instructions I presented before putting students in breakout rooms.

b) Sampling IRL

This particular activity is a discussion board in my fully online asynchronous class. However, in my synchronous class that meets on Zoom, I saved myself a lot of grading by creating a slide deck in the same vein. On day 1, students worked with a group to fill in a slide with how they would attempt to collect a sample from a given population (students at my college, students at all area community colleges, Starbucks customers, adults in our city).

Based on timing, the second half of this activity happened on the following day, which also allowed me to reformat the slides and add new questions. On Day 2, I moved each breakout room to a new slide and they had to answer two questions:

  1. Could you follow the sampling scheme that the group laid out? If not, what is unclear?
  2. Are there any groups of people who might be left out based on their sampling scheme? Who are they? What type of people from the population will be under/over represented?

In this particular example, I didn’t reinvent anything; I just took an existing prompt and turned it into a collaborative activity by having students answer these questions in groups. And again, the added bonus was that I only needed to grade 8 slides as opposed to 32 discussion posts!

I have loved using this type of activity in my classes. Previously I did a lot of similar activities in face-to-face classes using giant post-its or the whiteboards around the classroom. I do like that Google Slides allows these contributions to be saved so that we can come back to them later. Here are some things I’ve found that help this run smoothly:

  • Provide roles for the breakout rooms – students don’t have to use them, but it sets expectations. You can see my slide with roles below:
  • Emphasize that someone must share their screen in the breakout rooms. I say this at least three times before opening breakout rooms and then broadcast it to all breakout rooms a few minutes in.
  • Aim for twenty minutes as the sweet spot in terms of length.
  • Monitor progress on the slides, and use the comments to give quick feedback.
  • Join each breakout room to check that all members are contributing.
  • Make your instructions the background image, so students don’t accidentally delete the stuff they need.
  • Know how to access version history, in case a student deletes a slide or encounters an equally devastating problem.
  • If you want to run an activity that requires more than one slide per group, use a slide as a landing page (shared as view only) with the edit links to all the group slides:
  • If you’re using Canvas, you can create a Google Cloud assignment (see a video here) to assign the slides to students who missed class. 

3. Connecting with students

Another key to student success is that students feel a connection to you. That brings us to my third question: How can I ensure that students feel connected to me?

For me, it’s about sharing things I’m interested in. I tried a “liquid syllabus” (see here) this semester rather than my traditional welcome letter, but they both contain the same information that is missing from a traditional syllabus:

  • A section about me and my extracurricular interests – which I try to keep varied so that each student might see some small thing we have in common.
  • My teaching philosophy.
  • What a typical week looks like in our course.

I also respond to each student’s introduction in my asynchronous classes. On our first quiz of the semester, I ask all of my students to ask one question about the course, statistics, or myself and tell me something about themselves. I make sure to respond to each and every one. Yes, my first week of classes is a challenge, but I find that connection pays off later. And it never hurts to interject something you’re passionate about into your lectures and examples – much like Allan, most of my examples are about cats (see blog post #16, Questions about cats, here), and my Canvas pages are adorned with cats too.


4. Creating a safe place for mistakes

If you creep on my welcome site for students, you will see this section: “My course is built on the idea that we aren’t perfect the first time we do something and those mistakes are how we improve and learn. Every assignment (with the exception of exams) can be redone after you receive some guidance from me on how to improve it. There are multiple ways for you to demonstrate your understanding – discussions, projects, exams, creative assignments… If you’ve struggled in a traditional classroom, I hope we’ll find a way to get through this together.” This brings me to my next question: How am I demonstrating to students the value in making mistakes?

I don’t know about you, but I have countless students who are frozen into inaction by their fear of failure. Students that I know understood the material will turn in tests with blank pages. When I ask them about it, they profess that they just weren’t sure they were on the right track. I try to demonstrate how useful mistakes are with my policies (see above), as well as in how I highlight student work and respond to students. I try to bring up “good mistakes” in class or in video debriefs, focusing on the thinking that led the student to that answer and all the understanding that their work shows. I hope that by applauding those efforts and working hard to build those connections with and between students, they will be more willing to share their thinking without fear.*

* This letter from a former student shows that I’m on the right track, but I need to add a question about this to my end-of-semester survey to make sure all students feel this way.


5. Assessing understanding

Online assessments are a tricky beast. It’s nearly impossible to be sure our students are the ones taking our assessments and that they are doing so without some outside aid. I feel like I have to include this section because it’s the most common question I get from faculty – how can I make sure my students aren’t cheating? Short answer, you can’t. So here’s the question to ask yourself: Are exams the best way to assess student knowledge?

Consider projects or other tasks where students can demonstrate that they understand the course content. Projects have the added bonus of letting students see how statistics is actually used to answer questions, relevant to what they are interested in, and connected to the other courses they are taking. I personally do a variation on the ASA Project Competition (here), where students can either submit a written report or record a presentation.

I still have exams, too.  I’ve just lessened their weight so that students don’t have any real incentive to cheat.  And I have embraced open-ended questions.  For years, I avoided these types of questions because they were harder to grade and truly required better understanding and communication skills from students than the same question cleverly written as multiple choice.  On my latest exam, here’s one of the options for a particular question pool:

Many colleges were scrambling to provide resources for students with the switch to remote learning. They surveyed students by reaching out via the students’ listed email addresses to see what resources they would need to continue to attend classes in the switch to online. Do you believe this is a good survey technique? Explain why or why not. What are some issues that may arise from this survey option?

Four years of reading the AP Statistics exam has trained me not to fear reading free response questions like the one above. Even three years ago, I’d probably be shaking in my boots at the prospect of grading over a hundred free response questions on a given exam. I cannot emphasize enough how useful participating in the AP reading has been for me as an educator. Empowered by that experience, my “complete” student response to the question has four components:

  1. States that the voluntary response method described is not a good technique.
  2. Notes and provides a reason students may not be included in the survey responses – such as they choose not to take it, don’t check their email, or …
  3. Notes that students without resources are less likely to respond to the survey.
  4. Concludes that the schools will underestimate the resources needed as a result of (3).

Much like an AP scoring rubric, students must get component 1 in order to earn any points for the problem. And for full credit, they must include all four components. If you’re looking for some great questions, beyond those that Allan has provided us here over the past year, previous AP Statistics free response questions are a great place to get inspiration as you write assessments and corresponding rubrics*.

* StatsMedic has very helpfully categorized all of these questions by topic here.


6. The Real Question

All of the questions I’ve asked you to reflect on throughout this post come down to a common theme: Am I reaching ALL of my students?

I’m lucky enough to work at a campus that has provided me with data on my classes’ success rates disaggregated by gender, age, and ethnicity. I know what groups I need to work harder to reach. If possible, get these data from your school. If not, have students self-report and then see if you notice any trends throughout the semester/year. If you’re new to the idea of culturally responsive teaching, I strongly recommend Zaretta Hammond’s Culturally Responsive Teaching and the Brain – it’s a great mix of research, practical tips, and reflection.


I hope you found something you can use in your classrooms in this post. Take what works for you, leave what doesn’t. And keep continuously reflecting on your own teaching practices.

Here are Allan’s own words (from post #52, Top thirteen topics, here), because I think they bear repeating: “I know that if I ever feel like I’ve got this teaching thing figured out, it will be time for me to retire, both from teaching and from writing this blog.”

This is my mantra*. Keep reflecting on your choices. Keep trying new things. Keep being fearless. Hopefully along the way, we’ll do better for all of our students.

* Minus the blog part, because I have no idea how he did this for 52 weeks!

#59 Popularity contest

This guest post has been contributed by Anna Fergusson. You can contact Anna at a.fergusson@auckland.ac.nz.

Anna Fergusson is a Professional Teaching Fellow in the Department of Statistics at the University of Auckland.  I met Anna at the 2019 Joint Statistical Meetings, where she gave a terrific talk about introducing statistics students to data science, which is the topic of her Ph.D. research.  I admit that part of the appeal of Anna’s presentation was that her activity involved photos of cats.  But more impressive is that Anna described a fascinating activity through which she introduces introductory students to modern computational tools while emphasizing statistical thinking throughout.  I am delighted that Anna agreed to write this guest post about her activity, which also highlights her admirable and effective “sneaky” approach to student learning.  I also encourage you to follow Anna’s blog, with the not-so-subtle title of Teaching Statistics is Awesome and which has become one of my favourites*, here.

* I am using this non-conventional (for Americans) spelling in appreciation for Anna’s becoming my first guest contributor from outside the U.S.


I am thrilled to write this week’s guest post, not just because I get to add another activity to Allan’s examples of “stats with cats” (see post #16 here), but also because I strongly believe in asking good questions to guide students to discover “new-to-them” ideas or methods.

A current focus for my teaching and research is the design of accessible and engaging learning activities that introduce statistics students to new computational ideas or tools.  For these “first exposure” type learning tasks, I use What if..? style questions to encourage curiosity-driven learning. I also use the “changing stuff and seeing what happens” approach for introducing computational concepts, rather than starting the task with formal definitions and examples.

It’s an approach that has been described by both students and teachers as “sneaky,” but I think that it is a pretty good strategy for designing tasks that support the participation of a wide range of students. To pull off this undercover approach, you need a good cover story – something that is engaging, interesting and fun! A really “popular” task I have used to introduce APIs (Application Programming Interfaces) for accessing data involves searching for photos of cats and dogs online. I’ve tried out several versions of this task over the last few years with a range of school-level students and teachers, but this particular version of the task is from the introductory-level university course I’ve designed for students who have not completed Grade 12 mathematics or statistics. The overall question for the exploration is: What is more popular on Pixabay – photos of cats or photos of dogs?


I usually start the activity by asking students: What is your favourite type of animal, cats or dogs? I would like to say that there is a deeper learning point being made here, for example getting students to acknowledge their own personal biases before they attempt to learn from data, but really I ask this question so I can pretend to be offended when more students state that they prefer dogs to cats! And also so I can use this meme:

Source: https://cheezburger.com/7754132480

I then ask students to go to pixabay.com and explore what they can find out about whether photos of cats or dogs are more popular on this website. The only direction I give students is to make sure they have selected “photos” when they search, and to point out that the first row of photos consists of sponsored ones. I encourage students to work in pairs or small groups for this activity.

While finding pretty adorable photos of cats and dogs, students are familiarising themselves with the website and what data might be available for analysis, which will come in handy later in the task. It also helps that popularity metrics such as likes and views are already familiar to students thanks to social media. I generally give students about five minutes to explore and then ask groups to share with the class what they have learned about the popularity of cat and dog photos, including what their “hunch” is about which animal is more popular on Pixabay.

There are a lot of approaches that students can take to explore and compare popularity, and it’s helpful to have some questions up your sleeve to ask each group as they share what they learned. For example, one approach is to determine how many photos are returned when you search for “cat” and compare this to the number of photos that are returned when you search for “dog”. You can ask students who use this approach: What happens when you search for “cat” compared to “CAT” compared to “cats”? Students may or may not have noticed that their search terms are being “manipulated” in some way by the website.

Another good question is: Were all the photos returned the kind of “cat” that you expected? This can lead into a discussion about how photos are uploaded and given “tags” by the photographer, and whether the website checks whether the tags are appropriate or correct. Most students discover that if you hover over a photo returned in the search query, you can see some metrics associated with the photo, such as its top three tags and the number of likes, favourites and comments the photo has (see an example below).

To encourage students to think about how the photos are ordered in the search results, I ask students: What photos are being shown to you first when you search for “cat”? Can you spot a pattern to the order of the photos? Initially, students might think that it is just the number of likes (the thumbs-up count) that is determining the order, but if they look across the first 20 or so photos, they should notice that the pattern of decreasing like counts as you move “down the rank” doesn’t always hold.

I also prompt discussion about the nature of the “metrics” by asking: What is another reason why one photo might have more likes than another photo? Clearly, you can’t like a photo if you’ve never viewed it! Additionally, some photos may have been on the website for longer than others, and some of these variables require more effort on the part of the “searcher” than others – e.g., viewing a photo versus liking a photo.

This phase of the task works well because students are exploring data, generating questions, and integrating statistical and computational thinking, all without any requirements to perform calculations or write precise statistical statements. However, there is only so much you can learn from the website before needing a way to access more of the data faster than viewing each photo individually. Fortunately, Pixabay offers an API service to access photos and data related to the photos (you can find the documentation about the API here).


Don’t know anything about APIs? Don’t worry, neither do my students, and in keeping with my sneaky approach, we’re not going to jump into the API documentation. Instead, I ask students to pay attention to the URL when they search for different photos. I then use a sequence of questions to guide students towards structuring an API request for a particular search:

  • What do you notice changes about the URL each time you try a new search?
  • Can you change the photos searched for and displayed on the page by changing the URL directly?
  • Can you work out how to search for “dog costume” by changing the URL rather than using the search box?

For example, the screenshot below shows that the URL contains fixed information like “photos” and “search” but the last part changes depending on what you search for:

Through this sequence of questions, students start to notice the structure of the URL, and they also learn just a little bit about URL encoding when they try a search based on two words. For example, a search for “cat costume” will result in (1) cute photos of cats, but also (2) a URL where the spaces have been replaced with “%20”: https://pixabay.com/photos/search/cat%20costume/.
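For teachers who like to show the same “URL hacking” idea in code, here is a minimal sketch in R using only base/utils functions. The search-URL structure is the one shown above, and the search term is just an illustration:

    search_terms <- "cat costume"
    encoded <- URLencode(search_terms)   # spaces become %20
    url <- paste0("https://pixabay.com/photos/search/", encoded, "/")
    url
    # [1] "https://pixabay.com/photos/search/cat%20costume/"
    browseURL(url)   # opens the search results in the default browser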

I then ask students to find a photo of a cat or a dog that they really like and to click on this photo to open its webpage. I then use a sequence of questions to guide students towards structuring an API request for a particular photo:

  • What do you notice about the URL for a specific photo?
  • How is it different from the URL when we were searching for photos?
  • Which part do you think is the ID for the photo?
  • What happens if you delete all the words describing the photo and leave just the ID number, such as: https://pixabay.com/photos/551554?
  • Is there a photo that has an ID based on your birth date?
  • What was the first photo uploaded to the website?
  • How could we randomly select one photo from all the photos on Pixabay?

That last question is a sneaky way to bring in a little bit of discussion about sampling frames, which will be important later in the task if/when we discuss inference.

Once students have played around with changing the URL to change what is displayed on the webpage, I congratulate them on becoming “URL hackers.” Now it’s time to look more closely at what data about the photo is available on its webpage. I typically ask students to write down all the variables they could “measure” about their chosen photo. Depending on time, we can play a quick round of “Variable Boggle,” where each pair of students tries to describe another variable that no other pair has already described before them.


I then tell the students that the Pixabay API is basically a way to grab data about each photo digitally rather than us copying and pasting the data ourselves into a spreadsheet, and that to get data from the API we have to send a request. I then introduce them to an app that I have developed that allows students to: (1) play around with constructing and testing out Pixabay API requests, and (2) obtain samples of photos as datasets.

The app is available here.  Clicking on the top left button that says “API explorer” takes you to the screen shown below:

The API explorer is set up to show a request for an individual photo/image, and students only need to change the number to match the id of the photo they have selected. When they send the request, they will get data back about their photo as JSON (JavaScript Object Notation). As students have already recorded the data about their photo earlier in the task, they don’t seem to be intimidated by this new data structure. I then ask students to compare what we could view about the photo on its webpage with the data we can access about each photo from the API, asking: What is the same? What is missing? What is new?
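For classes that later move into R, here is a hedged sketch of what sending such a request might look like, using the httr and jsonlite packages. The API key is a placeholder (Pixabay issues free keys), and the id parameter and the “hits” element of the returned JSON reflect my reading of the API documentation linked above, so treat those details as assumptions to check:

    library(httr)       # for GET()
    library(jsonlite)   # for fromJSON()

    api_key <- "YOUR_API_KEY"   # placeholder: students would paste in their own free key

    # Request the data for one photo by its id (the same id used in the URL-hacking step)
    response <- GET("https://pixabay.com/api/",
                    query = list(key = api_key, id = 551554))

    photo <- fromJSON(content(response, as = "text", encoding = "UTF-8"))
    str(photo$hits)   # the variables the API returns for this photo (assumed "hits" element)

Changing id = 551554 to q = "cat" mirrors the keyword search that students try next in the API explorer.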

For example, a comparison of the information available for a photo on the webpage and the JSON returned for an individual photo reveals that only the first three tags about a photo are provided by the API, that the date the photo was created is not provided, and that a new variable called imageSize is provided by the API:

Reminding them of the earlier discussion about how long a photo has been online, I point out that the date the image was uploaded is not directly available from the API (if students have not already identified that it is missing when sharing the similarities and differences between data on the webpage and data from the API). I ask them: Is there another variable about the photo that we could use to estimate how long the photo has been online? Do any of these variables appear to contain date information? Once we’ve narrowed it down to two potential candidates – previewURL and userImageURL – I ask students to compare the dates shown in the URL to the date uploaded on the webpage for the photo. This mini-exploration leads to a discussion that we could use the date from the previewURL to estimate the date the photo was uploaded, and that while the dates don’t always match up, the date from previewURL appears to be a reasonable proxy.
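For what it’s worth, here is a minimal R sketch of turning that proxy into a usable variable; the URL below is a made-up example of the previewURL pattern (a yyyy/mm/dd path segment), not a real photo:

    preview_url <- "https://cdn.pixabay.com/photo/2014/12/05/21/46/cat-12345_150.jpg"  # hypothetical example
    date_string <- regmatches(preview_url,
                              regexpr("[0-9]{4}/[0-9]{2}/[0-9]{2}", preview_url))
    date_online <- as.Date(date_string, format = "%Y/%m/%d")
    date_online
    # [1] "2014-12-05"
    days_online <- as.numeric(Sys.Date() - date_online)   # estimated days the photo has been online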

One of the limitations of the Pixabay API is that you only get a maximum of 500 results for any request. You do have a choice of ordering the results in terms of popularity or date uploaded, and for my app I have chosen to return the results in terms of popularity (hence the title of the activity!). To help students discover this and also a little more about how JSON is structured, we can use the API explorer to search photos based on a keyword. To connect back to our initial search for “cat” or “dog”, I tell students they can change the API request from “id=” to “q=” to search for photos based on a key word or words. I ask them to use the API explorer to search for photos of cats, and to compare the first three results from their API request (q=cat) to the first three results from searching for “cat” on the Pixabay website (see screenshots below).


Now that we’ve learned a little how we can use the Pixabay API to access data about photos, it’s time to refocus on our overall question: What is more popular on Pixabay – photos of cats or photos of dogs? To do this, we’ll use another feature of the app that allows students to obtain random samples of the most popular photos. I direct students to use the app to take a random sample of 100 cats and 100 dogs from the most popular photos on Pixabay, and the app then displays all the photos in the sample on the left side of the screen:

The interface is designed to allow for a new categorical variable to be created, based on dragging the photos across the page in two groups (see later for examples of explorations of this nature). For this exploration, we don’t need a new categorical variable because we searched for photos of dogs and cats, and the search term used is one of the variables. To use all the photos under “No group” students need to re-label the “No group” heading to something else like “All.” Clicking the “Show data table” button allows students to see the data about each photo as a rectangular data structure (each row is a different photo):

Clicking the “Get links to data” button allows students a quick way to “jump with the data” into an online tool for exploring the data, as well as the option to download the data as a CSV file. I use this task with students after they have already used a tool like iNZight lite (here) to explore data. This means I can just ask my students to use the data to check their hunch about whether photos of cats or dogs are more popular on Pixabay, and give them time to explore their data with their partner/group. Similar to earlier in the task, after about 10 minutes I ask the different pairs/groups of students to share what they have learned. Most groups make plots comparing likes by the search term, as shown here:

Some students create a new variable, for example the number of likes per days online, and compare this for the cat and dog photos in the sample, as below:
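For classes already comfortable in R, a hedged sketch of computing that new variable from the downloaded CSV might look like the following; photos, likes, days_online, and search_term are hypothetical names standing in for whatever the exported file actually contains:

    photos <- read.csv("pixabay_sample.csv")   # hypothetical file name for the exported data
    # days_online could be computed from previewURL as in the earlier sketch
    photos$likes_per_day <- photos$likes / photos$days_online
    tapply(photos$likes_per_day, photos$search_term, summary)    # compare cats vs. dogs numerically
    boxplot(likes_per_day ~ search_term, data = photos)          # and graphically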

Depending on where the class is at in terms of learning about sample-to-population inference, we can talk about more formal approaches for comparing the popularity of cat and dog photos on Pixabay. An important aspect to that discussion is that the population is not all photos on Pixabay, but the most popular photos as determined by Pixabay using some sort of algorithm unknown to us.

The activity ends with asking students to carry out their own exploration to compare the popularity of two types of photos on Pixabay. The huge advantage we have with introducing an API as a source of data to students, and providing an app that allows easy access to that API, is that students get to choose what they want to explore. By using an API connected to a photo-sharing website with search capabilities, students also have a way of building familiarity with the data before accessing the data set. Beyond comparisons of popularity, other interesting investigations involve using what is shown in the photo to create a new categorical variable. For example, I’ve had students explore whether most photos of dogs are outside shots (see earlier discussion and screenshot of creating new categorical variables using the popularity contest app). Other interesting research questions from students have included: Are most of the popular Pixabay photos that are tagged as “cat” actually photos of domestic cats?


Often my students form their “hunch” for a research question based on viewing the first 20 or so photos from the website search.  Then they are surprised not to find a similar result when taking a random sample of popular photos. I think there’s something nice in this idea of not jumping to conclusions from searches generated by an algorithm designed to give prominence to some photos over others! My students have also written about how the task helps expand their ideas of where they can get data from and makes them more aware of how much data is being collected from them as they interact with websites.

I commented at the beginning of this post that tasks like these have been described by others as “sneaky.” I’ve also been accused of tricking students into learning because I made the activities so much fun. In fact, my students’ enjoyment continues even when I extend this task to introduce them to using R code to interact with Pixabay photos and the API. I say “even” because so many of my students have pre-determined negative views about learning computer programming, so they really are genuinely surprised to find that the experience of “coding with data” can be fun – especially if you use a “cover story” of creating memes with Pixabay photos as a sneaky way to learn about arguments for functions!

When we design activities that introduce students to new computational ideas or tools, it’s only natural to make the “new thing” the star of the show. Although the overall learning goal of this task is to introduce students to some new ideas related to APIs, the immersive experience of searching for photos to find out whether cats are more popular than dogs is the real star of every act of this show. By structuring and asking good questions to drive learning rather than focusing on formal definitions initially, I believe a wide range of students are supported to engage with the many statistical and computational ideas that they discover along the way. What else makes this task successfully sneaky? Cats, of course, lots and lots of photos of cats!

#58 Lizards and ladybugs: illustrating the role of questioning

This guest post has been contributed by Christine Franklin.  You can contact Chris at chris_franklin@icloud.com.

Chris Franklin has been one of the strongest advocates for statistics education at the K-12 level for the past 25 years.  She has made a tremendous impact in this area through her writings and presentations, and also with her mentorship and leadership on individual levels.  Her work includes the PreK-12 GAISE report (here), the Statistical Education of Teachers report (here), and a college-level textbook (here).  Chris also served as Chief Reader of the AP Statistics program.  Chris is retired from the Statistics Department at the University of Georgia, and she currently serves as the inaugural K-12 Statistical Ambassador for the American Statistical Association (read more about this here).  I am very pleased that Chris agreed to write this guest blog post about the role of questioning described in the forthcoming revision of the PreK-12 GAISE report.


It has been my great fortune to be part of the writing teams for both the Pre-K-12 GAISE Framework published in 2005 (here) and the soon-to-be published Pre-K-12 GAISE II (tentatively planned for autumn release 2020)*. The GAISE Framework of essential concepts is built around the four-step statistical problem-solving process: formulate statistical investigative question, collect/consider data, analyze the data, and interpret the results.  This framework involves three levels of statistical experience, with Level A roughly equivalent to elementary, B to middle, and C to high school. Question-posing throughout the statistical problem-solving process and at each of the progressive levels is essential:

* The GAISE II writing team, which also developed the examples presented in this post, includes Anna Bargagliotti (co-chair), Pip Arnold, Rob Gould, Sheri Johnson, Leticia Perez, and Denise Spangler.

This four-step statistical problem-solving process typically begins with formulating a statistical investigative question. When analyzing secondary data from an available source, the process might start with considering the data. The problem-solving process is not linear, and it is important to interrogate continuously throughout analyzing the data and interpreting the results. Posing good questions and knowing when to question is a skill that we must constantly hone. The GAISE II report presents 22 examples across the three levels to illustrate the necessity of being able to reason statistically and to make sense of data. Key within all these examples is the role of questioning. I will present two of my favorite examples from GAISE II to illustrate the crucial role of questioning.


Example 1: Those Adorable Ladybugs

1. Formulate Statistical Investigative Questions

One of the new, more science-focused investigations presented at Level A in GAISE II is about ladybugs. With beginning students, teachers might provide guidance when coming up with a statistical investigative question, the overarching question that begins the investigation. As students advance from Level A to Level B, they take more ownership in posing questions throughout the process. A statistical investigative question a student might pose asks for a summary, such as: What does a ladybug usually look like? or How many spots do ladybugs typically have? The statistical investigative question the student poses might also be comparative, such as: Do red ladybugs tend to have more spots than black ladybugs?  Questions for this step of the process are shown here:

To answer these questions, we need to observe some ladybugs. Students might collect them outdoors. Teachers can also mail-order live ladybugs. An alternative is to use photo cards that allow students to observe a variety of ladybugs:

2. Collect/Consider Data – Data Collection Questions

To answer the statistical investigative questions posed by the students, data collection questions are developed. Some examples are given in the figure below:

These questions collect data for one numerical variable (number of spots) and two categorical variables (color of body and color of spots).  Collecting data requires careful measurement and even at this level, students will have to wrestle with questions such as: What is a spot versus a blemish? The class needs to agree upon some criteria as to what constitutes a spot. For example, they might decide not to count spots that are on the margins of the elytra, which is the hard wing cover.

How might young students organize the data? They could use data cards to organize the variable values for each ladybug, where each data card represents a case (the ladybug), as shown above. These physical data cards can help beginning students develop an understanding of what a ‘case’ is, a challenging concept for even advanced students. The students might next create a table, also as shown above.

3. Analyze the Data – Analysis Questions

How do the students now make sense of the data?  Beginning Level A students might use a picture graph that allows each ladybug to be identified. As students advance to Level B, they can use a dotplot. Teachers should support Level A students in thinking about the distribution and asking analysis questions. Analysis questions might prompt different representations or prompt the need for different data collection questions.  This step is depicted here:

4. Interpret – Connecting to the Statistical Investigative Question

As the analysis questions are answered, the results of the data analysis aid in answering the statistical investigative question(s). Level A students are not expected to reason beyond the sample, and the teacher should encourage the students to state their conclusion in terms of the sample. Some possible student responses are shown here:

The ladybug investigation allows students at a young age to experience the statistical problem-solving process, recognize the necessity of always questioning throughout the process, and learn how to make sense of data by developing understanding of cases, variables, data types, and a distribution. These young students can also begin to experience that questioning throughout the statistical problem-solving process is not necessarily linear – a typical upper-end Level B and Level C experience, as illustrated with the following example.


Example 2: Those Cute Lizards

As students transition from Level B to Level C, they are becoming more advanced with the types of questions posed throughout the statistical problem-solving process, considering datasets that are larger and not necessarily clean for analysis, and using more tools and methods for analyzing the data.

1. Formulate Statistical Investigative Questions

Suppose students in a science class are investigating the impact of human development on wildlife. In an earlier analysis of a small pilot dataset, the students concluded that lizards in “disturbed” habitats (those with human development) tended to have greater mass than lizards in natural habitats. This led the students to pose and investigate the following question: Can a lizard’s mass be used to predict whether it came from a disturbed or a natural habitat?

2. Collect/Consider Data – Data Collection Questions

The students searched for available data that might help answer this statistical investigative question. They found a dataset where a biologist randomly captured individual lizards of one species across these two different habitats on four islands in the Bahamas (see research article here). The biologist found 81 lizards from natural habitats and 78 from disturbed habitats and recorded measurements on several different variables, as shown here:

Students should explore and interrogate the dataset, asking what variables are included, what unit of measurement was used for each variable, and whether the variables will be useful and appropriate for answering the statistical investigative question. If the data are reasonable for investigating the posed statistical question, then the students will move to the analysis stage. If the data are not reasonable, they need to search for other data.

3. Analyze Data – Analysis Questions and Interpret

Recall the initial statistical investigative question:  Can a lizard’s mass be used to predict whether it came from a disturbed or a natural habitat?

Students at Level B/C might first consider the distribution of mass for each of the two groups, asking appropriate analysis questions to compare the characteristics of those groups with respect to shape, center, variability, and possible unusual observations. The dotplots below, created in the Common Online Data Analysis Platform (CODAP, available here), display the distributions of mass (in grams) for the two types of lizards:

Students see considerable overlap in the two distributions but some separation. We want students to recognize that the more separation in the distributions, the better we can predict lizard habitat from mass. In thinking about how they can use these distributions to predict lizard habitat from mass, a student can consider a classification approach by asking: Where would you draw a cutoff line for the two distributions of mass to predict type of habitat?

Students might see a separation of the two distributions at around 6.25 grams, thus proposing the classification rule: If the lizard’s mass is less than 6.25 grams, then classify the lizard as from a natural habitat; otherwise, classify the lizard as from a disturbed habitat. Due to the substantial overlap, many lizards would be mis-classified with this rule. Students can then count the number of mis-classifications with this rule, as shown here:

Students can then create a table/matrix and calculate the mis-classification rate to be 55/159 ≈ 0.346, or 34.6%:

Should we be satisfied with a mis-classification rate of 35%, or can we improve with a different classification rule? We want students to revisit the two distributions of mass and consider finding a different cutoff point that will lower the mistakes made and reduce the mis-classification rate. Students may notice that if the cutoff point is lowered to 5 grams, we will mis-classify a few “natural” lizards but will correctly classify many more “disturbed” lizards:

The mis-classification rate becomes (32+11)/159 = 43/159 ≈ 0.270, or 27.0%, so this new classification rule reduces the mis-classification rate from 35% to 27%. Students can continue to develop other rules that further reduce the mis-classification rate.
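A hedged R sketch of this kind of trial-and-error is below; lizards is a hypothetical data frame holding the biologist’s measurements, with columns assumed to be named mass and habitat ("natural" or "disturbed"):

    # Mis-classification rate for a given cutoff
    misclass_rate <- function(cutoff, data) {
      predicted <- ifelse(data$mass < cutoff, "natural", "disturbed")
      mean(predicted != data$habitat)
    }
    misclass_rate(6.25, lizards)   # the first rule
    misclass_rate(5.00, lizards)   # the improved rule

    # Students could even plot the rate over a grid of cutoffs:
    cutoffs <- seq(2, 10, by = 0.25)
    plot(cutoffs, sapply(cutoffs, misclass_rate, data = lizards), type = "l",
         xlab = "cutoff (grams)", ylab = "mis-classification rate")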


Encourage students to be inventive as they develop more classification rules. They may soon be asking whether there are other variables in the data set that may help, in addition to mass, in predicting the type of habitat. Thus, they now return to posing another possible statistical investigative question: Can a lizard’s mass and head depth be used to predict whether it came from a natural or disturbed habitat?

Now back at the analysis component of the statistical problem-solving process, a student at Level B/C may first explore the bivariate relationship between the two numerical variables, mass and head depth, by examining a scatterplot. Utilizing output from a web applet in ArtofStat (here), we notice a moderate positive linear relationship between mass and head depth.  A line of best fit to the data yields the equation: predicted mass (grams) = -5.27 + 2.01×head depth (centimeters):

An analysis question at this stage could be: What is the interpretation of the slope 2.01?  Since this is a probabilistic rather than deterministic model, we want students to say: “For each one-centimeter increase in head depth, the mass of the lizard is predicted to increase by 2.01 grams, on average.”

This analysis provides useful information, but it does not allow us to address our statistical investigative question to use mass and head depth to predict whether a randomly chosen lizard is from a natural or a disturbed habitat. How might we refine our analysis to incorporate type of habitat?

Instead of displaying the lizards in the scatterplot together ignoring their type of habitat, we can display the lizards using different symbols for natural and disturbed habitat. This provides a multivariate analysis where we have incorporated a third variable. The following graph displays the output from this analysis with separate lines of best fit for the two habitats:

Now suppose a randomly chosen lizard has mass 3.6 grams and head depth 5.5 centimeters. Would you predict this lizard to be from a natural or disturbed habitat? How would you use the multivariate analysis to make this prediction?

Again, let students explore and try different approaches, asking students to justify their approach statistically. Some student approaches might be:

  1. A graphical approach: Plot the point (5.5, 3.6) on the scatterplot. This point lies closer to the prediction line for natural habitat than the prediction line for disturbed habitat. This point also falls more within the cluster of points for lizards from a natural than from a disturbed habitat.
  2. A computational approach: Evaluate the predicted mass based on a head depth of 5.5 cm for each of the two lines. The predictions turn out to be 5.05 grams for the “disturbed” line and 4.575 grams for the “natural” line. The residuals for these predictions are (3.6 – 5.05) = -1.45 for the “disturbed” group and (3.6 – 4.575) = -0.975 for the “natural” group. Because the residual for “natural” is closer to zero than the residual for “disturbed,” we predict that this lizard came from a natural habitat.

All of these analyses will result in some mis-classifications. Our goal is to minimize the mis-classification rate. Looking back at the dataset of variables measured on the lizards by the biologists, students might consider whether more variables could be included to improve classification accuracy. Again, students might return in the process to posing a new statistical investigative question: How can different features of a lizard (e.g., head depth, hind limb length, mass) best be used to predict whether it came from a natural or a disturbed habitat?

The analyses we have explored thus far can be generalized to more than two predictor variables, but developing classification rules becomes tedious without the use of computer technology. An algorithm known as Classification and Regression Trees (CART) produces a series of rules for making classifications based on a number of predictor variables. Below is a CART using mass, head depth, and hind limb length to predict type of habitat. The goal is that Level C students understand how to interpret output from the CART algorithm, not learn the details of how the algorithm works.
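To give a sense of how such output can be produced, here is a hedged R sketch using the rpart package (one common CART implementation); the data frame and column names (lizards, habitat, mass, head_depth, hindlimb_length) are hypothetical stand-ins for the actual dataset:

    library(rpart)
    # Fit a classification tree predicting habitat from three lizard measurements
    tree <- rpart(habitat ~ mass + head_depth + hindlimb_length,
                  data = lizards, method = "class")
    print(tree)             # the series of splitting rules
    plot(tree); text(tree)  # a quick tree diagram to interpret with students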


Whether you are working with small samples from a population, experimental data, or vast datasets such as those found on public data repositories, questioning throughout the statistical problem-solving process is essential. This process typically starts with a statistical investigative question, followed by a study designed to collect data that aligns with answering the question. Analysis of the data is also guided by asking analysis questions. Constant interrogation of the data throughout the statistical problem-solving process can lead to the posing of new statistical investigative questions. When considering secondary data, the data first need to be interrogated.

The ladybug and lizard examples attempted to illustrate the essential role of questioning throughout the statistical process.  Notice that the ladybug example involves summary and comparative investigative questions, while the investigative questions posed and explored in the lizard example are associative – looking for relationships among two or more variables to aid in making predictions.

Now more than ever, questioning is a vital part of being able to reason statistically. In carrying out the statistical problem-solving process, we want students and adults to always be asking good questions. The Pre-K-12 GAISE II document advocates that this role of questioning begin at a very young age and gain maturity with age and experience.

To conclude with a quote from the GAISE II document: “It is critical that statisticians, or anyone who uses data, be more than just data crunchers. They should be data problem solvers who interrogate the data and utilize questioning throughout the statistical problem-solving process to make decisions with confidence, understanding that the art of communication with data is essential.”


P.S. A file containing the lizard data is available from the link here:

#57 Some well-meaning but misguided questions

This guest post has been contributed by Emily Tietjen. You can contact Emily at etietjen@mcoe.org.

Emily was a student of mine as a statistics major at Cal Poly.  She was an invaluable help to me as an exceptional teaching assistant for several years*.  I was delighted when Emily decided to pursue a teaching career.  She has taught AP Statistics and other math courses at high schools in and near Merced in the central valley of California.  Since the beginning of her teaching career, I have very much enjoyed visiting Emily and her students every spring.  Emily has quickly moved into an administrative role, as she now serves as one of two math coordinators for the county of Merced. In this role she helps teachers throughout the county to teach mathematics (and statistics!) well.  I greatly appreciate Emily’s writing this guest blog post about some questions that she encounters in her position.

* In addition to very helpfully supporting students’ learning, Emily also displayed an indispensable but unteachable quality for a TA: She laughed at my jokes no matter how many times she heard me tell them in different classes over many terms**.

** I’ll be curious to know whether she laughs at this one as she reads for the first time.


The first thing that you should know about me is that I could easily be referred to as a fangirl of both Allan Rossman and Jo Boaler.  I had the distinct privilege of sitting through six years worth of Dr. Rossman’s courses as both a student and as his TA.  Six years included both a statistics degree and a math credential, but honestly, who doesn’t want to spend as much time as they can in San Luis Obispo?

I can confidently say that I gleaned more from sitting through repeated classes from Dr. Rossman than I ever got from any professional development.  Ideas that were intrinsic to his style of teaching, although we never directly discussed his philosophy, are concepts that as a new math coordinator I’m only beginning to have a name for.  Ask good questions?  I used to think that had more to do with the person asking the question: Were they articulate and educated and thoughtful enough to ask a really good question?  What I’ve come to understand is that asking a good question means to give the learner the authority to come to an understanding of a concept through their own intuition.

But asking a good question is intimidating for someone (yes, me) who regularly harbors the feelings of imposter syndrome.  In this post I will pose some well-intentioned but ultimately misguided questions about how students, educators, and adults view mathematics and primary and secondary mathematics education.  I will also discuss why I consider these well-meaning questions to be problematic.


1. Are you a math person?

Many people ask this question of each other and of children.  I have been asked this question often. 

I grew up in what you might describe as a humanities family.  My mom studied English and German and taught both but primarily German.  My dad and brother both majored in history, read voraciously, and after teaching the subject both went into administration.  I’m like them.  I was a teacher, and I’m now an administrator.  But I was also never quite like them.  It shows in the directness I expect in an answer given to a question and in the long (interesting, albeit) stories my mom tells before finally getting to the point.  It shows in my ability to remember numbers and to quickly solve problems and their ability to remember historical events and the interwoven understanding of how they overlay onto each other.  Math basically always came easy to me, and reading basically was always quick and comprehensible to them.  Clearly, I’m a math person, right? Wrong.  As a child, I enjoyed puzzles.  My parents praised my efforts.  In school, I liked math and they constantly reinforced my abilities.  Despite that, each of my elementary teachers (female, for the record) would talk about their favorite subject, which never included math, while I rolled my eyes at the thought that girls could not be as good at math as boys. 

Over the years, thanks to many privileges I had, none more powerful than my parents’ faith in me, I took honors and AP math courses with many inspiring teachers.  Even more incredibly, I had two particularly wonderful math teachers, both women, for geometry and AP Statistics.  Both teachers brought math to life.  They made our classes collaborative and relevant to the world around us.  In both, I was asked to collect data from the outside world and apply meaning to what I had gathered.  They gave me manipulatives and visuals and allowed my classmates and me to formulate our understandings of the math.  They provided context that made the math meaningful to me.  Most of all I had fun. 

On the other hand, in most of my language, literature, and social science classes, teachers overwhelmed me with reading, taught history by having us read chapters aloud from a textbook, each student reading one paragraph at a time, followed by showing movies that partnered with the time period (yes, Mulan was shown with our unit on Chinese history).  I had a much more meaningful experience in school with math.  And I realize that others have stories like mine but completely in reverse. 

The work of Jo Boaler (see her book Mathematical Mindsets and her website here) has brought forth research about how brains learn and grow. Her work demonstrates how there is no research that makes someone have a “math brain.” Additionally, everyone has the capacity to continue learning any subject. A combination of factors led to my positive experiences with math.  My parents reinforced my ability.  I had teachers who empowered me and my learning. There’s no need for the question of whether or not you’re a math person, because there is no such thing. All students can learn math.


2. What class best meets the needs of the student?

This question is often considered as a student is being placed with a particular teacher or in a particular course.  Will it be “grade level” or “honors” or “remedial” or …? This one is so hard for me.  We want to do the best thing for our students, right? We want to make sure that students who are exceeding expectations are given enrichment and  opportunities to accelerate learning and students who are struggling are provided with support and remediation.  That sounds good, right?

I have classified this as a problematic question because even though it sounds innocent, it’s really about a practice called tracking. The problem is that research doesn’t back this up.  Ability grouping and tracking lead to differential outcomes for students.  At the secondary level, trying to meet students where they are means that teachers spend barely over a third of the year on grade-level material.  When students are given grade-level material, they succeed more often than not, yet most of the time they aren’t given the opportunity.  By tracking a student below grade-level content, a district is ensuring that those students will never be able to fill the gap between where they are and the grade-level content they deserve to see.  Students can be provided opportunities for advancement without needing to create specialized courses, and they should demonstrate that they have mastered the material before they advance rather than skipping concepts.  (You can read more about tracking issues in the reports here and here and also in the NCTM’s Catalyzing Change book series here.)

Another area where we suffer with this question is our undying race to Calculus in high school.  Too often we focus on how to prepare students to study calculus rather than consider what courses and skills would best serve their overall education and potential career.  The vast majority of jobs in this country will depend on data literacy or statistics, yet statistical topics are typically found in the last chapter of the textbook and treated as content to get to if there is time, which there very rarely is.  Many of the above links also discuss the need for statistics and data literacy in the TK-12* educational system, as well as the problematic nature of tracking.  Understanding data and statistics provides students relevance both to their current lives, through contexts that are inherent to subjects they are studying, and to their future careers.  When I was teaching math, students constantly asked when they would use the subject in their “real lives.”  When I was teaching statistics, students never asked that question.

* TK stands for Transitional Kindergarten, a preliminary class to Kindergarten offered to children born in September – December.

Fortunately, there are efforts being made to encourage prioritizing statistics and data literacy at the TK-12 level.  For example, Jo Boaler and her team have released a set of lessons on data science for grades 6-10 (here), along with an online teacher course (here) on data science and 21st Century Teaching and Learning.  California university systems have considered adding quantitative reasoning courses to their subject requirements (see here) for applying to their schools (the minimum course requirements to be accepted to public universities in California).  High school courses have been designed to address the need for a more relevant, equitable math course that highlights the use of data and statistics.  School districts and states have restructured their pathways to remove the tracking that is prevalent within our educational systems, which leads to more equitable outcomes, including through the specific inclusion of a statistics pathway.

This work must continue, as we know that data literacy will be crucial for our society.  We face the need to comprehend data in multiple ways: our own personal data are collected daily, and we mostly have no practical way of knowing how they are used, for good or for bad.  On top of that, more often than not, the careers our students will go into will require working with data and being able to analyze them.


3. Why do we have to do word problems?

Students often ask this of their math teachers.  I’m imagining my former students’ voices as I consider this topic. Heck, I hear my teenage self still wondering this.

Assigning word problems is sure to create anxiety, at least with the typical way that we approach them.  However, students often struggle with word problems for the wrong reason.  The very prospect of word problems ignites so much fear in students that they are hesitant to even read them in the first place.  Speed is all too often valued in the classroom and struggle is not, so confronting a word problem asks students to work on a concept they’re likely still grappling with while adding an additional complicating layer.  Students see word problems as complicated because of the typical way we present them, while the traditional pressures of math still exist in many classrooms – or, even worse, at home with little to no support.  They’ll need to read, decode, create an image or model, and transform that into something that they can then solve.  I’d argue that we teachers haven’t done a sufficient job of preparing students for these situations.  It doesn’t have to feel this way. 

For example, we can expose students to a context and help them make sense of it before they even know what the question is. By initially excluding the question, students are relieved of the solution-finding inclination that we all too quickly jump to.  One of my favorite routines (see here) encourages students to suggest mathematically reasonable questions for a given context before they are presented with one. After students have engaged with the context without the time pressure anticipated by typical math problems, they’re able to intuit what could and should be tested. This process gives meaning and helps us to understand the value of the problem.

When students have had this opportunity, word problems don’t feel so hard.  Word problems should pique interest and provide opportunities to make connections to the world around us.  They give us a reason to do math in the first place.  My assumption is that they feel hard because we feel rushed to find solutions.  Students are infrequently challenged to think slowly about a problem.  The pace of the class is often set by the speed at which the first correct answer is given.  Word problems can instill fear, and yet I think they’re truly key to making math feel relevant for our students, as long as they aren’t arbitrary for the grade level.

For an example of what this might look like, consider the following background information from free-response question #3 on the 2018 AP Statistics exam (here): Approximately 3.5 percent of all children born in a certain region are from multiple births (that is, twins, triplets, etc.). Of the children born in the region who are from multiple births, 22 percent are left-handed.  Of the children born in the region who are from single births, 11 percent are left-handed.

At this point, a class might have a conversation about clarifications they may need for accessing the language used or understanding the context.  Then, the teacher could ask students to come up with a question for the context.  Depending on age (or maturity level), students may ask questions like, “Where do they live?” or “How old are the kids?” Those questions need to be redirected, because we are looking for mathematical questions.  For this context, students may ask, “What is the probability that a student born in the region is right-handed?”  This isn’t the ultimate question asked of students on the AP exam, but having students consider their own questions engages them in the context and gives them ownership of the question.  A class of students will often come up with the intended question after only a few suggestions*.  Pausing to consider other questions will also be helpful to give students insight into other aspects that may be important for solving the problem.  These aspects include what types of variables are present, how the information may be organized or depicted graphically, and what given information may be useful in determining the solution. 

* The first part of this particular AP question asked: What is the probability that a randomly selected child born in the region is left-handed?
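For reference, the answer to that first part follows from the law of total probability using the three percentages given in the problem; a quick check in R:

    p_multiple    <- 0.035   # P(multiple birth)
    p_left_mult   <- 0.22    # P(left-handed | multiple birth)
    p_left_single <- 0.11    # P(left-handed | single birth)
    p_left <- p_multiple * p_left_mult + (1 - p_multiple) * p_left_single
    p_left
    # [1] 0.11385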


4. What does good teaching sound and look like?

Okay, this isn’t technically a bad question. Teachers and administrators ponder it year after year, and it continues throughout the career of everyone involved.  It’s in consideration when hiring, when deciding whether a teacher should receive permanent status, and as the years pass, the field evolves, and we learn more about equity and what methods work best.

The problem is that it’s very common for people to think of good teaching like how Trunchbull, from the film adaptation of Matilda, thought of an ideal school as “one in which there are no children at all.” Sadly, many teachers and administrators still consider a well-run class to be filled with students who are silent, only speaking when spoken to, and with students who sit down and stay there almost as if they don’t get to exist there as a person. 

We should instead nurture classrooms where students are given the authority to take ownership of their learning because students’ learning is more important than the teaching of lessons.  Teachers should be talking no more than half of the time.  Students should be talking.  Mostly to each other.  They should be positioned in a way where collaboration is convenient and encouraged. 

In my current role, I support mathematics teaching and learning for school districts within Merced County.  My office serves students from twenty school districts as well as our internal programs.  This accounts for about 60,000 students, of whom more than three-quarters are eligible for free and reduced lunch. Relative to the state, we have high populations of poverty and students who are classified as English learners. My office uses the following framework, developed by my colleague Duane Habecker based on Maslow’s hierarchy of needs, to advocate for an effective mathematics program for all students:

  • Material Needs: Every student has a teacher with appropriate mathematics content knowledge and knowledge for teaching mathematics.  Math lessons are rooted in a solid understanding of the standards through rigorous, high-quality curriculum and meaningful tools. 
  • Mindset & Culture: Every student is immersed in a mindset and culture that intentionally communicates all students can learn math at high levels while being responsive culturally and personally in a learning environment that considers each and every student’s unique background, experiences, cultural perspectives, traditions, and knowledge.  Mistakes in mathematics are normalized. Students regularly experience high-quality, grade-appropriate lessons and assignments.
  • Student-Centered Instruction: Every student regularly experiences instruction that is student-centered and designed to maximize students’ use of language. Lessons create space for students to participate in discourse to promote conceptual understanding, which then leads to procedural fluency, problem-solving, and application. 
  • Equitable Assessment: Every student is regularly and humanely assessed in order to understand their own growth and to receive productive feedback for next steps in learning.  Students use the feedback to know where they are in their learning, assess any misconceptions that need to be addressed, and then use the results to drive the next level of learning.

I hope that these well-meaning but misguided questions have illustrated the misdirected focus that many have about how best to support our students in their mathematics education.  When we pigeonhole students into our own fixed beliefs, it’s no wonder that we consistently turn out students who underperform in mathematics as compared with other countries.  I believe we will see incredible growth by making mathematics more relevant to students at all ages, discontinuing the use of ability grouping and tracking, and offering more equitable pathways for college and career readiness.  Focusing on statistics and data science is a necessary and important part of the solution, as this leads to productive and supportive classroom environments and helps students to acquire essential skills for a modern workplace and world.

#56 Two questions to ask before making causal conclusions

This guest post has been contributed by Kari Lock Morgan.  You can contact Kari at klm47@psu.edu.

Kari Lock Morgan teaches statistics at Penn State. Along with other members of her family, she is co-author of Statistics: Unlocking the Power of Data, an introductory textbook that emphasizes simulation-based inference. Kari is an excellent and dynamic presenter of statistical ideas for both students and teachers. She gave a terrific presentation about evaluating causal* evidence at the 2019 U.S. Conference on Teaching Statistics (a recording of which is available here), and I greatly appreciate Kari’s sharing some of her ideas as a guest blog post.

* I always implore students to read carefully to notice that causal is not casual.


How do we get introductory students to start thinking critically about evaluating causal evidence?  I think we can start by teaching them to ask good questions about potential explanations competing with the true causal explanation.

Let’s start with a generic example. (Don’t worry, we’ll add context soon, but for now just fill in your favorite two group comparison!).  Suppose we are comparing group A versus group B (A and B could be two treatments, two levels of an explanatory variable, etc.).  Suppose that in our sample, the A group has better outcomes than the B group.  I ask my students to brainstorm about: What are some possible explanations for this?  As we discuss their ideas, I look for (and try to tease out) three possible explanations:

  1. Just random chance (no real association)
  2. The A group differed from the B group to begin with (association, but due to confounding)
  3. A causes better outcomes than B (causal association)

This framework then leads naturally into what I think are the two key questions students should ask and answer when evaluating causal evidence:

  • Key question 1: Do we have convincing evidence against “just random chance”?  Why or why not?
  • Key question 2: Do we have convincing evidence against the groups differing to begin with?  Why or why not?

If the answers to both of the above questions are “yes,” then we can effectively eliminate the first two alternatives in favor of the true causal explanation.  If the answer to either of the above questions is “no,” then we are left with competing explanations and cannot determine whether a true causal association exists.   

As teachers of introductory statistics, where do we come in? 

  • Step 1: We have to help students understand why each of these questions is important to ask.
  • Step 2: We have to help students learn how to answer these questions intelligently.

As a concrete example, let’s look at the health benefits of eating organic.  We’ll investigate this question with two different datasets:

1.  Data from the National Health and Nutrition Examination Survey (NHANES), a large national random sample.  Our explanatory variable is whether or not the respondent bought anything with the word organic on the label in the past 30 days, and the response variable is a dichotomized version of self-reported health status: poor/fair/good versus very good/excellent.  The sample data are visualized below:

In the sample, 45.9% of organic buyers had very good or excellent health, as compared to only 33% of people who hadn’t bought organic, for a difference in proportions of 0.459 – 0.33 = 0.129. 

In the second dataset, fruit flies were randomly divided into two groups of 1000 each; one group was fed organic food and the other group was fed conventional (non-organic) food*. The longevity of each fly by group is visualized below:

* Fun fact: This study was conducted by a high school student!  The research article is available here.

Organic-fed flies lived an average of 20.31 days, as compared to an average of 17.06 days for conventional-fed flies, giving a difference in means of 3.25 days (which is long in the lifespan of a fruit fly!).

In both of these datasets, the organic group had better outcomes than the non-organic group.  What are the possible explanations?

  1. Just random chance (no real association)
  2. The organic group differed from the non-organic group to begin with (association, but due to confounding)
  3. Eating organic causes better health status/longevity than not eating organic (causal association)

Do we have convincing evidence against alternative explanations (1) and (2)? How can we decide?


As I mentioned above, we teachers of introductory statistics have two jobs for each of these questions: first helping students understand why the question needs to be asked, and then helping students learn how to answer the question.  I’ll address these in that order:

STEP 1: Help students understand why each of the key questions is important to ask – why it’s important to consider them as potential competing explanations for why outcomes may be higher in one group than another.  (This is non-trivial!)

Key question 1: Do we have convincing evidence against “just random chance”?  Why or why not?

Why is this question needed?  We have to take the time to help students understand – deeply understand – the idea of statistical inference, at its most fundamental level.   Results vary from sample to sample.  Just because a sample statistic is above 0 (for example) doesn’t necessarily imply the same for the population parameter or the underlying truth.   This is NOT about illustrating the Central Limit Theorem and deriving the theoretical distribution for a sample mean – it is about illustrating to students the inherent variability in sample statistics.  While this can be illustrated directly from sample data, I think this is best conveyed when we actually have a population to sample from and know the underlying truth (which isn’t true for either of the datasets examined here).

Key question 2: Do we have convincing evidence against the groups differing to begin with?  Why or why not?

Why is this question needed?  We have to take the time to help students understand – deeply understand – the idea of confounding, and why it’s dangerous to jump straight to the causal explanation if the groups differ to begin with.  If the groups do differ to begin with, we have no way of knowing whether that baseline difference or the A-versus-B distinction is causing the better outcomes.  I think that talking through intuitive examples* and showing students real examples with measured data on the confounding variable are both important for helping them grapple with this concept.  This reasoning is inherently multivariable, so examples must go beyond a bivariate context. 

* See posts #43 and #44 (here and here) for several examples.

In our NHANES organic example, I ask students to brainstorm: How might people who buy organic differ from the non-organic buyers?  Intuition is easy here, and students are good at this!  A common student answer is income, because organic food is more expensive. I respond by showing a real-data visualization of the relationship between eating organic and income, and between income and health status:

The sample data reveal that people who buy organic are richer, and richer people are healthier, so we would expect organic buyers to be healthier, even if buying organic food provided no real health benefit.  This is a concrete example of confounding, one that students can grasp.  Of course, income is not the only difference between people who buy organic and those who don’t, as students are quick to point out.  Given all of the differences, it is impossible to determine whether the better health statuses among organic buyers are actually due to buying organic food, or simply to other ways in which the groups differ. 

The key takeaway is that directly comparing non-comparable groups cannot yield causal conclusions; thus it is essential to think about whether the groups are comparable to begin with.


STEP 2: Help students learn how to reason intelligently about each of the key questions.

Key question 1: Do we have convincing evidence against “just random chance”?  Why or why not?

While we can assess this with any hypothesis test, I strongly believe that the most natural and intuitive way to help students learn to reason intelligently about this question is via simulation-based inference*.  We can directly simulate the values of statistics we would expect to see, just by random chance.  Once we have this collection of statistics, it’s relatively straightforward to assess whether we would expect to see the observed value of the sample statistic, just by random chance. 

* See posts #12, #27, and #45 (here, here, and here) for more on simulation-based inference.

I suggest that we can help students to initially reason about this in very extreme examples where a visual assessment is sufficient:

  • either the value of the sample statistic is close to the middle of the distribution of simulated statistics: could easily see such a statistic just by chance, so no, we don’t have convincing evidence against just random chance; or
  • the value of the sample statistic is way out in the tail: it would be very unlikely to see such a statistic just by chance, so yes, we have convincing evidence against just random chance.

In the case of the organic fruit flies dataset, we can use StatKey (here) to obtain the following distribution of simulated differences in sample means:

We notice that the observed difference in sample means of 3.25 days is nowhere to be seen on this distribution, and hence very unlikely to occur just by random chance.  (The sample statistic is even farther out in the tail for the NHANES dataset.)  We have convincing evidence against just random chance! 
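For instructors who prefer to run the simulation in code rather than in StatKey, here is a rough sketch of the same randomization logic in Python.  I do not have the raw fly longevities, so the two arrays below are simulated placeholders chosen only to match the reported group sizes and means (the spread of 6 days is an assumption); with real data you would read in the measured longevities instead:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: simulated to have roughly the reported means (20.31 and
# 17.06 days) and group sizes (1,000 flies each); the SD of 6 days is assumed.
organic = rng.normal(loc=20.31, scale=6.0, size=1_000)
conventional = rng.normal(loc=17.06, scale=6.0, size=1_000)

observed_diff = organic.mean() - conventional.mean()

# Randomization test: if food type made no difference, the group labels are
# arbitrary, so re-shuffle the labels many times and record the differences
# in means that arise just from the random assignment.
combined = np.concatenate([organic, conventional])
n = len(organic)
sim_diffs = np.empty(5_000)
for i in range(5_000):
    rng.shuffle(combined)
    sim_diffs[i] = combined[:n].mean() - combined[n:].mean()

p_value = np.mean(sim_diffs >= observed_diff)
print(f"observed difference in means: {observed_diff:.2f} days")
print(f"proportion of shuffles at least that extreme: {p_value:.4f}")
```

A dotplot of sim_diffs plays the same role as the StatKey display, and the proportion computed in the last lines is exactly the quantification discussed next.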

Of course, not all examples are extreme one way or another, so eventually we quantify this extremity with the p-value (a natural concept once we have students thinking this way!), but this quantification can follow after developing the intuition of “would I expect a sample statistic this extreme just by chance?”.    

Key question 2: Do we have convincing evidence against the groups differing to begin with?  Why or why not?

The best evidence against the groups differing to begin with is the use of random assignment to groups.  If the groups are randomly assigned, those groups should be similar regarding both observed and unobserved variables!  Although some differences may persist, any differences are purely random (by definition!).  You can simulate random assignment to convince students of this, which also makes a nice precursor to simulation-based inference! 
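Here is a minimal sketch of such a simulation, using a made-up baseline variable (call it income) for 2,000 hypothetical people.  Across repeated random assignments, the difference in group mean incomes is centered at zero, which is exactly the point to show students:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical baseline variable (income, in dollars) for 2,000 people.
income = rng.lognormal(mean=10.8, sigma=0.6, size=2_000)

# Randomly assign them to two groups of 1,000, over and over, and record how
# far apart the group mean incomes land; any differences are purely random.
diffs = []
for _ in range(1_000):
    shuffled = rng.permutation(income)
    diffs.append(shuffled[:1_000].mean() - shuffled[1_000:].mean())

diffs = np.array(diffs)
print(f"average difference across randomizations: {diffs.mean():,.0f}")
print(f"typical size of a random difference (SD): {diffs.std(ddof=1):,.0f}")
```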

Random assignment is not just an important part of study design, but a key feature to check for when evaluating causal evidence.  If my introductory students take only one thing away from my course, I want them to know to check for random assignment when evaluating causal evidence, and to know that random assignment is the best evidence against groups differing to begin with. 

Because the fruit flies were randomly assigned to receive either organic or non-organic food, we have convincing evidence against groups differing to begin with!   For the fruit flies we’ve now ruled out both competing explanations, and are left with the causal explanation – we have convincing evidence that eating organic really does cause fruit flies to live longer!!  Time to go buy some organic food*!!

* If you’re a fruit fly.

Because the NHANES respondents were not randomly assigned to buy organic food or not, it’s not surprising that we do observe substantial differences between the groups, and we would suspect differences even if we could not observe them directly.  This doesn’t mean that buying organic food doesn’t improve health status*, but this does mean that we cannot jump to the causal conclusion from these data alone.  We have no way of knowing whether the observed differences in reported health were due to a causal effect of buying organic food or due to the fact that the organic buyers differed from non-organic buyers to begin with.

* Make sure that students notice the double negative there.


Now I’ll offer some extra tidbits for those who want to know more about questioning causal conclusions.

When thinking about key question #2 about the groups differing to begin with, I want introductory students to understand (a) why we can’t make causal conclusions when comparing groups that differ to begin with, (b) that without random assignment, groups will almost always naturally differ to begin with, and (c) that with random assignment, groups will probably look pretty similar.  These are important enough concepts that I try not to muddy them too much in an introductory course, but in reality it’s possible (in some situations) to create similar groups without randomization, and it’s also possible to obtain groups that differ even after randomization, just by chance.

Random assignment is not the only way to rule out groups differing to begin with; one could also collect data on all possible confounding variables (hard!) and force balance on them, for example with propensity score matching or subclassification, but this is beyond the scope of an introductory course.  If you want to move toward this idea, you could compare units within similar values of an observed confounder (stratification).  For example, in the NHANES example, the organic buyers were healthier even compared to non-organic buyers within the same income bracket:

However, while this means the observed difference is not solely due to income, we still cannot rule out the countless other ways in which organic eaters differ from non-organic eaters.   We could extend this to balance multiple variables by stratifying by the propensity score, the probability of being in one group given all measured baseline variables (it can be estimated by logistic regression).  While this is a very powerful tool for making groups similar regarding all observed variables, it still can’t do anything to balance unobserved variables, leaving random assignment as the vastly superior option whenever possible.
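For readers curious about what this looks like in practice, here is a rough sketch of propensity score subclassification.  The data are entirely synthetic (the variable names, sample size, and relationships are invented for illustration and are not the NHANES data), and as noted above, the procedure can only balance the variables actually included in the model:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for observational data: whether someone "buys organic"
# and whether they report very good health both depend on measured baseline
# variables (income and age here), so the raw comparison is confounded.
n = 5_000
income = rng.normal(50, 15, n)
age = rng.normal(45, 12, n)
organic = rng.binomial(1, 1 / (1 + np.exp(-(income - 50) / 10)))
healthy = rng.binomial(1, 1 / (1 + np.exp(-(income - 50) / 20 - (45 - age) / 30)))
df = pd.DataFrame({"income": income, "age": age, "organic": organic, "healthy": healthy})

# Estimate the propensity score: P(organic | measured baseline variables).
model = LogisticRegression(max_iter=1_000).fit(df[["income", "age"]], df["organic"])
df["pscore"] = model.predict_proba(df[["income", "age"]])[:, 1]

# Subclassify into five propensity-score strata and compare outcomes within
# each stratum, so organic buyers are compared to non-buyers with similar
# measured backgrounds (unmeasured differences remain unaddressed).
df["stratum"] = pd.qcut(df["pscore"], q=5, labels=False)
within = df.groupby(["stratum", "organic"])["healthy"].mean().unstack()
print(within)
print("within-stratum differences:", (within[1] - within[0]).round(3).tolist())
```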

While random assignment creates groups that are similar on average, in any particular randomization groups may differ just due to random variation.  In fact, my Ph.D. dissertation was on rerandomization – the idea that you can, and should, rerandomize (if you do it in a principled way) if randomization alone does not yield adequate balance between the groups.  In an introductory course, we can touch on some classical experimental designs aimed to help create groups even more similar than pure randomization, for example, by randomizing within similar blocks or pairs.  One classic example is identical twin studies, which I can’t resist closing with because I can show a picture of my identical twin sons Cal and Axel in their treatment and control shirts!


Questioning causal evidence involves evaluating evidence against competing explanations by asking the following key questions:

  1. Do we have convincing evidence against “just random chance”?  Why or why not?
  2. Do we have convincing evidence against the groups differing to begin with?  Why or why not?

By the time students finish my introductory course, I hope that they have internalized both of these key questions – both why the questions need to be asked when evaluating causal evidence and how to answer them.

P.S. Below are links to datafiles for the examples in this post:

#55 Classroom assessment with clicker questions

This guest post has been contributed by Roxy Peck.  You can contact Roxy at rpeck@calpoly.edu.

I consider Roxy Peck to be one of the most influential statistics educators of the past 30 years.  Her contributions extend beyond her widely used and highly regarded textbooks, encompassing the teaching and learning of statistics at secondary and undergraduate levels throughout California, the United States, and beyond.  Roxy has been an inspiration and role model throughout my career (and for many others, I’m sure). I greatly appreciate Roxy’s taking the time to write this guest post about the use of clicker questions for classroom assessment.


Asking good questions is key to effective and informative assessment. Faculty use tests and quizzes to help them assess student learning, often for the purposes of assigning course grades. In post #25 of this blog (Group quizzes, part 1, here), Allan says he uses lots of quizzes in his classes because they also provide students with the opportunity to improve their understanding of the material and to assess how well they understand the material, and no one would argue with the importance of those assessment goals. But in this blog post, I want to talk about another form of assessment – classroom assessment. Classroom assessment is the systematic collection and analysis of information for the purpose of improving instruction. The more you know about what your students know and understand, the better you can plan and adjust your classroom practice.

I think that the best types of classroom assessments are timely and inform teaching practice, sometimes in real time. For me, the worst-case scenario is to find out when I am grading projects or final exams that students didn’t get something important. That’s too late for me to intervene or to do anything about it but hang my head and pout. That’s why I think good classroom assessment is something worth thinking carefully about.

My favorite tool for classroom assessment is the use of “clicker questions.” These are quick, usually multiple choice, questions that students can respond to in real time. The responses are then summarized and displayed immediately to provide quick feedback to both students and the instructor. There are many ways to implement the use of clicker questions, ranging from low tech to high tech. I will talk a little about the options toward the end of this post, but first I want to get to the main point, and that’s what I think makes for a good clicker question.


Clicker questions can be used to do real-time quizzes, and also as a way to create and maintain student engagement and to keep students involved during class, even in situations where class sizes are large.  But if the goal is to also use them to inform instruction, they need to be written to reveal more than just whether a student knows or understands a particular topic.  They need to be written in a way that will help in the decision of what to do next, especially if more than a few students answer incorrectly.  That means that if I am writing a clicker question, I need to write “wrong” answers that capture common student errors and misconceptions.

Clicker questions can be quick and simple. For example, consider the following question:

Seventy-five (75) college students were asked how many units of coursework they were enrolled in during the current semester. The resulting data are summarized in the following frequency table:

What is the median for this dataset?  Options: A) 10; B) 11; C) 12

For this question, the correct answer is 12. What are students who answer 10 or 11 thinking? A common student error is for students to confuse the frequencies with the actual data. A student who makes this error would find the median of the frequencies, which is 10. Another common student error is to confuse the possible values for number of units given in the frequency table with the actual data. A student who makes this error would find the median of the possible values (the numbers in the “Number of Units” column) and answer 11. The main thing to think about when putting a question like this together is anticipating these common student errors. That’s not a new idea when writing good multiple choice questions for student assessment, but the goal in writing for classroom assessment is to also think about what I am going to do if more than a few students pick one of the incorrect options. With this question, if almost all students get this correct, I can move on. But if more than a few students select incorrect answer (A), I can immediately adapt instruction to go back and address the particular student misunderstanding that leads to that incorrect answer. And I can do that in real time, not two weeks later after I have graded the first midterm exam.
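If you want to let students check the distinction for themselves, here is a small Python sketch.  The original frequency table isn’t reproduced above, so the table below is a hypothetical one, chosen only so that the three answer choices work out the same way (median of the data 12, median of the frequencies 10, median of the listed values 11):

```python
import numpy as np

# Hypothetical frequency table (not the original data): possible numbers of
# units and how many of the 75 students reported each value.
units = np.array([9, 10, 11, 12, 13])
frequencies = np.array([2, 10, 10, 43, 10])

# Reconstruct the 75 individual responses from the frequency table.
data = np.repeat(units, frequencies)

print("correct median (of the data):       ", np.median(data))         # 12
print("median of the frequencies (error):  ", np.median(frequencies))  # 10
print("median of the listed values (error):", np.median(units))        # 11
```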

Another example of a good clicker question that is related to the same student misunderstanding where frequencies are mistaken for data values is the following:

Which of the three histograms summarizes the dataset with the smallest standard deviation?

Students choosing either answer (A) or (C) are focusing on variability in the frequencies rather than variability in the data values. If I see students going for those answers, I can address that immediately, either through classroom discussion or by having students talk in small groups about the possibilities and come to an understanding of why answer choice (B) is the correct one.

Here is another example of a simple question that gets at understanding what is being measured by the interquartile range:

Which of the two dotplots displays the dataset with the smaller IQR?

What is the error in thinking for the students who choose answer (B)? What would you do next if you asked this question in class and more than a few students selected this incorrect option?


I will only use a clicker question if I have a plan for what I will do as an immediate reaction to how students respond. Often, I can see that it is safe to move on, knowing that students are with me and that further discussion is not needed. In other cases, I find that I have some work to do!

So what is the difference between a clicker question and a multiple choice question? I think that pretty much any well-written multiple choice question can be used as a clicker question, so strategies for writing good multiple choice questions apply here as well. But I think of a good clicker question as a good multiple choice question that I can deliver in real time AND that is paired with a plan for how student responses will inform and change what I do next in class. I have used multiple choice questions from sources like the LOCUS and ARTIST projects (described at the end of this post) as clicker questions.

Consider the following question from the ARTIST question bank:

A newspaper article claims that the average age for people who receive food stamps is 40 years. You believe that the average age is less than that. You take a random sample of 100 people who receive food stamps, and find their average age to be 39.2 years. You find that this is significantly lower than the age of 40 stated in the article (p < 0.05). What would be an appropriate interpretation of this result?

  • (A) The statistically significant result indicates that the majority of people who receive food stamps is younger than 40.
  • (B) Although the result is statistically significant, the difference in age is not of practical importance.
  • (C) An error must have been made. This difference is too small to be statistically significant.

This is a multiple choice question that makes a great clicker question because students who choose answer (A) or answer (C) have misconceptions (different ones) that can be addressed in subsequent instruction.

The same is true for the following clicker question:

In order to investigate a claim that the average time required for the county fire department to respond to a reported fire is greater than 5 minutes, county staff determined the response times for 40 randomly selected fire reports.  The data was used to test H0:  μ = 5 versus Ha:  μ > 5 and the computed p-value was 0.12.  If a 0.05 level of significance is used, what conclusions can be drawn?

  • (A) There is convincing evidence that the mean response time is 5 minutes (or less).
  • (B) There is convincing evidence that the mean response time is greater than 5 minutes.
  • (C) There is not convincing evidence that the mean response time is greater than 5 minutes.

If very many students choose response (A), I need to revisit the meaning of “fail to reject the null hypothesis.” If many students go for (B), I need to revisit how to reach a conclusion based on a given p-value and significance level. And if everyone chooses (C), I am happy and can move on. Notice that there is a reason that I put the incorrect answer choice (A) before the correct answer choice (C). I did that because I need to know that students recognize answer choice (A) as wrong and want to make sure that they understand that answer is incorrect. If the correct choice (C) came first, they might just select that because it sounds good without understanding the difference between what is being said in (A) – convincing evidence for the null hypothesis – and what is being said in answer choice (C) – not convincing evidence against the null hypothesis.


I have given some thought to whether to have clicker question responses count toward the student’s grade and have experimented a bit with different strategies. Some teachers give participation points for answering a clicker question, whether the answer is correct or not. But because the value of clicker questions to me is classroom assessment, I really want students to try to answer the question correctly and not just click a random response. I need to know that students are making a sincere effort to answer correctly if I am going to adapt instruction based on the responses. But I also don’t want to impose a heavy penalty for an incorrect answer. If students are making an effort to answer correctly, then I share partial responsibility for incorrect answers and may need to declare a classroom “do-over” if many students answer incorrectly. I usually include 3 to 4 clicker questions in a class period, so what I settled on is that students could earn up to 2 points for correct responses to clicker questions in each class period where I use clicker questions. While I use them in most class meetings, some class meetings are primarily activity-based and may not incorporate clicker questions (although clicker questions can sometimes be useful in the closure part of a classroom activity as a way to make sure that students gained the understanding that the activity was designed to develop). Of course, giving students credit for correct answers assumes that you are not using the low-tech version of clicker questions described below, because that doesn’t keep track of individual student responses to particular questions.


Teachers can implement clicker questions in many ways. For example, ABCD cards can be used for clicker questions if you are teaching in a low tech or no tech environment:

With ABCD cards, each student has a set of cards (colored cards make it easier to get a quick read on the responses). The instructor poses a question, provides time to think, and then has each student hold up the card corresponding to the answer. By doing a quick look around the classroom, the instructor gets a general idea of how the students responded.

The downside of ABCD cards is that there is no way to collect and display the responses or to record the responses for the purpose of awarding credit for correct responses. Students can also see which students chose which answers, so the responses are not anonymous to other students. In a big lecture class, it is also difficult for the instructor to “read” the class responses.

Physical clickers are small devices that students purchase. Student responses are picked up by a receiver, and once polling is closed, responses can be summarized and displayed immediately to provide quick feedback to both students and instructor. Several companies market clickers with educational discounts, such as TurningPoint (here) and iClickers (here).

There are also several web apps for polling that can be used for clicker questions if your students have smart phones or web access. A free app that is popular with teachers is Kahoot! (free for multiple choice; more question types, tools and reports for $3 or $6 per month, here). Another possibility is Poll Everywhere (free up to 25 students, then $120 per year for up to 700 students, here).

And finally, Zoom and some classroom management systems have built-in polling. I have used Zoom polls now that I am delivering some instruction online, and Zoom polls allow you to summarize and share results of polling questions. Zoom also has a setting that tracks individual responses if you want to use it for the purposes of assigning credit for correct answers.


I think incorporating good clicker questions has several benefits. It provides immediate feedback to students (they can see the correct answer and how other students answered), and it has changed the way that I interact with students and how students interact with the course. Students are more engaged and enjoy using this technology in class. They pay more attention because they never know when a clicker question is coming, and they want to get it right. And if they get it wrong, they want to see how other students answered.

But one important final note: If you are going to use clicker questions, it is really important to respond to them and be willing to modify instruction based on the responses. If students see that many did not get the right answer and you just say “Oh wow. Lots of you got that wrong, the right answer is C” and then move on as if you had never asked the question, students will be frustrated. On the other hand, if you respond and adjust instruction, students see that you are making student understanding a top priority!


P.S. LOCUS (Levels of Conceptual Understanding in Statistics, here) is a collection of multiple-choice and free-response assessment items that assess conceptual understanding of statistics. Items have all been tested with a large group of students, and the items on the website include commentary on student performance and common student errors. Designed to align with the Common Core State Standards, they follow the K-12 statistics curriculum. Because there is a great deal of overlap in the high school standards with the current college intro statistics course, there are many items (those for level B/C) that are usable at the college level.

ARTIST (Assessment Resource Tools for Improving Statistical Thinking, here) is a large bank of multiple-choice and free-response assessment items, which also includes several scales that measure understanding at the course level and at the topic level. At the course level, the CAOS test (Comprehensive Assessment of Outcomes for a First Course in Statistics) consists of 40 conceptual multiple-choice questions. The topic scales are shorter collections of multiple-choice questions on a particular topic. There are more than 1000 items in the item bank, and you can search by topic and by question type, select items to use in a test, and download them as a Word document that you can edit to suit your own needs. You must register to use the item bank, but there is no cost.