# A Man with One Watch Knows What Time it is…

There’s an expression that says, **“A man with one watch knows what time it is. A man with two watches is never quite sure.”**

This speaks to the introduction of uncertainty when you have more than one device measuring the same thing. The same uncertainty appears when you repeatedly measure the same thing with the same device. Because of this uncertainty, we have a statistical quirk called **Regression to the Mean.**

**What Is Regression to the Mean?**

Regression to the mean sounds complicated, but it’s actually a simple concept. It’s a statistical quirk that happens when you measure the same person or object over and over again. Because there is always an error associated with measurement, repeated measurements will produce a range of observed values.

Regression to the mean says that repeated measurements of a person or object will tend to approach the middle value, or mean.

Let’s use my digital bathroom scale as an example. Suppose I step on my scale and it tells me that I weigh 111.2 lbs. I step off my scale and step back on it to weigh myself again. This time it tells me that I weigh 110.8 lbs. These two measurements don’t equal each other because the bathroom scale has a measurement error associated with it. The bathroom scale is not perfectly accurate. This is true for all measurement devices.

Now let’s imagine that I have weighed myself repeatedly on the same bathroom scale 100 times and recorded what the scale showed me each time. I examine the 100 values and I see that my observed weight ranges from a low value of 110.0 lbs to a high value of 112.0 lbs. I also notice that I have many recorded values in between.

So, how much do I actually weigh? Because the scale has a random error associated with its measurements, I don’t know my true weight. What I do know is that the middle value of 111.0 lbs, which statisticians call the mean, is my most likely answer.
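We can sketch this scenario in a few lines of Python. The true weight and the size of the scale’s error below are assumptions chosen to match the story, not measured values; the point is simply that the mean of many noisy readings is our best single estimate.

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 111.0   # hypothetical true weight (unknown to us in practice)
SCALE_ERROR = 0.4     # assumed standard deviation of the scale's random error

# Simulate 100 readings from a scale whose error is random, not systematic.
readings = [random.gauss(TRUE_WEIGHT, SCALE_ERROR) for _ in range(100)]

print(f"lowest reading:  {min(readings):.1f} lbs")
print(f"highest reading: {max(readings):.1f} lbs")
print(f"mean reading:    {statistics.mean(readings):.1f} lbs")
```

Run it a few times with different seeds: individual readings bounce around, but the mean of all 100 stays very close to the true weight.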

**How Does Regression to the Mean Impact Our Measurement?**

Suppose I now pay a visit to my doctor and she is measuring my weight for the first time. My doctor is not aware of the 100 values I observed previously. I step on the scale and she observes a value of 110.0 lbs. She asks me to step on the scale the next day and the scale displays 110.4 lbs. She asks again on the third day and we see a value of 110.8 lbs. My doctor might conclude that I am gaining weight. I respond that it must be the effect of regression to the mean.

Because I am aware of my 100 observed weight values, I know that an observed weight of 110.0 lbs on the first day sits at the very bottom of my observed range. That means any measurement of my weight after that is very likely to be greater than 110.0 lbs. Likewise, if my observed weight in the doctor’s office on the first day had been 112.0 lbs, it would be very likely that the next measurement would be less than 112.0 lbs. The measurements will tend toward the middle value.
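A quick simulation makes this concrete. Using the same assumed true weight and scale error as before, we generate many pairs of readings and ask: when the first reading lands at the low extreme (110.0 lbs or below), how often is the next reading higher?

```python
import random

random.seed(0)
TRUE_WEIGHT, SCALE_ERROR = 111.0, 0.4  # same assumed values as before

higher_next = 0
trials = 0
for _ in range(100_000):
    first = random.gauss(TRUE_WEIGHT, SCALE_ERROR)
    if first <= 110.0:                  # first reading at the low extreme
        second = random.gauss(TRUE_WEIGHT, SCALE_ERROR)
        trials += 1
        if second > first:
            higher_next += 1

print(f"P(next reading higher | first <= 110.0 lbs) ~ {higher_next / trials:.2f}")
```

The conditional probability comes out near certainty: an extreme low reading is almost always followed by a higher one, with no weight gain required.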

**Now We Have a Data Dilemma**

When we measure things in the real world, we don’t usually know the range of values to expect. We are like the doctor who has only seen three repeated measurements. The real-world dilemma is that we don’t know whether my weight is increasing or whether my measurements are tending toward the mean.

**An Example from the Business World**

Let’s look at another example. Suppose I want to give a training class to 100 people. Before the class begins, I give each student a pre-test and gather their scores. A common practice to determine the effectiveness of training is to give students the same test after the training has ended and to make a comparison of the before and after scores.

Next, let’s assume that our training was completely ineffective and the students learned nothing. If we were to look at the people who had the 10 lowest scores on the pre-test, we might expect this group to also have the 10 lowest scores when we test them again. But that’s highly unlikely. Some of those students scored low partly through bad luck on the day, so on the re-test their scores are more likely to rise than fall, pulling the group average up. If we only examined this group, we might conclude that the training was effective.

Similarly, if we were to look at the people who had the 10 highest scores on the pre-test, we might expect this group to have the 10 highest scores on the re-test. But that’s also highly unlikely. Some of those students scored high partly through good luck, so on the re-test their scores are more likely to fall than rise, pulling that group’s average down. If we only examined this group, we might conclude that the training was not only ineffective, but detrimental.

The effect we are seeing is that the score of each group has regressed to the mean. The low scoring group appears to have improved and the high scoring group appears to have gotten worse.
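We can reproduce the whole training scenario in a short simulation. Every number below is an assumption for illustration: each student has a fixed “true ability,” both test scores are that ability plus random measurement noise, and the training teaches nothing at all. Even so, the bottom-10 group tends to improve and the top-10 group tends to decline.

```python
import random
import statistics

random.seed(1)
N = 100
TEST_NOISE = 5.0   # assumed standard deviation of test measurement error

abilities = [random.gauss(70, 10) for _ in range(N)]            # hypothetical true skill
pre  = [a + random.gauss(0, TEST_NOISE) for a in abilities]     # pre-test scores
post = [a + random.gauss(0, TEST_NOISE) for a in abilities]     # re-test: no learning

# Select the bottom 10 and top 10 students by pre-test score.
ranked = sorted(range(N), key=lambda i: pre[i])
bottom10, top10 = ranked[:10], ranked[-10:]

bottom_change = statistics.mean(post[i] - pre[i] for i in bottom10)
top_change    = statistics.mean(post[i] - pre[i] for i in top10)

print(f"bottom 10 average change: {bottom_change:+.1f}")  # tends to be positive
print(f"top 10 average change:    {top_change:+.1f}")     # tends to be negative
```

Nothing about the students changed between tests; the apparent improvement and decline come entirely from selecting groups by their noisy pre-test scores.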

**How Do We Solve Our Dilemma?**

Now that we are aware of the incorrect conclusions that can be drawn because of regression to the mean, how do we know whether this problem is present when we analyze our data? Without getting into the detailed equations, a quick guideline is:

*“The stronger your correlation is between two variables, the smaller your error is due to regression to the mean.”*

The good news? There are equations available that will allow you to both estimate and correct for this statistical quirk.
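One widely used form of that correction is worth sketching. If the correlation *r* between two measurements of the same quantity is known (or can be estimated), the expected second measurement is the observed value shrunk toward the mean by a factor of *r*. The numbers below reuse the bathroom-scale example and are illustrative assumptions, not results from the article.

```python
def expected_retest(observed, mean, r):
    """Predicted second measurement: the observed value shrunk toward
    the mean by the correlation r between the two measurements."""
    return mean + r * (observed - mean)

# The stronger the correlation, the smaller the pull toward the mean.
print(expected_retest(110.0, 111.0, 1.0))  # 110.0 -- perfect correlation, no regression
print(expected_retest(110.0, 111.0, 0.5))  # 110.5 -- halfway back to the mean
print(expected_retest(110.0, 111.0, 0.0))  # 111.0 -- no correlation, full regression
```

This is exactly the guideline above in formula form: at *r* = 1 the error due to regression to the mean vanishes, and as *r* weakens, the expected value slides further toward the mean.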

**Bio**

Tracey Smith is an internationally recognized analytics expert, speaker and author. Her hands-on consulting approach has helped organizations learn how to use data analytics to impact the bottom line. Tracey’s career spans the areas of engineering, supply chain and human resources. She is CPSM certified through the ISM. If you would like to learn more, please visit www.numericalinsights.com or contact Tracey Smith through LinkedIn. You can check out her books on her Amazon Author Page.