All Measurement is Good Measurement
If you wonder what the big deal is, here it is in a nutshell. Causal relationships are all the rage (not to be confused with casual relationships, which can also be great). Which is to say, knowing for sure that Action A caused Outcome B lets you keep doing Action A (if it was good) or, for heaven's sake, never do Action A again (if it was bad). But in real life, Outcome B doesn't happen in a vacuum. It's influenced by Actions C, D, and E in addition to Action A. The result is that it becomes difficult to know whether Action A was the cause, or whether it had any real effect at all, in the presence of Actions C, D, and E.
Hence the need to isolate the effects of all those Actions (variables). An example would be the successful launch of a new software product. If the launch is wildly successful (lots of people bought it), was it the marketing campaign, the sales training, or the design of the software? Was it something else entirely? The marketing team, the training team, and the development team are all very interested in demonstrating their contribution to that success, and so may work to isolate the impact of each factor from the others.
And it IS possible to isolate those variables from each other. This is one of my favorite examples of statistically isolating variables in a training program. It's from several years ago, and if you watch the video you'll note that it took about two years to conduct that statistical analysis. So when I say I disagree with isolating variables to measure impact, it's not that I think it's a bad idea. On the contrary, it's an AWESOME idea when the results justify the cost and time it will take to conduct the analysis.
Because let's be honest, experimental design is sexy. Just imagine the fun in comparing the outcomes of one group that got something new and another group that didn't! I learned about experimental design in high school, when I also learned what a petri dish was and that bread mold was inevitable. Handily enough, the theory applies to business as well. Want to see if your new sales training will increase sales? Train half the sales staff and wait a year to compare the results of the two groups. At the end of that year, when you find out that the sales training increased sales for the trained reps (the experimental group) while the untrained reps (the control group) saw no increase, you'll have a great presentation for senior leaders about the impact of your training. You'll also likely be looking for a new job, because by leaving half the staff untrained you missed out on twice the sales improvement you actually got.
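If you did run that experiment, the comparison itself is simple arithmetic. Here's a minimal sketch of how the two groups would be compared, using invented sales figures (all numbers here are hypothetical, not from any real study) and Welch's t-statistic, a standard way to ask whether the gap between two group averages is bigger than chance would explain:

```python
import statistics

# Hypothetical monthly sales figures (units sold) for two groups of reps.
# These numbers are invented for illustration only.
trained = [112, 98, 130, 121, 105, 118, 109, 125]    # got the new sales training
untrained = [95, 101, 88, 104, 92, 99, 97, 90]       # control group, no training

def welch_t(a, b):
    """Welch's t-statistic: the difference in means scaled by its standard error."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    std_err = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / std_err

t = welch_t(trained, untrained)
print(f"trained mean:   {statistics.mean(trained):.2f}")
print(f"untrained mean: {statistics.mean(untrained):.2f}")
print(f"Welch's t:      {t:.2f}")  # values well above ~2 suggest a real difference
```

With these made-up numbers the trained group averages about 19 more units sold, and the t-statistic comes out around 4.5, comfortably past the rough rule-of-thumb threshold of 2. That's the clean result the experiment buys you, and exactly what the narrative above says it costs: a year of withheld training for half the staff.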
So if experimental design is great for isolating variables, but isn't practical in a business setting, what do you do when something has changed and you want to know why?
Greg is a business leader, writer, and all-around fun guy. His new book is Measuring Success: A Practical Guide to KPIs.