
Measuring Success

Not nearly as tough as it sounds.

Let's Isolate Some Variables

12/10/2019

Is there any more enticing blog title than that one? I recently talked to the local chapter of the Association for Talent Development about training measurement, and the point about isolating variables came up, as it always does. Isolating variables means mathematically separating the effects of one thing from another. Many folks, including folks in the training industry, believe that if you can't isolate the relative contributions of multiple variables to an outcome, the measurement isn't worthwhile. Not surprisingly, I disagree.

All Measurement is Good Measurement

If you wonder what the big deal is, here it is in a nutshell. Causal relationships are all the rage (not to be confused with casual relationships, which can also be great). Which is to say, knowing for sure that Action A caused Outcome B lets you keep doing Action A (if it was good) or, for heaven's sake, never do it again (if it was bad). But in real life, Outcome B doesn't happen in a vacuum. It's influenced by Actions C, D, and E in addition to Action A. That makes it difficult to know whether Action A was the cause, or whether it had any real effect at all, amid Actions C, D, and E.

Hence the need to isolate the effects of all those Actions (variables). Take the successful launch of a new software product. If the launch is wildly successful (lots of people bought it), was it the marketing campaign, the sales training, or the design of the software? Was it something else entirely? The marketing team, the training team, and the development team are all very interested in their contribution to that success, and so each may work to isolate its impact from the others'.
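To make that concrete, here's a toy sketch (mine, not from any particular study) of how the math can separate overlapping contributions: fit a simple linear model where each coefficient estimates one driver's effect while the others are held steady. Every number and column name below is invented for illustration.

    # Toy illustration: "isolating" the contributions of several drivers
    # to one outcome with ordinary least squares. All numbers are made up.
    import numpy as np

    # Hypothetical monthly figures: marketing spend ($k), sales-training
    # hours delivered, product quality score, and units sold.
    marketing = np.array([10, 12, 8, 15, 11, 14, 9, 13], dtype=float)
    training = np.array([40, 35, 20, 50, 45, 30, 25, 55], dtype=float)
    quality = np.array([7.1, 7.3, 6.8, 7.9, 7.5, 7.7, 6.9, 8.0])
    sales = np.array([520, 540, 430, 640, 580, 600, 450, 670], dtype=float)

    # Design matrix with an intercept column.
    X = np.column_stack([np.ones_like(sales), marketing, training, quality])

    # Each coefficient estimates one variable's effect with the others
    # held constant -- that's the "isolation".
    coef, _, _, _ = np.linalg.lstsq(X, sales, rcond=None)
    _, b_marketing, b_training, b_quality = coef
    print(f"units per $1k of marketing: {b_marketing:+.1f}")
    print(f"units per training hour:    {b_training:+.1f}")
    print(f"units per quality point:    {b_quality:+.1f}")

With real data you'd want far more than eight rows, plus error bars, but the shape of the idea is the same.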

And it IS possible to isolate those variables from each other. This is one of my favorite examples of statistically isolating variables in a training program. It's from several years ago, and if you watch the video you'll note that it took about two years to conduct that statistical analysis. So when I say I disagree with isolating variables to measure impact, it's not that I think it's a bad idea. On the contrary, it's an AWESOME idea, provided the results justify the cost and time it will take to conduct the analysis.

Because let's be honest, experimental design is sexy. Just imagine the fun in comparing the outcomes of one group that got something new and another group that didn't! I learned about experimental design in high school, when I also learned what a petri dish was and that bread mold was inevitable. Handily enough, the theory applies to business as well. Want to see if your new sales training will increase sales? Train half the sales staff and wait a year to compare the results of the two groups. At the end of that year, when you find out that the training increased sales for the trained half (the experimental group) while the untrained half (the control group) saw no increase, you'll have a great presentation for senior leaders about the impact of your training. You'll also likely be looking for a new job, because you missed out on twice the sales improvement you actually got.
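For what it's worth, the comparison itself is the easy part once you have the two groups. Here's a minimal sketch with invented per-rep numbers, using a standard two-sample t-test (scipy's ttest_ind) to check whether the trained group's lift stands apart from the control group's:

    # Minimal sketch of the trained-vs-control comparison described
    # above. The per-rep sales-lift numbers are invented.
    from scipy import stats

    # Hypothetical % change in sales per rep, one year out.
    trained_lift = [12.0, 9.5, 15.2, 8.8, 11.4, 13.1, 10.7, 9.9]
    control_lift = [1.2, -0.5, 2.1, 0.8, -1.0, 1.5, 0.3, 0.9]

    t_stat, p_value = stats.ttest_ind(trained_lift, control_lift)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A tiny p-value says the gap between groups is unlikely to be
    # luck -- that's the payoff (and the cost) of withholding the
    # training from half your staff for a year.

The statistics are simple; the year of forgone sales is what's expensive.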

So if experimental design is great for isolating variables but isn't practical in a business setting, what do you do when something has changed and you want to know why?
  1. List the possible contributors. Did a process change? Is there new training or new software? Are there new employees? Has the market changed? Did a competitor show up or disappear?
  2. Use your gut. Don't get me wrong, I'm all for the math. But check out the territory before you start the journey. Look at the list you just created. Are there any flags, red or otherwise? Gut reactions get dismissed as imprecise, but they're usually based on real, internalized experience, so listen.
  3. Run some estimates. For everything on your list, what is the likely contribution, based on your aforementioned gut reaction? Did the new software contribute only about 10% because it wasn't implemented until the end of the month? Did your rockstar new employee try something no one else had tried? (See the sketch after this list.)
  4. Run a quick cost/benefit analysis. Now you've got a list of likely contributors and a gut estimate of each one's relative contribution. What you do next depends on the kind of decision you need to make. If you're making a high-dollar or long-term decision, spend some time working from the estimates toward truly isolating the relative effects. If you don't have a decision to make, or if the estimates got you everything you need, you're done. Keep running your business.
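If it helps, here's what steps 1 through 3 can look like as a back-of-the-envelope exercise. The contributors and percentages are made up; yours will come from your own list and your own gut.

    # Back-of-the-envelope triage: list candidate contributors with
    # gut-feel shares of the outcome, then sort. All values invented.
    contributors = {
        "new sales training": 0.35,
        "rockstar new employee": 0.25,
        "competitor left the market": 0.20,
        "new software (rolled out late)": 0.10,
        "seasonal bump": 0.10,
    }

    for name, share in sorted(contributors.items(),
                              key=lambda kv: kv[1], reverse=True):
        print(f"{share:>4.0%}  {name}")

    # If one item dominates and the decision is low-stakes, stop here.
    # Save formal statistical isolation for high-dollar, long-term calls.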
I often find that people want to apply statistical analysis (including isolating variables) when the decision they have to make doesn't call for that level of precision. Don't do that.
Greg is a business leader, writer, and all-around fun guy. His new book is Measuring Success: A Practical Guide to KPIs.
