No Data Left Behind
In my work as a learning measurement consultant, I worked with a Fortune 100 company (let's call them Big Software) that was delivering workplace harassment training to its employees. Pretty standard stuff, and it surprised me that they also brought in a measurement guy (me). I've created strategies for measuring the impact of a lot of training, including compliance training. But compliance training falls into a weird category of things to measure: it usually increases the reporting of non-compliance, which most organizations don't expect. When you think about it, though, it makes sense. You teach (or remind) a bunch of people about the behavior that constitutes harassment, and some of them start to see harassment they hadn't seen before. Others aren't sure of the boundaries and start reporting things that might be harassment. Either way, reporting goes up.
For Big Software, my client was an attorney (always interesting) rather than a business leader. So I talked with this corporate attorney about the kinds of things we would want to measure to understand whether the workplace harassment training had been effective. Obviously we'd measure reporting before and after, and I also recommended a post-training survey asking participants how well they understood how to apply what they'd learned.
Big Software's attorney was not at all in favor of this approach, because if someone reported they didn't understand the training, that could undermine Big Software's ability to enforce the policy. In this instance, it was more legally defensible to say "we trained them and assume they understood it" than to say "we trained them, and 80% reported understanding it, leaving 20% who didn't." I understood his point, and it reminded me why corporate life can be soul-crushing (dealings with Finance and Legal sometimes leave me feeling that way).
However, the metrics geek in me was really bothered by the deliberate decision to avoid that data (clearly it bothered me; it's been years and I can recall the phone call as if it happened yesterday). So I came away from that experience, evangelist-like, with a commitment not to hide from data because I feared what it would tell me. Since then, I've been surprised at how often organizations don't even recognize that they're avoiding data.
A customer service contact center I worked with didn't track transferred calls because, in the leaders' view, the representative didn't do any work: they just passed the call along to the appropriate department. So I pulled samples of transferred calls to share with those leaders and let them hear how the representative asked questions and diagnosed the problem before sending the caller to the right department. That first representative was the customer's first experience in trying to solve their problem, even though the call was transferred. Sure, transfer data should go into a different category, but it shouldn't have been excluded.
I also worked with a team in a manufacturing organization that applied a protective finish (basically paint) to machine parts. In helping them measure their new-hire training, I refined the criteria for evaluating correct application of that finish, including how much paint was wasted. It turned out that the team training the new hires didn't care much about how much paint was wasted during training, but the environmental and safety team absolutely did. It took real time and money to safely manage the chemicals in the waste collection drains, so reducing wasted paint, both during and after new hires were trained, represented measurable savings for the company.
None of these were deliberate choices to hide mission-critical data (well, maybe the attorney's was, but he would disagree), yet all of them limited understanding of the data, and therefore of the real situation. So my question: What data does your organization avoid looking at?