2015 SmartBenchmarkingStartswithKnow

From GM-RKB

Subject Headings: Organizational Benchmarking.

Notes

Cited By

Quotes

Abstract

Comparing your organization to peers - also known as benchmarking - lets you understand how you're doing, identify performance gaps and opportunities to improve, and highlight peer achievements that you could emulate, or your own achievements to be celebrated. The problem is that peer comparison as generally practiced suffers from tunnel vision and so misses critical insights, to everyone's detriment.

It's almost universal practice that the benchmarker chooses one or two organizational goals, then picks a few key metrics (key performance indicators) relevant to those goals, and finally selects several peer groups from a limited set. The outputs are then the mean, median, distribution, or high-percentile values for those peer groups on those metrics. The conclusion is that the organization may or may not have a problem, which may or may not be addressable.
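As a sketch, this conventional procedure reduces to computing summary statistics for one metric over one fixed peer group. The scores below are hypothetical, purely to illustrate the shape of the output:

```python
import statistics

# Hypothetical KPI values for a fixed, pre-chosen peer group.
peer_group_scores = [62.0, 71.5, 55.0, 80.0, 68.5, 74.0, 59.5]
our_score = 58.0

mean = statistics.mean(peer_group_scores)
median = statistics.median(peer_group_scores)
# Approximate high-percentile value via nearest rank in the sorted list.
p90 = sorted(peer_group_scores)[int(0.9 * (len(peer_group_scores) - 1))]

print(f"mean={mean:.1f}, median={median:.1f}, 90th-pct~{p90:.1f}")
print("below median" if our_score < median else "at or above median")
```

The output is exactly the kind of conclusion the article describes: the organization "may or may not have a problem," relative to the one peer group the benchmarker happened to pick.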

The selection of peer groups is crucial to insightful benchmarking. For example, suppose you want to compare the U.S. against other nations. What would be the right peer group? Here are some that make sense: democracies; the Anglosphere; constitutional republics; large countries; developed countries; OECD or NATO members; the western hemisphere; non-tropical countries; largely monolingual countries; business-friendly economies; and even baseball-playing nations.

As noted, there are always many peer groups to choose from, so how does one know which benchmarks will provide the most valuable insights? Let’s look at the healthcare industry as an excellent example of how benchmarking can be impactful, especially with the increased focus on improving health outcomes while simultaneously lowering costs. Benchmarking is increasingly applied to healthcare due to advances in data collection, governmental regulation and dissemination of data, and the potential impacts on well-being and national spending. The biomedical search engine at PubMed.gov reports only 3 records from 1992 on hospitals and benchmarking, but reports 1,412 from the decade 2005-2014. This makes it a useful case study for our purposes here.

Here are two accurate peer comparisons that are arguably insightful (data from Medicare.gov):

Anthony Community Hospital in Warwick, NY has the lowest average time patients spent in the emergency department before they were seen by a healthcare professional (10 mins) of all the 32 church-owned hospitals in the mid-Atlantic.

Pawnee County Memorial Hospital in Pawnee City, NE has the 9th-most patients who reported that their pain was always well controlled (88%) of the 490 hospitals that are government-owned, provide emergency services, and are a critical access hospital.

Note the peer groups: (1) church-owned; mid-Atlantic; and (2) government-owned; emergency services; critical access.

Considering more peer groups leads to uncovering more valuable benchmarking insights. But insightful peer groups can be formed, not only by picking non-numeric (aka symbolic) attributes, but also by dynamically determining numeric thresholds. Here’s a revealing (and true) insight that contains a dynamically formed peer group:
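One way to picture such a dynamically formed peer group is as a filter whose numeric threshold comes from the focal organization's own value. The sketch below uses made-up hospital records and percentages; it is an illustration of the idea, not the actual engine's method:

```python
# Hypothetical hospital records (name, % recommend, % "always quiet at night").
hospitals = [
    {"name": "A", "recommend_pct": 90, "quiet_pct": 60},
    {"name": "B", "recommend_pct": 86, "quiet_pct": 55},
    {"name": "Focal", "recommend_pct": 85, "quiet_pct": 41},
    {"name": "C", "recommend_pct": 80, "quiet_pct": 70},  # excluded: below threshold
]
focal = next(h for h in hospitals if h["name"] == "Focal")

# Dynamic peer group: hospitals with at least as many "would recommend"
# responses as the focal hospital -- the threshold comes from its own value.
peers = [h for h in hospitals if h["recommend_pct"] >= focal["recommend_pct"]]

# Within that dynamically formed group, find the worst on quietness.
worst = min(peers, key=lambda h: h["quiet_pct"])
print(worst["name"])  # prints "Focal": fewest "always quiet" reports in its group
```

The Stanford insight quoted next has exactly this structure: a threshold (85% would recommend) defines the 344-hospital peer group, and a ranking on a second metric (41% quiet at night) supplies the finding.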

None of the other 344 hospitals with as many patients who reported YES, they would definitely recommend the hospital (85%) as Stanford Hospital in Stanford, CA also has as few patients who reported that the area around their room was always quiet at night (41%). That is, among those 344 hospitals, it has the fewest patients who reported that the area around their room was always quiet at night.

This is a provocative insight. One can imagine a hospital CEO reacting in one of these ways:

“We’re profitable, prestigious, and have great weather. What’s a little nocturnal noise?”

“There’s been night-time construction next door for the last year, and it’s almost done, so the problem will solve itself.”

“I can’t think of any reason why we should be at the bottom of this elite peer group. I’ll forward this paragraph to our chief of operations to investigate and report back what may be happening.”

This peer-comparison insight wouldn’t be found by today’s conventional benchmarking methods, which instead report something like this: the average value for this quantity among 309 California hospitals with known values is 51.5 percent, with a standard deviation of 9.5 percent, so Stanford Hospital is about one standard deviation below average. The reader can judge which of the two insights is the more action-provoking, not just for the single individual in charge, but for the entire team that needs to be roused and led to act on and address performance gaps.
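The conventional report's claim is simple arithmetic; a quick check using the percentages quoted above confirms the "about one standard deviation below average" reading:

```python
# Figures from the conventional benchmark report quoted in the text.
mean_pct = 51.5      # average "always quiet at night" among 309 CA hospitals
sd_pct = 9.5         # standard deviation
stanford_pct = 41.0  # Stanford Hospital's value

z = (stanford_pct - mean_pct) / sd_pct
print(round(z, 2))  # prints -1.11: about one standard deviation below average
```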

So far, we’ve used specific examples to illuminate what tunnel vision overlooks. As our last step, let’s report the results of a few real-world software experiments.

The website hospitals.onlyboth.com (disclosure: I’m a co-founder and CEO of OnlyBoth) showcases the results of applying an automated benchmarking engine to data on 4,813 U.S. hospitals described by 94 attributes, mostly downloaded from the Hospital Compare website at Medicare.gov. A combinatorial exploration of peer comparisons among the 4,813 hospitals turns up 98,296 benchmarking insights that survive the software’s quality, noteworthiness, and anti-redundancy filters, or about 20 per hospital. In this hospitals experiment, insights were required to place a hospital in the top or bottom 10 within a peer group of sufficient size.

The results contain 522 different peer groups, formed by combining the hospital dataset's 24 non-numeric attributes in various ways. As noted above, the number of possible peer groups is much larger if one counts not the attributes used, but the diverse ways to combine attribute values. For example, consider a geographic peer group using "state" as an attribute. The attribute "state" can either be used or not, giving two alternatives there, but the number of state values is 50, implying many more alternatives. The number of peer groups grows larger still when accounting for dynamically formed peer groups based on numeric thresholds.
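A rough sense of the combinatorics comes from enumerating attribute subsets. The attribute names below are illustrative assumptions, not the dataset's actual 24 non-numeric attributes, and the count covers only which attributes are used, before multiplying by attribute values or numeric thresholds:

```python
from itertools import combinations

# Hypothetical symbolic (non-numeric) hospital attributes.
symbolic_attrs = ["ownership", "state", "emergency_services", "critical_access"]

# Every non-empty subset of attributes defines a candidate peer-group scheme.
peer_group_defs = [
    combo
    for size in range(1, len(symbolic_attrs) + 1)
    for combo in combinations(symbolic_attrs, size)
]
print(len(peer_group_defs))  # prints 15, i.e. 2^4 - 1 non-empty subsets
```

With 24 attributes the same enumeration yields 2^24 - 1 candidate schemes, which is why quality, noteworthiness, and size filters are needed to prune the search down to the 522 peer groups that survive.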

Of course, the engine explored more peer groups than appear in the end results, which are those found to be large and noteworthy enough to bring to human attention. Also, each peer group appears in many insights by combining it with the available metrics. On average, each of the 522 peer groups enables over 900 individual hospital insights, by further combining each peer group and metric with different hospitals.

As this number of potential results shows, software automation is a key tool for ending tunnel vision. Without automation, there aren’t enough people and time in the world to explore what’s outside the tunnel, select the best insights, and bring them to human attention. And yet tunnel vision in benchmarking is still widespread today, and too many organizations are missing out on a vast number of noteworthy and action-provoking insights that could help them improve.

References


Raul Valdes-Perez. (2015). “Smart Benchmarking Starts with Knowing Whom to Compare Yourself To.”