Performance measurements: Coming up short

My fascination with verifiable outcomes is hardly unique. After all, Americans take great pride in being able to measure, treating measurement as a demonstration of cause and effect. Data bolsters our confidence in evaluating why something works, or helps us defend a decision, irrespective of industry or the services provided. What would we do without the stars that appear adjacent to a movie, new book, or product, or the win-loss record that tells us how our team is performing? In business, we rely on Return on Equity or Return on Investment as our compass. Important legislation waits for the Congressional Budget Office's cost estimates, and a series of statistics published at regular intervals by our government sets legislative and implementation standards: the Consumer Price Index or, of personal relevance to most seniors, COLA, the Cost-of-Living Adjustment, which determines the level of Social Security benefits.

Few of us stop to assess the appropriateness or relevance, absolute or relative, of the indicators we reference. When was the last time you differentiated cardinal from ordinal rankings when committing resources? We love to make lists and rank our preferences, too: the top ten movies of the year, the top-selling products, or rankings similar to those shown to the right. We love top performers, or do we? Does "top" really point out quality? Does it elicit our support, or even further incent us to add our votes?
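The cardinal-versus-ordinal distinction is easy to see with numbers. A minimal sketch (the names and scores here are made up for illustration) shows how an ordinal ranking throws away the magnitude information a cardinal measure carries:

```python
# Hypothetical cardinal scores for three performers.
scores = {"A": 98, "B": 97, "C": 60}

# Ordinal ranking: sort by score and keep only the order.
ranked = sorted(scores, key=scores.get, reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, scores[name])

# The ranking says B trails A by one place and C trails B by one place,
# but the cardinal gaps (1 point vs. 37 points) are wildly different.
# Committing resources by rank alone hides that difference.
```

The same rank order would emerge whether C scored 97 or 6; the ordinal list alone cannot tell you which.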

Ranks are merely a means of sorting, so a rank's value is independent of, and bears no causal relationship to, effects. Frequently, order can fool us, when what matters is the context for the indicator and how we construct the indicator that establishes the order. Just because we can count something doesn't make it relevant. A poll among three-year-olds on the merits of healthcare reform would not be very instructive. I could tabulate the height and weight of all the kindergarteners on the first day of class and the last day of class. Can I honestly attribute the success of the teacher that year to the difference realized? Let's hope that no one is trying. But you would be surprised by the number of organizations attempting similarly foolhardy cause-and-effect linkages. What conclusion are we supposed to draw from the charts below?

Plenty of folks have devised causal profitability measures, especially non-financial performance indicators. If you're trying to employ meaningful measurements in your business, it's worth taking a look at the challenges summarized by two accounting professors in "Coming Up Short on Nonfinancial Performance Measurement," published in the Harvard Business Review, November 2003.

Bottom line? There are lots of caveats to our list-making, preference-rating behavior. Before declaring the task futile when your own methods fail to deliver results, I suggest putting each of your measures to a simple test: Do your underlying assumptions have validity? How well have they proved to mirror reality? Doing so will not only help you uncover the real value in your organization but also help you allocate resources to perpetuate both your values and the returns you are ultimately seeking.