We’ve all been there: some company you really hate runs an ad singing praises about all the awards they’ve gotten claiming they’re “number one!” in something. You frown at the TV, thinking, “Who the heck named them best of anything?” Now, a new study has found that the instinct to call shenanigans on those corporate awards is exactly right. Far from being meaningful recognitions of performance, those “awards” show exactly one thing: how much a company is willing to spend on marketing.
A research team at UMass Amherst recently published a study (PDF) looking at the origins of all those corporate awards. It turns out they’re all pretty much as fictitious and contrived as you’d think. Their case study? T-Mobile, which between 2011 and 2013 basically went all-out getting others to tout its supposed greatness.
During the three-year span the research team studied, T-Mobile received 47 “best-of” awards. Most of them were either “good place to work” type awards (for example, being the best call center in a given city) or related to overall corporate governance. Of these, pretty much all came from “self-nomination,” meaning T-Mobile found potential ratings organizations and then sent in its own application materials. That, in and of itself, is not particularly nefarious; it’s how major awards like the Emmys and Oscars work, too. But, the study found, everything keeps getting dodgier from there.
The programs T-Mobile submitted those application materials to had some major problems of their own. They “lack transparency in terms of the criteria used for evaluation,” the study found, meaning there’s no rubric or guidance out there that says what standard(s) an award-winning organization should meet. There’s also no independent verification of the data. T-Mobile (or another company) submits their own information, and nobody checks to see if it’s true.
That may-or-may-not-be-true data comes from surveys given to employees. Employee surveys can be meaningful, but only if they’re done in some very specific ways. In general, the study found that the surveys used in these instances did not adhere to well-known best practices in survey research. They typically have “low and unrepresentative response rates,” which makes their data questionable. And they’re administered by the employer, rather than by a neutral third party, which further undermines the validity of the results. (Not many people are really honest about displeasure with their companies and their working conditions when they think those comments can come back to hurt them.)
And as a bonus, the researchers found that “Many of the firms conducting national evaluations also provide consulting services to the very companies they are rating. This,” they observe, “creates a strong potential for conflict of interest.”
Yes, it would seem to, wouldn’t it.
Calling the validity of the awards further into question? Better-known, better-run rating organizations that use “more rigorous and objective measures” to evaluate the same criteria come to basically the opposite conclusions of the award-giving groups.
The study’s authors conclude, “These ratings and awards cannot be seen as objective measures of corporate performance. Instead, they are best understood as parts of marketing programs operating in the guise of contests and competitions.” They add, “Rather than evaluating actual company performance, the ratings are a better indicator of a company’s allocations of resources to win awards and its work to create a facade of good behavior.”
Makes you wonder what a company could do if they spent all that time and energy on improving their business instead.
by Kate Cox via Consumerist