5 Ways to Spot Bad Information

April 10, 2013 Darcy Jacobsen



“You keep using that word… I don’t think it means what you think it means.”

I spend a lot of time slogging through research reports and studies. But before you feel too sorry for me, you should know—and this might be painfully obvious from the geek level of my posts—most of the time I actually enjoy it. Shh! I am an historian by training, so I’m no stranger to statistics and academic journals. I enjoy them! Usually. The exception to this rule is when I run across a report that I feel is misleading or poorly executed.

One of the things we all learn in academia is to separate the really useful reports from the not-so-useful. Usually this is a self-correcting mechanism, because the most truly awful research rarely makes it past reviewers to be published. But unfortunately, from time to time, what I can only describe as “dicey” studies slip through peer review and into the mainstream.

A recent example of this is a study that my colleague Derek Irvine blogged about yesterday over at Recognize This. The paper he spoke of came from Harvard Business School—a source we would normally expect to be above reproach. (I’m going to pick on that study a bit to make my points here—I hope you take that in the spirit in which I intend it.)

Here are five guidelines I use when evaluating research to determine whether it is worthy of inclusion here on our blog. I recommend running this sanity check on any research you encounter, to help you separate the good from the not-so-good.

1. Make sure the title reflects the research

“You keep using that word. I don’t think it means what you think it means” is just about my favorite line from The Princess Bride. Likewise, sometimes the title of a paper isn’t a good barometer of what the study actually examines. For example, the title of the Harvard study, “The Dirty Laundry of Employee Award Programs,” is catchy and punny (I do love a good pun), but ultimately a bit misleading. The study actually looks only at attendance programs, and a sort of odd hybrid at that, certainly not at standard employee award or reward programs. (As you know, using different types of reward programs—and following different best practices—yields very different results.) By referencing previous studies (most of which weren’t about attendance programs), the study sadly conflates a wide variety of programs and paints them all with the same brush. Useful if you’re looking at attendance programs? Perhaps. But as a study that reflects on the whole field, it is “Inconceivable!”

2. Make sure the conclusions are supported by the facts

When I used to grade undergrad papers at Boston University, I would run into this one a lot: students trying to draw significant findings from a weak data set. They would read about one king and then try to argue about “all kings.” To determine whether your source is over-reaching, look at what the study actually examined and compare that to the conclusions it drew. If you “can’t get there from here,” you probably have a paper you’ll want to take with a healthy dose of salt. In the case of the Harvard study, the authors examined conditions at one laundry plant and compared it to others, but they did not account for other possible reasons for differences or change: different management styles (one manager appears to have been a bit of a renegade), the fact that the plant they studied was unionized (none of the others were), or the fact that the program ran largely during the summer and autumn months (when people are far more likely to have absences). Make sure the conclusions match the data.

3. Make sure the study is statistically and methodologically valid

Make sure the data set is big enough and clean enough. What counts as enough will vary, but a survey ought to have a statistically meaningful number of responses, and a research study should follow the scientific method, with a randomized control group or a broad enough cross-section of data to be viable. A historical study should hold itself to similarly rigorous standards before drawing conclusions. A study that describes itself as “quasi-experimental” (as the Harvard study does), or that makes assertions the paper itself calls “speculative” or merely “theoretically appealing,” or that admits it “does not directly show” evidence for its own claims, should send up warning signs. In the case of the Harvard study, conclusions were drawn from an unauthorized program (executives cancelled it as soon as they discovered it) that was muddled by an incentive-pay scheme already in place, and by the fact that employees lost eligibility for the program if they had taken a legitimate vacation day in the previous month.
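To put a rough number on “statistically meaningful,” here’s a back-of-the-envelope sketch in Python using the textbook sample-size formula for estimating a proportion, n = z²·p(1−p)/e². (The function and the figures are purely illustrative: my own, not drawn from the Harvard study or any other source mentioned here.)

```python
import math

def min_sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Smallest number of survey responses needed to estimate a proportion
    to within +/- margin_of_error. z = 1.96 corresponds to 95% confidence;
    p = 0.5 is the worst case (it maximizes the required sample size)."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(min_sample_size(0.05))  # 385 responses for +/-5 points
print(min_sample_size(0.03))  # 1068 responses for +/-3 points
```

The takeaway: if a survey trumpets sweeping conclusions from a few dozen responses, that alone should make you skeptical.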

4. Make sure the researchers (and you) know what they are talking about

Sometimes people can successfully cross over into different subfields and retain credibility. But sometimes inexperience in a given subfield means authors draw conclusions that don’t translate effectively into the real world, or that don’t reflect the subtleties of the topic. The Harvard paper was written by two specialists on negotiation and pay-for-performance and one economics grad student. While I am sure all three are fine researchers in their own right, it is plain that none of them is deeply versed in the subtleties of the recognition/engagement field. For instance, they are talking about an attendance program where eligible employees enter their names in a hat for a random award drawing. This bears almost no resemblance to the truly effective recognition and engagement programs that firms like Towers Watson, Bersin/Deloitte, Hay Group and Gallup have been studying for years. Trying to use the former to cast doubt on the latter leads me to believe the writers’ understanding of the field is flawed. It is up to us as readers to be careful not to misapply a study that might seem related but really isn’t.

5. Don’t over-rely on any one study

Finally, no matter how great (or bad) you might think a paper is, or how much you want to believe (or disbelieve) the results, getting a second, third and fourth opinion is always recommended. Individual studies can be flawed for all kinds of reasons. It is important to make sure that your conclusions are drawn from a large and varied body of work.

I hope this set of guidelines helps you navigate the tricksy waters of research and find great data to support your initiatives!

 
