Saturday, August 23, 2008

Research, Reputation, and the Power of One Number

I wanted to thank everyone who took time to write a comment about the "consumer of research" story. It was interesting to speak with my colleagues who attended the Academy of Management meeting this month about the "HRM rankings case."

Reactions ranged from shock that this could happen (that a ranking system so important to academics can be so fraught with error), to the view that depending on one number at all is silly, to congratulations that the journal is doing so well overall, to a few people saying they would never publish in HRM again because the impact factor is not what they thought it was (that last group was about 1% of the many people I spoke with, by the way).

For those interested in HRM in particular, I can tell you that we have recalculated the last 7 years, and as many of us had sensed, the numbers have been going up and continue to rise. However, HRM is not ranked higher than AMJ, and you know what, that is not our goal. If anyone wants more information (I did get this request from several colleagues), please write to me directly, and I can share what I know.

HRM is a bridge journal, and we will continue to pride ourselves on providing high-quality 'bridge' material. This means we will publish papers that are rigorous and that cover research with value in the real world. We will publish manuscripts that bring new learning to the table, be it in the form of case studies, thought pieces, literature reviews, exploratory studies, or original, theory-based research. Papers that are not research focused will 'bridge' to research by suggesting ideas for researchers, and research studies will include sections on implications for practitioners.

We will stay true to our mission, and our team is committed to doing the best it can. Our reputation is built on more than just one number (thank you to Andrea Wyatt-Budd for her comment about reputation; if you have not read it, please do).

The lesson for me is to be a better consumer of research. None of us should be lulled into complacency because we are depending on a number that makes us look good. I love data, and so do a lot of people in my network. But as I say when doing business at eePulse, it's not the data themselves that really matter. Data can be used to start high-level dialogue, and it is from the data and the dialogue together that true learning is derived. The search for a magic number will always be just a search, because magic numbers do not exist.

Numbers are helpful to us because they make us ask questions, help us calibrate results, and provide learning. But alone, they are nothing.

Maybe we should think about this as we lead, consult, and teach.

Think about the single quarterly stock price number, the annual performance review score, the yearly employee survey benchmark, or the customer service statistic reported once every six months. What are you doing with these data? Are they being used to engage in action-focused dialogue? Or are they used for something else, and is that other purpose perhaps dysfunctional in some way?

Maybe it's time to rethink how we teach, lead and consult.

I am going to start by developing the 'consumers of research' class. It may be part of a bigger program, or it may be a stand-alone course. Either way, I'm pretty convinced that it can be a very useful addition to our learning, and I know that I would benefit from putting it together.

If you have ideas for content, please continue to let me know.


Thursday, August 7, 2008

Consumers of Research: HRM, the Journal, Ratings Story

I have been thinking about teaching an executive development class focused on learning how to be a good consumer of research. I developed this concept after working with executives who are using employee survey data and other HR / people metrics to make decisions. With the flurry of activity around what is being called 'evidence-based management,' employee engagement, and the need for ROI work, more and more data are being made available by larger numbers of people. I still think this may be a good idea, but I was recently caught off guard by the realization that maybe the academic community, of which I am a member, is not a good consumer of research itself. We pride ourselves on using high-quality methods, reviewing others' work through the peer review and publishing process, training doctoral students in the latest research methods, and even updating ourselves through professional training and conferences. But what I learned over the last month was that we are lulled into the complacency of numbers just like everyone else. Let me tell you a story.

I am the editor-in-chief of what is called a bridge journal (that means it is designed for both academics and practitioners). The journal's official name is Human Resource Management. I have come to call it HRM, the Journal, because the name "human resource management" on its own is confusing (people don't know if you are referring to a department, the field of HRM, etc.). I've been the editor for 3 years now. When I started out we struggled to get enough papers to fill an issue; now we have too many good papers and are booked out until 2011, so we are going to 6 (vs. 4) issues in 2009. The journal has won a number of awards since our first year of editing (I say 'our' because it is the editorial team that really makes it happen), including a most-improved-journal award. We were lulled into thinking we were doing pretty well.

But along the way there was an odd factor for us to contend with: something called the ISI ratings. An organization rates journals based on an impact factor and publishes the number annually. The measure is based on citations per article (how many other articles reference the articles the journal published), and journals that are cited more (more citations per article) get a better score and are then considered more important, prestigious, etc. The problem is that we don't know where they really get the citations from or which citations are matched to which article. The process is something of a mystery.
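To make the arithmetic concrete, here is a minimal sketch (in Python) of the kind of ratio involved, assuming the standard two-year impact factor definition: citations received in a given year to articles published in the previous two years, divided by the number of articles published in those two years. The journal and all the figures below are invented for illustration.

# Minimal sketch of a two-year impact factor calculation.
# All figures are invented for illustration only.

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    # Citations in year Y to items from years Y-1 and Y-2,
    # divided by the number of articles published in Y-1 and Y-2.
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical bridge journal: 90 articles published in 2006-2007,
# cited 150 times during 2008.
print(round(impact_factor(150, 90), 2))  # prints 1.67

The division itself is trivial; the story that follows is about the numerator. If citations meant for similarly named journals end up attributed to the wrong title, the ratio looks better (or worse) than it should.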

Our journal (HRM) has had an incredibly high ranking for the last few years. In fact, it was always higher than I ever expected it to be. Because it is a 'bridge' journal, some of our articles would not be cited by other scientists, since they are meant to be practical (as in, for practitioners). But hey, who questions a good number?

So this year, when the new ratings came out and our journal dropped from the top 10 (where it had been) to about number 50, our team was a bit shocked. We immediately tried to find out what we had done wrong. What mistakes were we making? We looked through the articles again, and we all decided nothing seemed dramatically wrong. We questioned whether the citation-gathering process was working, whether keyword searches were correct, and we went down lots of paths.

Then I got an email from a professor named Anne-Wil Harzing. She is an expert on the journal ranking process and has a web site I would recommend to anyone interested in this topic (http://www.harzing.com/index.htm). She informed me that the drop in rankings for HRM (the Journal) may have been (as she said in her note to me) "her fault." Of course, it was not Anne's fault; rather, her investigations uncovered an error, and correcting that error changed the ratings.

What happened? The organization in charge of one of the most important ratings that the world of academics uses made a mistake. And in my opinion, at least, it was a pretty big one. Our journal (remember, HRM) was mistakenly getting credit for articles that cited Human Resource Management Review, HRM Journal (there is one with the word 'journal' in its title), and even books that are simply titled HRM. For I don't know how many years, the index we had been using to tell ourselves how good we were was wrong. So this year, the number was corrected, and our journal took a big plunge in the rankings. Of course, the organization doing the rankings did not inform us. In fact, we contacted them on several occasions to find out what happened, and they never told us it was due to a mistake they had made.

Net: We have no idea if the ranking is going up or down.

Personally: I have little faith in this number. How do I know that it's accurate for all the other journals?

Learning: Why does a group of academics, trained in the methods of research, give such high credibility to a number that is calculated in secret? There is very little transparency in the way the number is computed; we just believe the rankings when they come out.

Implication: EVERYONE needs a class in "learning to be a consumer of research."

We use metrics all the time. We are fascinated by numbers, and we particularly like it when they make us look good. But when things go bad, we fuss and worry about why.

This is not just the story of our journal (HRM). This is a story of employee survey data, of stock prices, of best-place-to-work rankings, and I am certain you can add many, many more examples.

We all need to be better consumers of research. We need to ask the right questions when things are going well and when they are not. We need to know where numbers come from, and we have to take them in context.

Now .. you may wonder .. what am I going to do about the journal situation?

We are going to continue to do the work we've been doing. We think the journal is improving. We have a global presence, subscriptions are going up and are even being renewed at a higher rate, excellent authors are submitting high-quality papers to us, we are winning awards, and yes, we have now learned that we may not have as high a ranking on this particular 'score.'

I hope that after reading this story some of my academic colleagues will do the same. In fact, in a world of open source code, the Internet, and full disclosure, I want to ask why we are using a metric from an organization that will not provide full and complete disclosure of its process.

If these rankings and numbers are used for tenure decisions, reputation making, and more, then shouldn't we, who believe in peer review, open dialogue, and having the information we need to be consumers of research, insist that the process by which these numbers are calculated be completely and fully disclosed?

This story has implications for all sorts of metrics that we are using for people at work. Let me provide two examples that I have talked about in other blog posts (and that I have articles about on http://www.eepulse.com/):

Benchmarking data - when was the last time you asked where the data came from, or how old they are? Did you know that in most cases you are comparing your data to the average of a database that may be 4 years old? That's like taking your stock price today and comparing it to the average stock price of your competition over the last 4 years. We would never do that! (A small sketch of this comparison follows after the second example.)

Average scores on employee surveys - who says every question should score higher than it did the year before? Did you know that for some employees, improving employee engagement survey scores lowers their performance?
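Here is a rough sketch of the benchmark-age problem from the first example, purely for illustration: the scores, years, and the little four-entry benchmark table are all invented, and the comparison logic is simply "this year's score versus an average built from older data," not any particular vendor's method.

# Illustration of comparing a current score to a stale benchmark average.
# All numbers below are invented for the example.

current_year = 2008
current_score = 3.9  # this year's employee survey score on a 1-5 scale

# A benchmark database whose entries were collected in prior years.
benchmark = [
    (2004, 3.4),
    (2005, 3.5),
    (2006, 3.7),
    (2007, 3.8),
]

benchmark_avg = sum(score for _, score in benchmark) / len(benchmark)
oldest_age = current_year - min(year for year, _ in benchmark)

print("Benchmark average: %.2f (oldest data point: %d years old)" % (benchmark_avg, oldest_age))
print("Current score %.1f vs. stale benchmark %.2f" % (current_score, benchmark_avg))
# Any gap here may say more about when the benchmark was collected
# than about how the organization is doing today.

That is the point of the stock price analogy: the comparison mixes a current number with an average of old ones.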

Question: If I teach the 'consumers of research' course, would anyone show up?