Thursday, August 7, 2008

Consumers of Research: HRM, the Journal, Ratings Story

I have been thinking about teaching an executive development class focused on learning how to be a good consumer of research. I developed this concept after working with executives who are using employee survey data and other HR / people metrics to make decisions. With the flurry of activity around what is being called 'evidence-based management,' employee engagement, and the need for ROI work, more and more data are being made available by larger numbers of people. I still think this may be a good idea, but I was recently caught off guard by the realization that maybe the academic community, of which I am a member, is not made up of good consumers of research either. We pride ourselves on using high-quality methods, reviewing others' work through the peer review and publishing process, training doctoral students on the latest research methods, and then updating ourselves through professional training and conferences. But what I learned over the last month is that we are lulled into the complacency of numbers just like everyone else. Let me tell you a story.

I am the editor-in-chief of what is called a bridge journal (that means it is designed for both academics and practitioners). The journal's official name is Human Resource Management. I have come to call it HRM, the Journal, because using just the name "human resource management" is confusing (people don't know whether you are referring to a department, the field of HRM, etc.). I've been the editor for 3 years now. When I started out, we struggled to get enough papers to fill an issue; now we have too many good papers and are booked out until 2011, so we are going to 6 (vs. 4) issues in 2009. The journal has won a number of awards since our first year in the editing role (I say "our" because it is the editorial team that really makes it happen), including a most-improved-journal award. We were lulled into thinking we were doing pretty well.

But along the way there was an odd factor for us to contend with: the ISI ratings. An organization rates journals based on the impact factor of their articles and publishes the number annually. The score is based on citations per article (that is, how many other articles reference the articles the journal published), and journals that are cited more earn a better score and are treated as more important, prestigious, etc. The problem is that we don't know where the citations really come from or which citations are matched to which article. The process is something of a mystery.
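For readers who have never looked behind the number, here is a minimal sketch (in Python) of the commonly published two-year impact factor formula, with made-up figures. This is only the textbook arithmetic; exactly how the citations are gathered and matched to journals is the part of the process that is not disclosed.

    # Textbook two-year impact factor, illustrated with invented numbers.
    # How ISI actually collects and matches citations is not public, so this
    # shows only the arithmetic, not their process.

    def two_year_impact_factor(citations_this_year, citable_items_prior_two_years):
        # Citations received this year to articles from the previous two years,
        # divided by the number of citable articles published in those two years.
        return citations_this_year / citable_items_prior_two_years

    # Hypothetical journal: 90 articles published over the prior two years,
    # cited 135 times in the current year.
    print(two_year_impact_factor(135, 90))  # 1.5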

Our journal (HRM) has had an incredibly high ranking for the last few years. In fact, it was always higher than I ever expected it to be. Because we are a 'bridge' journal, some of our articles would not be cited by other scientists; they are meant to be practical (as in, for practitioners). But hey, who questions a good number?

So this year, when the new ratings came out and our journal dropped from the top 10 (where it had been) to about number 50, our team was a bit shocked. We immediately tried to find out what we had done wrong. What mistakes were we making? We looked through the articles again, and we all decided nothing seemed dramatically wrong. We questioned whether the citation-gathering process was working, whether key word searches were correct, and we went down lots of paths.

Then I got an email from a professor named Anne-Wil Harzing. She is an expert in the journal ranking process and has a web site I would recommend to anyone interested in this topic (http://www.harzing.com/index.htm). She informed me that the drop in rankings for HRM (the Journal) may have been (as she said in her note to me) "her fault." Of course, it was not Anne's fault; rather, her investigations uncovered an error, and correcting that error is what changed the ratings.

What happened? The organization in charge of one of the most important ratings in the academic world made a mistake, and in my opinion, at least, a pretty big one. Our journal (remember, HRM) was mistakenly getting credit for articles that cited Human Resource Management Review, Human Resource Management Journal (there is one with the word "journal" in its title), and even books simply titled HRM. For I don't know how many years, the index we've been using to tell ourselves how good we were was wrong. So this year the number was corrected, and our journal took a big plunge in the rankings. Of course, the organization doing the rankings did not inform us. In fact, we contacted them on several occasions to find out what happened, and they never told us it was due to a mistake they had made.

Net: We have no idea if the ranking is going up or down.

Personally: I have little faith in this number. How do I know that it's accurate for all the other journals?

Learning: Why does a group of academics, trained in the methods of research, give such high credibility to a number that is calculated in secret? There is very little transparency in the way the number is computed; we just believe the rankings when they come out.

Implication: EVERYONE needs a class in "learning to be a consumer of research."

We use metrics all the time. We are fascinated by numbers, and we particularly like it when they make us look good. But when things go bad, we fuss and worry about why.

This is not just the story of our journal (HRM). This is a story of employee survey data, of stock prices, of best-place-to-work rankings, and I am certain you can add many, many more examples.

We all need to be better consumers of research. We need to ask the right questions when things are going well and when they are not. We need to know where numbers come from, and we have to take them in context.

Now, you may wonder, what am I going to do about the journal situation?

We are going to continue to do the work we've been doing. We think the journal is improving. We have a global presence, subscriptions are going up, subscriptions are being renewed at a higher rate, excellent authors are submitting high-quality papers to us, we are winning awards, and yes, we have now learned that we may not have as high a ranking on this particular 'score.'

I hope that after reading this story, some of my academic colleagues will do the same. In fact, in a world of open-source code, the Internet, and full disclosure, I want to ask why we are using a metric from an organization that will not provide full and complete disclosure of its process.

If these rankings and numbers are used for tenure decisions, reputation making, and more, then shouldn't we, who believe in peer review, open dialogue, and having the information we need to be consumers of research, insist that the process by which these numbers are calculated be completely and fully disclosed?

This story has implications for all sorts of metrics that we are using for people at work. Let me provide two examples that I have talked about in other blog posts (and that I have articles about on http://www.eepulse.com/):

Benchmarking data - when was the last time you asked where the data came from, or how old they are? Did you know that in most cases you are comparing your data to the average of a database that may be four years old? That's like taking your stock price today and comparing it to the average stock price of your competition over the last four years. We would never do that! (A small numeric sketch of this follows below.)

Average scores on employee surveys - who says every question should score higher than it did the year before? Did you know that for some employees, improving employee engagement survey scores lowers their performance?
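To make the benchmarking example concrete, here is a small hypothetical sketch (again in Python) of the stock-price analogy; every number in it is invented for illustration.

    # Comparing today's number to a benchmark built from years of old data.
    # All figures are made up.

    competitor_yearly_averages = [22.0, 26.0, 31.0, 38.0]  # four old years of competitor prices
    stale_benchmark = sum(competitor_yearly_averages) / len(competitor_yearly_averages)  # 29.25

    our_price_today = 33.0
    competitor_price_today = 40.0

    print(our_price_today - stale_benchmark)         # +3.75: we look "ahead" of the stale benchmark
    print(our_price_today - competitor_price_today)  # -7.00: against current data we are behind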

Question: If I teach the 'consumers of research' course, would anyone show up?




8 comments:

Anonymous said...

Theresa-

What a refreshing post. At my particular institution, we have gotten away from the "ranking" you are talking about and instead are using the Harzing impact score (average number of times an article in a journal is cited elsewhere) because there is such clear information on how that number is derived on the Harzing site. By using Google Scholar to calculate this number (and many others), it also includes citations of papers in book chapters and conference proceedings. While this is perhaps too inclusive for some top tier schools, others may see that it is a better indication of how much articles in a particular journal are being read and used by others.

Whatever the metric used, it is essential that we know how it is calculated so that we can trust the validity of the results.

Hold the class, Theresa - I would come!

-Melissa Cardon, Pace University

Anonymous said...

Hi Theresa,

I think this is a fascinating problem, and one that has also occurred very recently in an organisation I am working with. The organisation in question is going through a huge acquisition at the moment, and one of the measures being reported up through the organisation is the number of hits to a particular intranet site. The stats looked great - and were showing steady increases... but it seems the stats package was double- and treble-counting.
No-one questioned the figures until the drop occurred. Why would they?
We naively believed what we had been "sold" - as you had - without total transparency of the way it worked. You see the parallels. Now we are circumspect about using the figures at all, even though we now have faith in them.

It has to make us all question: what are the important gauges of success? Do we believe in what we are doing to a point where the quantitative results are less important?
For my money, we should use these quantitative results with caution, and spend more time, effort and attention on qualitative input.

As for a course, I think it would be a great module - but I question how transparent the title is as a way of articulating the content and value of the material you want to cover...I think the thread is "your reputation matters, so learn how to research your research"

Anonymous said...

Hi Theresa,

This is a remarkable story! However, I must say this doesn't really surprise me. A couple of years ago, we had to compute impact factors manually for a study on self-citations. We found that none of the calculated journal impact factors corresponded to the official ISI impact factors (Anseel, Duyck, De Baene & Brysbaert, 2004). Other researchers have also complained about this problem (“Errors in Citation Statistics,” 2002). To make the story complete, we found that about 30% of the ISI impact factor is determined by author self-citations. This again shows that we should be careful in drawing conclusions and making decisions on the basis of impact factors alone.

Frederik Anseel - Ghent University, Belgium

Anseel, F., Duyck, W., De Baene, W., & Brysbaert, M. (2004). Journal impact factors and self-citations: Implications for psychology journals. American Psychologist, 59, 49-51.

Errors in citation statistics: Opinion of the Nature editors. (2002, January 10). Nature, 415, 101.

Anonymous said...

Theresa,

It is good to hear what happened. I presented a couple of papers at Oxford and the Norwich Business School in June, and the drop in HRM's ranking came up in conversation. I agree that it is an excellent journal and that it continues to get better all the time.

In terms of the class that you talk about, there is a similar class that is often offered in Education Specialist programs. It could be a good course for Business Schools as well. I designed one for a new Master of Science in Leadership program that I hope will start in 2009.

Shawn Carraher

Anonymous said...

Dear Theresa,

Many thanks for your post. We are in the middle of a process of revising our journal rankings within our Social Sciences School at Tilburg University (the Netherlands). The creation of an A and B journal list for the HR discipline is quite challenging. The ISI impact ratings play a huge role in determining the ranking, and reading about the HRMJ mistakes made by Thomson puts our investigation in a completely different perspective.
Perhaps it's time for us - the global HR community - to construct our own top ranking list, for example in line with the British ABS journal list (Association of Business Schools), which takes into account multiple criteria (including ISI impact factors). The interesting thing about the ABS list is its recognition of the HRM-employment relationship discipline with its own high-quality journals. HRMJ (USA) is considered to be a top journal in that ABS ranking. Another interesting development is the emergence of SCOPUS. This tool enables us to study the impact of individual scholars (the h-index) as well as the impact of a journal. Finally, to make things even more complicated, we are confronted with the fact that there is another journal - Human Resource Management Journal (UK) - that is not yet registered as an ISI journal but is an important outlet for European scholars. I would be interested to hear from others about these issues.
Paul Boselie, Tilburg University (NL)

Anonymous said...

Dear Theresa,

This is in some ways a very welcome story to distribute to most academics. Of course, we seem to know this in the back of our minds but do not act upon it. In addition, I have noticed that young academics in particular appear not to be aware of the intransparency and secrecy of the ratings and take them for granted. I would suggest including this critique in PhD teaching.
I would welcome the proposed course; in fact, I may try to include this in our master's courses in HRM.

Erik Poutsma
Nijmegen School of Management. Radboud University, Netherlands

Anonymous said...

Thanks, Theresa. I am particularly interested in the observation about employee engagement lowering performance. Is there a citation for that?

Anonymous said...

Many thanks for so candidly re-framing the King's nudity!

Yes, there is a problem - as a matter of fact, many problems - with the statistics we rate journals by. Lack of transparency may be one of the problems, but that may stem from the core of our teaching business: ISI Thomson is private and not inclined to reveal the mystery of its trade.

Let me share with you another story. Some years ago I was puzzled by the rapid increase in Brazil's contribution to scientific journals. Most Brazilians were proud, and so was I. But ISI Thomson refused to disclose whether the steep increase in citations might also be due to increased coverage of Brazilian scientific journals, rather than only to more publications by Brazilian authors in the same number of international journals.

Journals that were not previously covered by ISI were not as keen on subscribing to the ratings. The growth of ISI's subscription revenues could only come from expanding frontiers, meaning covering more journals.

In a very business-like manner, ISI launched an aggressive marketing campaign to increase subscriptions to their ratings by covering more journals. The campaign included awards by ISI for the most-cited authors - frequently the same experienced professors who make the decisions on whether or not to subscribe to ISI's services!

Eventually the numbers will stabilize because there will be fewer new journals to cover, but the secrecy will remain; hopefully, our gullibility will not.

Best,

Alfredo Behrens
www.alfredobehrens.com
