In which I give some numbers on the amount of reviewing that I do on an annual basis.

How Much Reviewing?

By Niklas Elmqvist, University of Maryland, College Park

Recently, I’ve been spammed by the website Publons, which purports to be this place where I can make my peer reviews “work for me”. It seems like Publons allows you to register the peer reviews you write in a way that is certified while still preserving the anonymity of the review process. This would allow you to get credit for the reviewing you do, similar to (although certainly not weighted as heavily as) the papers you publish, the grants you win, and the courses you teach. At least, that’s the theory, but I actually had to unsubscribe from their emails not once but twice as they were getting obnoxious. Besides, I somehow feel that peer review is this silent cross that we academics have to bear as part of doing business.

Now I’m going to turn around and do the exact opposite, which is to explain precisely how big the cross I bear is. I am fully aware of how ironic this is.

Nevertheless, I feel that it is worthwhile divulging this type of information, not necessarily because I want to beat my own chest about it (which, let’s face it, is part of the answer), but also to give other scholars a concrete data point on how much reviewing one person in their community is doing. This may help people calibrate their own efforts and may, ultimately, result in more people being willing to review papers.

As an aside, this post was inspired by Jean-Daniel Fekete, my postdoc mentor, who once upon a time would post the amount of reviewing he did every year on his website. That information is now gone from his website, and I am not entirely sure why, but I think his numbers were comparable to (or higher than) mine.

Anyway, I do a lot of reviewing—at least, that’s my own perception, and I’d be curious to hear numbers from other members of the community. A big part of the reason is that I’m now a (relatively) senior researcher, which means I get engaged in a lot of program committees and journal editorial boards. This last year, I’ve been on the program committees for ACM CHI 2018 as well as ACM DIS 2018. Both of these have resulted in a sizable set of papers I am responsible for as primary or secondary reviewer. In addition, I am an associate editor for IEEE Transactions on Visualization and Computer Graphics (TVCG), the Information Visualization journal (IVS), and the International Journal of Human-Computer Studies (IJHCS). I was also papers chair for InfoVis 2017 (as well as 2016); even if that role typically does not mean writing reviews directly, I did have to read a lot of them to make final decisions. Finally, I served on a couple of NSF review panels; even if these are not papers, it is still peer review, so I am counting them.

For the year 2017, this is what this boiled down to:

  • CHI 2018: 21 reviews (as 1AC and 2AC)
  • TVCG: 23 reviews (as associate editor)
  • IVS: 5 reviews (as associate editor)
  • IJHCS: 2 reviews (as associate editor)
  • 30+ additional one-off reviews (PacificVis, ToCHI, NSF, etc.)

As you can see, the bulk of the reviewing is done as a program committee member or as an associate editor. These numbers also do not account for the 170+ papers that were submitted to IEEE InfoVis 2017, many of which I had to read or at least skim, reviews included.

Now, maybe 2017 was special, and maybe other years were different? To find out, I searched for files in my review folder and counted the results by year (yes, yes, I know I am supposed to delete my reviews and papers after I complete them, but hey—I’m an academic, I never delete anything). Here are the numbers (starting from 2008, when I first became a faculty member):

  • 2008: 74 reviews
  • 2009: 57 reviews
  • 2010: 85 reviews
  • 2011: 109 reviews
  • 2012: 70 reviews
  • 2013: 84 reviews
  • 2014: 108 reviews
  • 2015: 107 reviews
  • 2016: 85 reviews
  • 2017: 83 reviews
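For the curious, the file-counting approach above is easy to reproduce. Here is a minimal Python sketch, assuming one file per review kept somewhere under a single folder and using each file’s last-modification year as a proxy for when the review was written (both of these are my assumptions, not details from the post):

```python
from collections import Counter
from datetime import datetime
from pathlib import Path

def reviews_per_year(folder):
    """Count files under `folder` (recursively), bucketed by
    last-modification year -- a rough proxy for when each
    review was written."""
    counts = Counter()
    for path in Path(folder).rglob("*"):
        if path.is_file():
            year = datetime.fromtimestamp(path.stat().st_mtime).year
            counts[year] += 1
    return dict(sorted(counts.items()))
```

Calling `reviews_per_year` on your own review folder yields a year-to-count mapping like the list above; if you name files by year instead, you would parse the filename rather than the timestamp.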

For 2018, I’ve already done 31, and we’re not into March yet. This is shaping up to be quite the year of reviewing.

Anyway, that’s an average of 86 reviews per year over a 10-year period. That number makes me exhausted just thinking about it. I’m also not the type to write one-line reviews, as I think that is an insult to authors given the amount of time they put into writing up their work. (And yes, I should probably chart this since I am, after all, a visualization guy, but I’m feeling lazy, okay?)

So, how much reviewing is enough? Well, given that most conferences and journals assign at least three reviewers per submitted paper, you should be reviewing at least three papers for every paper you submit. This number of course has to be adjusted for the number of co-authors (the reviewing load should be shared among the author team), but nevertheless, I think I’m safe in this regard, as I don’t think I am close to submitting ~29 papers per year (86/3). It should be easy to make this calculation for yourself, knowing how many papers you tend to submit (submit, mind you, not get accepted) per year.
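That back-of-the-envelope calculation is easy to do in code. A small sketch, where the function name and the example numbers are mine and purely illustrative; only the three-reviewers-per-paper rule of thumb comes from the reasoning above:

```python
def review_balance(reviews_done, papers_submitted, avg_coauthors=1,
                   reviewers_per_paper=3):
    """Fair-share heuristic: every submission consumes
    `reviewers_per_paper` reviews, and that debt is split
    among the co-authors. A positive result means a surplus."""
    owed = papers_submitted * reviewers_per_paper / avg_coauthors
    return reviews_done - owed

# Hypothetical example: 86 reviews/year, 10 submissions,
# 4 co-authors on average -> you owe 7.5 reviews, a healthy surplus.
print(review_balance(86, 10, avg_coauthors=4))  # 78.5
```

A sole author submitting one paper a year would owe exactly three reviews, i.e., `review_balance(3, 1)` comes out to zero.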

While on this topic, a pet peeve of mine is when people consistently turn down my review requests, often for some nebulous reason or, worse, for no reason at all. Everyone is entitled to decline a particular review request now and then for a valid reason, but if you turn me down three times in a row, I am going to remember.

In the end, it all boils down to science being this enormous, shared enterprise where every participant has to do their part. I am sure there are people out there who take advantage of the system by not doing much reviewing at all. For myself, I believe I do my fair share, and I encourage everyone to remember that the system only works when the majority pitches in. Think about that the next time you get a review request.

And if it is me asking you for a review, please don’t turn me down. At least not several times.