Another ACL submission looms, another resubmission for me. I pampered my paper as best I could and let it go, hoping that
/dev/random will assign me good reviewers... In this blog post I will attempt to describe the peer review process in my field to non-academics, as accurately as I can. I hope hilarity will ensue.
So imagine that you are in the blooming field of professional bear impersonation. You studied for many years to become the best black bear, down to the finest details of chasing a skier down a ski slope and climbing trees. Now that you have perfected your craft, you are employed to develop new, more accurate methods of bear impersonation in your tiny subfield of black bear imitation. Your career progression depends on the new techniques you develop and present at the annual bear impersonation conference.
Now, conferences are a great place where you both present your fancy work and learn of others' techniques. You exchange knowledge with the goal of advancing the field for the good of ursidae-kind. However, there are many other researchers, just like you, working in their own tiny subfields, who also claim to have discovered the best places for salmon catching and the neatest of hibernation caves. Can you take their work at face value? Surely someone competent has to verify that those bear impersonators did their due diligence when writing up their research. This is where the reviewers come in, and you are one of them.
Every annual bear impersonation conference easily attracts 3500+ submissions, and each of those requires a thorough review by at least 3 other bear impersonators. Every person who submits to the conference is required to review 5-7 other works, which may all be in slightly different subfields. There are the strong white bear impersonators, the weirdos doing spectacled bear, the panda folks... You have some basic understanding of their work, thanks to the transferable skills you have gained as a black bear impersonator, but you are by no means an expert...
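For the quantitatively minded, the reviewing load above works out like this (a back-of-the-envelope sketch; only the 3500 submissions, 3 reviews per submission, and 5-7 reviews per reviewer come from the post, the rest is illustrative):

```python
# Rough arithmetic of the conference reviewing load described above.
submissions = 3500
reviews_per_submission = 3
total_reviews_needed = submissions * reviews_per_submission

reviews_per_reviewer = 6  # midpoint of the 5-7 range mentioned above
reviewers_needed = total_reviews_needed / reviews_per_reviewer

print(total_reviews_needed)  # 10500 reviews, all unpaid
print(reviewers_needed)      # ~1750 reviewers, each doing ~6 reviews
```

Ten thousand-plus reviews, every year, squeezed into people's free time.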
So you go to your assigned reviews, and the first work you have to go over is on the mating rituals of panda bears in the wild: cutting-edge research claiming that all other impersonators have been doing it wrong for the last decade. It all looks super interesting, but... you have no idea what the currently accepted take on panda mating rituals is. You would have to go and do a bit of research before you are qualified to judge whether this work has merit. That would take a couple of hours of hard work, but it will be worth it, right? Except you are not paid to do any of that. In fact, not only are you not paid, but you are expected to do this in your free time. And remember, this is just one of the seven reviews you have to do. Luckily, three of them are on black bears and you can review each in about 30-45 minutes, but the rest will take forever. And you are due that big picnic-basket-stealing deliverable that pays your salary tomorrow morning... It's going to be another 60-hour week.
Or not. You can just glance over the bits you don't understand, give some random remarks about the pitch of the roar being off by a few keys, and call it a day. The process is anonymous anyway, and most people skimp on their reviewer duties. You are sure that if you leave a lukewarm, non-committal review, someone else will pick up the slack. That someone is usually the area chair, who posts exasperated tweets like this one. It is also very much a zero-sum game. The review scores for each work are averaged, and those above some arbitrary threshold get into the conference. By bringing down the scores of other people's work, you maximise the chance of yours getting in.
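The averaging-and-threshold mechanism can be sketched in a few lines (a toy model; the threshold value and the example scores are made up, not any real conference's numbers):

```python
# Toy model of the "average the scores, cut at a threshold" process.
def accepted(scores, threshold=3.5):
    """A paper gets in if its mean review score clears the threshold."""
    return sum(scores) / len(scores) >= threshold

# An honest set of reviews: two enthusiastic, one lukewarm.
print(accepted([4, 4, 3]))  # mean 3.67 -> True, the paper gets in

# One strategically harsh review drags the same paper under the bar.
print(accepted([4, 4, 1]))  # mean 3.0 -> False, the paper is out
```

A single low score shifts the mean far more than it shifts the median, which is part of why one bad-faith review is so effective.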
To be fair, that last bit, while true, is practised by very few bad actors. Anywho, thank you for bearing with me thus far. This was very much the preface, describing how reviewing is supposed to work in the best possible scenario. In practice there are a few additional issues, most of which come down to the lack of free time:
- There is a real fetishism for using new methods to solve problems; the more convoluted, the better. Applying known methods to the same problems, even when more effective, is generally seen as less cool. SOME (not all, thankfully) reviewers have a negative bias towards such works.
- There is a strong bias towards positive results. If you have negative results, you are LIKELY (again, thankfully, not always) to get bad reviews. Because it makes purrfect sense for negative results to go unpublished and for some other poor soul to go down the same path for a few months.
- Reviewers see each other's identities. Senior researchers can sway junior ones with their scores, simply by virtue of being well known. If one famous person gives a bad score, the other reviewers (and the area chair) are much less likely to argue, even though famous people are usually extremely busy and therefore less likely to have paid close attention to the work they were reviewing.
- It takes just one bad review to sink your paper. It is much easier for people to accept that they have missed something and the paper is worse than they thought than the other way around. I don't know why, don't ask me.
Those are all heuristics employed by busy people who simply don't have enough time (never mind the non-existent financial incentive) to read a paper and its related work in detail and decide whether it is of high quality.
Not everyone does this. I know plenty of people who put multiple hours into reviewing a paper, ask for second opinions, and follow up on related work... And this is why it hurts even more when you get a very generic negative review from someone who obviously spent much less time reviewing than you do.
Let me share some of the reviews I have gotten in the past:
- "Paper have much grammatical mistake, hard read" - I got that one recently. I am by no means the greatest of writers, but the other two reviewers thought the paper was very well written. The grammar of the reviewer who wrote that just added insult to injury.
- "Many people could have done this." - Very true, many people could've thought to do this. But they didn't.
Throughout my short, sad academic career I have received numerous conflicting reviews, and they have overwhelmingly been resolved in favour of rejection. This is incredibly frustrating, because while my work is very far from perfect, short, negative, non-informative reviews are in no way helpful for improving it.
You are supposed to contact the area chair when you spot a reviewer who has obviously skimped on their duties, but the reality is that the area chair is even more overworked and just as unpaid as everyone else, and they simply don't have the time to examine every single case manually. Most of the time they resort to yet another heuristic...
So yeah. This is academia. How can we make things better? There are many things we can do, but I think we should start by addressing the expectation of free labour. After all, everyone already pays to get their work published. Admittedly, it is very easy to shout into the
/dev/null that is the Internet and complain about things being unfair. In reality, the situation is very complex, with many stakeholders. Bringing about a change that satisfies everyone is difficult, and this is why the status quo is so stable.
That's it from me. If you are interested in reading more about bad reviewer heuristics, check out this excellent blog post on the issue by Anna Rogers. Enjoy the rest of your day/night, and I hope things will take a turn for the better.