Authonomy is a unique online community that connects readers, writers, and publishing professionals. It was conceived and built by editors at HarperCollins Publishers. It is in ‘beta’ at the moment, so the site is still being developed and refined. Authonomy invites unpublished and self-published authors to post their manuscripts for visitors to read online. Authors create their own personal page on the site to host their project – and must make at least 10,000 words available for the public to read. Visitors to Authonomy can comment on these submissions – and can personally recommend their favourites to the community. Authonomy then counts the number of recommendations each book receives and uses it to rank the books on the site. It also spots which visitors consistently recommend the best books – and uses that information to rank the most influential trend-spotters.
This ethos – that the community decides – is shared by PLoS ONE, which argues that too often a journal’s decision to publish a paper is dominated by what the editors think is interesting and will gain greater readership. They reason that these subjective judgements can lead to decisions which are frustrating to the author – and to bad science. Instead, they peer-review submissions and publish all papers that are judged to be technically sound. Judgements about the importance of any particular paper are then made after publication by the readership, who are the most qualified to determine what is of interest to them.
So is this the way forward for human factors and HCI review? I think so. We need a combination of non-judgemental review, as per PLoS ONE, with open paper review to encourage quality. Indeed, open reviews have been gaining popularity in other domains for some time, championed by the British Medical Journal:
…Schroter says the journal decided to introduce its policy of signed reviews based on the logic that signed reviews might be more constructive and helpful, and anecdotally, the editors at BMJ say that is the case. JAMA’s Rennie says he doesn’t need research data to tell him that signing reviews makes them better. “I’ve always signed every review I’ve ever done,” he says, “because I know if I sign something, I’m more accountable.” Juries are not anonymous, he argues, and neither are people who write letters to the editor, so why are peer reviewers? “I think it’ll be as quaint in 20 years’ time to have anonymous reviewers as it would be to send anonymous letters to the editor,” he predicts.
So here’s the test – for the next year I’m going to include the following on all reviews:
Open Review: I am tired of receiving unfair, unconsidered, short, or unhelpful reviews – I personally do not mind the rejection, but I do mind not being able to make my next submission better. It seems to me that obscuring reviewer information may be a partial explanation for these kinds of reviews, and this may not be the best way to conduct scientific research in the future – especially in the collaborative, cross-disciplinary domains within which I normally work. Therefore, in the interests of transparency, this paper was reviewed by Dr Simon Harper at the University of Manchester (UK). My objective was to give honest feedback to make your work better and my reviews more considered; I did not wish to say anything here which I would not say in a face-to-face discussion. If you feel this review was either unfair or unhelpful, then please let me know, and I will endeavour to do a better job next time.
I’m going to see whether my review quality increases, whether I get any feedback, and how the conferences and journals for which I review handle the statement.
Addendum – 16 March 2011
Thanks to Phil Lord for pointing to these open reviewing systems:
- Alison McCook (2006). Is Peer Review Broken? The Scientist, 20 (2), 26