Friday, March 15, 2013

Reviewing Season

It's reviewing season for many machine learning conferences (and for conferences in other fields as well).  My personal sample size is rather small, but I must say that the quality of reviewing has improved noticeably in recent years.

I've been especially pleased with the ICML reviewing system this year, which has taken steps towards becoming more of a hybrid conference/journal system.  Being conference-driven (rather than journal-driven) once yielded significant benefits in terms of rapid dissemination of new research results.  However, given the rapidly growing number of submissions (and the fact that everything is submitted and reviewed electronically these days), it no longer makes sense to constrain submission and reviewing to a few brief windows each year.  This paper by H. V. Jagadish provides a nice discussion of the benefits of a fully hybridized conference/journal system.

If I were to nitpick one thing, it would be the lack of information about how much expertise and interest each reviewer has in the various submissions.  For example, in ICML, most reviewers seem to express interest in only a small number of submissions, which often forces an Area Chair to "infer" whether a reviewer would be a good fit for any given submission.  Although most conferences require authors to enter keywords or subject areas for their submissions, this often does not provide well-calibrated information.

One approach that I've enjoyed using in the past is the Toronto Paper Matching System, which learns a matching between my areas of expertise and the technical areas covered by a submission.  I hope that more conferences adopt something similar in the near future, because it reduces the amount of human effort needed to find good matches between reviewers and submissions.
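To make the idea concrete, here is a minimal sketch of expertise-based matching using TF-IDF vectors and cosine similarity between reviewer profiles and submission abstracts.  This is only an illustration of the general flavor of the approach, not the actual model behind the Toronto Paper Matching System, and the reviewer/paper texts below are invented for the example.

```python
# Sketch: score reviewer-paper affinity via TF-IDF + cosine similarity.
# Assumes scikit-learn is installed; all data here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each reviewer is represented by text from their past papers;
# each submission by its title and abstract.
reviewer_profiles = {
    "reviewer_a": "online learning regret bounds bandit algorithms",
    "reviewer_b": "deep neural networks convolutional image recognition",
}
submissions = {
    "paper_1": "a regret analysis of stochastic bandit problems",
    "paper_2": "learning image features with convolutional networks",
}

# Fit a single vocabulary over all documents so the vectors are comparable.
vectorizer = TfidfVectorizer()
vectorizer.fit(list(reviewer_profiles.values()) + list(submissions.values()))

reviewer_vecs = vectorizer.transform(reviewer_profiles.values())
submission_vecs = vectorizer.transform(submissions.values())

# scores[i, j] = affinity of reviewer i for submission j;
# an Area Chair (or an automated assignment step) could rank by this.
scores = cosine_similarity(reviewer_vecs, submission_vecs)

for i, reviewer in enumerate(reviewer_profiles):
    for j, paper in enumerate(submissions):
        print(f"{reviewer} -> {paper}: {scores[i, j]:.2f}")
```

In practice one would want richer features and a learned model rather than raw cosine similarity, but even this simple setup captures why such a system can save Area Chairs a great deal of guesswork.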
