Reviewing Guidelines
This document guides participants in the VRST paper reviewing process. It is directed at those who perform full reviews of papers (i.e., the secondary review coordinator (2AC), committee members, and reviewers) and those who write meta-reviews (the primary review coordinator (1AC)).
Our goal is to ensure the highest possible reviewing standard for VRST. Please read these guidelines before reviewing, and reach out to us if anything is unclear or you need more information.
Reviewing a Paper for VRST:
During the paper review process, review the paper using the following guidelines:
- The primary criterion for accepting or rejecting a paper is its contribution. We consider a paper of sufficient quality if it presents a strong, tangible contribution to a specific aspect of the authors’ research. The authors may present preliminary findings if they support the authors’ assumptions, but more complete findings are always welcome.
- Consider the value of the contribution and the merits of the paper to the VRST community. A paper may be an edge case yet still discuss a topic that is, for example, currently unknown to the community or extremely relevant at the time of submission.
- Don’t evaluate a paper by its length but by its contribution. A paper must adequately describe the contribution made by the authors: papers lacking in detail should be lengthened, and papers with excessive or repetitive detail should be shortened.
- Papers are acceptable if the authors can easily correct their weaknesses (e.g., missing references, minor spelling errors, fuzzy statements, missing minor implementation details, and the like).
- Note that the VRST review form will ask you to provide a single score (a 5-point rating, see below) judging the paper’s quality. Your written appraisal must support this score.
Ken Hinckley offers great advice on reviewing:
Hinckley, K. (2016). So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit.
Here are a few of the major points from Ken’s paper, lightly adapted:
- Read papers with care and sympathy. Many hours of work — in some cases, years of work — have gone into researching and writing each paper. Review as you would like others to review your own paper.
- Short and/or content-free reviews that read like unsubstantiated opinions are insufficient and will be rejected by the 1AC.
- State specifically why the paper is “great,” “mediocre,” or “bad.”
- Clearly describe on what grounds the paper should be accepted (or rejected). Keep in mind that this may be someone’s first paper. Your constructive feedback will be appreciated.
- Explicitly and clearly discuss the weaknesses and limitations in a positive and constructive manner. Specifically, don’t be insulting – be positive.
- Your review should be a critique of what the authors have done, not of what they should have done. Assess the work the authors did and whether their methods are appropriate to support their claims; avoid prescribing the study you would have run instead.
- Do not reject a paper because of anything that can be fixed or addressed easily, as authors will have the opportunity to do so for their camera-ready version.
- Avoid the fallacy of novelty. In particular, do not reject papers simply because they replicate prior experiments.
- When evaluating papers with human-subjects studies, it is important that the participant sample be representative of the population for which the technology is being designed. For example, if the technology is designed for a general population, then the participant sample should include equal gender representation and a wide range of ages. All papers with human-subjects studies should report, at a minimum, demographic information including age, gender, and other relevant diversity characteristics. If this is not the case, reviewers should not automatically reject the paper but instead provide appropriate constructive critique and advice regarding general claims made from non-representative sample populations.
- User studies are not required or appropriate for all papers. While the authors need to support their claims with evidence, that form of evidence can vary from paper to paper. In short, the work needs to have validation but not necessarily with a user study.
- Please do not reject a system paper simply because it is built using existing, well-known techniques if it accomplishes new functionality. In such cases, judge the novelty and significance of the new functionality. Here are some references on how to evaluate different types of research:
- Daniel R. Olsen Jr. (2007): Evaluating User Interface Systems Research
- Kasper Hornbæk, Aske Mottelson, Jarrod Knibbe, and Daniel Vogel (2019): What Do We Mean by “Interaction”? An Analysis of 35 Years of CHI
- David Ledo, Steven Houben, Jo Vermeulen, Nicolai Marquardt, Lora Oehlberg & Saul Greenberg (2018): Evaluation Strategies for HCI Toolkit Research
- James Fogarty (2017): Code and Contribution in Interactive Systems Research