Reviewing Guidelines

This document guides participants in the VRST 2023 paper reviewing process. It is directed towards those who perform full reviews of papers (i.e., the secondary review coordinator (2AC), committee members, and reviewers) and meta-reviews (the primary review coordinator (1AC)).

Our goal is to ensure the highest possible reviewing standard for VRST 2023. Please read these guidelines before reviewing and reach out to us if you are unclear about anything or need more information.

Reviewing a Paper for VRST 2023:

During the paper review process, review the paper using the following guidelines:

  • The primary criterion for accepting or rejecting a paper is its contribution. We consider a paper of sufficient quality if it presents a strong, tangible contribution to a specific aspect of the authors’ research. The authors may present preliminary findings if these support their claims, although more complete findings are always welcome.

  • Consider the value of the contribution and the merits of the paper to the VRST community. A paper may be an edge case yet still discuss a topic that is, e.g., currently unknown to the community or extremely relevant at the time.

  • Don’t evaluate a paper based on its length but instead by its contribution. A paper must adequately describe the contribution made by the authors. Papers lacking in detail should be lengthened, and papers with excessive or repetitive detail should be shortened.

  • Papers are acceptable if the authors can easily correct the weaknesses (e.g., missing references, minor spell-checking, fuzzy statements, lack of minor implementation details, and others).

  • Mind that the VRST 2023 review form will ask you to provide a single score (a 5-point rating; see below) judging the paper’s quality. Your written appraisal must support this score.

Ken Hinckley offers great advice on reviewing:

Hinckley, K. (2016). So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit.

Here are a few of the major points from Ken’s paper, with minor changes:

  • Read papers with care and sympathy. Many hours of work, in some cases years of work, have gone into researching and writing each paper. Review as you would like others to review your own paper.

  • Short and/or content-free reviews that read like unsubstantiated opinions are insufficient and will be rejected by the 1AC.

  • State specifically why the paper is “great,” “mediocre,” or “bad.”

  • Clearly describe on what grounds the paper should be accepted (or rejected). Keep in mind that this may be someone’s first paper. Your constructive feedback will be appreciated.

  • Explicitly and clearly discuss the weaknesses and limitations in a positive and constructive manner. Specifically, don’t be insulting – be positive.

  • Your review should be a critique of what the authors have done and not what they should have done. Assess the work the authors did and whether their methods are appropriate to support their claims. Avoid judging and explaining what the authors should have done.

  • Do not reject a paper because of anything that can be fixed or addressed easily, as authors will have the opportunity to do so for their camera-ready version.

  • Avoid the fallacy of novelty. Specifically, do not simply reject papers because they replicated experiments.

  • When evaluating papers with human-subjects studies, it is important that the participant sample be representative of the population for which the technology is designed. For example, if the technology is designed for a general population, the participant sample should include balanced gender representation and a wide range of ages. All papers with human-subjects studies should report, at a minimum, demographic information including age, gender, and other relevant characteristics of social and diversity representation. If this is not the case, reviewers should not automatically reject the paper but instead provide appropriate constructive critique and advice regarding general claims that are not backed by representative sample populations.

  • User studies are not required or appropriate for all papers. While the authors need to support their claims with evidence, that form of evidence can vary from paper to paper. In short, the work needs to have validation but not necessarily with a user study.

  • Please do not reject a systems paper simply because it is built using existing, well-known techniques, if it accomplishes new functionality. In such situations, judge the novelty and significance of the new functionality. Here are some references on how to evaluate different types of research:

    • Daniel R. Olsen Jr. (2007): Evaluating User Interface Systems Research
    • Kasper Hornbæk, Aske Mottelson, Jarrod Knibbe, and Daniel Vogel (2019): What Do We Mean by “Interaction”? An Analysis of 35 Years of CHI
    • David Ledo, Steven Houben, Jo Vermeulen, Nicolai Marquardt, Lora Oehlberg & Saul Greenberg (2018): Evaluation Strategies for HCI Toolkit Research
    • James Fogarty (2017): Code and Contribution in Interactive Systems Research


Writing a Review:

The following guidelines outline the content and key points of a high-quality review for VRST 2023. These guidelines apply to the secondary review coordinator (2AC), committee members, and reviewers. Please adhere to the guidelines and contact the program chairs or primary review coordinator (1AC) for any questions.

  • A high-quality review should contain about one page of well-considered commentary (at least 500 words), or more if warranted. Short and/or content-free reviews are insufficient and frustrate the authors.
  • Start your review with a summary of the strengths and contributions made by this paper and why they are noteworthy or important.
  • Explicitly and clearly discuss the weaknesses and limitations in a positive and constructive manner. Specifically, be positive and not insulting.
  • State specifically the reasons for the score you selected for this paper. Clearly describe on what grounds the paper should be accepted (or rejected).

Specifically, please address each of the following issues in your review:

  • Originality of the work: What new ideas or approaches are introduced in this paper?
  • Validity and replicability of the work presented: How confidently can researchers, practitioners, or experienced graduate students use the results and/or replicate this work?
  • Presentation clarity: Are the writing style and organization of this paper appropriate?
  • Related work: Is relevant previous work adequately cited and discussed?
  • Paper length: Is the paper length appropriate for the contribution?

If you have concerns about the methodological or statistical approaches taken by the authors, or about the work’s level of advancement over prior work, please cite a source for your objection (e.g., a definitive paper, a set of professional guidelines, or a standard textbook). This helps authors improve their submissions.

Please consider making any other recommendations that you think might be of use to the author(s).

Please be sure to address your review to the program committee. Any use of the word “you” should refer to the committee, not to the authors.

Please avoid last-minute reviews. Mind that your decisions affect the public appearance of VRST 2023. Therefore, the program chairs are very serious about ensuring the highest possible reviewing standards for VRST 2023.

The primary review coordinator (1AC) will examine all reviews for quality. If the 1AC finds any reviews of poor quality (e.g., lacking details, missing reasons for rejection, etc.), they will ask reviewers to update their reviews. In extreme cases, the 1AC may remove poorly written or incomplete reviews and find replacement reviewers.

Writing a Meta-Review:

The following guidelines outline the content of a good meta-review. This section is directed only at primary review coordinators (1AC) and explains the content the chairs believe best supports a decision.

  • Describe the primary contributions of the paper.
  • Summarize the most significant pros and cons of the paper. The most critical points are often those that the majority of reviewers highlight in their reviews. Refrain from reiterating every single aspect (we have the reviews for that).
  • Explain the decision and the pros and cons that support this decision.
  • In case of conditional acceptance, describe the conditions the authors have to meet before the paper can be accepted.
  • In case the paper is rejected, add suggestions for improvement.
  • Avoid adding discussion details or the score into the meta-review.

Mind that the authors will see the meta-review together with the final decision. Be constructive and explain more rather than less, especially when the authors receive an unfavorable decision. Very often, the research or paper was simply not ready at the time of submission. Invite the authors to re-submit next year if feasible.

VRST 5-point Rating:

This section explains the VRST 5-point rating and when we think one should select a particular score. We are aware that the decision can be subjective in many cases and that choosing between two adjacent scores is often a judgment call. We hope this explanation provides some clarity.

  • Definite accept: I would argue strongly for accepting this paper.
    Select this option if the paper is acceptable as-is (except for minor edits), with a strong contribution and merits for the VRST community.
  • Probably accept: I would argue for accepting this paper.
    Select this option if the paper has a valid contribution and merits for the VRST community. Some additional explanations or minor corrections are required.
  • Could go either way: Overall I would not argue for accepting this paper.
    Select this option if the research is relevant and the topic is of value to the VRST community, but your overall assessment of this contribution is borderline due to the identified weaknesses. You would not argue for accepting this paper, but you would also not feel negative about seeing it accepted.
  • Probably reject: I would argue for rejecting this paper.
    Select this option if the research is relevant and the topic is of value for the VRST community, but the research has severe weaknesses.
  • Definite reject: I would argue strongly for rejecting this submission.
    Select this option if the contribution is not understandable and the paper has no recognizable merits, or it is unclear what information the VRST community gains from this submission.

Desk Reject Policy:

This section is only directed towards the primary review coordinator (1AC) and secondary review coordinator (2AC). We expect you to exercise your editorial authority to make fair decisions that conserve time and resources. If you don’t think a submission has any chance of being accepted or is outside the scope of VRST, you may reject it without additional reviews. Such “early rejections” have a less negative effect on authors than late rejections, and let authors move on quickly with their work to a more suitable venue.

Definition of desk rejects at VRST:

  • Desk reject (DR): Desk-rejected papers are submissions that clearly violate the VRST submission policies (e.g., regarding anonymity, length, language, or submission topic), or submissions so uncompetitive within the scope of the VRST conference that the review outcome seems clear and assigning reviewers seems unnecessary.

For a paper that is a DR candidate, do not assign any external reviewers. Instead, consider the DR criteria below, discuss the paper between the primary review coordinator (1AC) and secondary review coordinator (2AC), and prepare a short rationale for why it should be desk rejected. When writing your justification, try to be positive and not insulting, but give a clear reason for the rejection. The program chairs will check and confirm these DR decisions.

Desk reject criteria:

  • The submission bears the authors' names and affiliations (i.e., it is not anonymized).
  • The submission's length is under 4 full pages or over 9 pages (not counting references, appendices, or acknowledgments that appear on page 10 or later).
  • The submission is incomplete (e.g., section headings given with no section content).
  • The paper is not written in the conference language (English) or the quality of writing renders the submission unreadable.
  • The content is clearly out of scope for the conference and the topic does not appear in the topics list in the call for submissions.
  • The PDF document is corrupted or partially ill-formatted, e.g., figures are missing, tables cross the page margins, and similar issues (missing references should be forgiven).
  • The paper’s content is intentionally promoted/advertised in the public domain with the possible disclosure of the authors' names, affiliations, or other obviously identifiable information. However, note the FAQ below on ArXiv.
  • There is clearly insufficient literature review to contextualize and/or evaluate the proposed novelty/contribution to VR/AR/MR/XR in particular.
  • Large parts of the paper have been published before. However, note the FAQ below on prior 2-page poster/demo extended abstracts.
  • The paper has ethical issues (plagiarism, double submission, fake data, etc.).
  • The paper is very sloppy: many typos, missing references, and formatting issues (including large white spaces). However, note that minor formatting issues should be forgiven.

Note that we do not expect a large number of DR decisions. A DR should only be exercised if the paper is, without any doubt, of insufficient quality and does not sufficiently reflect the merits of the research.

In the case the review coordinators (1AC and 2AC) exercise a DR decision that is upheld by the program chairs, the program chairs will then compile and send the DR notification to the authors. This notification will include the review coordinators’ remarks and justification for the DR.

Frequently Asked Questions (FAQs):

This section summarizes a few frequently asked questions:

  • Should we reject papers that have been published on ArXiv?
    In a nutshell: no. You should not reject papers that have been published on ArXiv or a similar service, as authors may have done so to obtain a timestamp for their work. However, if the ArXiv submission explicitly states that the paper is under review at VRST, or if the authors listed this prepublication on their individual or institutional webpages or generated publicity for it through other forms of media, then it may constitute a violation of VRST policies. Please raise any related concerns in your review and/or contact the program chairs.
  • Should we reject papers that have been presented before in a different format (e.g., poster or demo)?
    In some situations, a submission may build upon prior work. If you suspect any issues related to this point, please carefully assess how far the publications overlap and discuss this point with the primary review coordinator of this paper. Note that VRST does not consider a prior non-archival 2-page poster/demo extended abstract a reason for rejection of a paper submitted on the same topic.
  • Can I use ChatGPT to help me with my review?
    Short answer: no. Papers under review are non-public information copyrighted by other parties and therefore constitute proprietary information. You should not post such information to ChatGPT or any other cloud service.

Thank you for your support and work to ensure the highest-quality VRST reviews.

Do not hesitate to contact us for any further information.

VRST 2023 Program Chairs (papers2023@vrst.acm.org):

Gerd Bruder, University of Central Florida (USA)
Tabitha Peck, Davidson College (USA)
Stefania Serafin, Aalborg University (Denmark)

Document History:

This document was created for VRST 2023 by Gerd Bruder, Tabitha Peck, and Stefania Serafin based on the ISMAR 2023 Conference Paper Reviewing Guidelines. The ISMAR 2023 Reviewing Guidelines were prepared by Jens Grubert, Andrew Cunningham, Evan Peng, Gerd Bruder, Anne-Hélène Olivier, and Ian Williams. Prior versions of these guidelines were prepared for ISMAR 2022 by Henry Duh, Jens Grubert, Jianmin Zheng, Ian Williams, and Adam Jones, for ISMAR 2021 by Daisuke Iwai, Denis Kalkofen, Guillaume Moreau, and Tabitha Peck, for ISMAR 2020 by Shimin Hu, Denis Kalkofen, Jonathan Ventura, and Stefanie Zollmann, and for ISMAR 2019 by Shimin Hu, Denis Kalkofen, Joseph L. Gabbard, Jonathan Ventura, Jens Grubert, and Stefanie Zollmann.