ONLINE PROCEEDINGS

VRST '22: 28th ACM Symposium on Virtual Reality Software and Technology

SESSION: Applications

Carousel: Improving the Accuracy of Virtual Reality Assessments for Inspection Training Tasks

  • Jacob Belga
  • Tiffany D. Do
  • Ryan Ghamandi
  • Ryan P. McMahan
  • Joseph J. LaViola

Training simulations in virtual reality (VR) have become a focal point of both research and development because they allow users to familiarize themselves with procedures and tasks without needing physical objects to interact with or needing to be physically present. However, the increasing popularity of VR training paradigms raises the question: Are VR-based training assessments accurate? Many VR training programs, particularly those focused on inspection tasks, employ simple pass or fail assessments. However, these types of assessments do not necessarily reflect the user’s knowledge.

In this paper, we present Carousel, a novel VR-based assessment method that requires users to actively employ their training knowledge by considering all relevant scenarios during assessments. We also present a within-subject user study that compares the accuracy of our new Carousel method to a conventional pass or fail method for a series of virtual object inspection tasks involving shapes and colors. The results of our study indicate that the Carousel method affords significantly more accurate assessments of a user’s knowledge than the binary pass-or-fail method.

Research on the Emotions Expressed by the Posture of Kemo-mimi

  • Ryota Shijo
  • Sho Sakurai
  • Koichi Hirota
  • Takuya Nojima

Kemo-mimi refers to the dog- or cat-like ears on a humanoid character, or to the ears of the animal itself. Kemo-mimi is often used as an element of an avatar’s appearance. The posture of animal ears is generally considered to represent the animal’s emotional state, and this idea has been used as a technique for expressing emotions in many cartoons and animations. Despite this, there are few studies on the emotions that animal ears can express. We therefore investigated the relationship between the posture of animal ears and emotions in order to establish a method of expressing emotions using the ears. In our experiments, three-dimensional animations of animal ears changing posture were presented to the subjects, who were asked to identify the emotion corresponding to each posture. The results showed a certain degree of common understanding in people’s impressions of the animal ears. In this paper, we report the emotions that the posture of animal ears can express, as revealed in this study.

Leveraging VR Techniques for Efficient Exploration and Interaction in Large and Complex AR Space with Clipped and Small FOV AR Display

  • Yerin Shin
  • Gerard Jounghyun Kim

In this paper, we propose taking advantage of a digital-twinned environment to interact more efficiently in a large and complex AR space despite the limited size and clipped FOV of the AR display. Using the digital twin of the target environment, “magical” VR interaction techniques can be applied, visualized and overlaid through the small window, while still maintaining the spatial association with the augmented real world. First, we consider the use of amplified movement within the corresponding VR twinned space to help the user search, plan, navigate, and explore efficiently by providing an effectively larger view, and thereby better spatial understanding, of the same AR space with less physical movement. Second, we apply the amplified movement, together with a stretchable arm, to interact with relatively large objects (or widely spaced objects) that cannot be seen in their entirety at once through the small-FOV glass. Experiments with the proposed methods showed advantages in interaction performance as the scene became more complex and the task more difficult. The work illustrates the concept of, and potential for, XR-based interaction in which the user can leverage the advantages of both VR and AR modes of operation.

VR Games for Chronic Pain Management

  • Jiaheng Wang
  • Craig Anslow
  • Simon James Robertson Mccallum
  • Brian Robinson
  • Daniel Medeiros
  • Joaquim Jorge

Chronic pain is a continuous ailment lasting for long periods after the initial injury or disease has healed. It is challenging to treat and affects the daily lives of patients. Distraction therapy is a proven method of relieving patients’ discomfort by taking their attention away from the pain, and virtual reality (VR) provides a platform for distraction therapy by immersing the user in a virtual world detached from reality. However, there is little research on how physical interactions in VR affect pain management. We present a study evaluating the effectiveness of physically active, mentally active, and passive interventions in VR using games with chronic pain patients. Our results indicate that physical and mental activities in VR are equally effective at reducing pain. Furthermore, these activities actively engage patients, while the effects of observing relaxing content persist outside VR. These findings can help inform the design of future VR games targeted at chronic pain management.

SESSION: Virtual Humans, Collaboration, and Social Interaction 1

VISTA: User-centered VR Training System for Effectively Deriving Characteristics of People with Autism Spectrum Disorder

  • Bogoan Kim
  • Dayoung Jeong
  • Mingon Jeong
  • Taehyung Noh
  • Sung-In Kim
  • Taewan Kim
  • So-Youn Jang
  • Hee Jeong Yoo
  • Jennifer Kim
  • Hwajung Hong
  • Kyungsik Han

Pervasive symptoms of people with autism spectrum disorder (ASD), such as a lack of social and communication skills, pose major challenges in the workplace. Although much research has proposed VR training programs, their effectiveness is somewhat unclear, since they provide limited, one-sided interactions through fixed scenarios or do not sufficiently reflect the characteristics of people with ASD (e.g., preference for predictable interfaces, sensory issues). In this paper, we present VISTA, a VR-based interactive social skill training system for people with ASD. We ran a user study with 10 people with ASD and 10 neurotypical people to evaluate the user experience of VR training and to examine the characteristics of people with ASD based on their physical responses captured through sensor data. The results showed that ASD participants were highly engaged with VISTA and improved in self-efficacy after experiencing it. The two groups showed significant differences in sensor signals as task complexity increased, which demonstrates the importance of considering task complexity in eliciting the characteristics of people with ASD in VR training. Our findings not only extend the findings of previous studies (e.g., low ROI ratio, EDA increase) but also provide new insights (e.g., high utterance rate, large variation of pupil diameter), broadening our quantitative understanding of people with ASD.

Exploring User Behaviour in Asymmetric Collaborative Mixed Reality

  • Nels Numan
  • Anthony Steed

A common issue for collaborative mixed reality is the asymmetry of interaction with the shared virtual environment. For example, an augmented reality (AR) user might use one type of head-mounted display (HMD) in a physical environment, while a virtual reality (VR) user might wear a different type of HMD and see a virtual model of that physical environment. To explore the effects of such asymmetric interfaces on collaboration we present a study that investigates the behaviour of dyads performing a word puzzle task where one uses AR and the other VR. We examined the collaborative process through questionnaires and behavioural measures based on positional and audio data. We identified relationships between presence and co-presence, accord and co-presence, leadership and talkativeness, head rotation velocity and leadership, and head rotation velocity and talkativeness. We did not find that AR or VR biased subjective responses, though there were interesting behavioural differences: AR users spoke more words, AR users had a higher median head rotation velocity, and VR users travelled further.

VCPoser: Interactive Pose Generation of Virtual Characters Corresponding to Human Pose Input

  • Michinari Kono
  • Naohiko Morimoto
  • Ryoichi Kaku

Virtual characters (VCs) play a significant role in the entertainment industry, and AI-driven VCs are being developed to enable interaction with users. People are attracted to these VCs, resulting in a demand to co-exist with them in the same world. One way to record memories with VCs is to capture videos or photos with them, where users are usually required to adapt their poses to the pre-rendered VC’s action. To allow more seamless collaboration with VCs in photography scenarios, we propose VCPoser, which enables VCs to adapt their pose to the pose of the user. We created a deep neural network-based system that predicts a VC’s pose from the user’s pose data by learning on paired pose data. Our quantitative evaluations and user studies demonstrate that our system can predict and generate poses of VCs and allow them to be composed next to the posing user in a photo. We also provide an analysis of how humans conceive of paired poses, to better understand them and to share insights for aesthetic pose design.

Assessment of Instructor’s Capacity in One-to-Many AR Remote Instruction Giving

  • Mai Otsuki
  • Tzu-Yang Wang
  • Hideaki Kuzuoka

In this study, we focus on one-to-many remote collaboration, which requires more mental resources from the remote instructor than the one-to-one case since it is “multitasking”. The main contribution of our study is that we assessed the instructor’s capacity in one-to-many AR remote instruction giving, both subjectively and objectively. We compared the remote instructor’s workload while interacting with different numbers of local workers, assuming tasks at an industrial site. The results showed that instructors perceived a higher workload and that communication quality decreased when interacting with multiple local workers. Based on the results, we discuss how to support the remote instructor in one-to-many AR remote collaboration.

SESSION: Virtual Humans, Collaboration, and Social Interaction 2

Investigating the Perceived Realism of the Other User’s Look-Alike Avatars

  • Aisha Frampton-Clerk
  • Oyewole Oyekoya

Outstanding questions regarding the fidelity of realistic look-alike avatars show that substantial development remains to be done, especially as the virtual world plays an ever more vital role in our education, work, and recreation. The use of look-alike avatars could completely change how we interact virtually. This paper investigates which features of other people’s look-alike avatars influence their perceived realism. Four levels of avatar representation were assessed in this pilot study: a static avatar, a static avatar with lip sync corresponding to an audio recording, full face animation with audio, and full body animation. Results show that full-face and body animations are very important in increasing the perceived realism of avatars. More importantly, participants found the lip sync animation more unsettling (an uncanny valley effect) than any of the other animations. The results have implications for the perception of other people’s look-alike avatars in collaborative virtual environments.

Marcus or Mira - Investigating the Perception of Virtual Agent Gender in Virtual Reality Role Play-Training

  • Georg Regal
  • Jakob Carl Uhl
  • Anna Gerhardus
  • Stefan Suette
  • Elisabeth Frankus
  • Julia Schmid
  • Simone Kriglstein
  • Manfred Tscheligi

Immersive virtual training environments are used in various domains. In this work we focus on role-play training in virtual reality. In virtual role-play training, conversations and interactions with virtual agents are often fundamental to the training. Therefore, the appearance and behavior of the agents play an important role when designing role-play training. We focus on the gender appearance of agents, as gender is an important aspect of differentiation between characters. We conducted a study with 40 participants in which we investigated how an agent’s gender appearance influences the perception of the agent’s personality traits and the self-perception of a participant’s assumed role in a training for social skills. This work contributes towards understanding the design space of virtual agent design, virtual agent gender identity, and the design and development of immersive virtual reality role-play training.

Evaluating the Effects of Virtual Human Animation on Students in an Immersive VR Classroom Using Eye Movements

  • Hong Gao
  • Lisa Hasenbein
  • Efe Bozkir
  • Richard Göllner
  • Enkelejda Kasneci

Virtual humans presented in VR learning environments have been suggested in previous research to increase immersion and further positively influence learning outcomes. However, how virtual human animations affect students’ real-time behavior during VR learning has not yet been investigated. This work examines the effects of social animations (i.e., hand raising of virtual peer learners) on students’ cognitive response and visual attention behavior during immersion in a VR classroom based on eye movement analysis. Our results show that animated peers that are designed to enhance immersion and provide companionship and social information elicit different responses in students (i.e., cognitive, visual attention, and visual search responses), as reflected in various eye movement metrics such as pupil diameter, fixations, saccades, and dwell times. Furthermore, our results show that the effects of animations on students differ significantly between conditions (20%, 35%, 65%, and 80% of virtual peer learners raising their hands). Our research provides a methodological foundation for investigating the effects of avatar animations on users, further suggesting that such effects should be considered by developers when implementing animated virtual humans in VR. Our findings have important implications for future works on the design of more effective, immersive, and authentic VR environments.

SESSION: Input and Interaction 1

Effect of Stereo Deficiencies on Virtual Distal Pointing

  • Anil Ufuk Batmaz
  • Moaaz Hudhud Mughrabi
  • Mayra Donaji Barrera Machuca
  • Wolfgang Stuerzlinger

Previous work has shown that the mismatch between disparity and optical focus cues, i.e., the vergence and accommodation conflict (VAC), affects virtual hand selection in immersive systems. To investigate whether the VAC also affects distal pointing with ray casting, we ran a user study with an ISO 9241-411 multidirectional selection task in which participants selected 3D targets under three different VAC conditions: no VAC, i.e., targets placed at roughly 75 cm, matching the focal plane of the VR headset; constant VAC, i.e., targets at 400 cm from the user; and varying VAC, where the depth of the targets changed between 75 cm and 400 cm. According to our results, the varying VAC condition requires the most time and decreases participants’ throughput. It also takes longer for users to select targets in the constant VAC condition than without the VAC. Our results show that, in distal pointing, placing objects at different depth planes has a detrimental effect on user performance.
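
For context on the throughput measure mentioned above: ISO 9241-411-style pointing studies conventionally compute effective throughput from movement time and the effective index of difficulty. The abstract does not spell out the exact formulation used, so the standard form is shown here rather than necessarily the authors':

$$\mathit{TP} = \frac{\mathit{ID}_e}{\mathit{MT}}, \qquad \mathit{ID}_e = \log_2\!\left(\frac{D}{W_e} + 1\right), \qquad W_e = 4.133 \cdot \mathit{SD}_x,$$

where $D$ is the target distance, $\mathit{MT}$ the movement time, and $\mathit{SD}_x$ the standard deviation of selection endpoints along the movement axis.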

Puppeteer: Exploring Intuitive Hand Gestures and Upper-Body Postures for Manipulating Human Avatar Actions

  • Ching-Wen Hung
  • Ruei-Che Chang
  • Hong-Sheng Chen
  • Chung Han Liang
  • Liwei Chan
  • Bing-Yu Chen

Body-controlled avatars provide an intuitive method to control virtual avatars in real time but require larger environment space and more user effort. In contrast, hand-controlled avatars offer more dexterous and less fatiguing manipulation within a close-range space, but provide fewer sensory cues than the body-based method. This paper investigates the differences between the two manipulations and explores the possibility of a combination. We first performed a formative study, based on a survey of top video games, to understand when and how users prefer using their hands and bodies to represent avatars’ actions in current popular games; from this survey we decided to focus on human avatars’ motions. We found that players used their bodies to represent avatar actions but switched to their hands when the actions were too unrealistic and exaggerated to mimic with the body (e.g., flying in the sky, rolling over quickly). Hand gestures also provide an alternative to lower-body motions when players want to sit during gaming and do not want to expend extensive effort to move their avatars. Hence, we focused on the design of hand gestures and upper-body postures. We present Puppeteer, an input prototype system that allows players to directly control their avatars through intuitive hand gestures and upper-body postures. We selected 17 avatar actions discovered in the formative study and conducted a gesture elicitation study in which 12 participants designed the hand gestures and upper-body postures that best represent each action. We then implemented a prototype system using the MediaPipe framework to detect keypoints and a self-trained model to recognize 17 hand gestures and 17 upper-body postures. Finally, three applications demonstrate the interactions enabled by Puppeteer.

Rich virtual feedback from sensorimotor interaction may harm, not help, learning in immersive virtual reality

  • Jack Ratcliffe
  • Laurissa Tokarchuk

Sensorimotor interactions in the physical world and in immersive virtual reality (IVR) offer different feedback. Actions in the physical world almost always provide multi-modal feedback: pouring a jug of water offers tactile (weight change), aural (the sound of running water) and visual (water moving out of the jug) feedback. Feedback from pouring a virtual jug, however, depends on the IVR’s design. This study examines whether the richness of feedback from IVR actions has a detectable cognitive impact on users. To do this, we compared verb-learning outcomes between two conditions in which participants performed actions with objects and (1) audiovisual feedback was presented or (2) audiovisual feedback was not presented. We found that participants (n = 74) had cognitively distinct outcomes depending on the type of audiovisual feedback experienced, with a high-feedback experience harming learning outcomes compared with a low-feedback one. This result has implications for IVR system design and for theories of cognition and memorisation.

Exploration of Form Factor and Bimanual 3D Manipulation Performance of Rollable In-hand VR Controller

  • Sunbum Kim
  • Youngbo Aram Shim
  • Geehyuk Lee

Virtual reality (VR) environments are expected to become future workspaces. An effective bimanual 3D manipulation technique would be essential to support this vision. A ball-shaped tangible input device that can be rolled in a hand is known to be useful for 3D object manipulation because such devices allow users to utilize their finger dexterity. In this study, we further explored the potential of a rollable in-hand controller. First, we evaluated the effects of its form factor on user behavior and performance. Although the size and shape of a rollable controller are expected to influence user behavior and performance, their effects have not been empirically explored in prior works. Next, we evaluated a rollable controller on bimanual 3D assembly tasks. A rollable controller may incur a high mental load as it requires users to use finger dexterity; therefore, the benefit of using such a device in each hand is not obvious. We found that a 5 cm-diameter ball-shaped controller was the most effective among the sizes and forms that we considered, and that a pair of in-hand rollable controllers showed significantly faster completion time than a pair of VR controllers for complex bimanual assembly tasks involving frequent rotations.

SESSION: Input and Interaction 2

PORTAL: Portal Widget for Remote Target Acquisition and Control in Immersive Virtual Environments

  • Dongyun Han
  • Donghoon Kim
  • Isaac Cho

This paper introduces PORTAL (POrtal widget for Remote Target Acquisition and controL), which allows the user to interact with out-of-reach objects in a virtual environment. We describe the PORTAL interaction technique for placing a portal widget and interacting with target objects through the portal. We conducted two formal user studies to evaluate PORTAL’s selection and manipulation functionalities. The results show that PORTAL enables participants to interact with remote objects successfully and precisely. Following that, we discuss its potential, limitations, and future work.

Eliciting Multimodal Gesture+Speech Interactions in a Multi-Object Augmented Reality Environment

  • Xiaoyan Zhou
  • Adam Sinclair Williams
  • Francisco Raul Ortega

As augmented reality (AR) technology and hardware become more mature and affordable, researchers have been exploring more intuitive and discoverable interaction techniques for immersive environments. This paper investigates multimodal interaction for 3D object manipulation in a multi-object AR environment. To identify user-defined gestures, we conducted an elicitation study involving 24 participants and 22 referents using an augmented reality headset. It yielded 528 proposals, from which binning and ranking produced a winning set of 25 gestures. We found that, for the same task, the same gesture was preferred for both one- and two-object manipulation, although both hands were used in the two-object scenario. We present the gesture and speech results, and the differences compared to similar studies in a single-object AR environment. The study also explored the association between speech expressions and gesture strokes during object manipulation, which could improve recognizer efficiency in augmented reality headsets.
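
For context, the binning-and-ranking step in gesture elicitation studies is commonly formalized by an agreement rate; whether this exact measure was used here is not stated in the abstract, but a widely used formulation is Vatavu and Wobbrock's:

$$\mathit{AR}(r) = \sum_{P_i \subseteq P_r} \frac{|P_i|\,(|P_i| - 1)}{|P_r|\,(|P_r| - 1)},$$

where $P_r$ is the set of proposals elicited for referent $r$ and the $P_i$ are its groups of identical proposals.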

Performance Analysis of Saccades for Primary and Confirmatory Target Selection

  • Aunnoy Mutasim
  • Anil Ufuk Batmaz
  • Moaaz Hudhud Mughrabi
  • Wolfgang Stuerzlinger

In eye-gaze-based selection, dwell suffers from several issues, e.g., the Midas Touch problem. Here we investigate saccade-based selection techniques as an alternative to dwell. First, we designed a novel user interface (UI) for Actigaze and used it with (goal-crossing) saccades for confirming the selection of small targets (i.e., < 1.5-2°). We compared it with three other variants of Actigaze (with button press, dwell, and target reverse crossing) and two variants of target magnification (with button press and dwell). Magnification-dwell exhibited the most promising performance. For Actigaze, goal-crossing was the fastest option but suffered the most errors. We then evaluated goal-crossing as a primary selection technique for normal-sized targets (≥ 2°) and implemented a novel UI for such interaction. Results revealed that dwell achieved the best performance. Yet, we identified goal-crossing as a good compromise between dwell and button press. Our findings thus identify novel options for gaze-only interaction.

Precueing Sequential Rotation Tasks in Augmented Reality

  • Jen-Shuo Liu
  • Barbara Tversky
  • Steven Feiner

Augmented reality has been used to improve sequential-task performance by cueing information about a current task step and precueing information about future steps. Existing work has shown the benefits of precueing movement (translation) information. However, rotation is also a major component in many real-life tasks, such as turning knobs to adjust parameters on a console. We developed an AR testbed to investigate whether and how much precued rotation information can improve user performance. We consider two unimanual tasks: one requires a user to make sequential rotations of a single object, and the other requires the user to move their hand between multiple objects to rotate them in sequence.

We conducted a user study to explore these two tasks using circular arrows to communicate rotation. In the single-object task, we examined the impact of the number of precues and the visualization style on user performance. Results show that precues improved performance and that arrows with highlighted heads and tails, with each destination aligned with the next origin, yielded the shortest completion time on average. In the multiple-object task, we explored whether rotation precues can be helpful in conjunction with movement precues. Here, the combination of a movement cue and movement precues with a rotation cue but no rotation precues performed best, implying that rotation precues were not helpful when movement was also required.

SESSION: Visualization and Displays 1

Effects of Environmental Noise Levels on Patient Handoff Communication in a Mixed Reality Simulation

  • Matt Gottsacker
  • Nahal Norouzi
  • Ryan Schubert
  • Frank Guido-Sanz
  • Gerd Bruder
  • Gregory Welch

When medical caregivers transfer patients to another person’s care (a patient handoff), it is essential they effectively communicate the patient’s condition to ensure the best possible health outcomes. Emergency situations caused by mass casualty events (e.g., natural disasters) introduce additional difficulties to handoff procedures such as environmental noise. We created a projected mixed reality simulation of a handoff scenario involving a medical evacuation by air and tested how low, medium, and high levels of helicopter noise affected participants’ handoff experience, handoff performance, and behaviors. Through a human-subjects experimental design study (N = 21), we found that the addition of noise increased participants’ subjective stress and task load, decreased their self-assessed and actual performance, and caused participants to speak louder. Participants also stood closer to the virtual human sending the handoff information when listening to the handoff than they stood to the receiver when relaying the handoff information. We discuss implications for the design of handoff training simulations and avenues for future handoff communication research.

Timeline Design Space for Immersive Exploration of Time-Varying Spatial 3D Data

  • Gwendal Fouché
  • Ferran Argelaguet Sanz
  • Emmanuel Faure
  • Charles Kervrann

Timelines are common visualizations to represent and manipulate temporal data. However, timeline visualizations rarely consider spatio-temporal 3D data (e.g. mesh or volumetric models) directly. In this paper, leveraging the increased workspace and 3D interaction capabilities of virtual reality (VR), we first propose a timeline design space for 3D temporal data extending the timeline design space proposed by Brehmer et al. [7]. The proposed design space adapts the scale, layout and representation dimensions to account for the depth dimension and how the 3D temporal data can be partitioned and structured. Moreover, an additional dimension is introduced, the support, which further characterizes the 3D dimension of the visualization. The design space is complemented by discussing the interaction methods required for the efficient visualization of 3D timelines in VR. Secondly, we evaluate the benefits of 3D timelines through a formal evaluation (n=21). Taken together, our results showed that time-related tasks can be achieved more comfortably using timelines, and more efficiently for specific tasks requiring the analysis of the surrounding temporal context. Finally, we illustrate the use of 3D timelines with a use-case on morphogenetic analysis in which domain experts in cell imaging were involved in the design and evaluation process.

Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones

  • Timo Menzel
  • Mario Botsch
  • Marc Erich Latoschik

Digital reconstruction of humans has various interesting use-cases. Animated virtual humans, avatars and agents alike, are the central entities in virtual embodied human-computer and human-human encounters in social XR. Here, a faithful reconstruction of facial expressions becomes paramount due to their prominent role in non-verbal behavior and social interaction. Current XR-platforms, like Unity 3D or the Unreal Engine, integrate recent smartphone technologies to animate faces of virtual humans by facial motion capturing. Using the same technology, this article presents an optimization-based approach to generate personalized blendshapes as animation targets for facial expressions. The proposed method combines a position-based optimization with a seamless partial deformation transfer, necessary for a faithful reconstruction. Our method is fully automated and considerably outperforms existing solutions based on example-based facial rigging or deformation transfer, and overall results in a much lower reconstruction error. It also neatly integrates with recent smartphone-based reconstruction pipelines for mesh generation and automated rigging, further paving the way to a widespread application of human-like and personalized avatars and agents in various use-cases.
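
To give a flavor of the kind of optimization involved, the sketch below fits personalized per-vertex blendshape deltas by regularized least squares from example expression scans with known activation weights. This mirrors the example-based rigging baselines the paper compares against; the authors' own position-based optimization and partial deformation transfer go beyond it, so treat this as a generic illustration only.

```python
# Generic least-squares blendshape personalization (illustration only, not
# the paper's algorithm): solve min_B sum_e ||sum_k w_ek B_k - D_e||^2
#                                    + reg * ||B - prior||^2
import numpy as np

def fit_blendshapes(neutral, scans, weights, reg=1e-3, prior=None):
    """neutral: (V, 3) rest pose; scans: (E, V, 3) example expressions;
    weights: (E, K) known activations; prior: optional (K, V, 3) template deltas."""
    E, K = weights.shape
    D = scans - neutral[None]                     # displacement field of each scan
    A = weights.T @ weights + reg * np.eye(K)     # Tikhonov-regularized normal equations
    rhs = np.einsum("ek,evc->kvc", weights, D)
    if prior is not None:
        rhs = rhs + reg * prior                   # pull the solution toward template shapes
    B = np.linalg.solve(A, rhs.reshape(K, -1)).reshape(K, *neutral.shape)
    return B                                      # personalized blendshape deltas
```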

NeARportation: A Remote Real-time Neural Rendering Framework

  • Yuichi Hiroi
  • Yuta Itoh
  • Jun Rekimoto

While presenting a photorealistic appearance plays a major role in immersion in Augmented Virtuality environments, displaying the appearance of real objects remains a challenge. Recent developments in photogrammetry have facilitated the incorporation of real objects into virtual space. However, reproducing complex appearances, such as subsurface scattering and transparency, still requires a dedicated measurement environment and involves a trade-off between rendering quality and frame rate.

Our NeARportation framework combines server–client bidirectional communication and neural rendering to resolve these trade-offs. Neural rendering on the server receives the client’s head posture and generates a novel-view image with realistic appearance reproduction that is streamed onto the client’s display. By applying our framework to a stereoscopic display, we confirm that it can display a high-fidelity appearance on full-HD stereo videos at 35-40 frames per second (fps) according to the user’s head motion.
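
A minimal sketch of the client side of such a pose-in/frame-out loop follows; the host, port, and wire format (a 7-float pose, length-prefixed frames) are illustrative assumptions, not NeARportation's actual protocol.

```python
# Client loop sketch: send the head pose, receive the rendered frame (assumed protocol).
import socket
import struct

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

def stream_frames(host, port, get_head_pose, show_frame):
    with socket.create_connection((host, port)) as sock:
        while True:
            pos, quat = get_head_pose()                    # (x, y, z), (qx, qy, qz, qw)
            sock.sendall(struct.pack("<7f", *pos, *quat))  # pose up to the server
            (size,) = struct.unpack("<I", recv_exact(sock, 4))
            show_frame(recv_exact(sock, size))             # e.g., decode and display the frame
```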

SESSION: Visualization and Displays 2

Virtual Air Conditioner’s Airflow Simulation and Visualization in AR

  • Joohwan Chae
  • Donghan Kim
  • Wooseok Jeong
  • Eunchan Jo
  • Won-Ki Jeong
  • JunYoung Choi
  • Seung-wook Kim
  • Myoung Gon Kim
  • Jae-Won Lee
  • Hyechan Lee
  • JungHyun Han

This paper presents a mobile AR system for visualizing the airflow and temperature changes produced by virtual air conditioners. Although there have been efforts to integrate the results of airflow/temperature simulation into the real world via AR, they support neither interactive modeling of the environment nor real-time simulation. In the proposed system, 3D mapping and air conditioner installation are performed interactively, and airflow/temperature simulation and visualization then run in real time. The system is designed in a client-server architecture, where the server is in charge of the simulation and the client handles the rest.
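
For intuition about the real-time simulation component, here is a minimal explicit finite-difference temperature update; the grid size, diffusion coefficient, boundary handling (periodic via np.roll), and outlet model are illustrative assumptions, not the paper's solver.

```python
# Toy room-temperature simulation: explicit diffusion plus a cold source
# where the virtual air conditioner blows (all constants are illustrative).
import numpy as np

def step(T, alpha=0.1, outlet=(0, slice(20, 30)), t_outlet=16.0):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T)
    T = T + alpha * lap          # explicit Euler diffusion (stable for alpha <= 0.25)
    T[outlet] = t_outlet         # outlet cells held at the set temperature
    return T

T = np.full((64, 64), 28.0)      # room initially at 28 degrees C
for _ in range(200):             # 200 simulation steps
    T = step(T)
```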

Nebula: An Affordable Open-Source and Autonomous Olfactory Display for VR Headsets

  • Charles Javerliat
  • Pierre-Philippe Elst
  • Anne-Lise Saive
  • Guillaume Lavoué
  • Patrick Baert

The impact of olfactory cues on user experience in virtual reality is increasingly studied. However, results are still heterogeneous and existing studies are difficult to replicate, mainly due to a lack of standardized olfactory displays. In that context, we present Nebula, a low-cost, open-source olfactory display capable of diffusing scents at different diffusion rates using a nebulization process. Nebula can be used with PC VR or autonomous head-mounted displays, making it easily transportable without the need for an external computer. The device was calibrated to diffuse at three rates: no diffusion, low, and high. For each level, the quantity of delivered odor was precisely characterized using a repeated weighing method, and the corresponding perceived olfactory intensities were evaluated in a psychophysical experiment with sixteen participants. Results demonstrated the device’s capability to create three significantly different perceived odor intensities (Friedman test p < 10⁻⁶, Wilcoxon tests p_adj < 10⁻³), without noticeable smell persistence and with limited noise and discomfort. For reproducibility and to stimulate further research in the area, 3D printing files, electronic hardware schematics, and firmware/software source code are made publicly available.

3D Reconstruction of Sculptures from Single Images via Unsupervised Domain Adaptation on Implicit Models

  • Ziyi Chang
  • George Alex Koulieris
  • Hubert P. H. Shum

Acquiring the virtual equivalent of exhibits, such as sculptures, in virtual reality (VR) museums, can be labour-intensive and sometimes infeasible. Deep learning based 3D reconstruction approaches allow us to recover 3D shapes from 2D observations, among which single-view-based approaches can reduce the need for human intervention and specialised equipment in acquiring 3D sculptures for VR museums. However, there exist two challenges when attempting to use the well-researched human reconstruction methods: limited data availability and domain shift. Considering sculptures are usually related to humans, we propose our unsupervised 3D domain adaptation method for adapting a single-view 3D implicit reconstruction model from the source (real-world humans) to the target (sculptures) domain. We have compared the generated shapes with other methods and conducted ablation studies as well as a user study to demonstrate the effectiveness of our adaptation method. We also deploy our results in a VR application.

SESSION: Vision Perception

The Relative Importance of Depth Cues and Semantic Edges for Indoor Mobility Using Simulated Prosthetic Vision in Immersive Virtual Reality

  • Alex Rasla
  • Michael Beyeler

Visual neuroprostheses (bionic eyes) have the potential to treat degenerative eye diseases that often result in low vision or complete blindness. These devices rely on an external camera to capture the visual scene, which is then translated frame-by-frame into an electrical stimulation pattern that is sent to the implant in the eye. To highlight more meaningful information in the scene, recent studies have tested the effectiveness of deep-learning based computer vision techniques, such as depth estimation to highlight nearby obstacles (DepthOnly mode) and semantic edge detection to outline important objects in the scene (EdgesOnly mode). However, nobody has yet attempted to combine the two, either by presenting them together (EdgesAndDepth) or by giving the user the ability to flexibly switch between them (EdgesOrDepth). Here, we used a neurobiologically inspired model of simulated prosthetic vision (SPV) in an immersive virtual reality (VR) environment to test the relative importance of semantic edges and relative depth cues to support the ability to avoid obstacles and identify objects. We found that participants were significantly better at avoiding obstacles using depth-based cues as opposed to relying on edge information alone, and that roughly half the participants preferred the flexibility to switch between modes (EdgesOrDepth). This study highlights the relative importance of depth cues for SPV mobility and is an important first step towards a visual neuroprosthesis that uses computer vision to improve a user’s scene understanding.

Adaptive Field-of-view Restriction: Limiting Optical Flow to Mitigate Cybersickness in Virtual Reality

  • Fei Wu
  • Evan Suma Rosenberg

Dynamic field-of-view (FOV) restriction is a widely used software technique to mitigate cybersickness in commercial virtual reality (VR) applications. The classical FOV restrictor is implemented using a symmetric mask that occludes the periphery in response to translational and/or angular velocity. In this paper, we introduce adaptive field-of-view restriction, a novel technique that responds dynamically based on real-time assessment of the optical flow generated by movement through a virtual environment. The adaptive restrictor utilizes an asymmetric mask to obscure regions of the periphery with higher optical flow during virtual locomotion while leaving regions with lower optical flow visible. To evaluate the proposed technique, we conducted a gender-balanced user study (N = 38) in which participants completed a navigation task in two different types of virtual scenes using controller-based locomotion. Participants were instructed to navigate through either close-quarter or open virtual environments using adaptive restriction, traditional symmetric restriction, or an unrestricted control condition in three VR sessions separated by at least 24 hours. The results showed that the adaptive restrictor was effective in mitigating cybersickness and reducing subjective discomfort, while simultaneously enabling participants to remain immersed for a longer amount of time compared to the control condition. Additionally, presence ratings were significantly higher when using the adaptive restrictor compared to symmetric restriction. In general, these results suggest that adaptive field-of-view restriction based on real-time measurement of optical flow is a promising approach for virtual reality applications that seek to provide a better cost-benefit tradeoff between comfort and a high-fidelity experience.
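
The core idea can be sketched as follows: estimate optical flow in each peripheral region every frame and drive that region's mask opacity toward its flow magnitude. The region layout, gain, and smoothing constant below are illustrative assumptions, not the paper's implementation.

```python
# Per-region, flow-driven mask update (assumed parameterization).
import numpy as np

def update_mask(flow, mask, regions, gain=2.0, smooth=0.1):
    """flow: (H, W, 2) optical-flow field for the current frame;
    mask: per-region opacities in [0, 1]; regions: list of (row, col) slices."""
    mag = np.linalg.norm(flow, axis=-1)
    for i, (ys, xs) in enumerate(regions):
        target = np.clip(gain * mag[ys, xs].mean(), 0.0, 1.0)
        mask[i] += smooth * (target - mask[i])   # exponential smoothing across frames
    return mask
```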

Sweating Avatars Decrease Perceived Exertion and Increase Perceived Endurance while Cycling in Virtual Reality

  • Martin Kocur
  • Johanna Bogon
  • Manuel Mayer
  • Miriam Witte
  • Amelie Karber
  • Niels Henze
  • Valentin Schwind

Avatars are used to represent users in virtual reality (VR) and create embodied experiences. Previous work showed that avatars’ stereotypical appearance can affect users’ physical performance and perceived exertion while exercising in VR. Although sweating is a natural human response to physical effort, surprisingly little is known about the effects of sweating avatars on users. Therefore, we conducted a study with 24 participants to explore the effects of sweating avatars while cycling in VR. We found that visualizing sweat decreases the perceived exertion and increases perceived endurance. Thus, users feel less exerted while embodying sweating avatars. We conclude that sweating avatars contribute to more effective exergames and fitness applications.

SESSION: Body Perception

Standing Balance Improvement Using Vibrotactile Feedback in Virtual Reality

  • M. Rasel Mahmud
  • Michael Stewart
  • Alberto Cordova
  • John Quarles

Virtual Reality (VR) users often encounter postural instability, i.e., balance issues, which can be a significant impediment to universal usability and accessibility, particularly for those with balance impairments. Prior research has documented these imbalance issues, but little effort has been made to mitigate them. We recruited 39 participants (18 with balance impairments, 21 without) to examine the effect of various vibrotactile feedback techniques on balance in virtual reality, specifically spatial vibrotactile, static vibrotactile, rhythmic vibrotactile, and vibrotactile feedback mapped to the center of pressure (CoP). Participants completed standing visual exploration and standing reach-and-grasp tasks. According to within-subject results, each vibrotactile feedback technique significantly enhanced balance in VR (p < .001) for those with and without balance impairments. Spatial and CoP vibrotactile feedback enhanced balance significantly more (p < .001) than the other vibrotactile feedback techniques. This study presents strategies that might be used in future virtual environments to enhance standing balance and bring VR closer to universal usage.
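
As an illustration of what CoP-mapped feedback might look like (the exact mapping is not given in the abstract, so this is an assumed scheme), one simple option scales vibration amplitude with the CoP's displacement from a calibrated neutral point:

```python
# Assumed CoP-to-amplitude mapping; the actuator API is hypothetical.
import math

def cop_to_amplitude(cop_x, cop_y, neutral_x, neutral_y, max_dist_cm=5.0):
    dist = math.hypot(cop_x - neutral_x, cop_y - neutral_y)
    return min(dist / max_dist_cm, 1.0)   # 0 = no vibration, 1 = full intensity

# e.g., actuator.set_amplitude(cop_to_amplitude(x, y, nx, ny))
```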

The Rubber Hand Illusion in Virtual Reality and the Real World - Comparable but Different

  • Martin Kocur
  • Alexander Kalus
  • Johanna Bogon
  • Niels Henze
  • Christian Wolff
  • Valentin Schwind

Feeling ownership of a virtual body is crucial for immersive experiences in VR. Knowledge about body ownership is mainly based on rubber hand illusion (RHI) experiments in the real world. Watching a rubber hand being stroked while one’s own hidden hand is synchronously stroked, humans experience the rubber hand as their own hand and underestimate the distance between the rubber hand and the real hand (proprioceptive drift). There is also evidence for a decrease in hand temperature. Although the RHI has been induced in VR, it is unknown whether effects in VR and the real world differ. We conducted a RHI experiment with 24 participants in the real world and in VR and found comparable effects in both environments. However, irrespective of the RHI, proprioceptive drift and temperature differences varied between settings. Our findings validate the utilization of the RHI in VR to increase our understanding of embodying virtual avatars.

Walk This Beam: Impact of Different Balance Assistance Strategies and Height Exposure on Performance and Physiological Arousal in VR

  • Dennis Dietz
  • Carl Oechsner
  • Changkun Ou
  • Francesco Chiossi
  • Fabio Sarto
  • Sven Mayer
  • Andreas Butz

Dynamic balance is an essential skill for the human upright gait; therefore, regular balance training can improve postural control and reduce the risk of injury. Even slight variations in walking conditions, such as height or ground conditions, can significantly impact walking performance. Virtual reality is a helpful tool for simulating such challenging situations. However, there is no agreement on design strategies for balance training in virtual reality under stressful environmental conditions such as height exposure. We investigate how two different training strategies, imitation learning and gamified learning, can support dynamic balance control performance across different stress conditions. Moreover, we evaluate the stress response as indexed by peripheral physiological measures of stress, perceived workload, and user experience. Both approaches were tested against a baseline of no instructions and against each other. We show that a learning-by-imitation approach immediately helps dynamic balance control, decreases stress, improves attention focus, and diminishes perceived workload, whereas a gamified approach can lead to users being overwhelmed by the additional task. Finally, we discuss how our approaches could be adapted for balance training and applied to injury rehabilitation and prevention.

SESSION: Interaction Design

“Kapow!”: Studying the Design of Visual Feedback for Representing Contacts in Extended Reality

  • Julien Cauquis
  • Victor Rodrigo Mercado
  • Géry Casiez
  • Jean-Marie Normand
  • Anatole Lécuyer

In the absence of haptic feedback, the perception of contact with virtual objects can rapidly become a problem in extended reality (XR) applications. XR developers often rely on visual feedback to inform the user and display contact information. However, as of today, there is no clear path for how to design and assess such visual techniques. In this paper, we propose a design space for the creation of visual feedback techniques meant to represent contact with virtual surfaces in XR. Based on this design space, we conceived a set of visual techniques, including novel approaches based on onomatopoeia and inspired by cartoons, as well as visual effects based on physical phenomena. We then conducted an online preliminary user study with 60 participants, assessing 6 visual feedback techniques in terms of user experience. Notably, we could assess, for the first time, the potential influence of the interaction context by comparing participants’ answers in two different scenarios: industrial versus entertainment conditions. Taken together, our design space and initial results could inspire XR developers across a wide range of applications in which the augmentation of contact is prominent, such as vocational training, industrial assembly/maintenance, surgical simulation, and videogames.

Understanding Perspectives for Single- and Multi-Limb Movement Guidance in Virtual 3D Environments

  • Hesham Elsayed
  • Kenneth Kartono
  • Dominik Schön
  • Martin Schmitz
  • Max Mühlhäuser
  • Martin Weigel

Movement guidance in virtual reality has many applications, ranging from physical therapy and assistive systems to sport learning. The guided movements range from simple single-limb to complex multi-limb movements. While VR supports many perspectives – e.g., first person and third person – it remains unclear how accurately these perspectives communicate different movements. In a user study (N=18), we investigated the influence of perspective, feedback, and movement properties on the accuracy of movement guidance. Participants had an average angle error of 6.2° for single-arm movements, 7.4° for synchronous two-arm movements, and 10.3° for synchronous two-arm-and-leg movements. Furthermore, the results show that the two variants of third-person perspectives outperform a first-person perspective for movement guidance (19.9% and 24.3% reductions in angle error). Qualitative feedback confirms the quantitative data and shows that users have a clear preference for third-person perspectives. Through our findings we provide guidance for designers and developers of future VR movement guidance systems.

Design and Evaluation of Electrotactile Rendering Effects for Finger-Based Interactions in Virtual Reality

  • Sebastian Vizcay
  • Panagiotis Kourtesis
  • Ferran Argelaguet
  • Claudio Pacchierotti
  • Maud Marchal

The use of electrotactile feedback in Virtual Reality (VR) has shown promising results for providing tactile information and sensations. While progress has been made in providing custom electrotactile feedback for specific interaction tasks, it remains unclear which modulations and rendering algorithms are preferred in rich interaction scenarios. In this paper, we propose a unified tactile rendering architecture and explore the most promising modulations for rendering finger interactions in VR. Based on a literature review, we designed six electrotactile stimulation patterns/effects (EFXs) striving to render different tactile sensations. In a user study (N=18), we assessed the six EFXs in three diverse finger interactions: 1) tapping on a virtual object; 2) pressing down a virtual button; 3) sliding along a virtual surface. Results showed that the preference for certain EFXs depends on the task at hand. No significant preference was detected for tapping (short and quick contact); EFXs that render dynamic intensities or dynamic spatio-temporal patterns were preferred for pressing (continuous dynamic force); and EFXs that render moving sensations were preferred for sliding (surface exploration). The results show the importance of coherence between the modulation and the interaction being performed, and the study proved the versatility of electrotactile feedback and its efficiency in rendering different haptic information and sensations.

SESSION: Poster and Demo Abstracts

A Distance Learning System With Shareable Physical Information For Ski Training

  • Shigeharu Ono
  • Hideaki Kanai
  • Erwin Wu
  • Hideki Koike

Distance learning for skill learning is still inadequate because the perceptual information provided to the user is limited. This study proposes a framework for a distance learning system with real-time feedback that shares physical information between the student and the teacher. As an initial trial of this framework, we developed a prototype distance learning system for skiing that includes visual feedback, and verified its operation. The results indicate that the system is applicable to distance learning.

A Method of Estimating the Object of Interest from 3D Object and User’s Gesture in VR

  • Sungjin Hong
  • Heesook Shin
  • Cho-rong Yu
  • Seong Min Baek
  • Youn-Hee Gil

In VR, gaze information is useful for directly or indirectly analyzing a user’s interest. However, using eye tracking in a VR device has its inconveniences. To overcome this drawback, we propose a method for estimating an object of interest from the user’s gesture instead of eye tracking. A LightGBM model is trained using distance- and angle-based features extracted from the 3D information of the object and the position and rotation of the VR devices. We compared the accuracy of each feature for different VR device combinations and found that it is more effective to use all devices rather than individual devices, and to use angle-based rather than distance-based features, achieving an accuracy of 79.36%.
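
A minimal sketch of training such a classifier is shown below; the feature layout and the synthetic labels are illustrative assumptions, not the authors' dataset or exact pipeline.

```python
# Toy LightGBM object-of-interest classifier over angle-based features.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
# Hypothetical features: angle between each device's forward vector (HMD,
# left/right controller) and the direction to the candidate object.
X = rng.uniform(0.0, 180.0, size=(1000, 3))
y = (X.min(axis=1) < 20.0).astype(int)   # toy label: object is "of interest"

train = lgb.Dataset(X[:800], label=y[:800])
params = {"objective": "binary", "metric": "binary_error", "verbose": -1}
model = lgb.train(params, train, num_boost_round=50)
pred = (model.predict(X[800:]) > 0.5).astype(int)
print("held-out accuracy:", (pred == y[800:]).mean())
```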

A Mixed Reality Platform for Collaborative Technical Assembly Training

  • Minsoo Choi
  • Yeling Jiang
  • Farid Breidi
  • Christos Mousas
  • Mesut Akdere

We have developed a mixed reality (MR) platform that teaches basic mechanical engineering concepts as a learning environment for collaborative assembly tasks. In our platform, multiple co-located users interact with virtual objects simultaneously, and during that time the platform collects data related to participants’ collaboration and team behavior. The platform has four main sections: setup, introduction, training, and assessment. It provides the opportunity for users to interact with virtual objects while also acquiring technical knowledge. Specifically, for the technical component of the platform, users are asked to assemble a hydraulic pump by manipulating and fitting various parts and pieces into a provided pre-assembled blueprint. We conducted a preliminary review with a panel of three experts and received positive feedback and suggestions for further development of the platform.

AirHaptics: Vibrotactile Presentation Method using an Airflow from Audio Speakers of Smart Devices

  • Madoka Ito
  • Ryota Sakuma
  • Hiroki Ishizuka
  • Takefumi Hiraki

We perceive vibrotactile stimuli from smart devices such as smartphones when we use various applications. However, the vibrators in these devices can present stimuli with perceivable intensity only near a specific resonant frequency, making it challenging to offer varied vibrotactile stimuli. In this study, we propose a method to realize vibrotactile presentation over a wide range of frequencies using airflow vibration generated by the built-in audio speaker of a smart device. We implemented a system based on the proposed method using a smartphone and conducted experiments measuring the airflow pressure. We also propose applying the method to texture presentation using airflow.
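
A minimal sketch of driving a speaker with a low-frequency tone for this purpose is shown below, assuming the sounddevice library for playback; the frequency and amplitude are illustrative, not the paper's calibrated values.

```python
# Play a low-frequency tone whose airflow vibration can be felt near the speaker port.
import numpy as np
import sounddevice as sd

def play_airflow_tone(freq_hz=80.0, duration_s=1.0, sr=44100):
    t = np.linspace(0.0, duration_s, int(sr * duration_s), endpoint=False)
    signal = 0.8 * np.sin(2.0 * np.pi * freq_hz * t)   # amplitude is illustrative
    sd.play(signal.astype(np.float32), samplerate=sr)
    sd.wait()

play_airflow_tone()
```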

An AI-empowered Cloud Solution towards End-to-End 2D-to-3D Image Conversion for Autostereoscopic 3D Display

  • Jun Wei Lim
  • Jin Qi Yeo
  • Xinxing Xia
  • Frank Yunqing Guan

Autostereoscopic displays allow users to view 3D content on electronic displays without wearing any glasses. However, the content for glasses-free 3D displays needs to be in a 3D format from which novel views can be synthesized. Unfortunately, images/videos are still normally captured in 2D and cannot be directly utilized for glasses-free 3D displays. In this paper, we introduce an AI-empowered cloud solution for end-to-end 2D-to-3D image conversion for autostereoscopic 3D displays, or “CONVAS (3D)” for short. Taking a single 2D image as input, CONVAS (3D) automatically converts it into an image suitable for a target autostereoscopic 3D display. It is implemented on a web-based server so that users can submit conversion tasks and retrieve the results without geographical constraints.
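
One common route for such conversion, sketched below for intuition only, is monocular depth estimation (e.g., with an off-the-shelf model such as MiDaS) followed by depth-image-based rendering (DIBR); whether CONVAS (3D) uses this exact pipeline is not stated in the abstract.

```python
# Naive DIBR: shift pixels horizontally in proportion to normalized depth.
# The depth map is assumed to come from a monocular estimator; holes left by
# the warp are not filled in this sketch.
import numpy as np

def dibr_shift(image, depth, max_disp=16):
    """image: (H, W, 3) uint8; depth: (H, W), larger = nearer (assumed)."""
    h, w, _ = image.shape
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)   # normalize to [0, 1]
    disp = (d * max_disp).astype(int)
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]                           # nearer pixels shift further
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```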

An Interactive Haptic Display System with Changeable Hardness Using Magneto-Rheological Fluid

  • Yutaka Nakanishi
  • Akihiro Matsuura

We present a haptic display system with changeable hardness using magneto-rheological (MR) fluid. The major component is a haptic device comprising layers of MR fluid, contact-point and pressure sensors, and an electromagnet. The system enables multi-modal interaction using this device together with control circuits and a projector. We also developed two types of content aimed at multi-modal virtual and mixed reality experiences.

Applying Artificial Intelligence Techniques on Singing Teaching of Taiwanese Opera

  • Shih-Chieh Lin
  • Chien-Hsing Chou
  • Ming-Feng Ke
  • Shu-Han Liao
  • Yen-Hung Lin
  • Chiu-Pin Kuo

Taiwanese opera is an important cultural inheritance in Taiwan; however, it has been dying out in recent years. Although the Taiwan government and various Taiwanese opera troupes have worked hard for many years to promote this culture on campuses and have held several interest courses, this cultural inheritance is still fading. For older people, Taiwanese opera and Taiwanese culture are both precious cultural treasures and part of their childhood memories. Nowadays, young people in Taiwan are no longer familiar with the Taiwanese language, nor with Taiwanese opera singing, and it is hard for them to learn to appreciate this traditional culture. In this study, referring to the promotion methods currently used by drama troupes to teach the singing and posture of Taiwanese opera, we integrate artificial intelligence techniques into the teaching of traditional Taiwanese opera singing and posture. The proposed system analyzes students’ voices and postures and assists teachers in improving students’ learning performance. Students can compare their singing skills or postures with those of professional actors and adjust accordingly, and students in Taiwanese opera interest classes can practice independently at home without a professional teacher’s guidance. In campus promotion, this game-like method makes Taiwanese opera more acceptable to young people.

Augmented Reality Patient-Specific Registration for Medical Visualization

  • Isabela Figueira
  • Muhammad Twaha Ibrahim
  • Aditi Majumder
  • M. Gopi

In recent years, medical research has made extensive use of Augmented Reality (AR) for visualization. These visualizations provide improved 3D understanding and depth perception for surgeons and medical staff during surgical planning, medical training, and procedures. Often, AR in medicine involves impractical and extensive instrumentation in order to provide the precision needed for clinical use. We propose a mobile AR 3D model registration system for use in a practical, non-instrumented hospital setting. Our system takes a patient-specific model as input and overlays it on the patient using an accurate pose registration technique that requires only a single marker as a point of reference to initialize a point cloud-based pose refinement. Our method is automatic, easy to use, and runs in real time on a mobile phone. We conducted quantitative and qualitative analyses of the registration. The results confirm that our AR pose registration system produces an accurate and visually correct overlay of the medical data in real time.

Avatar Voice Morphing to Match Subjective and Objective Self Voice Perception

  • Hiiro Okano
  • Keisuke Mizuno
  • Haruna Miyakawa
  • Keiichi Zempo

We investigated how morphing an avatar’s voice away from the user’s own voice affects impressions of the avatar, and whether the preferred morphing differs between those who like and those who dislike their own voice. In the experiment, acoustic parameters such as fundamental frequency, spectral envelope, and aperiodic components were morphed based on acoustic signals recorded by the participants themselves, and we investigated participants’ impressions of an avatar speaking with the resulting voice. The results showed that those who liked their voice had the best impression of their original voice, while those who disliked it had a better impression of the morphed voice. This suggests that people who dislike their voice tend to seek their ideal in the avatar’s voice.

Can Haptic Feedback on One Virtual Object Increase the Presence of Another Virtual Object?

  • Sooyeon Lee
  • Myungho Lee

This paper investigates whether the increased presence from experiencing haptic feedback on one virtual object can transfer to another virtual object. Two similar studies were run in different environments: an immersive virtual environment and a mixed environment. Results showed that participants reported a high presence of an untouched virtual object after touching a virtual object in a virtual reality environment. In an augmented reality environment, on the other hand, it was difficult to confirm that such a presence transfer occurred.

Colorimetry Evaluation for Video Mapping Rendering

  • Eva Décorps
  • Christian Frisson
  • Emmanuel Durand

Perceptually accurate colour reproduction is a core requirement of video mapping applications, where an objective evaluation of the colour rendering chain that takes human perception into account is greatly beneficial. In this article, we present a workflow for the colorimetric evaluation of a video mapping software rendering chain, which we implemented in the open-source video mapping software Splash, together with a set of common image quality assessment metrics and tools for evaluating colour reproduction. We introduce an accompanying graphical visualization template to help with the accurate interpretation of the metrics used. We describe several use cases that we carried out with our tool, showing that the workflow enables simple, understandable, and reproducible colorimetric evaluation.
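
The abstract does not list the specific metrics used; as an illustration only, one perceptually motivated metric commonly used for colour reproduction evaluation is the CIEDE2000 colour difference, which scikit-image exposes directly. A minimal sketch, assuming the reference and rendered frames are same-sized sRGB images with hypothetical file names:

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab, deltaE_ciede2000

# Hypothetical file names; any pair of same-sized sRGB frames works.
reference = io.imread("reference_frame.png")[..., :3] / 255.0
rendered = io.imread("rendered_frame.png")[..., :3] / 255.0

# Compare in CIELAB, where distances better track human perception.
delta_e = deltaE_ciede2000(rgb2lab(reference), rgb2lab(rendered))

# Summarize per-pixel colour differences; Delta E around 1 is a
# just-noticeable difference, 2-3 is visible on close inspection.
print(f"mean dE00 = {delta_e.mean():.2f}, max dE00 = {delta_e.max():.2f}")
```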

Common Experience Sample 1.0: Developing a sample for comparing the characteristics of haptic displays

  • Takuya Oka
  • Kosuke Morimoto
  • Yohei Yanase
  • Keita Watanabe

Many haptic displays that provide haptic feedback to users have been proposed; however, differences in experimental environments make comparing these displays difficult. We therefore categorized the characteristics of haptic feedback based on existing research and developed a common experience sample that includes the virtual objects necessary to express each characteristic. In future work, we will study methods for evaluating displays using the proposed sample, aiming at the comparative evaluation of multiple displays.

CourseExpo: An Immersive Collaborative Learning Ecosystem

  • Connor Wilding Leonie
  • Robin Angotti
  • Kelvin Sung

Inspired by the need for remote learning technologies during the Covid-19 pandemic and the sense of isolation felt by remote learners, we reimagined the remote classroom as one that fosters collaboration and builds community, free of the constraints of the physical world. This paper presents a collaborative learning ecosystem that resembles a traditional city square where the avatars of learners and facilitators wander, commingle, discover, and learn together. The buildings in the city square are learning modules, which include typical knowledge units, assessment booths, and custom collaborative sketching studios. Our prototype realization of this concept demonstrated initial success, and we offer recommendations for future work.

Covid Reflections: AR in Public Health Communications

  • Ines Said
  • Austin J. Stanbury
  • Erica Delhagen
  • Amelia Winger-Bearskin

Augmented reality in public health communications is an under-explored field. We present Covid Reflections, a public health communications installation that employs augmented reality enhanced with AI-driven LiDAR body tracking to engage public audiences in short, health-oriented experiences. Covid Reflections helps audiences visualize potential health outcomes of Covid-19 by depicting the process of disease contraction, sickness, and potential hospitalization on a virtual avatar that mirrors the user's physical body in real time. The user is immersed in a "virtual first-hand experience" of Covid-19 and is thus supported in drawing concrete conclusions about the potential personal implications of contracting the disease.

Data Abstraction for Visual and Haptic Representations in Flow Visualization

  • Ayush Bhardwaj
  • Sungjoo Kang
  • Jin Ryong Kim

This paper presents a new approach to data abstraction for visual and haptic representations in immersive analytics using a mid-air haptic display. Visual and haptic abstraction is proposed to transform raw data (wind tunnel data) into another form for effective visual and haptic data mapping. Three main features are extracted: (i) magnitude of velocity, (ii) recirculation region, and (iii) vorticity. For each feature, visual and haptic abstractions are defined based on data characterization and data reduction. A preliminary study shows a promising direction toward multimodal data interaction in immersive analytics.
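
The paper's extraction code is not given in the abstract; for a 2D velocity slice of wind tunnel data, two of the listed features follow directly from the velocity field, as in this numpy sketch (the synthetic field, grid shape, and spacing are illustrative assumptions):

```python
import numpy as np

# Illustrative 2D velocity field on a regular grid (m/s); in practice
# u and v would come from the wind tunnel dataset.
ny, nx, h = 128, 256, 0.01          # grid size and spacing (m)
y, x = np.mgrid[0:ny, 0:nx] * h
u = np.sin(y) * np.cos(x)           # x-component of velocity
v = -np.cos(y) * np.sin(x)          # y-component of velocity

# Feature (i): magnitude of velocity.
speed = np.hypot(u, v)

# Feature (iii): vorticity of a 2D field, omega = dv/dx - du/dy.
dv_dx = np.gradient(v, h, axis=1)
du_dy = np.gradient(u, h, axis=0)
vorticity = dv_dx - du_dy

# Data reduction for visual/haptic mapping: normalize each feature.
speed_n = (speed - speed.min()) / np.ptp(speed)
vort_abs = np.abs(vorticity)
vort_n = (vort_abs - vort_abs.min()) / np.ptp(vort_abs)
```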

Dill Pickle: Interactive Theatre Play in Virtual Reality

  • Krzysztof Pietroszek
  • Manuel Rebol
  • Becky Lake

“Dill Pickle” is the first interactive immersive theatre experience in virtual reality that uses volumetric capture. In the play, a volumetrically captured actor plays the character of Robert. The user interacts with Robert through utterances that are either memorized or prompted with text or audio. The set was recreated through a process of photogrammetry.

Dynamic X-Ray Vision in Mixed Reality

  • Hung-Jui Guo
  • Jonathan Bakdash
  • Laura Marusich
  • Balakrishnan Prabhakaran

X-ray vision, which allows users to see through walls and other obstacles, is a popular technique in Augmented Reality (AR) and Mixed Reality (MR). In this paper, we demonstrate a dynamic X-ray vision window that is rendered in real time based on the user's current position and that changes as the user moves through the physical environment. Moreover, the location and transparency of the window are dynamically rendered based on the user's eye gaze. We build this X-ray vision window for a current state-of-the-art MR Head-Mounted Device (HMD), the HoloLens 2 [5], by integrating several features: scene understanding, eye tracking, and a clipping primitive.

Evaluation of Pseudo-Haptics system feedbacking muscle activity

  • Yoshihito Tanaka
  • Akira Kubota

Differences in perception between virtual reality (VR) and reality hinder immersion in VR. To improve immersion, many methods have introduced haptic feedback in VR using pseudo-haptics. However, these methods have rarely evaluated the effect of force feedback on pseudo-haptics that reflects the user's state. This paper proposes and evaluates a pseudo-haptics system that manipulates the control/display (C/D) ratio between reality and VR based on measured muscle activity. We conducted a user study under three conditions, in which the C/D ratio was constant, large, or small depending on the muscle activity. Our results indicate that pseudo-haptics is effective for small C/D ratio settings during low myoelectric intensity.
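
As a concrete illustration of the mechanism (not the authors' implementation), a C/D-ratio manipulation scales the displayed hand displacement relative to the real one; here the ratio is modulated by a normalized EMG reading, and the mapping constants are assumptions:

```python
import numpy as np

def cd_ratio(emg_norm: float, base: float = 1.0, gain: float = 0.5) -> float:
    """Map normalized muscle activity in [0, 1] to a C/D ratio.

    Higher muscle activity lowers the ratio, so the virtual hand lags
    the real hand and the held object feels heavier. `base` and
    `gain` are illustrative constants, not the study's values.
    """
    return base - gain * np.clip(emg_norm, 0.0, 1.0)

def displayed_position(real_prev, real_curr, virt_prev, emg_norm):
    """Advance the virtual hand by the scaled real-hand displacement."""
    delta = np.asarray(real_curr) - np.asarray(real_prev)
    return np.asarray(virt_prev) + cd_ratio(emg_norm) * delta

# Example frame update: hand moved 2 cm along x under strong effort.
print(displayed_position([0, 0, 0], [0.02, 0, 0], [0, 0, 0], emg_norm=0.8))
```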

Experience Variations Between Immersive and Non-Immersive RPGs

  • Ioannidis Marios
  • Vlasios Kasapakis

This work presents a preliminary evaluation of game experience variations between a non-immersive (N-IVR) and an immersive virtual reality (IVR) version of a Role-Playing Game (RPG). Our results indicate genre-specific variations in game experience between RPGs and other game genres studied in a similar manner. Moreover, our study identifies prior experience with N-IVR games in general, as well as with N-IVR RPGs, as an important factor affecting game experience in IVR RPGs.

Exploration of inter-marker interactions in Tangible AR

  • Anurag Kumar Singh
  • Jayesh S. Pillai

Inter-marker interactions in marker-based Augmented Reality mobile applications are usually limited to movement and placement. In this paper we explore multiple inter-marker interactions in the tangible AR space along with their use cases. We developed prototypes that demonstrate five primary inter-marker interactions: proximity of two or more markers, placement of markers over each other, flipping of markers, marker as a toggle, and marker as a controller. These interactions are designed to correlate with multiple application contexts. To demonstrate their usage, we chose lattice structures in Chemistry as the context. Using our prototypes and the insights from initial evaluations, we discuss the benefits and drawbacks of such interaction methods. We further outline opportunities for using these interactions and extending these concepts to several other contexts.

Exploring Vibration Intensity Map Of Hand Postures For Haptic Rendering In XR

  • Sang Ho Yoon
  • Youjin Sung
  • Yitian Shao
  • Rachel Kim

In this study, we explore the effect of hand posture on the haptic experience within the hand. With the hand now widely accepted as an input modality in XR, it is important to identify the positive and negative effects of hand postures on the haptic experience. We measured the vibration intensity across the hand using an accelerometer under various hand postures. Our results show that distinct hand postures alter how a vibrotactile actuator's feedback propagates across the hand.
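
The abstract does not define its intensity measure; a common choice for accelerometer-based vibration measurement is band-passed RMS acceleration, sketched here under that assumption (the 50-500 Hz band is also an assumption, covering typical vibrotactile actuator frequencies):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vibration_intensity(accel_xyz: np.ndarray, fs: float) -> float:
    """RMS magnitude of band-passed 3-axis acceleration.

    accel_xyz: (N, 3) accelerometer samples; fs: sample rate in Hz.
    Band-passing removes gravity and slow hand motion, leaving the
    actuator-driven vibration.
    """
    sos = butter(4, [50, 500], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, accel_xyz, axis=0)
    magnitude = np.linalg.norm(filtered, axis=1)
    return float(np.sqrt(np.mean(magnitude**2)))
```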

Finger Kinesthetic Haptic Feedback Device Using Shape Memory Alloy-based High-Speed Actuation Technique

  • Sang-Woo Seo
  • Seungjoon Kwon

Compared with tactile feedback gloves, kinesthetic feedback devices, which have been studied for several decades, have had difficulty interacting with the user owing to various problems such as large size, low portability, and high power consumption. Here, we present a bidirectional finger kinesthetic feedback device based on a shape memory alloy (SMA) that can provide an immersive virtual reality experience. The proposed device provides seamless kinesthetic feedback by integrating efficient power control of the SMA actuator, fast cooling of the SMA within one second, and high-precision motion tracking. The implemented device delivers gripping and hand-spreading forces of up to 10 N each to the index and middle fingers.

Generating Leg Animation for Walking-in-Place Techniques using a Kinect Sensor

  • Jingbo Zhao
  • Zhetao Wang
  • Yiqin Peng
  • Yaojun Wang

We present a kinematic approach based on animation rigging for generating real-time leg animation. Our main approach is to track the vertical in-place foot movements of a user with a Kinect v2 sensor and map the tracked foot heights to the motions of inverse kinematics (IK) targets. We align two IK targets with an avatar's feet and guide the virtual feet through cyclic walking motions using a set of kinematic equations. Preliminary testing shows that this approach can produce compelling real-time forward-backward leg animation during in-place walking.
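
The paper's exact kinematic equations are not given in the abstract; a plausible minimal form maps each tracked foot height to a normalized lift and swings the IK target forward or backward in proportion (all constants and names here are illustrative assumptions):

```python
# Illustrative gait parameters (not the paper's values).
STEP_LENGTH = 0.4   # forward-backward travel of a foot (m)
STEP_HEIGHT = 0.15  # maximum tracked foot lift (m)

def ik_target_offset(foot_height: float, swinging_forward: bool):
    """Map a tracked in-place foot height to an IK target offset.

    foot_height is the Kinect-tracked lift of one foot above the
    floor. Its normalized value drives the swing: the target moves
    forward while the foot is lifted and returns as it drops.
    """
    lift = max(0.0, min(foot_height / STEP_HEIGHT, 1.0))
    direction = 1.0 if swinging_forward else -1.0
    z = direction * (STEP_LENGTH / 2.0) * lift   # forward-backward swing
    y = foot_height                              # vertical lift passthrough
    return (y, z)

# Each frame: offset the avatar's foot IK target by (y, z).
print(ik_target_offset(0.08, swinging_forward=True))
```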

Guide Ring: Bidirectional Finger-worn Haptic Actuator for Rich Haptic Feedback

  • Zofia Marciniak
  • Seo Young Oh
  • Sang Ho Yoon

We introduce a novel wearable haptic feedback device that enriches the visual experience of virtual and augmented environments through bidirectional vibrotactile feedback driven by electromagnetic coils with permanent magnets. The device creates a guiding haptic effect through magnetic attraction and repulsion. Our proof-of-concept prototype enables haptic interaction by altering the position of the wearable structure and by varying vibration intensity and waveform pattern. Example applications illustrate how the proposed system delivers guided, rich haptic feedback.

Ha and Fu: Interface to Breathe on a Smartphone

  • Homei Miyashita
  • Hidenori Aoki

This study used front-camera face tracking to realize interactions such as blowing on a smartphone to make it foggy or tapping on it to make it puff. We used the parameters of lower-jaw opening, lip closing, and lip stretching to identify these actions. Additionally, we enabled pointing based on the position and direction of the face, and implemented four applications using this interface.
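
The abstract does not name its face-tracking SDK; ARKit-style blendshape coefficients expose the three parameters mentioned, so the breath-action detection could be sketched as simple thresholding. The parameter names, thresholds, and class labels below are all assumptions for illustration:

```python
# Hypothetical blendshape coefficients in [0, 1] from an ARKit-style
# face tracker: jaw_open (lower-jaw opening), mouth_close (lip
# closing), mouth_stretch (lip stretching). Thresholds illustrative.
HA_JAW, FU_CLOSE, FU_STRETCH = 0.5, 0.6, 0.3

def classify_breath(jaw_open: float, mouth_close: float,
                    mouth_stretch: float) -> str:
    """Classify a frame as 'ha' (open-mouth breath), 'fu' (pursed
    blow), or 'none' from three face-tracking parameters."""
    if jaw_open > HA_JAW:
        return "ha"    # wide-open jaw: warm open-mouth breath
    if mouth_close > FU_CLOSE and mouth_stretch < FU_STRETCH:
        return "fu"    # closed, unstretched lips: directed blow
    return "none"

print(classify_breath(jaw_open=0.7, mouth_close=0.1, mouth_stretch=0.2))
```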

Haptic Interaction Module for VR Fishing Leisure Activity

  • Yong Hae Heo
  • Seongho Kim
  • Juwon Um
  • Gyubin An
  • Sang-Youn Kim

This paper presents a tiny haptic interaction module that generates high resistive torque for VR fishing as a leisure activity. The module was developed using magnetorheological fluids and by optimizing its structure. The measured haptic torque varied from 0.3 N·cm to 2.4 N·cm as the applied voltage increased from 0 V to 5 V. The performance of the proposed actuator was qualitatively evaluated by constructing a virtual fishing environment in which a user can feel not only the weight of a target object but also its motion.

Immersive Analytics for Spatio-Temporal Data on a Virtual Globe: Prototype and Emerging Research Challenges

  • Simon Kloiber
  • Katharina Krösl
  • Tobias Schreck

We present our approach for the immersive analysis of spatio-temporal data using a three-dimensional virtual globe. We display quantitative data as country-shaped elevated polygons and animate the elevation levels over time to represent the temporal dimension. This approach allows us to investigate global patterns of behaviour, such as pandemic infection data. By using a virtual reality setting, we aim to improve the understanding of spatial data and potential global relationships. Based on the development of our prototype, we outline the research challenges we see emerging in this context.

Improving Pedestrian Safety around Autonomous Delivery Robots in Real Environment with Augmented Reality

  • Madoka Inoue
  • Kensuke Koda
  • Kelvin Cheng
  • Toshimasa Yamanaka
  • Soh Masuko

In recent years, the use of autonomous vehicles and autonomous delivery robots (ADRs) has increased. This paper explores how pedestrian safety around moving ADRs can be improved. To reduce pedestrian anxiety, we propose displaying various kinds of real-time information from the ADR in Augmented Reality (AR). A preliminary experiment was conducted in an outdoor environment where an ADR was operating within a 5G network. We found that AR has a positive effect in alleviating user anxiety around the ADR.

Investigation of User Performance in Virtual Reality-based Annotation-assisted Remote Robot Control

  • Thanh Long Vu
  • Dac Dang Khoa Nguyen
  • Sheila Sutjipto
  • Dinh Tung Le
  • Gavin Paul

This poster investigates the use of point cloud processing algorithms to provide annotations for robotic manipulation tasks completed remotely via Virtual Reality (VR). A VR-based system has been developed that receives and visualizes processed data from real-time RGB-D camera feeds. A real-world robot model has also been developed to provide realistic reactions and control feedback. The targets and the robot model are reconstructed in a VR environment and presented to users in different modalities. The modalities and the available information are varied between experimental settings, and the associated task performance is recorded and analyzed. The results, accumulated from 192 experiments completed by 8 participants, showed that point cloud data alone is sufficient for completing the task. Additional information, whether an image stream or preliminary processing presented as annotations, was found not to have a significant impact on completion time. However, combining the image stream with colored point cloud visualization was found to greatly enhance users' accuracy, reducing the number of missed target centers by 40%.

Leveraging multimodal sensory information in cybersickness prediction

  • Dayoung Jeong
  • Kyungsik Han

Cybersickness is one of the problems that undermine the user experience in virtual reality. While many studies have sought ways to alleviate cybersickness, only a few have considered it from a multimodal perspective. In this paper, we propose a multimodal, attention-based cybersickness prediction model. Our model was trained on a total of 24,300 seconds of data from 27 participants and yielded an F1-score of 0.82. Our results highlight the potential of modeling cybersickness from multimodal sensory information with a high level of performance and suggest that the model should be extended using additional, diverse samples.
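
The abstract does not specify the architecture; as a minimal sketch of what "multimodal, attention-based" fusion can mean, per-modality feature vectors can be fused with self-attention before classification. The PyTorch module below is an illustration under assumed sizes, not the paper's model:

```python
import torch
import torch.nn as nn

class MultimodalAttentionFusion(nn.Module):
    """Attention-based fusion over per-modality feature vectors.

    Embedding size, modality count, and head count are illustrative
    assumptions; the paper's actual architecture is not given.
    """

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)  # cybersick vs. not

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim), one embedding per sensor
        # stream (e.g., eye tracking, head motion, physiological).
        fused, _ = self.attn(feats, feats, feats)  # cross-modality attention
        return self.classifier(fused.mean(dim=1))  # pool over modalities

logits = MultimodalAttentionFusion()(torch.randn(8, 3, 64))
print(logits.shape)  # torch.Size([8, 2])
```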

LivePose: Democratizing Pose Detection for Multimedia Arts and Telepresence Applications on Open Edge Devices

  • Christian Frisson
  • Gabriel N. Downs
  • Marie-Ève Dumas
  • Farzaneh Askari
  • Emmanuel Durand

We present LivePose, an open-source (GPL-licensed) tool that democratizes pose detection for multimedia arts and telepresence applications, optimized for and distributed on open edge devices. We designed the LivePose architecture as a five-stage pipeline (frame capture, pose estimation, dimension mapping, filtering, output) sharing streams of data flow, distributable across networked nodes. We distribute LivePose as dependency packages and filesystem images optimized for edge devices (NVIDIA Jetson), and showcase the multimedia arts and telepresence applications it enables.
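
As a language-agnostic illustration of the five-stage design (not LivePose's actual classes or API), a streaming pipeline can be composed from generator stages, with each stage consuming and yielding a data stream:

```python
from typing import Callable, Iterable, Iterator

# Hypothetical stage implementations standing in for LivePose's
# capture, pose estimation, dimension mapping, filtering, output.
Stage = Callable[[Iterable], Iterator]

def capture() -> Iterator[dict]:
    for i in range(3):                       # stand-in for camera frames
        yield {"frame": i}

def estimate(frames):                        # pose estimation stage
    for f in frames:
        yield {**f, "pose": [(0.5, 0.5)]}    # dummy keypoint

def map_dims(items):                         # dimension mapping stage
    for it in items:
        yield {**it, "pose_px": [(x * 640, y * 480) for x, y in it["pose"]]}

def smooth(items):                           # filtering stage (identity here)
    yield from items

def output(items):                           # output stage, e.g. OSC/websocket
    for it in items:
        print(it)
        yield it

def run_pipeline(source, *stages: Stage):
    stream = source()
    for stage in stages:                     # chain the stages lazily
        stream = stage(stream)
    for _ in stream:                         # drain the pipeline
        pass

run_pipeline(capture, estimate, map_dims, smooth, output)
```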

Mapping of Locomotion Paths between Remote Environments in Mixed Reality using Mesh Deformation

  • Akshith Ullal
  • Nilanjan Sarkar

Remote mixed reality (RMR) allows users to be present and interact in other users' environments through photorealistic avatars. Common interaction objects are placed on surfaces in each user's environment, and interacting with these objects requires users to walk towards them. However, since the spatial configurations of a user's room and their avatar's room are not identical, for any given walking path of the user, an equivalent path must be found in the avatar's environment that respects that environment's spatial configuration. In this work, we use mesh deformation to obtain this path: we deform the mesh associated with the user's environment to fit the spatial configuration of the avatar's environment. This yields a correspondence between every point in the two environments, from which the equivalent path can be generated.
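
Once the deformed mesh is available, transferring a path point between environments reduces to locating its containing element and reusing its barycentric coordinates on the deformed counterpart. A 2D numpy sketch of that point transfer (illustrative only, not the authors' deformation solver):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    m = np.column_stack((b - a, c - a))
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

def map_point(p, tri_src, tri_dst):
    """Transfer p from a source triangle to its deformed counterpart."""
    w = barycentric(p, *tri_src)
    return w @ np.vstack(tri_dst)

# Source-room floor triangle and its deformed image in the avatar room.
tri_src = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
tri_dst = [np.array([0.0, 0.0]), np.array([3.0, 0.5]), np.array([0.5, 2.5])]

# Map one waypoint of the user's walking path into the avatar room.
print(map_point(np.array([0.5, 0.5]), tri_src, tri_dst))
```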

MetaTwin: Synchronizing Physical and Virtual Spaces for Seamless World

  • Henry Kim
  • Ayush Bhardwaj
  • Brandon Coffey
  • Dongbeom Ko
  • Sungjoo Kang
  • Jin Ryong Kim

This paper presents MetaTwin, a collaborative Metaverse platform that supports one-to-one spatiotemporal synchrony between physical and virtual spaces. Users can interact with other users and surrounding IoT devices without being tied to physical spaces. Resource sharing is implemented to allow users to share media, including presentation slides and music. We deploy MetaTwin in two different network environments (within the US, and internationally between Korea and the US) and summarize users' feedback about the experience.

Object Manipulation Method Using Eye Gaze and Hand-held Controller in AR Space

  • Ryo Ishibashi
  • Ikkaku Kawaguchi

When manipulating virtual objects in AR space, the target object is often partially or completely occluded by other objects (the occlusion problem). In addition, the hand-ray manipulation commonly used in many AR devices requires the user to keep the arm raised within the range of the hand-tracking sensor, causing the gorilla-arm problem. In this study, we propose an object manipulation method that combines eye gaze with a hand-held controller to mitigate both problems in AR environments. In the proposed method, the user controls the ray's direction with eye gaze, and adjusts the length of the ray and selects objects with the hand-held controller.
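
A minimal sketch of the described cursor placement (names, units, and the thumbstick mapping are illustrative assumptions): the eye tracker supplies the ray direction, the controller supplies a scalar length, and the selection point follows directly:

```python
import numpy as np

def cursor_position(eye_origin, gaze_dir, ray_length: float):
    """Place the selection cursor along the gaze ray.

    eye_origin: head/eye position; gaze_dir: gaze direction from the
    eye tracker; ray_length: value adjusted on the hand-held
    controller (e.g., with a thumbstick or trigger).
    """
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)                    # ensure unit direction
    return np.asarray(eye_origin, dtype=float) + ray_length * d

# Thumbstick pushed forward extends the ray each frame (illustrative).
length = 1.0
length += 0.02 * 0.8                          # 0.8 = thumbstick deflection
print(cursor_position([0, 1.6, 0], [0, -0.1, 1], length))
```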

PlayMeBack - Cognitive Load Measurement using Different Physiological Cues in a VR Game

  • Mohammad Ahmadi
  • Huidong Bai
  • Alex Chatburn
  • Burkhard Wuensche
  • Mark Billinghurst

We present a Virtual Reality (VR) game, PlayMeBack, to investigate cognitive load measurement in interactive VR environments using pupil dilation, Galvanic Skin Response (GSR), Electroencephalography (EEG), and Heart Rate (HR). The user is shown different patterns of tiles lighting up and is asked to replay each pattern by pressing the tiles in the same sequence in which they lit up. The task difficulty depends on the length of the observed pattern (3-6 keys). The task is designed to explore the effect of cognitive load on physiological cues, and whether pupil dilation, EEG, GSR, and HR can be used as measures of cognitive load.
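
The memory task itself is simple to state precisely; a minimal sketch of its trial logic (the tile count and function names are assumptions, not PlayMeBack's code):

```python
import random

def new_pattern(n_tiles: int, difficulty: int) -> list[int]:
    """Draw a pattern whose length (3-6) sets the task difficulty."""
    assert 3 <= difficulty <= 6
    return [random.randrange(n_tiles) for _ in range(difficulty)]

def check_replay(pattern: list[int], presses: list[int]) -> bool:
    """The trial succeeds only if every press matches in order."""
    return presses == pattern

pattern = new_pattern(n_tiles=9, difficulty=4)
print(pattern, check_replay(pattern, pattern))   # trivially True here
```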

Selection of Expanded Data Points in Immersive Analytics

  • Inoussa Ouedraogo
  • Huyen Nguyen
  • Patrick Bourdot

We propose a novel technique to facilitate the selection of data points, a type of data representation often encountered in immersive analytics. We designed and implemented this technique based on the expansion of data points following Fitts' law. A user study was conducted in a headset-based augmented reality environment. The results highlight both the performance of our technique in helping users select data points and the users' subjective appreciation of working with the expandable data points.
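
For context, Fitts' law (in its Shannon formulation) predicts movement time from target distance D and width W, which is why expanding a point's effective width lowers selection difficulty. A short worked example with illustrative regression constants:

```python
import math

def fitts_mt(d: float, w: float, a: float = 0.2, b: float = 0.15) -> float:
    """Predicted movement time MT = a + b * log2(D/W + 1).

    a and b are empirically fitted constants; the values here are
    illustrative, not from the paper's study.
    """
    return a + b * math.log2(d / w + 1.0)

# Expanding a 1 cm data point to 3 cm at a 40 cm reach:
print(f"small:    {fitts_mt(0.40, 0.01):.2f} s")  # ID = log2(41)   ~ 5.36 bits
print(f"expanded: {fitts_mt(0.40, 0.03):.2f} s")  # ID = log2(14.3) ~ 3.84 bits
```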

Sign Language in Immersive VR: Design, Development, and Evaluation of a Testbed Prototype

  • Elena Dzardanova
  • Vlasios Kasapakis
  • Spyros Vosinakis
  • Konstantina Psarrou

Immersive Virtual Reality (IVR) systems support several tracking modalities, including body, finger, eye, and facial-expression tracking, and can therefore support sign-language-based communication. The combined utilization of these tracking technologies requires careful evaluation to ensure the high-fidelity transfer of body posture, gestures, and facial expressions in real time. This paper presents the design, development, and evaluation of an IVR system utilizing state-of-the-art tracking options. The system was evaluated by certified sign language teachers to detect usability issues and to establish an appropriate methodology for a large-scale follow-up evaluation by users fluent in sign language.

Size Does Matter: An Experimental Study of Anxiety in Virtual Reality

  • Junyi Shen
  • Itaru Kitahara
  • Shinichi Koyama
  • Qiaoge Li

The emotional responses induced by VR scenarios have become a topic of interest; however, whether changing the size of objects in VR scenes induces different levels of anxiety remains an open question. In this study, we conducted an experiment to reveal how the size of a large object in a VR environment affects changes in participants' (N = 38) anxiety levels and heart rates. To holistically quantify the size of large objects in the VR visual field, we use, for the first time, the omnidirectional field of view occupancy (OFVO) criterion to represent the dimension of an object within the participant's entire field of view. The results showed that participants' heart rate and anxiety while viewing large objects were positively and significantly correlated with OFVO. This study reveals that increasing object size in VR environments is accompanied by a higher degree of user anxiety.
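
The abstract does not give OFVO's formula; if it is read as the fraction of the full viewing sphere subtended by the object, a Monte Carlo estimate from the user's viewpoint could look like the sketch below. This reading and all names are assumptions for illustration only:

```python
import numpy as np

def occupancy_fraction(hits_fn, n_samples: int = 100_000,
                       rng=np.random.default_rng(0)) -> float:
    """Estimate the fraction of the full viewing sphere covered by an
    object, by casting uniformly distributed directions from the
    viewpoint. hits_fn(dirs) -> boolean mask of rays hitting it."""
    v = rng.normal(size=(n_samples, 3))
    dirs = v / np.linalg.norm(v, axis=1, keepdims=True)  # uniform on sphere
    return float(np.mean(hits_fn(dirs)))

# Illustrative object: a unit sphere centered 2 m in front of the
# viewer, so rays within asin(1/2) = 30 degrees of +z hit it.
def hits_sphere(dirs):
    return dirs[:, 2] > np.cos(np.arcsin(0.5))

print(occupancy_fraction(hits_sphere))   # analytic value ~ 0.0670
```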

TeleStick: Video Recording and Playing System Optimized for Tactile Interaction using a Stick

  • Ryoto Uchihashi
  • Takumi Otsuka
  • Yuya Murakami
  • Ayaka Yoshizawa
  • Takuya Kawashima
  • Kaito Yamaguchi
  • Genta Ono
  • Tsukina Matsuhashi
  • Saki Yamada
  • Manoka Waguri
  • Youichi Kamiyama
  • Keita Watanabe

TeleStick is a system that records and replays tactile experiences together with visual information using a common camera and video monitor setup. TeleStick attaches a stick-type device with a tactile microphone to a camera so that the stick is consistently visible in the video, and records the tactile and audio information alongside the video using the two stereo channels. Users can feel as if they were in the video as they watch it while holding a stick-type device equipped with a speaker and a vibrator.

The Community Game Development Toolkit

  • Amelia Roth
  • Daniel Lichtman

The Community Game Development Toolkit is a set of tools that provides an accessible, intuitive workflow within the Unity game engine for students, artists, researchers, and community members to create their own visually rich, interactive 3D stories and immersive environments. The toolkit is designed to support diverse communities in representing their own traditions, rituals, and heritages through interactive visual storytelling, drawing on community members' own visual assets such as photos, sketches, and paintings, without requiring coding or other specialized game-design skills. Projects can be built for desktop, mobile, and VR platforms. This paper describes the background, implementation, and planned future development of the toolkit, as well as the contexts in which it has been used.

The Effect of Training Communication Medium on the Social Constructs Co-Presence, Engagement, Rapport, and Trust: Explaining how training communication medium affects the social constructs co-presence, engagement, rapport, and trust

  • Mohammadamin Sanaei
  • Marielle Machacek
  • James Coleman Eubanks
  • Peggy Wu
  • James Oliver
  • Stephen B. Gilbert

Communication performance is highly context-sensitive and difficult to quantify. The goal of the SCOTTIE project (Systematic Communication Objectives and Telecommunications Technology Investigations and Evaluations) is to investigate the impact of the communication medium on team performance and effectiveness. SCOTTIE is intended to inform the decision to travel, or to replace travel with telecommunications, with evidence rather than intuition and opinion. This poster analyzes four social communication constructs, co-presence, engagement, rapport, and trust, and compares them across face-to-face, video conferencing, and virtual reality training scenarios. Data from 105 participants across the three between-subjects conditions showed that engagement was the only construct that differed significantly between the three training environments.

The Effects of Gestural Filler in Reducing Perceived Latency in Conversation with a Digital Human

  • Junyeong Kum
  • Myungho Lee

The demand for digital humans in conversation systems has increased across multiple areas. Time delays can occur in a conversation with a digital human because of network latency and natural language processing, which can reduce user satisfaction and the perceived availability of the digital human. In this study, we designed a gestural filler that can be exhibited when a delay occurs during a conversation with a digital human, and compared its effects with those of a conversational filler.

TTTV2 (Transform the Taste and Visual Appearance): Tele-eat virtually with a seasoning home appliance that changes the taste and appearance of food or beverages

  • Homei Miyashita

We prototyped a seasoning appliance built on a “taste display” technology that employs a taste sensor to reproduce flavors by spraying and mixing colored, flavored liquids, printing an image onto the surface of another food. For example, when toasted bread is used as the medium, the appliance changes its appearance and taste into those of other food items, such as pizza or chocolate brownie, and the user can then virtually eat that food.

TTTV2 makes it possible for people with shellfish allergies to still enjoy the taste of crab virtually

  • Homei Miyashita

In this paper, we reproduce the taste of a crab cream croquette, which is harmful to people with shellfish allergies, using TTTV2 (Transform the Taste and Visual Appearance), a taste-reproduction technology based on the mixing and spraying of solutions. Participants recognized the food from the taste alone. Flavors can thus be tasted safely and virtually by reproducing them from safe ingredients that contain no allergens.

Using Virtual Reality Food Environments to Study Individual Food Consumer Behavior in an Urban Food Environment

  • Talia Attar
  • Oyewole Oyekoya
  • Margrethe Horlyck-Romanovsky

The objective of this research was to explore whether virtual reality can be used to study individual food-consumer decision-making and behavior through a public health lens, by developing a simulation of an urban food environment that included a street-level scene and three prototypical stores. Twelve participants completed the simulation and a survey. Preliminary results showed that 72.7% of participants bought food from the green grocer, 18.2% from the fast food store, and 9.1% from the supermarket. The mean presence score was 38.9 out of 49, and the mean usability score was 85.9 out of 100. This experiment demonstrates that virtual reality should be further considered as a tool for studying food-consumer behavior within a food environment.

Virtual eating experience of poisonous mushrooms using TTTV2

  • Homei Miyashita

VR content does not have to reproduce reality faithfully, which allows it to create a variety of experiences, and the same should be true of content for taste displays. It may be worthwhile to create experiences that cannot be had in the real world; for example, many VR games create thrill and excitement through experiences that would be life-threatening in reality. In this context, we developed taste content that allows users to safely experience the taste of poisonous mushrooms.

Visual Considerations for Augmented Reality in Urban Planning

  • Dario Lanfranconi
  • Richard Wetzel
  • Tobias Matter
  • Christian Schnellmann

The design process in architecture and urban planning has always been accompanied by a discourse on suitable visual representation. This has resulted in both a wealth of visualisation styles and a high sensitivity among planners regarding the visual communication of their work. The representation of projects throughout the phases of the planning process often adheres to established visual standards, from concept sketch to high-end rendering. A look at contemporary Augmented Reality (AR) apps for urban planning indicates, however, that the quality and precision of representation lag somewhat behind, entailing the risk that projects are misinterpreted. This poster describes our design research on three urban planning apps developed with Swiss municipalities and outlines results that improve visual representation in AR throughout the different planning phases.

Visualizing Machine Learning in 3D

  • Diego Rivera

Understanding machine learning models can be difficult when the models at hand have many parts. A visual model can help in understanding how the model functions. One way to visualize such models is with a 3D (three-dimensional) game development application that includes an interactive element, allowing users to manipulate the model (rotating and scaling it) and see changes at run time. This interactivity keeps users engaged while they explore how a machine learning model looks and behaves. This paper describes the process of visualizing a machine learning model in a 3D application.

Visualizing Perceptions of Non-Player Characters in Interactive Virtual Reality Environments

  • Sebastian Misztal
  • Anna Tarrach
  • Madeline Ebeling
  • Artur Ritter
  • Jonas Schild

Visual effects and elements that visualize the perceptions of one's own virtual character (also referred to as Visual Delegates) are often used in video games; e.g., status bars visualize the character's health, and filters on the interface layer visualize the character's state of mind. It is still largely unexplored whether Visual Delegates can also comprehensibly convey the perceptions of non-player characters. We therefore developed a medical virtual reality scenario using five different types of Visual Delegates to visualize three different perceptions of a virtual non-player patient, and tested for character assignment in a qualitative user study (N = 20). Our results can be used to decide more effectively which types of Visual Delegates can convey the perceptions of non-player characters.