Nick Whiting
Thursday, 29 November
Title : Discovering VR's Grammar: The Interplay of the Engineering and the Creative Processes
Nick Whiting
Technical Director at Epic Games, Studio Head at Epic Games Seattle
Abstract : Learning to make experiences and tell stories in a new medium is always challenging. Before a new grammar is established, creators transitioning to a new medium often rely on imitating and adapting their work from previous media. Epic Games, creator of the Unreal Engine technology and multiple hit video game franchises such as Fortnite and Gears of War, uses a deeply collaborative process in which engineering works hand-in-hand with creatives to develop new experiences. This talk will cover how Epic has evolved and adapted that process from working in both games and visual effects to the new medium of virtual reality. Using case studies from consumer products such as Bullet Train and Robo Recall, as well as experience in enterprise sectors, we will present Epic’s methodology for driving the underlying engine technology forward by leveraging the creative process. Furthermore, because Epic shares its technology with a diverse range of companies, we will share insights into how others are using the engine for unique VR applications outside of entertainment, and what can be done to help make VR more sustainable.
Biography : Nick Whiting currently oversees the development of the award-winning Unreal Engine 4's virtual and augmented reality efforts, including shipping the recent "Robo Recall," "Bullet Train," "Thief in the Shadows," "Showdown," and "Couch Knights" VR titles. Previously, he helped ship titles in the blockbuster "Gears of War" series, including "Gears of War 3" and "Gears of War: Judgment." Nick also currently serves as chair of the Khronos OpenXR initiative, working to create an open standard for VR and AR platforms and applications.
Before his work on VR and AR technologies, he was the lead engineer on Unreal Engine 4’s Blueprints visual scripting system, used by developers to democratize the creation of experiences between both engineers and creatives.
Prior to working at Epic, Nick worked on the “America’s Army” series of games for the U.S. Army, which served not only as a free first-person shooter for the public, but also as the technological basis for many internal Army training initiatives.
Nick received degrees in Electrical and Computer Engineering and Japanese Linguistics, as well as a certificate in Biomedical Engineering from the University of Colorado at Boulder, with coursework towards those degrees from the University of Tsukuba in Japan.
Masatoshi ISHIKAWA
Friday, 30 November
Title : Smart Systems Using High-speed Image Processing and Applications
Masatoshi ISHIKAWA
Abstract : High-speed image processing at 1,000 fps can create new value and possibilities in various types of smart systems, including VR, MR, and AR systems. First, an architecture of parallel decomposition based on a hierarchical parallel distributed system with a module structure, together with the design concept of dynamics matching, will be explained. In addition, a parallel-processing vision chip for 1,000 fps image processing and a 1,000 fps projector will be presented as key devices for high-speed smart systems. Based on these concepts and devices, application systems will be demonstrated with videos, including active vision, gesture recognition, target tracking, multi-target tracking, dynamic projection mapping, 3D shape measurement, book flipping scanning, micro visual feedback, variable-focus lenses, all-in-focus vision, a batting robot, a high-speed robot hand, and a running robot.
In such application systems, high-speed, low-latency response, real-time visual feedback, and parallel decomposition of tasks play important roles in creating immersive and realistic interactive systems. Processing speed faster than human perception and total latency lower than human response time are important design challenges to be solved in realizing such systems. High-speed image processing is one of the key technologies for creating novel systems. Finally, problems of smart systems and the future of artificial intelligence will be discussed.
Biography : Masatoshi ISHIKAWA received the B.E., M.E., and Dr. Eng. degrees in mathematical engineering and information physics from the University of Tokyo, Tokyo, Japan, in 1977, 1979, and 1988, respectively. He was a researcher (1979–1987) and senior researcher (1987–1989) at the Industrial Products Research Institute, Tsukuba, Japan. He moved to the University of Tokyo as an associate professor in 1989, and served there as a professor of mathematical engineering and information physics (1999–2001) and of information physics and computing (2001–2005). He was an executive adviser to the president (2004–2005), a vice-president (2004–2005), and an executive vice-president (2005–2006) of the University of Tokyo. He is currently a professor of creative informatics and the dean of the Graduate School of Information Science and Technology at the University of Tokyo. His current research interests include robotics, sensor fusion, high-speed vision, visual feedback, dynamic image control, and active perception. He has received more than 100 academic awards, including Best Paper Awards. He has been president-elect of IMEKO (the International Measurement Confederation) since 2015.
Hao Li
Saturday, 1 December
Title : Photorealistic Human Digitization and Rendering using Deep Learning
Hao Li
Abstract : The age of immersive technologies has created a growing need for processing detailed visual representations of ourselves, as virtual and augmented reality grow into the next-generation platform for online communication. A natural simulation of our presence in such virtual worlds is unthinkable without a photorealistic 3D digitization of ourselves. In visual effects and gaming, compelling CG characters can already be produced using state-of-the-art 3D capture technologies combined with the latest real-time rendering engines. However, the creation of high-quality digital assets is still bound by expensive and time-consuming production pipelines, and producing photorealistic results still requires authoring tasks that can only be achieved by skilled digital artists. We propose a deep learning approach to this high-dimensional reconstruction and rendering problem, introducing new convolutional neural network architectures that can process 3D data and synthesize images from 3D input with intuitive user controls. I will showcase our latest research in photorealistic 3D face digitization and expression generation, as well as hair modeling and rendering. This presentation will raise the question of whether a deep learning-based synthesis approach could potentially displace conventional graphics rendering pipelines.
Biography : Hao Li is CEO and co-founder of Pinscreen, assistant professor of Computer Science at the University of Southern California, and the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in computer graphics and computer vision focuses on digitizing humans and capturing their performances for immersive communication and telepresence in virtual worlds. His research involves the development of novel geometry processing, data-driven, and deep learning algorithms. He is known for his seminal work in non-rigid shape alignment, real-time facial performance capture, hair digitization, and dynamic full-body capture. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's 35 Innovators Under 35 in 2013, and has been awarded the Google Faculty Award, the Okawa Foundation Research Grant, and the Andrew and Erna Viterbi Early Career Chair. He won the Office of Naval Research (ONR) Young Investigator Award in 2018.
Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).