|Dieter Fox (UW)||Sanja Fidler (U Toronto)||Thomas Funkhouser (Princeton)|
|Abhinav Gupta (CMU)||Leonidas Guibas (Stanford)||Vladlen Koltun (Intel)|
|Jitendra Malik (UCB)||Matthias Nießner (TUM)||Josh Tenenbaum (MIT)|
Tremendous effort has been devoted to 3D scene understanding over the last decade. Thanks to this progress, a broad range of critical applications such as 3D navigation, home robotics, and virtual/augmented reality have already been made possible, or are within reach. These applications have drawn the attention, and raised the aspirations, of researchers from the fields of computer vision, computer graphics, and robotics.
However, significantly more effort is required to enable complex tasks like autonomous driving or home assistant robotics, which demand a deeper understanding of the environment than is possible today. This is because such tasks call for an understanding of 3D scenes at multiple levels: the ability to accurately parse, reconstruct, and interact with the physical 3D scene, as well as the ability to jointly recognize, reason about, and anticipate the activities of agents within the scene. 3D scene understanding thus becomes a bridge connecting vision, graphics, and robotics research.
The goal of this workshop is to foster interdisciplinary communication among researchers working on 3D scene understanding in computer vision, computer graphics, and robotics, and to draw the broader community's attention to this field. Through this workshop, we will discuss current progress and future directions, and we expect new ideas and discoveries in related fields to emerge.
Specifically, we are interested in the following problems:
|Siyuan Huang* (UCLA)||Chuhang Zou* (UIUC)||Hao Su (UCSD)||Alexander Schwing (UIUC)|
|Shuran Song (Princeton)||Siyuan Qi (UCLA)||Yixin Zhu (UCLA)|
|David Forsyth (UIUC)||Leonidas Guibas (Stanford)||Song-Chun Zhu (UCLA)|