The First CREST International Workshop on `Joint Design of Encoding and Decoding Methods for Plenoptic Imaging', supported by JST CREST Grant Number JPMJCR1764.
- Utilizing a ToF camera for analyzing optical responses of a scene
- Takuya Funatomi
- The time-of-flight (ToF) camera was originally developed for depth sensing, but its operating principle allows it to be used for other purposes. We treat a scene as an optical system that responds to input illumination and use a ToF camera to measure that response. Our study shows that the temporal point spread function of a scene can be recovered as an impulse response, yielding a light-in-flight image. Moreover, we show that material classification can be achieved from ToF measurements. These applications demonstrate future possibilities of the time-of-flight camera.
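The impulse-response view in this abstract can be illustrated with a toy deconvolution: if the measurement is the convolution of the illumination pulse with the scene's temporal point spread function, the latter can be estimated by regularized (Wiener-style) frequency-domain division. This is a hedged sketch, not the speaker's actual method; the function name `recover_impulse_response`, the Gaussian pulse, and the two-bounce scene are illustrative assumptions.

```python
import numpy as np

def recover_impulse_response(measurement, illumination, eps=1e-3):
    """Wiener-style deconvolution: estimate the scene's temporal impulse
    response h from measurement m = illumination (*) h (convolution).
    eps damps frequencies where the illumination carries little energy."""
    M = np.fft.rfft(measurement)
    S = np.fft.rfft(illumination, n=len(measurement))
    H = M * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n=len(measurement))

# Synthetic two-bounce scene: a direct return plus a delayed multi-path return.
t = np.arange(256)
h_true = np.zeros(256)
h_true[10] = 1.0                               # direct return
h_true[40] = 0.3                               # delayed inter-reflection
pulse = np.exp(-0.5 * ((t - 5) / 2.0) ** 2)    # Gaussian illumination pulse
m = np.convolve(pulse, h_true)[:256]           # what the sensor records
h_est = recover_impulse_response(m, pulse)     # peaks near t = 10 and t = 40
```

The regularization blurs the recovered peaks slightly, which mirrors the real situation: the finite temporal bandwidth of the illumination limits how sharply the impulse response can be resolved.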
- Takuya Funatomi has been an associate professor at Nara Institute of Science and Technology (NAIST) since 2015. He received his Ph.D. degree in Informatics from Kyoto University in 2007. He was then an assistant professor at Kyoto University until 2015, and a visiting assistant professor at Stanford University in 2014. His research interests include computer vision, computer graphics, and pattern recognition.
- Rethinking Structured Light
- Kyros Kutulakos
- Even though structured-light triangulation is a decades-old problem, much remains to be discovered about it---with potential ramifications for computational imaging more broadly. I will focus on two specific aspects of the problem that are influenced by recent developments in our field. First, programmable coded-exposure sensors vastly expand the degrees of freedom of an imaging system, essentially redefining what it means to capture images under structured light. I will discuss our efforts to understand the theory and expanded capabilities of such systems, and to build custom CMOS sensors that realize them. Second, I will outline our recent work on turning structured-light triangulation into an optimal encoding-decoding problem derived from first principles. This opens the way for adaptive systems that can learn on their own how to optimally control their light sources and sensors, and how to convert the images they capture into accurate 3D geometry.
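The encoding-decoding view of structured light mentioned above can be made concrete with a classical baseline: Gray-code stripe patterns, where each projector column is encoded over a sequence of binary images and each camera pixel decodes its observed code to recover a projector correspondence. This is a minimal sketch of that standard technique, not the speaker's optimal-coding framework; the function names and the synthetic 8-column setup are illustrative assumptions.

```python
import numpy as np

def gray_to_binary(g):
    # Invert the Gray code: binary = g ^ (g >> 1) ^ (g >> 2) ^ ...
    b = np.array(g, copy=True)
    mask = b >> 1
    while mask.any():
        b ^= mask
        mask >>= 1
    return b

def decode_gray_patterns(images, white, black):
    """Decode per-pixel projector column indices from captures under
    Gray-code stripe patterns (most significant bit first). `white` and
    `black` are captures under full-on/full-off illumination, used to
    set a per-pixel binarization threshold."""
    thresh = (white + black) / 2.0
    code = np.zeros(images[0].shape, dtype=np.int64)
    for img in images:
        code = (code << 1) | (img > thresh)
    return gray_to_binary(code)

# Synthetic example: an 8-column projector, 3 Gray-code patterns, and an
# ideal 1x8-pixel camera where each pixel sees exactly one column.
cols = np.arange(8)
gray = cols ^ (cols >> 1)                      # 3-bit Gray codes
images = [((gray >> s) & 1).astype(float)[None, :] for s in (2, 1, 0)]
white = np.ones((1, 8))
black = np.zeros((1, 8))
decoded = decode_gray_patterns(images, white, black)   # recovers columns 0..7
```

Gray codes are the classical choice here because adjacent columns differ in only one bit, so a threshold error at a stripe boundary perturbs the decoded index by at most one column.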
- Kyros Kutulakos is a Professor of Computer Science at the University of Toronto. He received his PhD degree from the University of Wisconsin-Madison in 1994 and his BS degree from the University of Crete in 1988, both in Computer Science. Kyros has been a pioneer in the area of computational light transport, developing theoretical tools and computational cameras to analyze light propagation in real-world environments. He is the recipient of an Alfred P. Sloan Fellowship, an Ontario Premier's Research Excellence Award, a Marr Prize in 1999, a Marr Prize Honorable Mention in 2005, and four other paper awards (CVPR 1994, ECCV 2006, CVPR 2014, CVPR 2017). He was Program Co-Chair of CVPR 2003 and ICCV 2013, and also served as Program Co-Chair of the second ICCP conference in 2010.
- Time-resolved imaging with lateral electric field charge modulators
- Keiichiro Kagawa, Keita Yasutomi, and Shoji Kawahito
- We have been developing ultra-high-speed time-resolving CMOS image sensors for time-of-flight depth imaging, fluorescence lifetime imaging, and diffuse optical tomography based on lateral electric field charge modulators (LEFM). In this talk, the operation principle of our sensors is briefly explained, and examples of time-resolved imaging of multi-path interference are shown.
- Keiichiro Kagawa received the Ph.D. degree in engineering from Osaka University, Osaka, Japan, in 2001. In 2001, he joined the Graduate School of Materials Science, Nara Institute of Science and Technology, as an Assistant Professor. In 2007, he joined the Graduate School of Information Science, Osaka University, as an Associate Professor. Since 2011, he has been an Associate Professor with Shizuoka University, Hamamatsu, Japan. His research interests cover high-performance CMOS image sensors, imaging systems, and biomedical applications.
- Reconstructing Scenes with Mirror and Glass Surfaces
- Michael Goesele
- 3D scanning is an important tool for creating digital models of real-world objects and scenes. Planar reflective surfaces such as glass and mirrors are, however, notoriously hard to reconstruct for most current 3D scanning techniques. When treated naïvely, they introduce duplicate scene structures, effectively destroying the reconstruction altogether. Our key insight is that an easy-to-identify structure attached to the scanner (in our case an AprilTag) can yield reliable information about the existence and the geometry of glass and mirror surfaces in a scene. We introduce a fully automatic pipeline that allows us to reconstruct the geometry and extent of planar glass and mirror surfaces while being able to distinguish between the two. Furthermore, our system can automatically segment observations of multiple reflective surfaces in a scene based on their estimated planes and locations. In the proposed setup, minimal additional hardware is needed to create high-quality results. We demonstrate this using reconstructions of several scenes with a variety of real mirrors and glass surfaces.
This is joint work with Thomas Whelan, Steven J. Lovegrove, Julian Straub, Simon Green, Richard Szeliski, Steven Butterfield, Shobhit Verma, and Richard Newcombe.
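The geometric core of detecting a mirror from a reflected fiducial can be sketched with a toy calculation: a mirror maps each 3D point to its reflection, the segment between a point and its reflection is parallel to the mirror normal, and its midpoint lies on the mirror plane. The sketch below is illustrative only (the function `mirror_plane_from_correspondences` and the simple averaging scheme are assumptions, not the authors' pipeline).

```python
import numpy as np

def mirror_plane_from_correspondences(P, Q):
    """Estimate a mirror plane (unit normal n, offset d; plane points x
    satisfy n @ x + d == 0) from Nx3 arrays of 3D points P and their
    mirror reflections Q. Uses two facts: P - Q is parallel to the plane
    normal, and each midpoint (P + Q) / 2 lies on the plane.
    Assumes all points in P lie on the same side of the mirror."""
    n = (P - Q).mean(axis=0)          # average direction of P - Q
    n /= np.linalg.norm(n)
    d = -(P + Q).mean(axis=0) / 2.0 @ n   # mean midpoint must satisfy n@x+d=0
    return n, d

# Toy check against the plane z = 1, i.e. n = (0, 0, 1), d = -1.
P = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 3.0], [0.0, 1.0, 2.5]])
Q = P.copy()
Q[:, 2] = 2.0 - P[:, 2]               # reflect each point across z = 1
n, d = mirror_plane_from_correspondences(P, Q)
```

In practice the point pairs would come from the directly observed and mirrored AprilTag corners, and a least-squares fit over many observations would replace the simple averaging used here.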
- Michael Goesele received his diploma in Computer Science from Ulm University in 1999. He then joined the Max Planck Institute for Computer Science in Saarbrücken and earned his Ph.D. from Saarland University in 2004. After a two-year postdoctoral stay at the University of Washington (Seattle), he joined TU Darmstadt in 2007. His research interests include computer graphics, computer vision, and massively parallel computing.