Venue

Lecture room 3,
Graduate School of Information Science,
Nara Institute of Science and Technology
8916-5 Takayama-cho, Ikoma,
Nara 630-0192, JAPAN

Access

Program

March 16th, 2017

13:30-18:00 Symposium

18:00-19:00 Lab tour at NAIST

March 17th, 2017

Lab tour at NAIST

Registration

Register now!

Speakers

Dr. Guillaume Caron

e-Cathedral: On the digital archiving of the largest medieval Gothic church of France

The e-Cathedral research program was born in the context of cultural heritage preservation. It is dedicated to the full digitization of the cathedral of Amiens, France, in order to obtain a model as close as possible to reality. This heritage building is the tallest complete Gothic cathedral built in the Middle Ages (13th century) and has been a UNESCO World Heritage Site since 1981.

e-Cathedral was launched in the fall of 2010 as a fifteen-year program, and this talk gives an overview of the digitization results obtained during its first six years. The acquisition of precise 3D geometry and high-definition color of the heritage building is presented through digitization tools and computer vision research.

Furthermore, the use of the model is another important target of this multidisciplinary program, which merges geographical surveying, information technologies, robotics, and the history of art and architecture. Some applications in these fields will be highlighted, such as the interactive visualization of hundreds of millions of 3D points on a smartphone, assistive virtual camera control for tours, and the first actual map of the cathedral of Amiens.

Core contributors to the content of this invited talk: E. Mouaddib (Professor), D. Groux-Leclet (PhD), N. Crombez (PhD), and Z. Habibi (PhD).

Dr. Cedric Demonceaux

Computer vision techniques using external sensors
Pose estimation, 3D reconstruction, and scene analysis are well-studied topics in computer vision. Recent techniques can localize a camera and reconstruct a scene in 3D with very good accuracy. The goal of this talk is to examine whether we can go further by using external sensors to help the camera determine where it is or to build a 3D model of the scene. We will show that recent computer vision techniques can be outperformed by exploiting prior knowledge: the 3D structure of the scene (from depth information) or the vertical direction (from an inertial sensor). The presentation is divided into two main parts. First, we assume that the structure of the scene is known and show how this information can be used for computer vision tasks; for instance, we can precisely localize a 2D camera in the scene and extract the background of the scene by removing dynamic objects. Second, we assume that the camera is synchronized with an inertial sensor. In this case, we show that the pose estimation problem for a pinhole camera can be greatly simplified and that the 3D lines of the scene can be reconstructed from a single image of a non-central camera.
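To illustrate the second idea, here is a minimal sketch (not the speaker's actual method) of why a known vertical direction simplifies pose estimation: once the camera frame is aligned with the gravity direction measured by the inertial sensor, only the rotation about the vertical axis remains unknown, so the rotation has 1 DoF instead of 3.

import numpy as np

def align_to_gravity(g_cam):
    """Rotation mapping the gravity direction measured in the camera
    frame (g_cam) onto the world vertical axis [0, 0, 1]."""
    g = g_cam / np.linalg.norm(g_cam)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)                      # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), np.dot(g, z)  # sin and cos of the angle
    if s < 1e-12:                           # already aligned, or opposite
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    # Rodrigues formula for the rotation taking g to z
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

# After this alignment the remaining camera rotation is a pure yaw
# R_z(theta), so the pose problem has 4 unknowns (theta + translation)
# instead of 6.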

Dr. Fabio Morbidi

Feature-based and Dense Omnidirectional Visual Compass for Autonomous Robots
Am I heading in the right direction? This is a fundamental question that every mobile robot should be able to answer to be deemed truly "autonomous". Cameras are increasingly used for robot navigation in unknown environments, since they are lightweight and inexpensive, and they provide richer information about the surrounding environment than other on-board sensors (e.g., IMUs or gyroscopes). Because of their robustness, compact form factor, and enlarged field of view, omnidirectional cameras have lately gained prominence in robotics research. In this talk, I will present a feature-based and a dense omnidirectional visual compass for catadioptric cameras (i.e., cameras combining convex mirrors and lenses), which I have recently co-designed and experimentally validated on different robotic platforms. The former algorithm utilises the image projection of 3D parallel lines to estimate the heading angle of the camera-robot, while the latter leverages the phase correlation method in the 2D Fourier domain. I will conclude with some possible avenues for future research, including the attitude estimation of quadrotor UAVs and the visual guidance of a robotic wheelchair in the framework of the European project ADAPT.
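As a rough illustration of the dense approach (a sketch of the general phase correlation technique, not the author's implementation): when an omnidirectional image is unwrapped into a panorama, a rotation of the robot about the vertical axis becomes a horizontal image shift, which phase correlation recovers in the 2D Fourier domain.

import numpy as np

def heading_change(pano_prev, pano_curr):
    """Estimate the yaw change (radians) between two unwrapped
    panoramas of shape (H, W), where the W axis spans 360 degrees."""
    F1 = np.fft.fft2(pano_prev)
    F2 = np.fft.fft2(pano_curr)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12     # keep phase only
    corr = np.fft.ifft2(cross_power).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)[1]
    W = pano_prev.shape[1]
    if shift > W // 2:                             # wrap to a signed shift
        shift -= W
    return 2.0 * np.pi * shift / W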

Dr. Shohei Nobuhara

Aqua Vision: Light Field Modeling of Underwater Projector-Camera System and Its Applications
Recent advances in 3D shape and motion measurement and understanding in computer vision play an important role in developing emerging real-world applications such as person identification from surveillance videos and autonomous driving. In these successful application scenarios, however, the capture targets are assumed to be opaque and in a uniform medium. Measurement techniques for translucent objects, which involve refraction, transmission, scattering, and so on, are still an open problem in computer vision, even though they have a wide variety of applications in bioinformatics, biology, fishery, and aquaculture, since objects in water or in microscopic environments are generally translucent. This talk will introduce our "aqua vision" project, which aims to investigate image-based 3D sensing techniques for such challenging scenarios using regular cameras, projectors, and mirrors.
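A key complication in such underwater projector-camera setups is that rays bend at the housing interface. A minimal sketch of this ingredient (the vector form of Snell's law under an assumed flat-port geometry, not the project's actual light-field model):

import numpy as np

def refract(d, n, eta1=1.0, eta2=1.33):
    """Refract unit direction d at a surface with unit normal n
    (pointing toward the incident side), going from a medium with
    refractive index eta1 (air) into eta2 (water). Returns the
    refracted unit direction, or None for total internal reflection."""
    r = eta1 / eta2
    cos_i = -np.dot(n, d)
    sin2_t = r**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                    # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d + (r * cos_i - cos_t) * n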

Dr. Takahiro Okabe

Multispectral Active Illumination

The appearance of an object depends not only on the geometric and photometric properties of the object but also on the light source illuminating it. In particular, the appearance depends on the color, i.e., the spectral intensity, as well as on the direction of the light source. Therefore, active illumination using multispectral light sources is useful for recovering the spectral properties of a scene and for investigating highly wavelength-dependent interactions such as refraction, diffraction, interference, scattering, and fluorescence.

In this talk, we will introduce two topics in multispectral active illumination. One is based on our multispectral light stage, termed Kyutech Light Stage I. It consists of 32 LED clusters in different directions, each of which has 9 LEDs with different spectral intensities. The other is based on a consumer DLP projector with multiple primary colors. Specifically, we show our recent results on diffuse-specular separation under multispectral and multidirectional light sources, multispectral direct-global separation, and image-based spectral relighting.
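For readers unfamiliar with image-based relighting, it rests on the linearity of light transport: an image under any illumination is a weighted sum of basis images, one per light source. A minimal sketch of this general principle (variable names are ours, not from the talk):

import numpy as np

def relight(basis_images, weights):
    """basis_images: array of shape (L, H, W), one image per light
    source; weights: length-L intensities of the target illumination.
    Returns the relit (H, W) image."""
    return np.tensordot(weights, basis_images, axes=1)

With the light stage described above, one would have L = 32 x 9 = 288 basis images (32 directions times 9 spectra), and any mixture of those LEDs can be simulated after capture.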

Dr. Nicolas Ragot

Environment perception for intelligent vehicles at the IRSEEM laboratory
Perceiving the environment surrounding a vehicle in order to provide it with capabilities for mobility, surveillance, interaction, and autonomy is a heavily studied subject in mobile robotics. Indeed, perception is an essential and critical task on the road towards autonomous vehicles, which will be a milestone for future means of transport (automobiles, public transport shuttles, mobility solutions for people with disabilities, etc.). In this field of application, the perception of the scene is mainly spatio-temporal: the vehicle has to build a geometric, three-dimensional representation of an environment whose physical properties vary over time: the vehicle is mobile while objects are in motion, environmental conditions vary outdoors, and the observed environments are diverse. In this framework, the IRSEEM laboratory has been conducting research on perception systems for autonomous mobile applications for about ten years. This presentation will be an opportunity to review the completed and ongoing work on these systems dedicated to the perception and analysis of environments (e.g., panoramic vision from catadioptric and fish-eye cameras).

Dr. Ryusuke Sagawa

Dense 3D Reconstruction from High Frame-Rate Video with Projector-Camera System
Dense 3D reconstruction of fast-moving objects could contribute to various applications such as body structure analysis and accident avoidance. In this talk, we introduce a technique based on a one-shot scanning method, which reconstructs a 3D shape for each frame of a high frame-rate video capturing a scene illuminated by a static projected pattern. To avoid instability in the image processing, we propose a single-colored wave grid pattern for finding correspondences between the projector and the camera. We show several results of dense 3D reconstruction of fast-moving objects captured by a high-speed camera.
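Once such projector-camera correspondences are established, each 3D point follows by standard triangulation. A minimal sketch of that final step (assuming calibrated devices and that a correspondence identifies a projector light plane; this is the generic technique, not the speakers' specific pipeline):

import numpy as np

def triangulate(cam_center, ray_dir, plane_n, plane_d):
    """Intersect the camera ray X = cam_center + t * ray_dir with the
    projector light plane {X : plane_n . X + plane_d = 0}."""
    t = -(plane_n @ cam_center + plane_d) / (plane_n @ ray_dir)
    return cam_center + t * ray_dir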

Dr. Hideaki Uchiyama

Onsite benchmarking for visual SLAM
At ISMAR 2015 and VRSJ 2016, tracking competitions were successfully organized to compare the performance of visual SLAM techniques. The task was to mark points at 3D coordinates given by the organizers while participants moved their hand-held cameras around the competition site. To achieve this, participants first acquired the coordinate system of the site at the starting area, and then tracked 6-DoF camera poses to find the given coordinates in the site. For the evaluation, the 3D positions of the marked points were compared with their ground truths, so that the absolute error of visual SLAM could be measured. On-site evaluation is important because it prevents excessive parameter tuning by participants and reveals the actual performance of the methods. In this talk, we present the complete process of the competition and describe one example method for completing the task.
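A minimal sketch of such an absolute-error measure (the exact scoring formula used at the events may differ):

import numpy as np

def absolute_error(marked, ground_truth):
    """marked, ground_truth: (N, 3) arrays of 3D points in the site's
    coordinate system. Returns per-point errors and their RMSE."""
    errors = np.linalg.norm(marked - ground_truth, axis=1)
    return errors, np.sqrt(np.mean(errors**2))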

Contact

funatomi [at] is.naist.jp