Demos

The Demo Chairs are pleased to announce the acceptance of the following demos:


Dense 3D One-shot Scanning System for Capturing Shapes of Fast-moving Objects

Demonstrators: Ryo Furukawa, Ryusuke Sagawa, and Hiroshi Kawasaki

Abstract: In this demonstration, dense 3D reconstruction systems for extremely fast-moving objects are presented. The system is based on a one-shot scanning method that can reconstruct the 3D shape from a single image in which a dense, simple pattern is projected onto the object. To realize dense 3D reconstruction from a single image, two methods are implemented in the system: (1) an efficient line detection technique based on a de Bruijn sequence and belief propagation, and (2) an extension of the shape-from-grid technique.
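
To give a flavor of the pattern coding mentioned above, the sketch below generates a de Bruijn sequence with the standard recursive construction. It is a generic illustration, not the demonstrators' implementation: in such a sequence every window of n consecutive symbols is unique, so a small local neighborhood of projected stripes is enough to identify its position in the pattern.

```python
def de_bruijn(k, n):
    """Generate a de Bruijn sequence B(k, n): a cyclic string over an
    alphabet of k symbols in which every length-n string appears
    exactly once as a substring (standard recursive construction)."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# 3 symbols (e.g. stripe colors), windows of length 3 -> 27 symbols,
# each length-3 window occurring exactly once around the cycle.
seq = de_bruijn(3, 3)
```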


Areograph: Photo-realistic Models from Images

Demonstrators: Philip McLeod, Steven Mills, Jiri Fajtl, and Luke Reid

Abstract: We present Areograph, a system for producing photo-realistic 3D models from images. This is achieved through a combination of computer vision techniques for reconstructing a 3D surface model and computer graphics techniques for dynamic texturing.


HAKA1: Testing of computer vision algorithms goes mobile

Demonstrators: Reinhard Klette and John Morris

Abstract: Adapting vision-based algorithms for adequate performance in uncontrolled environments (i.e., outside laboratories) is still a challenge for the vision community. Our demonstration is based on the research vehicle HAKA1, which has proved to be an excellent tool for testing vision algorithms in a broad range of environments. It has been modified so that up to nine cameras can be mounted behind the windscreen. Other sensors (e.g., laser range finders, GPS antennas) can be attached to the custom roof rack. An important application for this mobile platform is the development of vision-based driver assistance systems. Among the wide range of vision-based algorithms that can be used (e.g., road modeling, lane detection, segmentation, and object classification), motion and depth detection play a crucial role in understanding the dynamic environment of the vehicle. For this demonstration, we will focus on depth estimation and present a real-time stereo system capable of processing high-definition (HD) images (i.e., 1-megapixel resolution) at up to 30 frames per second (fps).
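
For readers unfamiliar with stereo depth estimation, the following is a minimal sum-of-absolute-differences block-matching sketch. It illustrates the basic principle only; the function and parameter names are illustrative, and it bears no relation to the real-time HD pipeline being demonstrated.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=16, window=5):
    """Toy SAD block matching: for each pixel of the rectified left
    image, find the horizontal shift (disparity) into the right image
    that minimizes the sum of absolute differences over a square
    window, then pick the winner per pixel."""
    h, w = left.shape
    half = window // 2
    # cost[d] holds the window-aggregated SAD for candidate disparity d
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # per-pixel absolute difference at disparity d
        diff = np.abs(left[:, d:].astype(np.float32) -
                      right[:, :w - d].astype(np.float32))
        agg = np.full((h, w), np.inf)
        # naive window aggregation (a box filter would be faster)
        for y in range(half, h - half):
            for x in range(half, w - d - half):
                agg[y, x + d] = diff[y - half:y + half + 1,
                                     x - half:x + half + 1].sum()
        cost[d] = agg
    return np.argmin(cost, axis=0)  # winner-take-all disparity map
```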


Wearable Sensor Device for Automatic Recording of Hand Drawings

Demonstrators: Takuya Maekawa, Akisato Kimura, and Hitoshi Sakano

Abstract: Drawing and writing are two of the most important human activities when it comes to recording events and information. There are many drawing and writing activities in our daily lives, including taking memos during phone calls, jotting down recipes while watching a cooking show, and taking notes during lectures. Needless to say, the digitization of hand-drawn documents is important, and thus many products and methods for capturing hand drawings have been developed. For example, image scanners and pen tablets are already widely used to scan office documents and capture hand-drawn sketches. Systems have also been developed that capture pen strokes with a camera mounted on the pen tip and special paper, or that track the pen position with ultrasound and infrared technologies. However, many of these methods require a special pen, paper, and/or apparatus. Thus, when we want to capture hand drawings with these methods, we have to do so actively, e.g., by preparing special pens and paper. In this work, we try to automatically capture all the hand drawings found in our daily lives without any explicit action by the user. Recent advances in sensing technology enable us to record our daily-life data anywhere and at any time using small, always-on wearable sensors. Our aim is to capture hand drawings automatically with an always-on wearable sensor device equipped with a camera.


Demo Chairs

For more information, please contact the Demo Chairs: