

Interactive 3D Reconstruction for Urban Planning

Outdoor Augmented/Mixed Reality applications often require a model of the real environment, either for tracking or for interactive modification. While these models are often built manually with 3D modeling tools, no automated acquisition pipeline is yet available for this purpose. Especially in urban planning, the current state of existing buildings often needs to be available as a 3D model. Since manual model creation is a tedious and time-consuming task, automated 3D reconstruction is clearly needed. We therefore came up with the idea of an AR scout who is equipped with a camera, GPS, and a UMPC (ultra-mobile PC). The scout explores the environment and delivers a sequence of outdoor images annotated with GPS tracking data (and possibly also orientation information from an inertial tracker). These images are transmitted to a reconstruction pipeline which processes the data iteratively. After at least three different images, a first initial 3D model can be generated using sophisticated computer vision algorithms. Currently, the modeled 3D scene is represented as a textured 3D point cloud; we are also working on building textured 3D surface models. The reconstruction is not limited to urban scenes but can also be applied to capturing individual objects such as chairs, tables, and so forth.
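
As a rough illustration of the data the scout delivers, the following sketch shows what a single annotated frame could look like. The record layout and field names are hypothetical and only mirror the description above (an image plus GPS position, optionally orientation); they are not taken from the actual system.

    # Hypothetical record for one frame delivered by the AR scout.
    # Field names are illustrative only, not part of the real system.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class ScoutFrame:
        image_path: str        # image captured by the UMPC's camera
        latitude: float        # GPS position (degrees)
        longitude: float
        altitude: float        # metres above sea level
        # roll, pitch, yaw from an inertial tracker, if available
        orientation: Optional[Tuple[float, float, float]] = None

    # As noted above, an initial model can only be computed once at least
    # three overlapping frames have arrived.
    MIN_FRAMES_FOR_INITIAL_MODEL = 3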


The focus of our approach is to achieve fast response times of the reconstruction engine rather than very high quality. This allows users to interact with the different modules of the reconstruction engine and to set certain parameters on the fly if required. Moreover, images which are not suitable for the 3D reconstruction can be rejected by the engine, and the user gets immediate feedback. The figure below shows the flow of the 3D reconstruction. The images are managed using our XML-based persistent database called Muddleware. This database is designed for multi-user data exchange based on the document object model (DOM). A very powerful feature is the watchdog mechanism, which allows a callback to be registered on an attribute change (addressed via an XPath expression). Our system exploits this feature to notify each attached component about incoming and outgoing data.
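
To make the watchdog idea more concrete, here is a minimal stand-in in Python that registers a callback on an attribute change addressed by an XPath expression. This is only an illustrative sketch of the mechanism; it is not the Muddleware API, and all names in it are made up.

    # Illustrative sketch of an XPath-triggered watchdog, NOT the Muddleware API.
    import xml.etree.ElementTree as ET

    class WatchdogStore:
        def __init__(self, xml_text):
            self.root = ET.fromstring(xml_text)
            self.watchdogs = []   # (xpath, attribute, callback) triples

        def register_watchdog(self, xpath, attribute, callback):
            """Invoke 'callback' whenever 'attribute' changes on the node at 'xpath'."""
            self.watchdogs.append((xpath, attribute, callback))

        def set_attribute(self, xpath, attribute, value):
            node = self.root.find(xpath)
            if node is None:
                raise KeyError("no node matches " + xpath)
            old = node.get(attribute)
            node.set(attribute, value)
            if old != value:                      # notify only on real changes
                for w_xpath, w_attr, cb in self.watchdogs:
                    if w_xpath == xpath and w_attr == attribute:
                        cb(xpath, attribute, value)

    # Example: a reconstruction component is notified when an image is flagged as uploaded.
    store = WatchdogStore('<session><image id="0" status="pending"/></session>')
    store.register_watchdog(".//image[@id='0']", "status",
                            lambda path, attr, value: print(path, attr, "->", value))
    store.set_attribute(".//image[@id='0']", "status", "uploaded")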

In contrast to most existing approaches, which use high-end cameras, we investigate how well mobile low-end cameras with a poor signal-to-noise ratio and low resolution perform. Initial tests showed that images from these cameras can still be used for the modeling. We have tested the following devices with built-in cameras:
  • Logitech QuickCam for Notebooks Pro (1.3 MPix)
  • HTC built-in camera (2 MPix)
  • Sony Vaio UX90, built-in camera (1.3 MPix)
  • i-mate SP5 cell phone, built-in camera (1.3 MPix)


Reconstruction Engine

The reconstruction engine acts as a black box which takes 2D images and delivers 3D models. The main idea is that a sequence of 2D images (with sufficient overlap in image content) is used to find correspondences between them. These correspondences can then be used to estimate the camera positions from which the 2D images were taken. The mathematical framework for generating 3D geometry from multiple images is concisely presented in the book by Hartley and Zisserman ("Multiple View Geometry in Computer Vision"). Once the initial model is known, consecutive images can be related to each other, and a textured 3D point cloud can be computed by a so-called dense matching approach. In the following, a brief overview of each individual task is given. The engine's pipeline is shown in the figure below.
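
For reference, the central relation behind this step (a standard result from that framework, not something specific to our engine) is the epipolar constraint between corresponding image points:

    % Epipolar constraint: corresponding homogeneous image points x and x'
    % in two views are related by the fundamental matrix F.
    x'^{\top} F \, x = 0
    % With known camera intrinsics K and K', F reduces to the essential matrix
    % E = K'^{\top} F K = [t]_{\times} R, from which the relative rotation R and
    % translation t between the two camera positions can be recovered.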


The reconstruction pipeline consists of four main components: feature extraction, correspondence search, camera pose estimation, and dense matching.
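
As a rough, self-contained sketch of these four stages, the following Python/OpenCV snippet reconstructs a sparse point cloud from two overlapping images. It only illustrates the general technique (and substitutes sparse triangulation for real dense matching); the engine itself uses the VRVis reconstruction algorithms, not this code, and the intrinsic camera matrix K is assumed to be known.

    # Illustrative two-view sketch of the pipeline stages using OpenCV.
    # NOT the project's implementation; K is the (assumed known) intrinsic matrix.
    import cv2
    import numpy as np

    def two_view_sketch(img1, img2, K):
        # 1) Feature extraction
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # 2) Correspondence search (nearest-neighbour matching with ratio test)
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

        # 3) Camera pose estimation via the essential matrix
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        # 4) Sparse stand-in for dense matching: triangulate the correspondences
        P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
        P2 = K @ np.hstack((R, t))
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        points3d = (pts4d[:3] / pts4d[3]).T   # N x 3 point cloud
        return R, t, points3d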

Video

  • The video shows the AR scout in combination with the 3D reconstruction pipeline (AVI, approx. 40 MB)

Events

 

Acknowledgments

 

We would like to thank the VRVis Virtual Habitat group in Graz, which provided the complete set of 3D reconstruction algorithms for obtaining 3D models from a sequence of 2D images.

 

Project Team

 

  • Bernhard Reitinger
  • Christopher Zach (VRVis Graz)

 
