PTAM (Parallel Tracking and Mapping) is a camera tracking system for Augmented Reality (AR). This CRC Press News discusses a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for home robots, PTAM is a system specifically designed to track a hand-held camera in a small AR workspace. Its key idea is to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer.
This allows the use of computationally expensive batch optimization techniques not usually associated with real-time operation. The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame rate, with an accuracy and robustness rivalling those of state-of-the-art model-based systems.
This method is an alternative to the SLAM approaches previously employed to track and map unknown environments. Rather than being limited by the frame-to-frame scalability of incremental mapping approaches, which mandate "a sparse map of high-quality features", it takes the opposite approach, using a far denser map of lower-quality features. Results show that on modern hardware, the system is capable of providing tracking quality adequate for small-workspace AR applications, provided the scene tracked is reasonably textured, relatively static, and not substantially self-occluding. No prior model of the scene is required, and the system imposes only a minimal initialization burden on the user (the procedure takes three seconds). With this level of tracking robustness and accuracy, the novel approach significantly advances the state of the art.
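The "computationally expensive batch optimization" mentioned above refers to bundle adjustment: jointly minimizing reprojection error over many keyframes. A minimal sketch of the underlying idea is below; it is purely illustrative (it refines a single landmark with camera poses held fixed, using plain gradient descent, whereas the real system performs full Levenberg-Marquardt bundle adjustment), and all names and values are hypothetical.

```python
# Illustrative sketch of the batch-optimization idea behind bundle
# adjustment: adjust a landmark to minimize the summed squared
# reprojection error over all keyframes that observe it.
import numpy as np

def project(P, X):
    """Project a 3D point X through a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def refine_landmark(X, cameras, observations, steps=50, lr=0.1):
    """Gradient descent on the summed squared reprojection error
    (camera poses fixed; only the 3D point moves)."""
    X = X.astype(float).copy()
    eps = 1e-6
    for _ in range(steps):
        grad = np.zeros(3)
        for P, obs in zip(cameras, observations):
            r = project(P, X) - obs            # residual in this view
            for k in range(3):
                dX = np.zeros(3)
                dX[k] = eps
                # Numerical Jacobian column of the residual w.r.t. X[k].
                J_k = (project(P, X + dX) - project(P, X)) / eps
                grad[k] += 2.0 * r @ J_k
        X -= lr * grad
    return X

# Two toy cameras: identity pose, and one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.1, 3.0])
obs = [project(P1, X_true), project(P2, X_true)]

X0 = X_true + np.array([0.2, -0.2, 0.4])       # perturbed initial estimate
X_ref = refine_landmark(X0, [P1, P2], obs)

def reproj_error(X):
    return sum(float(((project(P, X) - o) ** 2).sum())
               for P, o in zip([P1, P2], obs))

print(reproj_error(X0), "->", reproj_error(X_ref))  # error decreases
```

Running the whole adjustment over all points and poses in one batch is what makes it too expensive for the per-frame tracking loop, and hence what motivates moving it onto its own thread.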
PTAM is a technology and algorithm that is able to estimate the position of a camera in a three-dimensional environment and to map the positions of points on the visible objects by analyzing and processing information from a video sequence. As the name of the technology states, the whole process can be split into two different actions: tracking and mapping. As the camera moves through 3D space, its position can be estimated via triangulation and stereo initialization techniques when the same scene is viewed from different points of view. This process is camera tracking, which aims to calculate, as accurately as possible and in real time, the camera's position relative to the surrounding objects and its movement.
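The triangulation mentioned above can be sketched as follows. This is a generic linear (DLT) two-view triangulation, not PTAM's exact code, and the camera matrices and point values are made up for illustration.

```python
# Hypothetical sketch of two-view triangulation, the geometric step used
# when the same scene point is observed from two camera positions.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D image observations of the same point
    Returns the 3D point in world coordinates.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value (least-squares solution of A X = 0).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize

# Two simple cameras: identity pose, and one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]   # view 1 observation
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]   # view 2 observation

X = triangulate_point(P1, P2, x1, x2)
print(np.round(X, 3))   # recovers the original 3D point
```

The stereo initialization at system start-up does essentially this for many point features at once, using two user-provided views of the scene.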
The second task is the mapping of the 3D environment in which the camera moves. The simplest way to do so is to measure the positions of certain point features, while other techniques are able to detect straight lines or even extract 3D mesh information from the video stream. Tracking and mapping are clearly mutually dependent: accurate tracking needs a good map, and extending the map requires well-tracked camera poses.
The main idea of PTAM is therefore to perform the tracking and mapping tasks in parallel, running them on separate threads on a multi-core processor. This method allows precise and robust real-time tracking, together with an accurate point-based map of the environment.
An important quality of the PTAM method is the fact that mapping is performed only when there are free resources on the background processing thread. This allows the tracking system to follow the camera in real time regardless of the complexity of the scene, achieving the constant frame-rate output that is particularly useful for AR applications. On the other hand, if the camera is stationary in an already-mapped environment, the background thread will allocate resources to re-analyze old information in order to improve the quality of the map.
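The thread split described above can be sketched schematically. The sketch below is in Python for brevity (the real system is C++), and the loop bodies are stand-ins: pose estimation, triangulation of new points, and the idle-time bundle adjustment are all reduced to placeholders.

```python
# Schematic of PTAM's two-thread architecture: a frame-rate tracking
# thread hands keyframes to a background mapping thread, which refines
# the map whenever no new keyframes are waiting.
import queue
import threading
import time

keyframes = queue.Queue()        # tracking -> mapping hand-off
map_points = []                  # shared map (guarded by a lock)
map_lock = threading.Lock()
stop = threading.Event()

def tracking_thread():
    """Runs at frame rate: estimates pose, occasionally emits a keyframe."""
    for frame_id in range(30):
        # ... pose estimation against the current map would happen here ...
        if frame_id % 10 == 0:       # toy heuristic: every 10th frame
            keyframes.put(frame_id)
        time.sleep(0.005)            # stand-in for per-frame work
    stop.set()

def mapping_thread():
    """Integrates new keyframes; when idle, refines the existing map."""
    while not stop.is_set() or not keyframes.empty():
        try:
            kf = keyframes.get(timeout=0.01)
            with map_lock:
                map_points.append(kf)   # stand-in for triangulating points
        except queue.Empty:
            with map_lock:              # no new keyframes: reuse the time
                pass                    # bundle adjustment would run here

t = threading.Thread(target=tracking_thread)
m = threading.Thread(target=mapping_thread)
t.start(); m.start()
t.join(); m.join()
print(map_points)   # keyframes 0, 10, 20 were integrated
```

The essential point is that the tracking loop never blocks on mapping work: keyframes are queued asynchronously, and the mapping thread spends any spare time improving what it already has.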
PTAM maps the real world without needing to be initialized with real-world markers such as known natural feature targets. It is more difficult to map input from a handheld camera than from a robot, because a camera has no odometry (input from movement sensors used to estimate position) whereas a robot does. Additionally, unlike a robot, a handheld camera cannot be constrained to move at arbitrarily slow speeds.
PTAM estimates the position of a camera in a 3D environment and maps the positions of points on objects in the space by analyzing and processing the camera's input in real time.
In summary, PTAM involves two main parts, the tracking of the camera and the mapping of the points, which are run in parallel on different threads of a multi-core processor.
Parallel Tracking and Multiple Mapping (PTAMM) is an extension to PTAM. The key differences are support for creating and automatically localizing into multiple independent maps, and the ability to serialize maps to and from disk. PTAMM also provides the basis of a game framework, allowing users to add further games to the three included AR games. With multiple maps, each map can hold its own AR content; as the camera is moved from one mapped area to another, PTAMM automatically detects and switches to the relevant map, displaying the correct AR annotations and extending the capability of Parallel Tracking and Mapping for AR applications.
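The automatic map switching could be sketched roughly as follows. The descriptor format and similarity metric below are entirely hypothetical toy stand-ins (PTAMM's actual relocalizer compares small blurred keyframe images); the sketch only illustrates the idea of scoring the current frame against every stored map and switching to the best match.

```python
# Toy sketch of PTAMM-style map selection: score the current frame
# against each map's keyframes and switch to the best-matching map.
def best_map(frame_descriptor, maps, threshold=0.6):
    """Return the name of the map that best matches the current frame,
    or None (meaning: no known map, so a new one should be started)."""
    def similarity(a, b):
        # Fraction of matching elements between two descriptors (toy metric).
        return sum(x == y for x, y in zip(a, b)) / len(a)

    best_name, best_score = None, threshold
    for name, m in maps.items():
        score = max(similarity(frame_descriptor, kf) for kf in m["keyframes"])
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Two hypothetical maps, each holding a few stored keyframe descriptors.
maps = {
    "desk":  {"keyframes": [[1, 0, 1, 1], [1, 1, 1, 1]]},
    "shelf": {"keyframes": [[0, 0, 0, 1], [0, 1, 0, 0]]},
}
print(best_map([1, 0, 1, 0], maps))   # closest to a "desk" keyframe
```

On a successful match the system would load that map's camera pose estimate and AR annotations; when no map clears the threshold, a new map can be created, which is exactly the multiple-map behavior that distinguishes PTAMM from PTAM.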