SlidAR: A 3D Positioning Method for SLAM-based Handheld Augmented Reality

We present SlidAR, a 3D positioning method for Simultaneous Localization And Mapping (SLAM) based handheld augmented reality (HAR) systems. SlidAR utilizes 3D ray-casting and epipolar geometry for virtual object positioning. It requires neither a perfect 3D reconstruction of the environment nor any virtual depth cues. We conducted a user experiment to evaluate the efficiency of SlidAR against an existing device-centric positioning method that we call HoldAR. Results showed that SlidAR was significantly faster, required significantly less device movement, and received significantly better subjective evaluations from the test participants. SlidAR also achieved higher positioning accuracy, although the difference was not statistically significant.
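
The geometry behind this kind of positioning can be illustrated with a short sketch: a screen tap is back-projected into a world-space ray, and the object's depth along that ray is then fixed from a second viewpoint, where in practice the user's pick is constrained to the ray's epipolar projection. This is a minimal sketch assuming NumPy and the camera convention x = K(RX + t), not the authors' implementation; all function names are hypothetical.

    import numpy as np

    def ray_from_pixel(K, R, t, uv):
        """Back-project pixel uv into a world-space ray (origin, direction),
        assuming the projection model x = K (R X + t)."""
        origin = -R.T @ t                      # camera centre in world coords
        d_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
        direction = R.T @ d_cam
        return origin, direction / np.linalg.norm(direction)

    def depth_from_second_view(o1, d1, o2, d2):
        """Depth along ray 1 (the placement ray) closest to ray 2 (the ray
        through the user's pick in the second view). Degenerate if the two
        rays are parallel."""
        b = o2 - o1
        d11, d12, d22 = d1 @ d1, d1 @ d2, d2 @ d2
        s = (d22 * (d1 @ b) - d12 * (d2 @ b)) / (d11 * d22 - d12 * d12)
        return s                               # object position: o1 + s * d1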

J. Polvi, T. Taketomi, G. Yamamoto, A. Dey, C. Sandor, and H. Kato: "SlidAR: A 3D Positioning Method for SLAM-Based Handheld Augmented Reality", Computers and Graphics, Vol. 55, pp. 33-43, 2016.
Camera Pose Estimation for a Zoomable Camera Using Pre-calibrated Information

We propose a method for estimating the camera pose when the intrinsic camera parameters change dynamically. In general, video see-through AR systems cannot change the image magnification that results from a change in the camera's field of view, because of the difficulty of dealing with changes in the intrinsic camera parameters. To remove this limitation, we propose a novel method for simultaneously estimating the intrinsic and extrinsic camera parameters based on an energy minimization framework. Our method is composed of offline and online stages. In the offline stage, the change of the intrinsic camera parameters with the zoom value is calibrated. In the online stage, the intrinsic and extrinsic camera parameters are estimated within the energy minimization framework. In our method, two energy terms are added to the conventional marker-based formulation: reprojection errors based on the epipolar constraint, and a constraint on the continuity of zoom values.
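
To make the energy-minimization idea concrete, here is a minimal sketch that jointly optimizes a focal length and a 6-DoF pose over a marker reprojection term plus a zoom-continuity prior. It assumes NumPy and SciPy, omits the paper's epipolar term, and uses a simplified pinhole model (square pixels, principal point at the origin); the function names and the weight lam are hypothetical.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation

    def project(f, rvec, t, X):
        """Simplified pinhole projection of Nx3 points X."""
        Xc = Rotation.from_rotvec(rvec).apply(X) + t
        return f * Xc[:, :2] / Xc[:, 2:3]

    def energy(params, X, x_obs, f_prev, lam):
        f, rvec, t = params[0], params[1:4], params[4:7]
        e_rep = np.sum((project(f, rvec, t, X) - x_obs) ** 2)  # marker term
        e_zoom = lam * (f - f_prev) ** 2                       # zoom continuity
        return e_rep + e_zoom

    # X: Nx3 marker corners, x_obs: Nx2 detections,
    # x0: 7-vector [f, rvec, t] from the previous frame
    # res = minimize(energy, x0, args=(X, x_obs, f_prev, 1e-2), method="Powell")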

T. Taketomi, K. Okada, G. Yamamoto, J. Miyazaki, and H. Kato: "Camera Pose Estimation under Dynamic Intrinsic Parameter Change for Augmented Reality", Computers and Graphics, Vol. 44, pp. 11-19, Jul. 2014.
Geometrically-correct Projection-based Texture Mapping onto Deformable Objects

Projection-based augmented reality commonly employs a rigid substrate as the projection surface and does not support scenarios in which the substrate can be reshaped. This work presents a projection-based AR system that supports deformable substrates that can be bent, twisted, or folded. We demonstrate a new invisible marker embedded into a deformable substrate, together with an algorithm that identifies the deformation in order to project geometrically correct textures onto the deformable object. Geometrically correct projection-based texture mapping is achieved by measuring the 3D shape of the surface through detection of the retro-reflective marker on it. To achieve accurate texture mapping, we propose a marker pattern that can be recognized even when only partially visible and registered to the object's surface.
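
Once the deformed surface has been measured, rendering the projector image amounts to warping each textured triangle of the recovered mesh into projector pixel coordinates. The sketch below shows that final step only, assuming OpenCV and NumPy and a calibrated projector (intrinsics K_proj, pose R, t); the marker detection and shape measurement from the paper are not reproduced, and all names are hypothetical.

    import numpy as np
    import cv2

    def render_projector_image(verts3d, uvs, tris, texture, K_proj, R, t, size):
        """Warp each textured triangle of the measured (deformed) mesh into
        the projector image so the texture appears undistorted on the surface.
        uvs are texture coordinates in texture pixels; tris are vertex index
        triples."""
        h, w = size
        out = np.zeros((h, w, 3), np.uint8)
        rvec, _ = cv2.Rodrigues(R)
        proj, _ = cv2.projectPoints(verts3d.astype(np.float64), rvec, t, K_proj, None)
        proj = proj.reshape(-1, 2).astype(np.float32)
        for tri in tris:                       # per-triangle warp: slow but simple
            M = cv2.getAffineTransform(uvs[tri].astype(np.float32), proj[tri])
            warped = cv2.warpAffine(texture, M, (w, h))
            mask = np.zeros((h, w), np.uint8)
            cv2.fillConvexPoly(mask, proj[tri].astype(np.int32), 255)
            out[mask > 0] = warped[mask > 0]
        return out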

Y. Fujimoto, R. T. Smith, T. Taketomi, G. Yamamoto, J. Miyazaki, H. Kato, and B. Thomas: "Geometrically-correct projection-based texture mapping onto a deformable object", IEEE Transactions on Visualization and Computer Graphics, Vol. 20, No. 4, pp. 540-549, Mar. 2014.
Detection of 3D Points on Moving Objects from Point Cloud Data

3D modeling techniques for urban environments can be applied to several applications, such as landscape simulation, navigation systems, and mixed reality systems. In this field, the target environment is first measured using several types of sensors (laser rangefinders, cameras, GPS receivers, and gyroscopes), and a 3D model of the environment is then constructed from the results of the 3D measurement. In this modeling process, 3D points that lie on moving objects become outliers that prevent the construction of an accurate 3D model. To solve this problem, we propose a method for detecting 3D points on moving objects in 3D point cloud data based on photometric consistency and knowledge of the road environment. In our method, 3D points on moving objects are first detected from the luminance variations obtained by projecting the 3D points onto omnidirectional images; the remaining points on moving objects are then detected using prior knowledge of the road environment.
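
The photometric-consistency stage can be sketched as follows: a point on a static surface should re-project to a stable luminance across the frames that observe it, whereas a point sampled on a moving object will not. This is an illustrative sketch assuming NumPy, grayscale frames, and a user-supplied project_fn that maps a 3D point into a given omnidirectional frame (returning None when it is not visible); the threshold and names are hypothetical.

    import numpy as np

    def moving_point_flags(points, frames, project_fn, var_thresh):
        """Flag 3D points whose luminance varies strongly across frames."""
        flags = np.zeros(len(points), dtype=bool)
        for k, X in enumerate(points):
            lum = []
            for i, img in enumerate(frames):
                uv = project_fn(X, i)          # None if X is occluded/out of view
                if uv is not None:
                    lum.append(float(img[int(uv[1]), int(uv[0])]))
            if len(lum) >= 2 and np.var(lum) > var_thresh:
                flags[k] = True                # likely on a moving object
        return flags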

T. Kanatani, H. Kume, T. Taketomi, T. Sato, and N. Yokoya: "Detection of 3D Points on Moving Objects from Point Cloud Data for 3D Modeling of Outdoor Environments", Proc. IEEE Int. Conf. on Image Processing (ICIP2013), pp. 2163-2167, Sep. 2013.
Camera Pose Estimation Using Feature Landmark Database

We achieve fast and accurate feature-landmark-based camera parameter estimation by adopting the following approaches. First, the number of matching candidates is reduced through tentative camera parameter estimation and by assigning priorities to landmarks, which speeds up camera parameter estimation. Second, image templates of landmarks are adequately compensated by considering the local 3D structure of each landmark, using the dense depth information obtained by a laser range sensor. To demonstrate the effectiveness of the proposed method, we developed several AR applications based on it.
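
The core 2D-3D estimation step that such a landmark database feeds can be sketched with a standard robust PnP solve. The fragment below assumes OpenCV and NumPy and omits the paper's priority-based matching and template compensation; landmarks3d are database points already matched to keypoints2d in the current frame, and the function name is hypothetical.

    import numpy as np
    import cv2

    def estimate_pose(landmarks3d, keypoints2d, K):
        """Robustly estimate extrinsics from 2D-3D landmark matches."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            landmarks3d.astype(np.float32), keypoints2d.astype(np.float32),
            K, None, reprojectionError=3.0)
        if not ok:
            return None                        # too few consistent matches
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec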

T. Taketomi, T. Sato, and N. Yokoya: "Real-time and Accurate Extrinsic Camera Parameter Estimation using Feature Landmark Database for Augmented Reality", Computers and Graphics, Vol. 35, No. 4, pp. 768-777, Aug. 2011.
Structure-from-Motion Using GPS Considering GPS Positioning Accuracy

We propose a method for estimating extrinsic camera parameters from video images and position data acquired by GPS. In conventional methods, the accuracy of the estimated camera position depends largely on the accuracy of the GPS positioning data, because these methods assume that the GPS position error is either very small or normally distributed. However, the actual GPS positioning error easily grows to the 10 m level, and its distribution changes depending on the satellite configuration and the surrounding environment. To achieve more accurate camera positioning in outdoor environments, we instead adopt the simple assumption that the true position lies within a certain range of the observed GPS position, where the size of the range depends on the GPS positioning accuracy. Concretely, the proposed method estimates camera parameters by minimizing an energy function defined from the reprojection error and a penalty term on the GPS position.
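
One natural reading of this penalty term, sketched below, is a hinge that stays at zero while the estimated camera position remains inside the reported accuracy radius and grows quadratically outside it. This is an assumption-laden illustration (NumPy; the weight w and all names are hypothetical), not the paper's exact formulation.

    import numpy as np

    def gps_penalty(cam_pos, gps_pos, radius):
        """Zero inside the accuracy range, quadratic outside it."""
        d = np.linalg.norm(cam_pos - gps_pos)
        return max(0.0, d - radius) ** 2

    def energy(reproj_err_sq, cam_positions, gps_positions, radii, w):
        """Total energy: feature reprojection term + GPS range penalties."""
        e = reproj_err_sq                      # sum of squared reprojection errors
        for p, g, r in zip(cam_positions, gps_positions, radii):
            e += w * gps_penalty(p, g, r)
        return e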

H. Kume, T. Taketomi, T. Sato, and N. Yokoya: "Extrinsic camera parameter estimation using video images and GPS considering GPS positioning accuracy", Proc. 20th IAPR Int. Conf. on Pattern Recognition (ICPR2010), pp. 3923-3926, Aug. 2010.
Position Estimation of Near Point Light Sources

We present a novel method for estimating the 3D positions of near light sources from the highlights observed on the outside and inside of a single clear hollow sphere. Conventionally, the positions of near light sources have been estimated from highlights observed on multiple reference objects, e.g., mirror balls. The primary contributions of our work are as follows: (1) geometric calibration of multiple reference objects is not required; (2) the positions of near light sources can be accurately estimated by minimizing reprojection errors; (3) corresponding pairs of highlight positions under multiple light sources can be easily determined.
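
The basic triangulation behind highlight-based light estimation can be sketched as follows: each observed highlight yields a ray reflected off the sphere, and the light position is the point closest to all such rays in the least-squares sense. The sketch assumes NumPy, a known sphere centre and radius, unit-length viewing rays, and mirror reflection on the outer surface only; the paper's use of the inner surface and its reprojection-error minimization are not reproduced.

    import numpy as np

    def reflected_ray(cam_origin, d, sphere_c, sphere_r):
        """Intersect a unit viewing ray with the sphere and reflect it."""
        oc = cam_origin - sphere_c
        b = d @ oc
        s = -b - np.sqrt(b * b - (oc @ oc - sphere_r ** 2))  # near intersection
        p = cam_origin + s * d
        n = (p - sphere_c) / sphere_r                        # outward normal
        return p, d - 2 * (d @ n) * n

    def nearest_point_to_rays(origins, dirs):
        """Least-squares point closest to all reflected rays."""
        A, b = np.zeros((3, 3)), np.zeros(3)
        for o, d in zip(origins, dirs):
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)     # projector orthogonal to the ray
            A += M
            b += M @ o
        return np.linalg.solve(A, b)           # estimated light position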

T. Aoto, T. Taketomi, T. Sato, Y. Mukaigawa, and N. Yokoya: "Position Estimation of Near Point Light Sources using Clear Hollow Sphere", Proc. 21st IAPR Int. Conf. on Pattern Recognition (ICPR2012), pp. 3721-3724, Nov. 2012.
Robust Model-based Tracking Considering Changes in the Measurable DoF

Model-based tracking approaches estimate the pose of an object by minimizing the reprojection error. However, when the object has some ambiguity, for instance rotational invariance, the 3D pose cannot be correctly estimated. This paper proposes a novel method that allows continuous tracking even when the measurable Degrees of Freedom (DoF) of the target object change, and that can recover one missing DoF. A pose-ambiguity test and recovery of the 3D pose by a null-space search are added to a general model-based tracking algorithm. Experiments were conducted in synthetic and real-world environments to validate the proposed method.
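
One way to read the pose-ambiguity test, sketched below, is as a spectral check on the Jacobian of the reprojection residuals: pose directions with near-zero singular values are not constrained by the image (e.g., rotation about an axis of symmetry), and the corresponding right singular vectors span the null space to search along. This is an interpretation of the abstract assuming NumPy, not the authors' code.

    import numpy as np

    def ambiguous_pose_directions(J, rel_thresh=1e-6):
        """J: (2N x 6) Jacobian of reprojection residuals w.r.t. the 6-DoF pose.
        Returns the pose increments (rows) that the image cannot observe."""
        U, S, Vt = np.linalg.svd(J)
        return Vt[S < rel_thresh * S[0]]       # right singular vectors of ~zero sigma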

K. Kumagai, M. Oikawa, T. Taketomi, G. Yamamoto, J. Miyazaki, and H. Kato: "Robust model-based tracking considering changes in the measurable DoF of the target object", Proc. 21st IAPR Int. Conf. on Pattern Recognition (ICPR2012), pp. 2157-2160, Nov. 2012.
A Model-based Tracking Framework for Textureless 3D Objects

This paper addresses the problem of tracking textureless rigid curved objects. A common approach uses polygonal meshes to represent curved objects inside an edge-based tracking system. However, accurately recovering their shape requires high-quality meshes, creating a trade-off between computational efficiency and tracking accuracy. To solve this issue, we suggest using quadrics calculated for each patch of the mesh to give local approximations of the object contour. This representation considerably reduces the required level of detail of the polygonal mesh while maintaining tracking accuracy. The novelty of our research lies in using curves that represent the quadrics' projection in the current viewpoint for distance evaluation, instead of directly comparing the edges of the mesh with edges detected in the video image.
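
The projected outline of a quadric is a conic, which is what makes this representation convenient for distance evaluation against detected image edges. A minimal sketch of that standard projection step (the dual quadric maps to the dual conic C* = P Q* P^T; see Hartley and Zisserman) is given below, assuming NumPy; the per-patch quadric fitting from the paper is not reproduced.

    import numpy as np

    def quadric_outline_conic(P, Q_star):
        """P: 3x4 camera matrix, Q_star: 4x4 dual quadric.
        Returns the 3x3 conic C of the apparent contour; homogeneous image
        points x on the outline satisfy x^T C x = 0."""
        C_star = P @ Q_star @ P.T              # dual conic of the outline
        C = np.linalg.inv(C_star)              # assumes a non-degenerate conic
        return C / np.linalg.norm(C)           # normalize the projective scale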

M. Oikawa, T. Taketomi, G. Yamamoto, M. Fujisawa, T. Amano, J. Miyazaki, and H. Kato: "A model-based tracking framework for textureless 3D rigid curved objects", SBC Journal on 3D Interactive Systems, Vol. 3, No. 2, pp. 2-15, Jan. 2013.