AR Capture is an app I created for my honours project at university. It takes feature point data from mobile augmented reality platforms such as ARCore and ARKit and generates a mesh in real time as the user scans the environment.

Overview

Standard volumetric reconstruction becomes problematic when dealing with limited depth data, such as the sparse SLAM feature points provided by ARCore and ARKit. Attempting to resolve a mesh of desirable quality yields poor, fragmented output, as shown in the set of images on the left below.
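To illustrate why sparse feature points fragment a volumetric reconstruction, here is a minimal sketch (not the app's actual pipeline) that bins points into a 5 cm voxel grid, the first step a standard volumetric method would take before extracting a surface. The point coordinates and patch size are invented for the example; the takeaway is how few voxels a sparse scan actually touches.

```python
# Hypothetical sketch: bin sparse SLAM feature points into a 5 cm voxel grid,
# as a standard volumetric pipeline would before meshing. With only a handful
# of points, almost every voxel stays empty, so the extracted surface comes
# out fragmented rather than continuous.

VOXEL_SIZE = 0.05  # 5 cm, matching the reconstruction resolution discussed above

def voxelize(points, voxel_size=VOXEL_SIZE):
    """Map each (x, y, z) point to the integer index of the voxel containing it."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied

# A sparse scatter of feature points across a 1 m x 1 m wall patch
# (made-up coordinates for illustration):
points = [(0.02, 0.10, 0.0), (0.48, 0.52, 0.0), (0.95, 0.90, 0.0)]
occupied = voxelize(points)
total = round(1.0 / VOXEL_SIZE) ** 2  # 400 voxels cover the patch face
coverage = len(occupied) / total
# Only 3 of 400 voxels are occupied, so meshing yields isolated fragments
# instead of one flat wall.
```

The same sparsity problem motivates the surface estimation approach described next: something has to supply depth for the empty voxels between feature points.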

5 cm Reconstruction

I based my research on improving the output of the volumetric reconstruction by employing a surface estimation technique to estimate depth in areas with limited detail. The results can be seen in the set of images on the right: the surface estimation allows the system to preserve feature-rich areas of the scene while still retaining a uniform mesh in less detailed parts. Importantly, the technique with surface estimation still performs well enough for real-time applications.
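The write-up does not spell out the exact estimator used, so as an illustrative stand-in only, the sketch below fits a least-squares plane z = ax + by + c through the sparse feature points and then evaluates it at arbitrary (x, y) positions. This captures the core idea: in regions with no feature points, a fitted surface supplies the missing depth so the mesh stays uniform. The function names and the plane model are assumptions for the example, not the project's actual method.

```python
# Illustrative stand-in for surface estimation: fit a plane z = a*x + b*y + c
# to sparse feature points by least squares, then sample it to fill depth in
# areas that have no feature points of their own.

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) samples."""
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    for x, y, z in points:
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    n = float(len(points))
    # Normal equations (A^T A) [a, b, c]^T = A^T z, solved by Cramer's rule.
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    rhs = [Sxz, Syz, Sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    coeffs = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = rhs[r]
        coeffs.append(det3(Mi) / d)
    return tuple(coeffs)  # (a, b, c)

def estimate_depth(points, x, y):
    """Estimated depth at (x, y), filling the gaps between feature points."""
    a, b, c = fit_plane(points)
    return a * x + b * y + c

# Four feature points on a flat wall 0.5 m away; the fitted plane supplies
# depth for the whole patch, including the featureless centre.
wall = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5), (0.0, 1.0, 0.5), (1.0, 1.0, 0.5)]
depth = estimate_depth(wall, 0.5, 0.5)  # ~0.5, even with no point there
```

A single global plane is obviously too crude for a real scene; per-region fits (or any local estimator) would apply the same idea patch by patch, which is consistent with the behaviour described above of preserving feature-rich areas while smoothing sparse ones.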

Occlusion and Physics

Although the implementation works, and the output mesh is good enough for physics interactions and occlusion, I now realise there are better methods that would give me a more accurate mesh and faster reconstruction times. I plan to work on implementing these once I complete iOS support for this project.