Creating real-time depth maps for occlusions in AR – AR is a technology that allows you to insert virtual objects into the real world. You’ve probably seen it in things like funny face masks or informational overlays. But one of the key limitations of current systems is that they can only insert virtual objects in front of real things, never behind them. This is because they track the depth of only a few points in the scene, not of every pixel.
So we’ve worked on a paper called Fast Depth Densification for Occlusion-Aware AR that solves this problem. Like other systems, we use SLAM to track the depth of a few points very accurately. But we also use a real-time optical flow method to estimate the motion of all the remaining pixels. Because it has to run in real time, the flow isn’t accurate enough to estimate depth directly, but it is good enough to locate the major depth edges in the scene.
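To make the idea concrete, here is a minimal sketch (not the paper's actual method, and `flow_depth_edges` and its threshold are hypothetical names) of how depth edges can be spotted in a dense optical-flow field: objects at different depths move with different apparent speeds, so large spatial gradients in the flow tend to line up with depth discontinuities.

```python
import numpy as np

def flow_depth_edges(flow, threshold=0.5):
    """Mark probable depth edges from a dense optical-flow field.

    Hypothetical sketch: large spatial gradients in the flow often
    coincide with depth discontinuities, because surfaces at different
    depths exhibit different motion parallax.

    flow: (H, W, 2) array of per-pixel (dx, dy) motion vectors.
    Returns a boolean (H, W) mask of likely depth edges.
    """
    # Spatial gradients of each flow component (row axis, then column axis).
    du_y, du_x = np.gradient(flow[..., 0])
    dv_y, dv_x = np.gradient(flow[..., 1])
    # Combined gradient magnitude of the flow field.
    grad_mag = np.sqrt(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)
    return grad_mag > threshold

# Toy example: left half of the image moves, right half is static,
# so an edge should appear along the dividing line.
flow = np.zeros((8, 8, 2))
flow[:, :4, 0] = 2.0
edges = flow_depth_edges(flow)
```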
And then we go back to these very accurate SLAM points and propagate their depth to all the remaining pixels, stopping the propagation at the recovered depth edges. That ensures all the object boundaries stay super crisp. So this is very fast and produces depth maps that are great for AR, because they are smooth everywhere except at these super sharp discontinuities.
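The propagation step above can be sketched as a simple edge-aware diffusion; this is an illustrative toy, not the paper's solver, and `densify_depth` is a hypothetical name. Sparse SLAM depths stay fixed, neighbour averaging spreads them outward, and pixels flagged as depth edges never pass values along, so each region fills in from its own samples.

```python
import numpy as np

def _shift(a, shift, axis):
    """Shift an array by one pixel without wrap-around (border becomes 0)."""
    out = np.roll(a, shift, axis)
    idx = [slice(None)] * a.ndim
    idx[axis] = 0 if shift > 0 else -1
    out[tuple(idx)] = 0.0
    return out

def densify_depth(sparse_depth, known, edges, iters=1000):
    """Propagate sparse depths to every pixel, stopping at depth edges.

    Hypothetical sketch of edge-aware densification: repeated
    4-neighbour averaging, where edge pixels contribute nothing
    (blocking propagation across them) and known SLAM samples
    act as fixed anchors.

    sparse_depth: (H, W) array, valid only where `known` is True.
    known:        (H, W) boolean mask of accurate SLAM samples.
    edges:        (H, W) boolean mask of depth discontinuities.
    """
    depth = np.where(known, sparse_depth, 0.0).astype(float)
    weight = known.astype(float)
    free = ~known & ~edges  # pixels allowed to receive depth
    for _ in range(iters):
        d_sum = np.zeros_like(depth)
        w_sum = np.zeros_like(depth)
        for axis in (0, 1):
            for shift in (1, -1):
                d_sum += _shift(depth * weight, shift, axis)
                w_sum += _shift(weight, shift, axis)
        has = w_sum > 0
        new_depth = np.where(has, d_sum / np.maximum(w_sum, 1e-9), depth)
        depth = np.where(free & has, new_depth, depth)
        weight = np.where(free & has, 1.0, weight)
    return depth
```

In this toy version, a vertical line of edge pixels splits the image into two regions, and each region converges to the depth of its own SLAM sample without leaking across the edge.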
They’re perfect for inserting virtual objects into the scene that really look like they’re there, because the objects can appear both in front of and behind real ones. So it’s really exciting for me to see how this technology and others we’re working on help blur the line between the real world and the virtual world and make AR more realistic and engaging.