Google's Camera App and Lens Blur

By Staff Reporter | Apr 28, 2014 11:29 AM EDT

Google's Camera App and Lens Blur feature make for a powerful mobile app. Lens Blur is a new mode in the Google Camera app. According to Google, the feature lets users take a photo with a shallow depth of field using just an Android phone or tablet. Unlike a regular photo, Lens Blur lets the user change the point or level of focus after the photo is taken: any object can be brought into focus simply by tapping on it in the image.

Below is more insight on how Google does it:

Once we’ve got the 3D pose of each photo, we compute the depth of each pixel in the reference photo using Multi-View Stereo (MVS) algorithms. MVS works the way human stereo vision does: given the location of the same object in two different images, we can triangulate the 3D position of the object and compute the distance to it. How do we figure out which pixel in one image corresponds to a pixel in another image? MVS measures how similar they are -- on mobile devices, one particularly simple and efficient way is computing the Sum of Absolute Differences (SAD) of the RGB colors of the two pixels.
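The two building blocks Google names here, SAD matching and stereo triangulation, can be sketched in a few lines. This is an illustrative sketch, not Google's implementation; the focal length, baseline, and disparity values below are made-up numbers chosen only to show the arithmetic.

```python
import numpy as np

def sad(pixel_a, pixel_b):
    """Sum of Absolute Differences between two RGB pixels (lower = more similar)."""
    return int(np.abs(np.asarray(pixel_a, int) - np.asarray(pixel_b, int)).sum())

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: depth = f * B / d.
    A pixel that shifts less between the two views is farther away."""
    return focal_px * baseline_m / disparity_px

print(sad((120, 80, 200), (120, 80, 200)))       # identical pixels -> 0
print(sad((120, 80, 200), (130, 70, 190)))       # -> 30
print(depth_from_disparity(1000.0, 0.05, 25.0))  # -> 2.0 (metres)
```

In a full MVS pipeline, SAD is evaluated over small patches rather than single pixels, and the winning match's disparity feeds the triangulation step.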

Now it’s an optimization problem: we try to build a depth map where all the corresponding pixels are most similar to each other. But that’s typically not a well-posed optimization problem -- you can get the same similarity score for different depth maps. To address this ambiguity, the optimization also incorporates assumptions about the 3D geometry of a scene, called a "prior,” that favors reasonable solutions. For example, you can often assume two pixels near each other are at a similar depth. Finally, we use Markov Random Field inference methods to solve the optimization problem.
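The energy being minimized can be made concrete with a toy example: a data term (per-pixel matching cost) plus a pairwise prior that penalizes depth jumps between neighbors. The costs below are invented for illustration, and brute force stands in for the Markov Random Field inference methods (such as graph cuts or belief propagation) that a real solver would use.

```python
import itertools

def energy(depths, data_cost, smooth_weight=1.0):
    """Total energy of a 1-D depth labeling: per-pixel matching cost
    plus a pairwise prior favoring similar depths at neighboring pixels."""
    data = sum(data_cost[i][d] for i, d in enumerate(depths))
    prior = smooth_weight * sum(abs(a - b) for a, b in zip(depths, depths[1:]))
    return data + prior

# Toy matching costs: 4 pixels x 3 candidate depth labels (lower = better match).
data_cost = [
    [0, 5, 5],  # pixel 0 clearly prefers depth label 0
    [4, 3, 4],  # pixel 1 is nearly ambiguous -- the prior helps decide
    [5, 5, 0],  # pixels 2 and 3 clearly prefer depth label 2
    [5, 5, 0],
]

# Brute-force search over all labelings (only feasible at toy scale).
best = min(itertools.product(range(3), repeat=4),
           key=lambda d: energy(d, data_cost))
print(best)  # -> (0, 1, 2, 2): the ambiguous pixel settles between its neighbors
```

Note how the prior resolves the tie at pixel 1: the label it picks also keeps the depth transition to its neighbors gradual, which is exactly the "nearby pixels have similar depth" assumption described above.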

By moving the depth-of-field slider, Google indicates that users can simulate different aperture sizes to achieve bokeh effects ranging from subtle to surreal (e.g., tilt-shift). The new image is rendered instantly, letting users see changes in real time. Research at Google also posted that Lens Blur replaces the need for a large optical system with algorithms that simulate a larger lens and aperture. According to the company, instead of capturing a single photo, users move the camera in an upward sweep to capture a whole series of frames.
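Once a depth value is known for every pixel, the slider-and-tap interaction reduces to a simple per-pixel rule: points on the focal plane stay sharp, and blur grows with distance from it, scaled by the simulated aperture. The thin-lens-style formula below is a common simplification, not Google's published renderer, and the depth values are hypothetical.

```python
def blur_radius(depth, focus_depth, aperture):
    """Approximate circle-of-confusion radius (in pixels) for a point at
    `depth` when the simulated lens is focused at `focus_depth`.
    A larger `aperture` exaggerates the defocus blur (stronger bokeh)."""
    return aperture * abs(depth - focus_depth) / depth

# Tapping an object sets focus_depth; dragging the slider scales aperture.
for depth in (1.0, 2.0, 4.0, 8.0):
    print(f"depth {depth}: blur {blur_radius(depth, 2.0, 10.0):.1f} px")
# Points at the focal plane (depth 2.0) get zero blur; others get more.
```

Re-rendering is fast because changing the tap point or the slider only changes `focus_depth` or `aperture`; the expensive depth estimation does not need to be redone.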

From these photos, Lens Blur uses computer vision algorithms to create a 3D model of the world, estimating the depth (distance) to every point in the scene. These algorithms, which reconstruct 3D structure from photos taken on an Android device, are closely related to the computer vision techniques behind 3D mapping in Google Maps Photo Tours and Google Earth.