In a photograph, the depth-of-field is the part of the image that appears sharp and in focus, in contrast to other areas that are defocused and blurred. Depth-of-field can be deep (the whole image is well focused) or shallow (only the main subject is in focus while the foreground and background are blurred). Photographers use a shallow depth-of-field to emphasize the main subject by isolating it from a distracting background.
Depth-of-field can be controlled by (1) the lens aperture, (2) the lens focal length, or (3) the distance to the main subject. For example, a shallow depth-of-field can be obtained with a wide aperture, a long focal length, or by moving closer to the main subject. Cellphone cameras have small apertures and short focal lengths and therefore cannot optically produce photographs with a shallow depth-of-field. For this reason, new cellphones are beginning to offer shallow depth-of-field photographs by processing the images in software.
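The three controls above can be made concrete with the standard thin-lens approximation for total depth-of-field, DoF ≈ 2·N·c·u²/f², valid when the subject distance u is much larger than the focal length f (N is the f-number and c the circle of confusion). The function below is an illustrative sketch, not part of the paper; the example parameters (a 50 mm f/1.8 full-frame lens versus a typical ~4.3 mm cellphone lens with a much smaller circle of confusion) are assumptions chosen to show why phone optics yield deep focus:

```python
def depth_of_field(f_mm: float, n: float, u_mm: float, c_mm: float = 0.03) -> float:
    """Approximate total depth-of-field in mm, assuming u_mm >> f_mm.

    f_mm: focal length, n: f-number (aperture), u_mm: subject distance,
    c_mm: circle of confusion (0.03 mm is a common full-frame value).
    """
    return 2.0 * n * c_mm * u_mm**2 / f_mm**2

# Same f-number and subject distance (2 m), very different focal lengths:
dslr  = depth_of_field(f_mm=50.0, n=1.8, u_mm=2000.0)              # ~173 mm
phone = depth_of_field(f_mm=4.3,  n=1.8, u_mm=2000.0, c_mm=0.004)  # ~3.1 m
```

Despite the tighter circle of confusion of the small sensor, the phone's short focal length keeps several meters of the scene in focus, which is why the background must be blurred in software instead.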
This paper presents a system that produces photographs with a shallow depth-of-field on cellphones. When the main subject is a person, the system uses a neural network to separate the person from the background. If the camera has dual-pixel hardware (common in high-end cellphones), the software uses it to estimate depth and blur the background accordingly. When there is no dual-pixel sensor, or when the subject is not a person, the system adapts so that it can still produce good results. The authors state that the system is in production and is used by millions of Android users.
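At its core, the compositing step described above combines a sharp subject with a blurred background using the segmentation mask. The following is a deliberately simplified sketch of that idea, not the paper's pipeline: the real system applies a depth-dependent defocus derived from the dual-pixel data, whereas here a single uniform box blur (with wraparound edges, for brevity) stands in for the background blur, and `synthetic_shallow_dof` is a hypothetical name:

```python
import numpy as np

def box_blur(img: np.ndarray, r: int) -> np.ndarray:
    """Average each pixel over a (2r+1)x(2r+1) window.

    Uses np.roll, so edges wrap around -- acceptable for a sketch.
    """
    out = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def synthetic_shallow_dof(image: np.ndarray, person_mask: np.ndarray,
                          blur_radius: int = 8) -> np.ndarray:
    """Blur the whole frame, then paste the sharp subject back.

    image: HxWx3 float array; person_mask: HxW array in [0, 1],
    e.g. the output of a person-segmentation network.
    """
    blurred = box_blur(image, blur_radius)
    mask = person_mask[..., None]          # HxW -> HxWx1 for broadcasting
    return mask * image + (1.0 - mask) * blurred
```

A soft (non-binary) mask matters here: fractional values near the subject's silhouette blend the sharp and blurred layers, avoiding a hard cut-out look.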