Blog

  • Google Releases the Deep Learning Model Behind the Pixel 2 Portrait Mode

    Google is known for its open-source developer tools, such as Angular, Google Go, and Google Web Toolkit, and it has long promoted development through its platforms, including Android, Chromium, and Fuchsia OS. The company also open sources many of its advances in AI, machine learning, and data mining, which helps developers leverage these technologies and add innovative features to their applications.


    Recently, Google released the deep learning model behind the portrait mode on Pixel 2 devices. The technology that makes the single-lens portrait mode possible is semantic image segmentation: the model assigns a semantic label to every pixel in an image, detecting and classifying objects such as road, sky, person, and animal. At its core, it separates the foreground of an image from the background.
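
    To make that concrete, here is a minimal sketch in Python with NumPy. The label map and the class IDs are hypothetical; they assume a PASCAL VOC-style label set, where class 15 is 'person'. It shows how a per-pixel label map directly yields a foreground/background separation.

    ```python
    import numpy as np

    # Hypothetical per-pixel label map, as produced by a segmentation
    # model: one integer class ID per pixel. Class IDs here assume the
    # PASCAL VOC convention (0 = background, 15 = person).
    label_map = np.array([
        [0,  0,  0,  0],
        [0, 15, 15,  0],
        [0, 15, 15,  0],
        [0,  0,  0,  0],
    ])

    PERSON = 15  # assumed VOC class ID for 'person'

    foreground = label_map == PERSON   # pixels belonging to the subject
    background = ~foreground           # everything else

    print(foreground.astype(int))
    ```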


    This is exactly what the Pixel 2 portrait mode uses to create a depth-of-field effect with only one physical lens. The model concentrates on recognizing the outlines of objects in an image, determining where an object ends and where the background begins.
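
    As an illustration, the sketch below (in Python with Pillow; the file names and the pre-computed mask are hypothetical) shows how a foreground mask can be turned into a depth-of-field effect: blur the whole frame, then composite the sharp subject back on top.

    ```python
    from PIL import Image, ImageFilter

    def fake_portrait_mode(photo_path, mask_path, blur_radius=8):
        """Blur the background of a photo using a segmentation mask.

        `mask_path` is assumed to be a grayscale image where white marks
        the foreground subject (e.g., the 'person' pixels from a
        segmentation model) and black marks the background.
        """
        photo = Image.open(photo_path).convert("RGB")
        mask = Image.open(mask_path).convert("L").resize(photo.size)

        # Blur a copy of the whole frame, then composite the sharp
        # foreground back over it: sharp where the mask is white,
        # blurred where it is black.
        blurred = photo.filter(ImageFilter.GaussianBlur(blur_radius))
        return Image.composite(photo, blurred, mask)

    # Hypothetical usage:
    # fake_portrait_mode("photo.jpg", "person_mask.png").save("portrait.jpg")
    ```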


    Also read How Deep Learning Is Changing The Overall Customer Experience.


    The Deep Learning Model

    The deep learning model Google recently open sourced is called DeepLab-v3+. It is an image segmentation model that allows developers to build features similar to the Pixel 2 portrait mode, such as real-time video segmentation. The release also includes model training and evaluation code, all implemented in TensorFlow.
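
    As a rough sketch of what inference with the released model looks like, the snippet below loads a frozen DeepLab-v3+ graph in TensorFlow 1.x and runs it on a photo. The file names, the 513 px input size, and the input/output tensor names ('ImageTensor:0', 'SemanticPredictions:0') follow the official demo, but treat them as assumptions here.

    ```python
    import numpy as np
    import tensorflow as tf  # written against TensorFlow 1.x, as in the release
    from PIL import Image

    # Assumptions: 'frozen_inference_graph.pb' is a DeepLab-v3+ graph frozen
    # with the released export code, and the tensor names below match the
    # official demo notebook.
    GRAPH_PATH = "frozen_inference_graph.pb"
    INPUT_SIZE = 513  # the demo scales images so the long side is 513 px

    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    photo = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
    scale = INPUT_SIZE / max(photo.size)
    resized = photo.resize(
        (int(photo.width * scale), int(photo.height * scale)), Image.ANTIALIAS)

    with tf.Session(graph=graph) as sess:
        # The model returns one semantic class ID per pixel.
        label_map = sess.run(
            "SemanticPredictions:0",
            feed_dict={"ImageTensor:0": np.expand_dims(np.asarray(resized), 0)},
        )[0]

    print(label_map.shape)  # (height, width) array of per-pixel class IDs
    ```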


    You may also like Get AI To Mobile Devices With Google TensorFlow Lite.


    The Bottom Line

    Owing to tremendous advances in image segmentation, these systems have reached accuracy levels that seemed like a pipe dream just five years ago. Google has been a major contributor to the field and has paved the way for others to do the same. Publicly sharing its image segmentation model with the worldwide developer community is a big step, one that will help developers build state-of-the-art machine learning systems and envision new applications for this technology.

Tags: MachineLearning