Google Releases Deep Learning Model of The Pixel 2 Portrait Mode

Posted By : Anirudh Bhardwaj | 30-Oct-2018


Google is known for its open-source developer tools like Angular, Go, and Google Web Toolkit. The company has long promoted development efforts through its platforms like Android, Chromium, and Fuchsia OS. In fact, Google open-sources many of its advancements in AI, machine learning, and data mining, which helps developers leverage these new technologies and add innovative features to their applications.

 

Recently, Google released the deep learning model used to create the portrait mode on Pixel 2 devices. The technology behind the single-lens portrait mode is called semantic image segmentation. The model released by Google assigns a semantic label to every pixel in an image, detecting and classifying objects such as road, sky, person, and animal. At its core, it distinguishes the foreground of an image from its background.
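At its simplest, assigning a semantic label to every pixel means picking, for each pixel, the class with the highest score in the model's output. The sketch below illustrates the idea with a toy NumPy array; the `CLASSES` list and the logits values are made up for illustration and are not DeepLab's actual label set or output.

```python
import numpy as np

# Hypothetical label set for illustration; real models like DeepLab
# are trained on label sets such as PASCAL VOC or Cityscapes.
CLASSES = ["background", "person", "sky", "road", "animal"]

def label_pixels(logits):
    """Assign one semantic label index to every pixel.

    logits: array of shape (height, width, num_classes) holding the
    model's per-class scores for each pixel.
    Returns a (height, width) array of class indices.
    """
    return np.argmax(logits, axis=-1)

# Toy 2x2 "image": each pixel gets the class with the highest score.
logits = np.array([
    [[0.1, 0.8, 0.1, 0.0, 0.0], [0.9, 0.05, 0.05, 0.0, 0.0]],
    [[0.2, 0.1, 0.6, 0.1, 0.0], [0.1, 0.1, 0.1, 0.7, 0.0]],
])
labels = label_pixels(logits)
# labels → [[1, 0], [2, 3]], i.e. "person", "background", "sky", "road"
```

A real segmentation network produces these per-pixel scores at the full image resolution, so the same argmax step yields a complete label map.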

 

The same technique powers the portrait mode of the Pixel 2, creating a depth-of-field effect with only one physical lens. The deep learning model's main focus is image recognition, i.e., recognizing the outlines of objects in an image and determining where an object ends and the background begins.

 

Also read How Deep Learning Is Changing The Overall Customer Experience.

 

The Deep Learning Model

The deep learning model recently open-sourced by Google is called DeepLab-v3+. It is an image segmentation model that allows developers to build features similar to the Portrait mode in Pixel 2 using real-time video segmentation. In addition, the release includes model training and evaluation code implemented in TensorFlow.
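As a taste of what segmentation evaluation involves, the sketch below computes mean intersection-over-union (mIoU), the standard metric for evaluating models like DeepLab-v3+, in plain NumPy. This is an illustrative re-implementation, not the evaluation code from Google's release.

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union across classes.

    pred, truth: integer arrays of per-pixel class labels.
    For each class, IoU = (pixels labeled c in both) / (pixels
    labeled c in either); classes absent from both are skipped.
    """
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union:
            ious.append(intersection / union)
    return float(np.mean(ious))
```

A perfect prediction scores 1.0; random label maps score far lower, which is why climbing mIoU benchmarks has driven much of the recent progress in segmentation.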


You may also like Get AI To Mobile Devices With Google TensorFlow Lite.

 

The Bottom Line

Owing to tremendous advancements, image segmentation systems have reached unprecedented accuracy levels, something that seemed like a pipe dream just five years ago. Google has been a major contributor in this field and has paved the way for others to do the same. Publicly sharing its image segmentation model with the worldwide developer community is a big step, and it will help developers build state-of-the-art machine learning systems and envision new applications for this technology.

About Author

Anirudh Bhardwaj

Anirudh is a Content Strategist and Marketing Specialist with strong analytical and problem-solving skills for tackling complex project tasks. With considerable experience in the technology industry, he produces and proofreads insightful content on next-gen technologies like AI, blockchain, ERP, big data, IoT, and immersive AR/VR. In addition to formulating content strategies for successful project execution, he has ample experience handling WordPress/PHP-based projects, delivering them from scratch with UI/UX design, content, SEO, and quality assurance. Anirudh is proficient with popular website tools like GTmetrix, PageSpeed Insights, Ahrefs, GA3/GA4, Google Search Console, ChatGPT, Jira, Trello, Postman (API testing), and many more. His professional experience spans a range of projects including Wethio Blockchain, BlocEdu, NowCast, IT Savanna, Canine Concepts UK, and more.
