AI Enabled Virtual Try-On
- February 26, 2020
- Posted by: CellStrat Editor
- Category: Artificial Intelligence, Computer Vision, Deep Learning
Abstract
An Artificial Intelligence based Virtual Try-On product was demonstrated by AILab members Niraj Kale, Prashanth Sinha & Shreyas Jagannath
at the CellStrat AI Conclave on 8th February 2020 in Bengaluru.
At AI Mage, they build products with AI, AR and VR technologies to personalize their customers' shopping experience, and they intend to offer multiple products towards this goal. Their current flagship product, Virtual Try-On for Clothes and Accessories, aims to make shopping an easy, fast and fun experience. A standout feature will be estimating the correct fit of clothes and accessories.
They would also enable customers to select the right clothes and accessories based on their preferences or on current trends. In offline retail shops, the self-checkout and recommendation modules can save time and money by reducing long queues at billing counters and trial rooms. In an online scenario, they would save the retailer time and reduce costs in supply chain operations.
Algorithm
When a user’s face is detected by the camera, the engine performs the following steps to generate the augmented face mesh, as well as the center and region poses:
It identifies the center pose and a face mesh:
- The center pose, located behind the nose, is the physical center point of the user’s head (in other words, inside the skull).
- The face mesh consists of hundreds of vertices that make up the face, and is defined relative to the center pose.
It then uses the face mesh and center pose to identify face region poses on the user’s face. These regions are:
- Left forehead (FOREHEAD_LEFT)
- Right forehead (FOREHEAD_RIGHT)
- Tip of the nose (NOSE_TIP)
These elements — the center pose, face mesh, and face region poses — comprise the augmented face mesh and are used by the AugmentedFace APIs as positioning points and regions to place assets in the app.
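The anchoring step above can be sketched in plain Python. This is an illustrative stand-in for an AugmentedFace-style API, not its real implementation: the region offsets below are invented example values (real mesh data comes from the face tracker), and only the region names mirror the text.

```python
import math

def rotation_about_y(angle_rad):
    """3x3 rotation matrix for the head turning left/right (yaw)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def apply_pose(rotation, translation, local_point):
    """Transform a point defined relative to the center pose into world space."""
    return tuple(
        sum(rotation[i][j] * local_point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Region poses defined relative to the center pose (metres; illustrative values).
REGION_OFFSETS = {
    "FOREHEAD_LEFT":  (-0.04, 0.06, 0.08),
    "FOREHEAD_RIGHT": ( 0.04, 0.06, 0.08),
    "NOSE_TIP":       ( 0.00, -0.03, 0.09),
}

def region_anchors(center_position, yaw_rad):
    """World-space anchor points where assets (e.g. spectacles) attach."""
    rot = rotation_about_y(yaw_rad)
    return {name: apply_pose(rot, center_position, off)
            for name, off in REGION_OFFSETS.items()}

# With the head level (yaw 0) and the center pose at (0, 1.6, -0.5),
# each region anchor is simply the center position plus its local offset.
anchors = region_anchors(center_position=(0.0, 1.60, -0.50), yaw_rad=0.0)
```

Because each region is defined relative to the center pose, a single head rotation updates every attached asset consistently — the same property the augmented face mesh provides.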
Implementation
Virtual try-on allows the app to automatically identify different regions of a detected face, and to use those regions to overlay assets such as jewellery and spectacles in a way that precisely matches the contours and regions of an individual face. To achieve this, we overlay textures and 3D models on a detected face using face detection and facial feature recognition libraries. We also use face orientation detection to orient the assets in 3D space.
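A simplified 2D version of this overlay step can be sketched as follows. The function, its name, and the landmark coordinates are hypothetical; in a real pipeline the eye landmarks would come from a face feature recognition library, and the asset would then be rotated, scaled, and positioned with the returned transform.

```python
import math

def overlay_transform(left_eye, right_eye, asset_eye_distance_px):
    """Compute the rotation (radians), uniform scale, and center point
    needed to align a 2D spectacles asset with two detected eye landmarks.

    left_eye, right_eye: (x, y) pixel coordinates from a landmark detector.
    asset_eye_distance_px: distance between the lens centers in the asset image.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)            # in-plane head roll
    eye_distance = math.hypot(dx, dy)     # measured inter-eye distance
    scale = eye_distance / asset_eye_distance_px
    center = ((left_eye[0] + right_eye[0]) / 2,
              (left_eye[1] + right_eye[1]) / 2)
    return angle, scale, center

# Example: eyes 60 px apart on a level face, asset drawn with 120 px spacing,
# so the asset is scaled down by half and not rotated.
angle, scale, center = overlay_transform((100, 200), (160, 200), 120)
```

The full 3D case in the app additionally uses face orientation detection, so the asset follows head pitch and yaw rather than only in-plane roll.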
View complete Poster here.