Google has announced MobileNets, a family of mobile-first computer vision models for TensorFlow.
The company designed them to work on low-power, low-speed platforms like mobile devices. In the realm of visual recognition, mobile devices have long had access to many of these computer vision technologies via the cloud. With MobileNets, devices can classify and detect objects directly through their own cameras.
MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They are built for classification, detection, embeddings, and segmentation, similar to other popular large-scale models.
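The parameterization mentioned above comes from two knobs described in the MobileNets paper: a width multiplier that uniformly thins each layer's channels, and a resolution multiplier that shrinks the input image. A minimal Python sketch of the idea (the function names are illustrative, not part of the TensorFlow API):

```python
def scaled_channels(base_channels, alpha):
    """Width multiplier alpha in (0, 1] uniformly thins each layer's channels."""
    return max(1, int(base_channels * alpha))

def scaled_resolution(base_size, rho):
    """Resolution multiplier rho in (0, 1] shrinks the input image size."""
    return max(1, int(base_size * rho))

# A 64-channel layer at alpha = 0.5, and a 224x224 input at rho = 0.571
print(scaled_channels(64, 0.5))       # 32
print(scaled_resolution(224, 0.571))  # 127
```

Smaller values of either multiplier trade accuracy for lower latency and memory, which is how one model family spans phones with very different hardware budgets.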
The announcement of MobileNets is not surprising given Google’s recent focus on bringing machine learning to mobile devices. With MobileNets, developers will have more tools to create mobile artificial intelligence-powered apps.
Additionally, running these tasks directly on a device benefits users substantially: a big concern is having data leave one’s phone, and on-device computer vision means it never has to.
Right now, a few big companies are working on bringing machine learning to their apps. Apple and Google have dropped hints that they are working on processors designed to best utilize machine learning.
Qualcomm has been focusing on optimizing current and future processors for on-device machine learning as well.