Apple, which launched its A11 Bionic system on chip (SoC) that will drive the company’s AI adoption in smartphones over the next couple of years, will likely lead the AI-capable chip market through 2020.
According to Counterpoint Research, over half a billion smartphones shipped in 2018 will have AI capabilities on board at the chipset level, and one in three smartphones shipped in 2020 will be AI-capable.
Huawei, with its HiSilicon Kirin 970 SoC, is second to market after Apple with AI-capable smartphones. The chipset was launched in September and debuted in the Huawei Mate 10, which is designed to handle diverse computational tasks efficiently thanks to the neural processing unit at the heart of the Kirin 970 SoC.
Qualcomm is also expected to join the AI bandwagon by launching high- to mid-tier SoCs with AI capabilities within the next few months. It should be able to catch up with Apple and Huawei, and is expected to be second in the market in terms of volume by 2020, followed by Samsung and Huawei.
AI applications require huge amounts of data processing, even for small tasks. Until the second half of 2017 they had made little headway in smartphones because the limited processing power of smartphone CPUs would have hindered the user experience.
Sending that information to and from cloud-based data centres is potentially difficult, time consuming and requires a solid connection, which is not always available. That is why manufacturers are looking to have AI capability on board the device.
“The initial driver for the rapid adoption of AI in smartphones is the use of facial recognition technology by Apple in its recently launched iPhone X. Face recognition is computationally intensive and if other vendors are to follow Apple’s lead, they will need to have similar on-board AI capabilities to enable a smooth user experience,” said Jeff Fieldhack, Counterpoint Research Director.
With advanced SoC-level AI capabilities, smartphones will be able to perform a variety of tasks, such as processing natural language, including real-time translation, and helping users take better photos by intelligently identifying objects and adjusting camera settings accordingly. But this is just the start. Machine learning will make smartphones understand user behaviour in an unprecedented manner.
Virtual assistants will become smarter by analysing and learning user behaviour, thereby uniquely serving each user according to their needs. This could potentially help virtual assistants take the leap and become a mainstream medium of interaction between the user and the device, the report added.