Though relatively rare, the best kind of major announcement from the big tech companies is one that combines news of new hardware with a preview of new software that will boost sales in the installation channel. That is just what Google did on Wednesday during the keynote for its 11th annual Google I/O conference in Mountain View, CA.
At its core, Google I/O is where those who develop apps and products learn about the latest versions of Google's software tools and suites. However, the first-day keynote sets the stage with a more general overview of what is coming down the pike, and how Google itself is implementing it.
The wide variety of announcements focused on what Google CEO Sundar Pichai described as the company's core mission of “organizing the world’s information.” Looking deeper, last year’s statement that the search and software giant was moving from “Mobile First” to “AI First” proved even more of a guiding principle, now extended by an emphasis on deep learning. The melding of databases, search and selection, machine learning, and the way in which end results are displayed or presented clearly showed that.
On the search front, the Google Assistant is being continually improved to mine not only web-based data, but also personal storage and location awareness. For example, the new Google Lens initiative will use deep learning so that the user’s device can recognize its location and identify the objects in view. Take a picture of a product such as a router, and Lens can recognize it and return the serial number and other details from its information label.
With the Google Assistant as the front end, a key announcement is that the Assistant will soon become available for iOS. With the ability to search and command from both major mobile platforms, seamless operation through to end devices will be accelerated by the coming availability of a developer platform that will enable apps, services, and product manufacturers to build Assistant integration into third-party devices. It could be said that Amazon’s Alexa Skills have paved the way for that, but here Google will go further by allowing commands to be typed as well as spoken. Another point of difference will be integrated transaction processing, which will let the Assistant not only return a search result, but also pass the user’s delivery address to a third-party establishment and complete the purchase via secure, fingerprint-authorized payment.
On the output side, we will also see a push beyond voice and activity responses into visual responses as well. For example, speak a request to a Google Assistant-capable device and have the result appear via a TV’s built-in Chromecast or a connected streaming dongle.
Indeed, it was noted that the “fastest growing screen for control is not mobile, but the one in the living room.” To emphasize that, sales on our side may be prompted by an increased focus on 360.