Apple has hired Google’s chief of search and artificial intelligence, John Giannandrea, a major coup in its bid to catch up to the artificial intelligence technology of its rivals. —Jack Nicas and Cade Metz, the New York Times
Excerpts from the reactions that caught my eye:
Neil Cybart, Above Avalon: Apple Poaches Google A.I. Leader. This news was met by a pretty cynical narrative involving Apple never taking A.I. and machine learning seriously until now. The byproduct of this ineptitude has been a dysfunctional Siri, which was put into the spotlight with the recent HomePod launch. While I don’t agree with that assertion, it’s the current narrative in many tech circles.
Gene Munster, Loup Ventures: What this means for Apple. It’s a win. Talent follows talent, and John Giannandrea will no doubt help to build Apple’s AI brand and enhance future recruiting efforts. His shared vision on privacy is good news for a company who claims to be the vanguard of user security. In the meantime, Google will maintain its strength in AI, given they are still an “AI first company” and have tremendous AI and deep learning horsepower with their Google Brain and DeepMind teams. Jeff Dean, the founder of Google Brain, has taken over as the head of their AI department in a “reshuffling” making AI a more central part of their business. Will Google employees follow in Giannandrea’s footsteps? There will probably be a few, but the competition is fierce, and this will not be the last major AI trade.
Ina Fried, Axios: Why it matters. AI is a key strategic area with Google, Apple, Facebook, Microsoft and others all looking to outflank their rivals.
Nick Statt and James Vincent, The Verge: With Giannandrea joining the fold … Apple may be able to recruit more top-level talent and improve its algorithms, a feat the company has long said it wants to achieve without violating its stance on privacy. Yet because neural networks — the backbone of deep learning techniques for developing self-improving software — require large amounts of data to be trained on, Apple is necessarily at a disadvantage because it only has access to publicly available data sets. Facebook and Google, on the other hand, operate large-scale data collection operations with billions of users around the globe.
Patrick Moorhead, Moor Insights: One person alone rarely makes all the difference, but he will likely pull people over from other companies. Historically, Apple very rarely changes its direct reporting structure. Big deal from that angle.
Tripp Mickle, Wall Street Journal: Apple has been looking for a senior artificial-intelligence executive for nearly a year, a person familiar with the search said.
Benedict Evans, Andreessen Horowitz: Apple Hires Google’s A.I. Chief. Important. Probably more important: they persuaded him he’ll have interesting problems to work on, and that he’ll be able to get stuff done, which implies broader changes.
John Gruber, Daring Fireball: If it works out, we’ll probably look back at this as one of the most significant Apple executive hires ever.
My take: Smart move. About time.
competitors would be well advised to make the effort.
(Sorry about the (dot) stuff, but lately every time I try to post a story with even a single link here, it gets shuffled off into “awaiting moderation” limbo, and then I have to email PED directly, etcetera.)
From the article:
“But Cook’s steadfast aversion to the cloud presents a challenge as Apple tries to build up new features powered by machine learning and AI. To build and run machine learning services you need computing power and data, and the more you have of each the more powerful your software can be. The iPhone is beefy as mobile devices go, and it’s a good bet Apple will add dedicated hardware to support machine learning. But it’s tough for anything it puts in your hand to compete with a server—particularly one using Google’s custom machine learning chip.
Compare the photo management apps from Apple and Google to see how this can play out. Both use neural networks to parse your photos so you can search for dogs and trees and your best friend. Apple’s Photos does this entirely on your iPhone. Google Photos does it all in the cloud.
Of the two, only Apple’s app will let you search your iPhone snaps for “dog” while in airplane mode at 30,000 feet, and not having to wait while your query and the response travel across the internet can in theory make searches snappier. But Google Photos has generally been favored by reviewers (including our own) impressed by the power of the search company’s image-parsing algorithms. Local processing works great for many things, but if you want to push the envelope it’s hard for a mobile device to outsmart cloud AI, says Eugenio Culurciello, a professor at Purdue University who works on hardware to accelerate machine learning. “In a server you can do so much more work in any second,” he says.”
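To make the trade-off concrete, here is a rough Python sketch of the on-device approach the article describes: a compact pretrained classifier tags photos locally and builds a searchable index, so a query for “dog” works with the radios off. MobileNetV2 and the Photos/ folder are stand-ins of my own choosing, not anything Apple actually ships.

```python
# Illustrative only: MobileNetV2 stands in for whatever compact model
# a phone would actually ship; "Photos/" is a hypothetical local folder.
import glob

import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

label_index = {}  # label -> photo paths, stored entirely on the device
for path in glob.glob("Photos/*.jpg"):
    img = tf.keras.preprocessing.image.load_img(path, target_size=(224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), 0))
    preds = model.predict(x)
    # decode_predictions maps ImageNet class ids to readable labels
    for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
        if score > 0.2:  # arbitrary confidence cutoff for indexing
            label_index.setdefault(label, []).append(path)

# The search itself is a dictionary lookup; nothing leaves the phone,
# which is why it still works in airplane mode.
print(label_index.get("golden_retriever", []))
```

The cloud approach runs the same kind of model on far bigger hardware trained on far more data, which is the quality edge the article credits to Google Photos.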
“In a move that could shift the course of multiple technology markets, Google will soon launch a cloud computing service that provides exclusive access to a new kind of artificial-intelligence chip designed by its own engineers.
CEO Sundar Pichai revealed the new chip and service this morning in Silicon Valley during his keynote at Google I/O, the company’s annual developer conference.
This new processor is a unique creation designed to both train and execute deep neural networks—machine learning systems behind the rapid evolution of everything from image and speech recognition to automated translation to robotics. Google says it will not sell the chip directly to others. Instead, through its new cloud service, set to arrive sometime before the end of the year, any business or developer can build and operate software via the internet that taps into hundreds and perhaps thousands of these processors, all packed into Google data centers.
The new chips and the new cloud service are in keeping with the long-term evolution of the internet’s most powerful company. For more than a decade, Google has developed new data center hardware, from computer servers to network gear, to more efficiently drive its online empire. And more recently, it has worked to sell time on this hardware via the cloud—massive computing power anyone can use to build and operate websites, apps, and other software online. Most of Google’s revenue still comes from advertising, but the company sees cloud computing as another major source of revenue that will carry a large part of its future.”
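For a sense of what “tapping into” those processors over the internet looks like from the developer’s side, here is a minimal sketch using TensorFlow’s TPUStrategy, which is today’s interface rather than the launch-era one described above; the TPU name “my-tpu” is a placeholder for a Cloud TPU you would have to provision yourself.

```python
# Sketch only: assumes a Cloud TPU named "my-tpu" has been provisioned.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# The model is defined locally, but the strategy places and trains it
# across TPU cores sitting in a Google data center.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```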
“Completely off topic, but Sunday’s New York Times front page story hasn’t been addressed here yet, and it’s VERY relevant to several PED stories of the last few weeks:
www (dot) nytimes (dot) com/2018/03/31/business/media/amazon-google-privacy-digital-assistants (dot) html
Note that the only mention of Apple in the whole article is one nearly irrelevant sentence, to wit: “Apple recently introduced its own version, called the HomePod.” That sentence links to another NY Times article entitled “Apple’s HomePod Has Arrived. Don’t Rush to Buy It.” The writer tests Alexa, Home, and HomePod and gives them these grades:
■ Amazon’s Echo — 3.4
■ Google’s Home — 3.1
■ Apple’s HomePod — 2.9
The author’s conclusions: “But Siri on HomePod is embarrassingly inadequate, even though that is the primary way you interact with it. Siri is sorely lacking in capabilities compared with Amazon’s Alexa and Google’s Assistant. Siri doesn’t even work as well on HomePod as it does on the iPhone.” (No mention of the fact that the iPhone is not “always on and listening”; you have to literally push a button on the iPhone or tap on your Apple Watch or AirPods to get Siri to listen to you.)
I’ve gone on at length about how the HomePod is far more secure than its competition. Shame that wasn’t mentioned in the article that panned it in the first place. Or in the Sunday article, for that matter….”
Heavy computing resources will always beat out pocket computers for computationally intensive tasks. But this article also ignores the fact that you can train a learning system in the data center and then download what has been learned to the pocket computer. Crunching already-learned stuff (inference) is a lot more computationally tractable than learning new stuff (training).
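That split, learn on big iron and ship only what was learned, can be sketched in a few lines. The example below is illustrative rather than anyone’s actual pipeline: a tiny Keras classifier stands in for the server-side training job, and the exported TensorFlow Lite file is the kind of compact, already-learned artifact a phone can execute locally.

```python
# Illustrative sketch: train in the data center, ship only the weights.
import tensorflow as tf

# 1. "Learning new stuff": the expensive part, done on server-class hardware.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=1, batch_size=128)

# 2. "Downloading what has been learned": export a compact artifact that a
#    mobile runtime can execute locally, with no server round trip.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("classifier.tflite", "wb") as f:
    f.write(tflite_bytes)
```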
We might recall Philip’s issue with HomePod’s Siri: it could not get the context that Philip’s taste in music was different from his wife’s.
Siri does a fine job of telling me when I need to get going, considering local traffic. Pretty basic.
This firm used its technology in speech recognition. Hmmmm, speech as a UI.
Siri’s ability to retrieve data is limited by the number of databases Apple subscribes to (fewer than half the number Google subscribes to).
Additionally, even with all the angst about Siri’s abilities, sales have not been materially impacted. I think this is so because of Google’s blind obsession with collecting user data and selling ads. That obsession has led Google to make its key technologies available on iOS through the App Store, so consumers who want an iPhone but prefer Google Maps and the like can get the best of both worlds. This has allowed Apple to focus its resources (allocation/budgets) on its vision until those resources are freed up to improve Siri. Once that happens, users will naturally migrate to Siri because it will be the default search application (personal assistant, maps, etc.).