Analyst: Why Apple’s new AI chief must not be beholden

John Giannandrea, former head of Google’s machine learning division, starts later this month.

From a note to clients by Guggenheim’s Robert Cihra that landed in my inbox Tuesday:

We have confirmed that Mr. Giannandrea will start later this month and report DIRECTLY to Apple’s CEO Tim Cook, which we consider critical so that he is not beholden to any one product or service (e.g., Siri, iPhone), the way Craig Federighi runs Software across iOS and macOS platforms.

That said, we presume Siri will be Mr. Giannandrea’s initial focus, as we believe natural language understanding (NLU) is one of his specialties (e.g., his quote from Google’s own NLU team blog says “Understanding language is the holy grail of machine learning”). And that seems key, since we think Apple was early launching Siri with the iPhone 4s back in 2011, has since been challenged to keep pace with Alexa and Google Assistant, and yet maintains a strong point of leverage in Siri being the DEFAULT AI assistant on iPhones, so thereby accessed by hundreds of millions of active users every month…

From a positioning standpoint, we see Apple’s hardware+software integration and economic model built on selling DEVICES not cloud services, so expect its AI/ML to stay focused on making its products more intuitive (better), not conduits to monetizing something else. We think this is why Apple has no incentive to ever provide an API for Siri to run on third-party hardware, rather just an API for developers to access Siri from third-party iOS apps. Apple’s self-imposed policies to protect user privacy can also sometimes be seen as a hurdle to harvesting / learning from much more massive customer datasets.
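
The developer-facing Siri hook Cihra refers to is presumably SiriKit, which lets a third-party iOS app fulfill a request Siri has already parsed, without the app ever hosting the assistant or handling the raw audio. A minimal sketch of the shape of that API; the intent type and protocol are SiriKit's, while MyMessageService is a hypothetical stand-in for an app's own code:

```swift
import Intents

// Hypothetical stand-in for the app's own messaging layer.
enum MyMessageService {
    static func send(text: String, to recipients: [INPerson]) {
        print("Sending \"\(text)\" to \(recipients.count) recipient(s)")
    }
}

// Sketch of a SiriKit intent handler of the kind an iOS app ships in an
// Intents extension. Siri owns speech recognition and language
// understanding; the app only fulfills the already-resolved intent.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Tell Siri the app is ready to act on this request.
    func confirm(intent: INSendMessageIntent,
                 completion: @escaping (INSendMessageIntentResponse) -> Void) {
        completion(INSendMessageIntentResponse(code: .ready, userActivity: nil))
    }

    // Fulfill the request through the app's own service.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        MyMessageService.send(text: intent.content ?? "",
                              to: intent.recipients ?? [])
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```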

However, we expect that Apple can still leverage its ownership of iOS and product-centric model to its advantage, including through more EDGE processing to execute AI/ML directly on its devices (e.g., iPhone, Watch, AirPods, Apple TV, HomePod), as well as leveraging its unique focus on user privacy as a marketing advantage (e.g., “differential privacy”).
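
On “differential privacy”: the core idea is that each device adds calibrated noise to its own data before anything leaves it, so population-level statistics can be learned without any individual report being trustworthy on its own. A toy sketch of the classic randomized-response mechanism; this is purely illustrative, and Apple's actual deployment uses more elaborate machinery:

```swift
import Foundation

// Randomized response: a simple local differential-privacy mechanism.
// Each device perturbs its own answer before reporting it.
func randomizedResponse(truth: Bool) -> Bool {
    // First coin flip: with probability 0.5, report the truth...
    if Bool.random() { return truth }
    // ...otherwise report a uniformly random answer.
    return Bool.random()
}

// The server can still de-bias the aggregate: if p is the observed
// fraction of "true" reports and t the real rate, then
// P(report true) = 0.5 * t + 0.25, so the estimate of t is 2p - 0.5.
func estimateTrueRate(fractionReportedTrue p: Double) -> Double {
    return 2 * p - 0.5
}
```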

My take: Cihra has thought more deeply than most Apple analysts about machine learning and how it fits into Apple’s business model.

7 Comments

  1. Gianfranco Pedron said:

    “Apple’s self-imposed policies to protect user privacy can also sometimes be seen as a hurdle to harvesting / learning from much more massive customer datasets.”

    Yikes! They’re starting to understand.

    Honestly, I don’t know anything about Cihra’s track record regarding Apple (and AAPL), but it’s nice to see someone, other than respected members of this comment section and, of course, PED, bring attention to Apple’s dedication to privacy with respect to Siri.

    April 11, 2018
  2. David Emery said:

    There’s an opportunity for Mr. Giannandrea to start a public dialog on this, one that highlights Apple’s devotion to privacy but also recognizes what is necessary to improve Siri.

    (disclosure: I might be the last person with an iPhone who has never tried Siri.)

    April 11, 2018
    • Richard Wanderman said:

      I agree.

      You should try Siri, even in its current state. If you limit domains it’s amazingly useful. I use it daily, and not just to show off; it’s been of great use to me in the car, in the house, and on the trail.

      April 12, 2018
  3. David Gleason said:

    On the broadest scale, there are some pretty good arguments that sensor data will kill the cloud by overwhelming it. Just one autonomous vehicle can generate up to 5GB of data per mile, which no network could handle and no data center could store (some rough arithmetic on that scale follows this comment). That makes EDGE computing far more realistic and feasible as millions of devices come online. Apple could be in a much better position than we realize. See Peter Levine of A16Z:
    The end of Cloud Computing
    https://a16z.com/2016/12/16/the-end-of-cloud-computing/

    April 11, 2018
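
A rough back-of-the-envelope version of the 5GB-per-mile figure above; the annual mileage and fleet size below are illustrative assumptions, not numbers from the A16Z talk:

```swift
// Rough scale of the sensor-data problem, using the 5 GB/mile figure
// cited above. Mileage and fleet size are illustrative assumptions.
let gbPerMile = 5.0
let milesPerYear = 12_000.0      // assumed average annual mileage
let vehicles = 1_000_000.0       // assumed fleet size

let tbPerVehiclePerYear = gbPerMile * milesPerYear / 1_000             // ≈ 60 TB
let ebAcrossFleetPerYear = tbPerVehiclePerYear * vehicles / 1_000_000  // ≈ 60 EB

print("≈ \(Int(tbPerVehiclePerYear)) TB per vehicle per year")
print("≈ \(Int(ebAcrossFleetPerYear)) EB across the fleet per year")
```
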
    • David Emery said:

      In the last decade, I was on an Army project (FCS) where this was a definite consideration. We realized we couldn’t move terabytes of data over the radios. My solution was to have the fuel trucks download data from each combat vehicle at the same time it gassed them back up. The 3 problems are (a) moving the data to its “permanent home”; (b) fusing the data with all the other data you have to generate useful information; and (c) long-term storage/archiving of all that data.

      April 11, 2018
      • David Gleason said:

        Levine’s solution, as I recall, is that most data need not be stored; it can be evaluated on the vehicle, massively reduced to key data elements, and then uploaded.

        April 11, 2018
        • David Emery said:

          A lot of times, you need non-local data to determine what is “key”. Example: subsequent reports (from different observers) of the same event can be discarded, -if- you know of the existence of the first report (and you’ve shown these are duplicate events). I have other examples from my previous line of work, but probably shouldn’t post them 🙂 🙂

          But I’ll point out this is related to some significant Quality of Service issues you have in saturated networks. Often you can’t determine the value of data at its point of creation; you can only make that determination at the point of consumption. At best you can do heuristics on a single node to -guess- “this data is not important now.” And that tended to be true of network status data itself; one QoS model showed the network being saturated by messages telling everyone the network is approaching saturation. (A toy sketch of the duplicate-report point follows this comment.)

          April 11, 2018
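
A toy version of the duplicate-report point above, to make it concrete; the idea that different observers can agree on a common event key is an assumption made for the sake of the sketch:

```swift
// Duplicate-report suppression: a report only becomes "not key" once you
// know an equivalent report already exists. That "seen" set is exactly
// the non-local state that is hard to share over a saturated network.
// Toy, single-node version.
struct Report {
    let eventID: String    // assumes observers can agree on a key for the event
    let observer: String
}

var seenEvents = Set<String>()

// Forward a report only if the same event has not been seen before.
func shouldForward(_ report: Report) -> Bool {
    return seenEvents.insert(report.eventID).inserted
}

let first = Report(eventID: "bridge-collapse-41", observer: "UAV-7")
let second = Report(eventID: "bridge-collapse-41", observer: "scout-2")
print(shouldForward(first))   // true  — first sighting, keep it
print(shouldForward(second))  // false — duplicate, but only knowably so
                              //         because this node saw the first report
```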
