Apple: The emojification of iMessage (video)

Watch Apple’s most popular app get way stickier.

If you, like me, don’t get what the big deal is about stickers, I recommend Connie Chan’s excellent “The Elements of Stickers” on the Andreessen Horowitz website.

If you already know why swapping cartoon emotions is huge among the WeChat set, skip the backgrounder and go straight to the video below.

In 15 minutes: The emojification of iMessage.

4 Comments

  1. Jonathan Mackenzie said:

    This comment is long. I don’t mind if no one reads it…

    Some people are calling this event a snoozer (or at least some headlines were calling it that), but I saw some stuff that was more encouraging than anything I’ve seen in years.

    While some make the argument that Apple is behind the curve in AI and other popular new technologies (like voice recognition/personal assistant), I don’t necessarily see it that way.

    Apple spent some time getting used to the idea that the near future of mobile computing is apps. Because of this, they gave their developers an API that gets them closer to the “metal” for maximum performance and creativity. Then they gave them Swift to make creating apps a bit easier. (That was the goal, anyway; I’ll leave it to others to decide whether Swift achieves it.)

    So now they are giving greater and greater access to the device through Apple’s own apps and services, like iMessage and Siri. All of these steps seem to put developers in a fantastic position to bring their great ideas to iOS devices.
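
    To make that concrete, here is a rough sketch of the kind of thing the new Siri API lets a developer do. It’s purely illustrative (the class name and comments are my own, not Apple sample code): a tiny Intents-extension handler that lets Siri hand a “send message” request to a third-party app.

        import Intents

        // Illustrative sketch only: a minimal SiriKit handler for the
        // "send message" domain opened to developers at WWDC 2016 (iOS 10).
        class SendMessageHandler: NSObject, INSendMessageIntentHandling {

            // Siri calls this once it has worked out who the message goes to
            // and what it should say.
            func handle(intent: INSendMessageIntent,
                        completion: @escaping (INSendMessageIntentResponse) -> Void) {
                // A real app would pass intent.recipients and intent.content to its
                // own messaging service here; this sketch just reports success.
                let activity = NSUserActivity(
                    activityType: NSStringFromClass(INSendMessageIntent.self))
                completion(INSendMessageIntentResponse(code: .success,
                                                       userActivity: activity))
            }
        }

    The point isn’t the specific domain; it’s that Siri does the listening and the language parsing, and the app only has to act on the structured intent it receives.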

    We may be a little early for Kurzweil’s singularity, but it is clear that a few technologies are rapidly converging:

    –batteries
    –wireless communication
    –computers that make independent choices and act on them

    Together, these mean that everything from cars to smartphones, and all the robot shapes in between, will run on batteries, use wireless signals to connect with the rest of the world, and analyze data in order to make independent choices.

    I may have a spreadsheet on my iPad that takes stock market data and performs calculations. I may choose to act on the information I see. But the computer of the future will be able to act on its own (if I authorize it to do so) without waiting for my approval. The self-driving car makes its own decisions and then acts on them. And we should expect that same decision-making power to be brought to bear in any number of realms.

    The mobile device of the future will be an odd union of data analysis/machine logic and our own personal wants and desires. If we are running a particular app and have given our iOS device permission, we may start getting notifications on our screens that read things like, “It was just announced that Eminem is playing the Civic Center next week. I booked you 4 tickets. Should I invite Bob and Alice?” (Insert whatever play or concert you’d be enthusiastic about.)

    So Apple is playing a long game. The history of iOS runs from providing basic access to the device, to better tools for programming it, and now to deeper access to the device itself. And all of this while they continue to explore better batteries (and their cousin, cleaner power sources) and dabble secretly in a world where computers act on the decisions they make without waiting for human approval.

    Self-driving cars will be transformative in ways that are hard to imagine, but mostly because they will come of age in an environment where other devices also execute their own “decisions”.

    As this picture becomes clearer to me, it seems more obvious that Apple is well positioned to succeed in such a world. It is hard to imagine that AI bots could proliferate in the world and somehow not be very successful on iOS devices. The only way that happens is if Apple fails to allow the bots to be powerful enough to do interesting things. This WWDC convinced me that Apple is cautious, not stingy, in how much device control it hands over to developers.

    June 18, 2016
    • Gregg Thurman said:

      Jonathan, your comments about the future of computing power, especially about machine logic, are most prescient.

      When I first read about VocalIQ’s capabilities, especially in relation to competing AI/machine-learning products, it became apparent (to me anyway) that IoT devices won’t be “dumb” plug-ins, but rather “smart” devices that can act independently of human initiative.

      I was very surprised to see so much of VocalIQ’s capabilities incorporated into Siri so soon after the acquisition. To me, the speed at which the two products were merged is a HUGE signal of how important VocalIQ’s capabilities are to the future of Apple’s products and vision. Giving developers (software and hardware) API access to Siri just reinforces that feeling. Peripherals and apps designed for competing AI platforms are going to be much less powerful than those developed for Siri, and will remain so until they change how they learn; VocalIQ is a paradigm shift in AI/machine-learning technology.

      The extremely short presentation of Siri notwithstanding, I think the demonstration of what Siri can do now was the most important revelation of WWDC.

      Siri/VocalIQ just left the competition (Google Now, Amazon Alexa, Microsoft Cortana) in the dust.

      June 24, 2016
      • Jonathan Mackenzie said:

        Thanks for the feedback and additional insight.

        Your data supports the idea that, contrary to the common wisdom, Apple may actually know what the heck it’s doing. I certainly can’t predict how the world will change as a result of this new kind of computing, but we seem to share the belief that it is likely to be transformative. It’s a very exciting time for Apple enthusiasts.

        June 25, 2016
