Apple wraps itself in AI just in time for another ‘AI winter’

I remember the first AI winter. And the second. And the third.

From The Verge: Self-driving cars are headed toward an AI roadblock: 

The dream of a fully autonomous car may be further than we realize. There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.”

From the Wikipedia entry on “AI winter”:

The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the “American Association of Artificial Intelligence”). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the “winter” of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.

My take: As the Wikipedia article demonstrates, the history of AI is littered with cycles of hype and disappointment. Early machine translation. Connectionism. Lisp machines. Expert systems. Fifth generation computers. The Strategic Computing Initiative.

As a young staff writer at Time I tried several times to produce a story on AI that could run on the cover of the magazine—which was still a big thing in those days. I never managed to write an editable draft. Partly it was because the “artificial intelligence” deliverables at the time—expert systems—didn’t live up to the billing. But I think it was bigger than that. It was as if the human mind, or at least the mind of a Time editor, shrank from the possibility that a machine might actually do what it—the mind—does.

The current cycle of AI hype is centered on machine learning and autonomous systems—which Tim Cook last year called “the mother of all AI projects” and declared (belatedly) an area of intense interest for Apple.

If history is any guide, disappointment will follow. The limits of machine learning—as the MIT Technology Review periodically reminds us—are well known to academics. For one thing, today’s machine learning modules are brittle, narrowly focused and don’t generalize well. They are also hard to edit because at their core their “learning” is incomprehensible, in the sense of being beyond human understanding.

That may be the real AI roadblock of the Verge’s headline. I’m not sure investors—never mind the general public—have wrapped their minds around what it means to trust your life to an autonomous system moving at highway speeds. After all, what is a self-driving car but a machine that can kill in the hands of a computer whose operations are beyond human ken?

18 Comments

  1. Neil Shapiro said:

    Can you expand on your comment that “They (machine learning modules) are also hard to edit because at their core their ‘learning’ is incomprehensible, in the sense of being beyond human understanding”? As computer programs written by humans, wouldn’t they have to have a core idea based on a human programmer’s own idea and direction to the machine as to how they learn?

    0
    July 6, 2018
    • I’m no expert, but exposing a neural network to a few tens of thousands of instances and reinforcing the hits sounds to me like a way to build a black box.

      0
      July 6, 2018
    • John Konopka said:

      “They (machine learning modules) are also hard to edit because at their core their “learning” is incomprehensible, in the sense of being beyond human understanding.”

      As a very rough analogy, think of a very flexible sheet of rubber that can maintain its shape after being stretched. AI software is like the sheet of rubber; the knowledge is the shape that it acquires by learning. If you stretch the sheet of rubber over a recognizable object you can intuit that stored information by looking at it. However, if that shape is used to contain the information needed to handle all sorts of cases encountered when driving a car, that shape will be a completely non-intuitive set of contours. There is no easy way to predict what each little twist and turn does. During learning, every time the software encounters an error it tweaks that stored knowledge a little. Just my own opinion, but I think there is no guarantee that lots of training will cause the system to settle into a stable solution.
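
      In code, that tweak-on-every-error loop looks roughly like the toy sketch below (my own illustration in Python, nothing like actual driving software). A tiny network learns XOR by nudging its weights a little each time it gets an example wrong. The final weights do the job, but staring at them tells you nothing intuitive about why they work, which is the “non-intuitive contours” problem.

        import numpy as np

        rng = np.random.default_rng()
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
        y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs (XOR)

        W1 = rng.normal(size=(2, 8))   # hidden-layer weights ("the rubber sheet")
        W2 = rng.normal(size=(8, 1))   # output-layer weights

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(20000):
            h = sigmoid(X @ W1)                # hidden activations
            out = sigmoid(h @ W2)              # the network's guess
            err = out - y                      # how wrong it was on each example
            # every error tweaks the stored "shape" a little (plain gradient descent)
            grad_out = err * out * (1 - out)
            grad_h = (grad_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ grad_out
            W1 -= 0.5 * X.T @ grad_h

        print(np.round(out, 2))   # usually close to [0, 1, 1, 0]: it has "learned" XOR
        print(W1)                 # ...but the knowledge is just these opaque numbers
        print(W2)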

      If you want to play with this, get an app called Polyword (for iOS). You point the camera at something and snap a picture; it tries to recognize the object and gives you the name in two languages. Nice, when it works. It recognized my computer mouse, but called my ruler a spatula and my dachshund a chihuahua! Often it says the object is unrecognizable.

      AI is a very strange technology. When we attempted manned flight, or rocketry, we had a clear goal. We had rough ideas about the need for engines with a certain power to weight ratio, light but strong materials, etc. With AI we have an indistinct goal. We want computers to think more or less like we do. However, we don’t know how we think so we can’t copy that. We have lots of clues, we have lots of algorithms, but we don’t really know how we put it all together.

      The exasperating thing is that it feels so easy for us that we think that it should be easy to transfer this talent to machines.

      “The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.”

      https://singularityhub.com/2018/01/18/how-fast-is-ai-progressing-stanfords-new-report-card-for-artificial-intelligence/#sm.0000csfbjuzbsff0vl028ll7k7ima

      People still think like this. They say that strong AI is just around the corner. Now that corner is about 10 years out.

      3
      July 7, 2018
      • David Emery said:

        Huh… I worked in high tech for 35 years, and I -never- heard anyone technical say “AI is just around the corner”. Quite the opposite, it was always “AI has great promise, but there’s a lot of work to be done to realize that promise.”

        1
        July 7, 2018
  2. Gregg Thurman said:

    “what it means to trust your life to an autonomous system moving at highway speeds.”

    Talking autonomous cars with friends, I find acceptance generally falls into two categories:

    1. People who aren’t technology literate, and
    2. People who are technology literate.

    Computer literate should not be confused with technology literate. Computer literate just means you know how to USE a computer. It does not mean you know how a computer’s guts (hardware/software) work.

    Autonomous vehicles have already proven to be better drivers (fewer accidents) than human drivers by a wide margin. Yet we allow teenagers of very limited skill and judgment to get behind the wheel without giving such access a second thought.

    In my opinion, the only thing holding autonomous driving back from use today is ignorance of the technology and technophobia.

    1
    July 6, 2018
  3. Gregg Thurman said:

    Adding to the above:

    If we accepted that autonomous vehicles were 30% safer than humans (they are) and put them on the road (en masse) today, the machine learning, just by virtue of greater input, would make them even safer than they are today in a very short time.

    The roadblock to doing that is the erroneous belief that machines require human direction. I think sociologists (I may be using the wrong profession) would call that fear of losing dominion.

    1
    July 6, 2018
    • Jonathan Mackenzie said:

      Roads containing large numbers of autonomous vehicles will be safer not just for the reason you give, but also because these vehicles could communicate with each other. They could share not only their intentions, such as deciding who gets precedence on a left turn at an intersection, but also information about the shared environment.

      From the Verge story, “A fatal 2016 crash saw a Model S drive full speed into the rear portion of a white tractor trailer, confused by the high ride height of the trailer and bright reflection of the sun.”

      In a world with multiple autonomous vehicles on the road, the “lay of the land” can be checked with the other cars on the road. Simply put, other cars might tell the confused car that it is about to collide with another object.

      The best drivers may be able to outperform self-driving cars even for decades, but self-driving cars are not competing against the best drivers of the world; they are competing against the average driver.

      Monitoring and regulation of self-driving traffic is also something that could be done on a collective scale by computer, for a government agency with a human at the head. Imagine, for instance, if the Los Angeles area had 80% autonomous vehicles. These vehicles could be watched and given rules from the traffic network itself. An administrator could press a button and cars might be rerouted to avoid anything from the presidential motorcade to a brushfire. What government wouldn’t want such control over local transportation? And what argument could be made that it was not safer than human navigation? Imagine what this would do to improve evacuations of residents during natural disasters.

      Computer controlled transportation will be safer, more efficient, and subject to greater governmental control and oversight than human driving. I think these qualities make it inevitable.

      0
      July 6, 2018
  4. Fred Stein said:

    Some points:

    1) In 5 to 10 years we will have commercial self-driving cars, but they may be niche applications. Converting the installed base of cars will take another two decades.

    2) The AI for self-driving cars is just one of many AI applications. We are likely to see consolidation (winnowing) in self-driving soon, but no “AI winter”. AI professionals will have many career options.

    3) While some promises of AI are over-hyped, we won’t see a dramatic slowdown in AI investments in chips, networking, software, and applications.

    0
    July 6, 2018
  5. David Emery said:

    If the software makes a mistake, who’s liable? The software developer? The automobile company? The driver?

    And from an engineering perspective, what can be done to limit that liability? In civil engineering, there are accepted practices that, if you follow them, shield you from liability. (My father was a civil engineer, and we talked about this a lot.)

    0
    July 6, 2018
    • Gregg Thurman said:

      What good is there in determining fault after the fact? If you’re dead as a result of a collision, you are still dead. Fault is an insurance issue. I’m not aware of any State that has not adopted “no-fault” insurance rules.

      State legislatures will have to adopt the same type of liability rules for autonomous vehicles as they do for civil engineers. That will happen when legislatures figure out how to replace gasoline tax revenue lost to electric vehicles. One will not happen without the other.

      Bottom line is that legislative approval is a question of revenue, not liability.

      1
      July 6, 2018
      • David Emery said:

        I’ve argued for professional licensing for software engineers for 30+ years, and debated that with many people I respect in the tech industry. It hasn’t happened, and I don’t see any reason to think it will happen soon.

        The defense for civil engineers (my father was one, we talked about this a lot because he was involved as an expert witness in several liability cases) is following well-accepted practices. I don’t see (a) a strong consensus on what those practices are in software; (b) any commitment by industry (with a few exceptions, such as commercial avionics) to identifying and following those practices; (c) any interest in universities in teaching those; (d) any interest in the practitioner community in investing the time to get licensed (e.g. EIT and PE tests).

        What I see is the -opposite-! (a) a dependence on practices and technologies we know are substantially less safe/effective (measured in preventing bugs) than others; (b) no willingness by industry to invest in its human capital – instead they hire more cheap bodies to throw at the problem (see http://www.thedailywtf.com if you want to read about the results); (c) universities that focus on individual coding rather than team software development; (d) continued opposition to any idea of actually accepting responsibility, let alone liability, for one’s own work.

        It will take some spectacular failures, and then the politicians will act. But they won’t necessarily pass the right laws, because the tech industry and professional societies have not built up any credibility in telling legislatures what those laws should look like (because they’ve been fighting them for so long).

        The one real working counter-example is commercial avionics. Between the FAA, other national & international agencies, and the airplane industry, there is a set of practices used for software development (e.g. DO-178C). Those practices are hugely expensive, with the verification costs for a new airframe in the billions of dollars. And some of those practices have been questioned on a cost-effectiveness basis (i.e. “Doing X costs a lot, are we getting a return on that technique as measured by defect avoidance, detection & removal?”)

        IoT, particularly for medical devices, should scare you even more than autonomous vehicles.

        0
        July 7, 2018
  6. Richard Wanderman said:

    This is a great discussion.

    Here’s something to consider: Private and commercial pilots routinely use autopilot to fly entire routes, hands off. Most pilots take off and land planes by hand, although larger airports have landing supports in place to guide planes in. I’ve heard that a 787 and newer Airbuses can taxi, take off, cruise, land, and taxi completely on autopilot. It isn’t done, but it could be.

    We’ve been flying on autopilot for a long time. Yes, vehicular traffic tolerances are a lot tighter than for airplanes, but that doesn’t mean we won’t get things worked out. It’s not a question of if but when.

    0
    July 6, 2018
  7. From reader Walter Milliken:

    I’m with you [PED] on this one. Neural networks are interesting tech, but I don’t expect them to really be a success for fully autonomous driving. They might be successful in niches where environment complexity and variability are relatively low. Highway driving is one case of this. Metro buses might also work out, since they retrace a very predictable route and can thus recognize anomalies more easily.

    A major limitation of neural nets is that they are simply trained on a set of inputs and the expected output. If the trainer didn’t include a particular input, the output of the neural net may be unpredictable, and will likely be a match to whatever trained input vaguely resembles the current situation. So that utility pole that just fell across the road may not be in the training set, and might resemble, say, a crosswalk marking, to the network. Ouch.
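
    Here is a toy illustration of that failure mode (my own made-up features and numbers, written in Python; real perception stacks are vastly more complex). A simple classifier is trained to separate two kinds of road markings. Hand it something it was never trained on and it doesn’t say “I don’t know”: it confidently picks whichever trained class the input most resembles.

      import numpy as np

      rng = np.random.default_rng(1)
      # Made-up 2-D "feature vectors" for two trained classes
      crosswalk = rng.normal(loc=[2.0, 8.0], scale=0.5, size=(200, 2))
      stop_sign = rng.normal(loc=[8.0, 2.0], scale=0.5, size=(200, 2))
      X = np.vstack([crosswalk, stop_sign])
      y = np.array([0] * 200 + [1] * 200)        # 0 = crosswalk, 1 = stop sign

      # Minimal logistic-regression classifier trained by gradient descent
      w, b = np.zeros(2), 0.0
      for _ in range(2000):
          p = 1 / (1 + np.exp(-(X @ w + b)))
          grad = p - y
          w -= 0.01 * X.T @ grad / len(y)
          b -= 0.01 * grad.mean()

      def classify(features):
          p = 1 / (1 + np.exp(-(features @ w + b)))
          label = "stop sign" if p > 0.5 else "crosswalk"
          return label, round(float(max(p, 1 - p)), 3)

      print(classify(np.array([2.0, 8.0])))    # a typical crosswalk: correct, high confidence
      print(classify(np.array([0.0, 20.0])))   # a "fallen utility pole", nothing like the
                                               # training data: still "crosswalk", with even
                                               # higher confidence than on real crosswalk data

    The model has only the outputs it was trained with, so every input gets mapped to one of them, however alien the input is.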

    And then there’s the problem of teaching neural nets necessary abstract concepts like relevant rules and even law. I strongly suspect that in the current driving software, the AI is used primarily at the visual object recognition level, and a lot of the actual decision-making happens in more rigorously written code or rule systems. “Oh, that’s a human walking in front of the car: apply the brakes.”

    As long as the set of inputs is well-constrained, and/or consequences of false or failed matches aren’t too high (e.g. photo-tagging), neural-net-style AI is a very useful tool in the AI toolkit. But beyond that, color me very dubious.

    I also have major concerns about neural net AI applications outside the vehicle operation scenario:

    — I strongly suspect one of Google and Facebook’s major reasons for interest in AI is in predicting user behavior, as in “What will get the user to push the buy button on this ad?” There are major ethics issues here, but I expect the AIs simply won’t get very good at this sort of thing, except with people who are very predictable. But for Google/Facebook, it will still be a win if the AI can increase the user click-through/buy rate a few percent. This is basically just a bigger version of A/B message testing, only probably automated in generating the actual message as well. Look for this at your corner social-media/ad platform or startup venture, ASAP….

    — Same as above, but for political messaging. Only here, the problem is that if the AIs train on feedback from their operations, and adapt to improve, they could easily start determining that the most effective donation/vote-promoting strategies are things that actual humans would (mostly) shy away from for ethical reasons. (Or so one would hope.) How do you train a neural net “Don’t use hate speech or outright libel”? I don’t have a clue…. See also the infamous Microsoft chatbot AI.

    — And then there’s the very opaque world of high frequency financial trading. I can easily see them using neural nets as an extension of their existing adaptive algorithms. It’s a world where a neural net can easily observe the effects of its previous actions and train to get better results. Only, again, there’s this problem of how do you train the network not to invent illegal behaviors like collusion or other types of market manipulation? The thing that really scares me here is that the network could easily be turned loose with perfectly plausible goal metrics like “increase profit” without actually coding anything illegal, but for the network (or more likely a collection of networks at different companies) to evolve successful strategies that would be highly illegal if a human were doing them. And it would be nearly impossible to prove that the AI was colluding with other “competing” AIs, or indulging in market manipulation, since there’s essentially no way to figure out *why* a neural net produced a particular output other than “hmmm… that’s what all these numbers in these tables add up to when processed with this input”. Yes, that’s oversimplified. But I believe it’s roughly accurate.

    And then there are the security issues. What happens when someone figures out that waving a poster with a bunch of special shapes on it can cause crashes in all the AI cars with a specific brand of AI software? Another classic issue: when people tried to build learning-based network security systems, they discovered that it’s pitifully easy for an adversary to train them to do things that favor the adversary. Can a bored teen train a cloud-based driving AI to suddenly swerve every car that drives past a particular point in the road? What about a next-gen laser pointer trick that confuses the LIDAR on all the cars going by? (Okay, that’s a sensor issue, not AI.)
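
    To make the poster trick less mysterious, here is a deliberately over-simplified sketch (my own construction, not a demonstrated attack on any real vehicle). If an attacker knows, or can probe, the model, then thousands of tiny per-feature nudges that all push the score the same way add up and flip a decision, even though each individual feature barely moves. That is the basic intuition behind the “adversarial example” results in the research literature.

      import numpy as np

      n = 10_000                                  # pretend pixel/feature count
      w = np.tile([0.01, -0.01], n // 2)          # a known linear "obstacle vs. clear" model
      scene = np.tile([1.0, 0.9], n // 2)         # a scene the model correctly calls an obstacle

      def decide(x):
          return "brake" if x @ w > 0 else "proceed"

      print(decide(scene), round(float(scene @ w), 2))        # brake, score +5.0

      # Nudge every feature by 0.06 (about 6% of its value) in the direction that lowers the score
      poisoned = scene - 0.06 * np.sign(w)
      print(decide(poisoned), round(float(poisoned @ w), 2))  # proceed, score -1.0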

    Neural nets are basically very densely-coded lookup tables that accept fuzzy input. It’s really very hard to say exactly what they’ll do without trying a specific input set, and even harder to say *why* they produced that particular output, given the set of things they were trained with, if the input isn’t close to something in the training set.

    In other words, neural nets aren’t just black boxes, they’re *magic* black boxes. As a systems architect, this worries me.

    As long as they’re limited to tagging how funny a cat photo is, or even figuring out which ad I should be shown, they’re not too dangerous. As driver assistive devices to provide a backup/cross-check on a human driver, that’s a great idea — that design provides diversity robustness. Letting AIs autonomously drive cars, or make major financial decisions — count me out.

    And yes, “true” AI is always about 10 years out, and has been for decades.

    — Walter

    1
    July 8, 2018
