AI is a Myth, says AI researcher and engineer Erik Larson

Last April, the book The Myth of Artificial Intelligence was published by Harvard University Press. The author is Erik J. Larson, a computer scientist and tech entrepreneur who is active in the field, especially in natural language processing. After reading the book I decided I wanted to bring it to people's attention, which led to me discussing some points with Erik as input for this article; the article then turned into (mostly) an interview.

The book gives an interesting analysis of AI based on a framework for inference created by the 19th-century scientist C.S. Peirce. Peirce was a bit of a non-standard character who wrote many insightful things about intelligent thought, points which remained largely unknown until the middle of the previous century, when his mostly unpublished work finally received some deserved attention (though still not much).
Key to Larson’s book is the use of Peirce’s framework for inference (for drawing intelligent conclusions):

  • Deduction: If A Then B, I see A, so I conclude B from A
  • Induction: Whenever I see A, I also see B with statistical regularity, so I conclude If A Then B
  • Abduction: If A Then B, I see B, so I conclude (it is possible that) A. Or: I guess, I estimate, I hypothesize A from B

Abduction (the term comes from Peirce) is thus a sort of reverse and loose form of deduction, but not reducible to it. An example of abduction:

  • When it rains the street gets wet (If A Then B)
  • I see a wet part of the street outside my window (B)
  • I conclude it has rained (A). But there are many other possible causes: maybe somebody has opened a fire hydrant, or the street has just been cleaned by a cleaning truck, or … So my conclusion is an estimate, a guess, a hypothesis (see the small code sketch after this list).
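
To make the distinction concrete, here is a minimal sketch (my own illustration, not from Larson's book) of the three forms of inference in Python. The rules, causes and function names are made up for the example; the point is only that deduction and induction run forward mechanically, while abduction has to pick a guess from a set of candidate causes.

```python
# A minimal sketch (my own illustration, not from the book) of Peirce's three
# forms of inference. All rules and causes here are made up for the example.

# Known rules of the form "If A then B".
rules = {
    "rain": "wet street",
    "fire hydrant opened": "wet street",
    "street cleaning": "wet street",
}

def deduction(cause):
    # Deduction: If A then B; I see A; so I conclude B. Certain, given the rule.
    return rules.get(cause)

def induction(observations):
    # Induction: from many observed (A, B) pairs, conjecture rules "If A then B".
    learned = {}
    for cause, effect in observations:
        learned[cause] = effect
    return learned

def abduction(effect):
    # Abduction: If A then B; I see B; so I guess A. Several candidate causes
    # may fit, and in the real world the list of candidates never closes,
    # so the result is a hypothesis, not a conclusion.
    return [cause for cause, eff in rules.items() if eff == effect]

print(deduction("rain"))                     # -> wet street
print(induction([("rain", "wet street")]))   # -> {'rain': 'wet street'}
print(abduction("wet street"))               # -> ['rain', 'fire hydrant opened', 'street cleaning']
```

The sketch only looks workable because the candidate causes were enumerated in advance; as the book argues, in real, open-ended situations that enumeration is never complete.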

Larson argues effectively that computers cannot do abduction and that without abduction, no true intelligence can be created, let alone a ‘hyperintelligence’, which has been a core myth of AI.

TL;DR (a.k.a. Summary)

For those wanting to get a feel for the reality and potential of AI, Larson's The Myth of Artificial Intelligence is enlightening and helpful, and I recommend reading the book. For those expecting huge advances in AI in their lifetime, I recommend reading it too, and trying to come to grips with AI's limitations. Your feeling for AI will only get better.

Compared to the main criticism of AI published before (Hubert Dreyfus' What Computers (Still) Can't Do, 1972/1979/1992), Larson's critique is less encumbered with references to analytic philosophy, which most people find dense and arcane. Larson is a tech person and insider, not a philosopher, and it shows. Larson's book also covers more recent developments than Dreyfus' critique (Google Translate, IBM Watson's Jeopardy! win, and the Human Brain Project all get their deserved critical attention). Moreover, the use of Peirce's framework, especially the attention paid to 'abduction' as a key element of intelligent thought, is a real find: it makes a clear and convincing argument possible and it makes the criticism effective.

Those who believe in the unavoidable rise of (truly or 'hyper') intelligent machines tend to attack the critical messengers on form and person, not on substance. It happened to Dreyfus. In fact it happened so much and so effectively that Larson did his best to distance himself from Dreyfus to prevent a "That's just another Dreyfus critique…" reaction. I hope his 'security through obscurity' strategy works, and that his clear argument cuts through the myth, but his strategy is as unfair to Dreyfus as history has long been unfair to Peirce.

Still, it would be good if this book were widely read.

A Conversation With Erik Larson

GW: You use C.S. Peirce's inference framework deduction-induction-abduction as a foundation for your analysis in your book. I knew little about C.S. Peirce (just his name) before reading your book and found it interesting. How much was writing the book driven by a desire to make him better known and give him the credit he deserves?

Charles Sanders Peirce

EL:  It’s interesting that you ask this question, as I found myself thinking of the Myth as at least in part a way to raise deserved awareness of Peirce, certainly in the AI community, where he is essentially ignored, but also as an example of a forgotten genius. His entire body of work — which was voluminous — was focused on very core questions involving thinking, and his obsession (yes I think you can call it an obsession) with abductive or hypothetical inference was not fully appreciated in the 19th century, but clearly has spelled trouble for programmers and AI scientists more recently. It’s funny that even in the philosophy of science, people tend to discount the original idea — the actual hypothesis that leads to testing and deducing downstream. On the face of it, that’s a very curious omission, since without the initial guess or “idea” it makes little sense to rely so heavily on deduction or induction. Peirce realized this early on, and essentially spent his long and troubled career investigating the consequences of the inclusion of abduction in our scientific and everyday thinking.

GW: What would you like to accomplish with the book?

EL: My hope is that people are more discerning and skeptical about obvious AI hype in pop culture and everywhere else. I think it does science a disservice, for one, and as a practicing computer scientist I think it tends to obscure the actual problems we face in engineering applications that solve useful and important problems. When someone like, for example, Nick Bostrom frets endlessly about the coming superintelligence, it seems wise to at least ask some hard (or harder) questions about the actual state of the science in the field. AI is almost singularly vulnerable to this kind of parlor tricking — we don't, for instance, hear endless hype about cold fusion coming around the corner (at least not anymore). I don't think it helps the field or society writ large, so I wanted to try to set the record straight as much as I was able.

GW: The extreme expectations (hype) have been with us about as long as the field has existed. Take the 1970 article from Life, "Meet Shaky, the first electronic person". Here Marvin Minsky states that within 3 to 8 years we will have created human intelligence, 5 months after that a genius through self-education, and a few months after that superintelligence. The 'critical' people interviewed for that story did not say it was impossible, but said 'give it 15 years' instead of 8.

You are not the first with critical observations about the myth. Dreyfus, from the late 1960s on, is the obvious example. In your view, what is the reason that we — with some ups and downs — have been falling for the myth, and still are falling for it?

EL: I find this among the most difficult questions to answer. On the shallow side of it, there's obvious motivation for practitioners to bolster claims of future performance for financial and other reasons like public interest. The deeper question is whether we've redefined ourselves, as Arendt (the 20th-century philosopher and protégé of Heidegger) suggested, as homo faber, or "man as maker." In which case the sort of impulses traditionally identified as religious get necessarily channeled into scientific and technological visions of superiority, culminating in the act of a god performed by a human: creating another life form.

In less highfalutin language, I think the myth of AI is tied up in our identity as bringers of progress, continuing evolution by mindful activity maybe, as Kurzweil claims. Many people who endorse (implicitly or explicitly) the idea of progress as techno-scientific advance see the emergence of machine intelligence as a kind of inevitability because it represents the final creative act in a fully matured humanistic science. So it sort of has to be the case that we'll supersede our own biological selves with the act of technoscientific creation in an AI.

That all lurks in the background, I think, when people get hung up on seeing the future as populated by super smart machines. 

A final thought (on the simpler side of things) is that people tend to confuse narrow successes on tasks like games with progress toward general intelligence, so that it seems like true AI is only a matter of time. Once the distinction between narrow and general AI is made clearer, it’s much easier to see hype as hype.

GW: I got the impression (e.g. from something from a while back being referred to as 'recent') that writing the book took quite some time. Correct? If so, what made it hard to write?

EL: It took a while. Part of the issue was that I was actually working as an engineer for the early parts, when I had first started thinking of writing the book. Later I was unsure exactly how large of a circle to draw around the project, in other words whether it should be an argument only or also a larger or big picture view of science and tech. These sorts of questions kept me occupied for several years. I think around 2014 I had decided to write it. So that gives you an idea of how much procrastination played a role as well. 

GW: Do you think abduction is doable with digital computers?

EL: I'm not sure, frankly. We do have some Bayesian models of abduction, but they are what I call "abduction in name only," as they typically restrict the selection to two choices, which really isn't abduction at all. Abduction is hard not because of the form of the logic per se, but because the selection of a cause or plausible factor given an observation is effectively unbounded. There's no end to the "et cetera" problem in actual dynamic or commonsense environments. So in this more realistic sense, I think it's unlikely that we're going to crack that problem any time soon. In fact, the more you understand (the more deeply you understand) the problem, at least for me, the more mysterious it becomes, and the less inclined I am to talk about or entertain "around the corner" solutions. That said, the future of the field is of course a scientific unknown. Some Einstein could come along and illuminate the problem in a way that no one else has yet seen. That's possible. But my hunch is that we don't and won't have any real implementation of bona fide abduction for a very long time, if at all.

GW: So, as long as we cannot solve abduction, what do you see as possible for the current wave of deduction/induction-based AI? Where do you see the limit of how far we can go with the current approach? When should business leaders become very cautious about a proposal being presented to them?

EL: I think we have much room for improvement with current AI, actually. The primary flaw I see is at the level of design, where engineers try to optimize accuracy of subsystems (this is especially true of NLP) and end up with fragile systems, because subsystem error is simply a fact of life in real-world AI. So handling errors — often by letting them get "pushed through" to where they can be handled more effectively — is one area where I think we'd get more robust and seemingly intelligent systems. Another observation, related to what's just been said, is that the "superintelligence" types in AI tend to see AI as what I call a "black box intelligence." This is bad thinking, in my view, and perpetuates fragile designs that actually limit the potential of future systems. Actually, "smarts" in AI is in my view a misnomer, a kind of conceptual confusion; the intelligence of a technology like AI lies in the interface between the machine and the human end user. If that interface is correctly designed, the system will seem smart and, more importantly, it will help solve interesting and useful problems for the end user. Note how none of this has to do with buying into the Bostrom-Kurzweil line about a coming "superintelligence," which as an AI scientist myself (technically I'm an NLP engineer — but, alas, the buzzwords are everywhere), I see as just an egregious example of black box intelligence thinking. Bad design thinking, in other words, which actually limits the potential of the field for real-world successful applications.

GW: What are you yourself currently working on?

EL: I'm currently building a text extraction system for a corporate client. It generates a large knowledge graph in RDF(S) that can then be queried using SPARQL, an SQL-like query language. The prototype is a question answering system for movies found on Wikipedia, but the general technology can be applied to any domain to make structured knowledge graphs from unstructured texts for purposes of query and question answering.
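
(GW: To give a feel for what such a pipeline produces, here is a minimal, hypothetical sketch of my own — not Larson's actual system; the facts and the namespace are invented — of a tiny RDF graph of extracted movie facts being queried with SPARQL, using Python's rdflib library.)

```python
# Hypothetical sketch: a tiny RDF knowledge graph of extracted movie facts,
# queried with SPARQL via Python's rdflib. Not Larson's actual system.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # invented namespace for illustration

g = Graph()

# Facts a text-extraction step might have pulled from a Wikipedia article.
g.add((EX.Alien, RDF.type, EX.Movie))
g.add((EX.Alien, RDFS.label, Literal("Alien")))
g.add((EX.Alien, EX.directedBy, EX.RidleyScott))
g.add((EX.RidleyScott, RDFS.label, Literal("Ridley Scott")))

# A question like "Who directed Alien?" becomes a SPARQL query over the graph.
query = """
SELECT ?directorName WHERE {
    ?movie rdfs:label "Alien" ;
           ex:directedBy ?director .
    ?director rdfs:label ?directorName .
}
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.directorName)  # -> Ridley Scott
```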

Closing/Review

GW: Let me share my conclusion with you, and a point of critique. I think this is an important book and it would be good if many people read it. C.S. Peirce's abduction is really an insightful perspective to use, and the example of Winograd sentences, for instance, is also enlightening. Both provide clarity in a complex field. Compared to the classic What Computers (Still) Can't Do, I hope it can be more effective: apart from your book being up to date with technological developments, Dreyfus' background in philosophy created a barrier your book does not have.

Which brings me to a main point of critique: you do not pay enough respect (by far) to the other messengers that have gone before you, in particular Dreyfus. As I read your book, many of your arguments are not really different from those of Dreyfus. And when you do pay attention to him, you seem to simply follow the ad hominem attack he was facing. His writing in What Computers (Still) Can't Do is not "bombastic" at all; it is about as sharp as your own. And Dreyfus never said a computer would not beat a grand champion, quite the opposite. Dreyfus even mentions Peirce in passing:

Yet Winston’s program works only if the “student” is saved the trouble of what Charles Sanders Peirce called abduction, by being “told” a set of context-free features and relations—in this case a list of possible [it. GW] […] relationships”.

Hubert Dreyfus, What Computers (Still) Can't Do — A Critique of Artificial Reason (1972/1979/1992), p. 21, 1992 MIT edition

By 1992 he had also already laid out the issue with neural networks (and thus other forms of induction-based (hidden) rules programming), e.g.:

But neural networks raise deep […] questions. It seems that they undermine the fundamental […] assumption that one must have […] a theory of a domain in order to behave intelligently in that domain

When neural networks became fashionable, traditional AI researchers assumed that the hidden nodes in a trained net would detect and learn the relevant features, relieving the programmer of the need to discover them by trial and error

Hubert Dreyfus, What Computers (Still) Can't Do — A Critique of Artificial Reason (1992), pp. xxxiii-xxxiv
The Second Edition of Dreyfus’ book (1979)

(I’ve removed the ‘philosophical’ terminology from that first sentence.) What this says is what you also say when you discuss how people (wrongly) think that we can do ‘without hypothesis’, and that induction will solve abduction.

I find your use of C.S. Peirce's framework extremely effective: it is much easier to follow than Dreyfus' deep philosophical roots, which can be hard for many to wade through. So while you might (rightly) think Peirce deserves credit, I think Dreyfus deserves credit too, and not just the 'bombastic' adjective or the charge that he made a prediction (that he did not make).

I think it is important to look at why these people (who were as right as you are) did have such a hard time in getting heard: it goes against some deep-seated cultural assumptions (a.k.a. paradigms). But then again, that would probably distract from the argument you are presenting, which needs to be heard first. So I understand the choice.

EL: I did in fact distance myself from Dreyfus’ critique, which was very good as you rightly pointed out, because you get this kind of argument from the myth-makers:

“That’s just another Dreyfus critique…”

I wanted to create at least the impression that I was offering not a philosophical analysis but an actual critique by a computer scientist, for computer scientists (and everyone else).
However, I may have overdone it with the tactic of seeking to distance myself from Dreyfus! Point well taken.
I have not been attacked too vehemently by the superintelligence crowd, although there was a review in a tech zine (I forget which one now) that had some sharp criticisms, and I got the impression that the reviewer did not like my conclusions but couldn't really argue with them on the merits.
I’d be interested to see what someone like Sam Harris or Nick Bostrom would say.

GW: Thank you for your time and effort on this exchange.

Everybody: The book The Myth of Artificial Intelligence by Erik J. Larson, published by The Belknap Press of Harvard University Press, is worth your time and energy.

References

Some earlier articles I wrote on the subject are referenced here: two about AI on digital computers and their (fundamental) fragility/brittleness, and one about the hope that quantum computing will save the day (for AI and other goals that run into exponential scaling issues when using digital computers). Those articles also explain the fundamentals, so you might use them as an introduction.
