Eyes that glaze over. Eyes like saucers. Eyes that narrow.

Where (new) technology is concerned, I observe that there are three types of people involved. Take the current Generative-AI fever. Many boardrooms and organisations of state are thinking about what to do with, or about, this arrival of what clearly is a breakthrough. But boardroom members generally have strengths other than understanding technology. Heck, even for specialisms closer to their heart they will employ specialists close by, my favourite example being the ‘chief legal counsel’. So, when the actual details of the technology become important, we might expect quite a few of them to have eyes that ‘glaze over’. It is natural. We all have areas where our eyes glaze over when someone else goes in deep. When my kid explains their physics thesis to me, I glaze over, and I was once trained as a physicist, for crying out loud.

TL;DR

When confronted with something that has deep ‘technical’ details — this might be technology such as Generative AI (ChatGPT and friends), but it also applies to the arcane details of the law, of accounting, or of the medical profession — one can recognise three types of people:
• Those whose eyes ‘glaze over’
• Those whose eyes become like saucers as they see a wild future
• Those whose eyes narrow to slits as they smell nonsense
Given that the decision makers in such ‘technical’ domains are generally of the first category, what kind of advisors should they employ? Those with eyes like saucers? Or those with eyes that narrow?

But the decision makers still must make decisions. And they make those decisions based partly on what they expect, partly on the risks they see, partly on what they would like to see happen, and so on. So, when something new like Generative AI comes along (or blockchain before that, or every other silver bullet that has been sold as a revolutionary, simple solution to management), they need to get informed. And they may not have Uncle Edsger’s statement about quacks — Nowhere in computing the quack density is as high as in the broad area of design methodology; needless to say, all the miracle ointments are at the very least “proven” or “tested” — handy.

In a fever-like situation like we have now with Generative AI, the world is awash with people whose understanding is not deep enough to have even an inkling of the limitations. These people expect unlimited power and almost ‘magical’ results from what has appeared on the scene. These are the people whose eyes become like saucers. They are convinced the whole world is going to change. It happened during the dot-com boom of the 1990s, when the ‘eyes like saucers’ crowd told us the whole economy would change. It didn’t, so the whole thing ended in a crash. It happened when people in the 2000s thought that — with some clever spreadsheets using statistics they did not understand deeply enough — they had done away with uncertainty in investments. What followed was the worst economic crisis since the 1930s. (Aside: I can advise people to read the prize-winning This Time Is Different: Eight Centuries of Financial Folly by Reinhart and Rogoff, 2008.) In Generative AI, these are the people who currently create images like this:

From Azeem Azhar’s ‘Three months of AI in six charts’

Let’s pick this one apart, shall we?

First, they’re predicting, and there is an Arab proverb: “He who predicts the future lies, even if he tells the truth”. But not only that. This is the first graph of a story that purports to be about the previous three months (it is, it is about the previous three months of predictions, but who is going to notice that?). I can’t read the original article the prediction comes from; it is behind a paywall and I’m not going to pay for junk like this.

And it is junk. Because, second, they’re comparing apples and oranges. If you want to compare, you should compare software engineer salaries not with bandwidth, but with network engineer salaries or the salaries of their offshoot: security engineers (good luck getting those for $0.01 per year soon). Yes, $0.01. Small detail from the graph: the cost of CPU has apparently reached $0.01/MFLOP in 2019, and shortly after 2030 software engineers will make $0.01 per year. It all looks very convincing, of course. Notice the specific form of the dashed prediction in that respect.
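
To make concrete just how empty such a dashed-line ‘prediction’ is, here is a back-of-the-envelope sketch in Python. The numbers are made up for illustration and are not taken from the chart; the point is that once you assume a straight line on a log scale, you can ‘predict’ any number you like.

```python
# Hypothetical illustration of log-scale extrapolation; the numbers are made up
# and are NOT taken from the chart. Assume a quantity drops by a factor of 10
# every two years (a straight dashed line on a log plot) and see where it ends.
value = 100_000.0  # assumed starting value, purely for illustration
year = 2023

while value > 0.01:
    value /= 10
    year += 2

print(f"The 'prediction' reaches $0.01 around {year}")
# The arithmetic is correct; the conclusion is still nonsense, because the
# straight line is an assumption, not evidence.
```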

So, frankly, this is nonsensical fantasy presented as fact.

Enter the first of our categories: the people who read this, do not have a background in any of this — they might have a marketing background, for instance — and whose eyes become large (eyes like saucers). They are somewhat convinced by what is shown, because it is shown in a convincing manner and they see many examples like this (because — in the absence of real understanding — they all copy one another).

There is a second group. These people see the wild claims and their eyes narrow. They become suspicious because they understand (some of) the technology, they’ve simply seen too many of these hypes come and go, and they recognise the risk that much of this is — again — hype.

Sometimes, the people with eyes like saucers actually run the company, in which case I would consider such a company a rather risky investment. Millions have been wasted in organisations run by such people. Some have even gone under, taking the investors’ money with them. But most run-of-the-mill organisations, public or private, are run by people who have no idea what a transformer does and how it differs from long short-term memory (LSTM).

And the simple question is: who do you hope advises these decision makers? The people whose eyes become like saucers or the people whose eyes narrow?

P.S.

There is something that we can expect from these models with reasonable certainty. It is a use that can best be illustrated by an example. Transformer models are much better at translating natural language than what came before. Transformers were invented in 2016-2017, based on ideas already published in 1992. Google improved on these ideas (which enabled massive parallelisation and use independently of some other techniques) and started to use transformers. What we all noticed in those years was an increase in the quality of Google Translate (from ‘piss poor’ to ‘pretty decent’), first from implementing ‘attention’ and then ‘transformers’.

So, does that mean translators will go out of a job? Possibly; we do not have typists and human calculators anymore, they were made redundant by (personal) computers. But more likely is that that same translator will see a 1-2 orders of magnitude increase in productivity. Let the LLM produce the translation, the translator will only check the result. Does that mean ‘fewer translators’? Again, possibly, but if history tells us anything, it is that this is not the most likely outcome: we are simply going to translate much more, because it will become very affordable. It might even lead to more translators, not fewer. We lost the typist, we did not lose the policy advisor; we might even have more of those today. The same holds for summarising. And there are more useful applications where we can see productivity gains. Will we lose software engineers? Not a chance. Will we see overall productivity gains? Maybe. The jury is still out.
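
As an illustration of that ‘the LLM drafts, the human checks’ division of labour, here is a minimal sketch. It assumes the Hugging Face transformers library and the small t5-small model as a stand-in; the specific model and language pair are placeholder choices, the point is the workflow.

```python
# Minimal sketch of the 'LLM drafts, human reviews' translation workflow.
# Assumes the Hugging Face `transformers` library and the small `t5-small`
# model; any translation-capable model would serve the same purpose.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")

sentences = [
    "The board needs advisors whose eyes narrow, not widen.",
    "Prediction is hard, especially about the future.",
]

# The productivity gain sits here: the human no longer types the translation,
# they only review and correct the machine-produced draft.
for sentence in sentences:
    draft = translator(sentence)[0]["translation_text"]
    print(f"SOURCE: {sentence}\nDRAFT FOR REVIEW: {draft}\n")
```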

Note: the success of transformer-based models has been 30(!) years in the making, even if people only started to notice it last year, when ChatGPT was made publicly available.

You know what is funny?

There is an amusing analogy between “He who predicts the future lies, even if he tells the truth” and how Generative AI like ChatGPT works. Generative AI models are prediction mechanisms: they are based on predicting the best next word. When that goes awry and the model produces falsehoods, we call that ‘the model hallucinates [something that is not true]’. But the real situation is: everything these models do is ‘hallucination’; they are liars, even when they are telling the truth.
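
For the curious, this is what ‘predicting the best next word’ looks like mechanically. The sketch below assumes the Hugging Face transformers library and PyTorch, with the small GPT-2 model standing in for the far larger models behind ChatGPT; the mechanism is the same.

```python
# Minimal sketch of next-word prediction, the mechanism behind generative LLMs.
# Assumes the Hugging Face `transformers` library and PyTorch; GPT-2 stands in
# here for the much larger models behind ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

# The model does exactly the same thing whether the continuation happens to be
# true or false: it ranks candidate next tokens by probability, nothing more.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
```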
