The “ChatGPT and Friends” Collection

This page contains an annotated list of my publications/explanations about the 2023 surge of attention (hype is often not the wrong word) that society paid to Artificial Intelligence, largely triggered by OpenAI's public release of ChatGPT in the fall of 2022.

Here we go:



Gerben Wierda on ChatGPT, Altman, and the Future

Hi All,

The world is atwitter right now with the conflict between Sam Altman and the board of OpenAI. Gary Marcus is posting about it almost daily, and considerable ink is being spilled over it, for good reason. On Colligo, we too are “following the story.” It’s my honor to publish a wonderful guest post from Gerben Wierda, who can put the Altman shocker in a broader and helpful context.


Keep reading



State of the Art Gemini, GPT and friends take a shot at learning

Google’s Gemini has arrived. Google has produced videos, a blog, a technical background paper, and more. According to Google: “Gemini surpasses state-of-the-art performance on a range of benchmarks including text and coding.”

But hidden in the grand words lies another, generally overlooked, aspect of Large Language Models that is important to understand.

And…

Keep reading




Generative AI doesn’t copy art, it ‘clones’ the artisans — cheaply

The early machines at the beginning of the Industrial Revolution produced ‘cheap’ (in both senses of the word) products, and it was the introduction of that ‘cheap’ category that was actually disruptive. In the same way, GenAI may be disruptive today where ‘cheap’ is acceptable (and no: that isn’t coding).

But there is a difference. Early machines were separate…

Keep reading

The Department of “Engineering The Hell Out Of AI”

ChatGPT has acquired the ability to recognise an arithmetic question and react to it by creating Python code on the fly, executing it, and using the result to generate the response. Gemini, for its part, comes with an interesting trick that Google plays to improve benchmark results.

These (inspired) engineering tricks lead to an interesting conclusion about the state of LLMs.
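
To make that routing pattern concrete, here is a minimal, hypothetical sketch of the “recognise arithmetic, generate code, execute it, use the result” idea. It is not OpenAI's actual implementation: the `looks_like_arithmetic` heuristic and `evaluate_expression` helper are stand-ins for the model's own recognition and code generation, and a restricted AST evaluator replaces the sandboxed execution of model-generated Python.

```python
# Sketch only: illustrates routing an arithmetic question to a "code path"
# instead of letting the language model guess the number token by token.
import ast
import operator

# Whitelisted operators for safely evaluating the "generated" expression.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def looks_like_arithmetic(question: str) -> bool:
    """Crude stand-in for the model recognising an arithmetic question."""
    return any(ch in question for ch in "+-*/") and any(ch.isdigit() for ch in question)

def evaluate_expression(expr: str):
    """Evaluate a simple arithmetic expression via the AST (safer than exec/eval)."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expr, mode="eval"))

def answer(question: str, expression: str) -> str:
    """Route arithmetic to the code path; everything else stays with the plain model."""
    if looks_like_arithmetic(question):
        return f"The answer is {evaluate_expression(expression)}."
    return "(plain language-model answer would go here)"

print(answer("What is 12345 * 6789?", "12345 * 6789"))  # The answer is 83810205.
```

The point of the sketch is the division of labour: the exact arithmetic is done by executed code, not by the network's next-token prediction, which is precisely the kind of engineering around the model that the post discusses.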

Keep reading


When ChatGPT summarises, it actually does nothing of the kind.

One of the use cases I thought it was reasonable to expect from ChatGPT and Friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn’t summarising at all; it only looks like it. What it does is something else, and that something else only becomes summarising in very specific circumstances.

Keep reading



Will Sam Altman’s $7 Trillion Plan Rescue AI?

Sam Altman wants $7 trillion for AI chip manufacturing. Some call it an audacious ‘moonshot’. Grady Booch has remarked that such scaling requirements show that your architecture is wrong. Can we already say something about how far we would have to scale current approaches to get to computers as intelligent as humans, as Sam intends?…

Keep reading

GPT and Friends bamboozle us big time

After watching my talk that explains GPT in a non-technical way, someone asked GPT to write critically about its own lack of understanding. The result is illustrative, and useful. “Seeing is believing”, true, but “believing is seeing” as well.

Keep reading

On the Psychology of Architecture and the Architecture of Psychology

[Sticky] About the role ‘convictions’ play in human intelligence, starting from the practical situations ‘advisors’ — such as IT advisors — find themselves in.

Advisors need (a) to know what they are talking about and (b) to be able to convince others. For architects, the first part is called ‘architecture’ and the second part could…

Keep reading



Some earlier stories related to the rise of large generative models:

Before the advent of ChatGPT and friends (Generative AI, the very large neural networks made possible by the transformer architecture), there was already the occasional fever about AI, such as the one about ‘cognitive computing’ (neural networks in general), written about here.