AI-generated podcast: the AI-slopcast

We introduce a new term: "AI-slopcast". This is a podcast that is created by Generative AI and — surprise! — is AI-slop. The victim: one of my own posts.

AI has invented a new language, and added sex to a dull office context

It turns out that AI has created a whole new language. Humans do not speak it, and they may even mistake it for talk about sex. But luckily Generative AI is able to translate it to something humans can understand (and where the sex doesn't show up).

Generative AI ‘reasoning models’ don’t reason, even if it seems they do

'Reasoning models' such as OpenAI's o3 have become well-known members of the Generative AI family. But look inside and, while they add a certain depth, at the same time they add nothing at all. Not 'reasoning' anyway. Just another 'level of indirection' when approximating. Sometimes powerful. Always costly.

Let’s call GPT and Friends: ‘Wide AI’ (and not ‘AGI’)

OpenAI's o3 has done very well on the ARC-AGI-PUB benchmark. Sam Altman has also claimed OpenAI is confident that it can build Artificial General Intelligence (AGI). But that may be based on confusions around 'learning'. On the difference between narrow, general and (introducing) 'wide' AI.

Google’s ‘Willow’ quantum computer: impressive science and misleading marketing

Google has announced 'Willow', a quantum computer that can calculate so fast it would take a supercomputer 10 septillion (a 1 with 25 zeros) years to do the same. But while the science is real and cool, the message is misleading. An explainer for non-physicists.

Generative AI doesn’t copy art, it ‘clones’ the artisans — cheaply

The early machines at the beginning of the Industrial Revolution produced 'cheap' (in both meanings) products, and it was the introduction of that 'cheap' category that was actually disruptive. In the same way, where 'cheap' is acceptable (and no: that isn't coding), GenAI may disrupt today. But there is a difference. Early machines were separate inventions creating a comparable product. GenAI is trained on the output of humans; their skill is 'cloned', and it is this 'cloned skill' that produces the 'comparable product'. GenAI is not 'copying art', it is 'cloning the artisan'. And our intellectual rights haven't yet caught up.

When ChatGPT summarises, it actually does nothing of the kind

One of the use cases I thought it was reasonable to expect from ChatGPT and Friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn't summarising at all, it only looks like it. What it does is something else, and that something else only becomes summarising in very specific circumstances.

Microsoft lays a limitation of ChatGPT and friends bare

Microsoft researchers published a very informative paper on their pretty smart way to let GenAI do 'bad' things (i.e. 'jailbreaking'). They actually set two aspects of the fundamental operation of these models against each other.

The Department of “Engineering The Hell Out Of AI”

ChatGPT has acquired the functionality of recognising an arithmetic question and reacting to it by creating Python code on the fly, executing it, and using the result to generate the response. Gemini contains an interesting trick Google plays to improve benchmark results. These (inspired) engineering tricks lead to an interesting conclusion about the state of LLMs.
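The "write code, run it, use the result" pattern described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`model_emits_code` stands in for the LLM), not ChatGPT's actual implementation: instead of predicting the digits of an answer token by token, the model emits a small Python program, the host executes it, and the computed result goes into the reply.

```python
def model_emits_code(question: str) -> str:
    # Stand-in for the LLM: for an arithmetic question it emits a snippet
    # of Python rather than guessing the digits of the answer directly.
    return "result = 123456789 * 987654321"

def answer_arithmetic(question: str) -> int:
    code = model_emits_code(question)
    scope: dict = {}
    exec(code, {}, scope)      # the host runtime executes the generated snippet
    return scope["result"]     # the exact result is handed back to the model

print(answer_arithmetic("What is 123456789 × 987654321?"))
```

The point of the trick is visible here: the arithmetic is done by the Python interpreter, which is exact, while the model only has to produce plausible code, which plays to its strengths.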

Memorisation: the deep problem of Midjourney, ChatGPT, and friends

If we ask GPT to get us "that poem that compares the loved one to a summer's day", we want it to produce the actual Shakespeare Sonnet 18, not some confabulation. And it does. It has memorised this part of the training data. This is both sought-after and problematic, and it sets a fundamental limit on the reliability of these models.