AI-generated podcast: the AI-slopcast

We introduce a new term: "AI-slopcast". This is a podcast created by Generative AI that — surprise! — is AI slop. The victim: one of my own posts.

AI has invented a new language, and added sex to a dull office context

It turns out that AI has created a whole new language. Humans do not speak it, and they may even mistake it for talk about sex. But luckily Generative AI is able to translate it to something humans can understand (and where the sex doesn't show up).

Generative AI ‘reasoning models’ don’t reason, even if it seems they do

'Reasoning models' such as OpenAI's o3 have become well-known members of the Generative AI family. But look inside, and while they add a certain depth, at the same time they add nothing at all. Not 'reasoning', anyway. Just another 'level of indirection' when approximating. Sometimes powerful. Always costly.

Generative AI doesn’t copy art, it ‘clones’ the artisans — cheaply

The early machines at the beginning of the Industrial Revolution produced 'cheap' (in both meanings) products, and it was the introduction of that 'cheap' category that was actually disruptive. In the same way, where 'cheap' is acceptable (and no: that isn't coding), GenAI may disrupt today. But there is a difference. Early machines were separate inventions creating a comparable product. GenAI is trained on the output of humans: their skill is 'cloned', and it is this 'cloned skill' that produces the 'comparable product'. GenAI is not 'copying art', it is 'cloning the artisan'. And our intellectual property rights haven't yet caught up.

Ain’t No Lie — The unsolvable(?) prejudice problem in ChatGPT and friends

Thanks to Gary Marcus, I found out about this research paper. And boy, is this both a clear illustration of a fundamental flaw at the heart of Generative AI and an uncovering of a doubly problematic, potentially unsolvable problem: fine-tuning of LLMs may often only hide harmful behaviour, not remove it.

Will Sam Altman’s $7 Trillion Plan Rescue AI?

Sam Altman wants $7 trillion for AI chip manufacturing. Some call it an audacious 'moonshot'. Grady Booch has remarked that such scaling requirements show that your architecture is wrong. Can we already say something about how far we have to scale current approaches to get to computers as intelligent as humans — as Sam intends? Yes, we can.

Memorisation: the deep problem of Midjourney, ChatGPT, and friends

If we ask GPT to get us "that poem that compares the loved one to a summer's day", we want it to produce the actual Shakespeare Sonnet 18, not some confabulation. And it does. It has memorised this part of the training data. This is both sought after and problematic, and it sets a fundamental limit on the reliability of these models.

The Truth about ChatGPT and Friends — understand what it really does and what that means

On 10 October I gave an (enthusiastically received) explainer talk at the EABPM Conference Europe 2023, making clear what ChatGPT and friends actually do — addressing the technology in a non-technical but correct way — and what that means. That presentation fills the gap between the tech and the results. At the end you will understand what these models really do in a practical sense (so not the technical how) when they handle language; see not only how impressive they are, but also how the errors come about (with a practical example); and learn what that means for what we may expect from this technology in the future.