AI-slopcast: an AI-generated podcast

We introduce a new term: "AI-slopcast". This is a podcast that is created by Generative AI and — surprise! — is AI-slop. The victim: one of my own posts.

Let’s call GPT and Friends: ‘Wide AI’ (and not ‘AGI’)

OpenAI's o3 has done very well on the ARC-AGI-PUB benchmark. Sam Altman has also claimed OpenAI is confident it can build Artificial General Intelligence (AGI). But that may be based on confusion about 'learning'. On the difference between narrow, general, and (introducing) 'wide' AI.

What makes Ilya Sutskever believe that superhuman AI is a natural extension of Large Language Models?

I came across a two-minute video in which Ilya Sutskever — OpenAI's chief scientist — explains why he thinks current 'token-prediction' large language models will be able to become superhuman intelligences. How? Just ask them to act like one.

Artificial General Intelligence is Nigh! Rejoice! Be very afraid!

Should we be hopeful or scared about imminent machines that are as intelligent as humans, or more so? Surprisingly, this debate is even older than computers, and from the mathematician Ada Lovelace comes an interesting observation that is as valid now as it was when she made it in 1843.

GPT and Friends bamboozle us big time

After watching my talk that explains GPT in a non-technical way, someone asked GPT to write critically about its own lack of understanding. The result is illustrative, and useful. "Seeing is believing", true, but "believing is seeing" as well.

The hidden meaning of the errors of ChatGPT (and friends)

We should stop labelling the wrong results of ChatGPT and friends (the 'hallucinations') as 'errors'. Even Sam Altman, CEO of OpenAI, agrees: they are more 'features' than 'bugs', he has said. But why is that? And why should we not call them errors?

The lack of use cases for blockchain should teach organisations a valuable lesson about handling hype

If someone tries to get you to invest in some shiny new technology — like blockchain 5-8 years ago — beware. How do you judge these proposals? A realistic use case is key.

Cicero and ChatGPT — signs of AI progress?

Cicero, an AI, placed in the top 10% of human players in the game Diplomacy, which revolves around negotiating with others. ChatGPT is making the rounds with its impressive output. Are these AI breakthroughs, or at least signs of real progress? Or signs of trouble to come?

It is life, Jim. But not as we know it.

Blake Lemoine, a Google engineer, has claimed the LaMDA language neural net chatbot is sentient, is alive. Nonsense on stilts, according to one critic. A musing about the meaning of 'life'. And abortion. And doubt. And the point Lemoine has but doesn't make.

Are we humans still ‘top dog’ in this brave new world of massive IT?

What is the information revolution doing to us humans? A very condensed journey from the essences of digital technology and human intelligence to the role of talk, trust, and the impact of IT — especially social media — on society. We are the most intelligent species on the planet, but that is a relative measure. Our intelligence has serious weaknesses, some of which the IT revolution is now making painfully visible. We must hope that we're intelligent enough to accept that we're not very intelligent. That may be an even more difficult paradigm shift than Copernicus' or Darwin's.