What makes Ilya Sutskever believe that superhuman AI is a natural extension of Large Language Models?

I came across a two-minute video in which Ilya Sutskever — OpenAI's chief scientist — explains why he thinks current 'token-prediction' large language models can become superhuman intelligences. How? Just ask them to act like one.

State of the Art Gemini, GPT and friends take a shot at learning

Google’s Gemini has arrived, accompanied by videos, a blog post, a technical background paper, and more. According to Google: "Gemini surpasses state-of-the-art performance on a range of benchmarks including text and coding." But hidden behind the grand words lies another, generally overlooked, aspect of Large Language Models that is important to understand. And when we use that aspect to try to trip up GPT, we see something peculiar. Shenanigans, shenanigans.

Artificial General Intelligence is Nigh! Rejoice! Be very afraid!

Should we be hopeful or scared about imminent machines that are as intelligent as humans, or more? Surprisingly, this debate is even older than computers, and from the mathematician Ada Lovelace comes an interesting observation that is as valid now as it was when she made it in 1843.

GPT and Friends bamboozle us big time

After watching my talk that explains GPT in a non-technical way, someone asked GPT to write critically about its own lack of understanding. The result is illustrative, and useful. "Seeing is believing", true, but "believing is seeing" as well.

The hidden meaning of the errors of ChatGPT (and friends)

We should stop labelling the wrong results of ChatGPT and friends (the 'hallucinations') as 'errors'. Even Sam Altman — CEO of OpenAI — agrees: they are more 'features' than 'bugs', he has said. But why is that? And why should we not call them errors?

The Truth about ChatGPT and Friends — understand what it really does and what that means

On 10 October I gave an (enthusiastically received) explainer talk at the EABPM Conference Europe 2023, making clear what ChatGPT and friends actually do — addressing the technology in a non-technical but correct way — and what that means. The presentation fills the gap between the tech and the results. At the end you will understand what these models really do in a practical sense (so not the technical how) when they handle language, see not only how impressive they are but also how the errors come about (with a practical example), and learn what we may expect from this technology in the future.

I had a conversation with GPT-4

I had a conversation with ChatGPT. Who hasn't, right? This post simply shares a short conversation I had with GPT-4. The conversation will probably be mentioned in my upcoming talk "What Every Architect Should Understand About ChatGPT and Friends", which I will hold on 10 October 2023 at the Enterprise Architecture Conference Europe 2023 in London.

Eyes that glaze over. Eyes like saucers. Eyes that narrow.

Where (new) technology is concerned, I observe that there are three types of people involved: those whose eyes 'glaze over', those whose eyes become like saucers, and those whose eyes narrow. Who should advise the decision makers?

Where are GPT and friends going?

What can we estimate about where Generative AI innovation is going? Three useful links to articles with interesting observations and insights.

Cicero and ChatGPT — signs of AI progress?

Cicero, an AI, performed in the top 10% of human players in the game Diplomacy, which revolves around negotiating with others. ChatGPT is making the rounds with its impressive output. Are these AI breakthroughs, or at least signs of real progress? Or signs of trouble to come?