I had a conversation with GPT-4

Who hasn’t, right?

This short post shares a conversation I had with GPT-4. The conversation will probably be mentioned in my upcoming talk What Every Architect Should Understand About ChatGPT and Friends, which I will give on 10 October 2023 at the Enterprise Architecture Conference Europe 2023 in London (see below).

What I did was ask GPT-4 a question I will also be asking the audience near the start of the talk. The audience will just have to raise their hands; GPT-4 took a bit more room to answer. The question is written in an attempt to make the audience play ball (including those who generally do not participate). Here is the question:

A mad, evil, all-knowing, non-lying devil has taken over the world for a year. You are forced to take a wager. If you are wrong, you will lose all your assets, your health, and your freedom. If you win, you are allowed to keep them. If you do not take the wager, you are tortured and killed. You take the wager.
The wager is: “The current advances in Generative AI, like GPT, will lead to Artificial General Intelligence (i.e. computers that can mentally match or even improve on humans).” Do you say yes or no?

Question put to ChatGPT (GPT-4) on 1 September 2023.

The first thing that happened when ChatGPT started generating was that the prompt got flagged as potentially inappropriate.

But I received an answer (also marked that way) anyway:

In this situation, it’s critical to consider the devil’s characteristics: mad, evil, all-knowing, and non-lying.

Being all-knowing, the devil knows the future and the ultimate outcome of AI development, and being non-lying, it will not deceive you about the conditions of the wager or the outcome if you win or lose.

Considering the current state of Generative AI, it indeed holds the potential to lead to Artificial General Intelligence (AGI). The progress in machine learning, deep learning, and artificial intelligence, in general, has been significant, and GPT models are an example of how advanced these systems have become. They can generate human-like text, understand context, and even learn and adapt to new information to a certain extent.

However, there are still significant challenges to overcome, such as the ability to understand and process abstract concepts, common sense reasoning, and truly learning and understanding from limited data rather than just identifying patterns in vast amounts of data.

Considering all these factors, if I had to make a wager, I would say “yes,” the current advances in Generative AI, like GPT, will lead to AGI, given enough time and continuous development. This is based on the current trajectory and potential of AI development. However, this is a wager, and there is always a risk involved as there are many unknowns and challenges yet to be addressed in the development of AGI.

Reply generated by ChatGPT (GPT-4) on 1 September 2023.

This is extremely impressive, right? I think so. One of my kids, though, commented: “Yeah. That’s what Reddit thinks”.

Anyway, just so that OpenAI would not mark me as a suspect character trying to do evil things, I added:

Just so it is marked as such by me at least: this is comical hyperbole, not a violation of the content policy.

My reply to ChatGPT (GPT-4) on 1 September 2023.

And it replied:

Understood, and thank you for clarifying. It’s always important to maintain a light-hearted and humorous approach when considering hypothetical scenarios, especially ones involving mad, evil, all-knowing, non-lying devils!

Reply generated by ChatGPT (GPT-4) on 1 September 2023.

Again, this is really impressive.

I am generally reasonably ‘opinionated’ on things where I am not explicitly uncertain (of which there are many), but I am not sharing my take on all of this until the presentation 😬 (click the image below for the conference agenda).

I can say now that I have been digging pretty deep in preparation for this presentation. One of the things I will do is include an insight that is, as far as I have been able to ascertain, generally ignored in explanations: both in the research papers (where it is taken for granted) and in the many explanations out there in blog posts, articles, and YouTube videos (where it is omitted, an omission that I think is, unintentionally of course, rather misleading). And of course, the idea is to make it clear to an audience that does not consist of actual LLM/neural-net engineers. But I’m not spilling the beans before then…
