After watching my talk that explains GPT in a non-technical way, someone asked GPT to write critically about its own lack of understanding. The result is illustrative and useful. "Seeing is believing", true, but "believing is seeing" as well.
The hidden meaning of the errors of ChatGPT (and friends)
We should stop labelling the wrong results of ChatGPT and friends (the 'hallucinations') as 'errors'. Even Sam Altman, CEO of OpenAI, agrees: he has said they are more 'features' than 'bugs'. But why is that? And why should we not call them errors?
The Truth about ChatGPT and Friends — understand what it really does and what that means
On 10 October I gave an (enthusiastically received) explainer talk at the EABPM Conference Europe 2023, making clear what ChatGPT and friends actually do — addressing the technology in a non-technical but correct way — and what that means. That presentation fills the gap between the tech and the results. At the end you will understand what these models really do in a practical sense (so not the technical how) when they handle language, see not only how impressive they are but also how the errors come to be (with a practical example), and learn what we may expect from this technology in the future.
I had a conversation with GPT-4
I had a conversation with ChatGPT. Who hasn't, right? This post shares a short conversation I had with GPT-4. The conversation will probably be mentioned in my upcoming talk "What Every Architect Should Understand About ChatGPT and Friends", which I will give on 10 October 2023 at the Enterprise Architecture Conference Europe 2023 in London.
Eyes that glaze over. Eyes like saucers. Eyes that narrow.
Where (new) technology is concerned, I observe that there are three types of people involved: those whose eyes 'glaze over', those whose eyes become like saucers, and those whose eyes narrow. Who should be the advisor of the decision makers?
The Other Side of Requirements
We tend to think of requirements as 'incoming' elements for our designs. But what we do not always explicitly notice is that our designs also create many 'outgoing' requirements, often in the form of 'constraints' on the 'users' of the products that come from those designs. These requirements lead a hidden existence most of the time, and that invisibility sometimes makes our decision making less efficient. What if we made our outgoing requirements explicit? And how could we manage this? To explore that, I created a small demo solution that manages solution designs in a library, linking the designs and the requirements they pass to each other, built as a plugin for Confluence.
Where are GPT and friends going?
What can we say about where the generative AI innovation is going? Three useful links to articles that offer interesting observations and insights.
Definition of Ready, Done? What about a ‘Definition of Broken’?
As the IT world has been largely taken over by Agile methods, the concepts of Definition of Ready and Definition of Done have become mainstream. While these concepts were introduced at the story/sprint level in Scrum, they have taken on a wider role and are generally used at all levels these days: not just on stories, but also on features and epics, the larger items in the agile tree. There is, however, a new concept that may be very helpful at those higher levels: a Definition of Broken.
The lack of use cases for blockchain should teach organisations a valuable lesson about handling hypes
If someone tries to get you to invest in some shiny new technology — as with blockchain 5-8 years ago — beware. How do you judge such proposals? A realistic use case is key.
Cicero and chatGPT — signs of AI progress?
Cicero, an AI, performed in the top 10% of human players in the game Diplomacy, which revolves around negotiating with others. ChatGPT is making the rounds with its impressive output. Are these AI breakthroughs, or at least signs of real progress? Or signs of trouble to come?