
ChatGPT corrects itself when you tell it off

The internet has been abuzz with people playing with ChatGPT, a new chat AI thing. Putting biases and unfortunate responses aside, curiosity got the better of me and I started fooling around with it.

One of the more surprising interactions: I told it (assistant? eliza? alexa? mega-clippy?) off for getting something wrong, and it corrected itself.

You are correct, the matrix in my previous response was incorrect.

Like a teenager half-arsing their homework, they only made a proper effort once I told them off.

The conversation

I’m doing Advent of Code again this year, something I start enthusiastically but tend to give up on around day 15, though the early days are always fun. Day 2 starts off talking about the game Rock, Paper, Scissors and asks you to compute a score based on some simple rules.
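(For the curious, the scoring works roughly like this; the sketch below is my own Python paraphrase from memory, not code from the puzzle or from ChatGPT.)

```python
# Rough sketch of the Day 2 scoring, paraphrased from memory:
# you score points for the shape you play plus points for the result.
SHAPE_SCORE = {"Rock": 1, "Paper": 2, "Scissors": 3}
OUTCOME_SCORE = {"Lose": 0, "Draw": 3, "Win": 6}

def round_score(my_shape: str, result: str) -> int:
    return SHAPE_SCORE[my_shape] + OUTCOME_SCORE[result]
```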

Being lazy and tired after work, I was having trouble picturing all the outcomes of the game so I could represent them in my program, so I asked the tool to print a truth table. Not a very good question, I’ll admit: a truth table usually refers to boolean outcomes, not win/lose/draw states, but they ran with it anyway.
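(What I was trying to picture boils down to something like this; again a sketch of my own, not ChatGPT’s output.)

```python
# Each move beats exactly one other move; identical moves draw.
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def outcome(p1: str, p2: str) -> str:
    """Result from player 1's perspective."""
    if p1 == p2:
        return "Draw"
    return "Win" if BEATS[p1] == p2 else "Lose"

# Enumerate all nine combinations, the "truth table" I asked for.
for p1 in BEATS:
    for p2 in BEATS:
        print(f"{p1} vs {p2}: {outcome(p1, p2)}")
```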

At first glance ChatGPT’s output was pretty cool, but I found it hard to read. The row labels weren’t printed, so you couldn’t tell who was player 1 and who was player 2.

So I asked them to print me a matrix.

Nice.

But something was still off: the text talked about the rules of the game, but the matrix looked completely wrong. You can’t tie if both players present different objects.

So I gave them a sharp reply: you are wrong!

To my surprise, they corrected themselves and printed a fixed matrix, even admitting the previous answer was wrong!
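(For reference, a correctly labelled matrix looks something like the output of this sketch, which repeats the outcome helper from earlier so it runs on its own; it’s my own code rather than anything ChatGPT produced.)

```python
MOVES = ["Rock", "Paper", "Scissors"]
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def outcome(p1: str, p2: str) -> str:
    # Result from player 1's perspective: only identical moves draw.
    if p1 == p2:
        return "Draw"
    return "Win" if BEATS[p1] == p2 else "Lose"

# Rows are player 1, columns are player 2 (the labels the first
# response was missing).
corner = "P1 / P2"
print(f"{corner:>10}" + "".join(f"{m:>10}" for m in MOVES))
for p1 in MOVES:
    print(f"{p1:>10}" + "".join(f"{outcome(p1, p2):>10}" for p2 in MOVES))
```

Draws only appear on the diagonal, which is what a correct matrix should show.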

In this case I knew the rules of the game and just wanted something quick to reference while writing code, but it makes me wonder whether accepting the first response from these AI tools is always the best idea.