ELI5: ChatGPT is a Giant Plinko Game

Photo By: sumofus

ChatGPT is like a giant Plinko game.

Plinko is a game where you put a coin in at the top. The coin falls down the board, hitting pins that nudge it on the way down. At the bottom are slots with various points or prizes. Depending on where you put your coin in at the top, you have different chances of which slot it will land in at the bottom. It's not uniformly random: some slots will be more likely than others depending on where you start.
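If you want to see why some slots are more likely than others, here's a tiny Python simulation of a Plinko board (just the analogy, not ChatGPT): the coin bumps left or right at each row of pins, and most coins pile up near where they started.

```python
import random
from collections import Counter

def drop_coin(start_slot, rows=10):
    """Simulate one coin: at each row of pins it bumps left or right."""
    position = start_slot
    for _ in range(rows):
        position += random.choice([-1, 1])
    return position

# Drop 10,000 coins from the same starting slot and count where they land.
landings = Counter(drop_coin(start_slot=0) for _ in range(10_000))
for slot in sorted(landings):
    print(f"slot {slot:+d}: {'#' * (landings[slot] // 100)}")
```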

Now, rather than a coin going in at the top, imagine it's your question to ChatGPT. At the bottom, rather than prizes, imagine it's all possible words. ChatGPT looks at your question and, based on the words you used, determines where to put it in at the top. Then the pins influence your question on the "way down" depending on what the words are. Rather than landing on a single word, ChatGPT outputs a list of the most likely words to come next. It then picks a word from that list and shows it on your screen.
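As a rough sketch of that last step (the words and probabilities below are made up, not from the real model), "picking a word from the most likely words" is just weighted random sampling:

```python
import random

# Hypothetical next-word probabilities for the prompt "The cat sat on the".
# A real model assigns a probability to every word in its vocabulary.
next_word_probs = {
    "mat": 0.55,
    "floor": 0.20,
    "couch": 0.15,
    "moon": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one word, favoring the likelier ones (the "slot" the coin lands in).
chosen = random.choices(words, weights=weights, k=1)[0]
print(chosen)
```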

After a word is generated, ChatGPT makes a new coin (your question, plus the start of its response) and puts it back into the top of the Plinko board to generate the next word. This continues until the model decides the response is complete or it hits a length limit.
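Here's a minimal sketch of that loop, assuming a hypothetical next_word_distribution function standing in for the Plinko board and a made-up "<stop>" word for when the model decides it's done (the real model works on tokens rather than whole words, but the loop has the same shape):

```python
import random

def generate(prompt, next_word_distribution, max_words=50):
    """Repeatedly feed the growing text back in until a stop signal."""
    text = prompt
    for _ in range(max_words):
        probs = next_word_distribution(text)         # one "drop" of the coin
        words, weights = zip(*probs.items())
        word = random.choices(words, weights=weights)[0]
        if word == "<stop>":                          # model signals it's done
            break
        text += " " + word                            # new "coin" = question + response so far
    return text
```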

This is how ChatGPT "remembers" things from your conversation. It's actually resubmitting as much of your chat history as it can handle each time, so that it can generate relevant output.
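A simplified sketch of "as much as it can handle": the real limit is measured in tokens rather than words, and the budget below is made up, but the idea is to keep the newest messages that fit.

```python
def build_prompt(chat_history, new_question, max_words=3000):
    """Keep the newest messages that fit under a rough word budget."""
    kept = [new_question]
    budget = max_words - len(new_question.split())
    for message in reversed(chat_history):   # walk backwards from the newest message
        cost = len(message.split())
        if cost > budget:
            break                            # older messages get dropped
        kept.append(message)
        budget -= cost
    return "\n".join(reversed(kept))
```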

But how does a Plinko board know how to make really human-like answers?

The ChatGPT model (the Plinko board) was trained on around 570 GB of text from the internet. Parts of the text were put in at the top of the Plinko board, and the output was analyzed. Since the trainer knows what the next word actually is, it can adjust the pins until the coin starts to predictably fall where expected. After the initial training, humans interact with the board and give feedback on whether the generated content is good or bad (which can be subjective). The trainer is itself a computer program, but training still takes a long time and is expensive for such a large dataset.
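As a toy illustration of the "adjust the pins until the coin falls where expected" idea: a real model nudges billions of weights with gradient descent, but even just counting which word follows which in the training text captures the core move of learning next-word probabilities from examples.

```python
from collections import Counter, defaultdict

def train(text):
    """Toy 'training': learn how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1     # the known next word adjusts the "pins"
    return counts

model = train("the cat sat on the mat and the cat slept on the couch")
print(model["the"])   # e.g. Counter({'cat': 2, 'mat': 1, 'couch': 1})
```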

Does ChatGPT remember things? Is it sentient?

Since ChatGPT gives very realistic answers, some believe it could be sentient. If you believe this, you'd have to assume that a Plinko board is also sentient. IMO, for a model to be sentient or conscious it would need to be constantly running, have feedback loops, have memory, and be able to modify its internal state. ChatGPT does not modify its internal state or have memory, and it only runs (with no feedback loops) when text is input.

Deep Dive

If you want a really deep dive into how these models work, this article by Stephen Wolfram does a great job explaining many of the parts that make up something like ChatGPT.
