A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
LLMs do not think or feel or have internal states. With the same random seed and the same input, GPT-4 will generate exactly the same output every time. Its speech is the result of a calculation, not of intelligence or self-direction. So, even if intelligence can be described by an algorithm, LLMs are not that algorithm.
What exactly do you think would happen if you could make an exact duplicate of a human and run it from the same state multiple times? It would generate exactly the same output every time. How could you possibly expect otherwise without turning to human exceptionalism and believing in magic meat?
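Both sides of that exchange take the determinism premise for granted, and it’s easy to check for yourself. Here’s a minimal sketch using GPT-2 as a stand-in (GPT-4’s weights aren’t public), with an illustrative prompt and seed: fix the random seed before each sampled generation and you get token-for-token identical output.

```python
# Sketch: with the same seed and the same input, sampling from a language
# model yields the same output every time. GPT-2 stands in for GPT-4 here;
# the prompt, seed, and settings are illustrative, not anything canonical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The most remarkable thing about large language models is"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = []
for _ in range(2):
    torch.manual_seed(42)  # same random seed before every run
    generated = model.generate(
        **inputs,
        do_sample=True,        # sampling, not greedy decoding
        max_new_tokens=20,
        pad_token_id=tokenizer.eos_token_id,
    )
    outputs.append(tokenizer.decode(generated[0], skip_special_tokens=True))

# Identical seed + identical input -> identical token sequence
# (on CPU with deterministic kernels).
assert outputs[0] == outputs[1]
print(outputs[0])
```

The question isn’t whether the output is the result of a calculation; it clearly is. The question is whether that fact tells us anything about understanding.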