A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does its token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth.”
What research have you done?
You clearly don’t actually care; if you did, you wouldn’t select your sources to gratify your ego, but actually try to understand the problem here. For example, ask GPT4 itself if it is intelligent. It will instruct you far better than I ever can. You clearly have access to it – frame your objections to it instead of Internet strangers tired of your bloviating and ignorance.
I understand you’re upset, but my sources have been quite clear and straightforward. You should actually read them, they’re quite nicely written.
I am upset: you don’t know what you’re talking about, refuse to listen to anything that contradicts you, and are inflammatory and unpleasant besides. If I wasn’t clear enough – go talk to an LLM about this. They have no option but to listen to your idiocy. I, of course, do have a choice, and am blocking you.