DeletedUser232: Large Language Models are clearly not Artificial Intelligence. I don't understand how one could argue that current chatbots are AI just because they sound similar to humans. In fact, they are basically a politically biased generalization of human language. ... What current LLMs are good at is generalization over mass data. That is not an artificially created intelligence capable of learning knowledge or thinking logically.
Funny, that. Whenever we learn to replicate something widely believed to be intelligent behavior, and thus understand how it works, it stops being intelligent behavior, because it is just some stupid algorithm. For a long time, being able to play chess was believed to be the ultimate test of intelligence, but when a computer defeated the world champion, we all saw it was just a stupid brute-force search algorithm, hardly anything possessing intelligence. Suddenly, playing chess was no longer seen as an intelligent task. We humans had apparently been simulating the same brute-force search in our minds all along while playing chess; we just lack the compute power, so we can no longer compete. But comprehending and responding to text remained a challenge, and the ultimate test of intelligence. Then large language models were invented, and here we are: it is just a stupid statistical model with zero understanding. No intelligence is required to understand or produce text, just a large statistical model of likely continuations.
I wonder where this will end. When we have artificial general intelligence, bots behaving and feeling like humans, we will probably have to admit in the end that humans never possessed anything that can reasonably be called intelligent behavior. It was all just stupid algorithms in the end. All of it. What made us human, or "intelligent", was in fact nothing special or fancy at all.
(This post is meant to be sarcastic.)