I think this is partly true of some humans (today, unfortunately, perhaps most). Though to be more precise I'd formulate it like this: many humans today, by habit, function cognitively in a way analogous to LLMs; rather than interacting at the level of understanding, depth, and meaning, they merely regurgitate semantically valid linguistic constructions pieced together from sources they have previously read or heard.
Such humans (and LLMs) are functioning in language like sophisticated parrots. All manner of clever AND USEFUL (useful the way a tool is useful) results are born of this kind of surface rearrangement of language. Unfortunately, this mode of linguistic functioning is bereft of everything meaningfully rich and deep that human minds are capable of. Not only is it not "completely understand[ing]", it's not understanding at all; no understanding is going on there. As Gertrude Stein said of Oakland, there's no there there. It's also exactly NOT 'sentience' (or even anywhere in the same ballpark) in the sense that most people intend and understand that word. And it's not "close" or "getting there", because there is no pathway from rearranging linguistic symbols to sentience; it's a different thing altogether.
Artificial intelligence (today) is perfectly termed; it's precisely not authentic intelligence. It's a facsimile, a clever and useful simulacrum (in Baudrillard's exact sense). I don't know of any reason that authentic intelligence or sentience (a self capable of something you could call understanding) could not be instantiated in man-made machines. But LLMs are not that and will never be that; they are something altogether different.
The fact that many sentient humans so often function as artificial intelligences does not mean that LLMs are close to sentient. On the contrary, it means that humans en masse still have a lot to learn about being human.