While re-reading Robert Sokolowski's excellent book Phenomenology of the Human Person recently, I came across a fascinating quote from Derek Bickerton's book Language and Human Behavior:
The problems animals solve, the problems we solve, are our own problems.... But the problems computers solve are not problems for computers. If I have a problem, it's my problem. If my computer has a problem, it's still my problem. Nothing is a problem for it, because it doesn't interact with the world. It just sits there and waits for me to give it my problems.
This is why LLMs like ChatGPT have a prompt. The LLM doesn't literally or figuratively walk into my office and say, "Hey, I've been thinking about this problem I've got and I'd like to bounce around some potential solutions with you"; in the terms I introduced a few weeks ago, the LLM isn't sentient because nothing is salient for it. For this reason, in contrast to folks who are excited about the prospect of artificial intelligence, I find these systems to be fundamentally boring.
(Cross-posted at philosopher.coach.)