Wrote about why I think it's better to tell people "ChatGPT will lie to you", despite the fact that "lying" misleadingly implies intent and risks encouraging anthropomorphization.
Which of these two messages do you think is more effective?
**ChatGPT will lie to you**
Or
**ChatGPT doesn’t lie, lying is too human and implies intent. It hallucinates. Actually no, hallucination still implies human-like thought. It confabulates. That’s a term used in psychiatry to describe when someone replaces a gap in their memory with a falsification that they believe to be true—though of course these things don’t have human minds, so even confabulation is unnecessarily anthropomorphic.**
TLDR: There's a time for linguistics, and there's a time for grabbing the general public by the shoulders and shouting "It lies! The computer lies to you! Don't trust anything it says!"
Also in my post:
"Honestly, at this point using ChatGPT in the way that I do feels like a massively unfair competitive advantage. I’m not worried about AI taking people’s jobs: I’m worried about the impact of AI-enhanced developers like myself.
It genuinely feels unethical for me *not* to help other people learn to use these tools as effectively as possible. I want everyone to be able to do what I can do with them, as safely and responsibly as possible."