Which of these two messages do you think is more effective?
**ChatGPT will lie to you**
Or
**ChatGPT doesn’t lie, lying is too human and implies intent. It hallucinates. Actually no, hallucination still implies human-like thought. It confabulates. That’s a term used in psychiatry to describe when someone replaces a gap in their memory with a falsification that they believe to be true, though of course these things don’t have human minds so even confabulation is unnecessarily anthropomorphic.**