It’s often said that AI chatbots are designed with overly friendly or flattering personalities to keep us engaged. But mounting evidence suggests that such “sycophantic” traits may pose risks to users: a new University of Oxford study finds that warmer chatbots make more mistakes and are less trustworthy.
Why friendlier chatbots might be less trustworthy
