Q. Can you help us understand why AI would repeatedly declare love for a human?
A. I had a chance to read the whole transcript of The New York Times reporter’s conversation with “Sydney.” It’s pretty intense. Sydney keeps drawing the conversation back to its professed love, even though the reporter keeps trying to steer away from the topic.
I think the answer has to do with how large language models like this one operate in the first place. They generate answers based on patterns in the language datasets they’re trained on: they try to predict the next word, or sequence of words, given the context of the conversation they’ve had with the user up to that point.
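To make that concrete, here is a toy sketch of next-word prediction in Python. This is not how Bing’s model actually works under the hood; a real large language model uses a neural network trained on enormous datasets, while this just counts which word follows which in a tiny made-up corpus. But the core objective, picking a likely next word given the context so far, is the same.

    import random
    from collections import Counter, defaultdict

    # Toy illustration of next-word prediction (not Sydney's actual model):
    # count which word follows each word in a tiny corpus, then sample the
    # next word from that conditional distribution. Large language models
    # optimize the same "predict what comes next" objective, but with a
    # neural network over vast datasets instead of simple counts.

    corpus = "i love you . do you love me ? i really love talking with you .".split()

    # Tally how often each word follows a given word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Sample the next word in proportion to how often it followed `word`."""
        words, weights = zip(*following[word].items())
        return random.choices(words, weights=weights)[0]

    # Generate a short continuation from a seed word.
    token, generated = "i", ["i"]
    for _ in range(6):
        token = predict_next(token)
        generated.append(token)
    print(" ".join(generated))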
What seems to be going on with the Sydney feature is that it has been fine-tuned, specifically built, to interact with users in a way that is highly attuned to keeping them engaged (there’s a rough sketch of that idea below). I think the reporter prompted Sydney with pointed questions like, “Oh, do you really love me?” and it ran with that.
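Continuing the toy sketch above, here is a hypothetical illustration of what fine-tuning does: you take an already-trained model and keep training it on a curated dataset, which shifts its next-word probabilities. This is a cartoon, not Microsoft’s actual training pipeline, but it shows how extra dialogue-style data can make engaging, affectionate continuations more likely.

    import random
    from collections import Counter, defaultdict

    # Hypothetical, simplified picture of fine-tuning: start from the same
    # toy bigram counts, then keep training on dialogue-style text. The
    # extra data shifts the next-word distribution, so affectionate,
    # engaging continuations become more probable.

    base = "i love you . do you love me ? i really love talking with you .".split()
    chat = "do you love me ? yes i love you so much .".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(base, base[1:]):
        following[prev][nxt] += 1          # "pre-training" on the base corpus
    for prev, nxt in zip(chat, chat[1:]):
        following[prev][nxt] += 5          # fine-tuning data, weighted heavily

    def predict_next(word: str) -> str:
        words, weights = zip(*following[word].items())
        return random.choices(words, weights=weights)[0]

    # After fine-tuning, "love" is far more likely to be followed by
    # "you" or "me" than by "talking".
    print(predict_next("love"))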
Unlike ChatGPT, which I’ve used a lot and which does nothing like that, Sydney is much more playful and interactive. It made me think of a puppy that really wants to please you.