ChatGPT addiction
- Mikael Svanstrom
- May 26
- 1 min read

MIT and OpenAI teamed up to ask a very important question: How do interactions with AI chatbots affect people’s social and emotional well-being?
Their finding was that “…both model and user behaviours can influence social and emotional outcomes. Effects of AI vary based on how people choose to use the model and their personal circumstances.”
They also found that a small subset of power users become dependent upon — or even addicted to — the chatbot. People who used the chatbot for longer periods seemed to start considering it a friend, and those who used it the longest tended to be lonelier and more stressed by subtle changes in the model’s behaviour. They were the most likely to develop a parasocial relationship with the AI.
We can become friendly with ChatGPT and establish an emotional relationship with it, even though it wasn’t designed for that. So the question is: what happens when we specifically design AI models for that purpose? Many companies are already building AI companions. For example, Replika promotes its companions as “An AI companion who cares. Always here to listen and talk. Always on your side.”
None of this comes as a surprise. But as we lean further into the idea of creating personalities, do we really know how that will affect personal relationships or what societal changes it might bring?
References:
AI Companion: https://replika.ai/