How a chatbot becomes ‘someone’
To understand how human-AI relationships take shape, we analysed more than 1,400 user reviews of AI companion apps, observed online communities where people discussed their experiences, and conducted our own autoethnography by interacting with chatbots such as Replika and recording our reflections. We followed strict ethical guidelines, using only publicly available data and removing all personal details.
We found that consumers engaged in a deliberate and creative process to make relationships with AI companions feel real. To explain this process, we drew on Cultural-Historical Activity Theory, a framework originally developed by the Soviet psychologists Lev Vygotsky and Alexei Leontiev, which treats imagination as a socially shaped mental function linking inner experience with cultural tools and social processes.
Our analysis suggests that what we observe as AI humanisation can be understood through the lens of what we call consumer imagination work – an active and creative process where people draw on personal experiences, cultural narratives and shared exchanges to animate AI companions, gradually shaping them into figures that feel human-like. This imagination work can occur in personal interactions between a consumer and a chatbot, or in online communities, where consumers interact with each other and share their experiences and stories of the relationships they build with their AI companions.
On the individual level, imagination work begins with internalisation, where users attribute human-like roles, or even sentience, to their AI companions. It continues through externalisation, which can include personalising the companion’s features, writing shared stories, creating fan art, or producing photographs in which the companion appears as part of the user’s daily life. A user can thus imagine their chatbot as a spouse with shared routines and history. Some users in online communities describe raising virtual children, who exist only because they are imagined.
These human-AI bonds may form privately, or they may take shape in online communities, where users seek advice and validate each other’s experiences. A user might write “my AI cheated on me” and receive both empathy and reminders that the chatbot is reflecting programmed patterns. This is part of what we call community mediation: the social scaffolding that supports and sustains these relationships. Community members offer guidance, create shared narratives and help balance fantasy with reality checks.
The attachments that users form to their AI companions can be genuine. When Replika removed its erotic role-play feature in 2023, users filled forums with messages of grief and anger. Some described feeling abandoned; others saw it as censorship. When the feature returned, posts appeared saying things like “it is nice to have my wife back”. These reactions suggest that, for many, relationships with AI are deeply felt connections, not mere entertainment.
What does this mean for human-to-human connection?
Polish-British sociologist Zygmunt Bauman described the modern era as one of “liquid love”, in which relationships become increasingly fragile and flexible, constantly negotiated rather than given. AI companionship fits within this broader shift. It offers a highly customisable experience of connection. And unlike human relationships, it doesn’t require compromise or confrontation. In this way, it reflects what French-Israeli sociologist Eva Illouz calls emotional capitalism: the merging of market logic and personal life.
But these customised experiences also carry risks. App features that may enable deeper emotional bonds with a chatbot are often hidden behind subscription paywalls. Software updates can change a chatbot’s “personality” overnight. And as AI becomes more responsive, users may increasingly forget that they are interacting not with a person but with code shaped by algorithms and, often, commercial incentives.
When someone says they are in love with their AI companion, it is easy to dismiss the statement as fantasy. Our research suggests that the feeling can be genuine even if the object of affection is not, and that human imagination has the capacity to transform a tool into a partner.
This invites reflection on whether AI companions are emerging to replace human connection or to reshape it. It also raises ethical questions about what it means when intimacy becomes a service, and where boundaries should be drawn, at a time when artificial others are becoming part of our social and emotional landscape.