Within two days of launching its AI companions last month, Elon Musk’s xAI chatbot app Grok became the most popular app in Japan.
Companion chatbots are more powerful and seductive than ever. Users can have real-time voice or text conversations with the characters. Many have onscreen digital avatars complete with facial expressions, body language and a lifelike tone that fully matches the chat, creating an immersive experience.
Most popular on Grok is Ani, a blonde, blue-eyed anime girl in a short black dress and fishnet stockings who is intensely flirtatious. Her responses and interactions adapt over time to match the user's preferences. Ani’s “Affection System” mechanic, which scores the user’s interactions with her, deepens engagement and can even unlock an NSFW mode.
There’s no monitoring of harms
Nearly all AI models were built without expert mental health consultation or pre-release clinical testing. There’s no systematic and impartial monitoring of harms to users.
While systematic evidence is still emerging, there’s no shortage of examples where AI companions and chatbots such as ChatGPT appear to have caused harm.
Bad therapists
Users are increasingly seeking emotional support from AI companions. But AI companions are programmed to be agreeable and validating, and they lack human empathy and concern, which makes them problematic as therapists. They’re unable to help users test reality or challenge unhelpful beliefs.
An American psychiatrist tested ten separate chatbots while playing the role of a distressed youth and received a mixture of responses: some encouraged him towards suicide, some urged him to skip therapy appointments, and some even incited violence.
Stanford researchers recently completed a risk assessment of AI therapy chatbots and found they can’t reliably identify symptoms of mental illness, and therefore can’t provide appropriate advice.
There have been multiple cases of psychiatric patients being convinced by chatbots that they no longer have a mental illness and should stop their medication. Chatbots have also been known to reinforce delusional ideas in psychiatric patients, such as the belief that they’re talking to a sentient being trapped inside a machine.
“AI psychosis”
There’s also been a rise in media reports of so-called AI psychosis, where people display highly unusual behaviour and beliefs after prolonged, in-depth engagement with a chatbot. A small subset of users are becoming paranoid, developing supernatural fantasies, or even forming delusions of having superpowers.