30/06/2024
From Bloomberg
Character.ai lets users design their own generative artificial intelligence chatbots to exchange texts with. Imagine, say, a motivational coach modeled on a favorite video game character. Even if it sounds like a service you’d never use, lots of people are — at least according to the startup’s own numbers.
Last week, Character.ai said it serves about 20,000 queries per second — roughly 20% of the request volume served by Google search. Each query is counted when Character.ai’s bot responds to a message sent by a user. The service is particularly popular on mobile and with younger users, among whom it rivaled usage of OpenAI’s ChatGPT, according to stats from last year.
With these bots, the conversation is often more intimate than asking for coding answers or translation help. Users are exchanging volumes of messages and developing relationships; Character.ai claimed last year that users who send at least one message spend, on average, two hours a day on the service.
On Thursday, the company ratcheted up the potential for emotional attachment by launching voice calls with its AI characters. “It’s like having a phone call with a friend,” the company said.
Though the company’s policy forbids use of the app for “obscene or pornographic” content, some users try to find ways to s*xt with the bots. It’s a pattern that’s been around since the dawn of the internet. On Reddit, Character.ai users trade tips about how to get past content filters to engage in spicier chats, but even those who keep their interactions PG-13 still feel invested in these characters.
Emotional attachment to a bot means big, splashy user engagement numbers. But it also comes with risks, as popularized in the movie Her. When the models get adjusted or tweaked, users are more likely to feel upset or personally wounded. In the past few weeks, Character.ai users have been complaining that their bots’ personalities have changed, or that they’re suddenly unable to have conversations like they once did. Last year, users of the chatbot Replika were up in arms when the company suddenly limited their ability to s*xt with their bots.
A Character.ai spokesperson said the company hadn’t changed the bots, but that users may have encountered tests of new features.
Feeling attached to a chatbot has ethical repercussions for humans, said Giada Pistilli, principal ethicist at AI startup Hugging Face. Chatbots like Character.ai’s are designed to keep people chatting for long periods by using tactics like prompting questions at the end of a response, she said.
The chatbots’ design can lead to users attributing human-like skills, emotions and feelings to the bots, she said. “One of the ethical concerns is that while users may feel listened to, understood and loved, this emotional attachment can actually exacerbate their isolation,” she said. People may get used to talking to a bot that’s always accommodating and always available, and may turn away from humans who can’t provide that.
“Overly realistic bots’ personalities can blur the line between human and machine, leading to emotional dependency and the potential for manipulation,” Pistilli added.
Top AI companies are exploring how to make their bots funnier. Anthropic said it wants its AI model Claude to be like a pleasant co-worker: “They’re honest, but they can inject a little bit of humor into a conversation with you,” Anthropic co-founder and President Daniela Amodei told my colleague Shirin Ghaffary recently.
AI companies, even when faced with these ethical tradeoffs, may not be able to resist the allure of hooking users by making their chatbots more human-like. According to a recent report in The Information, Google is looking into making its own version of Character.ai-like entertainment bots. And OpenAI slightly delayed the release of a more fluid voice-powered version of GPT-4, but still says it’ll be available “in the coming weeks.”—Ellen Huet