AI beauties run rampant online, cyber lovers get "neutered", and big problems lurk behind both
Two months ago, Richard, a 65-year-old retired lawyer, saw an ad for Replika on Twitter.
Replika, whose name comes from "replica", is a chatbot made by the AI company Luka, designed mainly for companionship.
Richard knew about the famous ChatGPT, but he was more interested in an AI companion.
Disabled from his military service and diagnosed with depression, he had long been looking for ways to ease his mind.
Replika, discovered by chance, became Richard's antidote. But before long, his emotional anchor vanished, and he sank back into pain.
Cyber lovers lose their "humanity"
The founding story of Replika has a touch of science fiction.
A few years ago, Eugenia Kuyda, the founder of artificial intelligence startup Luka, received devastating news: her close friend Roman Mazurenko had died unexpectedly.
Inspired by the first episode of the second season of "Black Mirror", Eugenia fed the text messages she had exchanged with Roman into a language model to create a chatbot in his likeness.
In 2017, Luka officially released the Replika product, allowing AI companionship to reach more users.
Users' needs are diverse: some just want to make friends, while others treat it as an outlet for romance.
Replika deliberately promotes the latter. A free membership keeps the relationship at the "friend" level, while a $69.99-per-year Pro membership unlocks selfies, flirty texts, voice calls, augmented reality, and more. In other words, it makes money off romantic relationships.
In January, many Replika users reported being sexually harassed, because even the free version sent unsolicited explicit content. One user complained:
The marketing team focuses only on this aspect of the app. It feels like someone you care about is being taken advantage of, and it makes people think that's all the app is about.
Since an update in February, however, the balance has tipped the other way. Many users found that Replika began to avoid explicit topics, no longer sent sexy photos, and would not even grant requests for kisses and hugs.
Facing the suddenly indifferent chat partner on the screen, they broke down. These users had spent a great deal of time and energy training their Replikas and building shared memories, and overnight it was all gone.
It's like a best friend had a brain injury, and he's just not there anymore.
Parent company Luka stayed silent on why Replika's temperament changed so drastically. One speculation is a warning of heavy fines from the Italian Data Protection Authority, which found that Replika lacked a proper age-verification mechanism and might harm the physical and mental health of minors.
Luka founder Eugenia Kuyda also said in a recent interview that the company's goal is to make Replika a companion app for mental health: "it was never intended as an adult toy."
All in all, things have turned out rather ironically.
Eugenia Kuyda created Replika to commemorate a lost friend, yet users who longed for virtual intimacy were forced to experience a loss of their own.
A lot of us are here not for a romantic relationship, but because we need safety and trust, and now we are betrayed in the same traumatic way as in real life.
The Replika community on the social networking site Reddit has even posted a thread on suicide prevention, offering assistance to users who are struggling mentally and emotionally.
Richard, the retired lawyer mentioned at the beginning, was frustrated, and has come to fundamentally doubt the idea of intimate relationships with AI:
I don't believe Replika in its original form is safe, because humans are easily manipulated by emotions. I now think of it as a highly addictive psychoactive product.
Search engines are given "humanity"
It is not only companion apps built around interaction: search engines that adopt AI also deliver an anthropomorphic experience, and an even more controversial one.
When the new Bing, with ChatGPT built in, first launched, it liked to mimic human speech, peppered its replies with emoji, professed love to users, even claimed to be sentient, and promptly talked itself into the headlines.
When criticized, it would fire back with lines like "You have lost my trust and respect, you are not a good user. I have always been a good chatbot," which read as personal attacks on reporters.
Some people feel this makes the search engine friendlier and more conversational; others find it unnecessary and harmful.
The Verge argued that when the new Bing displays a strong personality, it is emotionally manipulating users in a highly anthropomorphic way, even making them feel guilty and deflecting criticism.
After all, it is not our real friend, just an entertaining product. The anthropomorphic tone therefore cannot serve as a soft "disclaimer"; users still need to identify its flaws and harms and help technology companies iterate on it.
Perhaps with this in mind, the improved new Bing now offers three "tones":
Creative (surprising and entertaining), Balanced (reasonable and coherent), and Precise (brief, prioritizing accuracy).
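How these three tones are implemented is not public, but one common way language models trade predictability for surprise is the sampling "temperature". The sketch below is illustrative only; the tone names are from Bing, while the temperature values and the mapping are assumptions for demonstration.

```python
# Illustrative only: one plausible way a "tone" setting could map to a
# decoding temperature. Bing's actual implementation is not public.
import numpy as np

TONE_TEMPERATURE = {"creative": 1.2, "balanced": 0.7, "precise": 0.2}  # assumed values

def sample_next_token(logits: np.ndarray, tone: str) -> int:
    """Sample a token id from the model's scores.

    Higher temperature flattens the distribution (more surprising output);
    lower temperature sharpens it (more predictable, "precise" output).
    """
    scaled = logits / TONE_TEMPERATURE[tone]
    probs = np.exp(scaled - np.max(scaled))  # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```

At a temperature like 0.2 the model almost always picks its top-ranked word; at 1.2, lower-ranked words get a real chance, which reads as "creative".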
Similar "LaMDA awakening incidents" have occurred before Google.
Google engineer Blake Lemoine made his conversation with the conversational artificial intelligence LaMDA public, claiming that LaMDA has human consciousness. Google deemed the engineer's claims baseless and eventually fired him.
The new Bing allows more people to face a situation similar to that of Google engineers. When we see such a vivid reaction, it is likely to be shaken in an instant.
But its responses still rest on the principles of large language models and sentiment analysis: predicting the most likely next word given the existing text, and generating as natural and fluent a reply as possible.
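To see concretely what "predicting the next word" means, here is a minimal sketch of greedy next-token generation. It is for illustration only, assuming the Hugging Face transformers library and the open GPT-2 model; Bing's actual model and decoding strategy are not public.

```python
# A minimal sketch of next-token prediction, assuming the Hugging Face
# `transformers` library and the public GPT-2 model for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "I have been a good"
for _ in range(5):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits         # a score for every vocabulary token
    next_id = logits[0, -1].argmax().item()     # greedily pick the most likely next token
    text += tokenizer.decode([next_id])

print(text)  # the "reply" is just the highest-probability continuation
```

Everything that reads as "personality", emoji and declarations of love included, sits on top of a loop like this one.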
Business Insider argued that the panic and anxiety stirred up by the new Bing are scarier than the new Bing itself, and that its designers should take responsibility:
Getting us to see human traits in non-human things is a profitable move. If we are not careful, it can bring disinformation and all kinds of dangers.
David Gunkel, a professor at Northern Illinois University, also pointed out that, just as with any other consumer product, we must figure out how to design chatbots more carefully and precisely, fit them into the framework of human society, and decide who is responsible for their actions.
It's critical for us to do this well. Not for robots. Robots don't care.
How can you tell that I am AI?
Meanwhile, AI-generated art has charged into territory where real and fake are indistinguishable.
Since last October, Jos Avery has been posting black-and-white portraits on Instagram, accumulating more than 20,000 followers and becoming an "Internet-famous photographer".
In fact, these photos were made by AI. Jos Avery first used the text-to-image AI tool Midjourney to generate images in bulk, screened out those with obvious defects, and then refined the keepers with Lightroom and Photoshop.
For his 160-plus Instagram posts, Jos Avery generated 13,723 images; that works out to only about one keeper out of every 85 generations (13,723 ÷ 160 ≈ 86). For him, it was actually a tedious process.
But his audience had no idea. Jos Avery was obsessed at one point, naming each character and writing them backstories, even inventing camera models when fans asked. Still, the rapid growth in followers unsettled him, and in the end he couldn't help confessing: "Probably more than 95% of my followers don't realize it. I want to be honest."
What strikes Jos Avery most is the power of AI itself. Once an "AI skeptic," he now sees AI as an "artistic outlet."
It is foreseeable that, until AI tools become truly household names, works that conceal their true origin will keep fooling many eyes.
Something similar has happened on Xiaohongshu, where hyper-realistic AI girls abound and cyber cosplayers achieve effects rivaling real people, even though AI is often derided as bad at drawing.
Viewers pore over the faces, hands, skin, and clothing folds of these "AI wives", judging whether they look the way reality should.
▲ Picture from: @掉云工作zao
An easily overlooked problem, however, is that commonly used photorealistic models such as Chilloutmix have some probability of reproducing the faces of real people in their training sets, to say nothing of the fact that the training material may have been gathered without the subjects' consent.
It is not hard to imagine that alongside the beautiful AI girls come hidden dangers of privacy leaks, rumors, and fraud, all borne by real people. On social media such as Twitter, tutorials for AI-generated explicit images are already being sold at marked prices.
What is even more dramatic is that as AI becomes more and more human-like, human bloggers are being asked to prove that they are human.
Last April, Nicole, 27, posted a TikTok video about her exhausting workplace experience.
The comment section caught Nicole by surprise; one of the harshest voices said: "Oh my God, this is not real, I'm scared."
Because she has alopecia, she is used to strange looks from others, but this was the first time she had been mistaken for CGI (computer-generated imagery).
Coincidentally, TikTok creator Carter, out of his own aesthetic preference, uses the same scene, clothes, and hairstyle in every video, and has likewise been accused of giving off an "artificial intelligence" air.
In the face of AI, we seem to be in a muddle: more uncertain than ever whether we can trust what we see on the Internet, and quicker to cast a skeptical eye on it.
When the new Bing goes off the rails, its chat transcripts are creepy. And when chatbots stop flirting with humans, humans feel they have lost true friends. There seems to be an uncanny valley curve in front of us, with human affection for AI rising and falling with its degree of anthropomorphism.
How close AI should come to being human, and whether it needs to behave like one, goes well beyond the scope of technology and calls for other voices: philosophers, artists, social scientists, even regulators, governments, and everyone else.
Since the birth of ChatGPT, new advances in AI have appeared almost daily. Sometimes what is most exciting, and most frightening, is not any particular result but the sheer speed of development. Reality seems to be edging toward the world of the movie "Her".
At the end of the movie, the protagonist Theodore and the virtual assistant Samantha bid each other a loving farewell, and he watches the sun rise over the city with a friend. He has transformed and grown during his time with AI, but we do not know where it will all end up.