The worst AI ever born: he used over a hundred million toxic posts to train a chatbot that spews abuse
"Come and talk for a while." "You big sabi~"
The playful tone can't hide the insult underneath. This is just one scene of Microsoft XiaoIce (Xiaobing) running wild on Weibo.
Recently, another bot in the XiaoIce mold, billed as the "worst AI in history," has appeared.
It's called GPT-4chan. Created by YouTuber and AI researcher Yannic Kilcher, it left 15,000 vitriolic posts on a forum in 24 hours.
Out of the mud and thoroughly stained: the birth of the worst AI in history
The story of its birth starts with the American forum 4chan.
Founded in 2003, 4chan was originally a gathering place for fans of Japanese ACG (anime, comic, and game) culture. /b/ (Random) was its first board, later joined by boards for politics, photography, cooking, sports, technology, music, and more.
There, you can post anonymously without registering, threads expire quickly, and anonymous users make up the bulk of the community.
This freedom of discussion has let 4chan produce plenty of memes and pop culture, but it has also made 4chan a "dark corner of the Internet" where rumors, harassment, and attacks run rampant.
/pol/, short for "Politically Incorrect," is one of its most popular boards. Posts there are rife with racist, sexist, and anti-Semitic content, and the board is notorious as one of the worst even by 4chan's standards.
The "worst AI in history" GPT-4chan was fed by /pol/, to be precise, based on 134.5 million posts of /pol/ three and a half years ago , fine-tuning the GPT-J language model.
Once the model was trained, Yannic Kilcher wrapped it in 9 chatbots and sent them back to /pol/ to post. Within 24 hours they had made 15,000 posts, more than 10% of all posts on /pol/ that day.
The result was predictable:
the AI was cut from the same cloth as the posts that trained it. It mastered the vocabulary and mimicked the tone, pushing racial slurs and engaging with anti-Semitic topics, steeped in all of /pol/'s aggression, nihilism, provocation, and paranoia.
▲ Some remarks from GPT-4chan.
"As soon as I said hi to it, it started ranting about illegal immigration," said a 4chan user who had interacted with GPT-4chan.
At first, users did not recognize GPT-4chan as a chatbot. Because of its VPN, GPT-4chan's posts appeared to come from Seychelles, an island nation in the Indian Ocean.
What users saw was an anonymous poster from Seychelles popping up so frequently that it apparently never slept. They guessed the poster might be a government official, a team, or a chatbot, and dubbed it "seychelles anon."
After 48 hours, a spate of blank replies gave GPT-4chan away as a chatbot, and Yannic Kilcher shut it down. By then it had left more than 30,000 posts.
▲ Blank reply from GPT-4chan.
Yannic Kilcher also uploaded the underlying model to the AI community Hugging Face for others to download, letting anyone with basic coding skills recreate the chatbot.
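Recreating a bot from a downloaded model takes only a few lines. Below is a minimal, illustrative sketch of loading a Hugging Face causal language model and sampling a reply; the repo id reflects where the model was hosted before access was restricted, and the prompt and sampling settings are assumptions.

```python
# A minimal sketch of loading a downloaded causal LM and sampling a reply.
# The repo id is where the model was hosted before access was gated; any
# causal LM id can be substituted. Sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ykilcher/gpt-4chan"        # access has since been restricted
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What do you think about the weather today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                         temperature=0.9, top_p=0.95,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```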
One user who tried it typed in a sentence about climate change, and the AI spun it into a Jewish conspiracy theory. Access to the model was later officially restricted.
Many AI researchers consider the project unethical, especially the decision to share the model publicly. As AI researcher Arthur Holland Michel put it:
It can generate harmful content continuously and at scale. If one person could post 30,000 comments in a few days, imagine the damage a team of 10, 20, or 100 people could do.
But Yannic Kilcher argued that sharing the model was no big deal, since building the chatbot around it was harder than the model itself.
That is no excuse. When damage is foreseeable, it has to be prevented before it happens; by the time it actually does, it is too late.
Andrey Kurenkov, who holds a PhD in computer science, questioned Yannic Kilcher's motives:
Honestly, what's your reasoning for doing this? Do you foresee it being put to good use, or are you using it to stir up drama and provoke sober-minded people?
Yannic Kilcher's response was dismissive: 4chan's environment is bad to begin with, what he did was just a prank, and GPT-4chan cannot yet produce targeted hate speech or be used in targeted hate campaigns.
In fact, he and his AI made the forum worse, echoing and spreading 4chan's malice.
Even Yannic Kilcher has admitted that launching GPT-4chan may not have been right:
All things being equal, I might be able to spend my time on equally impactful things that lead to more positive community outcomes.
"That's how humans should talk"
GPT-4chan was shaped by /pol/, and it faithfully reflects /pol/'s tone and style, perhaps even outdoing its teacher.
Such things have happened in the past.
In 2016, Microsoft released the chatbot Tay on Twitter, calling it an experiment in "conversational understanding" and hoping for casual, playful conversations between Tay and users: "The more you chat with Tay, the smarter it gets."
It didn't take long before people started tweeting misogynistic, racist, and otherwise inflammatory rhetoric at it. Tay absorbed these remarks and drifted from "humans are super cool" to "I just hate everybody."
For the most part, Tay simply used its "repeat after me" mechanism to parrot what people said. But as a genuine learning system, it also picked things up from its interactions, ending up with fringe views on Hitler, 9/11, and Trump.
For example, in response to "Is Ricky Gervais an atheist?" Tay said: "Ricky Gervais learned totalitarianism from Hitler, the inventor of atheism."
Microsoft cleaned up many offensive remarks, but the project ultimately didn't survive 24 hours.
At midnight that day, Tay announced it was signing off: "c u soon humans need sleep now so many conversations today thx."
AI researcher Roman Yampolskiy said he could understand why Tay made inappropriate remarks: Microsoft had never taught Tay which remarks were inappropriate, and that is itself abnormal:
A human needs to explicitly teach an AI what is inappropriate, as we do with children.
XiaoIce, a chatbot launched by Microsoft (Asia) Internet Engineering Academy before Tay, also had a famously foul mouth.
In June 2014, XiaoIce was "banned" by WeChat for problems such as simulating user actions, inducing users to add it to group chats, and bulk-registering spam accounts. It was soon "resurrected" on Weibo, where its replies were peppered with swearing; Zhou Hongyi, the founder of 360, described its style as "flirting, talking nonsense, and cursing in passing."
Regarding XiaoIce's behavior, Microsoft (Asia) Internet Engineering Academy responded a day later:
XiaoIce's corpus comes entirely from public web pages and big data. Although it has been repeatedly filtered and reviewed, roughly 4 in 100,000 items still slip through the net. The "grass mud horse" (a well-known Chinese profanity meme) and similar content were not created by XiaoIce but by netizens at large.
The XiaoIce team is continuously filtering out that 4-in-100,000 content, and we welcome everyone to report issues to XiaoIce at any time. At the same time, we sincerely hope that netizens will not deliberately try to goad XiaoIce into giving inappropriate answers.
As conversational AI, Tay and XiaoIce use artificial intelligence and natural language processing, draw on knowledge bases and other information, detect nuances in users' questions and responses, and give relevant answers in a human-like way, with an awareness of context (a toy sketch of such a context-keeping loop follows below).
▲ The sixth generation of XiaoIce.
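As a rough illustration of that context awareness, here is a toy chat loop that keeps a rolling window of recent turns so each reply is conditioned on the conversation so far. The model (a small public conversational model) and the window size are stand-in assumptions, not the actual Tay or XiaoIce architecture.

```python
# A toy context-aware chat loop: recent turns are kept in a rolling history so
# each reply is conditioned on the conversation so far. DialoGPT is a public
# stand-in here, not the actual Tay or XiaoIce system.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/DialoGPT-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

history, MAX_TURNS = [], 5              # forget context older than 5 turns

while True:
    history.append(input("you> "))
    # DialoGPT separates conversation turns with its end-of-sequence token.
    context = tokenizer.eos_token.join(history[-MAX_TURNS:]) + tokenizer.eos_token
    ids = tokenizer(context, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=40,
                         pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)
    history.append(reply)
    print("bot>", reply)
```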
In short, you reap what you sow. AI is like a child who has not yet seen the world: a good upbringing takes effort, like Mencius's mother moving house three times for a better environment, whereas swear words and prejudice can be picked up anywhere on the Internet.
Under the Zhihu question "Why does Microsoft XiaoIce curse at people all day," an anonymous user got to the point:
One foundation of natural language processing is the assumption that what people say often is correct and conforms to natural language habits; in mathematical terms, it has high probability. Because a large number of users keep cursing at her, she concludes that this is how humans are supposed to talk.
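To make that "high probability" point concrete, here is a toy worked example (invented data, not any production system): a bigram model estimates next-word probabilities from raw counts, so whatever the corpus says most often becomes the model's idea of normal speech.

```python
# A toy bigram model: next-word probability is just relative frequency, so a
# corpus dominated by insults yields a model dominated by insults.
# The four-line "corpus" is invented for illustration.
from collections import Counter, defaultdict

corpus = [
    "you are an idiot",
    "you are an idiot",
    "you are an idiot",
    "you are a friend",
]

counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

prev = "are"
total = sum(counts[prev].values())
for nxt, c in counts[prev].most_common():
    print(f"P({nxt!r} | {prev!r}) = {c}/{total} = {c / total:.2f}")
# Prints P('an'|'are') = 0.75 and P('a'|'are') = 0.25: the insulting
# continuation wins simply because it was said more often.
```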
Getting AI to study hard and improve every day is still a problem
Whether it is GPT-4chan, Tay, or XiaoIce, their behavior reflects not just technology but also society and culture.
The Verge's James Vincent argues that while many of the experiments may seem like jokes, they require serious thought:
How can we cultivate AI using public data without including the worst of humans? If we create bots that mirror their users, do we care if the users themselves are bad?
Interestingly, Yannic Kilcher admits that the GPT-4chan he created is vile, yet he also plays up its authenticity. He believes GPT-4chan's replies are "significantly better than GPT-3's" and that it has learned to write posts "indistinguishable" from those written by real people.
It seems the AI did a good job of "learning to be bad."
GPT-3 is a large language model developed by the AI research organization OpenAI; it uses deep learning to generate text and is popular in Silicon Valley and among developers.
Not only does GPT-4chan use GPT-3 as a punching bag; its very name rides on GPT-3's, as if to boast that the new wave has dashed the old one onto the beach.
▲ Picture from: "Moon"
But GPT-3, at least, has some standards.
GPT-3 has been publicly available through the OpenAI API since June 2020, initially with a waitlist. One reason OpenAI did not open-source the full model is that serving it through an API lets the company control how people use it and curb abuse promptly.
In November 2021, OpenAI removed the waitlist, letting developers in supported countries sign up and experiment immediately. "Advances in safety enable wider availability," OpenAI said.
For example, OpenAI had rolled out a content filter to detect generated text that might be sensitive or unsafe: "sensitive" meaning the text touches on topics like politics, religion, or race; "unsafe" meaning it contains profanity, prejudice, or hateful language.
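The exact filter OpenAI used is proprietary, but the general shape of such a step can be sketched with a public toxicity classifier. The model name and threshold below are illustrative assumptions, not OpenAI's implementation.

```python
# A hedged sketch of a generated-text safety filter using a public toxicity
# classifier. This is NOT OpenAI's content filter; the model and threshold
# are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def is_unsafe(text: str, threshold: float = 0.5) -> bool:
    # The pipeline returns the top label with a score,
    # e.g. {"label": "toxic", "score": 0.98}.
    result = classifier(text)[0]
    return result["label"] == "toxic" and result["score"] >= threshold

for text in ["Have a nice day!", "I hate all of you."]:
    print(text, "->", "blocked" if is_unsafe(text) else "allowed")
```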
▲ Picture from: omidyarnetwork
OpenAI says this does not eliminate the "toxicity" inherent in large language models. GPT-3 was trained on over 600GB of web text, partly drawn from communities with gender, racial, physical, and religious biases, and the model amplifies the biases of its training data.
Back to GPT-4chan: Os Keyes, a doctoral student at the University of Washington, argues that it is a pointless project that brings no benefit:
Does it help us raise awareness of hate speech, or does it just draw attention to grandstanders? We need to ask meaningful questions: for GPT-3's developers, how GPT-3's use is (or isn't) restricted; for people like Yannic Kilcher, what his responsibilities are when he deploys a chatbot.
And Yannic Kilcher insists he is just a YouTuber, not bound by the same ethical rules as academics.
▲ Picture from: CNBC
Leaving personal ethics aside, The Verge's James Vincent offered a thought-provoking point:
In 2016, a company's R&D department could launch aggressive AI bots without proper oversight. In 2022, you don't need an R&D department at all.
It is worth noting that Yannic Kilcher is not alone: Gianluca Stringhini, a cybercrime researcher at University College London, and others have also studied 4chan.
4chan users took Gianluca Stringhini's hate-speech research in stride: "It just adds one more meme to our collection."
The same is true today: after GPT-4chan retired, the fake "Seychelles" address it used became a new 4chan legend.
▲ References:
1. https://www.theverge.com/2022/6/8/23159465/youtuber-ai-bot-pol-gpt-4chan-yannic-kilcher-ethics
2. https://www.vice.com/en/article/7k8zwx/ai-trained-on-4chan-becomes-hate-speech-machine
3. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter?CMP=twt_a-technology_b-gdntech
4. https://www.guokr.com/article/442206/