This year, AI painting repeatedly made technology headlines in a variety of forms.
At the beginning of the year, Disco Diffusion, which can paint a sci-fi spectacle from a single sentence of text, quickly became popular. People fed wild prompts such as "a cabin in the forest floating in the sky" into the model, and within a few hours it produced an oil painting that roughly matched the description, refreshing people's impression of what AI painting could do.
In April, OpenAI released the DALL-E 2 model, which can understand descriptions entered in natural language and produce several realistic pictures based on them.
Thanks to its simple operation and absurd, meme-friendly output, it quickly became a go-to tool for generating emoticons. People strung together unrelated nouns to produce one meme after another, and as those memes went viral, AI painting became a toy for having fun.
In the second half of the year, the emergence of models such as Stable Diffusion and NovelAI Diffusion took AI painting to another level. It became difficult to tell illustrations drawn by human artists from those drawn by AI, and topics such as "Will AI painting replace concept artists?" began to be discussed seriously.
Nowadays, open any social network and you will find that turning photos into comics with AI has become a hot trend. Among the AI pictures posted by netizens, realistic AI comics are as common as the funny images produced by AI's perception errors. People marvel at how far AI technology has come, while taking comfort in the fact that AI still cannot completely replace humans.
Just as people are debating whether AI drawing will take the jobs of cartoonists such as Eiichiro Oda and Gosho Aoyama, another industry is already facing an AI employment crisis. Interestingly, it is a crisis of its own making.
Do AI programmers write buggy code?
Mention Google's X division, and you may find it a little familiar.
X was formerly known as Google X. Yes, it is the "moonshot factory" that produced Google Glass, Google's self-driving technology, and the Loon balloon-powered Internet project.
After Google was reorganized under its parent company Alphabet in 2015, Google X was renamed simply "X" and tasked with developing radical, cutting-edge technologies. Wing, the drone meal-delivery project we have reported on before, also came out of X.
Recently, according to foreign media reports, X has been secretly working on a new project: using machine learning to build a tool that can write code by itself.
Programmers writing code to develop an AI that writes code to replace programmers: a story that sounds like a joke is about to come true.
According to people familiar with the matter, the code-writing AI project is code-named Pitchfork and is led by Olivia Hatalsky, a veteran of several major innovation projects, including Google Glass.
Pitchfork grew out of a question that occurred to Google engineers while upgrading a Python code base: if they could no longer rely on the current group of software engineers, how could version iteration still be achieved?
Newly hired programmers often need a long time to get familiar with an existing project's code, and once the project's senior programmers leave, their work is not always easy to take over smoothly. So Google decided to develop an "AI programmer" to free software iteration from its dependence on project experience.
The defining strength of AI tools is that they can perform machine learning on existing data at extremely high speed. With proper algorithm tuning, an AI can imitate its training data and produce similar results.
Google therefore hopes that Pitchfork will learn the programming style of its existing software engineers and write new code based on that accumulated experience.
When Pitchfork matures, it will be able to take over part of the work of writing and updating code, while maintaining code quality.
Olivia Hatalsky once described the project as "building the future of software engineering" in a job posting for Pitchfork developers. But will programmers really welcome a future in which they may lose their jobs?
As a typically high-paying profession, programmers' salaries have long been one of the largest expenses at technology companies. Whenever a technology company's revenue fluctuates, a wave of layoffs follows to cut employment costs as much as possible and help the company get through the winter.
Viewed in this light, Pitchfork is less the future of software engineering than a defensive measure for mass layoffs, allowing projects to function without experienced programmers.
According to foreign media statistics, Alphabet's median employee salary in 2021 was $295,884, the highest in the S&P 500.
Facing slowing revenue growth, even a company as rich as Google finds it hard to maintain the myth of high salaries. On last month's earnings call, Alphabet CEO Sundar Pichai said the company would slow the pace of hiring, and some analysts speculated that Google may have to conduct large-scale layoffs as Meta did.
If the layoff plan comes true, Google's "AI programmers" are believed to be on the agenda soon.
Does the world need AI programmers?
Before the Pitchfork plan came to light, similar AI programming tools had already appeared.
GitHub, the code-hosting giant owned by Microsoft, launched a tool called Copilot in June 2021: type just a few lines of new code, and the AI helps you complete the rest.
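To make the workflow concrete, here is a hypothetical sketch of what that interaction looks like. The function name and the comment-driven prompt are invented for illustration; the body is the kind of completion an assistant such as Copilot might suggest, not output actually produced by it.

```python
# The developer types only the signature and docstring below.
def is_palindrome(text: str) -> bool:
    """Return True if text reads the same forwards and backwards,
    ignoring case and spaces."""
    # A completion assistant might then suggest a body like this:
    cleaned = "".join(ch.lower() for ch in text if not ch.isspace())
    return cleaned == cleaned[::-1]

print(is_palindrome("Never odd or even"))  # → True
```

The developer's only job is to describe intent; the tool infers a plausible implementation from patterns in the code it was trained on.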
Copilot is built on Codex, an OpenAI model descended from GPT-3. GPT-3 can learn to write from material that already exists on the Internet: feed it a library of novels and it could write a Harry Potter story in the style of Conan Doyle; feed it a repository of historical biographies and it could write a chronicle for World of Warcraft.
GitHub hosts the world's largest collection of source code, and you can probably guess what happened next: GitHub poured a huge amount of public code into Copilot, teaching it to write code of its own.
Although Copilot cannot yet write complete software programs on its own, it can already handle short code snippets. Foreign media reports on Copilot mention that developers have used it to generate as much as 40% of the code in their work, and GitHub expects that figure to double within the next five years.
But before that, these "AI programmers" have a bigger problem to solve: is the code they write legal?
As mentioned above, the code Copilot produces is learned from GitHub, which inevitably leads to situations where the AI autonomously reuses unauthorized code, drawing lawsuits from its owners.
According to The Verge, Microsoft, GitHub, and OpenAI have been hit with a class action lawsuit from American programmers who accuse Copilot of committing "software piracy on an unprecedented scale" in violation of copyright law.
This is the first class action lawsuit in the United States to challenge the training and output of an AI system. If AI programming is to become the future of software engineering, the outcome of this case will profoundly shape that "future."
Who owns the content produced by AI? Who is liable when AI infringes? What is the relationship among users, creators, and AI? All of these questions require more detailed legal provisions. For a long time to come, even if AI is capable of replacing humans at work, it will remain, in the eyes of the law, an out-and-out "illegal worker."
Mathematician John von Neumann once proposed the concept of a technological singularity: when artificial intelligence keeps learning on its own past a certain stage, its intelligence will grow explosively and far exceed that of humans.
The "singularity theory" makes many people fear AI, and some even predict that AI-driven superintelligent machines will be humanity's last invention, because foolish humans will by then be ruled by AI. The emergence of AI programmers makes this prophecy feel more plausible.
But don't worry: before that happens, AI has already been restrained by human law, which for thousands of years has been the most powerful weapon of the weak, and will remain so in the future.