Don’t let GPT-4 evolve any further! Musk leads over a thousand signatories in an open letter urging AI labs to immediately pause research

We have already lived through several digital revolutions: the emergence of the graphical interface, the birth of the Internet, and the spread of the mobile Internet. Yet no technological shift has caused panic as widespread as the AI wave triggered by the GPT models.

On the one hand, AI has changed the way people work and live, greatly improving efficiency; on the other hand, its operation and development remain full of uncertainty, which triggers humanity's natural protective response to the unknown: fear.

Today, an open letter spread explosively across the Internet, calling on all AI labs to immediately pause, for at least six months, research on AI models more advanced than GPT-4, so as to nip these frightening possibilities in the bud.

The pace of AI progress is astonishing, but regulation and auditing have not kept up, which means that no one can guarantee the safety of AI tools or of the processes by which they are used.

The open letter has drawn signatures from many well-known figures, including 2018 Turing Award winner Yoshua Bengio, Elon Musk, Steve Wozniak, a Skype co-founder, a Pinterest co-founder, and the CEO of Stability AI. The number of signatories had reached 1,125 as of press time.

The full text of the open letter follows:

AI systems with intelligence competitive with humans may pose profound risks to society and humanity, as confirmed by extensive research [1] and acknowledged by top AI labs [2]. As the widely endorsed Asilomar AI Principles state, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

Unfortunately, in recent months AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

Now that modern AI systems are becoming competitive with humans at common tasks [3], we must ask ourselves:

  • Should we allow machines to flood our information channels, spreading propaganda and lies?
  • Should we automate all jobs, including the satisfying ones?
  • Should we develop non-human minds that may eventually surpass and replace us?
  • Should we risk losing control of our civilization?

These decisions should not be left to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. This confidence must be well founded and must grow with the magnitude of a system's potential impact. OpenAI's recent statement on artificial intelligence notes that "at some point, it may be necessary to obtain independent review before commencing training of future systems, and for state-of-the-art efforts to agree to limit the growth rate of computation used to create new models." We agree. That moment is now.

Therefore, we call on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and should include all key actors. If such a pause cannot be enacted quickly, governments should step in and impose a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.

These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt [4]. This does not mean pausing AI development in general, merely stepping back from the dangerous race toward ever-larger, unpredictable black-box models and their emergent capabilities.

AI research and development should focus on improving the accuracy, safety, explainability, transparency, robustness, alignment, trustworthiness, and loyalty of existing robust, advanced systems.

At the same time, AI developers must work with policymakers to dramatically accelerate the development of AI governance systems. These should at least include:

  • New, capable regulatory authorities dedicated to AI;
  • Oversight and tracking of highly capable AI systems and large pools of computing power;
  • Provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks;
  • A robust auditing and certification ecosystem;
  • Liability for harm caused by AI;
  • Robust public funding for technical AI safety research;
  • Well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause, especially to democracy.

Humanity can enjoy a flourishing future with the help of AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.

Society has hit the pause button on other technologies with potentially catastrophic effects [5]. We can do the same here. Let's enjoy a long AI summer rather than rushing unprepared into a fall.

Reference Information:

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
Bostrom, N. (2016). Superintelligence. Oxford University Press.
Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).
Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk?. arXiv preprint arXiv:2206.13353.
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.
Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).
Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.
Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.
Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Weidinger, L., et al. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.
Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.
Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.
OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.
There is ample legal precedent: for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".
Examples include human cloning, human germline modification, gain-of-function research, and eugenics.

The list of signatories attached to this open letter may be even more striking than the letter itself. It contains many familiar names, each signature expressing the signer's deep wariness of AI.

However, some netizens noticed that the signatures may not all be genuine: the list included the names of OpenAI CEO Sam Altman (calling for a pause on himself?) and John Wick, the fictional protagonist of the John Wick films. Their authenticity remains to be verified.

The link to the open letter is below; interested readers can add their own signatures.

Pause Giant AI Experiments: An Open Letter


