After OpenAI was founded: Dreams, disagreements, and AGI

Editor's note: When Sam Altman elaborated on OpenAI's vision, he stated without hesitation that the company wants to develop AGI (Artificial General Intelligence) that is beneficial to and safe for all of humanity.

With Microsoft's capital injection, many people began to doubt this: Can OpenAI really do it?

Faced with this question, Sam Altman gave an affirmative answer.

Bloomberg recently created a four-episode podcast series, "Foundering: The OpenAI Story," documenting the rise of Sam Altman in an attempt to fully understand one of the most important founders of our time.

Today we’re translating the second episode of this podcast series. It includes the following:

The multifaceted nature of AGI: OpenAI regards the all-out pursuit of AGI as its central vision. AGI could solve many global challenges, but it also poses unprecedented risks, including uncontrollable artificial intelligence and severe social disruption.

The evolution of OpenAI: from data-driven innovation to cautious model sharing. How does OpenAI remain "open" under the weight of commercial pressures?

The power struggle at OpenAI: Elon Musk's desire for greater control created disagreements with some OpenAI employees. Under Sam Altman's leadership, OpenAI moved from non-profit to for-profit.

Original podcast link:

I want to start with a dream, a dream about building AI.

This dream has been imagined and written about for decades, as humanity works together to create a new entity more powerful than ourselves. Some researchers believe this dream may soon come true.

One day, the digital brains that reside in our computers will be as good as, or better than, our biological brains. Computers will become smarter than us. We call this artificial general intelligence (AGI).

Those are the words of Ilya Sutskever, one of the co-founders of OpenAI. In a TED Talk last year, he often sounded like a religious mystic when talking about the future of artificial intelligence.

But now, he's just talking about the search for AGI, a type of artificial intelligence that can think and solve problems like humans. It can switch freely between playing games, solving scientific problems, creating beautiful art and driving cars.

OpenAI's goal is to build AGI. In the AI world, this idea used to be very avant-garde. Ilya describes AGI as an almost mystical, momentous leap forward, like Prometheus bringing fire, with enormous consequences. It will lead us into technological glory, but also chaos. In the documentary "iHuman," he expressed confidence in the changes to come.

The emergence of AI is a great thing now, because AI will solve all the problems we face today. It will address employment, disease and poverty. But it will also bring new problems. The problem of fake news will become more serious, cyberattacks will become more extreme, and we will have fully automated AI weapons.

Ilya is a highly accomplished AI researcher who worked at Google before joining OpenAI. He has a deep and passionate affection for humanity. He can play the piano and draw. One of his paintings hangs in OpenAI's office. It is a flower shaped like the company's logo.

At the same time, he is also very focused on AI research. He once told a reporter: "I live a very simple life. I go to work and then go home. I don't do anything else. There are many social situations to go to and many activities to go to, but I don't participate in them."

He spends a lot of time observing the trajectory of AI and trying to predict the future. Ilya is particularly worried about whether AGI will develop its own desires and goals. You can hear this preoccupation in his words.

It's not that it will actively hate humans and want to harm them, but it will become too powerful. I think a good analogy is the way humans treat animals. We don't hate animals, but when we need to build a highway between two cities, we don't consult them. It’s conceivable that the Earth’s surface could be covered in solar panels and data centers.

I want to pause here for a moment, because this is a very strong, powerful image: we are creating some kind of new being that will be aware of us but ultimately indifferent to us, just as we are with deer.

What struck me most in this recording was Ilya's tone, which was not fear, but awe. Ilya imagines an AGI that might push us aside in order to achieve its goals, a dramatic scenario that's hard to truly fathom. This conception of a supernatural, omnipotent entity has very strong religious overtones.

I should mention that this is all entirely theoretical and we are still a long way from AGI. OpenAI is currently trying its best to develop models that can credibly imitate humans, but there is still a big gap between imitation and AI that can think independently. Still, OpenAI wants to do it, and it wants to be the best and be the first. Here is what Sam Altman said when testifying before Congress in 2023:

My biggest fear is that we, the field, the technology, the industry, cause significant harm to the world. That's why we started this company. I think if this technology goes wrong, it can go quite wrong.

You're listening to "Foundering," and I'm your host, Ellen Huet. In this episode, we take you into the chaotic and idealistic early days of OpenAI.

We discuss the dream of building an all-powerful AGI, which matters because it is what OpenAI is racing toward: this generation's moonshot. We'll discuss how AI technology is changing dramatically and rapidly, and how this change is bringing the dream of AGI closer than ever.

In just a few years, AGI went from a curious idea that drew laughs to a milestone some experts believe could be reached within years. Sam Altman has even suggested it might happen in 2028. We'll also look at the compromises OpenAI has made in pursuing this dream.

Initially, the company promised to share its research widely and not be corrupted by the profit motive. But when their technology began to advance, and it looked like they might gain great power, they made a U-turn. This critical moment then turned into a power struggle within OpenAI, with Sam Altman taking the reins.

Let's start in 2015, when OpenAI had just been founded with a $1 billion commitment from Elon Musk, along with funds from other donors. It was a small, fledgling research laboratory.

In the early days, Sam and Elon weren't around very often. At the time, Sam was actually still running the startup accelerator Y Combinator, but he was starting to position himself as a thought leader in the AI field. In particular, he was talking about AI doomsday scenarios.

In 2015, he declared in a blog that the development of superhuman machine intelligence could be the greatest threat to humanity’s continued existence. He also wrote that AI could destroy every human being in the universe. In the same year, he mentioned at a technology event that OpenAI was about to be established.

In fact, I just agreed to fund a company doing AI safety research. You know, I think AI will probably cause the end of the world. But in the meantime, there will be some great companies using advanced machine learning techniques.

I want to dwell on this quote. He says AI could kill us all, while asking us to trust his conclusions as an expert. Yet he is also glib about the money to be made along the way.

In the early days of OpenAI, Sam and Elon were not often involved in day-to-day work. They went out to talk to people, recruit talent, and speak with reporters. They came in about once a week for progress updates. On those days, Sam would pop into a conversation and then leave.

He strikes me as very smart, sharp, and very efficient. When the conversation is over, it's over.

This is Pieter Abbeel, a researcher who worked at OpenAI during its first two years. In its early days, OpenAI looked like a typical startup, he said. For a while they didn't even have an office, meeting instead at one of the co-founders' homes.

We started working in Greg Brockman's apartment in San Francisco's Mission District in late 2015 and early 2016. We basically sat on the couch, at the kitchen counter, and on the bed; that's where the work happened. It's crazy to imagine that such a huge project started there. We had 20 of the best AI researchers in the world focused on accomplishing something that had never been done before.

In the absence of Sam and Elon, the main leaders were two men who were not yet famous but would go on to play huge roles.

One is Greg, whose apartment served as their office; the other is Ilya, the AI scientist we heard at the beginning of the episode. You can think of Greg as the workaholic in charge of business operations, and Ilya as the AI genius.

Together they ran OpenAI. Pieter remembers walking the streets of San Francisco with Ilya every week, talking about the big picture and asking: are we solving the right problem?

I think he saw the potential of AI earlier and more clearly than anyone else. He was more optimistic than the others and would come up with analogies. For example, a neural network is just a computer program, just a circuit, we just program it in a different way.

Meanwhile, Greg kept working hard.

Greg is the kind of person who puts his heart and soul into his work. He could keep working and working. I've seen people like this, but very few.

Even after OpenAI moved out of Greg's apartment, he still pretty much lived in the office. One former employee said he was still working at his desk when they arrived at work in the morning and was still typing away at his keyboard when they left in the evening.

When Greg got married a few years later, he even held a secular ceremony in the office, decorated with a large floral backdrop whose flowers were shaped like the OpenAI logo. The ring was carried by a robotic hand, and the officiant was Ilya.

When Greg and Ilya joined OpenAI, they didn't need the money. Ilya had sold a company to Google, and Greg held a large stake in Stripe, a company worth tens of billions of dollars.

In Silicon Valley, people usually start startups because they think they can build a lucrative business. But OpenAI was founded as a non-profit. Greg and Ilya were drawn by the dream.

This is Reid Hoffman, one of OpenAI’s earliest donors:

There is no equity return for the initial team. They do it for humanity.

"This is for humanity," OpenAI has always said. Their website states that our mission is to ensure that AGI benefits all mankind.

Well, it's no secret that Silicon Valley loves grand mission statements. WeWork wanted to elevate the world's consciousness, but OpenAI's mission statement is even grander, with overtones of altruism.

When Sam talks about the company's work, he often discusses potential disasters. Here he talks to ABC's Rebecca Jarvis, his voice again serious as he positions himself as a thought leader in the field. So, what's the worst outcome?

Sam Altman: One thing I'm particularly concerned about is that these models could be used for large-scale disinformation. Now that they are getting better at writing computer code, I worry that these systems could be used for offensive cyberattacks.

Rebecca Jarvis: You make an important point, which is that the humans who control the machines now also have tremendous power.

Sam Altman: We are very concerned about authoritarian governments developing this technology.

Rebecca Jarvis: Putin once said that whoever wins this AI race will basically control humanity. Do you agree with this statement?

Sam Altman: That's a chilling statement indeed.

Sam says this thing is so valuable that the global superpowers will fight over it. Skeptics believe that if you make it sound like what you're working on is important, you'll attract more attention and funding. We will focus on this perspective in the next episode.

In OpenAI's early days, their plans to save humanity were vague. Their strategy was a bit scattered. Here's Pieter's recollection:

We looked at robotics, where we did some exploration; we looked at simulated robots, where we spent a lot of energy; and we looked at digital agents that could perform various tasks on the web, like booking flights. We also looked at video games.

OpenAI said one of its first goals would be to build a robot butler that could set and clear a table, like the robot maid from The Jetsons.

The company also built a robotic hand that could solve a Rubik's Cube single-handedly, and put a lot of effort into developing a bot that could play Dota 2, a hugely popular multiplayer online game. They believed the complexity of the game environment would help AI better navigate the real world.

Here's someone testing the bot:

This robot is awesome, much better than I expected.

These Dota bots even compete against professional players:

Get ready, the action begins! And the AI draws first blood…

A bot that can play Dota is technically impressive, but it doesn't look that way to the average person, and the commercial applications of these projects weren't immediately obvious. Here is a comment from a former employee:

We're doing random things and seeing what happens. Sometimes it feels like there's a big gap between what was built and what was imagined. People program robots to play video games during the day, and then they sit around the table at lunch talking about saving humanity.

A common view in the field of AI is that to make something powerful, you sometimes need to start with something trivial. Video games and robot maids will pave the way for self-driving cars and cancer-curing AI.

Within OpenAI, they sometimes likened themselves to the Manhattan Project, the team responsible for building the first atomic bomb, and they meant it as a good thing: ambitious and important. This is what another former employee told me:

It's an arms race, they all want to make the first AGI, and they think they can do it best. I don't see much fear about AI itself, I just see excitement about building AI.

In 2015, AI looked very different than it does today; it was weaker and harder to train. The big breakthrough of the era was an AI that could beat the world's best players at Go, a complex strategy board game from China. But that AI could only play Go; it couldn't do anything else.

This is Oren Etzioni, a computer science professor and former CEO of the Allen Institute for AI:

These systems are narrow-domain and very targeted. The system that plays Go can't even play chess, let alone cross the street or understand a language. And a system that very accurately predicts whether airfares will rise or fall cannot handle text at all. So basically, every time there was a new application, you needed to train a new system. That took a long time and required a lot of annotated data.

But then, AI technology made a major breakthrough. In 2017, a team of researchers from Google Brain published a paper called "Attention Is All You Need." In it, they described a new AI architecture called the Transformer.
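
For the technically curious, the heart of that paper is a single operation called scaled dot-product attention, reproduced here as it appears in the paper:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

Here $Q$, $K$, and $V$ (queries, keys, and values) are matrices computed from the input, and $d_k$ is the dimension of the keys. Intuitively, every token scores its relevance to every other token and pulls in information accordingly, and because this is just matrix arithmetic, it can be computed in parallel over huge amounts of text.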

The Transformer addressed something very important for its time. Until then, AI systems needed very specific input data, and each piece of data had to be labeled: this is correct, this is wrong; this is spam, this is not spam; this is cancer, this is not cancer…

But the Transformer allows AI to ingest messy, unlabeled data. In fact, it can do this more efficiently than expected, using less computing power.

Now, these Transformer-based models can learn on their own. It's a bit like teaching a child to read: you used to have to hire a tutor to sit there with flash cards, but now you can send the child to the library alone, and they'll come out knowing how to read and write.
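
To make "learning without labels" concrete, here is a minimal, purely illustrative sketch: a toy bigram counter, not a Transformer, with a made-up corpus. The point it demonstrates is that raw text supplies its own supervision, because every word serves as the training target for the word before it:

```python
from collections import Counter, defaultdict

# Toy "training data": raw, unlabeled text. No human annotation required.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Self-supervision: each word is the training target for the word before it.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("sat"))  # -> "on", learned from raw text alone
print(predict_next("the"))  # -> "cat" (ties broken by first occurrence)
```

A GPT-style model does the same thing at vastly greater scale: instead of counting word pairs, it trains a neural network to predict the next token across billions of sentences.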

It's a surprising and painful reality, as one investor described it to me: the best AI comes not from the most specialized training techniques, but from whoever has the most data.

Pieter, an early OpenAI employee, says Ilya immediately saw its potential:

Ilya reacted with certainty, saying this was a major breakthrough that we needed to pay attention to.

Even in the early days of OpenAI, Ilya always had a hunch that major advances in AI would come not from a particular tweak or new invention, but from more data, pouring more and more fuel into the engine. Now, Ilya has research to support his hypothesis.

This is what Oren said again:

OpenAI's Ilya is known for his view that data and data volume, if we scale it up significantly, will get us where we need to be. This is not a common belief, and many very smart and famous people, including myself, did not foresee this.

Thanks to Ilya's push, OpenAI began experimenting with the Transformer. They were among the first companies to do so. They built models under the now-familiar acronym GPT, short for Generative Pre-trained Transformer.

In particular, they began experimenting with how the Transformer performed on text, because they could feed in essentially any book, newspaper article, Reddit post, or blog. Humans have spent a great deal of time writing, and all that text now had a new purpose: training data.

The internet was not created to train AI, but ultimately, that may be its legacy. OpenAI’s models are getting better at generating text, and not just in one knowledge domain.

The amazing thing about GPT systems is that they are so broad; they are actually generalists. You can ask them about almost any topic, and they'll give you surprisingly good answers. That's because they've been trained on effectively the entire corpus of text available to humanity: billions of sentences, all the documents, memos, trivia, and Harry Potter fan fiction ever written, all fed into the mill. Once you've read all that, you become very versatile. So for the first time, we have a system that you can ask any question and it will give you a surprisingly intelligent answer. We've gone from narrow-domain AI to general, broad AI.

By feeding large amounts of text into the model, OpenAI found they could create an AI that was much better at generating convincing responses. In fact, at some point, they started to worry that it might be too good.

When OpenAI's language model GPT-2 was first completed, their initial decision was not to share the model widely, because they were concerned it could be dangerous. This is Pieter again; by then he had left OpenAI and started his own company, but he remembers the day of the release:

Clearly, it understood language much better than anything that came before it. Its release did come with a lot of, maybe good marketing, maybe caution, or both. It was called "too dangerous to release," so I think this was the first project where OpenAI decided not to make part of its work public. Suddenly, the question became: what if something is so powerful that people might abuse it in ways we can't control?

Once OpenAI had a really powerful product, they began to rethink their openness.

For the name OpenAI, Open really means that everything will be open source and anyone can build on it.

"Openness" is a key component of the company's brand. When they launched, Sam told reporter Steven Levy: "It will be open source and anyone can use it."

He also told reporters that their AI would "be freely owned by the world." Open source, broadly speaking, means that the source code is public and anyone can modify and redistribute it. But the company soon began to walk back these commitments.

Obviously, over time this became something that wasn't entirely open source, or even open source at all. I mean, they're definitely not going to open source their work now.

This open source spirit seems to be slowly disappearing. The following is a speech Sam gave in Munich in 2023:

I'm curious, if we keep the trajectory of GPT-2, GPT-3, GPT-4, and then to GPT-5 and GPT-6, how many people want us to open source GPT-6 the day we complete training?

Wow, okay, we're not going to do that, but it's fun.

Honestly, Sam comes across as rather arrogant here. He knew that OpenAI initially promised to be open source. Now he's surveying his audience on what they think of the open source model and then immediately dismissing their responses.

Over the years, Sam subtly changed the meaning of "open," becoming more ambiguous. Here’s what he said at a venture capital firm:

This is what we call OpenAI, and we want this technology to be open to everyone.

He said it so naturally, as if it was obvious what openness meant. But his definition seems to me so vague as to be essentially meaningless. I mean Google search is open to everyone. It seems like OpenAI is happy to keep people guessing about what they mean by openness.

Just a few months after the company was founded, Ilya wrote in an internal email:

As we get closer to building AI, it makes sense to start becoming less open. The "Open" in OpenAI means that everyone should benefit from the results of AI, but there is no need to share the science, although in the short and possibly medium term, sharing everything is definitely the right strategy for recruiting talent.

This email is very interesting because it shows that from the beginning, OpenAI planned not to share its science freely. They didn't want to be open source as they claimed, but they wanted to maintain an open public image, because it gave them a recruiting advantage.

The pitch, in effect: don't build AI for the bad guys; come work for us. When we asked about their changing definition of openness, a company spokesperson said, "Our mission has remained the same, but our strategy has had to change."

Okay, let's go back to 2017. Two years after the company was founded, a new problem emerged at OpenAI: a power struggle.

Elon wanted to take over; he's the kind of guy who's used to being in charge. According to OpenAI, he wanted to fold the company into Tesla and become CEO with a majority stake. If things didn't go his way, he would quit.

As with all things Elon, over time he wants to exert more control and make sure the company is run in the image and manner he wants it to be. So that creates tension.

This is my colleague Ashlee Vance, who wrote a biography of Elon.

Elon's preferred role in anything is to be the CEO and the dominant force in controlling day-to-day operations.

In fact, Greg Brockman and Ilya Sutskever, who were responsible for day-to-day operations, were very wary. Elon was reckless, impulsive, and difficult to get along with, but he was also their main source of funding.

He had pledged nearly $1 billion. OpenAI had other donors, but none came close to that amount. One option was to stay with Elon and keep his capital.

Employees were not entirely sold on that idea and had some concerns. So they reached a decision point: stay with Elon, or leave him? In recent years, people have almost always put up with Elon and his demands.

The other option was to split from Elon and find other sources of funding. And who do you know who might be good at raising money? Sam Altman.

The crux was that Elon wanted the company to go one way and the employees wanted to go another, so Sam was chosen to lead OpenAI forward.

In those first few years, Sam was not much involved in OpenAI; he was actually still president of Y Combinator. But in this power struggle, Sam beat Elon, and that's a big deal. Elon was more famous and more experienced, and above all, he hates losing.

In a conflict, Elon's instinct is to win at all costs, and he has rarely lost a battle in recent years. Typically, if he's outside a company, he'll sue until it gives in; if he's inside, he'll push until he gets what he wants. It's hard to find a recent example where he didn't get the outcome he wanted. So the turmoil inside the company must have been serious for him to lose this one.

So in 2018, Elon stormed off and took his capital with him. A few years later, he would actually sue Sam and OpenAI, claiming they had reneged on their original non-profit and open source promises.

Shortly after Elon left, Sam became OpenAI's CEO. The company had never had a CEO before; the power struggle cemented Sam's new dominance. Remember what the founder of YC once said? "Sam is very good at being powerful."

Sam's interest in OpenAI kept growing, and his attention began to shift away from managing YC. Granted, running a world-renowned startup accelerator is an influential position.

But the race for AGI is heating up, and if OpenAI succeeds in creating AGI before anyone else does, it's hard to imagine a position with more power than its CEO. However, Sam didn't give up his job at YC right away.

The situation made some people at the accelerator unhappy. They felt Sam was too distracted, pushed too fast, and put his own interests ahead of YC's. That earned him some enemies within his own team.

In fact, according to one source, Sam's mentor Paul Graham, the man who had originally put him in the job, flew in from the UK to personally ask Sam to resign. Paul had lost faith in his former protégé, but he didn't want a public drama either. So Sam was quietly persuaded to step down, and he now focused solely on OpenAI.

Sam had a big goal: raising the funds to train OpenAI's models. They required a lot of computing power, and computing power is expensive. Sam tried to raise money but made little progress. Here's what he said on the Lex Fridman Podcast:

We started as a non-profit organization. We figured out early on that we needed a lot more money than we could raise as a nonprofit to do what we needed to do. We tried and failed several times to raise money as a non-profit. We don't see a path forward there. So we need some of the benefits of capitalism, but not too much. I remember someone saying that as a nonprofit, things wouldn't go far enough; as a for-profit, things would go too far.

They needed something in between, and to be honest, Sam wasn't too sentimental about the non-profit ideal. He pieced something together, essentially creating a for-profit entity attached to the original non-profit.

The for-profit entity can do everything a regular company does, such as raising investment and granting equity to employees, but the returns to its investors are capped, whereas at ordinary companies they are unlimited.
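
As a minimal illustration of how such a cap works: the 100x figure below matches what OpenAI publicly described for its earliest investors, while the dollar amounts are invented for the example.

```python
def investor_payout(invested: float, gross_return: float,
                    cap_multiple: float = 100.0) -> float:
    """Payout under a capped-profit structure: at most cap_multiple times the investment."""
    return min(gross_return, invested * cap_multiple)

# A $1M stake that would gross $500M at an ordinary startup is capped at $100M...
print(investor_payout(1_000_000, 500_000_000))  # 100000000.0
# ...while a return under the cap is paid in full.
print(investor_payout(1_000_000, 50_000_000))   # 50000000.0
```

Anything above the cap would, in OpenAI's telling, flow back to the non-profit.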

This corporate structure was cobbled together: OpenAI is now essentially a for-profit entity controlled by a non-profit board of directors, which sounds a little shaky. OpenAI spent years insisting it would be a non-profit, and now it had found this for-profit workaround.

After the change, a lot of people were unhappy. But OpenAI was focused on its end goal: building AGI, and raising the funds needed to get there.

Then, in 2019, Sam the dealmaker closed a big one: he raised $1 billion from Microsoft. Here is Microsoft CEO Satya Nadella after the agreement was signed:

Hi, I'm here with Sam Altman, CEO of OpenAI. Today we are very excited to announce our strategic partnership with OpenAI.

Importantly, Microsoft has enormous raw computing power that OpenAI could now use. Remember, OpenAI was originally conceived as the "antidote" to Google. They presented themselves as fundamentally different from the profit-hungry tech giants, and then, overnight, they became intimately tied to a tech company worth around a trillion dollars.

OpenAI is now in many ways an extension of Microsoft, which is a significant shift. Reid Hoffman, who was on the boards of both OpenAI and Microsoft at the time, didn't see this as an abandonment of OpenAI's original promise.

There were concerns about whether this would undermine the mission. But I think it's a modern naivety to believe that corporations equal bad or corrupt, because corporations engage with people and society in so many ways. They serve customers, employ people, have shareholders, and exist within society.

Reid's point is that wanting to make money doesn't mean you're bad, which is on-brand for a billionaire venture capitalist.

I have a feeling the Microsoft deal may have been the most practical way for OpenAI to continue its mission of creating safe AGI for all of humanity. But it also highlights a defining trait: OpenAI tends to walk back its promises when it's convenient.

Beneath all this, people both inside and outside the company began to question Sam's integrity.

That would open a deep rift.

See you in the next episode of "Foundering"!

