xAI raises US$6 billion in financing, as Musk plans to build a “Gigafactory of Compute”

xAI has officially announced that it has closed a US$6 billion Series B financing round, lifting its valuation to approximately US$18 billion.

Major investors in this round include Valor Equity Partners (led by Antonio Gracias, an early investor in Tesla and SpaceX), Dubai-based Vy Capital, US venture firms Andreessen Horowitz and Sequoia Capital, asset manager Fidelity Management & Research Company, and Saudi Arabia's Kingdom Holding.

xAI's best-known product is the chatbot Grok. Since its official release in November 2023, Grok has been racing to catch up with, and ultimately surpass, OpenAI.

Over the past year, Grok has advanced by leaps and bounds. In March of this year, Musk open-sourced Grok-1, a 314-billion-parameter mixture-of-experts model; at the end of March, xAI released Grok-1.5 with a 128k-token context window; and in April, xAI launched its first multimodal large model, Grok-1.5V.

Currently, xAI is using 20,000 GPUs to train Grok 2.0. The latest version of Grok can process text and images and recognize real-world objects, and future Grok models are expected to handle audio and video as well.

Musk has also publicly stated that xAI will use up to 100,000 GPUs to train and run its next version of Grok.

According to a recent report from The Information, Musk told investors in a May presentation that xAI plans to build a "Gigafactory of Compute" to supply the computing power for the next version of Grok.

xAI’s planned “Gigafactory of Compute” is essentially a supercomputer, similar to the GPU clusters Meta has built for training AI models. A cluster is a large number of server chips in a single data center, connected by high-speed cabling so that they can run complex computations in parallel and efficiently. Generally speaking, a cluster with more chips and more computing power can produce more capable AI.

xAI plans to connect 100,000 H100 GPUs in the "Gigafactory of Compute" and may partner with Oracle to do so. Earlier this month, xAI was reportedly discussing cloud server arrangements with Oracle executives and planning to spend US$10 billion renting Oracle servers over the next few years.

Currently, xAI rents servers with approximately 16,000 H100 chips from Oracle, making it Oracle's largest GPU customer.

Besides computing power, another key factor in siting an AI data center is power supply. By one estimate, a data center with 100,000 GPUs would require around 100 megawatts of dedicated power.
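As a back-of-envelope check on that figure, one can start from the H100's specified board power and an assumed facility overhead. The overhead multiplier below is an illustrative assumption, not a number from xAI or the report:

```python
# Rough power estimate for a 100,000-GPU cluster.
# GPU_TDP_W is the H100 SXM spec; OVERHEAD (host CPUs, networking,
# cooling losses) is an illustrative, PUE-like assumption.
GPU_COUNT = 100_000
GPU_TDP_W = 700
OVERHEAD = 1.43  # assumed multiplier for non-GPU power draw

total_mw = GPU_COUNT * GPU_TDP_W * OVERHEAD / 1e6
print(f"{total_mw:.0f} MW")  # prints "100 MW"
```

At roughly 1 kW per GPU all-in, the 100-megawatt figure quoted above is plausible.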

xAI’s offices are in the San Francisco Bay Area, but electricity supply will be a decisive factor in siting the new “Gigafactory of Compute”, and further negotiations with local governments may be required.

Musk told investors that xAI still lags behind its competitors at this stage. According to sources, OpenAI and Microsoft may have clusters of the size Musk envisions by the end of this year or early next year.

OpenAI and Microsoft are also discussing a $100 billion supercomputer containing millions of Nvidia GPUs, which would be several times the size of xAI's planned cluster.

In March of this year, Musk was quoted in an Nvidia press release calling Nvidia's AI hardware "the best."

Musk also said on an investor call in April that Tesla has 35,000 Nvidia H100s training its autonomous-driving systems and plans to more than double that number by the end of the year.

Nvidia CFO Colette Kress has named xAI among the customers that, along with OpenAI, Amazon, Google, and others, will get priority access to Blackwell, Nvidia's next-generation flagship architecture for generative AI.

Although the "Gigafactory of Compute" will have to queue up to buy Nvidia chips, Musk is also continuing to develop Tesla's in-house Dojo supercomputer.

At Tesla AI Day 2022, Tesla demonstrated its self-developed Dojo computing platform. Dojo is built from "Training Tiles", each containing 25 D1 chips; per the figures presented, a tile aggregates 54 PFLOPS of compute with a bisection bandwidth of 13.4 TB/s.

Dojo can also be deployed in cabinet form. With its full hardware stack, a Dojo cabinet can provide 1.1 EFLOPS of compute, 13 TB of high-bandwidth memory, and 1.3 TB of high-speed memory. Four Dojo cabinets can deliver computing power equivalent to 72 GPU racks.
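Taking the quoted cabinet figures at face value, the implied aggregate numbers work out as follows; the per-rack comparison is derived arithmetic for illustration, not an official Tesla or xAI specification:

```python
# Arithmetic implied by the quoted Dojo figures (derived for illustration).
CABINET_EFLOPS = 1.1      # compute per Dojo cabinet, as quoted
CABINETS = 4
GPU_RACKS_EQUIV = 72      # "four cabinets ~ 72 GPU racks", as quoted

total_eflops = CABINET_EFLOPS * CABINETS
per_rack_pflops = total_eflops * 1000 / GPU_RACKS_EQUIV
print(f"{total_eflops:.1f} EFLOPS across {CABINETS} cabinets")   # 4.4 EFLOPS
print(f"~{per_rack_pflops:.0f} PFLOPS per equivalent GPU rack")  # ~61 PFLOPS
```

In other words, the comparison implies each conventional GPU rack being replaced delivers on the order of tens of PFLOPS.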

However, with Dojo's stability and production volume not yet assured, xAI currently has little choice but to train its AI on Nvidia GPUs.

xAI’s newly raised Series B financing has eased its financial burden to some extent. But Musk himself has admitted that staying competitive in the AI race will require spending at least several billion dollars every year.

Whether it is Neuralink, which implants microchips into the human brain, the humanoid robot Optimus, or the Grok AI assistant, the projects Musk is actively pushing all point, more or less, toward one ultimate goal: artificial general intelligence (AGI).

xAI still has a long way to go and is working hard to be a game changer. For now, though, Nvidia may well be the biggest winner.


Ai Faner