Meta plans to invest up to $65 billion in 2025 to grow the total number of GPUs available within the company to 1.3 million. The announcement stands in stark contrast to recent AI developments out of China.
Meta CEO Mark Zuckerberg said on Threads, his company's X competitor, that Meta plans to invest $60 billion to $65 billion in AI in 2025. That budget is mainly earmarked for new accelerators: by the end of the year, Meta wants to have more than 1.3 million GPUs at its disposal. It is unclear how many the company already has.
Meta will use that infrastructure to train its Llama 4 model. Zuckerberg predicts that Llama 4 will emerge as the best model on the market, although it is unclear how he measures that, or why he assumes Llama 4 will outperform models from competitors such as OpenAI.
Useful investment
The announcement comes at roughly the same time as notable AI news from China, where researchers have made the Deepseek R1 model available to users. Deepseek R1 is said to be competitive with existing LLMs, but was trained differently. The Chinese team allegedly spent barely six million dollars on training the model, or about five percent of what major U.S. companies such as Meta and OpenAI invest in their LLMs.
It is currently unclear how competitive Deepseek R1 really is. If the model does indeed perform as claimed, it calls into question the need for billion-dollar investments in energy-hungry AI infrastructure.