How Meta CEO Mark Zuckerberg plans to make money from open-source AI


Meta aims to dominate the infrastructure and developer community with its open-source products, similar to what Google has done with Android smartphones.

According to Zuckerberg, Meta’s strategy is to develop a common software infrastructure and make it available as open-source software. For AI, this infrastructure includes Meta’s Llama models and industry-standard tools such as PyTorch. Product-specific implementations remain proprietary.

Meta’s open-source strategy: Dominating AI infrastructure

On Meta's latest earnings call, Zuckerberg explained where he sees the potential benefits of the company's open-source strategy. Open-source models are generally more secure, efficient, and cost-effective to operate because they are continually vetted and improved by the community, Zuckerberg says.

Open-source software can also become an industry standard, making it easier to incorporate innovation into Meta’s products. Finally, the popularity of open source among developers and researchers could help Meta attract better talent.

Although Meta provides infrastructure such as the Llama models and PyTorch largely for free as open-source software, Zuckerberg does not expect this to significantly diminish the benefits Meta gains from AI.

Because Meta tends to have unique data and builds product-specific integrations, the company can lead the open-source ecosystem without giving up much product differentiation, he says.

“The short version is that open sourcing improves our models, and because there’s still significant work to turn our models into products, because there will be other open source models available anyway, we find that there are mostly advantages to being the open-source leader and it doesn’t remove differentiation from our products much anyway,” Zuckerberg says.

According to Zuckerberg, learning from unique data and feedback loops in Meta products is the next step in Meta’s AI strategy.

Zuckerberg sees growing demand for more computing

Zuckerberg expects AI development to remain computationally intensive. Historically, the compute required to train the most advanced language models has grown roughly tenfold each year. This trend may continue, although it is unclear to what extent, Zuckerberg says.
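To get a feel for what a tenfold-per-year trend implies, here is a minimal sketch of the compounding arithmetic. The starting compute budget and the time horizon are hypothetical placeholders, not figures from Meta or the earnings call:

```python
# Illustrative only: project training-compute growth under the roughly
# tenfold-per-year trend described in the article. All numbers are
# hypothetical assumptions, not Meta's actual figures.

def projected_compute(base_flops: float, years: int, growth: float = 10.0) -> float:
    """Compute budget after `years` of `growth`x-per-year scaling."""
    return base_flops * growth ** years

if __name__ == "__main__":
    base = 1e24  # hypothetical FLOPs to train a frontier model today
    for year in range(4):
        print(f"Year {year}: {projected_compute(base, year):.1e} FLOPs")
```

Under these assumptions, three years of the trend means a thousandfold increase in training compute, which is why Zuckerberg frames growing demand for computing capacity as a long-term planning question.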

