DeepSeek is an artificial intelligence (AI) research lab based in China. It was spun out of the country's most successful hedge fund, High-Flyer, in 2023. The fund had been using AI for years to develop trading algorithms.
The DeepSeek team found a way to develop powerful large language models (LLMs) for a tiny fraction of the money being spent by America's leading AI companies. That breakthrough triggered a panic in the U.S. stock market on Monday as investors considered the impact on chip suppliers like Nvidia (NASDAQ: NVDA) and prominent developers like OpenAI (which is backed by Microsoft).
This could be a transformational moment in the AI race. Not only are DeepSeek's methods potentially valid, but there is at least one other Chinese AI start-up that seems to have produced similar results. Here's what it could mean for Nvidia and OpenAI.
Ilya Sutskever is one of the co-founders of America's leading AI developer, OpenAI. He once believed data and computing power were the key ingredients to training the best AI models and producing the smartest AI software. This is known as pre-training scaling, and it meant the developers with the most financial resources could build the best data centers, buy the best chips, and win the AI race.
But in November 2024, he told Reuters the results from using that method have plateaued. OpenAI has since developed models with better "reasoning" skills, meaning they spend more time "thinking" to produce the best responses from the ChatGPT chatbot. This is known as test-time scaling, and models that use it (OpenAI's o1 and o3) are better at problem solving, bringing AI closer to human-level performance on academic tasks.
It cost OpenAI around $20 billion to reach this point (money it mostly raised from investors since 2015). But DeepSeek recently released its V3 model, which was created for just $5.6 million, and yet it's competitive with OpenAI's GPT-4o models across several performance benchmarks.
The U.S. government banned Nvidia from selling its most powerful graphics processing units (GPUs) to Chinese AI companies, so DeepSeek developed V3 using less powerful versions like the H800, an export-compliant variant of the flagship H100 with reduced interconnect bandwidth. To compensate for the weaker hardware, DeepSeek had to innovate on the software side by creating more efficient algorithms and data input methods.
The company also used a technique called distillation to create V3. It involves training a smaller model on the outputs of a larger, already-successful model (such as OpenAI's o1) so the smaller one reproduces much of its behavior. This strategy supercharges the speed with which an AI company can train a competitive LLM, and it could lead to commoditization. In other words, there could be hundreds of LLMs on the market in the future with similar capabilities, and they will mostly be interchangeable.
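To illustrate the general idea: distillation trains a small "student" model to match the probability distribution a large "teacher" model assigns to each next token. The sketch below uses toy PyTorch models with made-up sizes; it shows the technique in general, not DeepSeek's actual recipe.

```python
# Minimal knowledge-distillation sketch (toy models, hypothetical sizes).
# The "teacher" stands in for a large, already-trained model; the "student"
# is the smaller model being trained to imitate the teacher's outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN_T, HIDDEN_S = 1000, 512, 128  # made-up sizes for illustration

teacher = nn.Sequential(nn.Embedding(VOCAB, HIDDEN_T), nn.Linear(HIDDEN_T, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, HIDDEN_S), nn.Linear(HIDDEN_S, VOCAB))
teacher.eval()  # only the student's weights are updated

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
temperature = 2.0  # softens the teacher's distribution so it carries more signal

tokens = torch.randint(0, VOCAB, (32, 16))  # dummy batch of token IDs
with torch.no_grad():
    teacher_logits = teacher(tokens)
student_logits = student(tokens)

# Train the student to match the teacher's softened next-token distribution.
loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature**2
loss.backward()
optimizer.step()
```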
That could be a real threat to OpenAI and even Nvidia. OpenAI could lose the advantage it established thanks to its considerable financial resources, and since less LLM training will be required, Nvidia could suffer from reduced demand for GPUs.
Training is only one side of the equation. There is also inference, the process by which the AI model turns prompts into responses. And as with any business, lower overall costs can translate into lower prices for customers.
As of this writing, DeepSeek charges just $0.14 per 1 million input tokens, which is 94% cheaper than OpenAI's rate of $2.50 per 1 million input tokens (input tokens are the chunks of text, roughly words or word fragments, that make up a user's prompt).
But DeepSeek isn't the only AI lab that seems to have cracked this code. Kai-Fu Lee, who used to run Alphabet's Google operations in China, launched an AI start-up called 01.ai. According to its website, its Yi models perform well against competing models from DeepSeek. The company charges just $0.10 per 1 million input tokens, which is even cheaper than its Chinese rival -- and substantially cheaper than OpenAI.
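A quick back-of-the-envelope check of those quoted rates (a hypothetical 1-million-token workload, prices as stated above) shows how wide the gap is:

```python
# Compare the input-token rates quoted above (per 1 million input tokens,
# as of this writing) against OpenAI's rate.
openai_rate = 2.50     # OpenAI
deepseek_rate = 0.14   # DeepSeek
yi_rate = 0.10         # 01.ai's Yi models

for name, rate in [("DeepSeek", deepseek_rate), ("01.ai", yi_rate)]:
    savings = (openai_rate - rate) / openai_rate
    print(f"{name}: ${rate:.2f} per 1M input tokens, {savings:.0%} cheaper than OpenAI")
# DeepSeek: $0.14 per 1M input tokens, 94% cheaper than OpenAI
# 01.ai: $0.10 per 1M input tokens, 96% cheaper than OpenAI
```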
I think OpenAI is in trouble if LLMs continue trending toward commoditization. Plus, its models are closed-source, so developers are locked into the company's ecosystem, which won't be a desirable feature once competitive open-source LLMs are widely available.
By contrast, DeepSeek uses an open-source approach that gives developers more freedom to tweak its models as necessary to build AI software. Developers can also download open-source models locally so they never have to share their sensitive data with the creator.
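For example, pulling open-source weights down and running them entirely on your own hardware takes only a few lines with a library like Hugging Face's transformers. The model ID below is assumed for illustration, and a full-size LLM needs far more memory than a typical workstation:

```python
# Sketch of running a downloaded open-weight model locally, so prompts never
# leave your own machine. Substitute whichever open-weight checkpoint you use;
# the repo name below is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3"  # assumed Hugging Face repo name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Summarize this confidential document:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```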
But while OpenAI faces uncertainty, Nvidia might actually benefit from plunging inference costs, which could offset some of the lost GPU demand on the training side.
Think about the progression of the cellphone. When we had to pay a fee every time we sent a text message or browsed the internet, we didn't use our phones as frequently as we do now. Unlimited plans with uncapped calls, texts, and data enable us to spend hours on our phone each day for a nominal monthly fee -- simply put, when costs came down, usage skyrocketed.
AI could follow a similar path, and as usage increases, companies will need more of Nvidia's GPUs to cover demand for inference. That will be especially true as reasoning capabilities evolve, because more thinking requires substantially more computational power.
The short-term picture is a little less certain. Will some of Nvidia's customers reduce their data center spending as they optimize their training methods like DeepSeek did? It's hard to say, but a new quarterly earnings season just started, so we should receive an update from almost every one of them over the next few weeks.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Microsoft, and Nvidia. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.