Chinese AI firm DeepSeek has emerged as a potential challenger to U.S. AI companies, demonstrating breakthrough models that claim to offer performance comparable to leading offerings at a fraction of the cost. The company’s mobile app, released in early January, has lately topped the App Store charts across major markets including the U.S., UK, and China, but it hasn’t escaped doubts about whether its claims are true.
Founded in 2023 by Liang Wenfeng, the former chief of AI-driven quant hedge fund High-Flyer, DeepSeek releases its models as open source; they incorporate a reasoning feature that articulates the model's thinking before it provides a response.
Wall Street’s reactions have been mixed. While brokerage firm Jefferies warns that DeepSeek’s efficient approach “punctures some of the capex euphoria” following recent spending commitments from Meta and Microsoft — each exceeding $60 billion this year — Citi is questioning whether such results were actually achieved without advanced GPUs.
Goldman Sachs sees broader implications, suggesting the development could reshape competition between established tech giants and startups by lowering barriers to entry.
Here’s how Wall Street analysts are reacting to DeepSeek, in their own words (emphasis ours):
Jefferies
DeepSeek's power implications for AI training puncture some of the capex euphoria that followed major commitments from Stargate and Meta last week. With DeepSeek delivering performance comparable to GPT-4o for a fraction of the computing power, there are potential negative implications for the builders, as pressure on AI players to justify ever-increasing capex plans could ultimately lead to a lower trajectory for data center revenue and profit growth.
If smaller models can work well, it is potentially positive for smartphones. We are bearish on AI smartphones, as AI has gained no traction with consumers. More hardware upgrades (advanced packaging + fast DRAM) are needed to run bigger models on the phone, which will raise costs. AAPL's model is in fact based on MoE, but 3bn parameters are still too small to make the services useful to consumers. Hence DeepSeek's success offers some hope, but there is no impact on the near-term outlook for AI smartphones.
China is the only market that pursues LLM efficiency owing to chip constraints. Trump/Musk likely recognize that the risk of further restrictions is forcing China to innovate faster. Therefore, we think it likely Trump will relax the AI Diffusion policy.
Citi
While DeepSeek's achievement could be groundbreaking, we question the notion that its feats were done without the use of advanced GPUs to fine-tune it and/or build the underlying LLMs the final model is based on via distillation. While US companies' dominance of the most advanced AI models could potentially be challenged, we estimate that in an inevitably more restrictive environment, US access to more advanced chips is an advantage. Thus, we don't expect leading AI companies to move away from more advanced GPUs, which provide more attractive $/TFLOPs at scale. We see the recent AI capex announcements, like Stargate, as a nod to the need for advanced chips.
Bernstein
In short, we believe that 1) DeepSeek DID NOT “build OpenAI for $5M”; 2) the models look fantastic but we don’t think they are miracles; and 3) the resulting Twitterverse panic over the weekend seems overblown.
Our own initial reaction does not include panic (far from it). If we acknowledge that DeepSeek may have reduced costs of achieving equivalent model performance by, say, 10x, we also note that current model cost trajectories are increasing by about that much every year anyway (the infamous “scaling laws…”) which can’t continue forever. In that context, we NEED innovations like this (MoE, distillation, mixed precision etc) if AI is to continue progressing. And for those looking for AI adoption, as semi analysts we are firm believers in the Jevons paradox (i.e. that efficiency gains generate a net increase in demand), and believe any new compute capacity unlocked is far more likely to get absorbed due to usage and demand increase vs impacting long term spending outlook at this point, as we do not believe compute needs are anywhere close to reaching their limit in AI. It also seems like a stretch to think the innovations being deployed by DeepSeek are completely unknown by the vast number of top tier AI researchers at the world’s other numerous AI labs (frankly we don’t know what the large closed labs have been using to develop and deploy their own models, but we just can’t believe that they have not considered or even perhaps used similar strategies themselves).
Morgan Stanley
We have not confirmed the veracity of these reports, but if they are accurate, and advanced LLMs can indeed be developed for a fraction of previous investment, we could see generative AI eventually run on smaller and smaller computers (downsizing from supercomputers to workstations, office computers, and finally personal computers), and the SPE industry could benefit from the accompanying increase in demand for related products (chips and SPE) as demand for generative AI spreads.
Goldman Sachs
With the latest developments, we also see 1) potential competition between capital-rich internet giants vs. start-ups, given lowering barriers to entry, especially with recent new models developed at a fraction of the cost of existing ones; 2) from training to more inferencing, with increased emphasis on post-training (including reasoning capabilities and reinforcement capabilities) that requires significantly lower computational resources vs. pre-training; and 3) the potential for further global expansion for Chinese players, given their performance and cost/price competitiveness.
We continue to expect the race for AI application/AI agents to continue in China, especially amongst To-C applications, where China companies have been pioneers in mobile applications in the internet era, e.g., Tencent’s creation of the Weixin (WeChat) super-app. Amongst To-C applications, ByteDance has been leading the way by launching 32 AI applications over the past year. Amongst them, Doubao has been the most popular AI Chatbot thus far in China with the highest MAU (c.70mn), which has recently been upgraded with its Doubao 1.5 Pro model. We believe incremental revenue streams (subscription, advertising) and eventual/sustainable path to monetization/positive unit economics amongst applications/agents will be key.
For the infrastructure layer, investor focus has centered around whether there will be a near-term mismatch between market expectations on AI capex and computing demand, in the event of significant improvements in cost/model computing efficiencies. For Chinese cloud/data center players, we continue to believe the focus for 2025 will center around chip availability and the ability of CSP (cloud service providers) to deliver improving revenue contribution from AI-driven cloud revenue growth, and beyond infrastructure/GPU renting, how AI workloads & AI related services could contribute to growth and margins going forward. We remain positive on long-term AI computing demand growth as a further lowering of computing/training/inference costs could drive higher AI adoption. See also Theme #5 of our key themes report for our base/bear scenarios for BBAT capex estimates depending on chip availability, where we expect aggregate capex growth of BBAT to continue in 2025E in our base case (GSe: +38% yoy) albeit at a slightly more moderate pace vs. a strong 2024 (GSe: +61% yoy), driven by ongoing investment into AI infrastructure.
J.P. Morgan
Above all, much is made of DeepSeek's research papers, and of their models' efficiency. It's unclear to what extent DeepSeek is leveraging High-Flyer's ~50k Hopper GPUs (similar in size to the cluster on which OpenAI is believed to be training GPT-5), but what seems likely is that they're dramatically reducing costs (inference costs for their V2 model, for example, are claimed to be 1/7 that of GPT-4 Turbo). Their subversive (though not new) claim – which started to hit the US AI names this week – is that "more investments do not equal more innovation." Liang: "Right now I don't see any new approaches, but big firms do not have a clear upper hand. Big firms have existing customers, but their cash-flow businesses are also their burden, and this makes them vulnerable to disruption at any time." And when asked about the fact that GPT-5 has still not been released: "OpenAI is not a god, they won't necessarily always be at the forefront."
UBS
Throughout 2024, the first year we saw massive AI training workloads in China, more than 80–90% of IDC demand was driven by AI training and concentrated in one to two hyperscaler customers, which translated into wholesale hyperscale IDC demand in relatively remote areas (as power-consuming AI training is sensitive to utility cost rather than user latency).
If AI training and inference costs are significantly lower, we would expect more end users to leverage AI to improve their business or develop new use cases, especially retail customers. Such IDC demand means more focus on location (as user latency is more important than utility cost), and thus greater pricing power for IDC operators that have abundant resources in tier 1 and satellite cities. Meanwhile, a more diversified customer portfolio would also imply greater pricing power.
We’ll update the story as more analysts react.