ARTIFICIAL INTELLIGENCE: DeepSeek AI in chip-town

By Venus Kohli | 6 min reading time


DeepSeek AI is like that friend who helps you write a book but then sells it to others for "research purposes". The latest AI model was developed amid the USA's export controls against China. Despite all the tensions, DeepSeek AI appears to outperform other existing generative AI models, at least according to the internet. But what about the semiconductor industry? How will it affect the USA-dominated chip business? Read our article to learn about the impact of DeepSeek AI on the global 'semiconducting scale'.

DeepSeek AI is an AI model based out of China. The hype around DeepSeek AI revolves around its lower training and computational costs.
(Source: 沈军 贡 - stock.adobe.com)

The sudden rise of DeepSeek AI

One day, everybody woke up to find the phrase "DeepSeek AI" trending across all social media platforms. Some of you might have googled it for more information and come across a wave of news articles. Almost overnight, DeepSeek AI became 2025's biggest blow to the key AI players in the market.

DeepSeek AI was created by the Chinese company Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., which develops open-source LLMs (Large Language Models). Simply put, DeepSeek AI is an AI model like GPT, Gemini, Llama, and many others. DeepSeek AI has existed for about 18 months, but its sudden popularity is due to the launch of its free app on January 20, 2025.

Cheaper or better?

Before the export controls on China took effect in 2022, the founder of DeepSeek AI stockpiled 10,000 NVIDIA A100 GPUs to train the AI model. In terms of performance, DeepSeek-R1 offers results comparable to other AI models, at much lower training cost and computing power.

The company claims it spent about USD 6 million on model training. By contrast, recent AI models have typically cost far more to train. OpenAI's GPT-3 reportedly cost around USD 2-3 million to train.

Google AI's PaLM took anywhere between USD 3 and 12 million in training costs. GPT-4 took a whopping USD 100 million. The DeepSeek-R1 model, which reportedly offers comparable (or even better) responses than GPT-4, took only USD 6 million, roughly 6 percent of the reported GPT-4 figure.

Relationship between AI and GPUs

Training AI and ML models requires high-speed processing, large datasets, complex mathematical computations, and iterative validation. High-performance chips, known as "AI semiconductor chips", enable large-scale data processing, parallel computing, and advanced mathematical operations with exceptional speed and efficiency.

AI chips are primarily GPUs (Graphics Processing Units) but can also include AI accelerators, TPUs (Tensor Processing Units), and FPGAs. All these AI chips are expensive and in high demand. As the generative AI industry has skyrocketed over the past two years, GPU usage has increased accordingly.
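As a rough illustration of why model training leans so heavily on GPUs, the following minimal PyTorch sketch shows a single training step split across every available GPU. It is not DeepSeek's actual training code; the model, batch size, and data are placeholders, and real large-scale training would use DistributedDataParallel across many machines.

```python
# Minimal sketch of data-parallel training (placeholder model and data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across all visible GPUs
    # and gathers the results on the primary device.
    model = nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for a real training dataset.
x = torch.randn(256, 1024, device=device)
y = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)   # forward pass, sharded across GPUs if present
loss.backward()               # backward pass; gradients flow back from each replica
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

Scaling this single step to billions of parameters and trillions of tokens is what drives the demand for thousands of GPUs running in parallel.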

In 2022, the generative AI market was worth about USD 12.05 billion, while the GPU market was worth USD 43 billion in the same year. It is important to note that GPU usage is not limited to AI but extends to data centers, gaming, virtual reality, augmented reality, animation, and cloud applications.

In 2025, the generative AI market is worth USD 37.89 billion and the GPU market about USD 101 billion. The generative AI industry is projected to reach a trillion dollars within a decade. This growth, fueled by newer applications and tools, in turn drives GPU demand.

Impact of DeepSeek AI on the semiconductor industry

The lower training costs and computing power consumption of DeepSeek AI pose a business threat to existing AI companies. They make the industry wonder whether AI is worth spending a trillion dollars on or whether it is a few-million-dollar job. The answer lies in the future. This section explains the impact of DeepSeek AI on the semiconductor industry.

Reduced AI semiconductor demand

Training complex AI and ML models requires thousands of GPUs running in parallel. Advanced algorithms must be tested on large-scale GPU set-ups, resulting in higher training costs. The more AI model training spends on GPUs, the tighter GPU market availability becomes (a GPU shortage).

Low training costs require a small number of GPUs. As per the principles of demand and supply, easier availability of GPUs will reduce their prices. In simple words, the lower training costs of the DeepSeek AI model will reduce GPU demand and prices, easing supply chain issues.

That said, a GPU price reduction is not going to happen soon. During Q4 2024, industries using the DeepSeek AI model witnessed increased GPU demand because the model is new and requires more testing. This initial boom is likely to slow down in the coming years.

Lower power consumption

Most metrics show that DeepSeek AI models reduce energy consumption per unit of computational power. Energy usage is reported to be 40 % lower than that of the current GPT model. The reason for the lower energy consumption is the use of optimized AI algorithms running on a combination of GPUs and TPUs or other AI accelerators.

The complexity of an AI model affects energy consumption, but DeepSeek-R1 is said to produce responses comparable to existing models at lower energy consumption. TPUs and accelerators are typically more energy-efficient than GPUs. Companies aiming to achieve net zero may therefore turn to DeepSeek AI. Due to a reported 60 % renewable energy usage, DeepSeek AI accounts for a lower carbon footprint.
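To make the reported figures concrete, here is a back-of-the-envelope sketch of what a 40 % reduction in energy per unit of compute and a 60 % renewable share would mean for a hypothetical inference workload. The baseline energy cost and query volume are illustrative assumptions, not measured DeepSeek data.

```python
# Back-of-the-envelope energy estimate (hypothetical baseline figures).
BASELINE_KWH_PER_MILLION_QUERIES = 2_500.0   # assumed figure for illustration only
REPORTED_REDUCTION = 0.40                    # 40 % lower energy, as reported above
RENEWABLE_SHARE = 0.60                       # 60 % renewable usage, as cited above

def annual_energy_kwh(queries_per_day: float, reduction: float = 0.0) -> float:
    """Energy for a year of inference at the assumed per-query cost."""
    per_million = BASELINE_KWH_PER_MILLION_QUERIES * (1.0 - reduction)
    return queries_per_day / 1e6 * per_million * 365

baseline = annual_energy_kwh(10_000_000)                      # GPT-class baseline
deepseek = annual_energy_kwh(10_000_000, REPORTED_REDUCTION)  # with claimed savings
grid_kwh = deepseek * (1.0 - RENEWABLE_SHARE)                 # non-renewable share

print(f"baseline:  {baseline:,.0f} kWh/year")
print(f"deepseek:  {deepseek:,.0f} kWh/year")
print(f"from grid: {grid_kwh:,.0f} kWh/year")
```

Under these assumptions, the claimed reductions compound: less energy per query overall, and only a fraction of that drawn from non-renewable sources.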

Stock fluctuations

Most semiconductor chip manufacturers are publicly listed companies on the stock exchanges of their respective countries. While news about 2-nm and 1-nm chips boosts stock prices, news about low demand for GPUs can crash them. For example, the launch of the DeepSeek AI app was a key driver of the 17 % fall in NVIDIA's stock price.

Reduced operational costs

The lower training costs and energy consumption of the DeepSeek AI model can push enterprises to optimize cloud, hybrid, and on-premises AI deployments. A report by Kompas AI notes that an unnamed telecom company used DeepSeek AI-based chatbots and saw a 25 % reduction in customer service spending. Enterprises can use the DeepSeek-R1 model to scale AI and reduce operational costs in departments such as IT infrastructure, contact centers, manufacturing, IoT, marketing, and others.

AI innovation

Lower training costs will make AI innovation more affordable. Talented minds and enthusiastic enterprises that lack funds may create something impactful, and launching new AI models or LLMs could become far easier. However, this is only possible if the claims about DeepSeek AI's low training costs are true.

China’s comeback

As of 2025, China is among the world's largest producers and consumers of semiconductors. However, international export controls have held it back from building advanced AI models, tools, and technology. Home-grown AI models could enable it to break free from process node limitations and manufacture smaller chips. According to one source, DeepSeek AI has been the biggest news out of China in the last 185 years.

Cybersecurity threats

The main drawback associated with DeepSeek AI is the risk of security compromise. Data fed to the AI model, including critical user information such as login credentials, is at risk. Such apps are known to use and sell data to third parties, which blunts DeepSeek AI's impact. Countries such as Australia, South Korea, and Taiwan have already restricted or banned the app, fueling speculation about potential bans in other countries. As a result, DeepSeek AI scores poorly on the reliability scale despite hitting the top spot.

Conclusion

Judging the quality of responses generated by DeepSeek-R1 against other existing AI models is subjective. Whether effective or not, sustainable or not, safe or not, any publicity is good publicity. The attention DeepSeek AI is getting will drive app downloads. Time will tell whether chip-town needs to worry or can dismiss it as just another newcomer.
