China Orders Data Centres to Use More Domestic AI Chips Amid US Export Curbs
- Aug 18, 2025
- 2 min read
China is mandating that its data centres use more domestically produced artificial intelligence chips, intensifying efforts to reduce reliance on foreign technology as the United States tightens semiconductor export controls.

Publicly owned computing hubs across the country have been instructed to source over 50% of their chips from Chinese manufacturers, according to individuals familiar with the policy.
The directive stems from guidelines first introduced in March 2024 by the Shanghai municipality, which required intelligent computing centres in the city to adopt more than 50% domestic computing and storage chips by 2025.
The policy, aimed at strengthening AI computing resources in Shanghai, was supported by local branches of the National Development and Reform Commission and the Shanghai Communications Administration, under the Ministry of Industry and Information Technology.
An adviser in the data centre industry said the Shanghai chip quota has since become a nationwide mandate.
The move comes as China accelerates its push for technological self-sufficiency amid a prolonged tech conflict with the US, which has restricted Chinese access to advanced semiconductors used in AI development.
To support its AI ambitions, China has ramped up the construction of intelligent computing centres that consolidate computing power for the domestic AI sector.
In 2023 and 2024, more than 500 new data centre projects were announced across regions including Inner Mongolia and Guangdong, according to MIT Technology Review, citing data from KZ Consulting.
Generative AI requires significant computing power, typically provided by advanced chips such as Nvidia’s H100 and H800, which are now banned from export to China by the US.
Although Nvidia’s H20 chips were recently approved for sale in China, the Chinese government has raised concerns about potential network security risks, a claim Nvidia has denied.
While Chinese chips are considered less advanced than Nvidia’s, they remain usable for inference with trained AI models. Nvidia, however, remains the preferred choice for training new models, according to a source.

Currently, iFlytek is the only Chinese tech company that has publicly confirmed it is using Huawei Technologies’ chips to train AI models.
The shift to domestic chips has introduced technical challenges for AI data centres, which now operate with a mix of local and foreign hardware.
AI chips typically rely on proprietary software ecosystems, such as Nvidia’s CUDA or Huawei’s CANN. Adapting AI models developed on one platform to run on another can be complex and resource-intensive.
Most Chinese AI models are still built on Nvidia’s ecosystem, making the transition to domestic chips difficult for many data centres; specialist firms such as Beijing-based SiliconFlow are the exception.
SiliconFlow, in collaboration with Huawei, has developed a solution using Huawei’s CloudMatrix 384 architecture and Ascend chips to run DeepSeek’s R1 model more efficiently than Nvidia’s H800, according to research published in June.
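The lock-in described above can be illustrated with a toy sketch: model code is typically written directly against one vendor's runtime, so running it elsewhere requires an abstraction or dispatch layer. All names below are hypothetical placeholders, not real CUDA or CANN API calls.

```python
# Toy sketch of vendor lock-in: each chip stack exposes its own runtime,
# so code targeting one does not run on another without a porting layer.
# Backend names and launch functions here are illustrative only.

BACKENDS = {
    "cuda": lambda op, data: f"cuda kernel '{op}' on {len(data)} elements",
    "cann": lambda op, data: f"cann operator '{op}' on {len(data)} elements",
}

def run_op(backend: str, op: str, data: list) -> str:
    """Dispatch a tensor operation to whichever runtime is requested."""
    try:
        launch = BACKENDS[backend]
    except KeyError:
        raise ValueError(f"no runtime available for backend '{backend}'")
    return launch(op, data)

# The same model code works on either stack only because the dispatch
# layer hides the vendor-specific calls; without it, every operator
# would need to be rewritten per platform.
print(run_op("cuda", "matmul", [1.0, 2.0, 3.0]))
print(run_op("cann", "matmul", [1.0, 2.0, 3.0]))
```

In practice this porting layer is far larger: kernels, memory management, and compiler toolchains all differ between ecosystems, which is why the transition is described as complex and resource-intensive.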
- China mandates over 50% domestic AI chip use in public data centres
- Policy expands from Shanghai to nationwide implementation
- Over 500 new data centres announced in 2023 and 2024

Source: SCMP
