China’s tech giants burn cash to try to dominate AI healthcare

27 Feb 2026
technology
Caixin Global
Tech giants in China are splashing huge sums on AI healthcare assistants, while regulators and medical tests push the technology forward — but with most services still free and business models unclear, can AI really transform healthcare and make money?
Can AI really reshape the healthcare sector? (iStock)

(By Caixin journalists Cui Xiaotian, Chen Xi and Lu Zhenhua)

In China, the battle for consumer users of medical artificial intelligence (AI) has begun. In late 2025, Ant Group Co. Ltd. unleashed a blitz of advertising for its rebranded personal health assistant, called Afu, burning through hundreds of millions of RMB in a single month. By early 2026, on the other side of the ocean, OpenAI released its personal health assistant, ChatGPT Health, opening beta testing to a small group of users.

Such products have emerged in droves over the past year. There is Future Doctor, incubated by Medlinker; JD Health International Inc.’s AI doctor Dawei; Alibaba Group Holding Ltd.’s Quark; Baidu Inc.’s Ernie Health Manager; Iflytek Co. Ltd.’s iFlyHealth; and ByteDance Ltd.’s Xiaohe AI Doctor.

These products are primarily the work of China’s internet giants, along with healthcare service companies and AI upstarts. All are targeting the massive personal health market.

In October 2025, China’s National Health Commission and four other departments released implementation opinions on promoting and regulating the AI healthcare business. According to the opinions, the goal is to “basically establish a batch of national AI application pilot bases in the medical and health field” by 2027. So far, multiple regions, including Beijing, Shanghai, Zhejiang, Henan and Hefei, have laid out plans for these pilot projects.

AI proves itself

On 20 December 2024, a double-blind competition between doctors and AI quietly concluded at the Beijing Arion Cancer Center. Xu Zhonghuang described the results as leaving him “disappointed and panicked”.

A multidisciplinary team of eight veteran doctors faced off against five domestic and international large models: Doubao, Baichuan, Xiaohe, ChatGPT o1 and Gemini. The competition used a set of cases from Peking Union Medical College Hospital with complete answers and outcomes.

“Human doctors could just manage a draw with AI.” — Xu Zhonghuang, President, Beijing Arion Cancer Center

In a double-blind competition pitting eight veteran doctors against five domestic and international large models, the human doctors came in third. (iStock)

Scoring was divided into six rounds: taking a medical history, supplementary examinations, definitive diagnosis and treatment planning, efficacy assessment and follow-up treatment, analysis of recurrence and differential diagnosis, and secondary treatment regimens.

ChatGPT o1 took first place with a landslide score of 314.5 — achieving the highest rating in five of the rounds. Gemini followed with 265 points, while the human doctors took third place with 251.5 points. The remaining scores were Doubao with 214 points and Baichuan with 187.5 points. Xiaohe’s score was not released.

“Human doctors could just manage a draw with AI,” Xu lamented at a roundtable discussion in June 2025. Although he did not disclose further clinical details, he evaluated ChatGPT o1 as having balanced comprehensive capabilities far exceeding other models.

It delivered stable, efficient outputs across all stages and performed stunningly in organised summarisation. While providing tumour diagnosis and treatment, it also focused on patient nutrition and psychological support, conveying humanistic care through text, according to Xu.

Human doctors performed better in the definitive diagnosis and treatment planning phase. They considered patient benefits and risks and incorporated the latest literature and research data to propose cutting-edge plans for follow-up treatment. “Progress in AI is happening too fast,” Xu said. “It is hard to predict what level it will reach in three or five years. Therefore, I always maintain a great sense of awe toward AI.”

“I feel that as long as the question is asked properly, the DeepSeek answer is accurate and professional. So, the value of junior doctors like us is easily replaced.” — the holder of a medical PhD from a well-known Beijing hospital

ChatGPT logo is seen in this illustration taken on 22 January 2025. (Dado Ruvic/Illustration/Reuters)

Similar competitions have taken place frequently over the past year. Developers not only pit AI against doctors in speed and accuracy, but have also had AI models take medical licensing exams. In August 2025, US medical tech firm OpenEvidence announced that its AI product for medical professionals passed the United States Medical Licensing Examination (USMLE) with a perfect score. In China, iFlytek, Quark Health and Baichuan Intelligence have all announced that their models had passed the National Medical Licensing Examination (NMLE).

The medical community was briefly on edge. At the Pujiang Medical Artificial Intelligence Conference on 20 November, four directors from tertiary hospitals who had competed against AI were so ashamed of losing that they were unwilling to reveal their identities.

“I feel that as long as the question is asked properly, the DeepSeek answer is accurate and professional. So, the value of junior doctors like us is easily replaced,” lamented the holder of a medical PhD from a well-known Beijing hospital.

Users’ shift

The backdrop for this proliferation is the increasingly common practice of consulting general large language models for health issues.

Users open DeepSeek, ChatGPT, Doubao, or Kimi, input their symptoms, and receive analysis and interpretation within seconds. The AI suggests potential conditions, severity levels, necessary follow-up checks, medication options and daily care advice. If provided with more information, the AI offers even more detailed responses.

While these products carry disclaimers stating that “content is for reference only”, the experience is often preferred over the time-consuming effort of a hospital visit or the hit-or-miss experience of searching the internet, where users often encounter an unreliable mix of information and advertisements. Asking the AI feels convenient, objective and reassuring.

People use their mobile phones as they wait for a train at a subway station in Beijing on 18 January 2026. (Wang Zhao/AFP)

Data supports this shift. In January, OpenAI released a report stating that more than 5% of ChatGPT queries globally were related to healthcare, averaging billions of inquiries a week. Among its more than 800 million active users, one in four asks a health-related question every week. On a daily basis, more than 40 million people consult ChatGPT on medical matters.

Notably, usage frequency is higher where healthcare is harder to come by. In rural areas, for instance, users send nearly 600,000 healthcare-related messages weekly on average, according to the report. Seventy percent of these conversations occur outside normal clinical hours.

Consequently, consumer-facing medical AI products have launched in succession to carve up this market. Users can consult them for specific diagnostic and treatment advice.

Compared with general large models, these specialised products claim to feature specific improvements: they are based on vertical medical models, trained on extensive hospital medical records, mimic expert chains of thought, and utilise designs like multi-round questioning to reduce hallucinations, enhance memory, and improve efficiency and accuracy.

“When we discuss [business models] internally, there is a lot of controversy. We argued for a long time and, frankly speaking, there is no answer.” — Chen Liang, Senior Vice President and Chief Marketing Officer, Ant Group

Poor prospects

However, the industry still has no answer on how AI consultations will make money. Whether it’s general large models or specialised AI consultation products, basic services provided to ordinary users are currently free.

Meanwhile, companies have invested heavily in data, computing power and algorithms — particularly in acquiring high-quality hospital data and feedback on diagnostic reasoning that mimics that of experts. These require research collaborations with top-tier hospitals and leading experts, followed by rigorous data cleaning and labelling.

An ambulance used to transport Covid-19 patients in Shanghai, China. (SPH Media)

One healthcare investor expressed scepticism about the prospects of consumer-facing AI consultations. He argued that general large models will likely upgrade their functions, closing the performance gap with vertical medical models.

“When we discuss [business models] internally, there is a lot of controversy,” Chen Liang, senior vice president and chief marketing officer of Ant Group, admitted at an Afu media briefing in December 2025. “We argued for a long time and, frankly speaking, there is no answer.”

However, he believed that as the population ages, Afu will help people “solve a few problems”. Once it creates value for society, the business model will emerge.

This article was first published by Caixin Global as “In Depth: China’s Tech Giants Burn Cash to Try to Dominate AI Health Care”. Caixin Global is one of the most respected sources for macroeconomic, financial and business news and information about China.