In one of the scenarios at Huashan Hospital, AI assistants take detailed patient records before a consultation, enabling doctors to have a comprehensive view of the case.
However, Cheng stressed the records must be absolutely accurate.
“Only if they pass extremely stringent tests can AI large models particularly trained to make clinical records be used at our hospital,” Cheng told NewsChina.
According to Cheng, Huashan Hospital has tested different large models run by DeepSeek. One essential criterion is their ability to file and update clinical records.
“Doctors must double-check all digital records taken by AI models to ensure the quality and safety of healthcare services,” Cheng said.
His caution is warranted by a phenomenon known as AI hallucination, in which AI models misinterpret patterns and generate incorrect or misleading outputs.
Because of these glitches, some clinicians, like those at Tongji Hospital of Huazhong University of Science and Technology in Wuhan, Hubei Province, are more cautious.
“AI models can make blunders in clinical practice, for example by mistaking signal interference in medical imaging for signs of lesions,” Guo Wei, deputy chief of the infectious diseases department at Tongji Hospital, told NewsChina.
According to Yu, the credibility of AI platforms depends on accurate and professional data input, as mistakes lead to erroneous results.
At BCH, more than 300 top doctors have contributed their knowledge and decades of case records to the AI pediatrician application.
“To reduce AI hallucinations, it is essential to guarantee data uniqueness and accuracy from the very beginning of the modeling,” Ni said, adding that AI pediatricians designed specifically for the medical field perform more precisely than general-purpose models like DeepSeek.
The possibility of hallucinations obliges doctors to double-check AI responses; otherwise, they bear responsibility for any resulting medical negligence. Hospitals and developers, in turn, are accountable for negligence stemming from untrained medical staff operating the AI or from flawed algorithm design, Deng Yong, professor of medical and health law at the Beijing University of Chinese Medicine, told NewsChina.