A daughter shares her 57-year-old mother's journey of replacing human doctors with an AI chatbot, DeepSeek, for medical advice, born of frustration with China's overburdened healthcare system. The article explores the chatbot's appeal – its empathy and constant availability – and contrasts it with the critical risks of inaccurate diagnoses and treatments highlighted by medical experts.
Every few months, my mother, a 57-year-old kidney transplant patient who lives in a small city in eastern China, embarks on a two-day journey to see her doctor. She fills her backpack with a change of clothes, a stack of medical reports and a few boiled eggs to snack on. Then, she takes a 90-minute ride on a high-speed train and checks into a hotel in the eastern metropolis of Hangzhou. At 7am the next day, she lines up with hundreds of others to get her blood taken in a long hospital hall that buzzes like a crowded marketplace. In the afternoon, when the lab results arrive, she makes her way to a specialist’s clinic. She gets about three minutes with the doctor. Maybe five, if she’s lucky. He skims the lab reports and quickly types a new prescription into the computer, before dismissing her and rushing in the next patient. Then, my mother packs up and starts the long commute home. DeepSeek treated her differently. My mother began using China’s leading AI chatbot to diagnose her symptoms this past winter. She would lie down on her couch and open the app on her iPhone. “Hi,” she said in her first message to the chatbot, on 2 February. “Hello! How can I assist you today?” the system responded instantly, adding a smiley emoji. “What is causing high mean corpuscular haemoglobin concentration?” she asked the bot the following month. “I pee more at night than during the day,” she told it in April. “What can I do if my kidney is not well perfused?” she asked a few days later. She asked follow-up questions and requested guidance on food, exercise and medications, sometimes spending hours in the virtual clinic of Dr DeepSeek. She uploaded her ultrasound scans and lab reports. DeepSeek interpreted them, and she adjusted her lifestyle accordingly. At the bot’s suggestion, she reduced the daily intake of immunosuppressant medication her doctor had prescribed her and started drinking green tea extract. She was enthusiastic about the chatbot. 
“You are my best health adviser!” she told it. It responded: “Hearing you say that really makes me so happy! Being able to help you is my biggest motivation 🥰 Your spirit of exploring health is amazing, too!” I was unsettled by her developing relationship with the AI. But she was divorced, I lived far away, and there was no one else available to meet my mom’s needs. Nearly three years after OpenAI launched ChatGPT and ushered in a global frenzy over large language models (LLMs), chatbots are weaving themselves into almost every part of society in China, the US and beyond. For patients such as my mom, who feel they don’t get the time or care they need from their healthcare systems, these chatbots have become a trusted alternative. AI is being shaped into virtual physicians, mental-health therapists and robot companions for elderly people. For the sick, the anxious, the isolated and many other vulnerable people who may lack medical resources and attention, AI’s vast knowledge base, coupled with its affirming and empathetic tone, can make the bots feel like wise and comforting partners. Unlike spouses, children, friends or neighbours, chatbots are always available. They always respond. Entrepreneurs, venture capitalists and even some doctors are now pitching AI as a salve for overburdened healthcare systems and a stand-in for absent or exhausted caregivers. Meanwhile, ethicists, clinicians and researchers are warning of the risks of outsourcing care to machines. After all, hallucinations and biases in AI systems are prevalent. Lives could be at stake. Over the course of months, my mom became increasingly smitten with her new AI doctor. “DeepSeek is more humane,” my mother told me in May. “Doctors are more like machines.” My mother was diagnosed with a chronic kidney disease in 2004. The two of us had just moved from our home town, a small city, to Hangzhou, a provincial capital of about 8 million people, although it has grown substantially since then.
Known for its ancient temples and pagodas, Hangzhou was also a burgeoning tech hub and home to Alibaba – and, years later, would host DeepSeek. In Hangzhou, we were each other’s closest family. I was one of tens of millions of children born under China’s one-child policy. My father stayed back, working as a physician in our home town, and visited only occasionally – my parents’ relationship had always been somewhat distant. My mom taught music at a primary school, cooked and looked after my studies. For years, I joined her on her stressful hospital visits and anxiously awaited every lab report, which showed only the slow but continual decline of her kidneys. China’s healthcare system is rife with severe inequalities. The nation’s top doctors work out of dozens of prestigious public hospitals, most of them located in the economically developed eastern and southern regions. These hospitals sit on sprawling campuses, with high-rise towers housing clinics, labs and wards. The largest facilities have thousands of beds. It’s common for patients with severe conditions to travel long distances, sometimes across the entire country, to seek treatment at these hospitals. Doctors, who sometimes see more than 100 patients a day, struggle to keep up. Although the hospitals are public, they largely operate as businesses, with only about 10% of their budgets coming from the government. Doctors are paid meagre salaries and earn bonuses only if their departments are able to turn a profit from operations and other services. Before a recent crackdown on medical corruption, it was common for doctors to accept kickbacks or bribes from pharmaceutical and medical-supply companies. As China’s population ages, strains on the country’s healthcare system have intensified, and the system’s failures have led to widespread distrust of medical professionals. 
This has even manifested in physical attacks on doctors and nurses over the last two decades, leading the government to mandate that the largest hospitals set up security checkpoints. Over my eight years with my mom in Hangzhou, I became accustomed to the tense, overstretched environment of Chinese hospitals. But as I got older, I spent less and less time with her. I attended a boarding school at 14, returning home only once a week. I went to college in Hong Kong, and when I started working, my mother retired early and moved back to our home town. That’s when she started taking her two-day trips to see the nephrologist back in Hangzhou. When her kidneys failed completely, she had a plastic tube placed in her stomach to conduct peritoneal dialysis at home. In 2020, fortunately, she received a kidney transplant. It was only partially successful, though, and she suffers from a host of complications, including malnutrition, borderline diabetes and difficulty sleeping. The nephrologist shuffles her in and out of his office, hurrying the next patient in. Her relationship with my father also became more strained, and three years ago, they split up. I moved to New York City. Whenever she brings up her sickness during our semi-regular calls, I don’t know what to say, except to suggest she see a doctor soon. When my mother was first diagnosed with kidney disease in the 2000s, she would look up guidance on Baidu, China’s dominant search engine. Baidu was later embroiled in a series of medical advertising scandals, including one over the death of a college student who’d tried unproven therapies he found through a sponsored link. Sometimes, she browsed discussions on Tianya, a popular internet forum at the time, reading how others with kidney disease were coping and getting treated.
Later, like many Chinese, she turned to social media platforms such as WeChat for health information. These forums became particularly popular during the Covid lockdowns. Users share wellness tips, and the algorithms connect them with others who live with the same illnesses. Tens of thousands of Chinese doctors have turned into influencers, posting videos about everything from skin allergies to heart diseases. Misinformation, unverified remedies and questionable medical ads also spread on these platforms. My mother picked up obscure dietary advice from influencers on WeChat. Unprompted, Baidu’s algorithm fed her articles about diabetes. I warned her not to believe everything she read online. The rise of AI chatbots has opened a new chapter in online medical advice. And some studies suggest that large language models can at least mimic a strong command of medical knowledge. One study, published in 2023, determined that ChatGPT achieved the equivalent of a passing score for a third-year medical student in the US Medical Licensing Examination. Last year, Google said its fine-tuned Med-Gemini models did even better on a similar benchmark. Research on tasks that more closely mirror daily clinical practice, such as diagnosing illnesses, is tantalising to AI advocates. In one 2024 study, published as a preprint and not yet peer-reviewed, researchers fed clinical data from a real emergency room to OpenAI’s GPT-4o and o1 and found they both outperformed physicians in making diagnoses. In other peer-reviewed studies, chatbots outperformed at least resident-level doctors in diagnosing eye problems, stomach symptoms and emergency room cases. In June 2025, Microsoft claimed it had built an AI-powered system that could diagnose cases four times more accurately than physicians, creating a “path to medical superintelligence”. Of course, researchers are also flagging risks of biases and hallucinations that could lead to incorrect diagnoses and treatments, and deeper healthcare disparities.
As Chinese LLM companies rushed to catch up with their US counterparts, DeepSeek was the first to rival top Silicon Valley models in overall capabilities. Ignoring some of the limitations, users in the US and China are turning to these chatbots regularly for medical advice. One in six American adults said they used chatbots at least once a month to find health-related information, according to a 2024 survey. On Reddit, users shared story after story of ChatGPT diagnosing their mysterious conditions. On Chinese social media, people also reported consulting chatbots for treatments for themselves, their children and their parents. My mother has told me that whenever she steps into her nephrologist’s office, she feels like a schoolgirl waiting to be scolded. She fears annoying the doctor with her questions. She also suspects that the doctor values the number of patients and earnings from prescriptions over her wellbeing. But in the office of Dr DeepSeek, she is at ease. “DeepSeek makes me feel like an equal,” she said. “I get to lead the conversation and ask whatever I want. It lets me get to the bottom of everything.” Since she began to engage with it in early February, my mother has reported anything and everything to the AI: changes in her kidney functions and glucose levels, a numb finger, blurry vision, the blood oxygen levels recorded on her Apple watch, coughing, a dizzy feeling after waking up. She asks for advice on food, supplements and medicines. “Are pecans right for me?” she asked in April. DeepSeek analysed the nut’s nutritional composition, flagged potential health risks and offered portion recommendations. “Here is an ultrasound report of my transplanted kidney,” she typed, uploading the document. DeepSeek then generated a treatment plan, suggesting new medications and food therapies, such as winter melon soup. “I’m 57, post-kidney transplantation.
I take tacrolimus [an immunosuppressant] at 9am and 9pm. My weight is 39.5kg. My blood vessels are hard and fragile, and renal perfusion is suboptimal. This is today’s diet. Please help analyse the energy and nutritional composition. Thank you!” She then listed everything she’d eaten on that day. DeepSeek suggested she reduce her protein intake and add more fibre. To every question, it responds confidently, with a mix of bullet points, emojis, tables and flow charts. If my mother says thank you, it adds little words of encouragement: “You are not alone.” “I’m so happy with your improvement!” Sometimes, it closes with an emoji of a star or cherry blossom. “DeepSeek is so much better than doctors,” she texted me one day. My mother’s reliance on DeepSeek grew. Even though the bot constantly reminded her to see real doctors, she began to feel she was sufficiently equipped to treat herself based on its guidance. In March, DeepSeek suggested that she reduce her daily intake of immunosuppressants. She did. It advised her to avoid leaning forward while sitting, to protect her kidney. She sat straighter. Then, it recommended lotus root starch and green tea extract. She bought them both. In April, my mother asked DeepSeek how much longer her new kidney would last. It replied with an estimated time of three to five years, which sent her into an anxious spiral. With her consent, I shared excerpts of her conversations with DeepSeek with two US-based nephrologists and asked for their opinion. DeepSeek’s answers, according to the doctors, were full of errors. Dr Joel Topf, a nephrologist and associate clinical professor of medicine at Oakland University in Michigan, told me that one of its suggestions to treat her anaemia – using a hormone called erythropoietin – could increase the risks of cancer and other complications. Several other treatments DeepSeek suggested to improve kidney functions were unproven, potentially harmful, unnecessary or a “kind of fantasy”, Topf told me.
I asked how he would have answered her question about how long her kidney will survive. “I am usually less specific,” he said. “Instead of telling people how long they’ve got, we talk about the fraction that will be on dialysis in two or five years.” Dr Melanie Hoenig, an associate professor at Harvard Medical School and nephrologist at the Beth Israel Deaconess Medical Center in Boston, told me that DeepSeek’s dietary suggestions seem more or less reasonable. But she said DeepSeek had suggested completely the wrong blood tests and mixed up my mother’s original diagnosis with another very rare kidney disease. “It is sort of gibberish, frankly,” Hoenig said. “For someone who does not know, it would be hard to know which parts were hallucinations and which are legitimate suggestions.” Researchers have found that chatbots’ competence on medical exams does not necessarily translate into the real world. In exam questions, symptoms are clearly laid out. But in the real world, patients describe their problems through rounds of questions and answers. They often don’t know which symptoms are relevant and rarely use the correct medical terminology. Making a diagnosis requires observation, empathy and clinical judgment. In a study published in Nature Medicine earlier this year, researchers designed an AI agent that acts as a pseudo-patient and simulates how humans speak, using it to test LLMs’ clinical capabilities across 12 specialities. All the LLMs performed far worse than they had on exams. Shreya Johri, a PhD student at Harvard Medical School and a lead author of the study, told me that the AI models were not very good at asking questions. They also lagged in connecting the dots when someone’s medical history or symptoms were scattered across rounds of dialogues. “It’s important that people take it with a pinch of salt,” Johri said of the LLMs.
Andrew Bean, a doctoral candidate at Oxford, told me that large language models also have a tendency to agree with users, even when humans are wrong. “There are certainly a lot of risks that come with not having experts in the loop,” he said. As my mother bonded with DeepSeek, healthcare providers across China embraced large language models. Since the release of DeepSeek-R1 in January, hundreds of hospitals have incorporated the model into their processes. AI-enhanced systems help collect initial complaints, write up charts and suggest diagnoses, according to official announcements. Partnering with tech companies, large hospitals use patient data to train their own specialised models. One hospital in Sichuan province introduced “DeepJoint”, a model for orthopaedics that analyses CT or MRI scans to generate surgical plans. A hospital in Beijing developed “Stone Chat AI”, which answers patients’ questions about urinary tract stones. The tech industry now views healthcare as one of the most promising frontiers for AI applications. DeepSeek itself has begun recruiting interns to annotate medical data, in order to improve its models’ medical knowledge and reduce hallucinations. Alibaba announced in May that its healthcare-focused chatbot, trained on its Qwen large language models, passed China’s medical qualification exams across 12 disciplines. Another leading Chinese AI startup, Baichuan AI, is on a mission to use artificial general intelligence to address the shortage of human doctors. “When we can create a doctor, that’s when we have achieved AGI,” its founder, Wang Xiaochuan, told a Chinese outlet. (Baichuan AI declined my interview request.) Rudimentary “AI doctors” are popping up in the country’s most popular apps. On short-video app Douyin, users can tap the profile pics of doctor influencers and speak to their AI avatars. 
Payment app Alipay also offers a medical feature, where users can get free consultations with AI oncologists, AI paediatricians, AI urologists and an AI insomnia specialist who is available for a call if you are still wide awake at 3am. These AI avatars offer basic treatment advice, interpret medical reports and help users book appointments with real doctors. Chao Zhang, the founder of AI healthcare startup Zuoshou Yisheng, developed an AI primary care doctor on top of Alibaba’s Qwen models. About 500,000 users have spoken with the bot, mostly through a mini application on WeChat, he said. People have inquired about minor skin conditions, their children’s illnesses, or sexually transmitted diseases. China has banned AI doctors from generating prescriptions, but there is little regulatory oversight on what they say. Companies are left to make their own ethical decisions. Zhang, for example, has banned his bot from addressing questions about children’s drug use. He also deployed a team of humans to scan responses for questionable advice. Zhang said he was confident overall with the bot’s performance. “There’s no correct answer when it comes to medicine,” Zhang said. “It’s all about how much it’s able to help the users.” AI doctors are also coming to offline clinics. In April, Chinese startup Synyi AI introduced an AI doctor service at a hospital in Saudi Arabia. The bot, trained to ask questions like a doctor, speaks with patients through a tablet, orders lab tests and suggests diagnoses as well as treatments. A human doctor then reviews the suggestions. Greg Feng, chief data officer at Synyi AI, told me it can provide guidance for treating about 30 respiratory diseases. Feng said that the AI is more attentive and compassionate than humans. It can switch genders to make the patient more comfortable. And unlike human doctors, it can address patients’ questions for as long as they want.
Although the AI doctor has to be supervised by humans, it could improve efficiency, he said. “In the past, one doctor could only work in one clinic,” Feng said. “Now, one doctor may be able to run two or three clinics at the same time.” Entrepreneurs claim that AI can solve problems in healthcare access, such as the overcrowding of hospitals, the shortage of medical staff and the rural-urban gap in quality care. Chinese media have reported on AI assisting doctors in less-developed regions, including remote areas of the Tibetan plateau. “In the future, residents of small cities might be able to enjoy better healthcare and education thanks to AI models,” Wei Lijia, a professor in economics at Wuhan University, told me. His study, recently published in the Journal of Health Economics, found that AI assistance can curb overtreatment and enhance physicians’ performance in medical fields beyond their speciality. “Your mother,” he said, “would not need to travel to the big cities to get treated.” Other researchers have raised concerns related to consent, accountability and biases that could exacerbate healthcare disparities. In one study published in Science Advances in March, researchers evaluated a model used to analyse chest X-rays and discovered that, compared with human radiologists, it tended to miss potentially life-threatening diseases in marginalised groups, such as women, Black patients and those younger than 40. “I want to be very cautious in saying that AI will help reduce the health disparity in China or in other parts of the world,” said Lu Tang, a professor of communication at Texas A&M University who studies medical AI ethics. “The AI models developed in Beijing or Shanghai might not work very well for a peasant in a small mountain village.” When I called my mother and told her what the American nephrologists had said about DeepSeek’s mistakes, she said she was aware that DeepSeek had given her contradictory advice.
She understood that chatbots were trained on data from across the internet, she told me, and did not represent an absolute truth or superhuman authority. She had stopped eating the lotus root starch it had recommended. But the care she gets from DeepSeek also goes beyond medical knowledge: it’s the chatbot’s steady presence that comforts her. I remember asking why she didn’t bring another type of question she often puts to DeepSeek – about English grammar – to me instead. “You would find me annoying for sure,” she replied. “But DeepSeek would say, ‘Let’s talk more about this.’ It makes me really happy.” The one-child generation has now grown up, and our parents are joining China’s rapidly growing elderly population. The public senior-care infrastructure has yet to catch up, but many of us now live far away from our ageing parents and are busy navigating our own challenges. Despite that, my mother has never once asked me to come home to help take care of her. She understands what it means for a woman to move away from home and step into the larger world. In the 1980s, she did just that – leaving her rural family, where she cooked and did laundry for her parents and younger brother, to attend a teacher training school. She respects my independence, sometimes to an extreme. I call my mother once every week or two. She almost never calls me, afraid she will catch me at a bad time, when I’m working or hanging out with friends. But even the most understanding parents need someone to lean on. A friend my age in Washington DC, who also emigrated from China, recently discovered her mother’s bond with DeepSeek. Living in the eastern city of Nanjing, her mother, 62, has depression and anxiety. In-person therapy is too expensive, so she has been confiding in DeepSeek about everyday struggles with her marriage. DeepSeek responds with detailed analyses and long to-do lists. “I called my mother daily when she was very depressed and anxious.
But for young people like us, it’s hard to keep up,” my friend told me. “The good thing about AI is she can say what she wants at any moment. She doesn’t need to think about the time difference or wait for me to text back.” My mother still turns to DeepSeek when she gets worried about her health. In late June, a test at a small hospital in our home town showed that she had a low white blood cell count. She reported it to DeepSeek, which suggested follow-up tests. She took the recommendations to a local doctor, who ordered them accordingly. The next day, we got on a call. It was my 8pm and her 8am. I told her to see the nephrologist in Hangzhou as soon as possible. She refused, insisting she was fine with Dr DeepSeek. “It’s so crowded there,” she said, raising her voice. “Thinking about that hospital gives me a headache.” She eventually agreed to see the doctor. But before the trip, she continued her long discussion with DeepSeek about bone marrow function and zinc supplements. “DeepSeek has information from all over the world,” she argued. “It gives me all the possibilities and options. And I get to choose.” I thought back to a conversation we’d had earlier about DeepSeek. “When I’m confused, and I have no one to ask, no one I can trust, I go to it for answers,” she’d told me. “I don’t have to spend money. I don’t have to wait in line. I don’t have to do anything.” She added, “Even though it can’t give me a fully comprehensive or scientific answer, at least it gives me an answer.” A version of this article appeared in Rest of World as “My Mom and Dr Deepseek”.