16 January 2026
By Daniel Pye
ChatGPT Health, which has been launched in the US, is not currently available in Europe due to extra regulatory hurdles.
The artificial intelligence app, which offers health advice based on patient records, will not be accessible in the UK and EU until additional compliance measures have been worked through, confirmed OpenAI, the software's developer.
The Medicines and Healthcare products Regulatory Agency (MHRA) has confirmed to Doctors.net.uk that the platform, launched in the United States on 7 January, would likely be considered a medical device subject to strict UK laws.
ChatGPT Health offers its users the ability to connect their medical records and wellness apps to the AI model, which it then uses to generate responses that are informed by the health information.
According to OpenAI’s figures, more than 230 million people globally ask health- and wellness-related questions in the standard ChatGPT app every week.
“ChatGPT can help you understand recent test results, prepare for appointments with your doctor [and] get advice on how to approach your diet and workout routine,” OpenAI claims on its website.
OpenAI will work with regulators in Europe and the UK to make its products available as soon as possible, it told Doctors.net.uk.
Alex Ruani, a doctoral researcher in health misinformation at University College London, has concerns about the platform.
He told The Guardian: “ChatGPT Health is being presented as an interface that can help people make sense of health information and test results or receive diet advice, while not replacing a clinician.
“The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins, especially when the responses sound confident and personalised, even if they mislead.”
Ruani said there had been too many “horrifying” examples of ChatGPT “leaving out key safety details like side effects, contraindications, allergy warnings, or risks around supplements, foods, diets, or certain practices”.
An MHRA spokesperson said the organisation’s strategic priority is to ensure the UK’s regulatory framework keeps pace with emerging AI technologies.
“Manufacturers that make claims that their product, including AI-based technologies, can help manage or advise on health problems and that use personal health data are likely to have a product that performs a medical purpose.
“This would qualify them in the UK as a medical device and mean they are subject to the requirements of the UK Medical Device Regulations 2002,” they said.
The spokesperson added that determining whether a product performs a medical purpose is “highly context-specific” and is assessed on a case-by-case basis.
“The MHRA is aware of the complexity involved in making these determinations, and we have an extensive programme of guidance underway to support both our current and future legislative system so that emerging technologies can be used safely for patients and the public.”
On 26 September 2025, the MHRA launched the National Commission into the Regulation of AI in Healthcare, which brings together global AI leaders, clinicians and regulators to advise the body on creating a new regulatory framework.
This new framework is due to be published in 2026. The MHRA has launched a call for evidence, inviting contributions from the UK and internationally.