The Newest UK And EU Developments In Telehealth Regulation
It is often stated that COVID-19 and remote working have led to
a boom in the development and use of virtual and digital health
care.
There has certainly been much activity in the regulatory space
to help provide greater oversight of this area. The following is a
review of some of the key developments in the U.K. and EU in 2022
and a look ahead to 2023.
Telehealth and Digital Health Developments From a Regulatory
Standpoint
The Medical Devices Regulation and In Vitro Diagnostics
Regulation
According to the Medical Devices Regulation1 and In
Vitro Diagnostics Regulation,2 software used for a
medical purpose, such as diagnosing or predicting a disease, is
classified as a medical device.
However, software classification is fraught with practical
challenges because it may not be immediately apparent how the legal
parameters apply in the virtual environment.
The functions of the software must be reviewed in light of the guidance on software qualification and classification3 to determine whether a medical purpose exists.
In May, the In Vitro Diagnostics Regulation, which includes software within its remit, became applicable. However, many software medical devices do not yet meet the new requirements.
Amending regulations were published in January that extended the
transitional provisions for certain products to give them more time
to meet the new standards.
In addition, further measures were proposed this month for both the Medical Devices Regulation and the In Vitro Diagnostics Regulation, given the significant potential impact on patients and industry if devices cannot meet the current deadlines.
These measures should be put in place early in 2023. The hope is that the extended deadlines will give companies more time to ensure compliance with the new rules.
Use of Artificial Intelligence
Across the U.K. and EU, several initiatives are underway to encourage the development of AI-enabled medical technologies and increase confidence in their use. For example:
- The European Commission published a report on “Artificial Intelligence in Healthcare” in 2021, which provides an overview of national strategies for AI in health care and where policies are lacking.
- The U.S. Food and Drug Administration, Health Canada and the U.K. Medicines and Healthcare Products Regulatory Agency published “Good Machine Learning Practice for Medical Device Development: Guiding Principles” in 2021 to help standardize principles across the regions.
- The G7 collaborated on international principles for the evaluation, development and deployment of AI medical devices.
- The U.K.’s National Health Service AI Lab launched its National Strategy for AI in Health and Social Care to support the development of AI-driven technologies.4
Currently, few concrete legislative provisions are in place, and companies must navigate the varying schemes in each country. Authorities are aiming to introduce more concrete frameworks in 2023.
Telehealth
The regulation of telehealth across the EU depends on national legislation, so it varies considerably and, overall, remains relatively undeveloped.
For example, in the U.K. there are currently no laws
specifically addressing telehealth, so these services are regulated
in the same way as face-to-face services.
It is likely that new policies and initiatives will be developed
in 2023. For example, in November, the World Health Organization
issued a consolidated guide on the key steps and considerations for
implementing telemedicine.5
Developments Affecting Physicians and Delivery of Health
Care
Virtual Interactions With Health Care
Professionals
This year, recognizing that some congresses now have both
in-person and virtual elements, the European Federation of
Pharmaceutical Industries and Associations published guidance on
virtual and hybrid international medical congresses.
The guidance advises that companies should clearly identify the
product label to which promotional materials refer and ensure a
process is in place to confirm participants’ status as health
care professionals.
The federation has also published guidance to assist member
companies with using social media and digital channels. The
guidance advises companies to review and monitor their social media
activities, consider when digital opinion leaders are used and
train employees on responsible conduct. Guidance from U.K. authorities is expected to be published shortly.
Reimbursement of Digital Technologies
To address the limited uptake of software and AI applications by EU health care authorities, the Health Technology Assessment bodies in EU member states are developing methods for evaluating standalone software and apps.
In October, a European taskforce6 on the evaluation
framework for digital medical devices was launched to seek to
harmonize assessment criteria in the EU.
Further, several countries have introduced fast-track pathways with rapid review processes for the reimbursement of digital health solutions, such as the Digital Health Apps scheme in Germany and a similar scheme in France, which launched this year.
In the U.K., there is no national reimbursement program for
digital products, but providers can incorporate software into care
provision once they have met the National Institute for Health and
Care Excellence evidence standards framework for digital health
technologies, which was updated in August to include the evidence
requirements for AI and data-driven technologies with adaptive
algorithms.
Focus for Developers and Users From a Data Protection
Standpoint
Guidance Under the General Data Protection
Regulation
Software technologies raise important data protection
implications, given the large amount of data collected and
processed. The relationship between AI and data protection at the EU level was included in the European Data Protection Board’s
Work Program for 2022, although guidance has not yet been
published.
However, data protection authorities around the EU continue to
publish guidance in this area. For example, a GDPR compliance guide
and self-assessment tool for AI systems was published in France in
September.
Similarly, the U.K. Information Commissioner’s Office
published guidance in October on the relationship between AI and
data protection. The guidance includes data protection principles
for AI systems and emphasizes that data protection should be
considered at the design stage of an AI project.
The European Health Data Space
In May, the European Commission published a proposal for a regulation on the European Health Data Space to regulate and facilitate electronic health data access and sharing across the EU.7
The two main objectives of the data space are to enable
individuals to easily access and control their electronic health
data and allow researchers, innovators and policymakers to use
electronic health data in a lawful, legitimate, trusted and secure
manner.
As the proposal touches upon sensitive areas of the EU member
states’ health care systems, these negotiations have been
challenging. For example, the European Data Protection Board and
the European Data Protection Supervisor have noted several
potential GDPR issues with the health data space, and have made
suggestions to clarify the interplay with existing data protection
laws.
What Legislative Changes Are on the Horizon?
The Regulation of AI
The European Commission proposed an AI Act in April 2021 that would cover all uses of AI and does not currently carve out technologies already regulated by other sector-specific legislation, such as medical devices. Negotiations are ongoing between the European Parliament, the Council of the EU and key stakeholders.
Earlier this month, the Council of the EU adopted a common
position on the text. The Parliament is scheduled to vote on the
draft by March 2023.
The AI Act takes a risk-proportionate approach, categorizing AI systems into four risk levels:
- No or minimal risk;
- Limited risk;
- High risk; and
- Unacceptable risk.
Medical devices will likely be classed as high risk and would therefore be subject to risk assessment, mitigation and appropriate human oversight. There is a concern that the AI Act may consequently require extensive evaluation and certification, over and above the requirements in the Medical Devices and In Vitro Diagnostics Regulations.
Additionally, in September, a joint industry statement was
published by a coalition of 12 European trade bodies calling for an
alignment of the proposed AI Act with existing, sector-specific
product safety legislation, such as for medical devices.
The coalition points out that the proposal risks significantly over-regulating the medical device industry. If not addressed adequately, the duplicative requirements could prevent a wide range of products and technologies from accessing the EU market.
The most recent version of the text indicates that the Council of the EU is in favor of a narrower definition of AI systems, which would not capture all types of traditional software. However, medical devices do not appear to have been addressed, and the obligations on high-risk AI systems have been maintained.
The U.K. has taken a different approach. In July, the U.K.
government published a policy paper on regulating AI. The
government proposes establishing a pro-innovation framework of
principles for regulating AI while leaving regulators with discretion over how the principles apply in their
respective sectors. The U.K. government’s white paper is
expected to be published in 2023.
Regulation of Medical Devices in the U.K.
The EU Medical Devices Regulation does not apply in Great
Britain, although it does in Northern Ireland, given the agreement
reached with the European Commission post-Brexit. As such, the U.K.
Medicines and Healthcare Products Regulatory Agency has been
considering the future U.K. regime.
In June, the agency published the U.K. government’s response
to the consultation on the regulatory framework for medical devices
in the U.K. and its intentions for the U.K. regime.
For software medical devices, the new regulations will include a
new definition of software, currently proposed as a set of
instructions that processes input data and creates output data.
The classification rules will be amended to include the
International Medical Device Regulators Forum Software as a Medical
Device classification rule, to allow for international alignment.
This is likely to lead to up-classification of software, which was
one of the areas where industry hoped the U.K. regime might provide
more discretion compared to the EU Medical Devices Regulation.
Further essential requirements will be introduced to assure the
safety and performance of software as a medical device. The
Medicines and Healthcare Products Regulatory Agency does not
propose to define AI as a medical device or set out specific legal
requirements beyond those being considered for software as a
medical device, as this would risk being overly prescriptive.
Liability for Defective Products
It has been suggested that the existing Product Liability
Directive makes it too difficult for claimants to succeed in their
claims, especially when products are new or technologically
complex.
As a result, in September, the European Commission published two
proposed directives:
- A new Product Liability Directive,8 which would make it easier for consumers to obtain compensation by expanding the regime’s scope and alleviating the burden of proof in certain circumstances. It amends the definition of a product to include software and digital manufacturing files, such that AI systems and AI-enabled goods are within its scope. It also adds data loss and psychological harm to the types of actionable damage and alleviates the burden of proof by introducing certain rebuttable presumptions; and
- A new AI Liability Directive,9 which aims to harmonize fault-based liability rules that apply to claims beyond the scope of the Product Liability Directive. It seeks to ensure that persons claiming compensation for damage caused to them by an AI system will have a level of protection equivalent to that enjoyed by persons claiming compensation for damage caused without the involvement of AI.
It is unclear when and in what form these proposals will be
finalized; they are quite contentious and may not be approved in
their current form.
Footnotes
1. Regulation (EU) 2017/745.
2. Regulation (EU) 2017/746.
3. https://health.ec.europa.eu/system/files/2020-09/md_mdcg_2019_11_guidance_qualification_classification_software_en_0.pdf.
4. https://digital-strategy.ec.europa.eu/en/library/artificial-intelligence-healthcarereport; https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1028766/GMLP_Guiding_Principles_FINAL.pdf; https://www.gov.uk/government/publications/g7-health-track-digital-health-finalreports; https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/the-nationalstrategy-for-ai-in-health-and-social-.
5. https://www.who.int/publications/i/item/9789240059184.
6. https://eithealth.eu/news-article/press-release-digital-medical-devices-launch-of-aeuropean-taskforce.
7. https://eur-lex.europa.eu/resource.html?uri=cellar:dbfd8974-cb79-11ec-b6f4-01aa75ed71a1.0001.02/DOC_1&format=PDF.
8. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2022:495:FIN.
9. https://ec.europa.eu/info/files/proposal-directive-adapting-non-contractual-civilliability-rules-artificial-intelligence_en.
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.