A product that plays music to help children brush their teeth for the right amount of time is rightly part of the digital health technology space, alongside a product that checks heart rhythms for arrhythmias like atrial fibrillation. Both are great products and have value to offer, but beyond their common mode of delivery, they are poles apart.
As the uptake of digital health tech increases, so too does the need for robust digital health assessment methodology. National bodies looking to create a digital health standard and innovators looking to gather evidence both need clarity on this issue, because so much rests on it. It's these evaluations of impact that enable healthcare professionals and the public to make informed decisions about digital technologies.
The two key challenges are: what, in the first instance, constitutes suitable evidence, and then, how does this evidence fit into standards which vary widely across the globe?
When it comes to evidence, how can you tell which digital health technologies are safe, which will deliver improved outcomes, and in what scenarios?
Initially, there was no international reference point to help with this challenge. Traditional healthcare approaches to evidence typically centre on randomised controlled trials (RCTs) or, more recently, high-quality observational studies capturing real-world evidence. But digital health presents challenges to these traditional evidential approaches.
Firstly, it is crucial to avoid categorising all digital health technologies as one homogeneous group of products or services that demand the same level of assurance. We describe this as the Proportionality Principle.
Secondly, digital health technologies tend to evolve from early-stage prototypes and proof of concepts into more stable and established products. How can assessment keep pace with these changes? We describe this as the Lifecycle Challenge.
Finally, these technologies need to show they are: as effective as, or more effective than, the equivalent non-digital process; straightforward to engage with in real-world circumstances; and capable of delivering material economic benefits. We call these additional factors the Evidential Range Challenge.
Given all this, how can we develop an assessment model that maintains a suitable balance between assurance rigour on the one hand, and practicality and achievability on the other? This brings us to the next key challenge: where does all this evidence fit into international standards?
Over the last few years, a number of frameworks and models have emerged that seek to address some of these issues.
- To move away from a reliance on RCTs, in 2019 the UK’s National Institute for Health and Care Excellence (NICE) created the Evidence Standards Framework for Digital Health Technologies (ESF). This tiered approach is the most established methodology to date. The framework groups products into tiers based on their functionality, each of which outlines what the developer must establish for their digital health technology. This model was the first to properly enshrine the Proportionality Principle.
- The self-certification standard ISO 82304-2 (Health software — Part 2: Health and wellness apps — Quality and reliability), developed as a new European standard, has recently been published. It asks organisations to demonstrate different types of evidence and indicates, based on certain intended uses, whether an observational study or a randomised controlled trial is required. The standard has yet to be tested at scale; however, it is a major piece of work and supports the direction of travel established through the NICE Framework.
- Created in Germany in 2020, the Digital Health Applications (DiGA) process requires healthcare tools to meet specified criteria under the Digital Healthcare Act. Innovators can apply for DiGA listing without an RCT and get temporary registration for a year. This gives lower-maturity technologies time to gather evidence and is the first methodology to begin to address the Lifecycle Challenge.
- ORCHA (the Organisation for the Review of Care and Health Applications) introduced an Adapted Evidence Standards Framework to its reviews in early 2021. This has evolved and been adapted following the application of the original NICE Framework to more than 1,700 assessments of digital health products, both in the UK and internationally.
It’s clear that models are emerging which seek to meet the challenges of digital health assessment. When considering which elements to include in the development of your own digital health assessment, it is important to consider the gaps and challenges in current approaches, but also the growing consensus that is emerging at a strategic level.
Tim Andrews is Chief Operating Officer at ORCHA.