When Apple announced ResearchKit in March 2015 and unveiled the first five apps built to test-drive the company’s new approach to clinical research, the news made a big splash. And indeed, in the weeks that followed, those apps posted incredible recruitment figures: the Parkinson’s mPower study, for instance, saw 5,500 sign-ups by day two and ended up with more than 18,000 participants.
But there were also questions, and now that the hype has died down and ResearchKit has become a day-to-day reality, those questions loom larger than ever. Chief among them: sure, ResearchKit can drive big sign-up numbers, but how long can an app keep those participants engaged? (Just ask Fitbit which is easier: initial adoption or retention.) The other big question: what about the roughly 80 percent of the global smartphone market that runs Android? When will ResearchKit be able to accommodate that population and remove that source of bias from the data it collects?
According to John Wilbanks, Chief Commons Officer at Sage Bionetworks, which runs the data back end for the five original ResearchKit apps and five more, there are currently about 30 ResearchKit apps live in the App Store (Apple showcases 15 of them). Earlier this summer, GlaxoSmithKline launched the PARADE study for rheumatoid arthritis, the first ResearchKit study led by a pharma company. And at least two studies, the MoleMapper melanoma study and a study of American fitness habits called AmericaWalks, have been or are about to be launched as cross-platform studies, with both iOS and Android versions.
On Monday at the Mobile in Clinical Trials event in Boston, sponsored by The Conference Forum, ResearchKit was a major topic, showing up in nearly half of the panels at the daylong event. And as presenters shared their experiences with the platform, including representatives from the PARADE, MoleMapper, and AmericaWalks teams, they hit both of those questions, and more, pretty hard.
“It’s good for us to take the perspective that ResearchKit is still a baby, and it’s still learning, and just like AI and everything else, we’re investing time and effort to see if it works,” John Reites, chief product officer at Thread Research, which helped develop the PARADE app and the Johns Hopkins EpiWatch app, said at the conference. “ResearchKit involves investing time. It’s not out of the box.”
How can ResearchKit apps keep participants' interest?
Though hard numbers were scarce, most of the speakers agreed that the next big challenge for ResearchKit will be keeping patients coming back long enough to generate the data a study needs.
“When you move to mobile, the new challenge is retention and engagement,” Wilbanks said. “So how do you keep people excited to do this, and how do you do the test design to support sustained engagement over time? After 30 days, they might put your study app away and start playing Pokémon Go. You need to have a plan for when that happens, and you have to have a transition to a more passive measuring protocol; otherwise you’re just going to stop getting data.”
Speakers floated a lot of possible answers to the retention problem, but they fell into two main buckets: giving participants a reason to come back, and making sure they don’t have any reason to leave. The second bucket means understanding the tradeoff between active and passive data collection: active data collection, like surveys and timed tests, is more valuable to researchers but demands more effort from participants, whereas passive data collection, like continuous activity monitoring, can make it easy to keep patients enrolled, at the expense of data quality.
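To make that tradeoff concrete, here is a minimal Swift sketch contrasting the two modes on iOS. The identifiers, question wording, and parameters are hypothetical, and it assumes ResearchKit 1.x’s Swift method names alongside Apple’s CoreMotion framework.

```swift
import ResearchKit
import CoreMotion

// Active collection: a one-question ResearchKit survey. Every response
// costs the participant time and attention.
let painStep = ORKQuestionStep(identifier: "painLevel")   // hypothetical identifier
painStep.title = "How is your pain today?"
painStep.answerFormat = ORKAnswerFormat.scale(
    withMaximumValue: 10, minimumValue: 0, defaultValue: 5, step: 1,
    vertical: false,
    maximumValueDescription: "Worst pain",
    minimumValueDescription: "No pain"
)
let surveyTask = ORKOrderedTask(identifier: "dailySurvey", steps: [painStep])
// Presented via an ORKTaskViewController each time the survey comes due.

// Passive collection: background step counting via CoreMotion. Once the
// participant consents, data keeps flowing with no further effort.
let pedometer = CMPedometer()
if CMPedometer.isStepCountingAvailable() {
    pedometer.startUpdates(from: Date()) { data, error in
        guard let data = data, error == nil else { return }
        print("Steps so far: \(data.numberOfSteps)")
    }
}
```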
“Even if they’re relatively short for the patients, these tests still are work,” Kara Dennis, managing director of mobile health at Medidata Solutions, which is working with GSK on PARADE, said. “So I think a big learning over time is how you can reduce that for the patient.”
But the first option — giving patients a reason to return — was the one most speakers focused on. It’s really just another way to say ‘patient engagement’. Dennis shared an anecdote from another team, the Mount Sinai group working on a ResearchKit asthma study, which had done some research into which patients stayed and which disappeared.
“Their observation was it tended to be the patients with the most severe cases of asthma [who stayed engaged], and those patients were finding that the app helped them manage their disease more effectively,” she said. “It was the patients that were finding clear disease management benefit. It wasn’t just altruistic, ‘I’m filling out these questionnaires for the greater good of society.’”
Wilbanks does think altruism is a reasonable motivator, however. He said that as his team designs apps for the future, explaining how each step helps people with a disease has proven to be a good motivator (e.g. “By taking this test twice a day, you’re helping us learn how Parkinson’s affects people”). He also sees a lot of potential in social connections as a motivator, but it’s a challenge to build them in a way that protects patient privacy.
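In ResearchKit terms, that kind of motivational framing can live in the instruction step that opens a task. A minimal sketch, with a hypothetical step identifier and placeholder copy:

```swift
import ResearchKit

// An instruction step shown before an active task, telling participants
// why completing it matters. Identifier and text are illustrative.
let purposeStep = ORKInstructionStep(identifier: "tappingPurpose")
purposeStep.title = "Why this test matters"
purposeStep.text = "By taking this test twice a day, you’re helping us learn how Parkinson’s affects people."
```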
Then, of course, there’s gamification. One app that launched this spring using the ResearchKit framework is Sea Hero Quest, a mobile game designed to help researchers establish baseline data for dementia research. Julian Jenkins, VP of Innovation Performance and Technology at GSK, said his team is also introducing some gamification into PARADE, giving patients real-time feedback on how their data sharing compares to other patients’ as encouragement to keep contributing.
MoleMapper falls short of real gamification, but it does aim to inject some fun and levity: to help users keep track of their moles, the app autogenerates celebrity-inspired names like “Moley Cyrus” rather than numbering them.
But Wilbanks thinks that Apple’s CareKit highlights one of the most promising avenues for keeping patients coming back: helping patients use the data they collect via ResearchKit in their actual care.
“We believe that a reason people will stay engaged, and a really evergreen reason people will stay engaged, is if they can start to take their data to their doctor,” he said. “Now if I walk in to my doctor today with a binder full of printouts of gyroscope and accelerometer data, my doctor would just stare at me and say ‘take that away.’ So the way that I hope CareKit gets used in our context is to give back your study data, including your free-text data, in a format you could take in to your doctor, like that tapping graph, and your doctor could say, ‘Hey, what happened here in the middle?’ And you could say, ‘Oh, that’s when I stopped going to yoga’ or ‘That’s when I had to travel for work,’ and you could work on a plan together to address that. That’s where we’re going to go with it.”
Whatever your approach to retention, Reites said, you have to plan it in from the beginning.
“We know that when you get these massive numbers of patients enrolled, like 10,000 patients, for them to keep coming back to the app it had better be pretty magical or provide something of really good value. So a couple of the things that I would encourage you to do is build engagement predictors into your design. ... What you’re looking for is: what’s the reason, passively and actively, for a person to come back to your app every week, month, and year? You have to really know what those infrastructures are and plot those out. Because the reality is, most of the drop-off we’ve seen is because there’s not a feature that makes the patient say, ‘I keep coming back to this app because it helps me with X.’ What this is forcing us to do is really think about the support features we put in patient support or adherence programs, and start to put those into clinical research studies, so we’re giving patients back just as much as we’re taking from them.”
How is ResearchKit making its way to Android?
We wrote almost a year ago about ResearchStack, the initiative led by Open mHealth to create an Android analog to ResearchKit. Now the first ResearchStack app, MoleMapper, is about to launch. Around the same time, clinical trials company TrialX began its own effort in the same vein, called ResearchDroid, and launched a test app on both platforms called AmericaWalks, which collected step data from about 200 participants and was mainly intended as a cross-platform proof of concept.
“Last year, when Apple released the ResearchKit platform, we went to investigators and showed it to them to get responses on what they wanted to do with it,” Chintan Patel, CTO and co-founder of TrialX, said at the conference. “The big thing we heard from investigators was that they didn’t want to do a study that was iOS-only. So we went back to our bench and started developing an Android port of it, called ResearchDroid, which we released in October, and based on that we released the AmericaWalks study.”
Patel and Dan Webster, a research fellow at the National Cancer Institute who led the development of MoleMapper, both laid out some of the challenges they faced in creating a cross-platform app.
“One of the first challenges we came across was matching the UI norms across the two platforms,” Patel said. “iOS users are used to back buttons at the top of the screen, whereas on Android people prefer to use a physical back button [for instance]. These are small details, but they become important when you try to go cross-platform.”
But a bigger problem than UI was hardware compatibility, as many ResearchKit apps use phone sensors including the accelerometer, gyroscope, and camera.
“As we go forward and try to build things where there is fragmentation, like at the level of the camera sensor and how deep it is, being able to tell the depth of field, we can do something like that with Apple phones, but we won’t necessarily be able to autodetect moles [on Android] because there’s so much variation,” Webster said. “The other thing is the integration of genetic data into this app. Currently there’s a plug-and-play module with 23andMe for ResearchKit, but not for ResearchStack.”
Patel said there were also problems syncing data on the backend and integrating with different default mapping apps. And the step counts the study returned differed significantly between iOS and Android participants, though it’s hard to tell how much of that was a data collection problem and how much was a demographic difference.
Even privacy and security are different between the two platforms.
“In terms of some of the data challenges, iOS has HealthKit, and there’s a permission step before a participant gives access to the data,” Patel said. “With Android, there is no such thing; when you install the app, it grants all those permissions. In designing the app you have to consider these things. You need to think about security and privacy on Android. People were concerned; one concern was ‘Does the data go to Google?’ We had to get data from the raw sensors instead of from Google Fit, because Google Fit does send data to the server, which is a no-no for us.”
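The iOS permission step Patel describes looks roughly like this in code. A minimal sketch, assuming a study that only reads step counts (the type requested is illustrative):

```swift
import HealthKit

let healthStore = HKHealthStore()

// On iOS, HealthKit data sits behind an explicit, per-type consent prompt.
// This requests read access to step counts only.
if let stepType = HKObjectType.quantityType(forIdentifier: .stepCount) {
    healthStore.requestAuthorization(toShare: nil, read: [stepType]) { success, error in
        // `success` means the prompt flow completed; as a further privacy
        // measure, HealthKit never reveals whether the user granted read
        // access. Queries simply return no data if the user declined.
        guard success, error == nil else { return }
        // Query step samples here.
    }
}
```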
Wilbanks summed up the general mood of the conference around Android integrations, which are still a ways away from becoming the norm.
“Android also creates a lot of complexity,” he said. “You have a large number of versions of the underlying operating system. You have different chipsets for the accelerometer and gyroscope and GPS. You have different pedometers, different altimeters. You have to work out how you’re going to normalize step data across different phones, for instance, and how you’ll normalize accelerometer data across different manufacturers’ handsets. A Samsung handset and an LG handset not only have different chipsets inside them, but you have variations even within those platforms. So Android’s hard. But everyone knows that we have to get there.”
Other challenges
Speakers laid out a number of other challenges beyond retention and Android compatibility. Medidata’s Kara Dennis highlighted, for one thing, the need for a larger conversation around validation and standardization of the sensor data being returned by ResearchKit apps.
“How do we ensure that each data element we’re gathering is valid as a clinical observation?” she asked. “I think that’s a question that’s been asked throughout the day. How can you validate sensor data? There’s a question of technical validation and a question of clinical validation, and both of those are very relevant in our experience here...For a walk test, the instructions are to place the phone in your pocket and complete the walk test. How do we know that the data in that particular exercise is valid, and that it’s consistent and reliable and accurate?”
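For reference, the walk test Dennis describes maps to one of ResearchKit’s predefined active tasks, which records raw sensor streams as the participant walks; those streams are exactly the data whose consistency is in question. A minimal sketch, assuming ResearchKit 1.x’s Swift method names, with illustrative parameters:

```swift
import ResearchKit

// ResearchKit’s predefined fitness-check task: a timed walk followed by
// a rest period, recording sensor data throughout.
let walkTask = ORKOrderedTask.fitnessCheck(
    withIdentifier: "walkTest",        // hypothetical identifier
    intendedUseDescription: "Place the phone in your pocket and complete the walk.",
    walkDuration: 360,                 // a six-minute walk, in seconds
    restDuration: 60,
    options: []
)
```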
John Reites, from Thread Research, suggested that the biggest issue for pharma might just be convincing higher-ups to buy in.
“The majority of people in your company do not know what ResearchKit is,” he said. “Bar none, phase one. We have an education problem, in that we have to spend time educating people in our organizations about what ResearchKit is and that it’s not this diabolical app out to take their job away. Because I have literally heard that over and over again across the globe.”
He gave some tips for facilitating that education process.
“One of the things we found really helpful was actually helping these companies walk through building a ResearchKit app using the same principles they would use to start a startup. So we took this into these companies, and as part of this ongoing process of developing an app for use in a ResearchKit study, we had the project teams walk through this and think very strategically about ‘Why am I trying to create this app?’ ‘What’s the value to the patient?’ ‘If I put it out in the world, does the patient actually care?’”
Then there’s the ongoing challenge of managing the sheer volume of data that comes in.
“It’s more like being on the Twitter team than being on a traditional clinical trial data informatics team,” Wilbanks said. “You wake up every morning to a firestorm of data in the face. So we need more of a contemporary analytic approach to the data than a traditional approach.”
Finally, even the five studies that have now been out for a year and a half are just starting to dive into how to share the data they’ve collected with the wider research world. Wilbanks talked about how Sage is building a system to make sure those who want the data for research can access it while participants’ privacy is preserved. In all of its apps, Sage asks participants whether they want to share their data more widely with the world. Most say yes.
“In all of our studies, on average, between 65 and 80 percent want their data shared this way,” he said. “This is the beginning of an ecosystem. There’s the beginning of a dialogue, and also the beginning of an analytical community of practice. And we built all of this relatively cheaply, because we started from the beginning from a design perspective where the researcher and participant were closer to partners. Pharmaceutical companies and academic medical centers are downloading the data now.”