Artificial Intelligence in all Aspects of Care Related to Remote Patient Monitoring
Video Transcription
Well, good morning, ladies and gentlemen. Welcome to this session about artificial intelligence in all aspects of care related to remote patient monitoring. As we have heard this morning, our designated future is remote monitoring, to tackle the shrinking healthcare workforce and the growing elderly population, et cetera. And we have a great panel here. We will do a short tour de table, as we call it in Europe. I will start off with a short introduction of myself. My name is Nico Bruining. I come from the Thoraxcenter in Rotterdam, the Netherlands, which is part of the Erasmus Medical Center, the largest university hospital in the Netherlands. My background is in computer science, and I am the head of the department that manages all the clinical and experimental data of the Thoraxcenter, which covers thoracic surgery and cardiology. I have worked there for the past four decades. I have also held a position in the European Society of Cardiology for a long time, and for the past three years I have been the editor-in-chief of the European Heart Journal - Digital Health. That's one of the reasons I am here at this meeting. So please, let's go from left to right with an introduction of everybody here. All right, good morning, Nico, and good morning, panel. My name is Leo Rappellini. I lead the R&D team for the cardiac diagnostics business at Medtronic, so I'm responsible for products like implantable cardiac monitors such as LINQ and LINQ II. Those are products that generate a lot of data, a lot of accurate data, over a long period of time. Where AI fits into this is that we have worked to make sure we have appropriate AI algorithms to reduce the workload of physicians. So I can share some of that experience with the rest of the panel and with the audience. Yes, I'm Steven Browning from the FDA. I am an assistant division director for one of our two cardiac diagnostics teams.
We have it basically split between rhythm and rate; rhythm is my counterpart Jenko's team, and mine is hemodynamics and heart failure. So any blood pressure, blood flow, heart failure, disease prediction, things like that. My team does the actual reviews of products as these manufacturers want to change things and add new features, so a lot of on-the-ground experience with these products. Great. Hi, my name is Amit Rushi. I'm with Rhythm Science. We are a cardiac data platform company, really solving for workflow needs across hypertension management, heart failure management, ambulatory cardiac monitoring, and CIED rhythm management. My background is in diabetes, kidney disease, dialysis, and heart failure. I was at Medtronic in the diabetes division, where I established a data science team incorporating a lot of AI, and that's where I got into workflow optimization and clinical decision support. Hello, everyone. My name is Baptiste Lefebvre. In the past, I worked on vision restoration, and marginally in computational neuroscience, during my PhD. Today, I hold a position as a clinical scientist and data scientist at Cardiologs, now part of Philips. I especially work on clinical trials, ensuring that we have all the scientific grounding to demonstrate the efficacy of our AI models. Hi, I'm Bleron Baraliu, CEO at 91Life. We're a data science technology platform that's aiming to push the frontiers of precision medicine. Our first platform is an ecosystem for automating remote monitoring of implanted cardiac devices, ECGs, and wearables using artificial intelligence in addition to modern cloud-native technology. By background, I'm a mathematician; I spent almost 20 years on Wall Street as a trader and portfolio manager in derivatives, where we used a lot of what we used to call mathematical modeling and data science, and what people now like to refer to as AI. Thank you. Hello, everyone.
My name is Chan-Ho Lim, and I'm the assistant director of the AI and digital health program at the Heart and Vascular Institute at Tulane University. We are a multidisciplinary team that studies multimodal AI models, from wearables and imaging to the EHR. Thank you. Well, thank you all very much. So we have a great panel, starting with people from industry, data analysts and data scientists, but also the regulators, which is a very important factor in this. If I may start off with the first question: what is the current state of remote monitoring? And if I may start with you, Leonardo, what do you have, and what do you see coming in the near future, from a company perspective? Yeah, so remote monitoring obviously has exploded, especially after COVID, and it's here to stay. When we talk to physicians, and my experience is mostly in the cardiology space, the clinics and the clinicians tend to be overwhelmed with the data. So there is an important role we can play, as manufacturers of the products or as companies that manage the data, to really pinpoint and make sure we provide accurate data that can drive appropriate clinical decisions. And the more we can do while keeping accuracy and the clinical decision criteria in mind, the more we can really help remove a lot of the noise. AI obviously is a prime example of the tools we can use to remove a lot of the false positives and to potentially add decision paths that help the clinician. But we're just at the beginning. Obviously there's a lot of data, and a lot of these tools can help with the workflow. Thanks. Baptiste, what is the point of view of Philips in this respect? So for Philips, I think we have several domains that we are working on. The historical one is Holter monitoring, for example. In this domain, false positive reduction in particular was really hard work.
It helps reduce the time spent by clinicians or service providers to analyze the Holter recordings. But we are now focusing more on wearables, obviously. That's the new trend. And we see a lot of different sensors coming onto the market: connected scales, for example, blood pressure monitors, oximeters, thermometers. So different sources of data that need to be aggregated. There's one thing we have heard during the last couple of days: the enormous amount of data remote monitoring can deliver to the clinics. We now see other companies and other platforms coming up. But what can AI do, and what can AI not do? And if I may ask Anshu, can you tell us a little about what AI can provide us with, and what it cannot, at this stage? Yeah, so what can AI provide to us today? Before we dive into clinical topics about what AI can deliver today, I think it's difficult to talk about remote patient monitoring without the continuous ECGs or PPGs that are out there today. For any large-scale trial, you're going to be collecting tons and tons of data, and the amount you store is all part of the cost of running this kind of program. There's a lot of noise in these types of signals, and where AI can be really useful is to detect the areas of noise, remove them, and really highlight the areas where we can solve clinical questions from these signals, instead of diluting the problems we want to solve with a lot of data. It can help us focus and drive down the cost of running these types of studies. If you look at the industry and what is now available on the market, everything has to be regulated and tested, and to be trustworthy, with explainable AI and things like that. And if you look at the FDA site, we see some 400 software products approved for radiology. So imaging is doing very well. And if you look at the field of cardiology, there we are in second place, with around 70.
So there is a bit of a shortage, let's say, in developments beyond atrial fibrillation, because that's the big one, and the other part of cardiology is also within imaging. So how is the FDA looking at this? I mean, the FDA is looking at tons of different products. It's one of those things where I can't tell you all the companies we've talked to, because it's confidential until it's on the market. But we just created a new regulation for machine-learning-based disease prediction or likelihood. We're hopeful that that regulation offers a path, and if you look at those special controls, it tells you what we're looking for in machine-learning-based systems. But for these alarm reductions, or making things more reasonable for the doctors, I think we're mostly looking at: have you thought this through? Are you really only reducing false positives, and not generating false negatives as you reduce false positives? It is what you also said, Leonardo: high quality data. But what is really the definition of high quality data? Is it also important that if you have, let's say, one digital biomarker that you can measure with a wearable, you also know the context of it? For example, with blood pressure measurements, it's important to know whether they were taken standing up or lying down, or whether there is a white-coat effect; that can all influence them. And I can also imagine that with a rhythm disturbance, if you're watching television and you have a heart rate of 150, your Apple Watch says, hey, you have a high heart rate, but are you looking at something which is emotional? What is the context of that? Is that a false positive, a false negative? How do you define a false alarm? Yeah, I mean, it's a good question. I think ultimately, when I talk about high quality data, it speaks to two different dimensions of the data. One is how you collect the data. So what are the sensors that you're using to collect those data?
Are those sensors in an implantable device, or are they in a wearable device? How much does patient variability count? And how compliant is the patient with the type of measurements that need to be done? So the source of the data is the first step to having high quality data. But then the next step is: whatever source of data, whatever sensor you use to get the data, how do you manage that data? How do you validate whatever algorithms you're using, whether deterministic algorithms or AI algorithms? And that's where, first of all, FDA offers a first bar of guidance on how you have to do it, but ultimately it's the performance you can show, in terms of sensitivity and specificity for the specific clinical issue you're trying to address, that is the critical question. So I really see it as two steps: the quality of the data coming from the sensor, and then the sensitivity and specificity for the question that you're asking, and what validation you have to prove those numbers, basically. Speaking about testing, validation, and running trials, I'm looking to Chan-Ho again. So we use traditional statistics: we have the power calculation, and then you determine your outcome and calculate the number of patients you want to include. But with the training of an AI model, that's completely gone. In the journal I see all kinds of studies submitted, with small groups, with larger groups. They take a large part of the data set for the training of the model and a small part for the testing. But external validation, that's something we require today. How do we have to deal with it? It's a complete paradigm shift from traditional trials to digital trials. Yeah, where do I start with this? Obviously, as everybody knows, to train a good AI model, you need a lot of labels, good quality labels, and a lot of data.
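The sensitivity and specificity figures the panelists keep returning to reduce to a simple confusion-matrix computation over adjudicated labels. A minimal sketch; the labels and predictions below are invented for illustration:

```python
# Hypothetical sketch: sensitivity/specificity of an alert algorithm,
# computed against clinician-adjudicated labels (1 = real event).
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented adjudications vs. the algorithm's calls.
labels      = [1, 1, 1, 0, 0, 0, 0, 1]
predictions = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(labels, predictions)
```

The point Steven makes about false negatives is visible here: any change that lowers `fp` must be checked against `fn`, because both move the headline numbers.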
And oftentimes, when you want to make a prediction model for certain kinds of diseases or comorbidities, you want a high incidence rate, and you look for ways to produce high incidence rates in your population. But I've also experienced that sometimes you do a clinical trial like this, and the last thing you want is to develop a model with a high AUC for something and then have it not be generalizable, because then it sits in your lab and you've wasted a lot of money and time, and the team's efforts and the patients' efforts and everything. So I think it's really important that your data really resembles real-world scenarios and doesn't only stay there, and how you stratify your population into your training, validation, and testing sets is a really important topic. Can you give a little bit of an estimate of the sizes you need? For instance, for training on ECGs, you need hundreds of thousands of ECGs, and sometimes you see much smaller populations with other kinds of pathologies. So what do you call enough? So there are developments in new AI models that help us reduce the amount of training data needed; take image segmentation, for example. We used to need a lot, like thousands of slides or scans of MRI images, to train a good segmentation model or other AI methods for it. But now there are lots of foundation models and pre-trained models out there. For example, Facebook has something called the Segment Anything Model. It wasn't trained on any medical images originally; I saw their data set, and it's pictures of corgis and everything. We applied it to some of the MRI scans that we have, fine-tuned it to our scans, and it worked really well with a very small number of samples. So I think the number of samples you need for different kinds of AI models really, really differs. For some things you don't even need AI.
You use AI models to make these types of predictions and then you end up finding that statistical models work just fine too. So I think it really depends on the context of what problem you're solving and what type of data you're dealing with. Okay, Bleron, you're running a company which is a cloud-based AI dashboard, and you take data from all kinds of different companies, like CardioMEMS, et cetera. Can you tell us a little about what your company is all about and how it helps healthcare? Sure. First of all, I would say that I'm much more thoroughly familiar with remote monitoring of implanted cardiac devices, where we have grown our expertise. But as Leo said earlier, I look at the AI opportunity here in sort of three verticals, if you will. The first one is really just basic: making the technology better. I'm talking about some basic machine learning models that can improve the user experience or be used to go from unstructured data to structured data. The second one is more around identifying, let's say, disease issues and problems, and this goes to the heart of sensitivity versus specificity. Because we know that if you want to increase sensitivity at a low degree of specificity, the problem is that you're going to tune out the clinicians. And this is, for example, where Medtronic with LINQ II has done a great job, at least based on their presentation, where they've reduced the false positives around AFib by more than 30 or 40%, I think. And then the third aspect, the third vertical that I see, is predictability. I think from our perspective, remote monitoring of implanted devices presents a goldmine of opportunity to use AI, or mathematical and statistical modeling in general, because, first of all, you're getting continuous data.
Even when you're not getting data, you know that everything is normal, for instance, because these devices are programmed to trigger any time there is an aberration. Secondly, you're getting very low noise, because you're getting EGMs instead of ECGs. And then you can also couple it with all kinds of other data, whether from the EHR or 12-lead ECGs. You can also, for example using LLMs, look at all the notes from the doctor, or various doctors, and you can create some sort of standard of care. And then because of the continuity of the data, and because, as Chan-Ho said before, you need a high incidence rate of failure, with any kind of mathematical modeling of a continuous function, if you have aberrations, it's much easier to start building the pattern recognition. And because we're dealing with basically the sickest patients in our population, you've got a very useful playground where you can start to train models. So I think this is what was attractive for us, and particularly why we wanted to go into heart failure and implanted cardiac devices, which generally treat sort of irreparable heart disease. You look at the trajectory of the disease, so you learn over time; you start to understand the impact of treatment or care as well, but fundamentally the impact that the disease has had on cardiovascular function. And then you come up the ladder: you deal with the sickest patients, then you go to the patients that need monitoring, those that generally have ILRs, then you move up to the Holter monitors, and so on and so forth. Ultimately, the idea is that you build models that address, if we look at the US population, something like 85 million people with cardiovascular disease. And I think that's when you start to also build a bridge to other diseases, whether it's pulmonology or diabetes, and so on and so forth.
So from our perspective, I would say we're still in the very early days of doing something meaningful with AI. When we actually started, believe it or not, we looked at training AI models with very few patients, hundreds, not thousands, and we could easily get to sensitivity and specificity in the 70s or 80s, even for prediction. I was asking our AI team: tell me if this patient is going to show an episode of AFib in the next remote monitoring report. And I could get something like 83% sensitivity with specificity in the 70s. Why is this important? Because I think we've come to the conclusion through this remote monitoring, and you asked what the state is today, that something upwards of 75 to 90% of all alerts that come into the platform are not really actionable. And this is really difficult for the clinical team. This is why the HRS guidelines for remote monitoring have concluded that you need three nurses for a thousand patients. And that's just not sustainable. That's why I think one aspect of this is that companies like Medtronic and Boston Scientific and so on have to do a better job and use AI to filter, first of all, the alerts they send, because the moment you send an alert, it becomes a liability. On our side, we can use classifiers to, let's say, take out all these false alerts, but then it becomes a liability on our side as well, and we would potentially have to go through a 510(k). And the problem is that when, for example, Medtronic does this with LINQ II, it's device-specific; they go through the 510(k), they make a change, and now we have to apply for the 510(k) for, let's say, LINQ 3, and so on and so forth. So it's a very tough balance. AI can do a lot of help here, but I think it really has to be a multifaceted, concerted effort between all the players, which includes the FDA, device makers, and clinicians.
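The alert-triage idea described here, a classifier that filters device alerts before they reach the clinic while protecting sensitivity, can be sketched as follows. The scores, threshold, and alert records are all hypothetical, not any vendor's actual pipeline:

```python
# Minimal sketch of alert triage: a classifier score gates which device
# alerts are forwarded to the clinic. All data here is invented.
def triage(alerts, threshold=0.3):
    """Split alerts into (forward_to_clinic, suppressed) by model score.

    A deliberately low threshold favors sensitivity: only alerts the
    model scores as very likely false are suppressed.
    """
    forward = [a for a in alerts if a["score"] >= threshold]
    suppressed = [a for a in alerts if a["score"] < threshold]
    return forward, suppressed

alerts = [
    {"id": "a1", "score": 0.92},  # likely real AFib episode
    {"id": "a2", "score": 0.10},  # likely noise or artifact
    {"id": "a3", "score": 0.45},  # uncertain, so forwarded for review
]
forward, suppressed = triage(alerts)
```

The liability point in the discussion maps directly onto `threshold`: every notch you raise it suppresses more workload but converts some borderline alerts into potential false negatives.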
So across the whole spectrum of care, for us to come to some kind of reasonable conclusion on the cost-benefit of how much data is too much data, and how much risk we take, because we can't go for 100% sensitivity. We just don't have the manpower, and our healthcare system is already too expensive. I fully agree with you that liability is an important and very difficult topic to tackle. Amit, you are from... Rhythm? Rhythm Science. Science, yes; I would have said Rhythm Society, but it's Rhythm Science. Can you tell us a little about what you are providing in AI support for remote monitoring? Yeah, we are very much trying to do something that's actionable for the cardiologist and her staff to manage these patients in the ambulatory sense. I can dig into the heart failure aspect as an example. There are some patients that actually have those implantable pulmonary artery pressure sensors generating data; it's a diagnostic device, not a therapeutic, so it generates data around which more clinical context is known, and more needs to be acquired, in order to determine: what should I do with this patient next? It looks like there's a pressure drop; is this a real, true alert or not? That premise is where we get an opportunity to add on more devices to clinically contextualize any such alerts from those data-generating devices. So we've been able to move forward with making sure that when an alert comes from this kind of device, other attributes from routine engagement with the patient, routine data capture from a blood pressure cuff, maybe also a weight scale, where the history of that patient can be applied for baselining, can contextualize this heart failure alert for potential decompensation, and then trigger a lower-acuity task for the clinical staff: an outreach with certain specific questions.
So we really look to apply these kinds of technologies in something that's immediately actionable, that can help with efficiency needs, and that doesn't disrupt their perception of the value of the device they implanted. Because for those patients that actually have those implants, if the clinicians are not deriving enough value to better manage them outside of the hospital, outside of the ED, for heart failure, then they're not likely to put that implant in the next patient that is probably in need of it. So we really focus on where we can make that impact immediately, and show them that this is something that can be done efficiently. Even when you get this alert, we don't necessarily want to say, hey, that wasn't a real alert. It's more like: hey, that was a pressure drop; however, other pressure readings show that this patient may just be experiencing a one-off. So when you do have your nurse call that patient, just inquire about that, and then go ahead and adjudicate that alert appropriately. So that's where we've learned to really listen to our partners, our customers, to figure out: what else would you need? When you get this, what else do you go through? How can we actually automate that for you?
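The baselining workflow described above, comparing a new reading against the patient's own history before escalating, can be sketched roughly like this. The readings, threshold, and function names are assumptions for illustration, not Rhythm Science's actual method:

```python
# Hedged sketch: flag a pulmonary-artery pressure reading for nurse
# outreach only if it deviates from the patient's own recent baseline.
from statistics import mean, stdev

def needs_outreach(history, new_reading, z_limit=2.0):
    """True if the reading is far outside the patient's baseline."""
    base, spread = mean(history), stdev(history)
    if spread == 0:
        return new_reading != base
    return abs(new_reading - base) / spread > z_limit

pa_history = [22.0, 23.0, 21.5, 22.5, 22.0]  # mmHg, routine readings
one_off = needs_outreach(pa_history, 23.0)   # small wobble within baseline
escalate = needs_outreach(pa_history, 28.0)  # clear deviation from baseline
```

Per-patient baselining is the design choice being made here: a fixed population-wide threshold would fire on patients who simply run high, while a z-score against the patient's own history captures what "a real pressure drop" means for that individual.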
Video Summary
The panel discussed the importance of artificial intelligence (AI) in remote patient monitoring within the healthcare sector. With a focus on cardiology, the conversation touched on the challenges faced by clinicians in managing large volumes of data and alerts from monitoring devices. Various experts from industry, regulatory bodies, and technology companies highlighted the potential for AI to streamline workflows, reduce false alarms, and improve the accuracy of clinical decision-making. Companies like Medtronic and Philips are leveraging AI to enhance the performance of their devices and platforms, while startups like Rhythm Science are developing AI solutions to provide actionable insights for managing heart failure patients. The discussion also delved into the regulatory aspects of AI in healthcare, emphasizing the need for rigorous testing, validation, and transparency in AI models to ensure patient safety and regulatory compliance. Overall, the panel concluded that AI has significant potential in transforming remote patient monitoring by enabling more efficient and effective healthcare delivery systems.
Keywords
artificial intelligence
remote patient monitoring
cardiology
workflow
Medtronic
Philips
Rhythm Science
heart failure patients
regulatory compliance
healthcare delivery systems
HRX is a Heart Rhythm Society (HRS) experience. Registered 501(c)(3). EIN: 04-2694458.