What is New About Our Old Tools in Electrophysiology?
Video Transcription
Good afternoon, everybody, and thank you for joining us for this session. So obviously we have quite a large panel here, and, you know, we aimed for a highly diverse panel of highly diverse opinions, experiences, and whatnot. We apologize, of course, for the size of the panel, but we're hoping to get a lot of different opinions here, not just from ourselves, but from you as the audience as well. Now our goal today is really to touch on this concept of what is new about our old tools in electrophysiology, but it really is much broader than that, and that's kind of the goal of such a diverse group of opinions. But in order to align all of us together, I'm going to let each of my colleagues introduce themselves, and in addition to their background, the one question I'd like them to answer is what they feel is the coolest, newest, most interesting advance they think they're seeing in electrophysiology in the past year. Mintu Turakhia, I'll hand it to you.

Suraj, thanks. Great to be here. My name is, well, before I even introduce myself, I'm going to add to Suraj: we have eight guys on stage, and I was told by the organizers that the intent was most definitely not a manel. There were speaker changes at the end. So on behalf of everyone, we apologize that you have to stare at eight men, but we're going to do our best to keep it interesting. I'm a cardiac electrophysiologist by training, also with training in computer science, and created wearables early on, had an EP academic career, loved it, and currently, for the last two years, I lead medical science product innovation at iRhythm Technologies. What excites me, and I'm going to speak in an abstract form, if I may, rather than specific to a technology, is the continuum that we've created between what we used to dichotomize as consumer technologies and medical ones. I think we found a happy medium in a gray zone where they all interface with each other, including on the regulatory side. So I think that just unlocks and opens so much potential for us.

Hi, everyone. My name is Dr. Ambarish Pandey. I'm a cardiologist at UT Southwestern in Dallas, Texas. I'm a general cardiologist by training. I practice mostly seeing patients with heart failure with preserved ejection fraction. On the research side, I am an epidemiologist and also do a lot of work with AI and ML-based risk prediction models, leveraging data science to get more information about future risk of heart failure and other cardiovascular conditions. And to Suraj's point, I think the most exciting piece that I see now in EP and in general cardiology is the amount of information that we can gain from a simple 12-lead EKG, or even a one-lead EKG now. I was just talking to Subbu here and was telling him that when my patients see that I can see their EKG on my stethoscope, they feel so much more satisfied by that encounter of care than they would, or they used to, when I did not have that technology. So I think technology has brought us closer to patients and has given us more opportunities to better understand our patients, and that part I find really exciting. Thank you.

Dan Cantillon. I'm an EP cardiologist, formerly at Cleveland Clinic for 16 years, currently the chief medical officer at Masimo, a medical device company based in Irvine, California, that specializes in non-invasive monitoring and is one of the largest manufacturers of pulse oximeters globally.
The thing that excites me the most about where we're headed is this opportunity to have wearable devices, wearable sensors that are non-invasive, that are blended technologies. We're taking advantage of electrical sensors, acoustic sensors, gyroscopes, accelerometers, and thermistors, and really encompassing a lot of the human signal in a very simple wearable device.

Hi everyone, I'm Josh Lampert from Mount Sinai Hospital in New York. I'm an electrophysiologist and I'm the medical director of machine learning for the Mount Sinai Fuster Heart Hospital. My work entails developing deep learning algorithms based on the ECG, echo, and large language models, as we develop models both for diagnosis as well as for risk stratification. I think one of the most exciting aspects is that we're starting to actually see some hard outcomes in terms of the benefits of some of these technologies, whether applied to the ECG or other wearables, whether inpatient or outpatient, and I think that that's very promising, although, as I guess we'll discuss, there's going to be some caution as well moving forward.

Hey everybody, Subbu Venkatraman here. I'm the CTO of Eko Health. For those of you who haven't heard of Eko Health, we build electronic stethoscopes, which are used by over 500,000 providers around the world right now, as well as FDA-cleared algorithms for cardiac disease detection. I guess when I think about what excites me the most right now in recent tech, it's about the democratization of healthcare to some extent, where we can bring every nurse and every family physician to the level of performance of an expert when it comes to the detection of cardiac disease by using digital tools and by using AI. And I think that is a really, really powerful step, and that helps not just in the U.S., but it helps internationally as well, and I think that's really, really cool.

Thanks, Subbu. My name is Ben Green, and I appreciate you mentioning family physicians because I am a family physician, and sometimes I feel like an imposter EP physician, but I'm SVP of services at AliveCor. AliveCor, as you may know, makes consumer and personal ECG devices. We also now have devices for clinical and enterprise use. And what excites me about what we're seeing in the EP world and beyond is really the realization of moving care upstream, moving care into patients' hands for early detection and risk prediction that is driving behavior change. We see it every day, and we see the ability of tools in patients' hands driving change that is ultimately helping to lead to improvements in chronic disease.

Hello. I'm Dan Musat. I'm an electrophysiologist at the Valley Hospital, the director of electrophysiology research at the Valley, and an end user of all your products. And I'm very, very happy, and I'm very excited about the rapid pace of change in the EP world. In the past two or three years, you can see an exponential boom of AI and of technologies that will definitely help us be better doctors overall.

Thanks, all of you. And I realize I forgot to give you my background. I'm Suraj Kapa. I'm a cardiac electrophysiologist at Mayo Clinic in Rochester, Minnesota, EP lab director there, and I also work with a number of you folks as well as multiple startups and have been a CMO in various roles at other companies over the years.
So we've heard a lot of commentary on what's actually occurring in our field of electrophysiology, where we actually have less is more, namely that with less and less input information, we can glean more information about the patient through these predictive algorithms. We're also getting more with less in the sense that we can actually get more and more out of a patient with less healthcare engagement, with more devices that are actually facing the family medicine side or even the home side, with consumer engagement. But one of the key questions becomes: how do we integrate all of this into what we do in electrophysiology, whether it be in the diagnostics realm or in transforming care, and what is the goal actually there: to make it cheaper, to make it more effective, or to increase longevity? And I'd like to hear from anybody on the panel what their thoughts are when we think about this Venn diagram of competing interests, less cost, more people reached, and better diagnostics or better outcomes, and how they see these technologies playing a role in what we can expect in EP in the next five to ten years.

So I'm going to start, okay? I think you dotted the i's, okay? We need all three of them, okay? You have to find the balance between the three. Because one is cost, you know; as a community hospital, they look at us all the time, how much we spend, how many catheters, how much we do, and of course, this matters. So the cheaper, the better. Patients, you know, the number of patients, yes. You know, it's an acute problem right now with access to physicians for patients. So we have to find some help in AI to get patients better access to the physician they need. And, you know, also better, yes, definitely we need better. Because, you know, with the AI, we have to try to improve the tools for a better positive predictive value, you know? So whenever the AI gives us a possible diagnosis, you know, it has to make us believe in it, so we go to the patient and discuss it: look, you might have this, let's look into it further. And also the insurance companies, okay, will want us to show that, you know what, yes, the AI has a very good predictive value.

If I can just jump in from the industry side, you know, representing the perspective of a company that builds sensors, I think it's a really important question, because, you know, can we build a $3,000 wearable sensor that can tell you, you know, all of these beautiful, you know, physiologic parameters? Yes. Is that practical? No. And so I think the answer to your question might be that it depends on the patient population. Like, for example, the heart failure patient population that drives a lot of these costly readmissions, you know, a lot of the data that's coming out shows that it's very simple things. One study that I saw found that 70% of the patients had subclinical neurocognitive issues at discharge that were not diagnosed. And when you do a simple screen for that and you just refer those patients into memory care or whatever, simple things like, you know, taking their medications all of a sudden, you know, drive that readmission rate down. So in that instance, like, you didn't need a lot of technology to solve a very simple, you know, clinical or workflow problem. So I think really that's where the devil's in the details of, you know, starting with, okay, what problem are we trying to solve for this patient population?
And what is the minimum, you know, technology that we need to accomplish that? Because that's going to be associated with, I think, the cost.

I agree with Dan. You do not need an Apple Vision Pro for sensors. Like, you just don't need it. Like, you can do things with low cost and other things. Going back to your question, Suraj, I think there's two issues. One in healthcare is we haven't figured out how to price AI. We don't know how to price it, how to value it. Is it a feature or a product? We can argue that all day long. Either way, if it's a feature, then is it an add-on code? Is it NTAP? Is it this or is it that? We don't have stable AI reimbursement. So then you say, okay, I can't featurize this for reimbursement, then I'm going to make it a product. Then you've got to tie it to hardware. You lose the agility and mobility of the AI. You know this from portfolio companies that Mayo has. The second issue is the migration and portability of data. How many of us are stuck with Google or Apple iCloud because we don't want to go through the trouble of moving our photos to another cloud, because it's just too much work to go to a lower-cost solution? We all have that, right? We're getting to the point where patients don't have the ability to have portability of their data, and not just a copy of the EHR, but your sort of life journey of every single data point at its granular level going with you. That is a problem for which there's no business incentive or financial model that someone has been able to solve. I think that's going to limit us.

I just wanted to add a slightly bigger picture to this really interesting conversation. One of the biggest challenges that I have noticed in my practice as well as in my health system is that the integration of AI solutions into the health system, and their scalability, is very challenging, because the solutions are complicated. Even though they seem simple in the lab or in the tech world, when you come to operationalizing them in a complicated and complex health system, it becomes much harder to integrate them with the EHR and provide them to all the physicians as well as patients. There's a real challenge with equity, where patients who are disadvantaged and underserved are least likely to avail themselves of the benefits of whatever monitors, whatever sensors, and whatever home-based solutions we are developing. I think the root problem that we have to address at a societal level is to make scalable and simple solutions, which would actually be cheaper if they are not as complex in technology. In medicine, we always say simple is complicated and complicated is impossible. I think oftentimes we are trying to get to the best positive predictive value or the best sensitivity and compromising on the simplicity of the solution that can be scaled and implemented for the most vulnerable and most disadvantaged patient populations.

Just to jump in and add some extra detail, I wanted to echo the points made about focusing on the specific patient populations that we're talking about, and also, again, the reimbursement schema for how AI tools are reimbursed, because we are at a fairly unique time in which, in some sense, interests are starting to align. We can reduce the cost of healthcare and improve patient outcomes. We can potentially prevent patients from using the emergency department as their primary source of healthcare, which would drastically reduce cost.
We could focus on hospital-at-home applications, right, which can prevent patients from delaying their care and then having very lengthy, costly ICU stays from diseases that were missed. We can also narrow clinical variation and improve how clinicians actually interact with and treat their patients, and improve guideline adherence and things like that. So if we just take a step back, we can really consider that in this moment, if we accept that there's going to be some upfront pain, just like having to migrate your data from one source to another, we can provide cheaper care that's actually better, with also the understanding that we need to be humble in how we treat patients and that a lot of the pre-existing tools that we're currently making decisions on are not so great. If you take something, for example, like the revised cardiac risk index, which is cited in many perioperative, you know, procedure planning notes, you know, a recent paper from David Liang's group showed that the area under the curve for that was 0.67 compared to a deep learning model, and if you tried to develop a model now with the performance of that historical tool, which is cited all the time, it would never get published. So I think if we just put this in context of what we're currently using, we can actually improve quality at a cheaper cost, especially with the scalability.

I appreciate everybody's insights. And you know, taking a step forward with that, you know, looking at the audience here, you know, we've talked about both hardware innovations, a variety of different sensor technologies, whether they can be used in the clinic or at home, as well as AI, which has been a huge topic of discussion in terms of algorithms that can be enabled for use. How many of the audience have actually used any of the hardware tools that we've been discussing up here? Just a show of hands. Or have they actually, to their knowledge, implemented any of these AI algorithms into their practice? So that brings an important question, right? For the last 36 hours or 24 hours, we've been discussing all of these novel AI algorithms, all of these novel sensor-based technologies, et cetera. Where, for this panel, is the disconnect? How do we get from all this stuff that is new, exciting, being published in Nature Medicine, there was a paper just a couple of days ago, et cetera, to actual clinical utilization by people who would come to a session like HRX?

Sure. I could start. I think it goes back to kind of your first question as well. What is the problem that we're solving? And then further, how do we implement that solution to solve that problem? And in an ideal world, that solution is solving many problems. But I think that you have to start with a problem, with a specific use case, in a lot of ways, to get the ball rolling. For instance, we have partners who are starting to implement home anti-arrhythmic loading programs, home sotalol loading, using a cardiac device to look at that QT interval taken from home. So in that situation, the problem was we have patients who are being admitted for three days for inpatient loading, and we're going to shift the care to the home. We're going to save, obviously, a lot of costs for the patient, and that's a patient satisfier. It's going to reduce complications. So I think using a use case like that, for example, helps some systems, helps some providers to kind of get over the edge and make that first step. Integration of data is certainly key as well.
So with algorithms, having that data integrated into systems, which is not the case, certainly everywhere, I think is another huge barrier to its implementation as well. Let me add to that. I think a lot of what we're seeing in the AI publications in the last few years is what we would think of as clinical efficacy, right? So the algorithm works, it has an ROI, I mean, sorry, it has a AUC of blah, 0.95, right? So that's really good. That's the first step. What we're now starting to see is papers coming out showing the clinical utility long-term, which is, hey, if you use this algorithm as part of care, like what you just said, does that patient do better over 12 months? Does that patient do better over many years, right? And we're starting to see some of those. What we've seen very, very little of is financial ROI papers, right? Saying that, hey, the health system truly saves money, and let's all face facts, that is important. Seeing that, and I think you need to do all three before we have a large audience like this all saying, yeah, they use these tools, right? And I think we're only in the first part of that journey. I think why these, so, right, 7,500 cardiology AI papers last year, 1% had FDA, you know, less than 100 things got FDA clearance in the last four years, and so you have adoption issues. It goes back to like epidemiology and statistics one, not even 101. Risk prediction works best for intermediate risk populations. If the risk is low to begin with, you can guess a negative result, and you'll be right 99% of the time. If the risk is high, you can guess yes, and you'll be right 99% of the time. The intermediate risk is where AI has performed very well. For things of moderate prevalence, EF less than 40, for example. So I think the financial ROI lines up with the clinical performance if you chase intermediate risk. We do poorly in that category. Even for AFib, you know who's not going to have a stroke usually. You know who definitely is at high risk. The gray zone is in the middle. So if we focus on the intermediate risk as a problem statement, I think we can make progress and start seeing adoption. So I agree with that, and I just want to add one other component. So it's not just a matter of publishing the efficacy and then choosing the right patient population, the right use case of intermediate probability. It's also that clinical transformation piece of incorporating that into the workflows, and that's very difficult. It's just been my experience in this role, going around the world and globally looking at different healthcare systems and their configurations. My own opinion is that I think that 80% of that change is really workflows, and maybe 20% of it is technology. And so I think it's just changing culture, practice for nursing, for physicians. It's a very difficult and slow process. I wanted to add something to what Mintu said. I think that's a really great point, and we have done a lot of risk prediction models where we have actually shown that adding biomarkers or echoes to intermediate risk is always what works, and we have seen that with CAC scores and other things in traditional cardiovascular practice as well. I think that one additional thing that I think we as clinicians don't think of often is the modifiable risk component, and often we spend 90% of our time in the hospital thinking of the highest risk patients who have risk driven by factors that may or may not be modifiable by single intervention, single technology, or one screening tool. 
It is the low-intermediate risk that is more likely to be modifiable, and if you think about it at a population level, modestly improving the risk burden in an intermediate-risk range or a low-intermediate-risk range is going to have a greater absolute risk reduction, because the prevalence of those individuals is much higher than the 1% at the highest risk, on whom we spend a lot more energy and, I think, a lot more dollars on their management and on finding optimal ways to mitigate their risk, which often fails. And then it pushes us even further away from using those technologies in lower-risk patients.

So AI, I think it's still at the beginning. It's still like a new car. You get a new car, you find out what it can do. But right now, we have hundreds and hundreds of pockets, each center showing 200 patients, 300 patients, this is what we do, this is what we do. But it's their recipe for their areas. So, for example, me in New Jersey. Tell me, which AI should I consider? There are hundreds of algorithms. Which one is best? Which one do I know we're going with for the next five years, to convince our hospital that you have to buy this because it will improve patient care and will reduce the cost? And I think everybody here is waiting to see which company will be there. So tell me, which one of you would be the best to invest in? I don't know right now.

That's actually a great point, and it actually dovetails off of what Mintu brought up earlier, which is how do we deal with multiple different data silos with all of these new tools? Because we're increasingly seeing each of these new tools having a different data back end, a different data silo back end. And they might have integrability in terms of the net electrophysiologic state or clinical state of a patient. But has anybody been thinking about how you align the data between all of these disparate data sources that, frankly, have more power together than they do separately?

I mean, yeah, certainly, I mentioned it earlier. It's something that we at AliveCor are very mindful of. We have a partnership with GE, and our data and device data from consumer cardiac devices can flow now into GE MUSE. So that data is visualized side by side with other familiar data sources from other ECG devices. So obviously, there are unified platforms that exist today. And I know others on this panel are also integrating data into common platforms. So I think that's a first step. We hear that all the time from doctors: can I see your ECGs side by side with the other ECGs that I'm looking at on a daily basis? So I think we need to do more of that. But as you mentioned, yeah, more devices, more platforms, that's not going away. So it really is an active step that we all need to take, to stop and think about the workflows for these providers.

Yeah, I think on our industry side, the answer to your question is real simple; the implementation of it is very, very hard. We've got to tear the silos down. It's as simple as that. We need to have AliveCor ECGs inside the safety net application that we have. And we have to get to that vendor-agnostic configuration, because we recognize there are so many distinct companies out there and so many distinct products and solutions, where each has really mastered a particular aspect of it. And so we really have to evolve to a more vendor-agnostic state.

So we're getting a number of questions from the audience.
And anybody who doesn't have the app to actually ask any questions, feel free to step up and ask. So we have a question of: is AI in its current form guilty of concentrating on hospital practice and not on preclinical intervention, where perhaps its maximum bang for the buck exists?

I think that's absolutely right. But the challenge is that accessing those individuals who have preclinical disease in the community is much harder. Health systems represent a conglomeration of individuals who are coming to seek care, are motivated to seek care, and are potentially motivated to modify their health behavior. So I think that represents easy access for implementation of a lot of these solutions. In an ideal world, I think population-based intervention should be the main goal, because that's where you can have the most return on your investment from any preventive solutions. But I think finding those patients and convincing them to make changes when they're feeling absolutely fine and sitting in their home is very hard.

That's a healthcare delivery problem. That's not an AI problem. And the one problem that AI has not yet solved, and I don't know if it ever will, is getting people to do things that help themselves and not hurt themselves, right? Alcohol, sleep, exercise, blah, blah, blah. Everything's been tried. Economic, financial, behavioral incentives, behavioral economics, nothing has worked, right? So you could create whatever all of us create all day long. But until you can get humans to do the right thing, you have a problem.

Additionally, for the development of some of these algorithms, it is just easier, pragmatically, to have high-quality data sets when you have high-acuity settings, like patients in an ICU, or extremely prevalent diseases. So there's inherently going to be an upfront bias toward those types of solutions, just because it's easier to have high-quality data sources. If you look for more uncommon diseases, your N is going to be extremely low. For example, we had developed an algorithm to predict, based on the ECG and tabular data, whether patients with long QT have pathogenic genotypes, predicting the actual genotype. And that's great, and the model functions well, but the number of patients we're talking about is tens of patients, as opposed to the many thousands that you would hope to see for high-quality algorithms, and also to correlate that with outcomes. So eventually, the solutions for prevention will come. But again, you're just going to see a lot more of the high-acuity settings, using publicly available databases that have that data readily available.

The other thing we haven't solved is non-health data. Your Apple wallet, your digital payments, are one of the best sensors you have. You can tell who is going to have a heart attack and who's going to be overweight based on their spending, period. You don't need anything more. How are we ever going to get that into healthcare? Well, maybe we should. Apple payments and other electronic wallets have all that health information there. We don't have a way to tap it, right? And so how can we align incentives and create the right guardrails for privacy to use that stuff? I think fintech intersecting with health is the next wave that we're going to see in the next decade.

So I think predictive models need to perform better. Because we have, from the EKG, predicted that patients will develop cardiomyopathy in five years. But there is only like a 40, 50% chance of that.
So how can I go to a patient now and say, look, your EKG says that in five years you're going to develop cardiomyopathy, maybe or not? And how do you understand the mechanism by which he will develop cardiomyopathy? What intervention should I do right now to prevent him from developing cardiomyopathy? I think this is what we have to tackle and understand.

Yeah. I mean, I think we are seeing AI in the hands of consumers today, though. I don't think it's all in the acute settings, in the hospital settings. You know, AFib detection is an example. You know, there's obviously a lot of devices out there that are providing AI-powered AF detection and screening in patients' hands. I think there's a responsibility for all of us to help consumers understand what that means and how to deal with that information. But it is being provided, whether we like it or not. They're buying devices. They're buying Apple Watches. They're buying devices that are detecting disease earlier than otherwise. To the point Mintu made about behavior change, I think, you know, CGMs are a good example where it is starting to drive some behavior change. People are seeing data, maybe they don't have diabetes yet. You know, I've worn a CGM. I think probably some of you have worn CGMs before. You see that data, and sometimes it does inspire you to make a change. Now, is that happening at a whole-scale population level? No, but things like that, putting data in the hands of patients and consumers, what is that doing? Is it creating more anxiety or is it creating a positive change? And, you know, the jury's still out on that.

No, I really appreciate that. And actually, we have a question from the audience. I can hand you my microphone.

Thank you for taking the question. Maybe one more from an operational and business perspective, and maybe this is provocative for this setting. Where do incentives come into this? Because if you think about technologies in EP, if we're really honest, PFA is gaining adoption in no small part because of the cut to CPT codes that happened over the past couple of years. So how do you think about shifting incentives and guidelines toward the adoption of AI technology? Because it sounds great, and it's good for the patient, and it's potentially good for the healthcare system, but we're not set up as a healthcare system to operate in that context. So how does that play a role? And ultimately, how do you see that changing and evolving over time?

It's an awesome question, and I hope I'm not gonna give you a trite answer, like the kind of answer you're used to hearing, but value-based care is shifting that, right? So greater than 51% of Medicare is now value-based. It's going up. The projections say in 10 years, it'll probably be in the 60s. At iRhythm, we think that is a good way where you don't have to do large, huge pivotal trials and get into large guideline recommendations if you have a payvider willing to see change and an ROI immediately. That sort of goes back, kind of at the micro level, to what you're talking about with PFA, but that's insufficient, because it's 65 and over. It fails to address a younger population. And by the way, for anybody who's not in invasive electrophysiology here, PFA is pulsed field ablation, basically a different mechanism of delivering energy to injure tissue.
I would just add that I think the example of PFA is one of many, because I think the point that you were making was that the incentive in the short term is pulled based on what's happening with reimbursement today. But with the changes that Mintu alluded to in terms of that long-term pivot, I think a lot of us, and I could probably speak for, I could definitely speak for Masimo, and we've already heard from iRhythm, but probably for AliveCor and other companies as well, we're building for that future because we understand what's happening with demographics. It's almost not by choice. Like, we're going in that direction. So I think it's about the people that are ready when we get there and when payers ultimately realize, look, there's just not enough clinicians to render care the way that we've traditionally done it. And so we have to do it more efficiently. We have to do it from a distance. That's the world that we all know is eventually coming, and you have to just kind of survive and make sure that you're incrementally progressing toward that eventual future state.

Thanks. Subbu, I know you're excited to answer, and then I'll move on to another question.

Sure. I agree with the points raised so far in that value-based care is the place where a lot of us are focusing right now. You either do that, or you work really hard on a reimbursement strategy, or you work really hard on showing a financial ROI to the system. Those are both harder, longer problems, right? But if you want to succeed in the long term while you're waiting for that transition to value-based care to happen, you have to do both, right? Because if you don't do both, then your market share is going to stay fairly limited. So I think a lot of us are thinking along the same lines.

Thanks. So another question from the audience: is there a role for the FDA or other regulatory bodies to establish common data standards or platforms, structures, or interoperability, beyond the organic evolution of what is going through FDA approval processes, et cetera?

I think that's a great question. I'm not convinced it's the FDA, though, in that there was an earlier session on AI and data overload, a roundtable, which talked about the same thing: how in home security, there were many different companies following many different ways of sharing data, and then they moved to a standard. And once the companies got together and moved to a standard, then it became a whole lot easier to have home automation across different companies talk to each other, right? You could imagine something very similar happening with single-lead ECGs, for example, where you can get a single-lead ECG from your Apple Watch or your Samsung watch or your Fitbit or your AliveCor, and they should all be able to output in the same standard. I think the companies need to get together to do that. Maybe a government agency can help, but I think it's incumbent on us to drive that change.

I would also add that I think we need to do more to study the real-world impacts of the technology, right? Because there's already been simulation data showing that if you deploy several different algorithms that may be related, whether you retrain the model over time or you sequentially or concurrently deploy models, they may interact and degrade over time. And especially in EP, we love to talk about modifying substrates, but that's exactly what we're going to be doing once we start acting on these predictive models, right?
We're gonna start treating patients differently. They're gonna be on guideline-directed medical therapy for certain disease processes. We're going to be modifying their blood pressure, their cholesterol, whatever risk factor for whatever prediction we're working with. And once we start doing that, the models may function differently. And so we need more real-world evidence on what we are going to be doing as we interact with these models. FDA aside, we just need to be able to generate that data so that we can have reasonable recommendations, regardless of who's helping to put it together, whether it's a consensus statement from a professional society.

And one thing I would add, and then I'll hand it back to the group, is that there actually are collaborative organizations that are looking at this. You know, if you look at CHAI, the Coalition for Health AI, which is a collaboration between MITRE, FDA, CMS, Mayo, Microsoft, et cetera, they're looking at nutrition labels for AI, like who an AI algorithm should apply to based on who it was trained on. So there definitely are collaborative efforts starting to look at how we create guardrails around safe, responsible applications. Dan?

I was just going to, I was smiling because, you know, like a lot of people on this stage, I spend a fair amount of time talking to people at the FDA. And what's very interesting to me is that when you engage with the FDA, the FDA is looking for the healthcare community, right, and industry to set the standards, you know, so that they can adjudicate based on that. And often in panel discussions like this, it comes up, you know, where is the standard coming from the FDA? So it's a little bit of a chicken-and-egg thing. And I think you hit the nail on the head when you were talking about, you know, those initiatives that bring people together and say, okay, what are the common standards? What are the definitions? You know, what is that level of evidence that we're looking for to say, okay, this is a generalizable algorithm that can be applied to a broad patient population?

One potentially contrarian view, though, is that CHAI is getting a little bit of flack right now on the Hill in DC for being a way that the big companies box out the small companies, and that's going through scrutiny, right? So you don't want to raise the cost to enter. You want to keep it low, right? Like, all of us on the digital side, you have to pay a ton of money just on cybersecurity, pressure testing, and all these other things, right? So there's already these cost barriers, and some are saying, no offense to anyone, but is CHAI really going to be done the right way, where there is still freedom to operate for the small guys?

Yeah. I think also to that point, you know, clinical validation, doing the right clinical validation to support your technology, is expensive. You know, for small companies who are developing novel technologies, not to put blame, but sometimes there are, you know, retrospective studies done and not prospective studies done, and I think, you know, we do need to see those. The industry and healthcare providers need to see prospective randomized controlled studies, you know, to then implement, you know, as you're mentioning, Josh, and that's just not being done. And I think there was a recent publication on FDA-approved devices, and the quality of clinical validation was all over the place.
There were some prospective, many retrospective, some not at all, so it is incumbent on industry, you know, to develop and to produce those studies, but they're expensive, and, you know, we're seeing that. Just to add to that point, I think a theme that we're kind of referring to is pragmatism, and we just have to operate in the real world and understand that historical tools that we're using are, in some essence, garbage in many cases, but if you look at other examples for where there is real-world data generated, there was a randomized trial out of Taiwan predicting 90-day mortality, but there were some issues with the methodology. I mean, it was still a great study, and they showed a 31% reduction in all-cause mortality in the high-risk patient group, but one of the things that struck me was they only delivered the highest top 10% of notifications specifically to avoid alert fatigue, and that's another big issue that we have to account for, and I'm not sure, methodologically speaking, if making the arbitrary decision to only notify a certain number of alerts, because when you do that cost-ratio analysis and you decide where you want your sensitivity to be and where the FDA approves your algorithm, you can't just arbitrarily decide, okay, I'm only going to now deliver X number of alerts, because you're inherently changing that real-world performance, so it's something that we need to consider also. Totally agree, and we have one more audience question. Thanks, Suraj. Subbu, I agree with your point. I don't think, sticking on the policy side, I don't think this is FDA's position, but I did want to double-click on something Mintu said, which I thought was super interesting. When we think about, as med-tech companies and digital health companies, we are all sources of our own data that we're building algorithms on, but being able to supplement that with outside data, purchasing behavior, diet history, stuff like that, we're actually, I'm sure all of you are as well, hearing a lot of that thesis and vision from the pure, large-cap Silicon Valley tech players, which I think are going to be an important entry into this space, but then I also think about those industries and what we can learn from that we haven't evolved yet in our policies around data and privacy, which are almost archaic in our industry compared to others. I mean, you look at social media, you look at consumer tech, you look at fintech, the reason they're able to do so much with the data is because it's all opt-in, right? Do you guys see a path to our industry advocating for something similar? Because we just give away our data and everything else, and there seems to be such fear in doing the same in healthcare. I'm just curious on all of your perspectives there, because I really do think we could unlock a lot more individualized risk if we're combining other sources of data. So we just have a couple minutes left, so I'll maybe take a couple answers. Josh, it seems like you're raring to answer. So I think the availability of data and the sharing of data is extremely important. This is something for federated learning, where you can co-develop algorithms without having to breach your firewall and give somebody else your data. You can have the availability to run the models in different environments. So that's something I think is actually very important, and we were talking a lot about silos before. I think these are all interrelated. 
Everyone, whether it's a company or an academic institution, is very protective of the data that they have, and the performance of the models they develop, and the availability of the details of those models. Because essentially, if you provide a model to another institution, you're inherently giving them the recipe for the model, right? You're giving the weights and biases that were developed during hyperparameter tuning and model development. So there is some IP consideration here as well, because specific algorithms can be inherently difficult to protect with patents. So it's just, I agree. I think through federated learning and more larger collaborations, we can share data and kind of accomplish what you want to. And Amrish, I'll give you 30 seconds to answer, and then wind down. I think one of the challenges with sharing data, and I have faced this at my institution as well, has been the lack of general trust between the health systems and the vendors and AI companies, because the health system doesn't understand what is actually gonna happen to the data and how much of you are gonna do what you are telling that you will do versus use the data for developing other models or even things that are not even in the agreement yet. And that has been a big challenge, and that has also been related to not having enough details about what happens in the black box when the data goes to the third party or the vendor. So I think we need to create better trust and better, I guess, awareness about what exactly is gonna be done with the data, and then only the data sharing will happen, because otherwise, no health system is at this stage willing to give away the data. They can do federated learning, but they will never give the data and then let you run with it and validate your algorithm and so forth. All right, so we're all out of time here, but obviously, everybody, thank you for coming. Thank you for attending. Thank you for our panel members for contributing to an exciting session. Feel free to reach out to any of us offline whenever you want regarding any of the other questions you might have. Thank you.
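To make the base-rate point raised mid-session concrete (that risk prediction earns its keep in intermediate-risk populations, because guessing the majority class is already highly accurate at the extremes), here is a minimal illustrative sketch, not from the panel, assuming a hypothetical classifier with 90% sensitivity and 90% specificity and varying only disease prevalence.

# Illustrative sketch (hypothetical numbers): positive predictive value and
# majority-class accuracy as a function of prevalence, for a classifier with
# fixed sensitivity and specificity.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def majority_guess_accuracy(prevalence: float) -> float:
    """Accuracy of always guessing the more common class (no model at all)."""
    return max(prevalence, 1 - prevalence)

if __name__ == "__main__":
    SENS, SPEC = 0.90, 0.90  # assumed model performance, for illustration only
    for prev in (0.01, 0.20, 0.50):
        print(f"prevalence {prev:.0%}: PPV = {ppv(SENS, SPEC, prev):.2f}, "
              f"majority-class guess accuracy = {majority_guess_accuracy(prev):.0%}")

Run as written, this prints a PPV of roughly 0.08 at 1% prevalence versus roughly 0.69 at 20% and 0.90 at 50%, while the no-model guess is already 99% accurate in the low-prevalence case, which is the argument for targeting the intermediate-risk middle.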
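The closing exchange describes federated learning as a way to co-develop algorithms without moving patient-level data across a health system's firewall. The sketch below is a minimal federated-averaging illustration with synthetic data and hypothetical site sizes, showing only the mechanics the panelists allude to: each site trains locally, and only model weights travel to a central aggregator.

# Minimal federated-averaging sketch (illustrative only; all data are synthetic
# and the site sizes are hypothetical). Patient-level data never leave a site;
# only model weights are shared and averaged.

import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of the log loss
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Aggregate site models, weighting each by its number of patients."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_features = 8
    global_w = np.zeros(n_features)

    # Three hypothetical hospitals, each with its own synthetic local dataset.
    sites = [(rng.normal(size=(n, n_features)),
              rng.integers(0, 2, size=n).astype(float)) for n in (200, 350, 125)]

    for round_id in range(3):  # a few federated rounds
        local_ws = [local_update(global_w, X, y) for X, y in sites]
        global_w = federated_average(local_ws, [len(y) for _, y in sites])
        print(f"round {round_id}: ||w|| = {np.linalg.norm(global_w):.3f}")

A real deployment would add secure transport and possibly differential privacy, and, as noted in the discussion, weight sharing itself carries IP implications, since the averaged weights effectively encode the model recipe.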
Video Summary
This session focused on emerging trends and technologies in electrophysiology, featuring a panel of experts aimed at discussing advancements and integrating diverse perspectives from the audience. Key themes included the role of AI and wearables in improving patient outcomes and healthcare delivery efficiency. Mintu Turakhia emphasized blending consumer and medical technologies, while Ambarish Pandey highlighted the potential of extracting significant information from simple diagnostics like the EKG. Industry perspectives from companies like iRhythm Technologies, Masimo, AliveCor, and others underscored the importance of integrating these new tools into existing workflows, balancing cost, and ensuring broad accessibility.

The panel also discussed the barriers to wider adoption of AI and wearable technologies, which include the lack of real-world validation, the complexity of integrating new technologies into clinical practice, and the need for scalable solutions. Financial considerations, regulatory issues, and data silos were identified as significant challenges. The session touched on the necessity for standardized data sharing and privacy-considerate approaches, pointing to the evolving role of federated learning and collaborations to facilitate this. The discussions underscored the crucial need for systemic changes in healthcare to effectively incorporate these technologies for broader and more equitable patient benefits.
Keywords
electrophysiology
AI
wearables
patient outcomes
healthcare delivery
data integration
standardized data sharing
federated learning