Bridging the Gap between AI and Human Intelligence ...
Video Transcription
Hello everyone, I can see that we have some people coming into the room. Well, thank you everyone for joining. Depending on where you're tuning in from, it's either still lunchtime, or you're like me at 7 p.m. here on the East Coast. We're going to be talking today about bridging the gap between AI and human intelligence to advance healthcare outcomes. It is powered by Baxter, but I want to thank HRX for allowing us to host this webinar this evening. I'd like to welcome our three panelists today. If you'd like to introduce yourself, we'll start with Amy.

For me it's about 4 p.m. in the afternoon, so I'll say good afternoon. My name is Amy Finans. I am a nurse practitioner who works in Phoenix/Scottsdale, Arizona. I work for the HonorHealth hospital system in the cardiac arrhythmia and electrophysiology group. I'm the lead nurse practitioner here and also the program developer for our remote monitoring clinic as well as our AFib clinic, our heart failure hybrid clinic, and all the fun projects we get to kick up here at HonorHealth. So, that's me.

Awesome. Thank you for joining us, Amy. Dr. Poole?

Hi everybody, it's 4 p.m. for me also. I am an electrophysiologist at the University of Washington in Seattle, Washington, not the other Washington. And I'm excited about today. This is an offshoot from a lovely roundtable discussion that we had at HRX with a bigger group of people; it was really fun, and I'm looking forward to touching on some of those same topics today. In terms of what I do, I've done a lot with devices and device technology and clinical trials of sudden death and atrial fibrillation. And I'm currently the editor-in-chief of the Heart Rhythm O2 journal.

Awesome. Thank you for joining us, Dr. Poole.
My name is Kim Rodriguez, and on behalf of Baxter, I'm one of four associate directors who oversee one of our IDTFs here in New Jersey. We run about a 250-technician lab, where we can see studies of up to 14 days. I've been with Baxter now going on seven years, and I've been in the remote cardiac monitoring space for almost 18 years. I love the heart; it hits home personally for me. I've had the opportunity to teach it at a local community college, and even here at Baxter for our employees. So thank you for joining us today for this call.

I wanted to mention that you will be able to post questions in the chat; however, we'll address those at the end of the webinar. There is a raised-hand feature, but we will not be able to call on you, so we encourage you to put your questions in the chat throughout this webinar for us to answer at the end. We have a beautiful discussion today, and here's our wonderful agenda. I wanted to bring up that we're not all experts in AI, and even for those who are experts in the space, it's still relatively new. This conversation, as Dr. Poole mentioned, is for us to just talk about it more broadly. So we're going to start with the first topic, which is AI in healthcare today. With that being said, let me make sure my screen stops sharing.

When we talk about how AI is changing the way we deliver healthcare in recent years: Dr. Poole, from your standpoint as an electrophysiologist, how do you feel it's beginning to shape what's going on in your world?
I think that's a great question. You know, even a decade or two ago, when you heard the word artificial intelligence, unless you were a computational scientist or something, it all seemed really scary. We talked a lot about whether there would still be a need for humans, as if AI, creating these really intelligent machines, was just going to take over everything we do. But fortunately, I think as we have become accustomed to and understood what these mathematical algorithms do, we can see what the benefit is going to be in our own field of electrophysiology. So for us it's mostly a subset of AI that we're talking about, which is machine learning. That really encompasses systems that learn from large sets of data, which is what our arrhythmia and remote monitoring data are, without having to rely upon explicit programming, and from that data, develop functions that are going to be helpful to us as healthcare practitioners. So we hope that it will filter out noise on loop recorders better, we hope that it will help us with screening for atrial fibrillation, and we hope that it will help us manage our remote data. I think that's the exciting part for us, and then there are other applications of machine learning that obviously we can talk about, but I think for Amy and me, that is the part of our field where we immediately can see the potential benefits of machine learning.

Sure, and we know it, right? As you mentioned, both you and Amy are on the frontline of care regarding this integration. How do you feel your team is adapting to it when it comes to the function of AI or how it's being used?

Well, I don't think my team actually thinks that way. I mean, we know that these algorithms are running in the background of almost everything that we touch.
We're aware of that, and I think it just depends upon whether you're somebody who does clinical research, where the interest becomes more about what's the evidence base, who's watching over these algorithms, can we trust them, sort of all of that, versus other people who may just be delighted about the fact that we can maybe hope for something that runs our lives more smoothly and makes our delivery of care more efficient for the patient. There are a lot of different examples that we'll probably talk about, but I don't see that people are talking about AI and machine learning in a way that is worried about where this is going; rather, they are very interested in where this field is going. Specifically, as I said, machine learning algorithms.

And I'd like to add, when you are aware that it is in the workflow, especially for us where we deal with a high volume of cardiac rhythm reports, it does help prioritize what needs attention, which allows the technicians to focus their expertise where it matters most. And at the least, like you said, there are times when we may not even know it's in use. I just learned recently about the difference between fixed algorithms and AI that learns, so I can definitely see that. If we're not asking about it, how do we know, right? Do you feel like there are specific data sets, like ECG patterns, where AI could be more helpful?

Well, I think the whole story about the ECG is really interesting. I've seen examples from different investigative groups that have worked on this, the Mayo group and others, where information is embedded in the ECG signals that we don't really see, and the ability of these studies using machine learning algorithms, etc.,
to identify what we cannot identify. That's really the beauty of artificial intelligence and machine learning: rather than giving it the answers and programming in some sort of predictive analysis, it's learning from these specific data inputs to try to develop the ability, hopefully, to identify abnormalities better than what we've been able to do before. So there's the ability of an ECG to potentially tell you what somebody's ejection fraction is going to be, and the ability of machine learning, especially deep neural networks, to help us develop really robust predictive algorithms for things like sudden cardiac death, or predictive algorithms for where you should do your AF ablation. I mean, it's really vast, but the common denominator here is trying to make our diagnostics better and more precise to deliver better healthcare.

And maybe, Amy, is there anything in particular that surprised you about learning these new aspects of what it can do, and what you've seen so far?

Yeah, I'm going to just echo back on what Dr. Poole said. I think we are so comfortable with the idea of an algorithm, because particularly in electrophysiology we've been working with companies that have all these built-in algorithms already. So the idea of working in an algorithm world, for us, I mean, we've done it for the last few decades. But the idea that these algorithms now continue to learn and can now see things we can't see, and the idea that we can have the predictive analytics that we can have, like, you can look at an EKG on a 22-year-old kid and it's telling you the EF is low, and you would never order an echo on a 22-year-old kid. The idea that this can now be done, something that goes beyond what we can do at a base diagnostic level as we're looking with our own eyes, I think that's very, very exciting.
That is very exciting. And, in the same vein, that brings up the point of human intelligence, or the role of it, and I would like to transition into that. When we talk about human intelligence and how it factors into this development: we understand that AI processes a lot of data sets, but there's no substitute for clinical judgment, of course. In your experience, where does human oversight remain absolutely critical? I think many want to hear that, because it definitely is not something we want to take out as a key component here. So let's talk a bit about the role of our human intelligence and how it plays in when we're looking at this in our workflow.

Do you want me to talk, or Amy? You, Dr. Poole. Okay, sorry.

Well, you know, those of us on this call today are not the people doing the science behind these algorithms, but for the people who are, there's human interaction at every step of the way; there has to be human oversight. So first of all, you have to start with: what's the question that you're asking? I think we brought this up at the roundtable. What is the truth that we're trying to get at? That has to be defined, and then the data sets that are going to be chosen to feed into the algorithm have to be defined by the human, the researcher. That kind of oversight has to remain through the development and the training, and if it's a retraining algorithm, through the validation as well, and then taking that into other subsets. So from that perspective, the actual generation of these algorithms has to be handled and run by people who are experts in this field. But the practical part is in what Amy and Kim and I do, which is the over-reading of rhythms. This is a huge part of what we do as electrophysiologists, and we are looking at a lot of rhythms off of CIEDs.
So, you know, I think we talked about this before, Kim, but the idea that we would not have to over-read any of the rhythms at all doesn't seem feasible at the moment. That can make a difference in the patient's life or death, potentially, but I would like it if my daily life of over-reading remotes were more efficient. There's a lot of talk about going to alert-based over-reading, that we would get to the point where we can, for sure, trust these algorithms to understand what's atrial lead noise versus true atrial fibrillation, for example, so we don't spend our lives looking at that; and with loop recorders, that the noise is out of there so we can look at real rhythms; and that it safely identifies the common, less important or less urgent aspects of a remote read so we can really concentrate on what's truly abnormal. But to define what's truly abnormal takes human input to agree on what is normal and what is abnormal.

Yeah, and when we're talking about AI, when it flags an issue, let's just say, it's the human context that allows for the correction of the interpretation, bottom line, like you mentioned. So it seems like human intelligence will continue to evolve in whatever new ways AI becomes more integrated into our practices or workflow. One thing that I've noticed about where we play a role is when we train it. Right now, you and I and Amy, we don't train these AIs; we're not the engineers behind it. But when I say "we," sometimes, I know with our data sets, when our technicians are interpreting, it also alerts the engineers: did it do the correct thing? Did it help? So we don't think we are training them, but we are training them with what's clinically relevant. You know, the algorithms don't build themselves; they need us as the clinicians, whether it's you as the last person to view it or a technician.
We're labeling the data, we're validating its output, we're constantly refining it, and then it learns over time. So this isn't set-it-and-forget-it, and I'm with you, Dr. Poole; it's not something that we just want to throw out there and let it do its own thing. There is no substitute for that judgment. For you, Amy, when your team is interpreting or you're doing the final analysis, I'm curious how you keep up with it as well, from your forefront and your clinical decision making.

I think the nice thing is the team that we have for the device clinic. In our device clinics we have around 8,000 patients, and about 850 of those are just loop recorders. Now, loop recorders are horrifically noisy, as most people know. Every year the vendors are improving the algorithms and the P-wave identification and so on, so we're getting fewer AF alerts, less noise, and fewer undersensing alerts that are inappropriately positive, and all of that stuff. But we still have patients who also activate devices like crazy, and it would be really nice if we could quiet some of that noise as well. I don't know if AI is up for that yet, but in my brain I think AI should be able to handle all the data-driven tasks that are so much noise to us. It gets to a point where, if the deluge of data coming in becomes so big, it actually becomes less diagnostic. The more data you have, the less diagnostic it becomes, because the chance of missing something that's a legitimate issue just goes up and up and up. There are patients who will send you 50 downloads in a short span, and now you have devices that don't filter that out and report only once a day; they report in real time, so as a patient activates, it reports out. So we're constantly telling patients, you've got to stop sending information; we can't get through all of it.
The idea that you can build an algorithm or an AI that can manage all of that data would be helpful, and then it leaves the actual human intelligence for the patient care. Once we have data that's actionable, what are we going to do with that data, and how do we help the patient cope with the symptoms they're having, or the medication that we have to start, all of that. So I feel like it's definitely going to be a partnership. Over time, in my opinion, I think human intelligence is going to lean more toward patient management and patient care and the empathy that we give, and we're going to get more and more comfortable with AI driving the data-driven, I don't want to say noise, but the data information that's coming in.

Yeah. And that's okay, because it keeps us at the forefront of making sure that everything looks okay. For example, I've encountered cases where it said one thing, but it took me to interpret it correctly. And that's okay. I feel like human oversight is going to remain absolute at the end of it all. I'm with you both. I would love for it to make our lives easier, not to remove us, but to allow us to process more things more quickly and more efficiently. At least that's the role I would like it to play, for sure. But I'd like... you go right ahead.

I think the idea is filtering the data to what's actually diagnostic and actionable, and I think that's where a lot of the machine learning and the analytics are going to be helpful. It's a lot. I think we talked about it once in one of our conversations: if you think about all your data as a big giant forest, right, our previous algorithms maybe might get you to the tree you want. AI can get you to the branch, and then the human intelligence can get you to the leaf.
And if we can find the actual diagnostic clinical rhythm correlation, or whatever it is we're looking for, then we can step in with our patient care and take over.

Absolutely. And let's talk a little bit about an example; I'll touch on that in our next subject here. When we're talking about AI and human intelligence, let's bring in a more specific example from the cardiac monitoring space. In our ECG workflow, specific to the Bardy CAM, the AI does allow us to process massive amounts of cardiac data quickly, whether it's flagging a potential AF episode, for example. The real magic comes in what happens when our human expertise jumps in: our EKG technicians, our clinical reviewers, step in to confirm and interpret those findings. And that's the synergy we're looking for, the one that ensures accuracy in timely, clinically actionable reports, so to speak. And to simplify it, like you mentioned, I love your analogy of the forest, the tree, the branch, the leaf. I look at it as: AI narrows the haystack, shrinks it for us, while our team finds that needle that says, here I am, that shiny bright needle. It's not only efficient, it's a collaborative model that enhances our patient care and the physician's confidence.

And let's talk about that a little bit. I'd like to share with the audience how specificity matters, especially when it comes to AF detection. When we misinterpret these rhythms, it can not only delay but alter life-saving interventions, for example, anticoagulation for stroke prevention and things of that nature. So the AI needs to be trained on validated data sets. And I can confidently say that we've been able to use over a thousand studies that give us the outstanding performance seen here on the slide: 96% sensitivity, 99.8% specificity, and 99.7% positive predictive value.
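As an editor's aside, the three metrics quoted from the slide can be made concrete with a short sketch. The confusion-matrix counts below are invented purely for illustration; they are not the validation data behind the slide (and the resulting PPV will not match the slide's 99.7% exactly).

```python
# Illustrative only: how sensitivity, specificity, and positive predictive
# value (PPV) relate to confusion-matrix counts. All counts are made up.

def sensitivity(tp: int, fn: int) -> float:
    """Of the true AF episodes, what fraction did the algorithm flag?"""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Of the non-AF recordings, what fraction was correctly left unflagged?"""
    return tn / (tn + fp)

def ppv(tp: int, fp: int) -> float:
    """Of the episodes the algorithm flagged, what fraction was truly AF?"""
    return tp / (tp + fp)

# Hypothetical counts for a batch of over-read studies
tp, fn, tn, fp = 480, 20, 4990, 10

print(f"sensitivity: {sensitivity(tp, fn):.1%}")  # 96.0%
print(f"specificity: {specificity(tn, fp):.1%}")  # 99.8%
print(f"PPV:         {ppv(tp, fp):.1%}")
```

Note that PPV is the number the over-reading EPs experience directly: it answers "when the report says AF, how often is it really AF?", which is why the panel returns to it below.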
And when you look at that positive predictive value, those are other EPs looking at what the data set brought out, and less than 1% disagree. Those numbers are not just statistics; they directly influence how confidently physicians can act on these reports, making for timely care decisions. So when we talk about that again, highlighting this strip here, I hope everybody can still see my screen, right? This 40-minute view allows the technician to be confident in what they're viewing and how it's plotting. The ECG can be confusing: am I looking at AF, or am I looking at flutter? Well, we know it can't be both; it needs to be one or the other. However, when the AI can flag, hey, part of this is AF and part of it is something else, it allows the technician to confidently label it appropriately, or even find that transition, which is super duper helpful. It also helps us accurately identify those two distinctions, which is essential, like I mentioned before, for guiding therapy. And this is where I feel like AI and human intelligence shine tremendously, and this is how we leverage that technology here with our CAM patch.

Now, again, we're not there yet to see it at its optimum, but the next slide kind of shows how it works. It's super duper helpful to see how these layers work in tandem. This visual aid represents how AI and the human intelligence stages work in this reporting process. In step one, you see how this software is continually advancing. It's seeing the data, it's being developed. It's updated not just by any data set, but by real-world clinical cases that are put into it, and with the feedback that we receive, the software gets smarter over time, which is super awesome. And then in step two, you can see that data set in visual form. Once it's captured, the AI organizes it and displays it clearly.
It's in a meaningful way. This step ensures that we can see the arrhythmia data, not just the R-to-R plot with the strips. Then the human oversight comes in. This is where the first human touch point enters: the trained viewer, that ECG tech, often a certified technician or clinical specialist, carefully evaluates the interpretation, what it flags or captures, and we ensure that the findings are consistent before we put them on the report.

Then there's the second and third layer. You're probably wondering, why have so many layers if AI is in the picture and it's so accurate? Because sometimes something needs to be escalated for further validation, not because there's an error. Sometimes we see things that we need to call or notify on that are a little bit more critical. I think that's super duper important. And then it curates a report; we curate this. This is the checks and balances with which we finalize the report. It's not just a data dump; it's structured in a way the physician can read. They can see the timestamps, the rhythm strips, the clinical correlations, and then the physician can review and approve it. Super important. The ordering physician finalizes it, signs off on it, and makes the decisions; they become the central point of this oversight. And again, it's to support the clinician, not to replace them.

And then of course, the market input, the last piece of the puzzle. This circle that we see lets the physician and the healthcare team provide feedback, which goes back into improving our software, and our team as well. And of course, we can respond to real-world needs, as we say. So all in all, it's super important to have this full-circle process. AI is a partner within a multi-layer human intelligence process. This ensures that each report is not only fast and efficient, but also safe, accurate, and clinically valuable.
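As an editor's aside, the layered review flow described above can be sketched roughly in code. Every name, rhythm label, and threshold here is a hypothetical illustration of the AI-flag, technician-review, escalation, and physician-sign-off layers, not the vendor's actual reporting software or API.

```python
# A minimal sketch of the multi-layer review flow. All names and thresholds
# are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Finding:
    rhythm: str                 # e.g. "AF", "flutter", "noise"
    ai_confidence: float        # 0..1 score from the detection algorithm
    reviews: list = field(default_factory=list)

def ai_flag(finding: Finding) -> bool:
    """Steps 1-2: the algorithm surfaces candidate episodes for review."""
    return finding.ai_confidence >= 0.5

def technician_review(finding: Finding, confirmed_rhythm: str) -> Finding:
    """First human touch point: a trained reviewer confirms or relabels."""
    finding.reviews.append(("technician", confirmed_rhythm))
    finding.rhythm = confirmed_rhythm
    return finding

def needs_escalation(finding: Finding) -> bool:
    """Second/third layer: escalate critical rhythms for further validation."""
    return finding.rhythm in {"VT", "pause", "complete heart block"}

def physician_sign_off(finding: Finding) -> Finding:
    """The ordering physician reviews, approves, and finalizes the report."""
    finding.reviews.append(("physician", "approved"))
    return finding

# Walk one hypothetical AF episode through the layers
episode = Finding(rhythm="AF?", ai_confidence=0.91)
if ai_flag(episode):
    episode = technician_review(episode, confirmed_rhythm="AF")
    if needs_escalation(episode):
        episode.reviews.append(("escalation", "notified"))
    episode = physician_sign_off(episode)

print(episode.reviews)  # [('technician', 'AF'), ('physician', 'approved')]
```

The design point the sketch tries to capture is that each layer only narrows or confirms; nothing reaches the final report without at least one human touch point, matching the "support the clinician, not replace them" framing above.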
So I think that's super important. But then again, the big question: can we trust AI? Can we trust it? When we bring up trust, how do we want to use it in our clinical setting? I want Amy to speak more on this, because I know trust is a huge factor; you brought it up. Where do you think trusting the interpretation could be clinically sound?

I think that, like I said, because we've worked with algorithms for so long, we're getting pretty comfortable with the reports that are coming in. I know we all will tweak reports. I've changed reports. I'll get an EKG reading and then, with my pen, cross it off and write what I think on the side. I mean, I still do that. Especially the QT measurement, right, Amy? Yes, it's the most common one. Or it didn't quite recognize that the patient had a device. So we still do that, and sometimes we're like, nah, it doesn't know what it's talking about; I'm going to write it down. I think one of the big things for us, particularly as providers, when we talk about AI being a partner: I think it's a good partner to have in regards to collaboration, but it's not an accountable partner in any way. You get a 10-page summary report attached to a five-page AI disclaimer; there's no human accountability there. So I don't think human oversight is going to go away anytime soon, not until AI becomes an accountable partner. We're always going to be the ones who put on the final stamp and provide the oversight. So I think that's always going to be an issue from a provider standpoint. I think it's going to get better and better, and as we watch these AI algorithms build more predictive analytics and we see more accuracy and support, even guiding next steps for recommendations and things like that, I think it is going to start to build a little bit more trust over time.
So I think from a provider standpoint, it's just going to be a matter of experience and time, and then it gets better. For us in EP, I think we're a little more prone to lean toward using AI pretty aggressively because, like I said, we've been using algorithms for so long, versus some of the other, maybe less technical, specialties. And I think the other thing, when you talk about trusting AI, you always have to think about the patient. You know, HonorHealth has recently started with the chatbot in MyChart and virtual assistants to try to filter through some of the MyChart messages and things like that. Some patients are okay with that, and some patients are not. Everybody's been on the phone call where it asks, what would you like to do, and you have to tell it what you're trying to do, and at some point you're just yelling "operator" into the phone trying to get somebody on the line. I think that becomes a little frustrating to them. But that's an algorithm that is directly patient-facing. Most of the time, patients are not even going to know that these algorithms are in place and that AI is working on their behalf on the back side. And by the time they do realize it, by the time we in healthcare catch up with what every other industry is doing, it will be commonplace. So by the time these algorithms are really heavily patient-facing, I think patients, for the most part, will already have been working with AI for different things in every other industry. It'll seem like, well, of course it's a chatbot in MyChart; it's a chatbot everywhere else. We're not the fastest in healthcare to adopt technology. We were joking, I think it was at HRX, about the fact that in healthcare we still use the fax machine.
We're probably the only industry that still uses the fax machine, and there we are. So we're not as fast-moving and savvy, and I think a lot of that has to do with the fact that the liability associated with the work we produce makes us a little more cautious about trusting AI fully, because it isn't an accountable partner yet. So for both providers and patients, I think it's going to be a matter of time. I think by the time AI in healthcare is really heavily patient-facing, they'll be pretty much desensitized to AI. That's my opinion.

Yeah. I mean, you think about the people that are much younger than us. They're growing up with this, right? It's not something weird or strange; they use it commonly every day. Everybody's on ChatGPT, and they're using it on their iPhones to talk to their friends. This is just commonplace. It becomes part of the fiber of their education to already accept that this is there. But I do think that the other aspect of trust still comes back to this concept of what the evidence base is, and how do we make sure that these five pages and ten pages, or whatever you're describing, Amy, like, who is looking at that? How do we make sure that these algorithms are retraining in the correct way, that they're not going to end up being horribly biased without somebody having oversight? So I always come back to that. I think that still remains a really critical, important question, and it doesn't necessarily touch us directly in our patient care, but it trickles down. So again, we talked at HRX about the fact that this is really understood by the regulatory bodies: there have to be a lot of really super smart people, whose career this is, overseeing these AI algorithms and the direction they're going, at the level of the NIH and the FDA and also our industry partners.
I mean, they know that they're liable if something ends up causing harm rather than creating benefit. So having a lot of really smart people around who are in computational science and technology and engineering and everything is obviously really, really critical. And for the rest of us who aren't in those fields, I think, Amy, you've really hit the nail on the head: over time, as we see these algorithms operating, we become comfortable with them, and then we transmit that sense of comfort to our patients. By the time all that happens, like I said, the patients will have grown up in this whole field and it won't seem so strange to them.

I want to hang on to that thought too, Dr. Poole. For someone who's new, because there are probably clinicians or fellows on this call listening in and tapping into this, do you think it's important for them to read the citations before making decisions on equipment that they might want to put into use in their practice? Because to your point, who has the time to validate some of these things? I saw a claim out there of 100% detection, and when you looked at the citation, there were only maybe 50 studies. So how do you trust that? Have you ever had a moment where things were wrong or right, and how you truly valued it, so that it doesn't make you second-guess the trust, but instead you say, you know what, I need to step up my game a little further? Have you ever had that yet?

Not really that situation, but going back to what your original question was: I think the other entities that have to have really smart people are the institutions, the hospitals. So now we have CFOs or CEOs or COOs, some sort of C-suite people, who are running the digital health and digital technology aspects of universities.
I mean, just for this purpose, among many purposes: to provide oversight into the digital health applications that are running at that system, and the equipment, and all of those big facility-level aspects. That's not our job as practitioners, and honestly, it probably isn't what's going to come to mind for somebody who's buying new equipment. At some level, we have to trust the experts in this field and the experts at our institutions and our hospitals, the NIH and the FDA. We have to be able to trust that they are, in fact, providing that type of oversight, because our lives move so quickly. Amy just described her daily life in the deluge of all the data; we have to be able to get through our workday efficiently, but with enough trust that at the end of the day, what we are recommending to our patients is correct, that it is the truth, that the rhythms we've diagnosed are correct and are the truth, and that our patients end up really able to trust their providers.

Wow, I love how you brought that in, because it needs to be transparent and it has to be consistent. Clinicians like yourself need to know where it's coming from, how it's being trained and validated, how much oversight is in place. And when that's clear, trust grows: your patient feels it, you feel it, your team feels it, and it's not just a tool anymore; it becomes an outcome. So now we're thinking forward, right? Forward thinking here. We're not experts yet; I want to make that very clear. When we talk about AI reaching its full potential, we may not see that in my lifetime or in the lifetime of anybody on this call. But let's talk a little bit about that, because I think that has a lot to do with how we get there. How do we unleash it?
When we're looking ahead, what excites both of you about AI in the healthcare setting, and where do you hope it's heading? At least that's what I want to know, and I'm sure others on this call do too. I know for my team, when they found out that the AI was embedded to find AF, they got excited, because we know that trying to even go into a report and highlight the burden, do all this stuff, that takes time. And to know that our team can be a little bit more efficient, just checks and balances, that's hopeful, right? What else can it do in the future? Talk to us a little bit about what excites you and what you look forward to. Me? I'll go. Whoever would like to chime in. Well, I think for me, again, we're all about patient care, patients first, right? Even if we do a lot of research, I mean, the reason why we're here is to take really good care of patients. So, what excites me is that we might get to the point where we make more accurate diagnoses, that we can provide better education to our patients in an efficient manner, that we can follow our patients more efficiently and accurately, and that in the end, we actually provide a higher level of health care because these algorithms have made our lives easier, but most importantly, are able to identify maybe what we couldn't identify, pull data together that creates associations and correlations that we had not observed before. Actually, perhaps getting rid of some of the bias with the power of AI algorithms that retrain constantly. So, I think the end game is providing more efficient and safer health care. No, 100% agree. I think the ability of AI to enhance our decision-making and get us to diagnosis quicker, sometimes we are ordering a lot of tests just to get to the diagnosis. Sometimes we have diagnoses that we only find because we've managed to exclude everything else, and it's almost like a diagnosis of exclusion. If there's nothing else, it must be this.
I think it's exciting to see how AI can help us with that in the future: enhanced medical imaging, early detection, catching things much earlier rather than later. And the whole ECG piece, the things that AI can now see on an ECG that can push us in a direction with somebody who is asymptomatic, being able to see those things way earlier is super exciting. The reduction of the workload on the data-driven stuff, I think, is important. Let AI handle the data-driven work and then let us as providers give the care. I think that, for all of us and why we went into health care in the first place, it just is very, very much the whole point. It's the whole point of why we went into health care. Give us more time to do that. I always think it would be fun. One of my biggest complaints is, as everybody knows, in Arizona we have a lot of winter visitors who spend half the year somewhere else. Now, thankfully, Epic, I think, is pretty good. We use Epic and Care Everywhere. It's pretty helpful. How nice would it be if we could fully embed AI into the EHRs, if we can get past the data privacy issues, where it could actually help correlate things that are happening in patients as they go to different health care providers and keep a patient's medical history concise. I think that would be amazing if we're looking at the future state of AI, if that could happen. That would really decrease our time sitting in front of our Epic inbox. I love that idea of pulling the important parts of a patient's history and their imaging and all of that together in a package that can, in essence, travel with them. That would be amazing. We should work on that. I think that should be a project. I think that's great. Then, as Dr. Poole said, I think democratizing health care access. I think AI has a big role for that in the future as well. That's pretty neat to mention, because even earlier, you were talking about the chatbots.
Whether or not patients know they're talking with a real person or a bot, they can be frustrated. Do you think there'll be a time when they'll be able to embrace the idea that AI is helping with those quick turnarounds or the quick, helpful tips, freeing either of you up to focus more on something else? I feel like, as a patient, I can easily get frustrated when I put myself into their shoes. If I can't even get through to someone I want to talk to, or something's not being specifically answered, but now it's embedded into the workflow, do you think that patients are ready to trust that AI is involved in, potentially, their care? Can they trust that? I think that AI algorithms for chatbots and virtual assistants are getting better and better. In some places where they're really advanced, it almost takes you a few minutes before you realize you're not talking to a real person. You're like, oh, shoot, I think I'm talking to a bot. I think those are just going to continue to get better. Like Dr. Poole said earlier, with ChatGPT and DeepSeek and all of these, these kids are growing up with this AI where, I mean, you just grab your phone and ask it a question. It just becomes this seamless sort of conversation. I think it's going to get better that way. I think when that happens, it will help a lot. Ultimately, for a lot of the common questions that patients call in and ask about, I think that will help. Yeah, I agree with you. I actually think that patients won't be put off by that over the next decade, right? I mean, again, because these people are growing up into this space already. I think that they will be able to trust it because they'll still have their healthcare provider who can tell them whether to trust that information, but I think it's a really interesting space, and it's moving so quickly. Of course, the algorithms are retraining constantly with all of these users. I think that they can only get better. I agree with both of you.
I believe that as it's unleashed, how it's designed and how it's embedded has to be with us in mind. It's not about working around the clinician or the technician that's using it; the more we build it seamlessly into our integration, I think it's going to continue to evolve and get better within our workflow and help support the clinical reasoning behind what we're using it for. Then, at the end, it's going to have a better impact, but it's about enablement. It's not about replacement. It's allowing us to make better and quicker decisions as it evolves. I think the trust is going to be there. There are certain things that I like that it brings up, even at a level as basic as a Google search. You put something in, and it knows how to generate an answer and then says, oh, here's this link to do further research on your own. Don't take my word for it. I could see that having even more meaning when it comes to healthcare as well. I think this was a super resourceful topic of conversation. I think there's a lot in store for us to think about. I hope there are a few takeaways for our audience to continue to move on with this subject and to embrace it, because it is here. I don't think it's going anywhere. I can't even look at my phone without it recommending something because it heard me say something, so to speak. Let's do a little recap of our agenda today. I wanted to make sure that we talked about this clinical value of AI and that human synergy. For example, we've heard from both of you, with your clinical expertise, how it's in the workflow, how it's shared in your day-to-day lives. From the ECG analysis example here, it's clear that AI isn't about removing the human element. Absolutely not. It's about using technology that highlights the right information at the right time so you, as clinicians, can move faster and be a little bit more accurate and confident in what you do. Then we talked a little bit about that trust and how important it is. It is foundational.
We're never going to stop wanting to find ways to make it better. We need that data, and like you mentioned, Dr. Poole and Amy, it's going to take those experts to train the data, the systems, and the workflow more effectively. That way, the trust can be a little bit more transparent, or rather, the AI is built with the transparency that gets us those clinical validations and the support it needs from a human standpoint. Then again, the connection in our workflow. It's so important that we invest in these human layers, especially for the example of the human specificity that I spoke about; that's why each report has to be fast yet clinically sound and why it connects into our workflow. Then, a final thought: how are we unleashing it today? Again, it starts with conversations like these, bringing the clinical teams together with the technology leaders that are out there, asking those hard questions, designing smarter systems, and looking at those citations as we build them into our workflow. It's not about chasing what's flashy out there, but about building something that's super functional, scalable, and, most importantly, safe. Super important. I want to thank you ladies for joining us this evening, and everyone that joined on the call. I hope this reinforces or sparks ideas about the importance of having human-centered AI in our healthcare. We do have a question in the chat. It says here: what steps should industry take as we continue to roll out AI? For example, the step of getting FDA approval. I'm going to stop sharing so we can talk about that question. The steps are to be working with the FDA all along when you're developing any new product. I think anybody that's worked with industry, trying to get a product to market, understands that the key is to start early, having conversations with the FDA. They are great. They're wonderful. They want to meet with people.
They want to make sure that the evidence is going to become what they need to review for FDA approval. For FDA clearance, it's a much lower bar; it just has to show that it's not harmful. I think the key is to work with the FDA from the beginning of any device or product development. Thank you. Are there any other questions in the chat? We had one good question. I think that was it. I think that's great. To sum it up, I want to make sure that we were able to answer a few questions. Again, I want to thank you both for your perspective. I really appreciate today's discussion and how it aligns with our core message, and that is that AI in healthcare is best when it works in partnership with our human intelligence. I want to thank the audience today for joining us. Somebody said, great session. Thank you. All right. This is recorded, and it will be accessible through the HRS. Thanks to everybody who tuned in today. I hope you had a fun time. We had a fun time. Yes. Thank you.
Video Summary
In a recent webinar, panelists Amy Finans, Dr. Poole, and Kim Rodriguez explored how AI and human intelligence can synergistically enhance healthcare outcomes, particularly in cardiac monitoring. With AI's ability to process vast data sets, it presents potential for precise diagnostics and improved healthcare delivery. Dr. Poole emphasized the importance of machine learning in managing remote monitoring data, noting that AI in electrophysiology primarily involves filtering large data sets to assist clinicians in diagnosis. Both Finans and Dr. Poole stressed the necessity of human oversight to ensure clinical accuracy, despite AI's capabilities. They highlighted the evolving role of AI as an enabler rather than a replacement, supporting clinical decisions by providing timely, accurate data. The discussion also touched on the significance of trust in AI, emphasizing transparency and rigorous validation through collaboration with regulatory bodies. Looking ahead, the panelists expressed optimism about AI's potential to streamline workflows and democratize healthcare access, while maintaining a human-centered approach. The conversation underscored AI's evolving integration into healthcare, promoting efficiency and accuracy in patient care without compromising human insight and empathy.
Keywords
AI in healthcare
human intelligence
cardiac care
machine learning
diagnostic precision
predictive analytics
early disease detection
regulatory oversight
healthcare outcomes
cardiac monitoring
remote monitoring
clinical accuracy
human oversight
healthcare democratization
workflow optimization
HRX is a Heart Rhythm Society (HRS) experience. Registered 501(c)(3). EIN: 04-2694458.
Vision:
To end death and suffering due to heart rhythm disorders.
Mission:
To improve the care of patients by promoting research, education, and optimal health care policies and standards.
© Heart Rhythm Society
1325 G Street NW, Suite 500
Washington, DC 20005