HRX Roundtable - Learning AI for Busy Clinicians: ...
Video Transcription
So welcome to this HRX Session 2024. Very excited to be here. My name is Sanjeev Narayan, and I'm moderating this session on learning AI for the busy clinician. How do we do it? What tips do we have, and what new resources need to be identified? I'm really thrilled to be joined by a phenomenal expert panel. I'm going to have them introduce themselves. Let's start with Amy.

Hello, I'm Amy. I'm a nurse practitioner for the Stanford EP service.

My name is Advey Bhatt. I'm one of the electrophysiologists at Valley Health System in Paramus, New Jersey, a private hospital, working with Suneet Mittal and the team.

Nassir Marrouche, Tulane University, and I work with Sanjeev Narayan.

Thomas Deneke. I think I'm the only one from Europe here, actually. I'm from Nuremberg, and I'm an electrophysiologist as well.

There's a reason we picked you as the only European representative. So as we get started, there's a barrage, an avalanche of papers and information on AI. Before we get on to learning it, what do we need to learn? Any thoughts? Do we need to read more papers? Do we all need to become data scientists? Do we all take degrees? What do we really need to know? What are we solving for? Nassir, do you want to take that one?

That's an excellent question, Sanjeev. That's a Sanjeev question. What do we need to know? What do we need to learn? The number one message I tell my team every day is: bring more people, bring more talent, engage more people with AI into your cosmos. We tend to be, especially our group of physicians who trained in the last couple of years, very reluctant about integration and change. But this change is coming whether we say yes or no. It's happening, and it's fast.
So what I need to know, and what I need to do, is get more people who know what they're doing in the AI field into my cosmos; open the databases to them, open the infrastructures, and allow them to do the work, instead of being protective and holding on to data, animal work, experimental work, outcome research. Anything I do, if I get an opportunity, if I get called by somebody to do AI work, from now on I should say yes. Obviously with an understanding of what's going on, but yes, we need to be more open. We need to welcome people with these ideas about AI, whatever they are, from wherever they come, and allow them to be part of our system: research, clinical, education.

I think that's a very good answer. Could I ask Amy to weigh in? From the patient perspective, being on the front lines, dealing with this as part of the entire team, are patients worried about this? Is it even on their radar? What questions do they have? How do we begin to address them?

Yeah, I think from a patient perspective, there's a lot of excitement and fear at the same time about AI. They always feel like, well, are we giving away a lot of data that I don't want to give away? I think there just needs to be a lot more education, not just for patients but for clinicians as well. We need to understand the basic concepts of AI and how it's applied to clinical practice. We assume that everybody knows what AI is, but maybe we don't really know. So just learning basic AI, and how it applies to clinical practice, for both the patient and the clinician, would be a good place to start.

Yeah, please, I was going to ask you, Tom, go ahead.

No, I have a question on this specific topic, because I think you in the US are much farther into this process than we are in Germany.
So for us, it's not only how we learn and where we get this information, but also how we keep everyone interested in this topic, not only patients but also all the doctors. In cardiology there are some very old doctors and some very young doctors, and I would call them the digitally naive and the digital natives. How do we get all of them onto the same platform? I think that's crucial, because most of the digitally naive doctors are actually very reluctant to go into any digital health models, and we need to make sure that they benefit from this as well.

I think that's a great point. I was going to ask you something related, since you've really made an excellent point. Is the goal to teach doctors, or is the goal to answer questions that patients have, that we may have about the literature, and then train to that? For example, many US-based doctors don't want to learn about AI or computational methods either, but our patients are coming to us saying, oh, I heard about this, my Apple Watch, my AliveCor. Are you finding that in Germany too?

Yes, to a certain degree, and I think those questions are the one thing we need to be able to answer as doctors. I'm not sure we need to know all the basics behind it, but I think we have to convey the message that we have an AI that helps us in the process. It's not that the AI does our job; the AI helps us in the process of whatever we do.

Can I? Yeah. I think, from a very basic level, one thing that we need to do is have a common language to understand AI, and to evangelize it and promote it within our systems for the slow adopters, and to contextualize it for our patients.
So I think the term AI sometimes gets bandied about a little loosely, and it's incumbent upon us, as promoters of these technologies, to be able to bring it out to our hospital systems and to understand, not in great detail, the differences between machine learning, generative AI, and other forms of AI, and where they might prove useful for some use cases and not so useful in others, so we can help our administrators and our other doctors learn. Because I think we're going to have to help push it forward to actually get a well-oiled, functioning system in the long run.

So we've raised some really interesting points. Why do we need to learn it? Unless you really want to, you don't have to learn it just to learn it, but we have to answer patient questions. We have to speak to purchasing departments and administrators in our healthcare systems, who might be barraged by vendors coming to sell to them. We need to be able to understand the literature. And then I suppose we need to stay on top of the field, to stay abreast. So how do we go about doing all those things?

So Sanjeev, you put it nicely, you've got what we need, and I love the way you presented it. But there's something that nobody seems to be talking about, especially in our field. AI, right? We've been doing AI for a long time. People forget that. A very long time ago, you started it. Your product was an AI, right? We used to call it a black box, but now I'd say AI. The CARTO system that we map with, it's an AI, right? The AI ECG machines that give you an analysis of your 12 leads, it's an AI. So it's been around us a long time. And it's surprising to me, with the way it's happening now and getting into medicine now, that, for example, as Thomas presented it from the German perspective, when you ask a physician, I want to do AI research, that's what's missing. That's where we're struggling.
You ask a physician in Germany, or even in the States, hey, I want to take a hundred thousand of your patients and see if a single ECG plus echo will predict their strokes. Simple question. That's where physicians hesitate: on integration and acceptance. But the moment you come to Thomas, or any physician in the world, with a product, call it AI or call it, hey, the CARTO system, or Topera, they will take it and use it, a tool that helps the patient. What's missing is the way we're presenting it in this day and age: ah, AI, it's going to take your job away from you. It's a tool. These are things we're developing day by day as tools to make our lives easier and to make the processing of the information we deal with every day faster.

Now there are prediction models that bypass us and go straight into treatment. That's what's coming. New-generation physicians are integrating it, where AI starts treating your patient. We're doing studies on this, all of us. That's a different topic, but I talk a lot about what I call the low-hanging fruits of AI, the AI that can be interpreted and used right now, today. Do we need to read an ECG? Why should I? Think about it. I don't have to read the ECG anymore; the accuracy of the system is better than a human's. Why do I have to do an electroanatomical map? Run the catheters in the atrium for five seconds and it reproduces it. Things like these are AI implementations, and I think we're accepting those. What you're talking about is when AI gets into treatment and takes my job away from me as a physician. That's the next thing all of us will have to adjust to and accept, and that's a challenge for implementation into our platforms and our hospitals that needs to be discussed: a completely new generation of AI in healthcare. Treatment AI, digital treatment. That's a different topic from diagnostics and from making our life in the clinic easier today, as we know it.
This has been just a phenomenal discussion so far, and I fully agree. I think what Tom has said, and many people would agree, is that it's scary, that people fear it's going to take jobs away. And as you said, Amy, there's the question of what kind of data we have to give. But I think the first step is basic understanding. We had a paper on learning AI released, I believe, in the Heart Rhythm journal this week, concurrently with this meeting, which I wrote with Emma Svennberg in Sweden. It has a list of the things you would expect: certain blog sites, certain types of online resources like Khan Academy, then conferences such as this, all the way through to actually taking courses. But what do we really need to know? I think the first thing is a basic language, as you said, and a basic understanding that the automated ECG read we've been using for 15 years is actually AI. We don't fully appreciate that, but a lot of it is just an extension of statistics: certain things are likely, certain things are less likely. So how did each of us on this panel go about learning it? Would anyone like to share how you picked it up? Because it doesn't have to be intimidating. The question of whether we should is a different one, because right now, as you said, Nassir, we've got objectives: we've got to be able to address patient questions and sort the chalk from the cheese. So how would we go about learning it? Tom, do you want to pick that up?

Yeah, let me ask one additional question on that. If we look at the automated EKG, AI, as you call it, when we as clinicians look at ECGs and see what the algorithm puts out, we always look for the times the algorithm fails. I think that's our general approach to AI-based things, because we're trying to be smarter than the system. And actually, it turns out we are not.
So I think the one thing with AI is that we have to accept that it's a good thing, that it's doing good things for us and augmenting the way we approach things.

I think that's right, though I don't think we have to accept that it's better than us. For almost every application I've seen, the FDA requires AI to be an assistant, just as you said. So I don't think it's taking over medical diagnosis anytime soon. But how would you go about learning it? Let's start with Amy. What resources have you used and found effective?

YouTube and ChatGPT.

That's a great first step. And what do you type in? You're not going through Khan Academy?

No. Probably Coursera, but I just put in "AI healthcare practice" and how to improve workflow. Actually, my usual search phrase is "AI workflow healthcare."

Any other thoughts?

For myself, at least, I've used MIT OpenCourseWare, free resources, edX, along with Google AI resources, just to stay abreast of things and to make sure I have an understanding of that common language. And then the Twitterverse, where there are always competing viewpoints, is a good place to start, along with the articles people promote there. I also try to make sure I read the countervailing viewpoints of the people who are more nihilistic about how quickly we're going to get there, because I think it's very important not to buy in too quickly, basically, to be cautious, but to be willing to experiment, to try and fail and try again.

So what do you mean by not buying in too quickly? Explain that to me.

Well, I feel like with any technological process, at least in my time, they've always been touting something as the next big thing that's going to change your life. But I can only really count one thing, the cell phone in my pocket, that very rapidly changed our lives in my lifespan. There are very few other such things. Even things like EMRs, they promise, will make your life easier.
They sell you that all the time, and then, when we're on our 100th click to do one note, we're all struggling, right? So sometimes there's hype that we have to sort through to figure out where to apply it. At least at my institution, which is a private institution, we have to really understand what resources we apply to what problems, because my thought is we can't purchase 150 different AI algorithms at the moment and interleave them well into a well-functioning system.

Let me just, Nassir, one second, sorry. Nassir, I think you pick up on a really good point. You're obviously deeply into this, as many of us at the table are. But you went to primary resources; you then started looking at arguments for and against, and that's helped you. Nassir, you're also deeply into this. How do you think we should be helping the whole of our field get up to speed so we can address these questions? What techniques and technologies are really promising? Which ones have gaps in their evidence base? How do we learn those things?

That's a lecture. That's a 45-minute question I could give a lecture on; invite me to Stanford Grand Rounds. But going back, what you just said is important. That's why I asked the question. We are complicating things more than they are complicated. We tend to do that, because that's what we're trained to do. We are physicians and doctors; we see things as problems. You have to give me a problem, and then I can talk to you. I was trained that way in medical school, always problems. If you give me solutions, I can deal with them. So that's why I asked him the question. On EMRs, I disagree. For the first time in my life since I went to med school, and I won't tell you when that was, a long time ago, I can sit down. I went before you.
I can sit in the system and look at anything I did, how my patient is doing today; I can finally look at the outcomes of my patient population in one click. I can look at doctors. The thing we're missing, you and me, is that we never protected the doctors in this process. We put in the EMR, and the EMR is the best thing that ever happened to us. Are you kidding me? I can do any kind of research today with one click. I remember when you hired research fellows for two years just to do chart review. Our doctoral thesis in Germany, the Doktorarbeit, was all chart review. We'd hire physicians to do doctoral theses reviewing 1,500 charts to see if a patient died or had a stroke. Now it takes me five minutes. That's AI. So it's there. Yes, it complicates our life; there are more things to do. We see the results of AI: more patients, the watches, the algorithms. But the missing part, which we physicians need to be aware of and aggressive about, is how our job is getting smaller, not bigger. There's an AI, for example, being worked on right now to process your clinic visit; instead of you clicking, it will do it for you. There are algorithms and protocols being developed as we speak, and this will be implemented. The ECG, I used to spend 10 minutes reviewing whether what was written up there was correct; now I don't have to read the ECG. Patient treatment, how I use it today: an algorithm that we wrote at Tulane for treating post-ablation, Chen Ho's algorithm, tells me whether to give a drug or not. It's a good thing.

So here's a question for you, then. How would other people evaluate your algorithm, which I think was great? I love your work on the predictive algorithm for AF ablation success, among others. How do we evaluate that? How do we get up to speed to read a paper like that and say whether it's good or bad? What kind of resources would we need? Does HRS need to put on a 101 primer course?
Do we require, as part of CME for all the boards, a one-hour section with basic questions, that AI is an extension of statistics? Any thoughts on that? Just to bring it out, because a lot of this is down to fear of the unknown, because it's new. It's not that scary.

All of the above. Simple, all of the above.

Thomas, what do you think from the European perspective?

It would be E, all of the above, again. But let me, let me...

Bye-bye, Nassir. Thanks, Nassir, fantastic.

I think we as societies, holding these scientific meetings, need to pick up more of this. This is a very good platform to convey the message and convey basic concepts: how do we deal with AI, what can we expect, what can we not expect? I'm not sure that many people in Germany actually read manuscripts on AI. It's more what gets presented in a forum that you can digest.

So from the perspective of Germany, of Europe: on the one hand, the European EPs are far ahead of us. TeleCheck, Dominik's initiative, a lot of the work Emma did herself on STROKESTOP, very advanced. So where is the skepticism or the block? Where do we need to work through that, in education, in ethics, or in broader discussion?

Well, one immense hurdle in Germany is that the highest right of any patient, or of anyone living in Germany, is privacy. Privacy policy and data safety are the things that drive everyone who wants to work with AI in Germany crazy, and they are just not adapted to our current way of doing science. I guess that's also an educational problem. We need to make sure that everyone knows, I mean, we all wear these wearables on the one hand, but we would never allow people to use our data for any neural network or anything, because we are afraid of re-identification or whatever. I don't even know what the general fear is.
But I think this is all part of the educational process that we need to go through, and that all the patients also need to go through.

Amy, if we taught patients more about AI somehow, do you think they'd be less scared of it or more scared of it?

You know, I think probably more scared of it.

So let's talk a bit about the privacy concerns. I've not heard this directly, but it must be a big question. Can you list a few of the things that worry people about AI, sorry, my battery's dying on this, versus other technologies they might give data to?

I think a lot of it comes from, for example, as we're starting ambient AI in the clinics, there was the concern that somebody would have copies of their voices. And the other thing is that they would be discriminated against if somebody had access to their data, in terms of their insurability with the payers. It's the same thing they were concerned about when genetic testing came into play: they don't want to be tested for a certain thing because later on it could count as an underlying condition and they may not be insurable. I think a lot of those coverage fears are part of it.

Amy, can I extend on that? Do you think that, in general, the media need to do more to educate people about the benefits and the downsides of AI? I mean, if I look at the news, it's always the negative part. So is it us, or is it the general media acceptance? Where are we going there?

I think with social media there are just so many more voices out there, and patients don't really know whom to filter out, who's trustworthy, what's a vetted form of information, because everybody now feels they have a seat at the table, so they all have an opinion about things, and people somehow take that as gospel.

Advey, did you have any thoughts on that?
I think the education aspect is going to be hard, because in the media it's really about what gets clicked, right? So they'll either get a quote from Ray Kurzweil talking about the singularity, or somebody else in the news, and everyone's going to be talking about how they're going to be in bread lines. That's one aspect. I think we have to keep it very focused and say: this is a problem we're trying to solve, and this is where we can apply it to provide better care for you. I don't know that we have to get very descriptive, but tell them that, and maybe this is more for the lawyers and the ethicists, that to the utmost of the law we will maintain your privacy, but this will ultimately help you in your healthcare journey. That's something we would need to convey. And I think we're in a transitional phase, right? As clinicians and as patients being exposed to AI, ideally we would have an AI where you don't even know it's there and the job just gets done. But because we have to interact with it, and because patients have other concerns with it, it doesn't always work well, for example chatbots and things like that, and they worry they're not getting good care. That's at least my perception of it. So I think we have to prepare our teams and our patients for the fact that we are in a transitional phase and there will be some bumps in the road.

So I'm hearing a few things. One is learning about AI, which of course was the topic of this session. But perhaps more interesting is the notion of privacy, which isn't actually going to go away, and I don't know that there's any way we could really alleviate those fears. I mean, we could learn about AI, we could learn about the tools, we could get into, on this session, how you would help people pick and recommend a certain type of device or wearable versus another.
But the privacy concern is a massive one, and it's not limited to AI and medical devices. It relates to the aggregation of databases generally, and to the fact that the message is really controlled by large data science companies. So one could make the case that there's already very little confidentiality, but that medical devices may not add much to that risk. Would that framing help or not? If you were to tell patients that privacy is a separate issue, but that we need to understand AI more in order to make informed choices, would that help?

I really don't know. I'm not sure if that would be helpful, but I really have no idea.

So if you look at the public disclosure statements of any major company, you could look at X, Uber, or Disney and search their privacy statement, they will say things like: your personal data will not be shared, and should you wish to cancel, we'll delete it within 30 days. It all looks really good. But if you scroll down and look at data aggregation or secondary data uses, the fine print says things like: we reserve the right to aggregate data from your device with existing data such as your GPS location and other apps used. And by the way, I don't know if you know this, but none of that is considered personal data by any international standard. GDPR has a clause covering any data handling by anyone, but the data I've just described is not personally identifiable information, or PHI. So that's all about usage of Twitter or Netflix; it's not related to your medical devices. So how do we get a handle on privacy in the medical space?

Please don't ask me.

Advey, do you have any thoughts on that?

I don't know. I think it's a matter of trust. I go back, when was this, probably a decade ago, to the stories in the newspapers about Facebook approaching large healthcare systems and trying to get access to their data.
There was no discussion on the patient-advocate side of whether that data should be shared. It seemed to me from the articles that they were just collecting a check, basically, to share the data. And that's very problematic.

And in fact, there are several ongoing lawsuits in Europe and in the US on exactly that at the moment.

Right. So I think we have to be very mindful to be the data stewards ourselves. And again, when we're working with these medtech companies, who owns the data? As we've heard throughout this conference, it's a very gray area, and we certainly need better legal support and definitions to help protect our patients and ourselves in the process, and to move the field forward. It seems to me everyone is a little fearful because there's too much gray area.

I think that's a really excellent point. It's incumbent upon us to act in the best interest of the patient and of the fidelity of the data within our healthcare systems. And we should be working with IRBs and with the IT sections in our own groups to make sure that consent is more robust. That would help alleviate some of the privacy concerns: we make sure that the consent forms state, we will not share data with the vendor, we will not put data into shareable databases. Now, that can impede research on the one hand, but if the goal is privacy, which it has to be...

Well, yes, it is. And again, this may hold back some scientific research on the one hand, but if it is to the benefit of our patients, and I'm not 100% sure that is really true, then the data should not be shared. It's definitely a major discussion in Germany, because again, privacy is that important there.
We saw that during the COVID pandemic: there were all these apps where you could enter your status, but they were actually blocked by the German government over privacy rules in Germany, even though they would have saved lives. So I think we really have to work on both ends, make sure we do the best for our patients, but look at it from 30,000 feet above.

So when you're negotiating with new vendors or evaluating technologies, how do you talk about privacy and related issues? What considerations do you bring in that might help others make those decisions?

Well, I think our biggest question would be: what do they do with the data? What do they store? Every company's model is very different, and they all proclaim it's de-identified. Hopefully they would also eventually destroy the raw data, for example ambient voice recordings. We trialed a system and didn't have an issue with it, other than the fact that we didn't like it, because we lost our own voice in our notes; it was a very vanilla note with no nuance. But I presume the patient's biggest concern in those scenarios is: my voice is out there. And as we can see with other AI techniques that mimic likeness and voice, that's also a big problem, right? If someone's voice is stored, and we use our voices at times to identify ourselves to our financial institutions, things like that. So I think it has to be clearly stated what they do with the data, and at some point it can't be retained indefinitely.

Do you think it would make sense to establish this on a general basis? To give a clear ruling throughout the country that these are the things you can do and these are the things you cannot do? Because I'm not aware of any institution that does that, or that allows or disallows specific things in a legislative way.

Could you say that again, Tom? You're not aware of an institution that restricts what, exactly?
In the sense that, in a specific situation, what data can be used for what?

I think that is an absolutely critical point you've just made, and I think we have to do that. Moving forward, one of the privacy bounds would be: data will be released for this but not for that. So you want to do a study on AF ablation; we will collect AF burden data, and okay, we'll give you age and gender. But you're not going to have, say, a zip code, because you can infer socioeconomic status from that. You're not going to get information that could be used by my employer. There are statistical ways you can render data less identifiable, so it could be 15,000 people who look a bit like Tom Deneke, but you don't know it's Tom Deneke. So that's one way institutions could protect ourselves, and I think you're right, no one's doing that. Is that something that ever comes up?

It has not at this point. But yes, do we need to tier the data: high-level data that cannot be shared, intermediate data that can be shared in a certain way, and other data that is more freely shareable? I'm assuming that's the model you're suggesting.

I think the problem is that everything varies state by state. What's shareable or protected in California is different in other states. So maybe more federally regulated privacy laws would be a bit more helpful.

So I listen to this discussion and I think, wow, it's so confusing. If we think about what the patient is thinking, what the practitioners are thinking, I'm confused. Does anyone actually know? I don't think anyone's on top of this field; it's truly evolving. So I think certain guardrails in our own institutions are really important, where we put limits around the use of data, but it's happening anyway. So are meetings like this helpful? How do we get more people to attend meetings like this to understand it?
Do we make this part of lots of other forums? What are you doing in Germany, for instance? Is this part of the German Cardiac Society?

Well, yeah, we get lectures, probably two lectures out of 1,000 or something like that. So we're not very far along there. And is anyone in the room? Usually not; it's the early morning hours. But I think it must be the role of the societies to carry this message and, on the other hand, to be the advocate for patients and clinicians. Because we don't really have a voice in politics, and those who are heard in politics are usually not the ones dealing with this. So I think this is a crucial role for our societies, and that's why these meetings are very, very important. Even though, in this meeting, I mean, it would be a different meeting if there were 10,000 people. So I guess you have to have different ways of dealing with this situation: meetings like this, and the general scientific meetings, where you need to put up the consensus of what is discussed here.

So you said that so many things have been proposed and very few have made it through. What do you project for the next five years? Do you think the field will be burgeoning like it is now, with a different set of players? Or do you think that much of this will have gone away?

Well, going back to our conundrum: how do we interleave so many different models to help us treat all of our patients in cardiovascular medicine? Will that become one giant company, say Viz.ai, which is expanding out very rapidly? Or should we work with multiple different vendors? Inherently, I would be risk-averse about putting all my eggs in one basket, and I would try to work with many different systems, many different AIs, because I still feel it is a very transitional time, and there's a lot to learn until we get a very robust system.
But you know, I would imagine none of our EMRs, at least, are very well tuned to do all these things, to capture all the data that we want, and very few places probably have the wherewithal to do what Mayo did in terms of reorganizing all of their data, going back decades, to do high-level data science on it. So out in a community hospital, a private hospital, we have to think very carefully about how we use it and be very targeted at the moment. Are you acquiring a lot of technologies? You said that you're screening a fair number. Yeah, we've tried ambient dictation technologies. We will be trying AI-based stethoscopes, and we hope to be trying other in-the-lab EP electrogram analysis modalities as well. So we're trying to pinpoint where we think we could help ourselves and our patients the best, to do it in a smart way, and then to understand it. Even if we think it's not providing value, to move away from it and look for something else, something better, at that point. So we're in the last couple of minutes of the session. It's been a great discussion, going all the way from education and filling unmet needs on the patient and provider side, through to what AI is going to do and privacy concerns. If we had to sum up in just a couple of sentences: where do you think we'll be in a year, if we were having this discussion next year? Any thoughts on that? Tom? We'll be back in Atlanta, I guess. No, I think there are lots of things going on, and it's really hard to keep an overview. I think that's one of the problems we really have: there is so much unfiltered information. Looking back at the situation in Germany, I think one thing that is definitely different is who is paying for this. In Germany, it's a general health care institute that pays for things like this.
So whatever comes out of it is at least felt to serve a general purpose. Whereas if you have certain industry paying for it, they always want to have the benefit of that as well; they make an investment, but they want to get money out of it. And I think that is something we at least need to think about, whether that is a good idea. Very good points. Amy? I think unless there's a true interdisciplinary approach to the creation of AI and the deployment of AI, we may be back at kind of the same place, discussing the same things. Unless, like he's saying, all the stakeholders are present at the table when we're trying to do AI in clinical practice. Really good. I mean, I think we'll all be excited, there'll be improvements, but we'll remain just as confused, because I don't think we'll have any of the answers settled at that point. But hopefully we'll still remain just as hopeful that this is a path forward to improve patient care and improve workflow and efficiency on the back end. I think that's lovely. I hope we're more enthusiastic about its overall role. I think there'll be fewer players, there'll be consolidation, and there'll be a bit more skepticism, but again, hopefully there'll be optimism that ultimately the solutions work. I want to thank you all. It's been an absolutely great session. Thanks to Nasir, who already had to go to the airport. Enjoy the rest of the meeting. Thank you.
Video Summary
In HRX Session 2024, focused on "Learning AI for the Busy Clinician," moderator Sanjeev Narayan was joined by a diverse panel including Amy from Stanford, Advey Bhatt from Valley Health in New Jersey, Nasir Marouche from Tulane University, and Thomas Deniker from Nuremberg. The discussion revolved around how clinicians can learn about AI, the necessary resources, and the importance of integrating AI into clinical practice. It was highlighted that AI has been part of medicine for years, like ECG analysis tools, but now faces skepticism and privacy concerns.

Key points included the need for better education and resources for clinicians, a common language to understand AI, and addressing patient fears around privacy and data security. The panel discussed leveraging platforms like YouTube, ChatGPT, and MIT OpenCourseWare for AI learning. The conversation also touched on the ethical considerations, the need for clear data usage policies, and the role of professional societies in advocating for responsible AI integration. Despite challenges, the panel was optimistic about AI's potential to enhance patient care and workflow efficiency but acknowledged the journey's complexity and the necessity of interdisciplinary collaboration.
Keywords
AI in healthcare
clinician education
privacy concerns
ethical considerations
data security
professional societies
interdisciplinary collaboration
patient care
HRX is a Heart Rhythm Society (HRS) experience. Registered 501(c)(3). EIN: 04-2694458.
Vision:
To end death and suffering due to heart rhythm disorders.
Mission:
To improve the care of patients by promoting research, education, and optimal health care policies and standards.
© Heart Rhythm Society
1325 G Street NW, Suite 500
Washington, DC 20005