HRX Roundtable Optimizing Clinical Operations with AI
Video Transcription
Okay, so we'll get started. This is the roundtable session at HRX called Optimizing Clinical Operations with AI. My name is Janet Han. I am a cardiac electrophysiologist at the VA Greater Los Angeles Healthcare System and UCLA, and I'm pleased to welcome all the panelists and all the guests. I'm going to just have people go around the table and introduce themselves. So I'll start with Bryn.

My name is Bryn Deckert. I'm a pediatric nurse practitioner in EP at the University of Michigan. I'm also on the board of trustees and was on the production team, as Janet was too, for HRX, and I'm excited to be here.

I'm Mike Rosenberg. I'm a cardiac electrophysiologist at the University of Colorado. I also wear a couple of different hats there that may be relevant to this: I run the ECG lab, I'm the liaison to our Epic rollout, which we're transitioning to a new version of, and on top of that I do NIH-funded research in artificial intelligence and applied quantitative methods.

Hey everyone, I'm Emeka Anyanwu. I'm a general cardiologist at the University of Pennsylvania. I would describe myself as a web developer trapped in the body of a cardiologist, and so my bias and my focus and interest is always on the, you know, boots-on-the-ground implementation of software solutions for clinical workflow problems. At Penn, I work in the cardiovascular informatics space, as well as our Center for Healthcare Innovation.

Hi, good morning everyone. I'm Kevin Thomas, and I'm a cardiac electrophysiologist at Duke University Medical Center, a health equity researcher and implementation science researcher, and administratively I'm the vice dean for health equity at Duke. Good to see you all.

My name is Laura Gravlin. I'm a cardiac electrophysiologist at Mount Carmel in Columbus, Ohio, part of the greater Trinity system.
I'm director of the EP lab there, and I co-direct our women's heart program, and I have a grant-funded project where we're looking at natural language processing to address patients who are underserved, undertreated, and under-recognized.

Good morning everyone. My name is Julie Shea. I'm a nurse practitioner at Brigham and Women's Hospital in Boston.

Hello everyone. I am Aileen Farrick, a nurse practitioner at White Plains Hospital in New York.

Wonderful. Well, it's a warm welcome to everybody, so I think we'll just dive right in. We have a great mix of people at the table, and clinical operations can have lots of pain points; maybe that's the best way to say it. But I wanted to start off by just asking everybody: what do you personally think about when you think about clinical operations? Maybe I'll start with Julie.

I guess in terms of clinical operations, I think about the day-to-day processing of patients through the system, right? So our clinic operations, our procedural obligations, and making sure that those are streamlined, obviously, to get patients through the system in a timely fashion, and making sure that we're capturing all of the charges for them. Those are my initial thoughts.

Yeah, anyone can jump in.

Yeah, I was going to say, I think it's from the very beginning to the very end, right? So it's scheduling the patient, getting their data so that you can appropriately evaluate them, seeing them in the office. If they're going to have a procedure, then there's the whole patient educational component, going through all that. For us as advanced practice providers, we see the patients post-procedure, so it's also developing that relationship with them, because you're going to be seeing them later on.
And then, as Julie said, it's the whole process of educating your staff as well: connecting them, getting them scheduled, having them feel good about their relationship with you, and having the process move smoothly and efficiently, and appropriately billed as well.

I think clinical operations, you know, I agree with the two of you, but it's also the front line. What happens often is someone way higher up will make some sort of decision, and they have no idea how it will be implemented, carried out, and how it will affect the patient. So what you really want to know is, how is the front line handling this, and is this working? From an advanced practice standpoint, we are the front line. And we should be, and you all, too, we should be the ones telling leadership: this is working, this is not working, we need this, we need that. So I think we just, as a team, have to recognize the front-line providers in our day-to-day operation.

I think you just brought up a great point that was talked about yesterday in the opening session, which is that you have to understand the workflow as it exists before you can implement any solutions, because otherwise you're just throwing tech at the wall to see if it sticks, and then you're perhaps even adding a layer that's not great, right, to a workflow that is already not great as well. So I want to hear from our people who are working in that space what their thoughts are.

You know, from my perspective, with clinical operations I tend to focus on the who, what, where, when, sometimes why, and so I think it's the processes that help us answer those questions. So, what patient is getting what done by whom, in which building, at what time, and what's the indication, is that appropriate?
That tends to be how I kind of frame clinical operations, almost anything other than the actual direct clinical reasoning. No, that really makes sense. I think having some sort of framework to look at within that workflow is probably the most important to be able to implement it appropriately. Any thoughts, Mike? Yeah, actually, this has come up when we're talking about how to implement new Epic builds and things like that, is almost being unafraid of being too granular about what you're doing, like, you know, where is the monitor gonna sit? Like, if I'm using this new, you know, platform, how is it gonna load? You know, what's the specific steps involved in, you know, if it's in the clinic, you know, when is the patient brought in? When am I opening up to look at, you know, am I looking at the screen? Am I looking at the patient? You know, it's the boring, mundane types of things that you do over and over again that I think, you know, attention to those things that really plays a role in whether it's useful or not. Yeah, I think it's, I don't know that it's even mundane, right? It is UX and UI, and it's those things that cause those pain points, right, more than anything. As I sit there and I look at, you know, the government system is CPRS, and I don't know how many of you are familiar with CPRS, probably most of us, and for all of its weird quirks, it actually is pretty user-friendly. Weirdly, right, it's run off like an old DOS system, it's like beige, of course it's like beige, right? So you think the user, you know, user experience is not, or the user interface is not great, but surprisingly it makes sense, and surprisingly you can see any patient throughout the entire system across the states, which is really kind of unheard of, right, like people sort of want to be able to emulate that kind of work in Epic, in Cerner, in all these things, but that has maybe not been quite as successful. 
So I would love to hear your thoughts about that: what about user interface and user experience in that clinical workflow?

Yeah, maybe just to round out that last part we were talking about: when we think about clinical operations, the other really important thing is that it needs to be seamless. I love that all the answers, first of all, were patient-centered, because sometimes I feel like as we think about innovation, that can be lost, and so that's great. But it's also how we work together in defining our roles, right? If we can do that clearly, and people are really focused on their roles and doing their jobs to the best of their ability and being clear about what their roles are, then I think we have the opportunity to implement things that can make things more efficient, make things more seamless, and do it effortlessly in the workflow that we've already created. So I think that's a really important part.

But getting back to your point, I think the key to all of this is the interface, and ensuring that whatever we are planning to do to expedite what we are currently focused on, as it relates to AI and generative AI, is, again, user-friendly for the patient, and that the patient plays an important role in this. And I love that you brought up CPRS, because I have fond thoughts about it. Anybody who works in the VA healthcare system is going to be familiar with it, and for all the challenges that the VA system has, that is the one thing that has been consistent and pretty amazing. So I also worry, as we think about entrepreneurship, capitalism, et cetera, that that's going to get in the way of it. Even as you think about Epic's role: how many of us have Epic, but there are different iterations of it, and they don't work well together.
They're clunky, and so we've got to be mindful of that and figure out how we can obviously allow for people to grow and bring new things and personalize that, but so that it doesn't get in the way of us trying to accomplish the end goals that we have.

So when we talk about those clinical operations and making sure that things don't get in the way, I think it's important to talk about these pain points: how do we make those pain points better? Because clearly there are lots of pain points that we think about in our day. Even, you know, I will sit at CPRS and be like, dang, I wish I could just upload this image of whatever X, Y, or Z thing that I'm doing, and I can't do that in a facilitated way, right? Where are those pain points for you guys in the current system?

One that pops immediately to mind is our EKG system. We have Midmark as an outpatient, and then we have GE Muse as an inpatient, and you can't always compare EKGs side by side. In our particular flavor of Epic, we do not have a connection, so we can't pull out discrete measurements. So when we're trying to identify patients with QT prolongation using AI, we don't actually have that numerical measurement to try to find them. We have tried to make GE the standard, but our health system isn't enforcing that. So even the mechanism by which we are trying to collect the data and have the data accessible, for something that you would think would be very straightforward for cardiology, EKGs, is in fact not.

Yeah, I think I've heard that about Muse as well. We were actually just talking about that yesterday. Reading ECGs in Muse, you would think that at this point it would be leaps and bounds ahead, right? But again, not easy to compare, not easy to use that system. It seems a little clunky.
It probably could be better, and you wonder at some point, is it that, you know, we need to bridge that gap between the engineers that create the systems and the actual users, right? I think, you know, Penn upgraded to Muse NX maybe in the last 12 months, and a lot of new features came with it, like side-by-side comparison. And these were features that Penn, for a long time, had actually built into its own software because they weren't available. But, you know, boots-on-the-ground cardiologists had no idea that GE had developed this functionality. You know, if we had gone to GE and said, hey, you know, these are our pain points, they'd say, oh yeah, we released this like 12, 24 months ago. But, you know, I think the roadblock is, like, trying to convince our internal stakeholders that this matters to the people that are trying to read 100, 200, you know, ECGs a day. Trying to convince them that, hey, if we're reading an outside Holter monitor, we ought to embed the PDF and have a pop-up automatically, so I don't have to manage two different windows. Because, you know, death by a thousand clicks by the time I've read my tenth one, this really stacks up. So I think sometimes there's a disconnect between the people that are managing the available solutions, the people doing the work, and then the actual vendors. Yeah, I was gonna say, there's actually, I think if there's two levels of pain points, there's the technical one, which is when I'm doing my everyday tasks, where am I getting slowed down, or where could I, you know, improve efficiency. But then there's the practical pain point, which is I'm not even in control of how these things are deployed in front of me. And so, you know, competing ECG systems, you know, new technology rolled out from the top down. CPRS is an interesting example. 
I think the reason it's so good is because it's just been incrementally improved, because they let people use the same system and get used to it and adjust their workflows around it. Whereas anyone who's used Epic knows that within, like, every six months there's a new upgrade and everything looks completely different, and you have to keep constantly adjusting, and the scale is huge. This is something we deal with a lot, is, you know, something may seem great on a prototype where I'm showing you, for one, you know, ECG, this is how you can click through, but if I have to read 200 in half an hour, it's just not going to work. And so you need things that work, you know, at scale as well. So I think there's different levels, and I think what most of us, you know, we don't have hospital administrators at the table here, but most of us are, you know, would argue a lot of the pain points are, you know, the people who decide on the technology, not speaking to the people who actually have to use it. Even a similar situation happened to us, like we in EP read all of the, you know, event monitors and ambulatory ECGs, and a decision was made above us to go with a company that we had not really worked with or heard of. And it was difficult, right? It's difficult to be doing the work on the ground, as you said, and not have your thoughts taken into consideration, because you are the one every day clicking the death by 1000 clicks, right? So you want to make sure that all of that is facile. So how do we, you know, like, that's a different question on a different tangent, but how do we improve that communication? Like how do we get all the stakeholders at the table? I think that's a really important question. And at the main stage this morning, they alluded to the fact that, you know, if you are a practicing physician in leadership, then you have hands on knowledge of what those pain points can be. 
I think that administrations in a lot of places rely on the physician's altruism to advocate for the patient, so we are often bringing problems to administrators that they're unaware of, because a lot of leadership is not physician. And I think, you know, having some sort of mandate, for lack of a better word, where leadership has to have, you know, either a dyad, where you always have a physician involved in the decision making, would be an important...

A physician or an APP, like somebody on the ground, like boots-on-the-ground people, right? They should be at the table, whether that's technicians, physicians, NPs, RNs, whoever that might be.

Well, it always seems like it comes down to the bottom line of money, at least in my experience. Even if you do try to give input, it seems to me the bottom line is economically what is best for the department or the institute. And in the era of productivity, you know, you're a physician, so you can let your practice happen to you, or you can try to take control of your practice. But then that means a lot of hours, you know, with leadership, management, administration, for which you are not compensated in a lot of productivity models. Plus, I think people then start to feel a little vulnerable as well.

That's actually an issue that we have, which is that the people who need to be providing the most input on the front lines are too busy clinically to actually spend time in a meeting to say how we could actually make things better. And I think I'd also add that in cardiology, and even more in EP, the productivity is so high that some of these inefficiencies can just be solved with people, right? There's no thought that, hey, this person's job is doing something that could be done in, you know, a quarter of the time if we made it more efficient, just because we can afford to have more people to do the work.
And I think that's a nice segue into like, how do we make those solutions? Like what, how do we implement or what sort of implementations should we be doing to make those tasks easier, more facile, more efficient? Like talking about those pain points and other pain points that we probably haven't talked about, like what solutions can you guys think of, or have you implemented already that have made things maybe better, a little bit better? I think working, sorry, I think working to standardize things when you can is important. Obviously, you know, I work in pediatrics and it's hard to standardize and that's the other problem. Not one size fits all. And so I think you have to work within your group, but if you have several different ways of doing something and the staff has to learn how to do this, how to do that, it just gets confusing and inefficient. So if there's a way you can standardize that's appropriate, I think that's one of the better ways. But that, again, takes time to figure out how to standardize. You need to get input, buy-in from your stakeholders in order to do that. I think HRX is a great way to bring different points of view. One of the things that we are lacking, not by, not by not trying, is to get payers here. I think, you know, having insurance payers listening to these conversations would be very important. I believe they've declined our invitation several years now to come, but I think it would be important for them to hear some of these conversations that we're having. And the other thing that, you know, we brought up a bunch of times right now is hospital leadership. I think that many of us in this room have, you know, high levels, but not the highest level. You know, they're not the COO of the hospital who makes the money decisions, as Aileen was alluding to. And obviously they have a lot of decisions to make. But I think that these conversations are very helpful for those types of people to hear. 
And hopefully as we keep talking and moving into the upper echelon of leadership (you know, Mike was just saying he's helping out with the new Epic rollout, or MyChart as I call it), the voices will be heard going up.

Yeah, I don't have the solution on how to get attention; if anyone finds it, let me know. I will say, on the more granular level, when we're thinking about how to improve things, there's a great book called The Design of Everyday Things by Don Norman, written in the 1980s. When you read it, you realize it makes you think about what we've talked about, these kind of simple processes of how do I do things. And when we're thinking about a new project, this is where, you know, mundane is probably the wrong word, but it's not interesting to the NIH to fund, you know, how do I cycle patients through the scheduling process faster, as opposed to, oh, I'm predicting this new disease incrementally better than the current prediction model. But those are, I think, the more important things. When we talk about advanced technologies and digital technology applications, it's actually those simple tasks that you can automate and make quicker. But I think it all comes back, again, to this kind of basic thinking: how do these things work? What's the step-by-step process?

I agree with that totally. And I think understanding streams of workflow is really important, and really outlining that: at each step, where are the inefficiencies, where are we excelling? And also, I think that's where the opportunity of bringing some of this new technology exists, right? I mean, the low-hanging fruit has been, if you look at the studies that have been done most, it's on EKGs, right?
We should, we're pretty close to the point where we don't need to have individuals reading or over-reading EKGs, right? I think the data is pretty clear that we can create algorithms that are really adept at doing that for the most routine, routine things. And so that's like a no-brainer, let's press go, let's start implementing something like this. And that's going to help, you know, just dramatic things. And so, again, I think with this deluge of things that are happening with AI and the multiple levels, whether it's natural language processing, machine learning, deep learning, like we should be creating those things, but let's start with the low-hanging fruit first, right? We want to get to the complicated things that are going to take more time and require validation and trials and pragmatic trials, clinical trials, so forth. But we've got some easy button things that we can hit that we can begin to introduce early. And so I think it's important to have our voices heard about those things, because again, the innovators who are not involved in clinical care don't understand those things. And so I think it's important for us to focus on that. And I love that. I was going to say, I really love that, right? Because we are so sort of enamored with the bright and shiny and the sort of futuristic state of how things could be. And it's, it is, I get enamored with that and I love it. I read all the papers, but it is truly about getting the patients into the system first to be able to use those things, right? To predict their disease and then to treat them. So if we can't even get the patients into the system in a reasonable fashion to get them seen, to get them scheduled, to get them, you know, to those, those tests, right? And then after the test, to get them through the procedures, et cetera, we won't be able to continue to do just those very innovative processes. So. 
I'll just, you know, go back to the pain points along these same lines: one of our pain points is we use a call center to get our patients in and schedule them for clinic, and they are completely disconnected from our team. They're not even on site. And I don't think that they're any better than if you had some sort of algorithm that says, if you are calling with these symptoms, you need to be scheduled with this team in this timeframe, and just get rid of those inefficiencies and make it much easier. So I think kind of the low-hanging fruit is also scheduling.

And I think from a nursing perspective, since we do a lot of education, we are saying the same thing over and over and over again. Sure, we make templates and we have dot phrases that you can hand out to patients. But, you know, I was just talking to a vendor yesterday, and one of the things that their company does is they generate their AI instructions, their kind of ChatGPT things, and then the company follows up to make sure patients are following those instructions. So I think another low-hanging fruit is getting scheduling much more efficient. I think we're just spinning our wheels. I feel like they're coming to me every day; one day I'm like, I told you yesterday, the same patient with the same story, I said within four weeks or something, and then they just ask again. I don't have time to answer every single one of these calls. If we made it more streamlined and, again, standardized, then I think we could smoothly get patients in and out pretty quickly.

So I know that those solutions exist, right? Is anyone at the table using any of those solutions, or has someone come up with some sort of solution like that to get patients into the system efficiently, looking at the multitudes of schedules?
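The rule described above, "if you are calling with these symptoms, you need to be scheduled with this team in this timeframe," is exactly the kind of transparent, non-black-box logic a call center could run before any AI is involved. A minimal sketch in Python; the symptom keywords, teams, and timeframes are invented for illustration, not an actual triage protocol:

```python
# Illustrative rule-based phone triage: map a caller's reported symptoms
# to a scheduling team and a maximum wait time. These specific rules are
# made up for demonstration -- a real protocol would be clinician-defined.
TRIAGE_RULES = [
    # (keywords in caller's complaint, team, schedule within N days)
    ({"syncope", "fainting", "passed out"}, "EP clinic", 7),
    ({"palpitations", "racing heart"}, "EP clinic", 14),
    ({"chest pain"}, "general cardiology", 3),
]
DEFAULT = ("general cardiology", 28)

def triage(complaint: str) -> tuple[str, int]:
    """Return (team, days) for the first rule whose keywords match."""
    text = complaint.lower()
    for keywords, team, days in TRIAGE_RULES:
        if any(k in text for k in keywords):
            return team, days
    return DEFAULT

print(triage("new palpitations for two weeks"))  # ('EP clinic', 14)
```

Because each rule is explicit, a scheduler (or an auditor) can see exactly why a patient was routed somewhere, which is the property the panel keeps returning to.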
I was going to say, from a patient satisfaction perspective, sometimes it's the person who comes first, as you're saying, the scheduler, or for me in the outpatient office, the person sitting out front who is receiving that patient. They're not necessarily clinical like we are, right? So I think it's very important for them to be appropriately trained on how to interact with patients and, to Bryn's point, to know the big picture. So I think training is a major component of having an efficient workflow or operation for your practice.

I think there's an important principle there, though, that people sometimes forget when they get excited about, you know, deep learning models, or what I think of as black-box models, which is that for most of the operations applications, you don't want a black box. You actually want to know the specific rule-based steps that it's following. In fact, as an example, I won't give the company's name, but we have adopted one, not for patient scheduling but for our own scheduling, like call schedules and things. They wanted to use one that uses sort of AI to find the best schedule, and it was an absolute nightmare. In fact, it won't let us turn off some of the functions to automate, because there are just too many rules and we don't know what it's doing. And so I think that that's actually, to me, the tension: sure, the large language model seems exciting, and it's really cool that I can type things into ChatGPT, but if I don't know how it's deciding, whether it's scheduling, whether it's decision-making, if I don't know why it's doing what it's doing, it's not really going to be helpful, especially when it doesn't work.
You know, whereas if I have a set of rules or I know, understand what it's following, then I can dissect, why is it, you know, scheduling these people wrong or why is it, you know, predicting the wrong condition for this person. I actually have an admission to make is I'm actually a bit of an AI skeptic. I think it's really, really distracting. And I was trying to see how long I could go before even having this AI. And so we're halfway through. So, you know, I made it pretty far, but I really like what Bryn said maybe 10 minutes ago, which is, you know, just trying to like systematize things, get, understand that people are all doing the exact same thing, which is, to be honest, is not really that fun of work, right. You know, trying to get everybody to agree to the same process is a lot of governance work that is not fun, but at least achieving that, then you have a framework. Then you can start to measure how long does this part of the step work, you know, what, which parts get hung up, which parts can be made asynchronous, right. Cause a big part of the work is just like somebody has to be on the phone or have to face to face conversation. How can we turn this into something that, you know, people can do hybrid or, you know, remotely or, you know, at night or whenever they're going to work. And then I think once you start to get this all, I like to, you know, as a web developer, like trying to put the process on rails, like train tracks that, you know, we know where it's going. We have time checkpoints. Once you have all of that, then you start to have a framework that says, okay, look, we're spending, you know, two days, 30 minutes, whatever at this particular point, this is where we ought to focus. And then, you know, you implement, you know, maybe some sort of AI model. 
And then you can reassess if that's making a difference. To me, that's kind of like the holy grail of this whole concept of the learning health system: you have a systemized process, and then you can start to experiment and see if it's working or not, and then move from there.

Yeah. I love the things that are being said here, because we're getting practical, and, you know, as an EP, this is going to sound strange, but I live in practicality, right? And I think it's thinking about what the big public health concerns are and how we keep that at the forefront of what we're doing here. I think we have a lot of great things happening with AI and so on and so forth. I'm a bit skeptical too, maybe not as much as you, as I brought into the conversation a little earlier. That's good. But it's like, what is the biggest threat to patient outcomes, and why are patient outcomes so poor in the U.S. despite all the innovative technologies that we have? It's access to care. So why aren't there four booths around here about, here's how you're going to drive more people to be able to access care? The people who live in rural America have it the worst. Why isn't the innovation around that? And again, the other things are important, I'm not denying that, but how do we shift this conversation to the public health priorities that we have? It's the same thing for EP. You may say, well, EP is a subspecialty, let the primary care folks focus on that. No, because how are patients going to get to us? They're not going to go from the emergency room straight to EP. That's just not going to work; the insurance companies aren't going to allow it, no one's going to allow it. So how do we begin to have the narrative around focusing on the things that we know are the public health crises in our country?
I would agree with that a hundred percent, because working in a VA system, that is probably one of our biggest problems: not being able to get our patients into the system, because the system is either too crowded, there aren't enough clinicians, there aren't enough appointment slots, people don't know how to access the system, and what have you. And we deal with patients who are also rural, and they just can't drive in, right? So again, it's that access to care. And I bet that there is actually an AI solution out there that could be implemented; of course, there are other workflow algorithms that we could access and use prior to that. But I am sure that someone could build an algorithm that could say, well, the West Los Angeles VA has X number of providers, X number of days of clinic, X number of slots, with this many patients in our catchment area: how do we fit those patients in so that they can access care, and which appointments are redundant? I deal with that a lot: this patient has an appointment this week, and somehow they have two more appointments in the next two months, and I'm like, well, why is that? And so then I'm stuck canceling them myself as a physician. I shouldn't have to do that in my, you know, 20 other clicks in my day; that takes me another 15 seconds I could use on something that is actually directly for patient care.

Skeptic or not, I think one important thing, and the reason my answer to clinical operations focuses on the who, what, where, when, how, is that a lot of clinical operations is not in the hot path of medicine, right? It's not clinical reasoning. The risk is not super high, and if there is risk, we still have time for it to be vetted. It's not going to obstruct clinical care.
So I think there is opportunity to use AI in an assistive fashion when we're talking about things like scheduling, both staff scheduling and patient scheduling, and patient messaging, things that are asynchronous. We have a little bit of time where we can experiment a little more, because we're not talking about, hey, a blade here, or this echo says this.

Yeah, there's a saying that if it can be measured, it can be managed. And I think that's actually the crux of the problem, because it's not the AI, it's your ability to capture in data what you're describing. How do I capture in numbers, digitally somehow, that that appointment was redundant and didn't need to be scheduled? This is where I spend a lot of time with our fellows, getting them to learn how to think about data in terms of everyday life. If I were to build an algorithm, how would I think about "redundant"? Was it just that the appointment was a month later? Well, sometimes those are appropriate. So what is it about this one that's not appropriate, and can I pull that out of a digital health record? As you start thinking about that, you start building these kinds of algorithms. And again, they're not complex like large language models; they're simple, logic- and rule-based: if this data point is beyond this time, and so on. Actually, I think those are where the more value is. And eventually it seems magical, because anything that's automated digitally, if you run the algorithm on your computer, it's amazing how fast it runs. It seems magical even if it's something simple. So I think it can still be impactful, even if it's not generated from a large transformer model.
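[Editor's note] The kind of simple, rule-based "redundant appointment" check described above could be sketched as follows. This is a minimal illustration, not any institution's actual logic; the field names (`patient_id`, `clinic`, `date`) and the 30-day window are hypothetical assumptions.

```python
from datetime import date, timedelta

def flag_redundant_appointments(appointments, window_days=30):
    """Flag a follow-up appointment when the same patient already has an
    earlier appointment in the same clinic within `window_days`.
    `appointments` is a list of dicts with hypothetical keys:
    'patient_id', 'clinic', 'date' (a datetime.date)."""
    # Sort so each appointment is compared against the previous one
    # for the same patient and clinic.
    ordered = sorted(
        appointments,
        key=lambda a: (a["patient_id"], a["clinic"], a["date"]),
    )
    flagged = []
    for prev, curr in zip(ordered, ordered[1:]):
        same_patient_clinic = (
            curr["patient_id"] == prev["patient_id"]
            and curr["clinic"] == prev["clinic"]
        )
        if same_patient_clinic and (curr["date"] - prev["date"]) <= timedelta(days=window_days):
            flagged.append(curr)
    return flagged
```

As the panelist notes, the hard part is not the code but deciding what "redundant" means and whether that definition can be extracted from the record.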
And I think when you start simple like that, just like you said, you don't need AI for that kind of work. But once you start simple and iterate, sometimes the applications for AI jump out at you. You're like, oh, I'm reading these three sentences over and over trying to pick out names or dates; that's a great application for an LLM or NLP, et cetera. But starting simple and building upwards, as opposed to starting with AI as the solution, is really the way it ought to be done.

In terms of clinical operations, I was actually going to bring up the concept of patient safety and quality initiatives. With patients with atrial fibrillation, we see this all the time: patients with kind of low-grade AFib in clinic, which we're monitoring on their devices. Sometimes they're anticoagulated and sometimes they're not, based on how much AF burden they're having. And we had an example of a patient the other day where our tech completely missed that they were in AFib. The patient, when I finally saw them, got appropriate care. But I think this is a role AI can play: safety mechanisms that come up through remote monitoring, now that we have data integrating into the EMR systems, to alert us that this patient's not anticoagulated. And I know Epic has the capability of doing that if you have the diagnosis in. But like all data things, it's only as good as the data you've put into it, right? So if you don't have the diagnosis in there, it may not pop up. So using AI to identify patients who are in AF and aren't anticoagulated, or have inappropriate rate control, just to provide these safety mechanisms, these backups for patients, would be an important downstream consideration. And to your point, Julie, if we had that, how can we take the care to the patient?
We're in a system where we expect the patient to come to us, which, as we've been saying, can be really challenging depending on where you live. So how can we take the first step and go to them, whether it's digital care or telehealth, all those things we can use? We have to get into communities. And health equity is an issue we need to focus on: getting the care to patients and not expecting them to come to us. Sure, they're going to have to come to the hospital at some point if they need a procedure or something along those lines, but how do we at least get the first step out there: screening, making sure they're on anticoagulation, all those types of things, and then getting them in? Hopefully we can change the paradigm and really start going to patients. More and more hospital systems are being bought up, they become these large conglomerates, and we don't really have the little community hospitals anymore; they're all kind of shutting down. How can we get the care back out into the communities to make sure patients are getting the care they need?

I was going to bring up that same point: if we're at Kevin's point about wellness and prevention, and people don't even know to come to us, how can we use AI to get out into the community? Wasn't there the famous barbershop study, where people went right into the barbershop and found that patients had hypertension and needed to be treated? So how can we do that, and is there a role for AI in helping us do it?

I think that's a good example, though, for talking about the distinction in what you're modeling, because I think there are two ways you could approach that question.
One is to take the monitor data and apply a simple rule: if the burden is above a certain amount, check the record, is this medication present? If it is, fine; if it isn't, trigger an alert. That's a simple rules-based approach. Versus the way you train an AI model: you feed in the labels you want it to predict and then ask, did it predict these, yes or no? So you could basically feed in raw data, and when people do this they often don't even know what they're feeding into the thing, then ask, was this person treated appropriately, yes or no, and alert if they're not. The former, which I think is what we're getting at, is the one you would want, but it's not the sexy one. It's not the one that's going to get into the Nature Medicine paper, because we didn't use AI to predict appropriate use of anticoagulation. And yet the simple rules-based one is the one you'd want. It's the same for ECG algorithms. When I'm reading, I can tell when the machine is going to mismeasure the QT interval because of how it does the interval measurement. So if it's flutter, it's going to misread it, because it's reading the flutter waves. If it's a black-box model, I have no idea why it predicted the QT was long in this patient versus another one. And that gets even worse when you get to things a human can't even see in an ECG. So again, to your point, sometimes it's time to pump the brakes and think about what the most useful thing is: the thing I understand the most. It's not some model that's just throwing out predictions, where sometimes it's right, or the AUC is good enough to get into a paper. It's the one where, if it doesn't work, I can go and see which rule was causing all the problems.
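[Editor's note] The rules-based alert the panelist describes, burden above a threshold, anticoagulant absent, trigger an alert, could be sketched like this. It is illustrative only, not clinical guidance; the function name, the 1% burden threshold, and the CHA2DS2-VASc cutoff of 2 are assumptions made for the example.

```python
def needs_anticoag_alert(af_burden_pct, on_anticoagulant, cha2ds2_vasc,
                         burden_threshold=1.0):
    """Rules-based safety check: return True when device-detected AF burden
    exceeds a threshold, the stroke-risk score is elevated, and no
    anticoagulant appears on the medication list.

    All thresholds are illustrative placeholders, not clinical guidance.
    """
    return (
        af_burden_pct >= burden_threshold  # meaningful AF burden detected
        and cha2ds2_vasc >= 2              # elevated stroke risk (assumed cutoff)
        and not on_anticoagulant           # no anticoagulant on the med list
    )
```

The appeal of this form is exactly what the panelist says: when it misfires, you can point to the specific rule that caused the problem, which a black-box model does not allow.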
And in the case of an event monitor, I'm assuming it maybe showed some AFib that somebody hadn't realized. I think sometimes we fall into the trap of taking high-fidelity data and letting it become lower-fidelity data. By that I mean the company that provided you with that monitor has already discretized that there's AFib, the burden, what time it happened. But here we are taking the PDF of that, or maybe some text summary of it, and just dumping it into the EHR. And all of that, the AFib, when it happened, how long, whether it's the first time, is completely lost. So none of the CDS, the clinical decision support you've mentioned, can trigger. And nothing we've talked about in the last five sentences is AI related, right? Sometimes AI can be very distracting and let us skip some logical, cheaper steps that we should be taking. But it's not sexy, it's not fun.

Not publishable. I've heard that come up a little bit.

But important for the patient.

It's the dogmatic thing that we do, right? If you're in an academic health system, for some people that's the currency by which they're operating. I'm not saying it's right; it's just what it is. It's another hurdle that we have to think about and consider. And it also shows the collaborative nature of what we're doing, in that we also have to change the mindset of the journal editors about what they think is most important. You made the comment that it's not sexy, but maybe that should be the sexy thing that gets published, gets the recognition, and shows up on X where everybody reposts and retweets it, right? I think that's in the same vein as getting the hospital administrators to listen to the people using the systems.

And I think on that note, it's perfect. We will bring this amazing session to a close.
Thank you so much for all of your comments and your insights into maybe pumping the brakes on AI in clinical operations, at least for a little bit. Thanks a lot.
Video Summary
The "Optimizing Clinical Operations with AI" roundtable session at HRX was led by Dr. Janet Han, a cardiac electrophysiologist, featuring a diverse panel of cardiac care professionals. Panelists included pediatric and general cardiologists, cardiac electrophysiologists, nurse practitioners, and health equity researchers from various renowned institutions.

The discussion commenced with introductions and an exploration of what clinical operations entail, focusing on the entire patient workflow from scheduling to post-procedure follow-ups. Emphasis was placed on the crucial role of front-line staff in implementing and assessing operational changes.

A recurring theme was the necessity of practical, user-friendly technology and decision-making that considers front-line user input. Pain points such as inefficient EKG systems, lack of integration between inpatient and outpatient tools, and scheduling difficulties highlighted the need for standardized, streamlined processes. The panelists agreed that hospital leadership often lacks direct clinical insight, contributing to operational inefficiencies.

Skepticism about AI's current application in clinical operations was voiced, with a consensus that simple, rule-based solutions might be more immediately beneficial than complex AI models. The panel stressed the importance of starting with straightforward process improvements, measuring their impact, and then exploring AI applications. The session concluded with a call for a paradigm shift toward patient-centered, easily accessible healthcare solutions.
Keywords
Clinical Operations
AI
Cardiac Care
Patient Workflow
Operational Efficiency
Healthcare Technology
Patient-Centered Care
HRX is a Heart Rhythm Society (HRS) experience. Registered 501(c)(3). EIN: 04-2694458.
Vision:
To end death and suffering due to heart rhythm disorders.
Mission:
To improve the care of patients by promoting research, education, and optimal health care policies and standards.
© Heart Rhythm Society
1325 G Street NW, Suite 500
Washington, DC 20005