HRX/SCMR Joint Session: How can Cardiac MRI & AI I ...
Video Transcription
Good morning, everybody. Can you hear me on your phones? Excellent. Good morning, everybody, and welcome to the joint SCMR HRS session on how you can use cardiac MRI and what the value of cardiac MRI is for patients with heart failure and heart rhythm disorders. This is, as you can see, a joint session by HRS and SCMR. SCMR is the Society for Cardiovascular Magnetic Resonance, and we particularly partnered because we feel that in the field of cardiac imaging with MRI, there are many recent innovations, and it's a great opportunity for our two societies to partner and bring the imaging opportunities that we have and that we're all working on every day to this audience, and also hear your feedback and understand really how we can form a better partnership to utilize this great technology more efficiently and appropriately, and to make sure that there's access. My name is Michael Markle. I'm a scientist by training. I've been working on the development of cardiovascular MRI techniques for the last 25 years at Northwestern University in Chicago, and I'm also the current president of the SCMR. Before we get started, I'd like to give our three excellent panel members an opportunity to introduce themselves. So, Kate, why don't you get started? Hi, everyone. My name is Kate Hanneman. I'm a cardiac radiologist at the University of Toronto. I've known Michael for a number of years, mostly through SCMR, and have worked mostly in clinical outcomes research with a recent interest in the link between imaging and sustainability. Thank you. Ken, do you want to go next? Sure. Ken Vilcek. I work at the University of Virginia, where my clinical practice is electrophysiology. I'm involved in committee work for both SCMR and for HRS, and I've integrated CMR into my research program related to heart failure, arrhythmias, and cardiac implantable devices. Thank you very much, Ken. And then, finally, last but not least, Luke. Hi. Good morning, everybody. Luke Ralston. I work for the Food and Drug Administration. I've been there for about 19 years now. My background's a little bit different. I came in with a focus on defibrillation, but also software, and over about the last 10 years have scaled up and shifted over to the bulk of my work being around artificial intelligence and machine learning, both the development of models, fine-tuning the models, and rolling them out. Again, my background is a little bit more in the electrophysiology space, and so I've been picking Kate and Mike and Ken's brains for the last 36 hours to just make sure that I can speak intelligibly about the imaging space. But one of the things that I think is encouraging and exciting about this is that there is so much crossover between imaging and, you know, the type of time series data that I look at. So I'm happy to be part of the panel. Thanks. Thanks, Luke. So, as you can see, we have a very diverse panel here: a scientist, a radiologist, an EP, and somebody with a lot of FDA experience from the regulatory perspective. So please also use this opportunity to post questions in the chat to really ask the panel any questions you may have about imaging. But before we get started and really talk about the recent innovations in cardiac MRI and why this might be so interesting for this field represented here at this conference, just let me provide a little bit of a background or kind of a baseline for us to start the discussion. So we've put up a slide here in the back that you can see.
And this is just to show you a little bit of what cardiac MRI or CMR can actually do. And let me just walk you through this a little bit, and we'll come back to some of these techniques and where the innovations and the improvements lie throughout this discussion. So on the top left, you see kind of the bread and butter of cardiac MRI imaging, so-called cine imaging, that's used to assess global cardiac function and volumes. But more recently, there have been a lot of interesting techniques developed that also allow you to get information on cardiac and left atrial strain, with cine-derived strain as an example here. And this is similar to speckle tracking, where you kind of extract some of the information on the motion of the heart and derive strain from it. The other thing you may be very familiar with is MR angiography for pulmonary vein mapping, which has been around for a long time and is heavily used in this field. And then there are other techniques that really allow you to look in much more detail at tissue abnormalities of the heart and left atrium. And the most prominent and well-known technique is probably here on the lower left. That's LGE, or late gadolinium enhancement, that can be used to detect fibrosis and scar. But more recently, there's really a whole array of parametric mapping techniques that have become available, and they're shown here as color-coded T2 mapping, T1 mapping, and ECV mapping. And just to give you one example, ECV stands for extracellular volume fraction. And it's a known metric that allows you to assess fibrosis of the heart. T2 is a method to look at edema and inflammation. What's really important about these new mapping techniques is that they are purely quantitative. So you're moving away from a visual impression of scar, which you see on the lower left, to a truly quantitative technique that allows you really to follow a patient over time and really set a good cutoff value. And then finally, last but not least, on the top right, you see one of the newer techniques, 4D flow MRI, that allows you to measure, visualize, and quantify 3D blood flow in the entire heart and great vessels. So starting here as a baseline, what I want to get started with before we again look at the innovations is, where are we here and now in clinical practice? How would you use this technique, particularly with a focus on heart failure, heart rhythm disorders, and atrial fibrillation? And I'll start with Ken as the kind of EP specialist to give us his perspective. Yep. I think in general, there are a lot of applications for CMR in EP. It's been a research tool for a lot of us for a number of years. And I think the challenge that we have is now to incorporate this into clinical practice. The use of CMR in EP, I think, could fall into two general categories. One would be risk stratification or patient selection for a particular therapy. And one that comes to mind is implantable cardioverter defibrillators, right? How can we use different scar patterns or other findings to identify the best patients for ICDs for primary prevention, given that a lot of patients that are selected for primary prevention ICDs never receive any therapies from the device. Another general category of application is the use during EP procedures. And one application that comes to mind there is the use of MRI scar from late gadolinium enhancement for ablation of either ventricular tachycardia or atrial fibrillation.
There are ways to integrate a three-dimensional MRI scar map into an electroanatomic mapping system for ablations of either of those arrhythmias. Of course, it's more challenging, as we'll talk about as we go on, in the left atrium than it is in the left ventricle, for example, but it still can be done. Thank you, Ken. Maybe Kate, next, from a radiologist's perspective. Yeah, I agree with everything Ken said. We work very closely with our heart failure and EP colleagues on the imaging side and, of course, in imaging in general. And I would say one of the other areas I was thinking about after our conversation yesterday, one of the things we get asked a lot about now is, you know, given the increased availability and frequency of genetic screening, so our EP colleagues now have identified a lot of patients through family cascade screening or otherwise who, you know, have a variant that's either pathogenic or VUS, and we are often doing cardiac MR, another key indication for those referrals, looking for evidence of ACM, particularly, you know, left-dominant phenotypes. And so I think that the diagnostic piece is also really important, especially in those patients who may not have findings that might be picked up on echo, for example, and so we also see that as well. And I think that that cohort of patients also straddles, you know, both diagnosis in terms of identifying, you know, higher-risk patients, but also that leads, of course, into prognosis and understanding how frequently they need to be followed and so forth. But I agree with Ken, you know, I think a lot of the work we do and we work closely with them is in that prognostic category, really identifying high-risk patients and identifying where a scar is and perhaps using that to guide management. Thank you, Kate and Ken. So it sounds like, or from what you're hearing here and what we know is that this is certainly a very useful test that could certainly enhance care for patients that are relevant also to this audience. Where do you see, and I can talk to this as well, but I want to start with you, where do you see the barriers, why this technique might not be more widely adopted in this field of, you know, heart rhythm disorders, heart failure, and so on? Maybe again, start with Ken. Yeah, I think the time it takes to do the CMR scan in a lot of centers is an issue, and this is something that as a panel we can talk about more. There are now AI algorithms to guide the CMR scan and allow technologists who may not have the expertise to do everything manually to do the scan with the help of the software, and this can have a huge impact on the availability of CMR for our patients, given that at many places there may be four to six weeks, for example, to wait for the CMR. So if you have, say, your experienced CMR techs who are there only during the day, if you had AI applications for CMR where the AI could guide the tech in performing the exam, then scans, CMR, could be done into the evenings for inpatients and others. And then we're also, as a CMR society is really interested in efficient CMR, so maybe not all patients need all the elements that you could possibly include in a CMR scan, like parametric mapping and other things, and maybe for certain patients a function and late gadolinium enhancement is what's needed, and so a focus scan could be done in a short period of time and increase access. 
So I think access is a big thing, and then the other component would be, for EP anyway, would be integration of the findings from CMR into either algorithms identified by humans, cutoffs for LGE or things along those lines, or AI-based algorithms, and what we'll bring Luke in, I think, as a conversation goes on to discuss the regulatory aspects of that. Yeah, I'll just add, I agree completely. I kind of think about the barriers in terms of on service delivery side, so on the us side, and on the patient ability to tolerate a long scan. And so on the service delivery side, I'd love a four to six week wait time. I work in a public payer system. We have a much longer wait time than that. And so that's absolutely a challenge, particularly in patients where you need to move quickly or you think the patient's at a high risk. And so I think one of the things that Ken touched on is efficient cardiac MR. So if we can move from this very, very long, over an hour exam, that addresses both barrier points. If we can shorten our exams, we can change our booking slots, we can get more patients in, ideally shortening our wait times. But that's also beneficial on the patient side. So patients, particularly those who are in heart failure, have traditionally not tolerated an hour long exam where they not only are just laying there for an hour, which is very difficult for them, but we're asked them to do repeated breath holds. It's very challenging. And some of our key sequences tend to be at the end of the protocol, after we've given contrast. And so those can have more artifact and so forth. So by shortening the protocols, we benefit, we can actually address both issues. And I absolutely think there's a role for AI to accelerate our images, for us to think about simple things like only tailoring the exam, as Ken said, not acquiring a one-stop shop, thinking, well, let's do everything just because the patient's here. We really want to focus those protocols. And I think we're seeing great work in that. Michael's led a lot of the efficient CMR work from the society's perspective. Yeah. Thank you both. Luke, do you want to comment? Yeah. Yeah. Those are some wonderful points. And I think we can never lose sight of the clinical perspective that people like Ken and Kate bring to this conversation. And as an engineer, though, if I step back and I think about the hurdles to more of these types of systems and models getting out there, I kind of break it down into three categories when we're talking about the development of a particular AI model. You really need data sets that you can use to train and test them. And the three hallmarks that I steal from people smarter than myself, but you can think of it in three primary needs. You need those data sets to be big, you need them to be clean, and you need them to be representative. And so one of the things that we see at the FDA and kind of across the medical industry right now is maybe you have access to big systems. I think we were talking about this a little bit yesterday because I'm not as familiar with cardiac MRI, but something like chest films are probably a dime a dozen. I'm not sure how available cardiac MRIs are. But I think imaging does kind of lead the way when we're talking about databases that are available that you can have access to that are of pretty high quality compared to some other databases of just clinical management of other types of conditions. I think where it gets tricky is clean. Clean to us means adjudicated. 
And I guess the question I would pose either to the panel or to developers in the audience is of those data sets you have of cardiac MRIs, would you as a clinician, do you think you'd be comfortable going in and picking out any random image, and do you think that you would agree with whatever the decision was that that physician made on that particular patient? And then I think also to consider is maybe if you agreed with it, if you showed it to two of your colleagues, would they agree with it? And what I'm getting to here is ground truth. How do we establish ground truth, right? And that's very use-specific. That's very indication-specific. And that is a hurdle that we see in other areas, and I could imagine probably being something to pay attention to in this particular area, depending on the type of model you're trying to build. I think especially, you know, Michael, you and I were talking about the risk stratifier models yesterday, some of the unique challenges there. And then finally, I think about it in terms of representativeness, and, you know, just because data's available doesn't mean that it's representative of a general patient population that you're going to see for a particular condition or a particular disease. I could see, you know, the patient population that you would consider representative for something like guiding ablation procedures would be very different than patient population data that you might have for determining the size of an infarct or other uses like that. So, you know, just, again, some more high-level considerations, not just from regulatory, but, you know, engineering and design and development for those developers out there that are looking into this space. Yeah, thanks, Luke. I think you're moving us already in the right direction, talking about this new innovation and what's needed to kind of get these kind of in the clinical space. But I want to take one step back quickly and go back to what Ken and Kate just said. So, if we think again about the barriers of, you know, doing cardiac MRI in a busy clinical environment, particularly an EP environment, there's probably two things that we need to talk about there. And they were mentioned. One is kind of efficient CMR, and one is, and we haven't talked about this too much, is more, I would say, accessible and efficient CMR, easy to perform, not like hour-long scans. Because it was mentioned, you need specialized technologists to know who to do the scan. Even if you have a CMR scanner, you may only be able to run it from eight to five because you only have one or two technologists who can actually do it. So there need to be technologies developed that make it easier to use, scalable, and also, you know, at the same time, that would improve access. And in parallel to the protocols, even if you make them easier, they need to be shorter and kind of more streamlined. So that's one. I would call this kind of really the data acquisition or image acquisition side of things. But then another kind of area is, if you look back at, behind me again, if you look at that slide, there's a lot of information. So obviously, also, you need a lot of expertise to interpret that information, kind of develop it into a risk profile for the individual patient. So that's also where AI can help. So that's really like taking all the information and combining it even maybe with other modalities and come up with a kind of more automated AI-driven risk stratification strategy. 
So I want to kind of talk about these two things separately because I think they are somewhat separate. And maybe we can start with the data acquisition part or image acquisition part. There's actually a lot of developments going on in the field right now. One key word that you may hear if you're kind of interested in the area is one-click CMR. And the idea behind that is you don't necessarily reinvent the CMR techniques, but the interaction at the scanner is fully automated. So really, the idea is you go to the scanner, put the patient in, put the EKG on, and then click. And then the AI does all the slice positioning, kind of finding out where everything needs to go, and so on. And then the technologist or somebody who is a kind of a super tech really just does QC or quality control and intervenes in the scans that's already necessary. And if you do that right, you could imagine that you could do this even with remote monitoring of a data acquisition where one technologist monitors four or five scanners with a kind of a technologist and a nurse still being there, but they don't have to have the dedicated expertise to really run and execute the cardiac MRI scan. So that's kind of one of the goals out there. And that combined with the efficient protocols is something that would really move the field forward, make it more accessible. And that would also mean you could then do cardiac MRI not only at expert centers and at academic centers, but you could really do it out anywhere in the periphery where this technique is available. But I want to maybe now at this point, again, bring it back to Luke. If we talk about, you know, if we have these technologies available and if you want to kind of bring them in the market, if you think about not so much about the risk score, but really about the automation, what would the FDA want to see to say, yes, this works, yes, we can clear this for, let's say, cardiac MRI scan at a community hospital somewhere out in the countryside, that they can actually do this. Maybe with some remote supervision, there could be a human-in-the-loop component to it, but again, just to get an understanding what the community would need to show to the FDA that this is actually a valuable, viable approach to this test. Yeah, great question. Almost like we coordinated this beforehand, you know. But I think one of the best examples of that was a product that was granted in 2019 and it's called Caption Health. And it's pretty easy to find if you have your, you can just Google FDA de novo Caption Health and it's from 2019. And we created an entire category for this type of assistive technology in acquiring imaging. This one is focused on acquiring echocardiographs, but I think it was written broadly enough that it could include this if it's just for the purposes of acquiring an image. So it is an issue that we've seen and we've dealt with. And the nice thing about the de novo pathway that this went through is it requires FDA to go through and identify what are the risks to health that we see with this? And then what are the special controls that we need? Special controls is just a fancy term for the mitigations that we think it needs. And then we publish a detailed list based on the risks to health of these are the special controls that you must implement in order to demonstrate that your device is safe and effective for this intended use. 
And so I think, you know, instead of me getting up here and giving you a 30-minute lecture through that document, anybody who's interested can go look at that. And it really is a good template that we have. I think the only caveat I would add to that is it is very specific to image acquisition. And it does assume that there is some type of a technician there to take it. And I don't believe that it makes any claims about the quality of the image. So it guides the person who maybe is not a trained echocardiography technician, but perhaps a nurse or other allied professional, to take a readable scan and then send it to somebody who can overread it. But I would think that there's a lot of room to innovate in this area and get to what you refer to as one-click CMR, because the pathway already exists, I think. Yeah, thanks, Luke. Maybe going back to Kate now, you know, if you think about the Canadian healthcare system, which is organized a bit differently than the US, if you think about a technology like that, how implementable would that be? Or what would be the approach there? And then maybe Ken next, if you think about an EP practice, you know, if they had that technique available, how would that change your practice? It's a great question. So we tend to be very centralized for this type of subspecialty work. And so, you know, there's a benefit to working at one of these quaternary care centers where we see a lot of pathology. But of course, for the patients, this is not a benefit. So some of them travel, you know, thousands of kilometers, and even miles, I can't even think in miles, to access the services. And so I absolutely agree. I think access is critical. And I can totally see how this ties into efficiency as well. And so I think this touches on what Luke said, and Ken earlier, which is that not only will this allow us to offer cardiac MRI somewhere where, you know, it has not been available, so that kind of concrete access issue, but maybe this will standardize our image acquisition as well. So we do a lot of consults on outside imaging, you know, even if a center has cardiac MRI, but it's a low-volume center, you know, often we'll see lots of repeats. So patients have been in the scanner longer than they may have otherwise needed to be, because the technologist maybe doesn't do as many cardiac MRIs. They're very specialized. And again, so that can prolong the study, but also there is a challenge in terms of the planning. And so again, potentially with automating this, we will have more reproducibility and standardization. So I think all of those issues, while they're distinct, I think it's helpful to think about them, you know, tying together and ultimately, you know, to benefit the patient and improve access. So thanks, Kate. I think the standardization aspect is a really important one. That's something that we've certainly struggled with in the past because there's a lot of kind of isolated solutions at different centers. So anything that helps there is really important. Thanks for bringing that up. But Ken, I mean, maybe your comments on, you know, how this will work in your world. Yeah.
I'm thinking a little bit about the comparison between echocardiography and CMR because oftentimes for EP applications, say you want to assess left ventricular function before you determine if someone meets criteria for an ICD, you want an accurate determination of the EF and then you have to repeat it three months later. And so we think about often as EPs, should I order an MRI, should I order a CMR or should I order an echocardiogram? And I think it used to be the case that you could get an echocardiogram faster than you could a CMR, but that's not always the case these days, depending on where you are, because we don't have enough sonographers either to do all the echocardiograms. And I think what, in fact, what's been seen at one particular institution where this AI-based protocol was instituted is that it sort of shifted the balance of work between the echocardiography lab and the CMR and MRI suites. That's one thing to think about. One particular example that comes to mind for an EP application and how efficiency might have an impact is a clinical trial going on called the left versus left clinical trial where patients are being randomized to conduction system pacing, typically with left bundle branch area pacing versus traditional CRT. And it's a PCORI-funded study. The MRI is encouraged before device implantation. An MRI is simpler to do before the device is implanted, although it certainly can be done afterwards. And you can get a very accurate assessment of cardiac function. And so you have to ask, well, if I can get all this additional information with CMR and it's not going to take me that long, then maybe CMR, maybe I should order a CMR instead of an echocardiogram. But again, that's going to, I think, depend on the current wait times and staffing for MRI technologists relative to sonographers. So just, I think that's a relevant consideration. Yeah, thanks. So it certainly is a complex problem, even if you have kind of the technique available to implement. It probably also depends on the setting where you implement it. What I want to talk about next is really, let's assume that we implement this and we kind of double the use of CMR. Somebody still has to read the images. So that's a challenge. And again, if you look at the slide behind us, it's a lot of information, it's a lot of expertise, and there's a lot of training that you have to be specialized as somebody who can efficiently read cardiac MRI. So if the goal is really to increase access, and I think we agree on that, then in parallel, we need to think about how we deal with all the data and how we kind of integrate AI as a new kind of tool helping us to analyze the data and make decisions. And if you take a step back and look at the practice today, AI already plays quite a substantial role. And I give you one example. The images you see here on the top left, these Cine images, they are really acquired routinely in every single patient who gets a cardiac MRI. I mean, this is really standard of care for everything. And what typically is done is then you delineate the contours of the left and right ventricle, and then you calculate things like stroke volume and diastolic volumes, ejection fraction for both the LV and RV. You could also do this for the left and right atrium. And maybe about five, six, seven years ago, this was pretty much done manually by the attending radiologist or cardiologist. And that took about maybe 15 to 20 minutes per case or something like that to analyze that data. 
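To make the volume and ejection fraction arithmetic concrete, here is a minimal, illustrative sketch of contour-based volumetry by summation of discs, assuming binary left ventricular blood pool masks from a segmented short-axis cine stack. The pixel spacing, slice thickness, and synthetic masks are assumptions for the example, not any vendor's implementation.

# Minimal sketch (not a clinical tool): LV volumes and EF from segmented
# short-axis cine masks via summation of discs. Assumes binary masks of the
# LV blood pool at end-diastole and end-systole, one 2D mask per slice.
# Ignores slice gaps and partial basal/apical slices for simplicity.
import numpy as np

def lv_volume_ml(masks, pixel_spacing_mm, slice_thickness_mm):
    """Sum of (slice area x slice thickness) over the short-axis stack."""
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    slice_volumes_mm3 = [mask.sum() * pixel_area_mm2 * slice_thickness_mm
                         for mask in masks]
    return sum(slice_volumes_mm3) / 1000.0  # mm^3 -> mL

def lv_function(ed_masks, es_masks, pixel_spacing_mm=(1.8, 1.8),
                slice_thickness_mm=8.0):
    edv = lv_volume_ml(ed_masks, pixel_spacing_mm, slice_thickness_mm)
    esv = lv_volume_ml(es_masks, pixel_spacing_mm, slice_thickness_mm)
    sv = edv - esv
    ef = 100.0 * sv / edv
    return {"EDV_mL": edv, "ESV_mL": esv, "SV_mL": sv, "EF_percent": ef}

# Example with synthetic masks: 10 slices, 256x256, circular blood pool.
if __name__ == "__main__":
    yy, xx = np.mgrid[:256, :256]
    disc = lambda r: ((yy - 128) ** 2 + (xx - 128) ** 2) < r ** 2
    ed = [disc(25) for _ in range(10)]   # larger cavity at end-diastole
    es = [disc(18) for _ in range(10)]   # smaller cavity at end-systole
    print(lv_function(ed, es))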
That has almost been completely replaced by AI today. So this is a massive efficiency gain. If you think about that, you need to have a trained expert radiologist or cardiologist drawing circles, which is now replaced by AI. It's been striking to me that we couldn't come up with algorithms before AI that would draw circles, but apparently we couldn't. So that just tells you a little bit about the power of AI to deal with heterogeneous data sets, large data sets. But that's really been a success story. So this is really bread and butter analysis, and it has really reduced the time. I mean, there's still some quality control. The radiologist or cardiologist looks at the contours and corrects them, but you're going from 15 to 20 minutes to two minutes. So that has already happened. The same is true for these mapping kinds of techniques you see. They also need a delineation of the ventricles and quantification of numbers, and that can also be done automatically. So we already have tools that are FDA cleared, that are commercially available, and that have found their way into a clinical workflow, at least at Northwestern where I am, and Kate and Ken can comment a little bit about that as well at their institutions. But the next step then is, even if you bring that down to two minutes, you're still looking at a lot of information, and you need to put it all together and come up with a diagnosis and a report. And I think AI can help speed that up as well. But that's really the next step in development. We're not there yet. So what I would like to maybe again first hear from Luke: when we talk about this, now we're talking not only about drawing contours and making sure they're accurate. Now we're talking about things like risk scores, diagnoses, or AI-aided diagnosis, things like that. So Luke, again, from a regulatory perspective, you talked a little bit about this earlier, about the type of data we need, but if you could kind of bring us back to that conversation, that'd be great. Yeah, so I think I'll go in reverse order. So we do tend to see more or less two buckets of devices and models. One is the more straightforward, I think, binary decision, where you're deciding or you're recommending to the physician that they either have disease X or they don't have disease X. My personal impression of that, you know, we don't live inside the companies, they know their products better than we do. But my personal impression is that that might be largely driven by the amount of data that they have available. It tends to be, just generally speaking, a little bit easier to train a binary decision classifier than perhaps even a multi-class classifier or a continuous variable output model. The challenge that you see with the continuous output variable models is that, in addition to the normal testing, you need to ensure that the model is calibrated, which means that, you know, the risk that you're outputting at 2% is actually 2%, and that at 50% is 50%, and at 98% is 98%. Because if you have a miscalibrated model, it can be really detrimental and actually, you know, take extra time in the clinic to sort out a bad decision that's made upstream. I think then, kind of working backwards from that, the other challenge that we see across the board in medical AI development is, you know, of that triad I spoke about initially, being large, being clean, and being representative, it's that representative piece. I think that that's been kind of a buzzword. Some people might confuse it with diversity.
I do think that we need to think about expanding our idea of what we mean by representative. If you're doing a study on CMR for measuring the size of an infarct in patients, is it necessary that you collect all comers between the ages of 18 and 85? Probably not, right? Because how many patients 18 to 35 have a significant MI that you're going to need to do that scanning for? So thinking about that beforehand and saying, you know, who is our target population? Who are we designing this device for? Right? Who's the patient population? What's the intended use? And then, you know, taking some time to sit back and think what type of a population would be representative of this? And obviously, we consider the normal things that I think come to mind, demographics, right? You want to make sure that you have an appropriate demographic mix here in the U.S. And I think, you know, even in Canada, we're not as homogeneous as, you know, I was reading a paper a few weeks ago where they had two holdout test sets. One was from Sweden and one was from Japan. And that probably just is not going to apply to the U.S. And so you put this, you have to kind of think ahead of time and plan about how you're going to enroll these patients. I think the other thing to think about when we're talking about representativeness is, you know, I think exactly what's represented on these slides here. You guys have a lot of imaging, modality is the right word, for how you can acquire an image, whether it's the CINAH or T1, T2 mapping. And most companies probably would be a little bit reluctant to say we can design one model that can cover all of those inputs, right? And it would be easier to start small, focus on maybe what's the most common type of input, what's the most common type of patient population, get that on the market and expand out from there, you know, given the data that they have available. So again, I think those are probably reflected in what you see currently on the market, and I think ties into some of the hurdles that you may see companies right now who are trying to really expand and explore and innovate some of the hurdles that they have to that innovation and that expansion. Did I hit everything you asked? Yeah, look, this is great. I mean, very, very helpful. Thanks for kind of providing that overview. And I think it's important, I think, to really think about which is that, is it a yes-no decision or kind of a more continuous risk score? I can see how that's really two different kind of beasts that you have to deal with if you think about kind of AI in the clinical field. Yeah, one more thing that I forgot to bring up too is that another classification that we already have available to us in the cardiovascular space is one for adjunctive information that's being given to the clinician. And that's a space that's expanding a little bit more rapidly because I think if you're using it in an adjunctive way where it's understood from the start that it's going to be overread by a clinician, that does mitigate some of the risk of false positives and false negatives. How we develop from there to a system where you can truly just, you know, click it and forget it and then assume that you're going to have some very high level of confidence in the system, you know, there's definitely work being done in that area, but that's a bit of a chasm right now that we're trying to get across. And, you know, it remains to be seen whether that is six months down the road or five years down the road. Thanks Luke. 
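To illustrate the calibration point Luke makes above, that a predicted 2% risk should correspond to roughly a 2% observed event rate, here is a small sketch of a reliability table and Brier score. The data and the deliberately miscalibrated model are synthetic assumptions, not results from any real device.

# Minimal sketch of a reliability (calibration) check for a continuous risk
# model: bin the predicted risks and compare the mean predicted risk in each
# bin with the observed event rate. Data here are synthetic placeholders.
import numpy as np

def reliability_table(y_true, y_prob, n_bins=10):
    """Return (bin_lo, bin_hi, mean_predicted, observed_rate, n) per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (y_prob >= lo) & (y_prob < hi)
        if in_bin.any():
            rows.append((lo, hi, y_prob[in_bin].mean(),
                         y_true[in_bin].mean(), int(in_bin.sum())))
    return rows

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted risk and observed outcome."""
    return float(np.mean((y_prob - y_true) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_risk = rng.beta(2, 8, size=5000)        # synthetic "true" patient risks
    events = rng.binomial(1, true_risk)          # observed outcomes
    predicted = np.clip(true_risk * 1.4, 0, 1)   # deliberately miscalibrated model
    for lo, hi, p_mean, obs, n in reliability_table(events, predicted):
        print(f"[{lo:.1f},{hi:.1f})  predicted={p_mean:.2f}  observed={obs:.2f}  n={n}")
    print("Brier score:", round(brier_score(events, predicted), 4))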
I think I want to bring it back really to the clinic and to the, you know, people who are in the trenches every day. And maybe we start with Ken again. If you could talk a little bit about your practice, have you already kind of implemented AI? But if you could then also maybe bring it really to ventricular arrhythmia risk assessment, what's really critical for you? What is the role there and how do you use it? And what's your, and also for Kate, what's your wishlist? What would you really like to see in terms of technology that would be available to you? Yeah, this is something I'm certainly interested in and the field is interested in. So along lines of what Luke was talking about, I think there are a couple of ways you could design a decision, make clinical decision making around CMR results. So one is you could come up with a cutoff say for late gadolinia enhancement. So if the burden, say you're interested in non ischemic cardiomyopathy, maybe you say a burden greater than some percentage is going to be predictive of ventricular arrhythmia. And then maybe there are some other things you want to incorporate. There's this thing that Luke mentioned called parametric mapping or T1 mapping and T1 mapping indicates, it's an indicator of the health of the tissue, the degree of interstitial fibrosis, and that could be useful as well. But when it gets complicated, maybe you want to take into account a pixel-wise analysis of the MRI image. And when you say pixel-wise analysis of the image, then this brings in, this evokes this idea of machine learning, AI, and neural networks. And so there are a few different interesting ways you could approach this with AI. There's something called a convolutional neural network which manipulates the image and tries to classify whether they have a risk of ventricular arrhythmia. There's something in our practice, in fact, we have an MRI technique called dense displacement encoding with stimulated echoes which we've used for regional strain assessment and we've shown that it's better than commercial feature tracking. But one issue is that, as often happens in the MRI world, CMR world, that you develop a sequence but not everybody has it. And what everybody does have is the Cine imaging up there. And so we, in Fred Epstein's lab at the University of Virginia, we developed something called StrainNet where it's a neural network that's trained where dense and Cine imaging are trained together. And then eventually you can look at Cine imaging and generate what dense, what you think dense would predict for the strain. One other example I saw recently, there's a method called a generative adversarial network or GAN which is, you have, it has two components. One is a generator where from random noise it generates an image and then a discriminator where it tries to distinguish the truth from what you generate. And I saw a recent paper in Circulation where the investigators took native T1 maps and they had LGE at the same time and they used this approach, this AI approach, to generate what the AI network would predict for what the LGE would look like based on what the T1 maps would look like, the native T1 maps, which wouldn't require contrast. So those are some of the general approaches, I think. 
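For readers less familiar with the pixel-wise, image-based classification Ken describes, here is a minimal, untrained convolutional network sketch in PyTorch that maps a single-channel CMR image to a binary arrhythmia-risk output. It is not StrainNet or any published model; the input size, layers, and class count are illustrative assumptions.

# Minimal sketch (untrained, illustrative only): a small convolutional network
# that takes a single-channel CMR image and produces two class logits, e.g.
# "higher risk" vs "lower risk" of ventricular arrhythmia.
import torch
import torch.nn as nn

class TinyCMRClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16),
            nn.ReLU(), nn.MaxPool2d(2),                  # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(), nn.MaxPool2d(2),                  # 112 -> 56
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(), nn.AdaptiveAvgPool2d(1),          # global average pool
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    model = TinyCMRClassifier()
    dummy_lge = torch.randn(4, 1, 224, 224)   # batch of 4 single-channel images
    logits = model(dummy_lge)
    probs = torch.softmax(logits, dim=1)      # per-class "risk" probabilities
    print(probs.shape)                        # torch.Size([4, 2])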
I think I want to make the contrast between just human-derived indices, where, say, with commercial software, we measure the LGE burden, and then we do a study, we do some kind of regression analysis, and then we come up with a cutoff where, you know, the parameter higher than a certain number predicts an outcome, versus an AI-based sort of pixel-wise analysis of the image. And that has implications for how the FDA is going to, I think, look at that and determine general applicability. So I think in clinical practice, we do follow the guidelines for, say, ICD prevention, but I think we do consider, if we have an MRI, we do consider the scar characteristics of the patient. If there's any LGE, that's, in general, known to be a predictor of poor outcomes in almost any cardiac condition. It's like you generate Kaplan-Meier curves for all these conditions, right, Kate and Michael? And, you know, no matter what it is, it seems like, you know, the worst curve is the one that has LGE, and the better one is the one that doesn't. So I think these are things that we consider now sort of qualitatively, and if someone's perhaps on the borderline with an EF or, you know, an ICD, you know, we can look at those additional things. But certainly, I think we want to move toward more systematic approaches like what I mentioned. Thanks, Ken. Kate? I agree with everything that's been said, and there are two kind of key things I wanted to pick up on. And one is, I think we've touched on this, is the variability and the richness of the data from cardiac MR. And while that's obviously a benefit, we have so much information available to us, not just the tissue characterization, which is obviously a focus for the EP patients, but flow data and, you know, data on function. And as cardiac imagers, you know, part of our role is not just to, you know, quantify the left ventricular ejection fraction or just quantify the LGE, but to integrate all of those data points. And so, you know, my wish list, to get back to what Michael said, is, you know, really to have an AI tool that can help us integrate those in some sort of seamless way. And so, you know, I absolutely agree with Michael, the ability for us to essentially have the volumes and function at our fingertips without having to draw circles has completely changed the way our workflows work. And in many ways, I think it has made us more efficient. So our volumes have gone up, and this is fantastic. That's one piece and one potential bottleneck that would, you know, perhaps limit our ability to rapidly scale up our delivery of cardiac MR services. But that's one piece, and, you know, it's a relatively narrow patient population where the ejection fraction, per se, would be, like, the only relevant data point, maybe in a cardio-onc population. But as Ken said, you know, often we're looking at all of these data points and we're trying to either determine what the underlying diagnosis is, or in this patient population, more commonly, risk-stratify the patient. And that requires more data than just quantifying the amount of scar, for example. And so I'd love to see an AI model that really integrates all of that and perhaps gives us, you know, a report, not just with, you know, the volumes and function, but, you know, with pre-populated comments on the amount of scar and so forth.
And then perhaps a rank-ordered list of most common diagnosis, if that's what the question is for the cardiac MRI, or some sort of risk score, depending on what the underlying diagnosis is. And so, you know, to get back to what Luke said, I think those are obviously very challenging models where we're not just asking a binary question. And I think there's been some, you know, hesitancy to tackle that. It's just an order of magnitude more complicated than trying to train a model in a chest x-ray, which generally looks about the same depending on the vendor and then the patient. So that's kind of my wish list. But I think the one other thing that I just wanted to comment on, Michael, and I hope you don't mind, is like a more zoomed out view is where I think AI can help us in cardiac MRI in terms of my wish list is some of those other routine tasks that take away from our ability to spend time on like those high level, you know, thinking about what is the best for the patient. So like protocoling, I still spend a lot of time protocoling with my fellows. And so AI could also, you know, do that, say, you know, the clinicians entered an order for this patient. Maybe we can integrate even various data points like the troponin levels for this patient or their genes and say, okay, this is the best protocol for this patient. And maybe we, you know, maybe we sign it off, but we're not kind of doing that. And that could, you know, feed directly into our booking stream. So that's definitely a wish list for me is kind of, again, removing more of those low level kind of tasks or repetitive tasks that we spend time on that would allow us to think about more about what, you know, how to benefit the patient. Yeah, thanks, Kate. I think the last point was a really excellent one. I mean, really thinking about taking a step back and thinking about patient flows and operational efficiencies in a more general sense, how CMR fits in and how you can optimize the whole process. I mean, that's a really important piece because I think that you can probably eliminate a lot of the pain points by doing relatively simple things, like you said, protocoling and things like that, or at least set it up in a way that it's easier for you to do or for the fellows and residents to do in particular. I know we have about 15 minutes left. So I want to kind of switch gears a little bit and talk about imaging in patients with devices. I think that's probably very relevant for this audience. And I think there's two points I'd like to touch on. Before the patient gets the device, what could be the role of MRI for decision making? And then after the patient has the device, there's maybe a lot of thinking still in the community, or you can't do an MRI in patients with devices. That's actually no longer true. There have been many developments in the last years, both in the imaging technique size, safety of devices, and things like that. And you can now probably image 90 to 95% of patients who have devices with MRI. And we can talk a little bit about what that means for the image quality and things like that. But I want to start with, if you are in a situation where you need to make a decision, does this patient need a device? Has CMR a role there as well? And maybe start with Ken. Thanks. Yes, there is, both in terms of the risk of arrhythmia, and then I think also heart failure pacing devices are a type of those devices. 
For those devices, you can get a lot of information from the CMR, and we've demonstrated the utility of CMR for differences in regional strain, and also the combination of CMR with risk models like the Seattle Heart Failure Model, and how, to Kate's point, one paper we had in JACC Imaging demonstrated how the Seattle Heart Failure Model could be used with CMR for predicting outcomes in CRT. Now, it wasn't AI, right? It was based on regression analyses, and so maybe we could be moving toward AI. So I think before the device, those are some things you might be interested in: whether their LV function might improve with CRT, or whether their arrhythmia risk justifies an ICD. After the device is in, I think there are also some really good reasons to want to do CMR. Say they have CRT; we know that we get better assessments of left ventricular function and right ventricular function with CMR, and so that gives you a more accurate assessment of function with CRT. Sometimes when you're drawing the contours with echocardiography, it can be a little subjective based on how good the echocardiographic windows are. Another thing, though, that's, I think, of great interest to the field is ICD generator changes. A lot of people are interested in this question of what's the best approach for a patient where they've had an ICD, they met primary prevention criteria, they've had it in for 10 years, they haven't had any therapies, and maybe their LV function has improved to over 35% now. Do you change out the generator? Whether they have scar could influence that decision, and we now have really good wide-band protocols for transvenous devices. They work really well. So I think those are a couple of different areas where you might want to consider MRI, both pre- and post-device, and some of the technical considerations. Kate, do you want to comment on this, how you, in your kind of practice, deal with patients with devices? Yeah, so I'll just emphasize, you know, we do a ton of CMR pre-device implantation, working with our EP colleagues, and I think, you know, largely that's focused on identifying the substrate, potentially, or scar. But I totally agree with both Michael and Ken's point in that, you know, even 10 years ago, you know, we just largely didn't do cardiac MRI in patients post-device, and that has dramatically changed, but there's maybe not been quite enough of a shift in emphasizing to our clinical colleagues that this is available. And I think AI will have a role in improving the image quality that we get from those, you know, specifically generative AI, but we have to be careful it's not hallucinating things, you know, where there might be artifacts. But I wanted to link back to what Luke said earlier about ensuring we have representative data for training models, and so there has generally been under-representation of patients with devices, so often those patients have been excluded, of course, from CMR studies or studies where CMR is the endpoint because of concerns about safety or artifact. And so I think that's really important now that we can safely do CMR in these patients who have devices, that they are included in studies, not just the specific, you know, device studies where the focus is on those patients, but that they are included, and so that we have representative data, and I think that's really important. The other, you know, thing I was hoping maybe Luke can comment on as well is, you know, we now are able to scan cardiac MRI in patients at 1.5T largely.
The devices are, you know, now we would scan them at 3T, but now we have low-field scanners, and as our cardiac MRI technology changes, you know, how does the regulatory side keep up with that, and how do we address that? I think that's really important for us to think about. Maybe this is a great point, Kenneth and Kate, so maybe there's two questions for Luke that come out of this. Number one is, so we do have really good technologies for mitigating artifacts from devices, but there are still artifacts, so AI could help with that. So if somebody wanted to develop an AI technique for artifact removal, how would that work? And then if you wanted to translate everything that works at 1.5 and 3T to low-field, what would be the regulatory barriers or challenges or opportunities there from your perspective? So I guess I'll start off with the most generic comment, which is that these are very specific questions and very important questions, and we do have a program through FDA called a pre-submission process or a Q-submission process where, you know, you can come talk to us about ideas that you're having for these devices, these systems, but I think most importantly what your tests, how you're going to test it, and what your ideas for testing it are so that we can get a sense, you know, before we see the data that you're collecting it properly and that it's large enough and that you're mitigating bias in the right way. And so I think, you know, that's my first point, is that, you know, coming to talk to us early is kind of always the best thing because then we can know what you're thinking and how you're thinking about it before it shows up on our desk and we have to make a approve or disapprove decision and we're scrambling to answer these questions on the fly, you know? I think that, you know, from what I've seen in how models are advancing really rapidly, you know, there are a lot of different methods that are available to expanding one modality to another and that if you have, there's an old saying in, I mean, which is relative because AI is not that old, but there's an old saying in AI development that whoever has the most data wins. And so if you have, you know, sorry if I'm butchering this, again, my background is not in imaging. If you have a T1 model, right, and you want to expand it to a T2 model, you could probably use the same training process and the same algorithm if you have enough data to make it work and, you know, you pre-specify the performance criteria and you use the same metrics. So that's my general impression of this. And then I think I see we have under 10 minutes, so I'll take this opportunity to make a pitch. I don't generally make a pitch, but to any of the developers in the audience here is that to answer any of these questions that Michael was asking me is really a team effort and some of your best tools are clinicians like Ken and Kate because they're going to tell you what, you know, the clinically meaningful stuff is and they're going to help you suss out the risk because, let's face it, nothing we do in medicine is risk-free and the benefit. And what we're looking for at FDA is a positive benefit-risk profile, right? 
And so whether that means narrowing your patient population or changing the wording on your indications for use, the clinicians are often the best people to be able to assess that and understand, hey, when this goes into practice in my hospital or a hospital, what's going to be most important to the user to mitigate the risks that we know exist and to maximize the benefits? And then I think the other people that are generally underused in this area are statisticians. Statisticians have become my best friends when I look at some of these files. Why? Because they're exquisitely educated in the language of bias, and I don't have enough time to talk about, you know, bias in AI systems, but it means a whole slew of things, and they are really the ones who can key you in early on to say, you really might want to consider this source of bias. You really might want to consider enrolling more people from this patient population. You really might want to consider. So that's, in addition, hopefully I answered your questions, but that's my pitch in forums like this: the clinicians on your team and your statisticians can really help you, point you in the right direction for developing the next generation of whatever device you're looking at. It warms my heart to hear someone like Ken using terms like convolutional neural network and, you know, GANs, and Kate talking about generative AI. And it's wonderful. You know, our clinicians are brilliant people and they learn a lot. And a lot of them have taken on a lot of extra work to learn that terminology of AI. But I think it's also incumbent on the developers to, you know, come to them with the questions about what they're experts at, which is, you know, the clinical practice of medicine and how to treat patients and, you know, the risks and benefits that are acceptable in certain use conditions. So hopefully I didn't ramble too long. No, this was great. Actually, you know, I think you made a very important point that, you know, collaboration across disciplines is really key to all of this. That's why we're here. That's why there's an SCMR HRS session here right now with the scientists, the cardiologists, right? Somebody from regulatory. So I think we need more of this, this kind of crosstalk between the societies, between specialists, between engineers, clinicians, and people with regulatory expertise and, you know, biostatistics and everything that was mentioned. So great, great points, really, Luke. Thanks for sharing that. We have about four minutes left. So what I want to do just maybe last is go around to everybody: in five years from now, 10 years from now, Luke, Kate, Ken, EP and cardiac MRI, where will we be? Who wants to go first? I can start. Looking out in the audience, I also wanted to just mention left atrial imaging and atrial fibrillation too. And that's, you know, Dr. Passman and Dr. Markle, you know, who've done a lot of work in this area. And certainly, I think there's a lot to be done with electrocardiographic monitoring for anticoagulation and atrial fibrillation. And I am curious to see how the field of left atrial imaging and 4D flow evolves for that, and how perhaps, I think you were getting at it, multimodal approaches, right, how a multimodal approach that integrates clinical factors, CMR, and electrocardiographic assessment can get us beyond the CHA2DS2-VASc score, you know, that would particularly be interesting.
And then, I think for VT ablation, that's going to be a big area for CMR to integrate that into our VT ablation procedures, make them more efficient, help us know whether we need to do epicardial or endocardial ablation, where to go, things like that. And then, you know, we still need a, you know, a good randomized trial showing how MRI can get us beyond the, get us to a higher accuracy in determining who needs ICDs, you know, particularly there are several conditions of interest, not only ischemic cardiomyopathy, but non-ischemic cardiomyopathy, hypertrophic cardiomyopathy, and others. And I think there's a lot of potential for AI and CMR to be integrated in a sort of multimodal way to help us there. Kate? I'll keep it brief. I want to highlight, I would love to see, you know, integration of all the multiomic data, so the blood work and the ECG, the whole two results into some sort of risk prediction models. But I also think it's really important for us to think about, you know, I'd love to see CMR not only be more efficient, but more sustainable, not just from the workforce perspective that we talked about, but from data storage and so forth, you know, how are we going to minimize our downstream impact in terms of the environment and so forth, and so able to ramp up our ability to deliver CMR while also minimizing our impact. Thanks Kate. Luke, any comments where you see the field going? So I think if I play it safe, I'll go back and just reflect on the fact that really the history of AI is built around image recognition, right? And so I would expect that that probably will continue to be the case here. So image acquisition, image refinement, perhaps artifact reduction might be a very fruitful area to explore. And then also, you know, how do you build on that? I'm looking at an example of PV mapping, and it certainly wouldn't be difficult, you know, using existing electrodes and probes to integrate that type of image acquisition with that real-time data you're getting from the probes to perhaps have more precise PV mapping or maybe have recommender systems to the EP as to, you know, the most fruitful places to ablate. But I think anything around the image acquisition and helping the radiologists focus in on areas that are either problematic or malignant or things like that is where I'd probably put my chips. All right. Thanks a lot. I think this brings us to the end of the session. I think it was a fantastic discussion. I want to thank our wonderful panel, Kate, Ken, and Luke, for their perspectives. And thank you, of course, for listening. And then also make a pitch for SCMR. Check us out at scmr.org. We're a great society. Become a member.
Video Summary
The video is a session about the use of cardiac MRI (CMR) in patients with heart failure and heart rhythm disorders, jointly hosted by the Society for Cardiovascular Magnetic Resonance (SCMR) and the Heart Rhythm Society (HRS). Dr. Michael Markle, the session moderator and SCMR president, introduces panel members: Dr. Kate Hanneman, a cardiac radiologist; Dr. Ken Vilcek, an electrophysiologist; and Luke Ralston from the FDA, who focuses on artificial intelligence (AI) and machine learning.

Key points include recent innovations in cardiac MRI, like advanced imaging techniques (cine, LGE, parametric mapping) that provide detailed insights into heart function and tissue abnormalities. Discussions center on how CMR can be used for risk stratification, guiding therapy decisions, and during EP procedures, especially for patients with devices.

Challenges highlighted include the need for efficient and accessible CMR protocols, AI's role in improving workflow, and standardizing image acquisition. Luke Ralston discusses regulatory aspects, emphasizing data quality and representativeness. The session concludes with future aspirations: integrating multi-omics data for better risk prediction, improving CMR efficiency and sustainability, and leveraging AI to enhance image acquisition and clinical decision-making processes.

Overall, the session underscores the collaborative efforts required to advance CMR technology and practice.
Keywords
cardiac MRI
heart failure
heart rhythm disorders
SCMR
HRS
artificial intelligence
advanced imaging techniques
risk stratification
multi-omics data