HRX 2024 Top 5 Abstracts
Video Transcription
Good afternoon, everyone. Can you hear me okay? Wonderful, thank you for being here. We have the top five abstracts to review today, and we will start with Dr. Musad from Valley Health, who will talk to us about the initial experience with a novel augmented reality system for use during catheter ablation. The format is seven minutes of presentation and then three minutes of Q&A, so I'd love to have your questions come through on the iPad. Hello, everybody. I'm feeling very honored and humbled to be on this stage and that our abstract was selected as one of the top five, and on behalf of our co-authors I'm ready to give you a glimpse into the future of the EP lab and ablations. Next. Electroanatomic 3D mapping is the standard of care nowadays, and a lot of us would not even know how to do an ablation without it. Going from fluoroscopy-guided ablation, where you see only the contour of the heart as you advance the catheter, to 3D mapping, where it is very precise and you can see the origin of your arrhythmia and go in and ablate it, has been a leap forward for the EP world and for the electrophysiology and ablation setting. But can we do better? Yes, we can, and I'd like to convince you in the next few minutes that we can do even better. CommandEP by SentiAR is software that connects multiple sources of data and allows collaboration, and through its display it creates a true 3D hologram of the anatomy created by Carto 3. I remember the first time, almost half a century ago, that I saw a hologram: it was Princess Leia in Star Wars, and as a young kid I always dreamed of seeing a hologram. Now the hologram is a reality, and it gives the physician control of model manipulation.
Next. It has simple navigation: you have a cursor in the center of the screen, and with movements of the head you can access the navigation tools on the wheel on the left side and rotate and manipulate the anatomy as you wish. Next. The most exciting feature, I find, is the dynamic clipping plane: you have your model in front of your eyes, and if you lean forward you can go inside the model, inside the heart, as if you were there in the room, and you can see all the exits, all the creases, all the ridges. It's a different view from what you are used to. Next. The study objective was to describe our initial experience with CommandEP during EP procedures in a real-world setting. Next. We can use this for RF ablations, either to see the contact of the catheter and evaluate the integrity of the lines, or to see the esophageal probe, as on the left side, and the actual distance from the esophagus to the heart or to your catheter; again, whenever you see it in 3D, you see the depth. We also started using it with PFA catheters, and as you can see here, the PFA catheter is represented as a lasso, and you can see it in the basket or in the flower configuration, which gives you a much more precise location of the catheter compared with ICE or with fluoroscopy. Next. Our initial experience comprised 11 patients who underwent ablation of 13 arrhythmias by five experienced electrophysiologists. Next. As for substrate, we ablated mostly atrial fibrillation and atrial flutter, but we also used it for SVTs and for AV node ablations.
Next. Looking at a survey on a five-point Likert scale, most of the physicians felt that the headset is comfortable, reliable, and easy to navigate, and that the image is very good. Many physicians also acknowledged that it improved their understanding of the anatomy and navigation of the catheters. We also learned that there is a learning curve: by about the eighth case, everybody starts to understand how to utilize the system, and everybody liked it. Next. In conclusion, real-time 3D guidance using CommandEP promises several important advantages over 2D images of a 3D model, which may lead to more efficient ablation, but larger studies are necessary to confirm its superiority. Now, are you ready for some fun? I'd like to do a live demo so you can see the model and the ablation catheter. I'm going to put the headset on. Hello, how are you? You hit confirm, you have the model here, and then you go to the menu and you can move it. I'm going to move it to the target, that's the target there, and then I can either push it further away or bring it closer; I like to bring it closer to me, toward the clipping plane. Then I lock it there and open up the wheel for movements, and you can see here very nicely the posterior wall with the veins. This is an ablation case that used PFA. What I usually like to do is tilt it up or down and then go inside the heart, and as you can see here, I'm going inside the heart. These are the left veins that I can see as I go in and out, the left veins that were already ablated, with the points from ablation, and you can see the ridge here and the appendage, and of course the posterior wall. You'd be surprised to see that usually the posterior wall is not really posterior, it's actually higher up. Then you go and see the right veins, again coming from outside into the veins, and you can see the movement of the PFA catheter, the red one; the marker is the tip of the J wire that goes into the vein. I can see the catheter either inside the heart, here, or from outside, but usually I like this view from inside the heart to evaluate the anatomy and the position. You can see here a very well positioned catheter, encompassing the whole entrance of the vein with the carina. But when we try to move it to the superior vein, you see that initially the position is not as perfect: on ICE and on fluoroscopy it seems to be very well positioned, but as you can see here it's mostly posteriorly directed. And whenever you don't want to see this anymore, you can just hide it. I'd like to thank you very much, and I'm open for questions. So I do have a question from the audience that I'd like to tack onto a couple of other questions. The question is: do you have to have the glasses on to see the holographic display, which I assume is a yes? Yes, the glasses, that's it, yes. Does it affect ergonomics? Some EP labs use a headset to communicate with the staff. We do have another headset to communicate with the staff, which I put on top of this one, but in the next generation I think it will be integrated into this headset. I think that would be helpful. The other question was: this is a great teaching modality as well, right, for helping with challenging cases; is there a possibility for dual-operator interaction? Yes, absolutely. I could not show you, but there is a
share button, I think you can see it on the wheel, that lets you connect two or more headsets so they are linked and synchronized: whatever you see, the others can see, and either the operator does the movements, or you can have an assistant choosing the best options for the movements and the views. That would be a game-changer for fellowship training. Absolutely, this is an excellent tool for training. Wonderful, thank you. Okay, we will do our second presentation now: integration of cloud-enabled AI analysis of VT isthmuses with electroanatomic mapping systems, by Dr. Volonka, who is the CTO of Vektor Medical. Thanks, everyone, for being here, and good afternoon. My name is Christopher Volonka; I'm the CTO, co-founder, and co-inventor at Vektor Medical, and today I'll be sharing our work on integration of cloud-enabled AI analysis of ventricular tachycardia isthmuses with electroanatomic mapping systems. This work was done in collaboration with my colleagues Christian Martone and Chris Schulte at Vektor Medical, as well as my electrophysiology co-founders Dr. David Krumman and Dr. Gordon Ho. Mapping VT ablations is complex, and it often involves understanding two underlying key mechanisms: the electrical origin of the VT as well as the arrhythmia substrate, which often involves diseased cardiac tissue. Current methods involve extensive catheter mapping to identify early activation sites, as on the left, as well as scar border-zone areas detected by high-density grid voltage mapping to show areas of diseased tissue. While effective, catheter mapping is time-consuming, complex, and can be challenging, and this is where our latest work attempts to improve the workflow for EPs. We envision a workflow that seamlessly integrates electrical information from the 12-lead ECG of the VT with structural information from CT to produce a 3D model that clearly shows where the arrhythmia substrate and the electrical source mapped from the 12-lead ECG are located. The model in the middle visualizes isocontours of myocardial wall thickness to indicate areas of potential arrhythmia substrate. The contours show areas of thin to thick myocardium on a gray-to-purple color scale, where gray is less than two millimeters of wall thickness and purple is normal heart tissue, greater than 10 millimeters. A transparent layer is superimposed onto the wall-thickness map to visualize the detected arrhythmia source of the VT, which is obtained from computational mapping. Finally, the data should be easily exportable into catheter mapping systems so that electrophysiologists can view it in real time, and it can help with pre-procedural mapping guidance. We have implemented this as a cloud-based, end-to-end pipeline that processes the ECG and the CT data asynchronously: the ECG workflow is letters A, B, and C at the top, and the imaging workflow is D, E, F, and G at the bottom.
The outputs of both pipelines are combined at step H, which calculates a fused model and from there exports into a catheter mapping system. The pipeline is non-invasive, rapid, and seamlessly integrates with current clinical workflows. Since we can utilize ECG and CT data from prior patient studies, the output can be used for pre-procedural planning as well as up-to-date guidance during the procedure. Here's how it works, step by step. First, a 12-lead ECG recording of the arrhythmia is collected, either in clinic or during the procedure while arrhythmia episodes are being captured by the mapping system. Next, in B, it is analyzed in VMAP, our commercially available computational ECG mapping system, which provides beat-by-beat localization of arrhythmia sources such as VT. With just a single selection of an arrhythmia beat, as shown in the screenshot of our system, VMAP provides a probabilistic hotspot on a CT-derived three-dimensional model of the heart. The hotspot has a white-to-blue gradient highlighting the areas of interest of the arrhythmia source. This whole workflow takes about five minutes, from ECG data upload to identifying the arrhythmia beat to producing the visualization, which makes it feasible to perform multiple times during a procedure as arrhythmias continue to be collected; we have seen this be especially helpful for polymorphic VTs, where you have a number of different morphologies coming from a number of different areas. Now on to our novel CT imaging pipeline. This is a fully automated process that involves obtaining a CT of a patient, de-identifying it, and uploading it to our cloud-based pipeline. We have an ensemble of trained deep learning models that identify 12 cardiac structures from the CT.
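The asynchronous two-branch shape of the pipeline described above can be sketched in a few lines. Everything below is a hypothetical stand-in: the function names and return values are illustrative only, not the actual VMAP or segmentation APIs, which are not public.

```python
from concurrent.futures import ThreadPoolExecutor

def map_ecg_source(ecg_record):
    # ECG branch (steps A-C): beat-by-beat localization of the
    # arrhythmia source from the 12-lead ECG (stand-in result).
    return {"hotspot": "LV apex"}

def segment_ct(ct_volume):
    # Imaging branch (steps D-G): segment 12 cardiac structures,
    # mesh the LV, and compute wall thickness (stand-in result).
    return {"structures": 12, "mesh": "lv_wall_thickness.vtk"}

def fuse(ecg_result, ct_result):
    # Step H: co-register the electrical hotspot onto the CT mesh.
    return {**ecg_result, **ct_result}

def run_pipeline(ecg_record, ct_volume):
    # The two branches share no data until step H, so they can run
    # asynchronously and join only at the fusion step.
    with ThreadPoolExecutor(max_workers=2) as pool:
        ecg_future = pool.submit(map_ecg_source, ecg_record)
        ct_future = pool.submit(segment_ct, ct_volume)
        return fuse(ecg_future.result(), ct_future.result())
```

The key design point is that either branch can be re-run alone (for example, re-mapping a new arrhythmia beat mid-procedure) without redoing the other.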
From there, we produce a high-quality mesh from which we can calculate the wall thickness of the left ventricular myocardium, and from those calculations we produce an isocontour map of LV wall thickness to highlight potential areas of wall thinning, which may indicate areas of proarrhythmic activity. The model geometry and isocontour map can be packaged into a standard mesh file that can be downloaded onto a local workstation for further processing and visualization. This process, being fully automated, also takes less than five minutes to run. Finally, the arrhythmia source location and the wall-thickness map are co-registered to produce a fused model on the patient-specific mesh, and the result is converted into file formats suitable for direct upload into catheter mapping systems like Abbott EnSite X, shown here. We've conducted studies using this pipeline on 31 VT patients so far at UCSD, and here is one prime example of how this combined analysis can provide useful information. This analysis was performed a day ahead of the patient's procedure under an IRB-approved protocol, and the results showed a VT target that coincided with an apparent critical isthmus, shown by an area of tissue running between two thin areas in the basal inferolateral LV. The combined model was exported to Abbott EnSite and uploaded before the EP was scrubbed into the procedure. During the procedure, activation mapping at the predicted site identified mid-diastolic signals, confirming the presence of the VT critical isthmus. And because we had this uploaded before the EP scrubbed in, he was able to investigate this area with catheter mapping first, which saved him a lot of time doing high-density mapping all around; by going first to the area we predicted, we saved him a lot of time.
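The isocontour coloring the speaker describes amounts to binning a per-vertex thickness value into a color scale. A minimal sketch: only the endpoints come from the talk (gray below 2 mm as possible substrate, purple above 10 mm as normal tissue); the intermediate bins and colors here are assumptions for illustration.

```python
def thickness_to_color(mm):
    # Bin a wall-thickness value (millimeters) into the gray-to-purple
    # scale from the talk; intermediate bins are assumed, not sourced.
    if mm < 2.0:
        return "gray"      # wall thinning, potential arrhythmia substrate
    elif mm < 5.0:
        return "red"       # assumed intermediate bin
    elif mm < 10.0:
        return "orange"    # assumed intermediate bin
    else:
        return "purple"    # normal myocardium, > 10 mm

# One color per mesh vertex yields an isocontour-style map.
vertex_thickness = [1.4, 3.2, 8.7, 11.5]
colors = [thickness_to_color(t) for t in vertex_thickness]
# colors == ["gray", "red", "orange", "purple"]
```

In the real pipeline the thickness values would come from the segmented LV myocardium mesh rather than a hand-written list.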
Then finally, after confirming those early signals, he proceeded to ablate at the site and achieved termination. Additionally, low-voltage fractionated potentials were observed in the region of wall thinning, which further validated our model. In conclusion, our novel multimodal AI pipeline shows promise for advancing ventricular tachycardia ablation. By seamlessly integrating multimodal CT and ECG data onto an interactive 3D model, we can help guide focused activation mapping, which can in turn reduce mapping time considerably. Finally, we designed the workflow to be non-invasive, rapid, and to integrate seamlessly with current ablation workflows. We're looking forward to further validating this in more studies and helping improve VT patient outcomes. Thank you. Thank you. Do we have any questions from the audience? While we're waiting on questions, Dr. Volonka: the premise here is anatomic VT mapping, right? It's wall thickness. But we know there's also functional VT. So for which patient cohort do you think this model would be best applicable? That's a good question. I think using the CT for the anatomical component certainly has a role for those patients, and it would be interesting to see how the functional side plays out with the ECG mapping, since that looks only at the electrical sources; you could still have variability in the morphology due to these functional characteristics, which could be captured by the ECG mapping. And the 31 patients that were studied, were they ischemic or non-ischemic, or was it a mixed cohort? It was a mixed cohort. Thank you. We do have a question for you, Dr. Musad, but I'll leave it to the end. Okay, sure. Our next presenter is Michael Mossi. He's a third-year medical student at Columbia.
And he'll talk to us about detection of structural heart disease using the 12-lead EKG: comparative performance of cardiologists versus EchoNext AI. Hey, can you guys hear me? Yeah. All right. So what is structural heart disease? Structural heart disease is broadly defined as any cardiomyopathy or valvular heart disease that can be imaged via echo. Despite the technology we have, structural heart disease is underdiagnosed: for example, over 50% of patients with moderate to severe valvular heart disease are undiagnosed. That leads to worse patient outcomes and increased hospital costs, and it will likely continue to worsen as the elderly patient population continues to grow. To help remedy this issue, the team at Columbia created EchoNext. EchoNext is an AI model designed to detect structural heart disease from an ECG, and it was trained on over 1 million ECG-echocardiogram pairs from eight New York-Presbyterian hospitals, which provided a very diverse and robust dataset to train our model on. It was highly accurate at detecting structural heart disease, defined as left ventricular ejection fraction less than or equal to 45%, left ventricular hypertrophy, right ventricular dysfunction, pericardial effusion, valvular heart disease, or pulmonary hypertension, and it was tested at three external sites: Montreal, Cedars-Sinai, and UCSF. So we wanted to see how this model would compare against cardiologists in detecting structural heart disease from an ECG. We surveyed cardiologists, providing them three blocks of 50 ECGs each. Each ECG came with the patient's age, their sex, and the different parameters typically listed on an ECG, as well as, of course, the ECG itself. And then we asked them: do you believe this patient has structural heart disease and would benefit from having an echo done?
Then, after each block of 50 ECGs, we provided the cardiologists with the same ECGs, without telling them they were the same, but with the AI model score added. The model score ranges from 0 to 1, with a score of greater than or equal to 0.6 indicating that the model predicts this patient has structural heart disease. And we asked them the same question: with this added AI model score, do you believe this patient has structural heart disease? We had 13 cardiologists fill out the survey, and they interpreted over 3,000 ECGs, with a 50-50 split with and without AI. We found that the AI model enhanced the cardiologists' accuracy from 64% to 69%, but not up to the AI model's stand-alone accuracy of 77%, and this trend held for sensitivity and specificity as well. So, in conclusion, we found that the AI model outperforms cardiologists in detecting structural heart disease via ECG, and that the AI model enhances cardiologists' ability to detect structural heart disease, but not to the AI model's level. We acknowledge that future research is needed to optimize the human-AI interaction. One limitation we acknowledge is that cardiologists are not trained to detect structural heart disease from just an ECG; they are usually provided with the patient's history and physical exam. We simply wanted to model this interaction and see how cardiologists perform under these conditions. With that, I'll take any questions. Thank you. Thank you, Michael. Oh, good, we do have a question here for you: do you think a similar algorithm could be developed based on a one-lead EKG coming from ambulatory devices or implantable loop recorders? I didn't catch the first part; could you repeat that? Do you think a similar algorithm could be used with a one-lead or limited-lead EKG? Yeah, I definitely think so.
I think when you have an image like an ECG and you're able to derive data from it and correlate it with an echo finding, as we did with our model, you can find that there is some correlation between the image and some echo finding. So it's definitely possible someone could do that with a one-lead or any different type of modality; I definitely think that's another possibility as well. Was the model trained on a single EKG, or did it have the advantage of looking at prior EKGs? No, it was just looking at one ECG and comparing that to the echo done for that patient. A single one? Yes. Around the same time? Yes, within a year. Let's see, oh, there's one more. What next step do you see for this technology to benefit patients? That's a really good question, and something our team is always thinking about. I think the nearest-term application is as a safety net. As I talked about on the second slide, structural heart disease is still underdiagnosed, and if we're able to find patients who present asymptomatically, without the dyspnea or the orthopnea, how can we identify those patients and find a way to get them to further testing like an echocardiogram? So I definitely see it as a safety net for now, but I'm sure different applications will be found later on. I guess the final question would be: was this all validated in adult patients? Yes, adult patients. Were certain cohorts excluded, like pregnancy? Sorry, could you repeat the question? In adult patients, were certain cohorts excluded, like pregnancy, which by itself has a different ECG? Yes, patients like that were excluded.
And we tried to find patients who had, quote-unquote, symptoms of structural heart disease, and that's why they had an echo done, because if a patient doesn't have any symptoms... Only if they had an echo? Yes; otherwise we're not going to have an echo to compare against, for asymptomatic healthy individuals. Exactly. Great, thank you. Our next presentation is on deep learning approaches for prediction of ventricular arrhythmias using upstream electrograms from intracardiac devices, by Dr. Bhatia from Emory University. Thank you. I'm Neil Bhatia, a cardiac electrophysiologist at Emory University, and I'm going to be talking about our research endeavor: predicting ventricular arrhythmias from upstream intracardiac electrograms using deep learning approaches. Sudden cardiac death due to ventricular arrhythmias remains the leading cause of mortality in the United States. At this time, implantable cardioverter-defibrillators are the single most effective treatment for preventing sudden cardiac death due to these ventricular arrhythmias. However, they are not without their drawbacks: studies have shown that patients with multiple defibrillator shocks have a higher risk of mortality; despite our best efforts, we still struggle to reduce inappropriate defibrillator shocks due to atrial arrhythmias; and most importantly, our patients suffer from post-traumatic stress disorder, not knowing when the next defibrillator shock might occur. To prevent ventricular arrhythmias, we use medications and ablations, but the success rates of these treatments are still suboptimal, and I think the big issue is that we still do not understand why certain patients develop ventricular arrhythmias and others do not. If we could predict in real time who is going to have a ventricular arrhythmia, that might hold insight into the initiation and arrhythmogenesis in the ventricle. So here are two strips from a subcutaneous defibrillator.
The top one, as you might have guessed, looks like something bad is going to happen: there's non-sustained VT with a short coupling interval. The one on the bottom is a presenting rhythm, meaning the device nominally sends an electrogram when the patient is sleeping or just at a random time. But as you can see, both of these events lead to a sustained ventricular arrhythmia. So the question is: what is happening in these upstream electrograms that leads to this arrhythmogenesis? It's almost as if there are two different signal phenotypes, and that is what we wanted to tackle. How did we do that? We constructed a digital pipeline where we extracted recorded electrogram episodes from a remote monitoring database. After extracting them, we uploaded them into an online annotator we created so we could review these events, and the plan was to feed these annotated events into a machine learning model. Here's an example of a strip from the subcutaneous ICD. We chose the subcutaneous ICD because in a transvenous system you only get two or three seconds before an event, which is neither useful nor meaningful to act upon. Here you can see the ventricular arrhythmia event, but we are interested in the upstream window before the event happens, to see what is going on in these upstream electrograms that might lead to a ventricular event. Now, here at Emory we are a high-volume subcutaneous ICD implanting center; however, we still did not have enough episodes to build a model from scratch, so we turned to transfer learning. A transfer learning approach allows us to leverage a pre-trained model, process data more efficiently, and achieve better model generalizability. The subcutaneous ICD signal in some ways does look like an EKG lead, because it's not a true intracardiac electrogram.
So we took a model previously trained on a large EKG database for rhythm classification and used it to train on these electrograms. We had about 243 annotated true ventricular arrhythmia events and 8,545 presenting-rhythm events, the nominal presenting rhythms where there is no event. We used five-fold cross-validation, split patient-wise, so the same patient is never in both training and testing. Our average AUROC was about 0.96, and our AUPRC was 0.82, which is probably more applicable here given that this was an imbalanced dataset; but as you can see, our sensitivity and specificity were also quite excellent. We did a Grad-CAM analysis to get a better sense of what the model is picking up on, and as you can see, as we get closer to the ventricular event, the model seems to be keying in on the QT interval. So it might be related to the repolarization underlying the QT interval, or maybe it's the repolarization that is leading to the ventricular arrhythmias. Now, the big question: based on this pilot data, I think there is a digital signature in these upstream electrograms, but the big question is when this process starts. Our arrhythmia ablation strategy has really shifted from structural to functional ablation, so clearly there are probably functional changes in the ventricle that lead some people to develop ventricular arrhythmias and not others. If we can figure out when that's happening, this might give us an opportunity to act on the event before it even occurs. Thank you. Okay, thank you, Dr. Bhatia. While we're waiting for audience questions: you analyzed just immediately upstream of the episode, right? Yes. So clinically, that wouldn't really help us. We did look immediately upstream of the event, and I think the sense is, at this time, why are these patients having these ventricular arrhythmias?
Out of all the ICD patients, maybe 10 to 15% are having ventricular arrhythmias, so looking upstream will give us a better sense of what is happening that's causing these arrhythmias. In terms of action, that is something we can look into, whether it's overdrive or faster pacing, medications, or whatnot. I think you need a better sense of when these arrhythmias are going to happen; a lot of papers have looked at 30 days ahead or 60 days ahead, and those are not very meaningful in terms of actionability. If we can get a better sense of when these are happening in real time, that might help us deliver a more appropriate treatment. Since this entire study was done in the subcutaneous ICD population, which is generally a more select population, with channelopathies and the like, and not generally an ATP population where we do ATP entrainment: do you think the QT prolongation is a signal of the cohort that most operators select for subcutaneous ICDs? I will tell you, this was not a primarily genetic population at Emory; we have a lot of non-ischemic, end-stage renal disease patients, and there are some ischemic cardiomyopathies as well. So this patient population is not the young, long-QT population that I think a lot of implanters choose the subcutaneous ICD for. So it was more of an all-comers population? Yes. We were on the IDE trial for the subcutaneous ICD, so a lot of these patients, when we were initially implanting, were primary prevention, ischemic or non-ischemic cardiomyopathy, narrow QRS, no need for pacing. Fascinating. So it comes down to the QT every time. Okay. All right. Thank you. I'll just check one more time. Oh, there is another question from the audience.
Is there enough information in a traditional ICD EGM, such as the can-to-RV-ring signal, to conduct a similar analysis? Are you aware of any other groups doing that kind of work? That's a great question. When we initially looked at this, nominally the transvenous ICD only records two to three seconds upstream of an event. Now, the Mayo Clinic group just published on this, I think with Biotronik ICDs, and they used both the far-field and the bipolar electrograms, but they were only able to look at one to two seconds before an event, and we wanted to go beyond that. The subcutaneous ICD nominally records 30 to 40 seconds upstream of an event. Thank you. And our final presentation is also by Dr. Volonka, who will talk to us about validation of deep learning VT substrate models. Hello again, it's Chris. Today I'll be giving this talk on behalf of my colleague Christian Martone, the author and creator of these slides, who unfortunately couldn't be here today. Again, this work is a collaboration with Drs. David Krumman and Gordon Ho, and he worked on validating the deep learning model ensemble that we use in our multimodal pipeline. As I discussed, VT is a complex arrhythmia that involves both electrical and anatomical or structural information, and leveraging these two pieces of key data from the clinic can help us generate actionable multimodal insights into heart malfunction. Again, VMAP provides the electrical information through its ability to map arrhythmia sources based on the 12-lead EKG, and we're now interested in combining this technology with the anatomical information embedded in CTs. In particular, we're looking at myocardial substrates, which we estimate using wall-thickness maps from these CTs.
A key requirement for constructing these wall-thickness maps, especially of the LV, is accurately segmenting the left ventricular myocardium, which is sometimes a very difficult chamber to segment and separate from the other structures. In particular, VT patients have a number of cardiac anatomical abnormalities, including dilation and scarring. A good point was brought up earlier that sometimes these patients are entirely non-ischemic and their VTs may be functional in nature, which VMAP can also help map independent of imaging. In addition, the CTs contain a lot of confounding artifacts, as well as a wide range of variability in fields of view, protocols, whether or not the chambers have contrast, resolutions, and the manufacturers producing the image data. So our pipeline starts from the raw CT, which is uploaded to our system, and our ensemble of models has been trained to identify 12 cardiac structures from the CT scans. The ensemble was trained on over a thousand CTs, from both healthy and diseased patients and from a variety of different types of scans, as shown on the bottom left: different protocols, different fields of view, some with contrast, some without, different orientations, and variable image resolution. The data also comes from a mix of both open-source and proprietary datasets to increase the variability and robustness of our models. A key part of identifying the chambers is making the models robust to all kinds of artifacts. Probably our favorite is lead artifact from implanted devices like ICDs, which produce very bright signals in the chambers and significantly reduce the contrast in the chambers you're interested in. As shown in the picture on the right, our model can successfully identify a chamber even with this very bright lead implanted in it.
We have also made our models robust to artifacts related to scaling, rotation, flipping, different types of Gaussian noise and blur, brightness and contrast variation, different resolutions, and nonlinear intensity variation. We've trained a number of different models on this data, including deep learning architectures such as DeepLab, UNet++, and a pre-trained UNet, among others. From these segmentations we can generate a 3D mesh and, knowing the geometry of the LV myocardium, calculate a wall thickness map represented as isocontours of myocardial wall thickness. The color scale indicates the degree of wall thickness, from gray through red for less than four millimeters, up to purple, which represents healthy tissue above a centimeter in wall thickness. We evaluated the models on a test set of VT patients obtained from UCSD, comparing accuracy using a Dice score between the masks generated by the model ensemble and physician-reviewed masks on the same patient CTs. We achieved very good Dice scores for each of the 12 structures, including the left ventricular endocardium and myocardium, the left and right atria, the right ventricle, the aorta, pulmonary artery, left atrial appendage, esophagus, vena cava, and the pulmonary veins. Here's one example patient's 3D mesh with calculated wall thickness, shown in AP and PA views, with the lateral view of the LV at the bottom. You can see an area of wall thinning that spans base to apex in the anterolateral portion of the patient's LV. When we compared the patient's high-density voltage map, obtained during the procedure, to the wall thickness map, we saw a good correlation of our predicted wall thickness with the voltages measured on the patient's LV endocardium.
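The thickness bands described above can be sketched as a simple per-vertex classifier. Only the 4 mm and 10 mm cutoffs come from the talk; the band names, including the intermediate "border zone" label, are illustrative assumptions.

```python
def thickness_band(wall_thickness_mm: float) -> str:
    """Map an LV wall thickness value (mm) to a display band.
    Cutoffs of 4 mm (thinned/scar) and 10 mm (healthy) are from the talk;
    the intermediate 'border zone' label is an illustrative assumption."""
    if wall_thickness_mm < 4.0:
        return "thinned"       # rendered gray through red
    if wall_thickness_mm >= 10.0:
        return "healthy"       # rendered purple
    return "border zone"       # intermediate isocontour colors

# Classify thickness samples along a hypothetical base-to-apex line
samples_mm = [2.1, 3.9, 6.5, 10.0, 12.3]
print([thickness_band(t) for t in samples_mm])
# ['thinned', 'thinned', 'border zone', 'healthy', 'healthy']
```

In practice a classifier like this would be evaluated per mesh vertex, with the resulting bands rendered as the isocontour colors described in the talk.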
Particularly in the area of wall thinning, there was an observed area of low-voltage potentials, and it correlated well with the ablation sites, indicated by the red markers, and ultimately with the VT termination site. Moving beyond the anatomical information, which is just one piece of the puzzle, we are now developing the ability to incorporate the arrhythmia hotspots we get from VMAP onto these models. As part of our study, we looked at how often our VMAP hotspots overlap with areas of wall thinning, particularly in patients with ischemic VTs. In this one example patient, we showed a hotspot in the LV apex, which coincided with an area of wall thinning measuring about two millimeters, and a Dice score calculated between the hotspot and the area of wall thinning confirmed good alignment of the two modalities. In conclusion, we have demonstrated that our AI-based model can accurately segment 12 cardiac structures in VT patients; it can identify wall thinning, which agrees well with invasive catheter mapping; and the combination of structural and electrical information jointly displayed on a patient-specific mesh can bring insights and hopefully improve the efficacy of VT catheter ablation. Thank you. Thank you again, Chris. Based on your prior talk, and I think it's a good segue to this one, I believe you've answered this in some form, but the audience was wondering: do you think a scar-based arrhythmia can be mapped, a massive scar? I think you did show an illustrative image in the second talk, but in that image you had multiple lesions, right? How well do you think it performs in areas of very large scars? In very large scars? Yeah, where the entire area might be less than two millimeters.
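The Dice score used both for segmentation accuracy and for hotspot-versus-thinning agreement is a standard overlap metric: twice the intersection of two binary masks divided by the sum of their sizes. A minimal sketch, with toy masks standing in for the real data:

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: a "hotspot" mask partially overlapping a "wall thinning" mask
hotspot = np.zeros((10, 10), dtype=np.uint8)
thinning = np.zeros((10, 10), dtype=np.uint8)
hotspot[2:8, 2:8] = 1     # 36 pixels
thinning[4:10, 4:10] = 1  # 36 pixels, 16 of them shared
print(round(dice_score(hotspot, thinning), 3))  # 2*16 / 72 ≈ 0.444
```

The same function works whether the masks are model-versus-physician segmentations on a CT volume or a VMAP hotspot versus a wall-thinning region on a mesh, as long as both are rasterized onto the same grid.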
Yeah, so the analysis I showed before the last slide was just the wall thickness map, and we know that in areas of large scar it's hard to know where on the scar the source might be; that's where adding the source localization from VMAP can help narrow the search down. So layering it with the VMAP information? Right. You expect the source to lie maybe somewhere on the border zone of the tissue, and particularly when it's a really large scar, just knowing that it's a large area doesn't help that much. Combining that with source localization from single beats can help tell you it's on this part of the scar versus that one. That makes total anatomic and physiologic sense. Okay, great. While we're waiting on more questions, I'll go back to you, Dan. The audience was wondering what kind of study design and data would help you build a business case for SentiAR? That's a very good question. I think the study design would be to show improvement in patient safety, and also, as you mentioned, training: I think this would be a very good tool for academic centers that are training physicians and running courses for other physicians to learn different procedures. And SentiAR is not only about the model you see today; you have to look two or three generations ahead. It's not going to show only the 3D mapping. On the same display you're going to have other inputs, from the X-ray, from the ICE. So you can imagine that in the future you might not need the boom in the room; you have only the SentiAR system and you do all your procedures based on these 3D holograms. I really like your last point, which would make EP labs more mobile, right?
Because right now we are confined to spaces that hold the equipment. And you need big rooms for this. Yeah, we need a certain amount of real estate. I think, also, we're already doing pretty well on safety for ablation, so showing a safety benefit would be a little hard. Another question would be adaptability: whether, with SentiAR on top of 3D mapping, you'd be able to eliminate one of the catheters, for example the ICE catheter. The real thing is that once you cross into the left atrium, I haven't been using ICE much afterwards; I'm using the 3D more and more and trying to navigate in real three dimensions, which is very nice because you can actually see depth, which you can't appreciate on two-dimensional screens. So yes, I think another study would be on economics, trying to see whether we can decrease the cost of ablation. Yeah, less equipment, less access, quicker ablation. Great, thank you. I do have a question for you, Chris. With VMAP and re-entrant VT, the identification of exit sites can be centimeters away from the critical isthmus, which we alluded to in my earlier question. But the audience had a good observation: will this really be as helpful for PVCs? Do we have any experience using your model for PVCs? You mean for VMAP, or for the VMAP combination with the imaging? The question doesn't actually say, but I'd love to know either way. Is it all for VTs, or do you also handle lone PVCs? Yes, we routinely do PVCs alone. Part of our clinical study also demonstrated our accuracy for PVC cases, both in the outflow tracts as well as in the myocardium, and it performed very similarly. Any last questions? Does anyone on the panel have questions? So I have a question for Neil.
So you said that you used the subcutaneous ICD because it records many more beats before the event. Where did you see the changes in the event? To me it looked like the changes were in the last several beats before the ventricular fibrillation occurred. With that in mind, can this be applied to the internal defibrillator? Well, one of the things we had to do: the presenting rhythm is about 12 seconds, and the upstream recording is about 30 to 40 seconds, so we had to cut the electrograms at fixed intervals, because if you feed in a longer electrogram, the model is going to learn, oh, this one is longer, this is going to end up being VT. So when we ultimately publish this, we actually cut it up from the beginning to the end, and the predictability did not change much from the very beginning of the event to the end. I showed you the one strip with the ectopy, where obviously something bad was happening, but the other one had zero ectopy, just sinus rhythm. But the QT prolongation was only on the beats just preceding the VT? Yes, absolutely. Okay, so I'm assuming you might create a model showing, for the intracardiac ICDs, if you have two or three seconds, at least two or three beats before the event, maybe you can see the change from the first to the last beat. So it might be feasible; I don't know. But even then, look, these are just one kind of heat map. Overall, the model's predictability did not change from the first 12 seconds to the last 12 seconds, which I think is actually pretty interesting, given that all these heart failure patients have PVCs, but not everyone has ventricular arrhythmias.
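The fixed-interval cutting described above, splitting a 30-to-40-second upstream recording into equal-length windows so the model cannot exploit recording length as a label shortcut, can be sketched roughly as follows. The sampling rate and window length here are illustrative assumptions.

```python
import numpy as np

def fixed_windows(egm: np.ndarray, fs_hz: int = 250,
                  window_s: float = 12.0) -> list[np.ndarray]:
    """Cut an electrogram into non-overlapping, equal-length windows so
    every model input has identical duration and recording length cannot
    leak the label. Trailing samples shorter than a window are dropped."""
    n = int(fs_hz * window_s)
    return [egm[i:i + n] for i in range(0, len(egm) - n + 1, n)]

fs = 250                               # assumed sampling rate (Hz)
egm = np.random.randn(36 * fs)         # a 36-second upstream recording
windows = fixed_windows(egm, fs_hz=fs)
print(len(windows), len(windows[0]))   # 3 windows of 3000 samples each
```

Scoring the model separately on the earliest and latest windows, as described in the talk, then shows whether predictive signal is present well before the event or only in the final beats.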
So I think it's worth understanding whether it is QT prolongation, or changes in repolarization, that leads some patients to have ventricular arrhythmias and not others. Okay, everyone, thank you so much. That was a very nice session, and thank you for your engagement. We appreciate it.
Video Summary
The conference featured discussions on cutting-edge technology and methods in electrophysiology. Dr. Musad presented on a novel augmented reality (AR) system designed to enhance catheter ablation procedures. The system, Command EP by SentiAR, creates a real-time, 3D hologram of the heart, allowing for precise navigation and interaction during procedures. This technology aims to improve the efficiency and safety of ablations.

Dr. Volonka introduced a cloud-enabled AI system for mapping ventricular tachycardia (VT) sources, integrating electrical data from ECGs and anatomical data from CT scans to create detailed 3D models. This approach enhances pre-procedural planning and real-time guidance during VT ablations, potentially reducing overall procedure time.

Michael Mossi compared the efficacy of detecting structural heart disease using AI with cardiologists' interpretations of ECGs. The AI model outperformed cardiologists and improved their accuracy when used as a supplementary tool. Future developments aim to integrate this AI into clinical practice for better diagnostics.

Dr. Bhatia presented on predicting ventricular arrhythmias using deep learning models on electrograms from subcutaneous ICDs. His study suggests that early detection of arrhythmogenic signatures could enable preemptive treatments, potentially reducing inappropriate shocks and improving patient outcomes.

The session highlighted advancements in AR, AI, and deep learning, emphasizing the potential for these technologies to revolutionize electrophysiology by improving diagnostics, procedural efficiency, and patient outcomes.
Keywords
electrophysiology
augmented reality
catheter ablation
ventricular tachycardia
AI system
3D models
deep learning
ECG analysis
patient outcomes
HRX is a Heart Rhythm Society (HRS) experience. Registered 501(c)(3). EIN: 04-2694458.
© Heart Rhythm Society