Pearls and Perils of the Metaverse in Medicine
Video Transcription
One, two, three. Okay. Does this work? Yep, I can hear you. Can you hear me? Very soothing. Feels like we're on the radio. It does feel a little bit like that, okay. Traffic this morning is a little more intense than usual. Yes, and there are showers expected in the afternoon. I think that's true, actually. Oh my goodness. I now know how this panel is going to go. Sounds about right. I'd like to welcome you all to the session. It is entitled The Pearls and Perils of the Metaverse, and several of us have talked about what that means. Basically, we're going to be having a conversation about medical extended realities. I'd like to welcome our audience members, both those sitting here and those of you listening around the convention center. I thought we would start with some introductions. Why don't we start with Mina down at the other end, and we can work our way back.

Yeah, thanks, Jen. It's super nice to be here with everyone. My name is Mina Fahim. I serve as CEO and President of MediView. We are a surgical navigation augmented reality solution that today really focuses on intervention for cancer, primarily liver, kidney, and bone. We just got FDA clearance in July, have done about 40 in-human procedures, and are really looking at how simple visualization, telepresence, remote collaboration, and data can help inform clinical decision making in the intraoperative setting. So very nice to be here and excited for the conversation.

Thank you for coming. Mitch.

Hey, good morning, everyone. My name is Mitch Doughty. I'm working as a research engineer right now for Intuitive Surgical. At Intuitive, I'm part of our Future Forward research group, and within Future Forward, we're trying to define what the next generation surgical robotic system could look like. As part of that work, I'm really focused on using augmented and virtual reality, or extended reality, technology to try and figure out ways to improve the experience, both for our surgeons and clinicians and also for our patients. So far, we've been doing a lot of work in the training and telepresence space, which I'm excited to get to talk about. For those of you unfamiliar with our systems (I know Intuitive has some branding issues), we work on the da Vinci robotic platform. If you've seen that video of surgery on a grape, that's the system we produce. One interesting thing about Intuitive that people may not know is that our surgeon console was actually one of the first widespread uses of medical VR, in the sense that when you put your head into the surgeon console, you take on a stereo, or 3D, view from the tip of an endoscope inside the patient. So you're sort of being teleported inside the patient. I would say VR and XR in general are really central to our company and something we are excited to keep pursuing. So yeah, I'm excited to be on the panel here today and looking forward to the discussion.

Thank you for joining us. Dick.

Yes, thank you for the invitation. My name is Dick Kwist. I'm the person responsible for the Abbott EP Advanced Technology Centers, the education centers we have around the world. I think we're a little bit more the consumers of virtual reality, or extended reality, as we have been using it for more than 15 years in educating our customers and our internal people. I'm happy to talk more about our experience.

Thank you. Justin.

Hi everyone, my name's Justin.
I originally started out my career in video games. I had wanted to develop video games and had been programming since middle school, when I had a family member get sick. That got me wondering if there was a way to use software and technology, these things I was very passionate about, not necessarily for entertainment, but to help people. So I ended up studying biomedical engineering in college, really wanting to invent new technology, but I didn't know how to get started. I was asking around for advice and spoke to a mentor of mine, and he said, if you want to invent something, you really need to understand the problem you're trying to solve first. He thought a great way to understand medical problems was to be a doctor. So I took his advice maybe a little too literally, went to med school at UCLA, and then did my orthopedic surgery training there. That's really where I experienced firsthand what I think is one of the biggest problems in healthcare today, which is how we train and assess healthcare professionals on surgeries and procedures. So I was able to combine my two life passions, video games and healthcare, and start Osso VR in 2016. We're the world's largest VR training company. We have about 200 employees. We've raised $109 million in venture capital. We have over 200 training modules, training up to about 5,000 healthcare professionals a month. And we just published a new study, so now we're at eight peer-reviewed studies showing the technology can improve performance anywhere from 200 to 300%. So I'm really excited to speak to you all today about what we're working on.

Exciting, thank you. And Burke.

Hi everyone, Burke Toss. Thanks for having us here, Jen, and moderating this panel. My esteemed panelists, thank you all. We're actually right there, for those of you sitting around here; that's our company over there, SentiAR. Fred, give it a wave. We developed a holographic guidance system for cardiac ablation procedures, a product that Jen and her colleagues would actually use. The idea was to reduce complexity in the procedure. So our focus has been intra-procedural utilization of an advanced visualization platform, which in this case is a mixed reality system. And the way we look at that is as a little more than just visualization: what can a physician do if they can connect with their digital tools in a new, unprecedented way? Because when they're in the case, they're busy caring for the patient, and there's a lot of digital stuff around them. Currently, really the only way to interact with it is through a terminal, a keyboard, or a mouse. So this is beyond just seeing things in 3D or using mixed reality. It's actually getting access to data that they currently don't have access to, or control of. That's the idea behind SentiAR. We're currently clinical. We just got FDA clearance as well. We have 65 cases under our belt here in Boston. I thought we were in Boston for a second. And we're continuing our journey here. I'm looking forward to sharing some confusing thoughts that we might have with all of you.

Thank you. Thank you. Just a quick note for audience members: if you want to put any questions you may have for our panelists into the app, I'm happy to receive them and ask them. But until then, since I have the pleasure of knowing all of you on this panel and I anticipate a boisterous conversation, let's start with Justin and Mina and Dick. And let's talk, oh, I'm sorry, not Mina. Mitch.
Thank you. Mina, I'm going to keep you to argue with Burke on this. Let's start with how medical extended realities really shine for training purposes. What's the data we have that it works, or that it doesn't work? And what do we have yet to create? What is the evidence that has yet to be generated to get you, audience members and people here, believing in this technology?

I have a lot of thoughts on this. I know it. How long do we all have?

I think the first thing I'd say is that we're using a lot of terms here, like VR, AR, extended reality, and it does seem like there's a convergence in this space. The Magic Leap 2 now has a VR mode, right? With the Quest Pro and the Quest 3, you have a mixed reality mode, and the Vision Pro clearly does both. So in the very near future, as in next year, most headsets will work in both a VR and an AR fashion. There's really less of a difference from a headset standpoint, which is good news, and one thing that's slightly less confusing for everyone. One thing that Apple has done really well, I think, is get away from some of the terminology. I just like the term spatial computing. I don't know. It's cool and just feels fresh. So, medical spatial computing. You heard it here first.

In terms of the data, does this work? What's really interesting is that in the same way you may have a medical textbook that is pretty horrific and unreliable, you also have medical textbooks that are the gold standard, like Netter's or, in our space, Hoppenfeld, which is a really big one in orthopedics. No one's debating the usefulness of books, right? And I think that's the trickiest thing in this space: you have a lot of variables in terms of the headset you're using, the software platform you're using, and then the specific content being studied. So asking "does VR work? does AR work?" is not the question. It should be: does this content on this platform work? That's how I think we should be saying it.

Obviously I know the most about our platform, but in orthopedics, we have eight peer-reviewed studies. Like I said, we've studied everything from comparing Osso VR to a video, to a motion analysis study at Wake Forest, currently being submitted for presentation, where we put motion trackers on residents doing actual physical surgery and then training in VR. We looked at 64 different motion variables, and 63 were statistically identical between the two groups, which is pretty wild. We even did another study on haptics, because we get a lot of questions about haptic feedback and force feedback. Right now with VR, you do have haptic feedback with the controllers; it's called cutaneous haptics, and it's not true force feedback. And with the Vision Pro and the shift toward what we call hand tracking, or hand control, which we now support, there's no feedback at all. We wanted to understand that. So we actually put Osso up against a fully physical training model at Johns Hopkins and found that there was no difference, which is supported by the past decade of research. It's kind of controversial. And then, like I said, some validation studies found a 200% improvement in performance, even 300%. I'm not going to go through all the studies, but long story short, it seems to work really, really well.
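To make the motion analysis concrete: "63 of 64 variables statistically identical" is the kind of result you get from running a per-variable significance test between the two training groups and finding no detectable difference in almost all of them. Here is a minimal sketch of that style of analysis, assuming independent-samples t-tests with a Bonferroni correction; the data, group sizes, and thresholds are hypothetical, and the actual study's methods may well differ.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_vars, n_subjects = 64, 20  # hypothetical: 64 motion variables, 20 residents per group

# Hypothetical motion-tracker data: rows are subjects, columns are motion
# variables (e.g., joint angles, velocities, tool path lengths).
vr_group = rng.normal(size=(n_subjects, n_vars))
physical_group = rng.normal(size=(n_subjects, n_vars))

# One independent-samples t-test per motion variable.
p_values = np.array([
    stats.ttest_ind(vr_group[:, i], physical_group[:, i]).pvalue
    for i in range(n_vars)
])

# Bonferroni correction: with 64 tests, an uncorrected alpha of 0.05 would be
# expected to produce about 3 false positives even for identical groups.
alpha = 0.05
differs = p_values < alpha / n_vars
print(f"{n_vars - differs.sum()} of {n_vars} variables show no statistically "
      f"detectable difference between VR and physical training")

"Statistically identical" here really means "no statistically significant difference was detected," which is why the choice of test and correction matters when reading claims like this.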
But I think the one point from the beginning is that we need to not just paint with a very broad brush of "VR works" or "VR doesn't work." We also need to decide what is good enough. The biggest challenge in digital health in general, I think, is that when you're studying a new pharmaceutical drug, it's very clear how you're supposed to study it: double-blind, randomized, placebo-controlled trial, p less than 0.05, you're good. But it's very different in digital health, especially in training, because we've never really asked what a valid form of training is. A lot of times when I'm talking with residency programs or the medical device industry, they're like, well, does this work? And I'm like, well, how do you know what you're doing now works? We've just never really looked at that. So I think we all need to agree, as different specialties or societies, on what the bar is and what we can all agree is acceptable. Because the challenge I find is that I'll go to one physician or one industry group, and they'll say, hey, this study is interesting, but have you looked at this? Or have you looked at that? It's a constantly moving goalpost. I think it would be very helpful if we all came together and created a standard: this is the line we're drawing on what counts as a valid training technology. So that's my rant, I'm done. Thank you all for listening.

Yes, thank you. I think there are some clearer indications of how it can be assessed. Just look at the complication rates of people starting with a new procedure. There is, for instance, a randomized clinical trial on lead placement comparing people who had been trained on mannequin-based simulation with people who had not, and it found fewer complications and shorter fluoro and procedure times in the group that had trained that way. Of course, it was a small study, but it was randomized. For me, looking at the outcome of the training, and the effect of the training on clinical work, can give a clear indication of how it works. And there are not very many randomized clinical trials. It was brought up earlier that there is a discussion about terminology; you referred to spatial computing. I still like to broaden it a little to also include mannequin-based simulation, because there you have a mannequin and a number of screens in front of you rather than being "there," but it still has the ability to absorb the trainees. They have the impression that they are treating a real-life patient, or that they are in a virtual world, during the training. That's why I want to include that one as well.

Mitch, jump in, because I know what Justin's going to do right now. Hold on. Okay. I'll be back later.

Yeah, I think there's been a lot of coverage of this point already, but understanding the intended use case is very important for VR- or AR-based training. One thing at Intuitive that we've just made public as of a few months ago is our effort in the training space around learning on our da Vinci system, not as a surgeon, but as a bedside assistant. Some of the key benefits we see, which get away from some of the limitations of traditional commercial VR hardware controllers, come from actually interacting with a physical system. We're getting data from our surgical robot.
As a user performs an action, we can understand what's going on, and we can also confirm that those actions have been performed. There are a number of ways, as we've talked about, of presenting this training information. You can imagine having a very similar training experience through a set of videos, or through dollhouse-based or augmented reality-based content on a phone. But really, the benefits of HMDs, or head-mounted displays, come from having the ability to use both hands for interaction, which is going to be very important for our application of training on a robotic platform, but also for many other applications as well. It's been a very interesting experience for us, and we've collected a number of internal data sets that indicate both the technical feasibility and the clinical viability of this training platform. I think the next step is understanding it from the business standpoint. How do you manage a fleet of headsets? How do you handle something as simple as connecting to a server inside of a hospital? How do you maintain data privacy? How do you ensure that when someone finishes a training procedure, they plug their headset back in and actually charge it? That's been a big issue that we've seen. But nonetheless, I think there's a lot of opportunity for training across a number of different applications, and we're going to see more and more acceptance of these devices as they continue to develop and be refined in the commercial space.

Okay, I swear, last point. No, no. I'm lying. It's not my last point. I think what you guys are working on is incredible, and something I find very helpful is that a lot of people view these technologies as if you have to pick one, and that one does everything. Obviously, you guys understand that's not the case. There's a learning journey we all go on, especially with newer or more complex technologies like da Vinci or most electrophysiology procedures, and these different platforms plug in at different steps in that journey, often where we have nothing to begin with. Mannequin-based simulation, hands-on simulation, cadaver training, hands-on training with the da Vinci: those have always existed, will never go away, and can be so much better with the newer technologies and augmented reality we're adding. What I'm talking about here is just standalone VR, where you have a $300 headset you can carry with you. It's not going to be as immersive or realistic as what you guys are doing, where you can hold the actual equipment, but it fills the gap where you're otherwise doing nothing, so you can get 100 reps in VR over a two-week period prior to a mannequin-based lab or cadaver course and get a much better outcome. That's what our data shows. And I think there was a very interesting study out of, it was a CU and Penn, where they looked at the utilization of simulation labs by cardiac surgery residents. They surveyed all the residents, and 30 out of 30 had access to a simulation lab. Then they asked how many of them had actually been there over the past year, and it was one out of 30.
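On the "boring stuff" Mitch mentions, managing a fleet of headsets is largely an operations problem: is each device charged, reachable, and ready before a session? As a toy illustration only, here is a sketch of a pre-session readiness check; the Headset fields, device names, and thresholds are entirely hypothetical and do not correspond to any vendor's actual fleet API.

from dataclasses import dataclass

@dataclass
class Headset:
    device_id: str
    battery_pct: int          # last reported charge level
    server_reachable: bool    # can it reach the hospital training server?

def readiness_issues(fleet: list[Headset], min_battery: int = 60) -> list[str]:
    """List the problems that would block a training session."""
    issues = []
    for hs in fleet:
        if hs.battery_pct < min_battery:
            issues.append(f"{hs.device_id}: battery at {hs.battery_pct}%, recharge before use")
        if not hs.server_reachable:
            issues.append(f"{hs.device_id}: cannot reach the training server")
    return issues

# Hypothetical fleet state pulled from device telemetry.
fleet = [
    Headset("headset-lab-01", battery_pct=35, server_reachable=True),
    Headset("headset-lab-02", battery_pct=92, server_reachable=False),
]
for issue in readiness_issues(fleet):
    print(issue)

The point of even a trivial check like this is the one both panelists land on: if nobody plugs the headset back in, the fidelity of the simulation never gets a chance to matter.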
So there's a crucial element here: we have incredible technologies, but if people aren't using them, if people aren't charging the headsets, it doesn't matter how good the technology is. There are elements around portability, around integration and charging, the boring stuff, that we do have to solve, because the technology needs to be utilized. I think portability and accessibility are critical as people get busier and busier. It is very helpful to have something you can access on the go, in addition to the incredible high-fidelity options we have.

Absolutely agree. So to that point, I'm now going to pull in Mina and Burke. I'd like to hear your thoughts, similarly, on the intra-procedural use cases of spatial computing, if that's what we're going to call it for this panel. Let's go ahead and do it. Let's try it on and see how it feels. I will tell you that none of my papers are going to have that phrase in them, because it's too many letters. But I'd love to hear your thoughts, because I think what we're going to get to next are the synergies between intra-procedural use cases and training use cases working together to move the field forward. Mina says you go, Burke.

Oh, right. I was just drifting off thinking about the conversation that took place just a second ago. So we get this question a lot. We'll present, someone will demo it, and after I've said they use it during surgery, during the operation, four seconds later they'll go, so this is for training, right? I'm not kidding you. This happens so often. So there is a very clear synergy here between training, what you do in the case, and how spatial computing is going to contribute to that. The way I think about this is as a matter of either gaining skills or augmenting skills with the technology. You have to gain the skills to do the job, and that's the training piece of it. Then, when you're doing the job, if you can augment those skills, hopefully you can perform at a more uniform level, right? And hopefully better. It's that whole bell curve situation. Are you a good driver? Yes, I'm an excellent driver. Everybody says that, but the reality is that's not true, right? Not everyone can be excellent. So that's the way I think about what the sort of technology we're talking about today can do for skill gaining and augmentation.

For us, being able to understand 3D anatomy from several 2D views is a very challenging task. Someone who doesn't do these procedures on a day-to-day basis doesn't really appreciate what an operator has to do, day by day, hour by hour. It's a very challenging task, and it's mentally loading. They train on this for years, trying to gain these skills. Some of them are naturally inclined, like how some people are very good at knowing where they are in the world; they can follow a virtual map in their mind. I'm not one of them, I can't do that, and there are plenty of people who can't. So they spend a lot of time gaining these skills, and some just can't; it's just harder for them. That's where the augmentation comes into play. They need to train, absolutely. Then we need to augment their skills. And the third piece I'll add, and then I'll throw to Mina, is the ability to connect to a digital environment during the case: computers, monitors, sensors. I think there's a lot to be explored there.
Because the physician who's operating, well, an Intuitive case is different because they're in front of the robot. But if you're in the room without a robot, then you really have no ability to control anything that happens in the room. Literally, you have to ask for help with everything else. That's an enormous burden, right? We put something like five people in the room just so they can hit buttons. So I think that's a third piece that needs to evolve, and it's going to be a new skill, a new capability, that physicians are going to have with this technology. Mina, what do you think?

Yeah, I mean, I'm going to take a small step back, because there's actually one piece of this ecosystem that's not represented on this panel.
Video Summary
The panel discussion revolves around medical extended realities, focusing on their uses in surgical navigation and in training for various medical procedures. Panelists Mina, Mitch, Dick, Justin, and Burke share insights on how augmented and virtual reality technologies are changing the healthcare industry. They discuss the importance of training and skill augmentation in using these technologies effectively, emphasizing the need for standardization and data-driven evidence of their efficacy. They also highlight the synergies between training and intra-procedural applications of spatial computing, showing how these technologies can enhance surgical workflows and decision-making. The conversation touches on the challenges of integrating digital tools in the operating room and the potential benefits for patient outcomes. Overall, the discussion showcases the promise of medical extended realities in transforming medical training and procedures for better healthcare delivery.
Keywords
medical extended realities
surgical navigation
training
augmented reality
virtual reality
healthcare industry
standardization
data-driven evidence
spatial computing
patient outcomes