Making Healthcare Better For All With Platform Thinking
Video Transcription
All right, good morning all. I'm guessing you can hear me well enough. It's an absolute honor, John. I've been looking forward to this day for a while, to actually have a seat right next to you and be on par with you, at least for this conversation. John really needs no introduction; I think everybody knows he's the president of the Mayo Clinic Platform. He's had a phenomenal journey, and if I get some details wrong, forgive me for that. Undergrad at Stanford, then Berkeley and UCSF for medical school, then a residency in emergency medicine at UCLA, and then to Harvard as the CIO at Beth Israel Deaconess. He was there for several years and, if I'm correct, moved to Mayo Clinic in 2020, where he is president of the Mayo Clinic Platform. A pretty phenomenal journey, and we're going to learn things outside of that journey today too. But we're here today to really talk about platform thinking and how we can change healthcare for all. Now, some of you may already know what platform thinking is. I didn't know a whole lot about it, and I'm looking forward to getting educated, because that is the future, and I think Mayo Clinic is already well ahead of many other institutions on that front. So, before we get into the conversation, a couple of things: there's a Q&A function on your app, so feel free to send questions across and I'll ask them based on priority. You can even upvote questions. But before we embark on the Q&A and the fireside chat, I'm going to ask John to give a 10-minute overview of what platform thinking is, so we have a level playing field, we understand what it means and how it can impact care in the future, and then we can dive in deep and you'll have better questions to ask, armed with that insight. John, welcome. Thanks for that great introduction. I'm just thrilled to be here.
So, you're innovators. Ask yourself the question: you've just woken up this morning with a wonderful idea, a new product, a new service, a new collaboration. How long is it going to take you to build a production system and make it a reality? Well, I've been around the world to a lot of academic medical centers, and the answer is usually somewhere between 18 and 36 months. Well, what if it could be 18 minutes? What would you need to do inside your organization to take any collaboration or partnership and reduce the cycle time on technology, policy, and process decision-making? So, to your point, in 2019 Gianrico Farrugia, the CEO of Mayo, said: what could we do so that there's a front door to Mayo Clinic, so that any collaborator in the world can come through that front door and we can start doing good work almost immediately? Well, what would you have to do? I think we all know that if you're going to deal with predictive or generative AI, novel analytics, or devices, it starts with data. And one of the challenges we all have is that data is messy, and we have so many different kinds of data scattered in silos: some in the electronic health records, some in clinical trials, some in registries. So could you, as an organization, bring all your data together in one place and organize it longitudinally, so that for every patient, birth to death, you had every event, every test, every observation ever made about them? And then could you get your privacy, compliance, and security people to agree on what would be the safe use of that data with collaborators? You'd have to de-identify it. Well, de-identifying data turns out to be not so simple. We could take structured data like a problem list or a medication list, remove the 18 HIPAA identifiers like name, phone number, and address, and okay, fine, that is now de-identified. But what about the note?
So a cardiologist has just written a note that says "this former president of the United States." Well, it didn't have a name, but the bin size of former presidents of the United States who are living and can go to a cardiology appointment would be pretty small. So you'd have to look at billions of notes, look at job roles, geographies, familial relationships, maybe even dates, and be able to do what we call hiding in plain sight. If you changed "former president of the United States" to "this politician," oh, okay, well, there are lots of those. So what Mayo said was: let's take every byte of data in the entire organization, 150 years of data. And it's not just EHR data; it's also telemetry and every image and omics and digital pathology. Put it in a de-identified cloud container and then start bringing collaborators into that cloud container. Okay, so one foundation of your platform is this de-identified data. Second, with all these solution developers coming in, you start to develop a library of algorithms, products, and services. And once you have a large corpus of that, you can start to deploy it in workflow. And that workflow might be inside your organization or in other organizations. And the idea here is that every participant in this ecosystem benefits from the presence of every other participant. The more data you have, the more solution providers; the more solutions you make available to your providers, and suddenly it becomes a virtuous ecosystem. But there's a problem. Mayo has 10 million patients. Are those 10 million patients, who are largely in Arizona, Florida, and Minnesota, representative of the population of Georgia? Not really. And so you'd better build collaborations. And as we were talking about at breakfast, some of the problems we solve are technological, some are policy, and a lot of them are psychiatry, right? How do you start to say, I actually will let down my guard and bring in collaborators?
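The "hiding in plain sight" idea described above, removing direct identifiers and then generalizing rare descriptors so the matching population grows, can be sketched in a few lines. This is a minimal illustration only; the phrase list, the identifier patterns, and the `deidentify` function are all hypothetical stand-ins for what a production de-identification pipeline would do at scale.

```python
import re

# Hypothetical generalization map: rare, identifying descriptors are replaced
# with broader categories so the "bin size" of matching people grows.
GENERALIZATIONS = {
    r"former president of the united states": "this politician",
    r"chief executive of a fortune 500 company": "this executive",
}

# A tiny, illustrative subset of HIPAA-style identifier patterns.
IDENTIFIER_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),     # phone numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),  # dates
]

def deidentify(note: str) -> str:
    """Mask direct identifiers, then generalize rare descriptors."""
    for pattern, token in IDENTIFIER_PATTERNS:
        note = pattern.sub(token, note)
    for phrase, generic in GENERALIZATIONS.items():
        note = re.sub(phrase, generic, note, flags=re.IGNORECASE)
    return note

note = "Seen 4/12/2023: former president of the United States, call 507-555-0199."
print(deidentify(note))
# The rare descriptor becomes "this politician"; the date and phone are masked.
```

A real system would, as the talk notes, have to consider job roles, geographies, and familial relationships across billions of notes, not a fixed phrase list.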
If you don't have data in Georgia, go talk to Emory, right? If you want, say, North Carolina, go talk to Duke, and consider these people equal peers in your journey. So what Mayo did is we brought in Mercy Health, a large Catholic healthcare system in the Midwest; UHN in Canada; Albert Einstein in Brazil; Sheba; the country of Singapore; the country of South Korea; India; and now four countries in Sub-Saharan Africa. And all of these folks are doing what we did: de-identifying their data, putting it in a cloud container, and working under a common governance, so that we can bring in collaborators around the world and do so on an equal peer-to-peer footing. It's not a hub-and-spoke kind of thing. And so when we talk about vocabulary, that's what a platform is: an ecosystem where every participant benefits from the presence of others. In our case, it's founded on data and collaboration and on deploying those solutions globally. But one other thing to mention, again, since you're innovators: everything that we're going to do over these next couple of years for algorithm development is going to follow a pattern. We're going to come up with an idea. The idea that you have is going to need certain kinds of data to make it a reality. So make sure you have it, right? If you're going to need, as you've said, patients that have certain aspects of their physiology measured in the home, well, do you have that data? Maybe you don't. So what do you need to do to get that data so you can develop your product or service? So again, Mayo's thinking about our data, others' data, home data, patient-reported outcome data, and company data all being brought together. And then understand your data. What are its biases? Does it include those of every socioeconomic status, so that it represents, say, the most stressed of our society? And once you understand your data, then develop a model. And then you're going to need to test the model.
And then you're going to need to deploy the model and monitor the model. And this is a virtuous cycle that is never done. It is just continuous. The world of predictive AI, I think we have a handle on; we can understand how algorithms work and don't. The world of generative AI, much harder. And we have some of our Mayo cardiologists in the audience today, and they have had a fair amount of experience playing with generative AI. Here's a quick example to share, which they've written up in the literature. You know, I have a patient in front of me who just had a fresh sternotomy. Now they've got an arrhythmia. I need to put a pacemaker wire one centimeter from a fresh sternotomy. Is that safe? I know, let's ask ChatGPT. And ChatGPT, and this is actually real, came back and said: there's actually been a controlled trial with 5,000 patients, perfectly safe. In fact, if you want to go read the trial, it's in the Journal of Clinical Electrophysiology, January edition, page 412. The journal doesn't exist. The study's never been done. And I asked the leadership of OpenAI: wait, we asked a clinical question, and not only did we get a clinical answer and a controlled trial, we got references. And the leader of OpenAI said, well, you know that all of our chatbots are trained to never exfiltrate their training data? If you ever get a reference, it will always be fake. Okay then. So this is the challenge. So as we have our dialogue, and as I turn it over to you, understand there are certain problems we've solved. Technology and policy, we can overcome; psychiatry, data availability, and partnership, all good. But measuring the accuracy, the quality, the consistency of generative AI is still a work in progress. So I look forward to the dialogue. Thank you. Yeah, no, that's great. ChatGPT almost sounds like me. I make up things all the time. You know, I throw these references out and I hope nobody looks them up, but oftentimes they do.
But anyway, getting back to platform thinking for a bit: can you give us some specific projects or use cases where you've demonstrated that it actually helps? And can you break down for the average clinician how that actually transpires? Well, sure. There are so many fields, and you have to look at some of the work we've done in cardiology, radiology, radiation oncology, but let me give you an example. So, radiation oncology. As you start to look at doing the physics of complex head and neck tumors, it's hard. It takes a lot of time. So the radiation oncologists came to me and said: could we take every image we have ever taken of a tumor in history and every auto-contouring profile that we have ever developed in history, and build a model where now the physics and the math of radiation oncology can be done by an AI model and then humans review it? So it's not replacing the human; it's augmenting the human. So we actually did this. And what we found was incredible accuracy, safety, and quality. And here was the fascinating thing. It used to take 16 hours for humans to take complex head and neck tumor images and develop an auto-contouring profile. Now it takes less than an hour. And so here are the kinds of emails I'm getting from radiation oncologists: I had dinner with my family every night for the last five days. And why? Because you had algorithms helping me do my work that were based on the availability of longitudinal multimodal platform data and collaborators developing algorithms. So that's a use case I understand completely. And I think it'll transform the way we practice medicine in radiology. Cardiology is complex, right? I mean, look at heart failure patients: they're multi-covariate diseases without a single underlying causative etiology. And the workflow of care, or the care pathways involved in their care, is fairly complex.
And as much as platform thinking appears very inspirational, how do you relate it to the end user, to the frontline clinician? How do you integrate it into the workflow? Well, sure. And so that was just one example of one algorithm. Mayo has 250 algorithms, as you say, in various kinds of specialties, especially imaging. But let's ask the question. I went to medical school in the 80s. And you know that probably 50% of what I was taught is wrong; I just don't know which 50%. So instead of relying on my anecdotal experience of the patients I saw last, or the literature that I read, what if I could say: let's take the last million patients like the one in front of me now. What was their journey? What tests did they have? What diagnoses were considered? What treatments worked and didn't? So again, when you start to look at 100 million plus patient journeys, you can start asking what events in their lives are likely to happen next, where that could be a lab test, a medication, a surgery, et cetera. And you give that clinician that notion of what the journey ahead might be. Again, it's not replacing the human; it's augmenting the human. It's opening their eyes based on the evidence of patients like the one in front of you, a kind of digital twin. Got it, got it. So much of the data that you're talking about is multimodal data, right? Several sources, whether it's videos, images, text, audio. And right at the start, when you were introducing platform thinking, you also talked about privacy. And having so many forms of digital fingerprints can erode our privacy in some way. And you've talked about this many times: our power to compute can re-identify de-identified data. And what we can do now is not what we will be able to do a few years from now.
And are we setting ourselves up for some huge problem in the future, where all our data can get re-identified and we may be put in a difficult situation? Thoughts? What a brilliant question. Now I know why you run the place. So what is de-identified data? The answer is: it means that there's a low probability of being able to re-identify a human today. And so again, as I said, I think for structured data we're pretty good, right? If there's a patient that has hypertension and hyperlipidemia, and that's all it says, we're probably not going to be able to re-identify that no matter how much technology we have. And the note: again, there are ways you could remove enough from the note that it's going to be pretty nonspecific. But Peter Noseworthy and Paul Friedman and I were having this discussion recently. You know, the 12-lead, that's de-identified, right? You just remove the metadata, the medical record number, anything else, and it's just a bunch of vectors; that's all it is. Well, what is a 12-lead? I mean, all of you know, it's a vector representation of a three-dimensional solid called your body. And actually, shouldn't we be able to reconstruct from those vectors what body it came from? And so Peter asserted: you know how sometimes you get into your car by using a fob? Well, why not just touch the door, take a one-lead off your touching the door, and probably it's going to be identifiable to you? Right, right. So to your point: what we consider de-identified today. You ask the Office for Civil Rights and CMS, is the genome de-identified? And their answer is: well, hmm, HIPAA doesn't say anything about the genome, must be okay. Right, right. We're going to have to ask, for every data type, about the risk of re-identification. And so, CTs of the head. Well, TeraRecon can simply do a 3D reconstruction of the thin slices and produce a face. You'd better start blurring the skin on CTs before they're de-identified.
With EKGs, maybe we can only use five seconds of one lead. With genomes, maybe we can only use subsets of SNPs. Right, and that's the kind of thinking we have to have. But there also have to be policies, and this is what we've done: we don't allow external data sets to be brought into the environment and linked. Because there are famous cases. In Massachusetts in the late 90s, one of my co-postdocs linked the Massachusetts voter registration rolls to the de-identified Massachusetts state employee data set and found the governor's medical record, right? Did not spread it, but told the governor: you know, this is not so de-identified. So if you don't allow attempts to re-identify, you don't allow data linkage, you have specific monitoring, auditing, and penalties, and you're constantly evolving what de-identification means, that's about the best you can do. So that's about re-identifying patients. But there's a whole concept that you talk about quite often, and that I've written about also: the algorithmically underprivileged or underserved. Data sets largely belong to large academic centers that have the ability to collect, analyze, and curate data. And despite all the collaborations that you're developing, will your algorithm work in rural India, for example? Because through this whole platform concept you're trying to promote health equity, social equity; how do you think you're going to bridge that disparity gap? Do all data sets eventually have to be local if they need to be representative of the population that we're trying to treat? And so the answer to that question is yes, and let me explain. Eric Horvitz, the chief scientist of Microsoft, recounted some years ago how Microsoft did some algorithm development in Washington, D.C.
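The Massachusetts voter-roll story above is the classic quasi-identifier linkage attack: neither data set contains the other's sensitive field, yet joining on shared attributes like ZIP code, birth date, and sex re-identifies the record. A toy sketch, with entirely fabricated records and hypothetical field names:

```python
# Toy illustration of a quasi-identifier linkage attack. Neither table holds
# the other's "secret" field, yet joining on (zip, birthdate, sex)
# re-identifies the medical record. All records here are fabricated.
deidentified_claims = [
    {"zip": "02138", "birthdate": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "birthdate": "1968-02-14", "sex": "F", "diagnosis": "asthma"},
]

voter_rolls = [
    {"name": "J. Public", "zip": "02138", "birthdate": "1945-07-31", "sex": "M"},
    {"name": "A. Nother", "zip": "02144", "birthdate": "1990-11-02", "sex": "F"},
]

def link(claims, voters):
    """Join the two datasets on the shared quasi-identifiers."""
    key = lambda r: (r["zip"], r["birthdate"], r["sex"])
    voters_by_key = {key(v): v["name"] for v in voters}
    return [
        {"name": voters_by_key[key(c)], "diagnosis": c["diagnosis"]}
        for c in claims
        if key(c) in voters_by_key
    ]

print(link(deidentified_claims, voter_rolls))
# → [{'name': 'J. Public', 'diagnosis': 'hypertension'}]
```

This is exactly why the policy described above forbids bringing external data sets into the environment and linking them.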
They went to MedStar, a great place, developed some great algorithms, and then they said: hey, now that we've got these Washington, D.C.-based algorithms, let's go six blocks away to a Medicaid clinic and see how the algorithms work. And they failed totally. And it was because the nature of the socioeconomic stressors and the population in the Medicaid clinic six blocks away was just so different. And so it's really important that we develop these things, test them, validate them, make sure they were done correctly; then they need to be locally tuned. To give you a real-world example of what we're doing: Aga Khan University has some major facilities in Kenya, Nigeria, and Pakistan. So we went to Aga Khan and said, could we curate with you the data of Kenya? And Aga Khan said: you know, we actually have some pretty good data sets. They need to be cleaned, and we just don't have the funding to clean and curate and normalize them. And so Mayo was able to raise philanthropy to curate the data of Kenya. So now our algorithms are shipped to Kenya, where they're locally tuned and validated for Kenya. Same sort of thing with Northern India. Got it. So just sticking with the Kenya part or the Northern India part: we recognize that there is data in our EMR that we understand. That's the iceberg analogy, right? What's above the surface we know: demographics, medications, covariates, we understand. But there is this whole set of determinants of health under the surface, whether social, cultural, environmental, or the omics: genomics, proteomics. How do you see us bridging that gap through the platform thinking analogy, or just through AI in the future? Because they say, and I may be wrong, that almost 60% of your health is determined by your social, cultural, and environmental situation, not so much by the single covariates that you have. What are your thoughts on this?
Sure, so here's what we've done to date. Go look in your electronic health record at all the social determinants of health data that has been input by your registration clerks. Now, maybe your organization is different from every other organization I have ever seen. The data quality is miserable. It's 3 AM on a Sunday, you have abdominal pain, and your registration clerk is asking: so do you think of yourself as a non-Hispanic Latino? I'm in pain. Click white, whatever. It's horrible. And so what we said is: can we take the latitude and longitude of where you live, the address, and then, if we made a map of the entire United States of every Superfund site, air, water, and land pollution, every crime statistic, every grocery store, educational opportunities, 40 different social determinants of health in the 100 square feet around your latitude and longitude, could we infer something about your social determinants of health? And that's what we have done for every patient. So now every patient has not only the self-reported data but 40 data elements inferred from latitude and longitude, and those are used in all the algorithms as an adjustment for social determinants of health. Got it. And this is already widely deployed, or is it in process? Already fully deployed. Amazing. So you talk about collaborations, and institutions have big egos. Organizations have big egos. Tech organizations and academic organizations have big egos. How do you cross those barriers? What is the carrot, and what's the stick, in navigating that space? Sure. I think all of you are familiar with the work of John Kotter, as we were talking about at breakfast: you need an urgency to change. A vision and an urgency. And what are the kinds of urgencies you might see? Well, a lot of organizations have margin pressure right now. It's like: I'm looking at the long-term sustainability of my organization. My margins are very low.
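The latitude-and-longitude inference described earlier in this exchange, attaching neighborhood-level social-determinants features to each patient based on where they live, could be sketched as a nearest-cell lookup against a precomputed map. The grid coordinates, feature names, and values below are entirely hypothetical; a real system would use geocoded addresses and authoritative data sources for its ~40 features.

```python
import math

# Hypothetical precomputed grid: (lat, lon) cell centers mapped to a few of
# the ~40 neighborhood-level social-determinants features described above.
SDOH_GRID = {
    (44.02, -92.47): {"air_quality_index": 32, "grocery_stores_per_km2": 1.8, "crime_rate": 0.4},
    (33.75, -84.39): {"air_quality_index": 55, "grocery_stores_per_km2": 0.6, "crime_rate": 1.2},
}

def infer_sdoh(lat: float, lon: float) -> dict:
    """Attach the SDOH features of the nearest grid cell to a patient."""
    nearest = min(SDOH_GRID, key=lambda c: math.hypot(c[0] - lat, c[1] - lon))
    return SDOH_GRID[nearest]

# A patient geocoded near the first cell picks up that cell's features,
# which downstream models can then use as adjustment covariates.
features = infer_sdoh(44.0, -92.5)
print(features["air_quality_index"])  # → 32
```

The point of the design is that the inferred features replace unreliable self-reported fields as model inputs, rather than being shown back to the patient.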
Business as usual is not going to sustain us; we'd better change. Or: my staff is burning out, far too much keyboard and pajama time; what do I need to change to reduce that? Or maybe I'm worried about public embarrassment from quality scores or sentinel events. Well, that's anxiety that I want to resolve. So in every organization there'll be an urgency to change. And you need to align the values, the currency, of every individual in the organization around that urgency. And then you can change. So talking about change: everybody perceives that differently. And it's easy, well, it's not easy, and I don't want to put words in your mouth here, to understand what the ideal future state looks like, ideally. But it's the transition which is the challenge, because that can take several years, oftentimes decades, of uncertainty, unpredictability, and variable engagement from all the stakeholders. So how have you done it at Mayo? And how do you motivate the average disinterested clinician, who just wants to finish their work and go home, to become a part of this whole change management? All right, a really good question. So Mayo's approach has been to start small, think big, and move fast, because we don't know if something is going to work or not. As we were talking about at breakfast, we had this theory in July of 2020 that advanced care in the home, high-acuity hospital care, could be done in your living room safely, with good outcomes, potentially reductions in readmission rates, lower cost. But we just had to try it. And so you find a motivated physician who was willing to give it a try with one patient and figure out some of the challenges of supply chain and staffing and the logistics of remote patient management. So we did it with one patient, then 10, then 100, then 1,000. And now we've done 40,000 patients and seen the outcomes, the quality, and the safety are the same.
Patient satisfaction, a net promoter score of 94; you never see that in health care. And readmission rates, 25% of what we saw in the past. Reimbursement covers the cost. And the nosocomial infections and falls have disappeared. So suddenly the organization said: well, we tried it, we built upon our success, it worked. And now the entire organization is saying: oh, we've got a capacity problem, wildly too much demand, not enough beds. Do we want to build more beds? No, more advanced care in the home. And it's that kind of thing. Like any dissemination of innovation, it takes an exemplar of success, and then success begets success. Got it, got it. So when HRX was founded, and Dr. Al-Khatib will agree with me, our vision of what the future would look like was a combination of virtual care with sensor-based strategies, powered by AI algorithms, with sustainable workflows that translate into better clinical outcomes. A lot of parts to the equation out there. But I want to thread the needle on one aspect of care that has been intrinsically important to us as cardiologists and electrophysiologists: trying to change the mode of transactional care to something that's more continuous. We do some of that through our remote monitoring; I know Peter Noseworthy does that all the time. And what we try to do through that is something called exception-based care, where you see the patient only when they need to be seen. Take devices: patients used to come in on a monthly basis, then every three months, then every six months. Now it's yearly or every two years, depending on the data that comes from their device. How do you envisage medicine, whether through platform thinking or not, just medicine from your balcony view, transforming to allow us to do that? Will we ever have the compute power and the live data sets that will allow us to make that transformation? Right. And I think it's imperative.
So at the World Economic Forum this year, you would have thought the discussion in Davos would have been about global conflict or climate change. It actually wasn't. It was about the demographic shifts we're going to see in 2060 to 2080, when the number of deaths will exceed the number of births. We're an aging society around the world, and there are no caregivers for those who need the care. So what's the answer? Exactly as you say: AI-augmented care, where there's continuous monitoring and oversight, and the patient is then seen, at the highest use of your license, at the right time, in the right place, at the right cost. And that's going to be forced by these demographic shifts. And Peter knows, because he supervised my case, and I have no privacy; it's all fine. So I have an SVT. My heart rate goes from 50 to 170: irritating, not life-threatening. So Peter said: just wear these 5G monitors, and we'll actually be able to continuously record, look at what your problem is, look at potential therapies. And the end result, after doing this for two weeks and using algorithms on the data: they gave me $0.20 of diltiazem every day, and I'm cured. And now maybe I'll get ablated in the future, who knows, but for the moment diltiazem does the job. So wait: this was home-based remote monitoring with algorithms that resulted in a $0.20 drug being given to cure me. That's a change in paradigm. We're happy to ablate you at Mass General. No, I'm just kidding. So 80% of hospitals are operating on negative margins, and we're talking about this utopia. It's not utopia, but I'm going to look at it from that perspective: this digital transformation requires significant investments. How do you convince the administration that we need to be doing this to save money in the future?
Or what are the ways to convince people who are so wedded to looking at their fiscal future from quarter to quarter that we need to transform the way we look at the future? Thoughts? Right, I mean, what a wonderful question. Now, I know value-based purchasing is not going to take hold in every part of the United States. I sometimes say that we're five countries: the East Coast, the West Coast, the Midwest, the South, and Texas, its own country. And each of those has different reimbursement mixes. But you start to look at the value-based purchasing or risk-based contracts that we have on the East Coast and West Coast, and people are willing to look at models that are different, because it's the only way they can survive risk contracting. So part of it will be reimbursement schemes and policy. Got it. But another part of it is, as I say, workforce. It's just hard to hire and retain specialists. And you need to start looking at different models if you're going to be able to deliver the services that people are demanding. And it's going to take some creativity. And I think the CEOs recognize that if you're losing money on every patient, you can't make it up in volume. So something has to change. Now, on that workforce thing, I'd love your thoughts on this. Do you envisage our workforce transforming? Our cardiovascular workforce: we have general cardiologists, we have interventionists, we have advanced care practitioners. Do you think the dynamics of how we populate our workforce today will change five years from now, when we have this platform thinking becoming a part of our lives and we have digital and AI-based strategies transforming care? Is there any particular direction you can see that going in? And so, probably a controversial topic to discuss, but let's do it, because this is a fun group. Cone of silence out here. So one of the things that Peter wanted to do to study my particular cardiac condition was get an echo.
Now, I don't know in your institutions how many echo techs you have and what the availability of echo is. Maybe it's fine elsewhere, but in a lot of places in the United States, there just aren't enough echo techs to go around. So what you're starting to see is some innovative companies saying: well, I'm going to take a person who's, say, a paramedic, not an echo-trained specialist, but I'm going to give them a device that they place on the chest, and through AI, saying move left, move right, go up, go down, I'll do correction of the image and gather a reasonably effective echo. And could you imagine: yes, you have to stratify what patient needs what service, but can we move some of the care to other kinds of caregivers? And again, this is not about reducing income for cardiologists or reducing the need for training echo techs. It's just dealing with the reality that we will have supply-demand mismatch, and therefore starting to look at different kinds of service delivery options. Mayo has taken paramedics and re-skilled them to do a lot of this care in the home. And so it's AI-augmented new workflows that enable us to do this nationwide, where we don't have to hire every person to do it; we just have to do training programs and certification. And so maybe we deal with the problem by looking at different skill sets, different classes of people, for different patients. So one thing I didn't tell you right at the start is that John has written 15 books and several hundred papers and still practices as a clinician. And that's what I wanted to get to: the clinical part of it. Now, you're an emergency medicine doc. You still practice medicine. How do you think that active practice that you're engaged in arms or empowers your innovation? And would you do anything differently than what you're doing right now? So I think it's really important to be actively involved in patient care so you understand the problems. You understand the stressors.
You understand: oh, it takes 10 minutes to see the patient and 20 minutes to document the encounter. And you feel that. And you look at your productivity and say: if there's a supply-demand mismatch for my skills and I'm spending 20 minutes per patient on a keyboard, maybe that's not right. So you can start to identify the problems and potential solutions. I'm sure all of you are looking into ambient listening, where the patient and the doctor have a conversation and the entire note is written magically for you, and you simply read, sign, and done. Mayo has deployed that kind of technology. And you would only know to do that if you were actively suffering the challenges of administrative overload and burnout by practicing medicine. Perfect, perfect. So I just want to pivot a little bit to the academic institutions, and to something related to how practice is kind of non-sustainable right now. We're all operating on negative margins. When I look at the practice of medicine in a hospital setting, I look at it as a pyramid, where the top portion of the pyramid is subspecialty care. The one below that is midstream care, which is chronic disease. And then there's mainstream care, which is wellness. I don't think we're focused enough on wellness and chronic disease, because we make our margins off specialty care and the procedural end of things. How do you think we're going to flip that argument, or flip the financial equation? How do you see this transforming in the future? And where does platform thinking fit into changing this equation? Again, a really interesting question. Some of it, of course: as I look at societies and I talk to prime ministers or health ministers, they're all saying we can't just have a high-acuity health care system that deals with the sick. We actually have to work on wellness. It's the only way society will be able to deliver the care that's needed and afford it.
But let me, again getting back to some personal comments, pose a puzzle to you. I've been a vegan for 25 years. So I went to my insurer and said, you know, if I don't crash my car for a decade, you give me a safe-driving discount. My LDL is 60, and I've been a vegan for 25 years with a body mass index of 21. Shouldn't I get a safe-eating discount? And my insurer said, we thank you so much for your total lack of medical claims, because all the people who are eating KFC, smoking, and drinking, and have a body mass index of 35, need your funding. And that's a problem, right? There's no incentive to be well. We have to start at the person level, creating incentives for wellness. Otherwise, we'll never get change.

So some health plans are already kind of doing that, right? Giving you a gym membership, watching your step counts, trying to cut your premiums down a little bit. But it's not enough, and I totally get that. I know we're getting close to time, but there are a couple of things I want to touch on very quickly, because there are a lot of young, aspiring individuals out here. Education: I think all of us feel that our critical thinking is at risk now with generative AI. How do you see that evolving in the near future, five and ten years from now? Is it going to change the phenotype of students applying to med school? Or is it going to remain the same, with just their way of navigating the space changing? Thoughts on that?

So I have aspirations, but I also have a sad reality. Let me start with the sad reality. A few months ago, I had a chance to meet with 90 medical school deans. And I said, how many of you are teaching the Krebs cycle? And every dean said, it's the foundation of all pharmacology. I mean, are you suggesting we shouldn't be teaching the Krebs cycle?
I said, well, OK, I understand that you feel every student, when prescribing any medication, needs to understand 150 years of history about how that medication was developed. But if I ask you what time it is, do you know how your Apple Watch works? Do you care? No, you just want to know the time. Our medical school deans haven't figured this out yet: we should stop teaching the Krebs cycle and teach data science. How are you, as a practicing clinician, going to assess the quality, safety, reliability, and consistency of generative AI if you haven't been given the tools to understand the data and the way the thing works? So medical education needs radical change. We don't want to memorize. We want to be knowledge navigators, armed with the tools to build this AI future ourselves.

While we have a few minutes more, I want to get inside your brain. You've been a visionary. You've been 10 to 20 years ahead of everyone at any given point in time, and you're already envisioning what's going to happen in the next 10 years. What makes you tick? What drives that passion to innovate? What are those synapses that differentiate you from the rest of us out here?

Well, again, I don't think I'm different from anybody, but I would say that my life has been about bringing people together, creating coalitions. Whenever there's a huge problem to solve, just don't do it on your own. Anything you can do to facilitate collaboration and partnership and be a catalyst for change, that's the power. As we were chatting at breakfast, we now have 3,100 organizations working together in the Coalition for Health AI, chai.org, a nonprofit, free organization. Feel free to participate, so that all of us can understand what safe, reliable, fair, and equitable AI really is, and measure it. It's that building of a community that is going to get you to where we need to be faster.

Amazing.
So we have a lot of young innovators out here who are all trying to reach at least 10% of where John Halamka has reached. I incidentally sent your CV to my daughter, who's in med school at UCLA, and said, try to be 10% of this guy. I just got an emoji back from her with her eyes rolled. So what advice would you leave our colleagues at different stages of their careers? What does the future look like, and what lessons have you learned? I know you could go on for an hour on this. But what are the lessons you've learned, essential to who you are, that could benefit all of us around the table?

Well, sure. When I started at Mayo, I asked the board of directors, do I have the freedom to fail? I said, you know, look at Steve Jobs, Elon Musk, pick whoever. Probably 60% of what they've started has actually not worked. Do I have permission to fail 60% of the time? And they said yes. Because unless you have permission to fail, you're never going to find that breakthrough. I was recently in a major European country and asked the health minister, how about innovation in your country? And the health minister said, yes, we want innovation, as long as there's no risk. OK, that's going to work so well. Because, of course, if there's no risk, it's implementation, not innovation. So my advice would be: you're going to fail, and you're going to dust yourself off, and you're going to go on to the next thing. It's your failures that will teach you more than your successes. And if you look at my history, I've had lots of failures. My face is on the front page of a magazine from 2002 with the headline, Worst IT Disaster in History. Well, there you go. It taught me a little bit about resilience.

Amazing. Amazing. With that, John, I want to thank you for your time, your knowledge, and your insights. Let's give John Halamka a big hand out there. Thank you so much.

Thank you. Total pleasure. Total pleasure, Mr. Bailon.
Video Summary
The video features a conversation with John Halamka, whose distinguished career has taken him from Stanford and Berkeley to the presidency of the Mayo Clinic Platform. The discussion centers on "platform thinking" and its potential to revolutionize healthcare. Halamka outlines the concept, which involves seamless collaboration and data integration to enhance care delivery. He explains challenges such as data de-identification to ensure privacy and the need for data sets representative of diverse populations. Halamka cites successful applications, such as using AI in radiation oncology to reduce time and improve accuracy, and shares Mayo's focus on developing a global ecosystem based on data collaboration with numerous international partners.

The conversation also tackles the complexity of transforming healthcare through AI and addressing socioeconomic determinants of health. Halamka emphasizes the importance of continuous monitoring and AI-augmented care in meeting demographic challenges, and discusses the cultural shifts needed in education and organizational structures to adopt such innovations. In closing, Halamka offers advice on fostering innovation, underlining the necessity of collaboration and of learning from failures to drive successful change.
Keywords
John Halamka
Mayo Clinic Platform
platform thinking
healthcare innovation
AI in healthcare
data integration
socioeconomic determinants
global ecosystem
HRX is a Heart Rhythm Society (HRS) experience. Registered 501(c)(3). EIN: 04-2694458.