AI in Health Care - Promises and Concerns of Artificial Intelligence and Health

UC Davis Health
One of the most intriguing topics in health care for 2024 is the use of Artificial Intelligence in p...
Video Transcript:
(gentle music) - Hello, and thanks for joining us for this discussion on the topic of health care and artificial intelligence or AI. I'm Pamela Wu, Director of News and Media Relations here at UC Davis Health. Today we are joined by two experts on the topic of AI in healthcare, Dr David Lubarsky, CEO of UC Davis Health and Vice Chancellor of Human Health Sciences, and Dennis Chornenky, former advisor to the White House on AI who joined UC Davis Health this year as our first AI advisor.
Dr Lubarsky and Dennis, welcome. Thanks for being with us. - Thank you.
It's a pleasure. - Yep. - I sort of want this to be a free flowing conversation.
I know both of you have a lot of really interesting thoughts on AI and I wanna start by saying that if you ask the average person what comes to mind when you say AI in healthcare, they're probably thinking of analyzing patient data, helping to make diagnoses, but there is so much more than that. How is UC Davis Health approaching AI's role in patient care and health? - Well, I think the first and most important thing to say, Pamela, is that doctors and nurses are in charge.
Doctors and nurses will always be in charge of not only the decision making, but in being the partner to the patient in the decision making. And, you know, AI is artificial intelligence, but it's not. In healthcare, it's really augmented intelligence.
It's about giving your doctor and your nurse more tools to make better decisions for the patient. - Yeah, there are a lot of areas where AI can make a big difference, of course. So the patient provider relationship, but also on the administrative side, operations business side, how health systems, large academic medical centers think about workforce transformation, creating better recruiting, retention career paths for people in all the different roles that are involved in patient care administration and everything else.
So I think we're looking at all of those things very broadly and looking to advance a holistic AI strategy that helps us really answer the key questions of why we wanna adopt AI in the first place. Whom will it really be benefiting, and in what ways can we do that? How do we ensure safety when we are adopting it?
And then which use cases and applications and in which areas do we really wanna pursue and prioritize? - Patients want their care personalized to them. We hear this over and over, we aim to deliver that.
How big of a role could AI have in personalizing medicine? - Well, I think AI is actually the route to getting truly personalized recommendations. And we are using AI already, just, we don't know it.
When Amazon sells you the reading lamp that goes along with your book purchase, it knows you know what you want, right? And then it recommends a movie that might be along the same lines, right? It's running algorithms in the background all the time.
So number one, those personalized recommendations foundationally are from AI. There's no reason we can't apply, and we are trying to, that same thinking, if you will, to make all the past decisions and all the past diseases and all the past labs that have ever shown up on a patient's chart help inform what the next step should be for that patient in their journey towards wellness. And so I think that when you take a step back and you realize that self-service is the future, right?
I don't know the last time you called a travel agent, my memory doesn't go back that far, right? Everything is now computerized and organized for us and recommended for us. So it's the same thing.
So we're used to self-service, especially the younger age group. And there's a study that came out recently that said 44% of young adults, 18 to 34, believe that by using the internet and ChatGPT, they can know as much as their doctor about a disease process. - Okay.
- A little scary, right? - Yeah. - I'm just telling you that that's- - Not really true.
- No, it's not true. But the point is, we are evolving to where people expect to quickly master a topic and become a true partner in their care. And I think that's where this is going.
Self-identification of a problem, self-diagnosis, self-triage, and self-treatment, if guided correctly by health professionals, could truly extend our ability to serve what is an ever-burgeoning need for answers to questions about personal healthcare. - So that's what I was going to ask, right? Like, what does self-service healthcare look like?
But also that sort of becomes our job, if you will, to sort of thread that needle to ensure that we are providing the self-service opportunities that patients want, but also ensuring that the care they receive is sound. - Right, and so that means that you can't just, it's just like anything else, right? You just do a search on the internet today or in ChatGPT, and you can get a bunch of stuff that isn't right.
So the databases, and the large language models that generate content for patients, have to be vetted or constructed in such a way that erroneous and errant information won't show up. It has to be carefully tested. And that's why that's last on the list.
What's first on the list? I have my iPhone here on purpose. It's a prop, right?
Why is it a prop? There's a one-out-of-a-million chance it will open up for a face other than mine. Now, facial recognition is great.
I dunno about you, but it's incredibly reduced my need for passwords and everything else. It is computer vision, and that's part of AI.
It can read chest x-rays as well as it can read the lines on my face. We need to be employing that. It's very, very accurate.
It also can be used for evil. The Chinese government runs a tremendous amount of facial recognition software all around, looking for protestors and whatever. That's not okay.
That doesn't mean we shouldn't use facial recognition on our iPhones, and it is the control of, the direction of, and the positive social good from mastering technology that will drive AI to the forefront. - Dennis, what are your thoughts in terms of personalizing medicine, in terms of self-service, especially since you've worked in the regulatory space and in government? What goes through your mind when you think about people sort of helping themselves to diagnoses, if you will, talking to ChatGPT, right, about their own care, and what concerns might regulators have about that?
- Yeah, I think certainly there's a reason why, you know, we have medical schools and licensing. - Yes please. - Residencies and all of these things, so I think it's very important that we build off of that infrastructure, that value infrastructure and that responsibility and those guardrails that we do have in place.
At the same time, at least personally, I feel like we haven't always done a great job as a society of educating consumers and patients about how to really achieve wellbeing and wellness in their lives. There is a little bit of a mentality that if the tiniest thing is wrong with you, you go to your doctor and your doctor's gonna fix it. That it's, you know, your wellness is your doctor's responsibility in some ways.
And of course, it's primarily our responsibility starting, you know, as patients, as consumers. And so to the extent that, you know, AI, especially generative AI technologies, can help consumers, can help direct them to live healthier lives, they're gonna need less care.
And when they do need care, they will have better guidance, I think, on the kind of care that they might need, how to connect with the right professionals and how to stay on course, you know, with the right recommendations and why it's important to listen to medical professionals. - When it comes to AI and healthcare and its implications, what else are regulators keeping a close eye on? - Yeah, the regulatory environment's very interesting.
That conversation has rapidly accelerated, especially in the last few months. You know, there've been a lot of discussions and things over the last few years, but over the last few months we've seen some really interesting things happening. Of course, we had the AI executive order coming out of the White House towards the end of October that builds on some previous executive actions, but really takes it further now, looking at more specific requirements for the private sector.
Not just directing government to ensure AI safety in government AI systems and government use, but how our markets in the private sector can help ensure consumer safety and patient safety with the use of AI technology. So things like watermarking AI-generated content, for example, or other forms of disclosure, so that folks know that they're speaking to an AI chat bot rather than, you know, a chat bot pretending to be a human to try to create a more human experience or something like that. I think it's very important that we always help make people aware of what exactly they're interacting with and in what ways.
And there are a lot of implications from these regulations that are coming out, including the AI Act in the EU that's still kind of being discussed and advanced, that health systems and academic medical centers are really gonna have to, you know, get more thoughtful about in their adoption of AI and how we think about governing AI. Another thing that's coming out in the regulatory environment, at least for the federal government, is that federal agencies are gonna be required to have AI governance boards to ensure safety, efficacy, and ethics of AI systems, and also the requirement to have chief AI officers or advisors, somebody leading that function. You know, I think currently in academic medical centers, health systems, you know, kind of large enterprises broadly, we have technology groups, we have IT departments, and there's typically some people with some AI expertise within there.
And there are some budgets for AI applications or vendors kind of within larger IT or software budgets. But we're really getting to a point where we have to start looking at AI, you know, more specifically and creating more specific mechanisms and groups with that expertise to help guide prioritization, adoption, monitoring of those kinds of technologies for different organizations. And so I think that regulation is trying to go in that direction, but it's very important, you know, policy and lawmakers I think are doing their best considering they do have kind of a gap in understanding these technologies, but they're listening to a lot of people in the private sector, and they're doing their best to try to strike a balance between ensuring safety and allowing innovation.
- A common thread that I'm hearing in your comments is that it's about shared responsibility, right? So much shared responsibility and agency as well. And you actually started off this conversation, Dr Lubarsky, by saying the humans are still in charge.
- Yep. - Your doctor, your clinical staff is still in charge. And so that was sort of, you know, what I was thinking in terms of like, who is ultimately responsible when AI is used to support decision making and patient care.
You've made it clear it's the people. - It's the people. - But like, what is the relationship between artificial intelligence and human intelligence in terms of how they reinforce one another?
- You know, so that's a great question. A lot of people think that AI exists. It's a magic thing, right?
It's not a magic thing. It's a capability of a computer. - It's a tool.
- It's a tool. We used to, right, not be able to copy and paste from Microsoft Word to PowerPoint, right? I mean, it's about integrating data and information. Someone still has to make up the PowerPoint presentation, but it's easier now, right?
It's the same thing. So we're working with a company that does remote patient monitoring, and right now it has eight different vital signs that it collects every minute of the day. That's 1,440 minutes, eight vital signs each minute.
Okay, that's 11,500 or so data points per patient per day. Right, and by applying AI, which looks at patterns in these vital signs, you can very, very early on detect who might be deteriorating, allowing the doctor and the nurse to keep a closer eye on that patient, to intervene earlier, to be prepared for a deterioration. It's not telling the doctor what to do. And then they're gonna eventually expand it to 16 variables.
Now there'll be roughly 23,000 data points per day per patient. A human being can't process that. And they can't say, oh, you know, this variable moved here.
And then in relation to this one, it moved here. It's just too complicated for the human brain. But AI is built to analyze those patterns.
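To make that concrete, here is a minimal sketch of that kind of pattern analysis: flagging possible deterioration in a day of per-minute vital signs. The rolling z-score rule, the threshold, and the simulated data are all illustrative assumptions, not the monitoring company's algorithm or a clinical tool.

```python
import numpy as np

def flag_deterioration(vitals: np.ndarray, window: int = 60, z_thresh: float = 4.0):
    """vitals: (n_minutes, n_signals) array, e.g. 1440 x 8 for one day.
    Returns minute indices where any signal drifts far from its recent baseline."""
    flagged = []
    for t in range(window, vitals.shape[0]):
        baseline = vitals[t - window:t]               # the previous hour of readings
        mu = baseline.mean(axis=0)
        sigma = baseline.std(axis=0) + 1e-9           # avoid division by zero
        z = np.abs((vitals[t] - mu) / sigma)          # how unusual is this minute?
        if (z > z_thresh).any():
            flagged.append(t)
    return flagged

# One simulated day: 1,440 minutes x 8 vital signs = 11,520 data points,
# matching the arithmetic in the conversation.
rng = np.random.default_rng(0)
day = rng.normal(size=(1440, 8))
day[1000:, 2] += 5.0                                  # inject a sudden shift in one signal
print(flag_deterioration(day)[:3])                    # earliest minutes worth a closer look
```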
So number one is pattern identification. Extremely well developed in AI. The decision making that stems from that pattern identification, that we are not yet ready, right, to cede at all, because there's bad data, there's incorrect information.
AI doesn't generate any new thought. It just looks at all the stuff that's been done before, including vital signs that have been taken before to identify things. So we have to understand what AI is really doing for us.
And I'll say another thing. Two thirds of patients would like the doctor and their medical record to know all the information collected on this prop, my iWatch, okay, or your Fitbit or whatever. There are too many data points.
- [Pamela] Yeah. - You can't, how well I slept, how many minutes did I toss and did I turn? - It's like everything about you all the time.
- Right. - Yeah. - But it could be incredibly valuable if an AI engine was running behind it and said I've looked at your sleep pattern and you're not sleeping through the night anymore.
What are the causes of that? Are you drinking alcohol? Are you anxious?
Have you changed your pillows? Are you having allergy attacks in the middle of the night? It prompts your doctor to ask the right question.
They can't possibly, they or the nurse, have time to parse through all that data. - [Pamela] Right. - AI will make your care more personalized.
And it doesn't have to mean it's making the decisions either for you or for your doctor. It just is packaging ideas and information in a way that prompts that personalized attention. - So you talked about pattern identification.
It's excellent at that. Dennis mentioned earlier another important type of AI, generative AI, and this is the AI that generates new data, text, or other kinds of media, stuff like ChatGPT. - Yes.
- What is the...
- I have that on here too. - (laughs) Okay. Of course you do.
What is the role of generative AI in healthcare and where do you see that headed? - Well, more than 40%, oftentimes more than 50%, of nurses' time is spent writing notes and documenting what they've done. None of that is necessary.
For physicians, their biggest complaint is filling in stuff about patient visits into the electronic medical record. We have added very low value interactive time with keyboards to the most expensive labor pool in the United States, right? We've turned our brightest and best and most compassionate healthcare providers into typists.
- And so what generative AI will do is free them. That doesn't mean we hand the notes over to AI entirely. I mean, it will draft the notes.
We'll still be responsible for what's in the note, right? - Right 'cause it's a tool. - Because it's a tool.
But that tool can erase the burden. It can eliminate, right, the contribution of overzealous documentation leading to burnout. It's not a fun thing to do.
It's repetitive. It is thankless, and to be honest with you, it so often is populated with irrelevant things that it's a true waste of your time. So I can't wait. And that, by the way, is the number one initiative that we are pursuing here at UC Davis Health, because we care about our providers.
Because when we care about them, they're able to care for their patients. - That's right. - What a time saver, right.
- Huge. - And just like the mental energy too. - Yeah.
- Yeah. - And if you imagine, I dunno when the last time was that you, or anybody out there watching this, went to a doctor's office, but there's always a keyboard and a screen, either between you and the doc. - Yes, yes, yes.
- And the nurse or off to the side, so they're constantly talking to you, and then they turn around and- - Typing. - Exactly. We're gonna eliminate that.
We're gonna eliminate the electronic barrier that we have placed between patients and providers. And that means- - And generative AI is gonna do it. - And that means better care.
- Yes. - Yeah, the really interesting thing with generative AI is that, you know, it's just one of many different AI/ML methodologies, but it's really having its day right now. It's had a huge leap in terms of its technological capability, and the public, you know, our society, has just been enamored with what this can do. And one of the reasons is that it's very versatile.
It's very powerful. It can write code, it can, you know, help your child do their homework. It can help a physician, you know, diagnose a disease or come up with a treatment plan.
The same foundation models can do all these different things, right? So it's a tremendously exciting time. And I think generative AI will have more transformative impact on healthcare in, let's say, the short to medium term than any other type of AI or machine learning methodology.
I think others will probably have their day in the next 10, 20, 30 years, very difficult to predict which ones exactly those will be. But right now is really the time of generative AI. And to that end, thanks to Dr Lubarsky's vision and our CIO and Chief Digital Officer, Dr Ashish Atreja, we just had a very successful launch of a new collaborative bringing health systems together.
We've now got, I think, around 40 leading health systems, payers, and academic medical centers, covering the entire country, that have come together to help advance the responsible adoption of generative AI technologies. So we're really focused on execution: identification, discovery, and validation of use cases across our member organizations, to help build that capacity mutually together, because these technologies are just moving too quickly for any one organization to really be able to figure it out on its own. You know, there's so many research papers coming out on generative AI right now.
You know, it was near zero, you know, per month in certain publication databases, you know, even a year and a half ago. But now it's getting to hundreds per month and, you know, very quickly climbing, it seems like it's doubling every few months. And so the joke is that we're gonna need generative AI to help us understand research on generative AI.
And it's actually maybe not so much a joke. - That's so meta. - It's just true.
- So it's- - Well you know, again, I always like to say where am I seeing generative AI being used and is it useful, right? And so now if you go to Amazon, sorry, I spend a lot of time on Amazon, right? And you wanna parse through 14,000 reviews, how do you do that?
Amazon doesn't even make you do that. Now at the top of the review section- - That's right. There's a blurb.
- There's a blurb. - AI generated. - AI generated.
Now that doesn't always contain all the information you're seeking, but it's a pretty good summary, and it's very pertinent. And it's the same thing we've done. Like, I'm a little worried about the patient's hemoglobin, and you can ask the record, please provide all the hemoglobins that have ever been drawn on this patient for the last 10 years.
Date, time, and you can have a table generated for you, right? Where it would previously take a long time for a doctor to parse through all the individual labs drawn, right? The capability of, again, personalizing the care by extracting with a simple query all the pertinent information that you need.
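As a rough illustration of that kind of simple query, here is a minimal sketch that pulls every hemoglobin drawn in the last 10 years into a dated table. The pandas DataFrame stands in for an EHR lab feed; the column names and values are assumptions for the example.

```python
import pandas as pd

# Toy stand-in for an EHR lab feed; column names and values are assumptions.
labs = pd.DataFrame({
    "test":  ["hemoglobin", "sodium", "hemoglobin", "hemoglobin"],
    "value": [13.2, 139.0, 11.8, 10.9],               # g/dL for hemoglobin
    "drawn": pd.to_datetime(["2012-05-01", "2020-07-15", "2021-07-15", "2024-11-02"]),
})

cutoff = pd.Timestamp.now() - pd.DateOffset(years=10)
hgb = (labs[(labs["test"] == "hemoglobin") & (labs["drawn"] >= cutoff)]
       .sort_values("drawn")[["drawn", "value"]])     # date, time, and value
print(hgb.to_string(index=False))                     # a table generated for you
```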
And then you could ask ChatGPT, and we'll talk about this, okay: what are all the causes of a low blood count, a low hemoglobin, in a patient? And, you know, you've thought of 39 of the 40 and go, you know what, I hadn't thought about that 40th one. It's not saying what you should do. It's doing a complete information search for you so that you don't ever forget anything.
You know, when the iPhone came out, people said, well, who needs all this stuff? Well, we do, right? And I have to say, I had a sort of photographic memory for medical stuff when I was young, and that was very special, but it's not special anymore because it's not needed anymore. In a second, you can get the information you want on a good Google search, let alone ChatGPT.
ChatGPT can give you some false information. - Right, the evidence isn't always sound. - But the next generation of ChatGPT will provide references, if you want them, for each of its recommendations or statements.
Once that happens, we can now get the validation and verification that it was a correct interpretation. You have to do some work. Eventually we'll get to the point where that validation verification will be monitored by another AI program.
- Right. - Right. Just to make sure, right, that it's not just making this stuff up.
- Right, that's that shared responsibility again. - Right, so again the future of healthcare, and I'm gonna say this again, is not to do low value repetitive work that is about information searches across large databases. It is about understanding the implications of a disease, the treatment pathways, there's always more than one, the preference trade-offs, right?
Some more aggressive treatments lead to a poorer life, but a longer life, right? Those discussions will never be run by AI. Doctors will become the partners for personalized healthcare decision making, 'cause they are freed from spending all their time trying to find out some arcane information.
- So I've heard from both of you what you're excited about, what the best potential benefits are of AI in healthcare for patients, for providers, for employees and employers too. But let's talk about the cons. What do you think warrants skepticism as we see more AI in healthcare?
What issues and challenges are you keeping an eye on? - I think one of the biggest dangers with AI, especially with ChatGPT, is that it's too easy to use. I mean, it really is.
- It's so easy. - Yes. - It's stupid easy.
- And you might be tempted as a care provider to say, I'm not sure what to do. I'll just look it up on ChatGPT. And because it's so easy to use and you're always so busy, you might actually, accidentally or as a shortcut, say, yeah, that sounds right.
And so we made it really clear, actually, that our healthcare providers cannot, should not, and will not ever cede judgment or courses of treatment to what's suggested on the internet, and specifically by ChatGPT. - [Pamela] Is this formalized somewhere? - It is.
We actually added an AI paragraph to our medical staff bylaws about, you know, what constitutes the responsibility of the physician to the patient. And we made it really clear that they were not to ever rely on that in terms of driving their decision making. - Well, and sometimes the training data in these models will get actually mixed up when it's producing answers.
And I mean, I've had instances where I've asked about whether or not there are current clinical trials happening, or recent clinical trials in a very specific area that I had interest in, and ChatGPT would come back and say, oh yes, there's four trials that are ongoing. And they were completely made up. It looked like it drew from 12 different trials and conflated them into somehow being in the category of what I asked about.
And I was surprised at first 'cause I'm not aware of these trials, you know, going on in these areas. And when I looked it up, sure enough- - Because they don't exist. - None of them existed.
None of them existed. So there is this potential for, you know, what are called hallucinations, these kind of fake responses, and so this is one of the reasons it's so important for human beings to double-check everything. We're just not at the point where, you know, the large language models' failure rate is one in a million or one in a billion.
It can be a lot more frequent. And it's also a bit of a social choice or choice for us in terms of technology and how we want to use it. Because in some ways, hallucinations actually can be a measure of creativity in a model.
So if you completely want to eliminate the potential for hallucinations, and maybe we want that in certain environments, right? You're really reducing that model's ability only to very precisely and almost verbatim kind of spit back things that it's gotten from its training data. But if we want to give it a little bit more flexibility for interpretation or for suggestions, right, or for creative solutions to certain problems, we sort of have to set the parameters a little bit differently.
And this is where we may have a higher likelihood of slightly unusual or crazy responses or hallucinations. But I think it's the same way with human beings, actually. You know, when we want creativity- - Yeah, thinking outside of the box.
- For human beings, yeah we want a bunch of different ideas thrown on the table- - Including wild ones. - Yes including the wild ones. Sometimes there may be some kernel, you know, of truth or insight that's, you know, that can come from unexpected places.
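In practice, the "parameters" being described here often come down to settings like sampling temperature. A minimal toy sketch of how temperature trades off between repeating the most likely answer and admitting unlikely, more creative (and more hallucination-prone) choices; the numbers are made up, and this is not any particular vendor's model or API.

```python
import numpy as np

def sample(logits: np.ndarray, temperature: float, rng) -> int:
    """Pick one of the candidate tokens, with temperature rescaling confidence."""
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())             # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([4.0, 2.0, 1.0, 0.5])               # model scores for 4 candidate tokens
rng = np.random.default_rng(1)
for temp in (0.1, 1.0, 2.0):
    picks = [sample(logits, temp, rng) for _ in range(1000)]
    share = np.bincount(picks, minlength=4) / 1000
    print(f"temperature={temp}: token shares {share}")
# Low temperature ~always picks the top token (precise, verbatim); higher
# temperature spreads probability to unlikely tokens (creative, riskier).
```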
And so, you know, that's I think a social conversation and how our interaction with this technology will evolve over time. But I think for, you know, environments like ours in healthcare, especially now in the earlier kind of stages of these technologies, we really need to err on the side of caution. - Right and I think the key here is like, the part that worries us is way down the road.
It's five years, 10 years before we'll have the right level of insight into data to really let AI make treatment suggestions. But all the rest of it's really worked out and we're just not employing it. Vision computing, ambient computing, listening, generative AI, which just says, you've talked for 17 minutes here, let me summarize what you said.
And I can do that in four sentences, 'cause you've really been talking a lot and not saying a lot, right? And so all of that already exists, and summarizing, not always perfectly, what has been written by others, like Amazon on the review sections, right? All that stuff exists, and pattern recognition, that's great. And facial recognition.
We can do all of that and not cede one ounce of responsibility or decision making to computers. We can make doctors more efficient. I'll give you an example: mammograms, right?
You really need a trained breast radiologist to get the best possible result when they're read. Well, when they added AI into the mix with breast-trained radiologists, they were able to actually cut the number of people required to do a day's worth of readings in half. You may say, oh, someone's gonna lose their job.
And I'm like, no, no, actually only half the women in America who should have mammograms get them read. Imagine if, without adding one penny to the labor workforce, we can now get to 100% of women and have their mammograms read by a professional- - Expanding access to care. - Yeah.
- Yeah, doing more on behalf of the patient. - Right, we will never, ever be able to catch up with the demand right now. Because of the aging of the population, the expansion of the possibilities, and hopefully a continuing journey towards wellness for a much longer period of time in life, we need to change how we work. We will never be able to fill the gap by just training more people.
AI allows us to change the work that we're doing. So we're all working at the very top of our capabilities, and all the low-level stuff, like a normal mammogram, can be read by the computer, and all you need is the doctor to say, yeah, there's nothing there, right, as opposed to them doing the full reading. It is gonna make us better at treating people who need to be treated.
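Here is a minimal sketch of that triage workflow: an AI suspicion score routes confidently normal studies to a quick sign-off and sends everything else for a full radiologist read. The score, threshold, and worklist are illustrative assumptions, not a validated clinical model.

```python
def triage(studies: list[dict], normal_threshold: float = 0.05) -> dict:
    """Split a worklist by a hypothetical AI 'suspicion' score (P of abnormality)."""
    quick_confirm, full_read = [], []
    for study in studies:
        if study["suspicion"] < normal_threshold:
            quick_confirm.append(study["id"])         # radiologist just signs off
        else:
            full_read.append(study["id"])             # full human interpretation
    return {"quick_confirm": quick_confirm, "full_read": full_read}

worklist = [{"id": f"mammo-{i}", "suspicion": s}
            for i, s in enumerate([0.01, 0.02, 0.40, 0.03, 0.75, 0.02])]
print(triage(worklist))  # most studies clear quickly; suspicious ones get a full read
```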
- Such an important point too about not reducing the workforce, but rather expanding the access and the care. - Yes. - Expanding possibilities.
- Yes. - Yeah. - Okay.
Let's talk about the equity piece too, because as AI is looking at existing historical data, there are patient populations that historically and now still are not receiving the level of care that they should, that medicine has not served as well as it should. How do we make sure that we're not perpetuating inequities, right, by looking at old patterns to inform new ones? - That is an incredibly important topic.
And I will use a couple concrete examples. We know for a fact that when Black and brown children come to the emergency room, we don't give them as much pain medication as a white child. I mean, you can't find a doctor or nurse who is saying, I'm purposefully giving, you know, someone who looks different less medicine.
But that's what you do when you...
So how does AI help that? - Right, so when you look at the data, there's implicit bias. - Correct, so if you just said to an AI-driven engine, ceding responsibility, how much pain medicine should I give to this child?
If it just looked at all the medical records in the United States, it would say, well, on average this Black or brown child would need three milligrams of morphine and this white child would need four milligrams of morphine, 'cause that's all that exists in the database. - Right, so it's like just as little as before.
- Yes. - Do that again. - So, right, and so we have to be very, very careful that we don't institutionalize the biases.
But here's the thing, the way out of that. It also turns out it's often not the color of someone's skin. It's their familiarity with English.
If you don't speak English as a first language, you're unable to communicate as well your desire for additional treatment or a different expectation. And so that leads to undertreatment and inequity in care. Well, now what can AI do?
It can say, this person I'm listening to doesn't speak English that well; let me do simultaneous automatic translation from their native tongue to your native tongue. So that that expectation of care, and the ability to be nuanced, not just concrete, about do I need more pain medicine or not? That discussion can occur in that patient's own language.
So AI could fix the very problem that, if you depended on it for just a treatment recommendation, would be bad. But it also has the opportunity to literally eliminate the problems, you know, with translation needs. - It's making it work for you.
- Yes. - Yeah. - Yeah, so in order to enable more and more of exactly these kinds of examples that you just gave, what we really need to do is provide machine learning teams and technology companies that want to train and create models with better access to more diverse, more equitable data sets.
So here at UC Davis Health, we serve, I think, one of the most diverse patient populations and communities in the country. And that makes our data sets actually very valuable in that regard, and there are certainly other academic medical centers that also have a lot of very valuable data. But historically, healthcare data has been so siloed and so difficult to access, and even difficult to discover to begin with, even knowing who has what or how you would get access, even internally within your own organization. You know, you may be trying to build a model to better serve a particular patient population, but it's very hard to get access to the data that you need.
So one very interesting thing that I think is going to help with this, that was actually mentioned in the executive order, is that the federal government is really trying to promote the use of what's called privacy-preserving technologies. The executive order specifically says we should get more investment in this, we should try to accelerate the development of these technologies. Because what they allow us to do is machine learning modeling on data that stays encrypted.
So the data never has to actually get exposed or unencrypted. Or, you know, sometimes we try to de-identify data, but there's always the risk of it being re-identified in some ways. We can kind of skip all those risks and still be able to essentially provide better access for folks that want to advance medical science using these more diverse data sets...
Because what's happened historically with encryption, just as a very quick bit of background, you know, we used to have no encryption when it came to data. Then we got encryption at rest, while data's sitting there.
Okay, and that was great. And then we got encryption in transit while we're transferring it place to place. For example, if we're doing a telehealth consultation, all that data is encrypted now, right?
As it should be. And now we've got encryption in modeling, or what people are referring to as confidential computing, the application of these privacy-preserving technologies. And so as healthcare executives and administrators, I think we all have a certain obligation to keep data private and protected, legal obligations, ethical obligations.
And so we very much view ourselves as stewards of this data, and we're always, you know, very concerned about the potential risks of, you know, patient data being exposed somehow. But at the same time, we know that this data can be very valuable in advancing medical science and research and innovation. And so we're stuck with this dilemma of how do we make this data accessible without sacrificing safety and privacy? Privacy-preserving technologies can help us significantly advance in that regard.
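One concrete example of a privacy-preserving technique is additively homomorphic encryption, which allows simple arithmetic directly on ciphertexts. Here is a minimal sketch using the open-source python-paillier (`phe`) library; the risk-score weights and features are illustrative assumptions, and real confidential-computing deployments are considerably more involved.

```python
from phe import paillier                              # pip install phe

public_key, private_key = paillier.generate_paillier_keypair()

# Hospital side: encrypt patient features before sharing them.
features = [72.0, 138.0, 11.8]                        # e.g. pulse, systolic BP, hemoglobin
encrypted = [public_key.encrypt(x) for x in features]

# Modeler side: compute a linear risk score WITHOUT ever decrypting the data.
# Paillier supports adding ciphertexts and scaling them by plaintext constants.
weights = [0.02, 0.01, -0.30]
encrypted_score = sum(w * x for w, x in zip(weights, encrypted))

# Only the key holder (the hospital) can decrypt the result.
print(private_key.decrypt(encrypted_score))           # 0.02*72 + 0.01*138 - 0.30*11.8
```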
One more thing I'll mention that I think will also be helpful: over the last couple of years there's been a special task force out of the White House and a couple of other agencies to provide recommendations on creating a new national AI research resource that is intended to help make AI research and more data sets accessible, including in healthcare, to help advance more equitable AI models and applications. And so that task force put out some recommendations in January of this year that have since actually been approved through legislative action in Congress. So we're getting a national AI research resource that is intended to help actually democratize AI research and model building by bringing together more public and private data sets that are relevant, in an environment where they can be used.
And also providing access to compute resources, which can be very expensive sometimes, especially for smaller institutions or individual researchers. You know, things that are usually more available to larger institutions, but this national AI research resource is really trying to create an environment that's gonna allow more research and innovation, including in medicine more broadly, across all types of researchers and institutions. - Right, I think that's really critically important.
Again, the guidance and thinking the large thoughts, you know, make the small stuff possible to do in an ethical manner. And, you know, one of the things that people don't realize about AI is that, in its application to both populations and real-time care of individual patients, we can be analyzing every single thing we're doing, every dose we're giving. And we talked about differences in pain medicine application. You know, we spend a lot of time hammering away at our care providers and administrators about eliminating all the implicit biases. You know, we're in California.
We're a little more aggressive about that. But there are implicit biases that really govern a lot of attitudes and actions across the United States. Healthcare being no exception.
And so if you had an AI engine running in the background and saying, for every physician, for every type of patient, for every nurse, right, were they delivering the right type of care? And not to harm the provider, but to educate the provider. Like we are seeing this difference, you know, in how you're treating people.
And you know, right now it takes an amazing amount of effort. Like we have a major population health effort to make sure that our underserved communities who see us for primary care and have their blood pressure being controlled, are getting the same outcomes with the same level of control. And we're almost there.
And I mean, we started out with like a 10% difference in the amount of control because we weren't looking at the data. Now we've got all these people on it, right? But if AI were doing it, it would've been telling us, you're seeing a separation. You need to not only give people the same treatments, you need to start doing a different line of questioning around their diet or their family or their stress or whatever else might be driving up their blood pressures, and not just giving 'em the same medicines.
Maybe giving them different advice or different medicine, right? Now, when we stop doing that study, things might slip back to the way they were, 'cause people are not being culturally sensitive or not asking the right questions. If we have AI running in the background, it can never go back without someone pointing it out to the doc or the nurse, saying, you're seeing a divergence in how people are responding to your well-intentioned treatments.
And there's not a single healthcare provider who wouldn't stop, take a look, reassess, and get things back on track. - That's great for issue spotting. - Issue spotting.
That's a great, that's...
Yes. - One last question. I'll ask it of you, Dr Lubarsky, this is sort of our final word.
What is one takeaway, if there's just one takeaway from this conversation that you want our patients to know and one takeaway that you want our employees to know, what would those be? - AI is augmented intelligence. It's for every employee, every nurse, every doctor to use on behalf of their patients for whom they are solely responsible.
And we will never cede control of our care for human beings to computers. - Thank you. Dr Lubarsky, CEO of UC Davis Health.
Dennis Chornenky, our first Chief AI Advisor at UC Davis Health. This has been a discussion on artificial intelligence, or AI, in healthcare. Find more interviews on our UC Davis Health YouTube channel, and more information at our website, health.ucdavis.edu. Thanks for joining us.