Tech Refactored

Artificial Intelligence in Medical Devices

August 12, 2022 Nebraska Governance and Technology Center Season 2 Episode 50

Gus is joined by Charlotte Tschider, an Assistant Professor at Loyola University Chicago School of Law. With a focus on the global health care industry, Charlotte specializes in information privacy, cybersecurity law, and artificial intelligence. In this episode, Gus and Charlotte discuss a range of topics centered around artificial intelligence in medical devices, who regulates these devices, how companies are using artificial intelligence to make better devices, and more.

Follow Charlotte on Twitter @cybersimplesec

Links
Beyond the Black Box by Charlotte Tschider
AI's Legitimate Interest: Towards a Public Benefit Privacy Model by Charlotte Tschider
Enhancing Cybersecurity for the Digital Health Marketplace by Charlotte Tschider
Privacy and Accountability in Black-Box Medicine by Roger Allan Ford, W. Nicholson Price II
Medical AI and Contextual Bias by W. Nicholson Price II
Universal Paperclips

----    ----

Host | Executive Producer - Gus Hurwitz
Producer - James Fleege
Music - Collin McCarthy

Tech Refactored is part of the Menard Governance and Technology Programming Series hosted by the Nebraska Governance and Technology Center

NGTC Twitter - @UNL_NGTC
NGTC Instagram - @UNL_NGTC

This is Tech Refactored, a podcast in which we explore the ever-changing relationship between technology, society, and the law. I'm your host, Gus Hurwitz, the Menard Director of the Nebraska Governance and Technology Center.

On this episode of Tech Refactored, I talk with Charlotte Tschider from Loyola University Chicago. Charlotte is an expert in, well, many things, including cybersecurity and medical devices. Before becoming a lawyer, and then ultimately a law professor, she worked on the business side of both the cybersecurity and medical device industries.

Charlotte joined me to discuss her work on the use of artificial intelligence in medical devices. We're going to have a wide-ranging discussion that touches on everything from what medical devices are (yes, it turns out a toothbrush is as much a medical device as an insulin pump) to who regulates them, to privacy and some thorny medical ethics issues,

to how companies are using artificial intelligence to make better devices, and to the cybersecurity issues on the frontier of this technology. I always learn a lot when I talk to Charlotte, and this discussion was no exception. I hope you enjoy listening in as much as I enjoyed recording it.

Thank you so much for having me today. I'm Charlotte Tschider, an assistant professor of law at Loyola University Chicago, and a great deal of my work operates at the intersection of data policy and healthcare, broadly construed. My previous experience included working in cybersecurity at Target Corporation and a variety of other corporations.

Then I worked in the healthcare compliance and privacy area, focusing on things like data policy for Medtronic, which is one of the largest organizations that sells medical devices in the world. After that, I worked and consulted with artificial intelligence technology companies that are working to enhance healthcare outcomes by using data in new and different ways. And all along this path, I've asked this question:

how do we create safe systems that are good for people and that keep system functionality available? How do we ensure that we're protecting individuals when we're collecting and using data about them, hopefully in some cases for positive outcomes, and in other cases perhaps for outcomes less positive for the individual, such as commercialization and economic growth for those corporations?

And additionally, how do we promote fairness and accountability in the space of artificial intelligence? We often focus on data policy broadly construed, but we don't always take the time to focus more directly on healthcare and medical technologies. So I've been teasing out various problems through this process, with everything from FDA regulation to the common law and what we can do when people actually are injured, and everything in between.

I have to start with a really interesting phrase that you're using there, one that jumps out to my ears: data policy, instead of data privacy. Everyone in this field, in many of these tech-related fields, talks about data privacy. First, I expect that you're using that phrase for a reason.

And second, I expect that it sounds weird to the ears of everyone who studies this stuff. So could you say a bit about your use of that phrase, data policy? Yeah. So let's start with how we traditionally think about data privacy. Data privacy, and the idea of privacy generally, is about the individual person and data that is identifiable about the individual person.

The problem with only focusing on privacy interests, though, what we can do or not do with information that is identifiable about a person, is that it leaves a lot more on the table. So, for example, take availability issues related to medical devices: say that you had a pacemaker implanted in your body. We may not be doing anything bad with personal information.

And yet, if we do not have the appropriate cybersecurity approach to actually protect that device and the different signals and information going to it, you might end up with a person who doesn't have a working pacemaker, which could cause a major injury, or they could potentially die. That's something that is much bigger

than simply a privacy concern. I'd say we see this issue in the fairness area, too. You could have a well-secured device, a device that clearly describes the data policies associated with it regarding personal information from a privacy perspective. And yet you could be using data

that's not representative of the community in which this device is going to be used, causing potentially differential results for different types of people based on, for example, their racial background. These are issues that are not squarely in the privacy space, but they are extremely important. So part of why I use this phrasing "data policy" is to think more broadly about what data can be used and how it can be used, rather than simply saying, oh, this is identifiable personal information, or no, this isn't identifiable personal information.

And the last thing I might mention is that even our understanding of what identifiability means is something that is currently being discussed at a lot of different levels. It used to be very clear: this is personal information and this is not. But the complexity of the systems we use now is creating new inferences, new information that could hypothetically be considered personal information

but nevertheless wouldn't be covered under data privacy and data privacy law. So, as always, a whole lot in there. I guess it would be useful to continue taking steps back and ask: how are medical devices different or unique from other areas of both privacy and cybersecurity?

That's a great question, and I actually get this question a lot: what's so special about medical devices? Well, I think there are probably three key things I can think of off the top of my head. The first is that most of the time they're compulsory. What I mean by compulsory is that a person doesn't really have a choice as to whether or not they can use a device. Usually the alternative kind of device, or the alternative type of treatment that somebody could seek, is not efficient,

it's not comfortable, it's something that requires regular and frequent medical care. And so the alternative choice to having a medical device is not very good. So your ability to say, "Hey, I don't like what this company is doing," uh, whether it's cybersecurity or privacy or related to fairness or discrimination or safety, "I don't like this.

I'm gonna go with the alternative," it's not really a good secondary choice. So most of the time people who are using these devices actually must use them, or don't have a great option for an alternative. The second thing is that we have, or at least the law requires us to some degree to have, trust in our medical professionals.

And unfortunately, medical professionals are not experts in artificial intelligence or medical devices. Now, we all wish that they were; it would be great if they knew everything about these devices. But the reality is that manufacturing companies, and really smart people, scientists at these companies, are developing these devices.

And the information that is available to a doctor is limited. There's only so much that is disclosed to that doctor and that the doctor can possibly know about them. So you have patients who are trusting doctors to make good choices for them, but the doctors themselves don't actually understand how these devices work, how data might be protected, um, and whether or not they could be safe.

And I'd say the third piece is, of course, that in the healthcare space we have a fair amount of consolidation in terms of innovative technologies. So from a choice perspective, not only are we talking about undesirable other options, but there's a lot of consolidation of businesses, which means that a lot of the data and the practices are consolidated too. And whether this is a result of mergers and acquisitions, or simply a result of intellectual property,

of what comes out of innovation, the reality is that we can't really influence those decisions in a way that's very useful. There's a really interesting point in the second of those points that you made there: you make the observation that doctors and healthcare professionals aren't technologists; they don't understand the AI or the security or the device engineering side of things.

Does the flip side of that also apply? How familiar with the healthcare needs of both doctors and patients are the engineers who design these technologies? They probably are also not that familiar with the needs of the individuals. Certainly we've seen that in the user interfaces of some of these devices, which are designed in ways that are not particularly useful for large groups of people, and there are potential issues under the ADA, too,

in terms of the usability of some of these devices for individuals. I'm distinguishing here between usability, meaning something that is easy to use, and accessibility, meaning a required design change that's necessary for somebody to use a device effectively. So certainly there is not a lot of communication back and forth. But I would say, at least having worked in the manufacturing area,

there are doctors who are involved from a consulting perspective. Patients, I would say, are less often involved in that discussion. And this has actually been an evolution that we've seen in the European Union, where there's now been a desire to get patients involved closer to the innovation process than maybe in the past.

So certainly this could be a fruitful area for research in the future. That's a fascinating observation, and, speaking from some family experience, I think a really important omission: the role of the patient, the consumer, in the design of these products. I know we'll come back in a moment to talk about the role of

regulation and regulators here, but a lot of medical devices are designed to be usable and used by vulnerable populations, and I'll include in that children and the elderly, who might not have as much mental capacity or dexterity to operate these devices. So they tend to be designed for that sort of user community.

And that's a very different user community than a middle-aged or younger population of high-activity individuals. So you can have a mismatch between the user interface and the user experience and the medical purpose the device is trying to serve. And one of the biggest problems in all of medicine is patient compliance.

Just trying to get patients to take their drugs at the same time every day: that's the easiest thing in the world, and it's so hard to do. And if you're telling patients to use this device constantly, and it doesn't work in the way that they expect, it's not intuitive, it's frustrating, well, patients just aren't going to use it the right way.

No, I think that's true. And then you start to add layers to this onion. Let's say that you now have an artificially intelligent medical device, which, by the way, are routinely used today; this is not something that is well into the future. Today, we have artificially intelligent medical devices.

If you add that layer to it, and I know we'll talk about regulation here in a second, but if you have a human, say a doctor, who is part of validating whatever recommendation is being made by that system, for example, deliver additional insulin, the expectation is that the individual who is using that product, whether it is a patient or a doctor, is able to second-guess what the computer is telling them and might be able to make a correction.

And the result has been that the FDA tends to be less concerned about these devices. So let's put these pieces together: you have potential usability issues, where it's very hard for a person to even understand how to use the device, and now we're relying on that same individual to second-guess the device and question whether it's making the right recommendation.

It's sort of like building on already pretty shaky ground, and that's very concerning. So you used a phrase there, that there are already devices that have AI in them. What does that mean? What is this artificial intelligence? Is my medical device out there plotting how to turn me into paperclips?

That's an artificial intelligence joke for listeners. If you're not familiar with the game Universal Paperclips, you should go Google it. It's great. It will scare you about how artificial intelligence is going to turn us all into paperclips. But is that what you mean, Charlotte? No, no. What I mean is really more advanced machine learning.

So what we mean by that is you're feeding a large amount of data, some of it structured, some of it unstructured, but curated data, useful data, into a system that then identifies relationships between those data points and creates an algorithm that then makes recommendations or directs the function of a technology.

So these can be relatively simple, let's say just one step above a traditional human-made algorithm, or you could have something that has many, many layers of decision making that are hidden from the individual user and hidden from the device. We often call those neural networks.

So in both of these cases, we're not talking about sentient AI. We're not talking about any AI trying to make us into paperclips. What we're talking about is something that works most efficiently to create, let's say, new recommendations that take into account additional pieces of information that a human might not have thought of.

And so they tend to be very powerful. You might get additional information, for example, if we're talking about diabetes, not just from how a person's body is functioning, but from what activities they're engaged in every day, what community they live in, whether there are other medications they're taking,

and what kind of food they might eat. That kind of information can be combined, different types of inferences can be created from that combination, and those inferences then make a recommendation for insulin delivery that is going to be more optimal than if you only had one piece of information. And I use this just as an example, because it's an easy example.
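To make that concrete, here is a minimal, purely illustrative sketch of the kind of multi-signal recommendation Charlotte describes. The feature names, coefficients, and thresholds are hypothetical and not drawn from any real device or clinical guideline; the point is only that several data points get combined into one suggestion that a human is still expected to second-guess.

```python
# Illustrative only: hypothetical features and coefficients, not a real dosing algorithm.
from dataclasses import dataclass

@dataclass
class PatientSignals:
    glucose_mg_dl: float      # current continuous glucose monitor reading
    carbs_g: float            # carbohydrates about to be eaten
    activity_minutes: int     # recent exercise, which can lower insulin need
    on_interacting_med: bool  # e.g., a medication assumed to change insulin sensitivity

def recommend_bolus_units(s: PatientSignals) -> float:
    """Combine several signals into one insulin suggestion (toy model)."""
    correction = max(s.glucose_mg_dl - 120, 0) / 50   # hypothetical correction factor
    meal = s.carbs_g / 12                              # hypothetical carb ratio
    activity_discount = 0.8 if s.activity_minutes > 30 else 1.0
    sensitivity = 0.9 if s.on_interacting_med else 1.0
    return round((correction + meal) * activity_discount * sensitivity, 1)

# The patient or clinician is still expected to second-guess the output:
suggestion = recommend_bolus_units(PatientSignals(180, 45, 40, False))
print(f"Suggested bolus: {suggestion} units -- confirm before delivery")
```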

I know that IBM has moved into the diabetes area, and a lot of other big tech companies have too, because there are so many people in this world that have various types of diabetes, and it's the kind of thing where you can bring in lots of different data points from all kinds of places that hypothetically could optimize how we're treating it. That's a really useful point to highlight.

We should just take a moment: when we're talking about medical devices, what are we talking about? So insulin pumps, that's one example, and they get to be really complicated once you start adding in closed-loop, continuous glucose monitoring systems and the like. Pacemakers come to mind, I guess.

Do you include smart watches that do things like that? What is the scope of this field of medical devices? So let's start by saying that a medical device is really anything that is not a pharmaceutical that treats health. A toothbrush is a medical device, for example; in its most simple form, it's just not a highly regulated medical device.

I immediately went to something like an electronic toothbrush, but you mean just a stick with some bristles on the end? That's a medical device. Yeah, exactly. That's a medical device. A medical glove is a medical device; some wrap for your knee when you twist it is a medical device. But a lot of medical devices are not embodied, meaning they don't have any housing.

They don't have any tangible function. We call those software as a medical device. So certainly if you have a diagnostic system that says you have a 98% chance of breast cancer based on your image, compared with thousands of other images that negatively and positively identify a certain kind of breast cancer, that is also considered a medical device.

But we make this sort of distinction between pharmaceuticals and medical devices as the defining line. We also have what are called combination devices, which combine drug delivery along with a medical device, so you can combine them together in a combo. But most of the time we're talking about non-pharmaceuticals.

There are many, many examples of medical devices: some that are embodied, some that are not; some that are implanted in your body, some that are implanted just under your skin, some that you wear on your body. And then there certainly are medical devices that could be used for medical purposes but are sold as consumer devices.

A great example is, like you mentioned, the Fitbit. But we do see some doctors who actually prescribe consumer health devices as a medical device, as part of your treatment. And I think you made a really great point, which is: how do we distinguish between different parts of one system,

and what's one medical device versus what are all of these pieces? And increasingly often we have medical devices that talk to each other. When I say talk to each other, I mean they have short-form types of computer language, not readily decipherable by a human being, that enable them to communicate, coordinate their functioning within your body, and exchange information.

And we're seeing a lot more interest in that, because a lot of times people will have multiple devices. Say that you have a device in your brain that helps to manage pain or something like that. You also have a pacemaker, you maybe have an insulin pump, and perhaps you're wearing a Fitbit.

It might be useful to be able to integrate all of those components into one mobile interface to manage your health. Humans increasingly are interested in managing our health as much as we can, but that also means that now we have data going pretty much everywhere, and we have questions about how we're going to protect it,

how we're going to put guardrails on it so that it doesn't spill over the sides and go everywhere we don't want it to go. So I'm just imagining a world, and I'm not sure if this is a utopia (I don't think it is) or a dystopia (maybe it is), where you've got all those devices and they're connected to your phone via Bluetooth, and also

the scale in your bathroom and your electronic toothbrush. And if you haven't brushed your teeth and you walk into the bathroom, your phone can tell that you're in the bathroom because it can sense the scale, so it turns the toothbrush on to make it buzz, to remind you to brush your teeth.

I guess that might be a future that we have on the horizon, and I'll leave it to you to tell us if this is a dystopia or not. You know, everything depends on how it's used. It's never about the technology itself. I don't believe in evil technologies or bad technologies; it's simply how you use them.

And one area of a lot of opportunity that we've seen is being able to keep older adults in their homes longer. A lot of times as people age, they're not able to take care of everything themselves, but perhaps there are a lot of things they can take care of, and they'd really love to stay in their home surroundings.

This is something that I think all of us would probably like to do as we age. And just because there's one thing we can't maintain on our own doesn't necessarily mean we want to be completely outside of our home surroundings. So a lot of the systems now are actually doing what you're talking about.

You have a connected scale, you have a Fitbit, you might be wearing an insulin pump. You also have your connected pacemaker, a connected blood pressure monitor, maybe a glucose monitor, and something else that tracks your regular heart rate. You wear a lot of this, or you have some of it plugged in around your house.

And there's a common hub that brings all of that information together and sends it regularly to your doctor, with some algorithms in between that basically say: this is a person who maybe needs to come into the clinic, or this is a person who needs to be brought into the hospital, or this person is just fine,

we're not concerned about this person today. And that enables people to stay in their homes longer and lead really long and healthy lives. And to boot, it shouldn't be our first consideration, but it's a real consideration, at a reduced cost from a healthcare perspective. It tends to be one of the things that benefits everybody, so long as we can use technology in the way that it's intended to be used.
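A minimal sketch of the kind of home-hub triage rule Charlotte describes is below; the readings, thresholds, and categories are entirely hypothetical and chosen only to show how aggregated device data might be sorted into coarse recommendations for a clinician.

```python
# Hypothetical thresholds; illustrative triage over one day's aggregated home-device readings.
def triage(readings: dict) -> str:
    """Return a coarse recommendation from aggregated device data (toy rules)."""
    if readings.get("resting_heart_rate", 0) > 120 or readings.get("spo2", 100) < 88:
        return "bring into the hospital"
    if readings.get("systolic_bp", 0) > 160 or readings.get("weight_gain_kg_week", 0) > 2:
        return "schedule a clinic visit"
    return "no concern today"

print(triage({"resting_heart_rate": 72, "systolic_bp": 168, "spo2": 97}))  # clinic visit
```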

And again, in a way that is safe and effective for everybody. We'll come back and touch on some of the privacy concerns there, but I think it's worth highlighting: you spoke about that as data that you could collect and send to a healthcare professional, but this could just be data that you send to your family, that you send to your kids.

And it might not even be what we think of as traditionally medically relevant information, but you can monitor some level of activity and changes in behavior. Has your aging father stopped brushing his teeth? That tells you something. It tells you something about his behavior. Maybe it's because he isn't remembering to, maybe it's because he has other concerns going on,

maybe it's because he can't physically operate it, or it's painful to actually engage in that activity. That's not your heart rate, that's not your blood sugar level, so it might not be traditional, medically sensitive information, just whether you're brushing your teeth regularly, but it's really powerful information.

Absolutely. And, maybe on the side of a little bit more sentient, I'm still not gonna say there's anything that really is sentient yet, but we have care robotics being used, I think currently in clinical trials for use with Alzheimer's patients, that help individuals find a sense of calm when they need it, just through interaction with different types of robotics.

So, I mean, there is a lot of opportunity here and that's, I think the challenge, because when you have so much opportunity, both for the benefit of people and potentially for financial savings, there is a wave of investment in this space and a lot of influence and push to get these through regulatory processes as quickly as possible.

I need to put in a brief plug for the work of one of the Governance and Technology Center's faculty fellows, Valerie Jones, who has done some research on using devices like the Echo, devices that you can interact with and talk to, giving them to elderly individuals to see how they interact with them and how it affects their health.

And she has really come to very positive, to me at least startling, results: again, this might not be the traditional healthcare that we think of, but in terms of loneliness and wellbeing, which is a healthcare outcome, these devices can be really powerful. And that brings me to my next question. You gave us these categories of devices and pharmaceuticals.

Where do we get those categories? Are these regulatorily defined? And if so, what's the role of regulation in the background here? Yes, they are statutorily defined under the FDCA, the Food, Drug, and Cosmetic Act of 1938. That was our very first law in this space, and since then we've seen many amendments.

The Medical Device Amendments are what we typically use in the medical device field to tell us what is required in terms of regulation of medical devices. And I would say that pharmaceuticals and medical devices are just different in how we regulate them. Medical devices, for many years, until really the late 1970s, enjoyed almost no regulation whatsoever.

So any movement since that time into a greater regulatory space feels like more than what the FDA is probably used to in the medical device space. Two, we have a really wide range of different types of medical devices, which makes them a little bit harder to regulate: everything from the toothbrush we talked about

to the most advanced type of diagnostic breast cancer software as a medical device. Those are radically different types of technologies, and the way that you think about potential safety issues with them is also radically different. So part of the challenge that the FDA faces is in regulating very different types of technologies.

Then you also have the regulation of pharmaceuticals, which we know have the potential to cause a lot of major problems if they are unsafe, but also food, things like the baby formula safety issues that we've had recently, which falls under the FDA as well, along with cosmetics and tobacco products.

So that's a lot for one agency to handle. I beat up on the FDA a lot, but the reality is that I realize they've got a lot to do. It's hard to regulate most efficiently if you're doing it all within one organization. So what often happens is the FDA will work with experts in the field and will actually bring them in to the panels to do part of the review.

But even when they do that, the reality is that you have some devices that simply don't pose enough risk for the FDA to want to regulate them. So we have things that might be considered medical devices, like the toothbrush, that generally are not regulated because they simply are not risky enough from an FDA perspective.

Maybe that's because we have a pretty robust history in how we've regulated them, and so everyone knows what materials to use in a toothbrush, or it could be that from a safety risk perspective we're just less concerned about it. Software as a medical device, for example, is an area where the FDA has taken a lot of steps back

from its previous regulation, because, hey, it's just software, and a doctor can challenge the diagnostic result of the breast cancer diagnostic determination. Well, the reality is that somebody maybe goes through unnecessary tests, unnecessary procedures, potentially unnecessary stress, as a result of an inaccurate result in those situations.

So it's not that it is risk-free, but the FDA seems to think that these are not as risky, in particular because you have a human that's involved in the process. And there are a range of background issues that I'm sure you can highlight even more, but doctors have bedside manner; AI doesn't necessarily have a bedside manner.

And if there's a risk of a false positive, for instance, having someone there to contextualize what you're being told can be really important and really helpful, instead of just getting information. And then we have the information vacuum. This is one of the things that we've seen with at-home COVID testing, where we are lacking a very important input into public health discussions when people can get tested at home.

At the same time, people get tested a whole lot more at home when they have the convenience of low-cost COVID tests, so that's a really complex trade-off. I know you study a whole lot and have really deep knowledge here, so I'm going to ask you a question that may be outside of the realm of what you study, because you already do so much: are we asking the FDA to do too much?

Is this agency up to the task? How well is it performing? And the way I want to ask that question is: do you know how things look in other countries, medical innovation in other countries? How do we stack up? How do we compare? Are we a laggard or a leader? What does that comparison look like?

And I'm asking you to go into a completely different field that I'm sure you could make an entire career of studying. Yeah. So I'll maybe just use a couple of examples. I think that, number one, we can make our processes that are currently in place more effective and better designed to prevent potential safety issues.

I'll give you an example. One thing that the FDA does, and we often do this for efficiency, which makes a lot of sense, right? The FDA's got a lot to handle; there are a lot of major medical device companies that operate in the US, and a lot of products that have to go through the process.

One of the things that we permit at the FDA is what we call the componentization of medical devices. So, for example, say that you had an AI-enabled pacemaker. You might be able to have the physical pacemaker itself, which is similar to a predicate device, maybe a previous pacemaker that was not AI-enabled, go through a review process on its own.

Then you can take the infrastructure behind the scenes that actually feeds all the information and maybe tunes the functioning of the pacemaker and have that go through a review process separately. Now, the positive thing about that is if you have an infrastructure that you can use for a lot of different devices, then you might only have to go through review one time.

You don't have to resubmit it every single time you take that AI infrastructure and architecture and attach it to, say, the intersystem device that somebody has in their brain. So that's good from an efficiency perspective; from a safety perspective, it's very dangerous. The reason for that is because

that pacemaker is designed to be given some direction from that background infrastructure and architecture of the AI system. So if you're not thinking about it holistically, you might be missing certain key safety issues related to how, for example, the infrastructure is giving direction to the device.

For example, do you have the right security controls in place to ensure that the data is not manipulated between the infrastructure that you have and the physical device that you have? Now, what Europe does is generally not separate devices into individual components. That can be really positive for devices like this, because you're thinking about them more holistically.
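On that point about data not being manipulated between the AI infrastructure and the device, here is a minimal sketch of one common control, message authentication. The key handling, message format, and function names are hypothetical, and a real device would need far more (per-device keys in secure hardware, replay protection, transport security); this only illustrates rejecting tampered instructions.

```python
# Minimal sketch of authenticating commands sent from backend infrastructure to a device.
# Key management, replay protection, and transport security are omitted; names are hypothetical.
import hashlib
import hmac
import json

SHARED_KEY = b"provisioned-at-manufacture"  # in practice, per-device and stored in secure hardware

def sign_command(command: dict) -> dict:
    payload = json.dumps(command, sort_keys=True)
    tag = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_command(message: dict):
    expected = hmac.new(SHARED_KEY, message["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        return None  # reject tampered instructions rather than acting on them
    return json.loads(message["payload"])

msg = sign_command({"device_id": "pm-001", "pacing_rate_bpm": 70})
print(verify_command(msg))                          # accepted: original command returned
msg["payload"] = msg["payload"].replace("70", "180")
print(verify_command(msg))                          # None: integrity check fails
```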

And I just use this as an example; of course, there are a lot of complexities to the process. The second thing that I'll mention is that, generally speaking, the FDA is not concerned with privacy as part of the submission process. And I would say that the FDA has made some strides from a cybersecurity perspective.

There certainly are more documents out there, but there is not a heavy cybersecurity review completed as part of the process of approval for new devices. In comparison, in the EU, privacy and security are part of the review process. For example, in France, you actually have to demonstrate ISO 27001 certification, which is a security standard, maybe not the best security standard, but it's a security standard, before you get approval for your medical device.

So you have to demonstrate not only that you have the foundational components of a security system, but that you have applied them to this particular technology, and then the technology goes through a review process. Now, that situation is not radically different in terms of what is reviewed; in both cases, everything is reviewed.

The question is how you're reviewing it and what steps you need to take as part of that review process. So there certainly are ways that we can improve it and learn from others. The only caution that I might have is that, of course, like a lot of our regulatory processes, it's designed to be as efficient as possible and to get things through the gate as quickly as possible.

And as I just mentioned earlier, because there is so much interest in AI healthcare, and in medical devices in general, to get them out there, there's a huge rush and push to move them through the process as quickly as possible. And I'll give you just one more example. I used the term predicate device.

What a predicate device is, is a device that has been previously submitted and approved and reviewed by the FDA. So there's a comprehensive process that an organization has to go through when they're submitting a new device for review and approval, especially when it has significant risk to the individual person.

But if you're submitting a new device that is substantially similar to a predicate device, you can go through a lesser process of review, which means that it happens faster and it's not as comprehensive a review. So you can see in that situation that the componentization of these devices is very useful to you, because you can break this out and say, oh wait, here is this pacemaker.

And look, it's really simple, it's very similar to this predicate device, take us through the 510(k) instead of a different process. The 510(k), that's one of the FDA's approval processes. Exactly, that's the shorter approval process. You're gonna get through the process faster, you're gonna get to market faster, you'll be able to sell this faster, and maybe save lives faster.

So there are benefits to getting things through the process quickly. The second difference, I guess, we talked about the process, is the degree to which we might accept third parties or others outside of the regulatory process to step in, in a pseudo-regulatory kind of role. We've seen it at the FDA related to the manufacturing process.

So right now the FDA does permit manufacturers to have a third party come in, actually do a review of their manufacturing, and submit it. These are pre-approved organizations that do this, and in the rest of the world, here and there, there's acceptance of third parties doing this work.

And the degree of certification matters, I guess. These aren't just any third parties; they're third parties that have gone through an approval process and that presumably have the skill set to submit the right report. But we have not done this for things like reviewing the quality of the AI, reviewing the fairness of the AI,

or reviewing whether or not the training data that we use to create the algorithm is going to create better or worse results. The FDA currently is trying to do that review themselves, but again, we're talking about a highly specialized set of skills that is highly sought after in private industry.

So it's very difficult to get individuals who really understand all of the potential issues in designing an AI system working for the FDA, but it could be a solution here. And I'm sure that if you want to understand the fairness of an AI system's design in the health

setting, it's not enough just to get a machine learning or artificial intelligence expert, or even one who has expertise in fairness issues in machine learning. You need to have someone who understands the health equity and fairness issues, which is an entire subfield and highly specialized. So Venn diagrams within Venn diagrams is what we're talking about here, though.

I guess if you want a good career path, that might be an option for you, though I guess we're also saying that jobs with the FDA might not pay as well as if you only do the machine learning and AI expertise and go into private industry. I want to ask, when we're talking about medical device cybersecurity, and I apologize,

Charlotte, you've probably heard this example so many times and you're sick of it, you probably know what I'm going to ask, but are we just talking about Dick Cheney's pacemaker, or what are the cybersecurity concerns with medical devices? Well, this is a summary, and I know that cybersecurity folks who are listening to this podcast may say, wait a second,

that is an oversimplification, but it is useful to think about CIA, confidentiality, integrity, and availability, as it applies to medical devices, because they play out in different ways. So, for example, if we care about the confidentiality of information related to devices, most of the time in the medical setting we're talking about potential privacy issues.

Do we have rights to use the data, or do we not have rights to use the data? What could be beneficial about having this data, and are the data contained within the boundaries where we set those boundaries, or is there a potential for, for example, an attacker to get in and actually steal that information and use it for nefarious purposes?

Most of the time when we're talking about medical devices, the type of data we're talking about is not gonna lend itself particularly well to things like healthcare insurance fraud or stealing someone's identity. It might still be useful for other reasons, but it's not gonna have that direct fraud applicability that we usually think of as the easy grab and go.

I am personally more concerned with integrity and availability, both because we have devices that essentially keep people alive. So if somebody is, in your case, right, the Dick Cheney example, attacking a pacemaker, the reality is that a person's heart probably can't function, or function optimally, without it.

And if everybody is connected to the same infrastructure, which is often the case when we're talking about AI-enabled medical devices, then if somebody is able to attack the infrastructure, hypothetically they can attack everybody who has a device that's connected to it. That's the challenge. In the past we had devices that were, let's say, in one place, right?

Not connected: not connected to your mobile device, not connected to any background infrastructure, not connected to the hospital system, which often doesn't have the best security track record either. And so we just have more avenues for attack. That could affect either the data that's coming in, the actual nature of that data, switching things around, changing values, which would be more of an integrity attack, or just general availability:

let's shut everything down. Ransomware attacks have been on the rise, and especially in healthcare we're very concerned about ransomware attacks for a lot of different reasons, in part because they're increasing in frequency, but also because if you can't get to your resources, you have major problems in the healthcare space.

So medical devices are really no different than those broader concerns. So there's a, I love working in cybersecurity, it's a great field, but it also brings out the worst in me and makes me think terrible, terrible things. So when we're talking about ransomware here, first, there have been, as you know, Charlotte, over recent years, hospitals that are getting targeted with ransomware attacks; their equipment is encrypted,

they need to pay ransoms, and all that. But we don't even need to go the encryption sort of route. You could imagine an attacker, as we're starting to transition to these smart-hub-based systems that aggregate information and interact with devices, pushing out a malicious update to 300,000 medical smart hubs that has the ability to deliver a massive dose of insulin, or to trigger a pacemaker, or to do terrible, terrible things.

And they present messages saying, we're going to kill you in three days unless you give us five bitcoin. That's a horribly scary prospect, just the threat of it, and it is a conceivable thing that they actually could deliver on, which means you have to take that threat seriously. You absolutely have to take that threat seriously.

And there is a greater risk that you are going to just pay the ransom, because we're talking about human lives. We're not just talking about the loss of some confidential business information. When there is a human cost, suddenly the money seems worth it, even though the FBI might tell us not to pay the ransom.

The reality is that if somebody's life is on the line, it's probably gonna happen. So that makes it a useful model from an attacker's perspective, because they're more likely to get the money. So you spoke a bit about confidentiality, which brings us to another side. Confidentiality from the cybersecurity perspective is closely related to privacy issues,

so I wonder if you could say a bit about the unique privacy issues that come up in the medical device context and healthcare context. Sure. So first let's talk about the sheer amount of data that are needed for medical devices to work most optimally. With most medical devices, you need continuous feeding of the device for it to work most effectively; a device is not gonna work as effectively

if it doesn't have information about your current status, about, for example, your current environment. A colleague of mine from Syracuse, Dr. Christa Kennedy, and I have worked together to study hearing aids, and connected hearing aids in particular. And one of the ways that those hearing aids work most optimally is to capture continuous environmental data about the individual

and to send that data back to an AI system that then crunches the information and automatically directs changes in how that hearing aid is going to function, so that it can automatically adjust when you start to get certain kinds of information coming through environmentally. So, number one, you need the data.

Number two, when we add artificial intelligence to an existing system, the data needs that we have are even higher, and it's not just any data: it's representative data, it's contextual data, it's good data, high-quality data. And we already have a healthcare issue in that it's very hard to collect data from a variety of different communities to fully represent those communities in the technologies that are designed.

And there are reasons for that: there's a healthy distrust in some communities of the healthcare system and of clinical trials, and for good reason. But ultimately that means that if you're designing an AI system, there is a risk that whatever you're designing is not going to deliver, optimally, the right kind of treatment or direction or functionality for the groups that you are trying to help.

So we have big data needs. We have diverse data needs. We have contextual data needs. And at the same time we have this concern that, well, maybe we're gonna use those data for purposes that, you know, we didn't disclose. AI is unique in that we cannot always know which data will be useful. Sometimes it's useful to have more data than you might otherwise think to gather.

There might be hidden relationships between those data points that actually are very useful from a device perspective. But sometimes we don't know the difference between are we collecting data for these purposes, which seem pretty legitimate, or are we collecting data for purposes that are not so legitimate?

Like, we want to deliver more effective marketing to your mobile device of all of these other devices we think that you should buy. And ultimately it's hard to make that kind of a differentiation as a patient, because often those uses, even though HIPAA requires us to separate them, are pulled together in a way that is not tremendously easy to disentangle.

Yeah. There's so much, again, that's scary and concerning there, but also good stuff. But to throw out the scary example, and maybe argue for the need to have the doctor in the loop: let's say that there's a baseline risk that most people have of 1% for some condition.

And if there's some data point out there, yours is increased to 1.5%. You can imagine the nefarious marketer who identifies that you're in the population with that 1.5% risk marketing to you: you have a 50% greater likelihood of experiencing this condition, you should go buy our device. And having the doctor there saying, well, you have a 1.5% chance,

it's really still very small, you shouldn't go spend $50,000, that's really important. And data is dangerous. It is, it is. And we have another problem with just being able to communicate that information in a way that's understandable; a lot of these risks feel heady, right?
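A quick worked example of the absolute-versus-relative-risk framing Gus describes, using the same hypothetical numbers from the exchange above:

```python
# Hypothetical numbers from the example: the same data, two very different-sounding framings.
baseline_risk = 0.010   # 1% baseline risk of the condition
your_risk = 0.015       # 1.5% risk given some additional data point

relative_increase = (your_risk - baseline_risk) / baseline_risk   # 0.5 -> "50% greater likelihood"
absolute_increase = your_risk - baseline_risk                     # 0.005 -> half a percentage point

print(f"Relative increase: {relative_increase:.0%}")   # what the marketer leads with
print(f"Absolute increase: {absolute_increase:.1%}")   # what the doctor would contextualize
```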

We're in the clouds. They don't feel real to us; they're not visceral. They just seem like, all right, well, they're gonna have too much of my data, what could possibly go wrong? Unfortunately, you could have data that are then used to impersonate you, to potentially later change how your device functions. But I'm also a believer that simply the taking of personal information itself, without a person knowing what is happening, is in and of itself damaging,

because it reduces our humanity. It treats a person like they're just a font of data and not an actual person, an actual patient who probably has a very serious health condition. That's the other privacy conundrum, and this is something that I personally really struggle with. Health

information is a different sort of information. It's not just about you as an individual; it's about you as a human being, and everyone else is a human being too. So it's not just about you, it's about everyone. So, for instance, let's say that I am a 30-year-old, so we know we're not talking about me,

I'm older than 30. I'm a 30-year-old who's had two heart attacks. That is a weird medical condition. I probably would not want that information to be publicly disclosed; I would probably view that as pretty sensitive personal information. But it's a pretty unique medical condition, and from a social and medical research perspective, we want to be able to collect that data and learn about you, because you can probably tell us a whole lot about

other people with that condition or with similar conditions. So how do the privacy equities play out here? Do I have a right to withhold this information, which is so valuable and important to the rest of society, simply because I'm uncomfortable with people knowing about it? It's an excellent question,

and I wish I could give a really clear answer that would solve this for everybody, but here are just some things to maybe think about. So in the medical world, we can use data without restriction, for the most part, when we've collected it as part of a clinical trial. We do that because we believe that, in these particular situations, we are innovating.

Data is going to be most useful in creating the next round of innovation, and a lot of those trials are actually funded by federal grants that taxpayers pay for. So in those situations, it's part of the social contract, in a way: a person who is part of a clinical trial might have a chance at some individual benefit, but the benefit in those situations is more utilitarian.

It's about what will happen after those data are collected and how they can be used for future innovation. So that's how we kind of think about that piece of it. Once something is commercialized, we don't really think about data in a utilitarian way anymore. The only time that we really care about it is when there is what we can call a public policy reason for processing the data.

So, for example, during the COVID-19 pandemic, we used data in a lot of different ways that we might not have otherwise, and the Department of Health and Human Services gave known exemptions for organizations that were not adhering to minimum necessary requirements under HIPAA, because it fell under this public policy sort of exception.

So there is an exception, but that kind of exception is not typically used for the development of healthcare technology. Now, something that exists outside of the United States, and something that I've advocated for, is sort of a new construction for how we use data, and what we call it is legitimate interest.

So under legitimate interest, the expectation is that you calculate the benefit to individuals, the benefit to their community, the benefit to the public at large, and you weigh it against your organization's individual benefit. So for example, collecting data just to market to people is probably not gonna fly under a legitimate interest analysis.

However, using data from somebody who has a heart condition to create a better heart technology, one that might be different than the one they have and that could presumably benefit that person or a category of individuals with a similar heart condition, is probably going to fall under legitimate interest, because it's something that's tightly connected to the individual who is actually giving the data and it has significant public benefit.

So I'm really interested in how we can think differently about what public benefit means and how we can actually demonstrate that legitimacy based on the connection between the individual giving the data and the potential benefit, which should be somewhat close to them, if you think of it as concentric circles, versus just general public benefit, which might be less persuasive.

So that, I think, takes us straight back to the start of our discussion and the idea of data policy. And I want to close by asking: one of the things that you said when you were describing the work that you do and what you study is that you think about how we can create safe systems. I'm not going to ask

you "so how do we create safe systems?" because I know there is no easy answer. But I wonder if you have any pointers to some of your own work that interested folks might want to look at, or other folks whose work you want to highlight in this field? Well, there are many that I could highlight.

But I'll point out something from a little ways back, I think it might have been from 2016, by Professor Roger Allan Ford and W. Nicholson Price II, who wrote on the idea that, in terms of complementing the FDA processes, we might think about third parties being part of that process more explicitly.

I think, in the beginning, I wasn't sure if that was really the direction, but as we've seen the blossoming of AI and the need for more expertise, I think that the system we have has to somehow involve private parties. I don't think it's something the government can handle itself. So I'll say that that is one thing that I think could be very useful.

A second thing that I've advocated for is the idea that we sort of hang out the dirty laundry, so to speak: the idea that if you have adverse parties, competitors, that are actually testing your systems for potential safety issues or potential fairness issues, you're more likely to get a better result faster.

And so finding ways, especially when we're talking about highly complex algorithms that are very hard to even decipher, of hosting those in a place where they can be actively tested. Those of you in software might be familiar with black box testing: the idea that you feed data in, you see what comes out, and you try to find potential issues.

I would love to see something like that in the AI space, because I think it could reveal potential issues much faster and promote the change that we need to see to make AI healthcare, in particular, safer and more effective.
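As a rough illustration of the kind of black-box testing Charlotte describes, here is a minimal sketch that probes a system only through its inputs and outputs and compares results across groups. The model, group labels, test cases, and disparity threshold are hypothetical; a real audit would run far larger, representative test sets against the vendor's hosted system.

```python
# Black-box probe: we only call predict() and inspect outputs, with no access to model internals.
from collections import defaultdict

def audit_disparity(predict, test_cases, threshold=0.1):
    """test_cases: list of (features, group_label). Flags large gaps in positive rates."""
    positives, totals = defaultdict(int), defaultdict(int)
    for features, group in test_cases:
        totals[group] += 1
        if predict(features):          # treat the system as an opaque box
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold  # True flags a potential fairness issue to investigate

# Toy stand-in for a diagnostic model; real testing would target the deployed system.
toy_model = lambda features: features["score"] > 0.5
cases = [({"score": 0.7}, "group_a"), ({"score": 0.4}, "group_a"),
         ({"score": 0.3}, "group_b"), ({"score": 0.2}, "group_b")]
print(audit_disparity(toy_model, cases))
```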

I'd like to thank Charlotte Tschider for joining us on this episode of Tech Refactored. As always, I learned a great deal talking with her, and today's episode was no exception. We had a great discussion, and it's hard to believe how much we covered. And I'd just like to throw out a reminder:

anytime we talk about Universal Paperclips, it's a good discussion in my book. If you're not familiar with Universal Paperclips, you should go Google it. I guarantee you it's going to waste at least an hour of your time, possibly an entire week. Have a great day.

Tech Refactored is part of the Menard Governance and Technology Programming Series hosted by the Nebraska Governance and Technology Center. The NGTC is a partnership led by the College of Law in collaboration with the Colleges of Engineering, Business, and Journalism and Mass Communications at the University of Nebraska.

Tech Refactored is hosted and executive produced by Gus Hurwitz. James Fleege is our producer. Additional production assistance is provided by the NGTC staff. The series theme music was created by Collin McCarthy. You can find supplemental information for this episode at the links provided in the show notes. To stay up to date on what's happening with the Nebraska Governance and Technology Center,

visit our website at NGTC.unl.edu. You can also follow us on Twitter and Instagram at @UNL_NGTC.