Annie Kuo Becker: Hello everyone. Welcome to the Discovery podcast at the University of Washington School of Law. I'm your host, Annie Kuo Becker. On this podcast, we interview our distinguished faculty and legal experts who visit the law school on topics like today's matter: disability discrimination in the clinical use of PDMP algorithms. PDMPs are electronic prescription drug monitoring programs, which the U.S. federal government encouraged states to create during the first wave of the opioid epidemic. And we welcome an in-house expert, Elizabeth Pendo, our Kellye Testy Professor of Law, Senior Associate Dean for Academic Affairs and director of the upcoming Health Law and Policy Program. Welcome, Elizabeth.
Elizabeth Pendo: Thank you so much.
AKB: Elizabeth Pendo, for our listeners, is an example of how our University of Washington School of Law faculty are going beyond theory to engage students in applying legal knowledge to solve some of our most complex societal problems, something the interdisciplinary Health Law and Policy Program will address. Professor Pendo is a nationally recognized expert in health law and policy, bioethics and the law, and disability law.
So, let's get into the matter today about this essay that you and Jennifer Oliva co-wrote in the Hastings Center Report, a longer version of which was just published in the North Carolina Law Review. By the way, listeners, we're going to link to both versions in our show notes, which you can find on the law school website.
Elizabeth, can you tell us how and when PDMPs originated and their intended purpose in the surveillance of controlled substances?
EP: Yes, prescription drug monitoring programs, or PDMPs, are, as you mentioned in the introduction, state electronic databases that track controlled substance prescriptions in a given state. Federal law classifies drugs into schedules, which are levels of control based on their perceived potential for abuse or dependency. Some of those drugs can be legally prescribed based on a recognized medical use. Examples would be opioids or stimulants, depressants, hallucinogens, right? Lots of drugs we might think of as regularly prescribed are controlled substances. And PDMPs were designed initially as a law enforcement tool to surveil the prescription of controlled substances. They really flourished in the 1990s, the first wave of the opioid drug poisoning crisis, and they reflected a focus on supply-side solutions to the opioid crisis, and in particular a perception that overprescribing was really driving the opioid epidemic.
So, as a law enforcement tool, it was really used to aid criminal investigations and prosecutions of prescribers, and it was also used by state medical boards to monitor standards of practice around the prescription of controlled substances. PDMPs were incentivized by the federal government with significant federal funding. Today, all 50 states, as well as several territories, have authorized PDMPs. So, this was a law enforcement tool designed for law enforcement. But in recent years, PDMP information has really been touted and marketed as a healthcare tool to guide and inform clinical decision making.
AKB: Could you briefly explain the data collection piece and how these algorithms are employed to produce those risk scores for each patient?
EP: Yes, I can tell you what we know. PDMPs identify specific data points, prescription-related information that they collect, as proxies for drug misuse, drug diversion or overdose risk, and then they apply an algorithm to those data points to generate a numerical score. That's the score that we're talking about. And the score is explained as identifying the level of risk that a patient has a substance use disorder or might develop a substance use disorder. However, we do not know exactly how this is done, because the developers of the algorithm used by PDMPs say that the algorithm is proprietary and therefore not subject to disclosure. So, we do not know how the algorithm works, exactly what information it considers, how it weighs those factors, or how it generates a score. Because it's never been disclosed, it's, of course, never been subject to peer review, nor has it been approved or vetted by any federal agency.
We have some information about what data points go into it, and we get that information from the developers of the algorithm themselves. Nearly all of the PDMPs include information about the prescribing of scheduled drugs. Many also track what they call "drugs of concern," and states are permitted to decide which drugs those are. An example might be gabapentin. It is not a controlled substance, but because it's used to treat pain, some states have begun to track it. States can also choose to track other drugs. For example, one state, Nebraska, actually tracks all prescription drugs, not just controlled substances.
We also know that PDMPs collect information related to the prescription itself: the type of drug dispensed, the date, the quantity and how long it's supposed to last, and prescriber and patient information, for example. But there is a modern trend of collecting a wealth of information that's not necessarily prescription-related. For example, the method of payment: Did you pay cash? Did you use a credit card? Did you use insurance? It's also become very popular to track related information like medical marijuana dispensing, mental health issues, whether the patient has ever experienced an overdose, criminal court information, even child welfare case information related to drug-related convictions. So, it really goes far beyond simply tracking the prescription and prescription information to include a lot of social or socio-economic information about the patient.
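To make that pipeline concrete, here is a minimal, purely hypothetical sketch of how a weighted risk-scoring algorithm could turn data points like these into a single number. The actual PDMP algorithm is proprietary and undisclosed, so every feature name and weight below is invented for illustration only.

```python
# Purely hypothetical illustration of a weighted risk-scoring algorithm.
# The real PDMP algorithm is proprietary and undisclosed; these feature
# names and weights are invented for explanation only.

HYPOTHETICAL_WEIGHTS = {
    "scheduled_prescription_count": 2.0,  # prescriptions for controlled substances
    "distinct_prescribers": 3.0,          # number of different prescribers
    "paid_cash": 1.5,                     # method of payment, not medically relevant
    "prior_overdose_record": 4.0,         # overdose history some states now track
    "drug_related_court_record": 2.5,     # criminal or child welfare case information
}

def hypothetical_risk_score(patient_record: dict) -> float:
    """Weighted sum of whatever data points appear in the record.

    Note what never enters the calculation: diagnosis, treatment plan,
    and health outcomes are simply not part of the PDMP data.
    """
    return sum(
        weight * float(patient_record.get(feature, 0))
        for feature, weight in HYPOTHETICAL_WEIGHTS.items()
    )

# Two very different patients can look identical to a score like this.
cancer_patient = {"scheduled_prescription_count": 6, "distinct_prescribers": 2}
sud_treatment_patient = {"scheduled_prescription_count": 6, "distinct_prescribers": 2}
print(hypothetical_risk_score(cancer_patient))         # 18.0
print(hypothetical_risk_score(sud_treatment_patient))  # 18.0
```

Because no diagnosis, treatment plan or outcome ever enters the inputs, a sketch like this scores both records identically, which is exactly the concern discussed next.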
AKB: Can we address the risks of overreliance on risk scores in clinical decision making? In your paper, I noticed that there are several scenarios in which these risk scores can be used to stigmatize or stereotype certain groups of patients. Could we talk about the risk of overreliance on these scores?
EP: Absolutely. There are dangers in importing a law enforcement tool into clinical care. It was never designed to support clinical decision-making. If you think about what's included, I think it's also important to notice what is not included. What is not included in the PDMP is the patient's diagnosis, any treatment plan or health outcomes, three pieces of information that are critically important to clinical care. What that means is that PDMP risk scoring could treat as identical a patient with stage four cancer who may be prescribed significant amounts of pain management drugs and, say, an individual who is being actively treated for substance use disorder. It would see no distinction between patients based on diagnosis, treatment or outcome.
AKB: That seems outrageous.
EP: It's deeply problematic. Combined with that are the flaws and gaps that we know exist in the PDMP. Again, it's never been subject to peer review, never been disclosed, never been reviewed or vetted by any federal agency. So, it's a classic black box algorithm, meaning we know it takes in certain information and then generates certain information, but we really do not know anything about what happens in between, in that black box of algorithm application.
There is a wonderful researcher, Dr. Angela Kilby at Northeastern University. She's a health economist. She has actually studied how PDMPs work and attempted to reverse engineer this mysterious, undisclosed algorithm to see whether it can do what it says it does, which is identify people who have a substance use disorder or who are at risk of developing one. Her research shows that it does not do that. There is a very high rate of false negatives, meaning individuals who perhaps do have a substance use disorder but wouldn't be flagged as such, and also false positives, individuals who do not have a substance use disorder but could be denied appropriate care based on that false information.
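As a rough illustration of what "false negatives" and "false positives" mean in this setting, here is a minimal sketch of how a researcher might compare a classifier's flags against known diagnoses. The data and function are invented for illustration; they do not reflect Dr. Kilby's actual methodology.

```python
# Hypothetical sketch of checking a risk classifier against known diagnoses.
# The numbers are invented and do not reflect any actual study results.

def error_rates(predicted_flags, true_diagnoses):
    """Return (false_negative_rate, false_positive_rate)."""
    false_negatives = sum(1 for p, t in zip(predicted_flags, true_diagnoses) if t and not p)
    false_positives = sum(1 for p, t in zip(predicted_flags, true_diagnoses) if p and not t)
    total_positive = sum(true_diagnoses) or 1                           # patients who do have SUD
    total_negative = (len(true_diagnoses) - sum(true_diagnoses)) or 1   # patients who do not
    return false_negatives / total_positive, false_positives / total_negative

# Toy example: the flags miss two of the three patients who do have SUD
# (false negatives) and wrongly flag one patient who does not (false positive).
flags     = [True, False, False, True, False]
diagnoses = [True, True,  True,  False, False]
print(error_rates(flags, diagnoses))  # (0.666..., 0.5)
```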
We do know that PDMP-related information and risk scores are being used by healthcare institutions and clinicians to deny care, to provide different and worse care, and to stigmatize patients. Any clinical algorithm, and that would include a PDMP, is supposed to assist or support clinical judgment, not replace or override it.
One concern that we see very clearly in the literature is that PDMP scores and information are being used to deny care to patients. Patients are being refused appointments in primary and secondary care settings. They're being denied needed surgeries because of concerns about how pain might be managed afterwards. There was a case where a patient was denied a lifesaving lung transplant because that patient was being treated for substance use disorder, and this was done without any investigation of whether the refusal of care was medically justified.
AKB: So, let's talk about this. Could you please comment on the impacts on certain groups when clinical decisions are based primarily on PDMP information?
EP: I think one thing that's important to recognize from the outset is that substance use disorder itself is a protected disability under federal civil rights law. It is very stigmatized; there are decades of literature showing us that. So much so that I think it often isn't seen as a protected characteristic, as a disability, under federal civil rights law.
The interesting thing about federal civil rights law is that it defines what disability is: a physical or mental impairment that substantially limits a major life activity. Federal agencies and case law have confirmed that substance use disorder does, in fact, fit that definition. People who are thought to have substance use disorder, but do not in fact have it, are also protected. So, if the PDMP falsely identified someone as having substance use disorder, they would also be protected. The PDMP also appears, according to some of the research that we discuss in our paper, to identify a lot of people with chronic pain conditions who are being treated with controlled substances appropriately and within the standard of care. That does not mean that they have substance use disorder or even that they're at risk of developing it, but the association between opioids, in particular for pain management, and our popular ideas about addiction or dependency is so strong that it leads to an assumption or false belief that a patient has substance use disorder and needs to be denied certain kinds of care.
If you think about chronic pain conditions, women are overrepresented among patients with chronic pain conditions, and there is plenty of research showing us that women, as a group, tend to be offered less pain relief, offered pain relief later or disproportionately not believed by their healthcare providers. Similarly, because of the history of how controlled substances have been regulated, in both civil law and criminal law, racial stereotypes and assumptions play a very strong role here. So, people who are racialized, particularly as black, may be disproportionately denied care. We know from other literature that people who are black, in particular, face additional hurdles even in receiving treatment for substance use disorder. But even if you're lucky enough to receive treatment for substance use disorder, you may still be stigmatized and denied other care that you need.
AKB: Could you tell us about why you and your co-author, Jennifer Oliva, came to write on this topic? What were the issues that you wanted to address?
EP: In the larger conversation about potential discrimination or unfair treatment resulting from the use of clinical algorithms generally, the PDMP is simply one of many clinical algorithms. There has been a lot of concern about the disproportionate impact of the use of clinical algorithms on different groups, on women, on people of color, on people with disabilities. And I noticed in some of the initial scholarly discussion around that problem that people were noting that traditional civil rights laws may not be as helpful as we had hoped, because the predominant model in civil rights law remains one of intentional discrimination. And what we were seeing with these clinical algorithms is less a disparate treatment of people based on protected characteristics and more a disparate impact.
So, the algorithm might appear facially neutral, but when we see it in operation, it disadvantages a particular group. What I think is unique about PDMPs is that they are not facially neutral. They intentionally target people with substance use disorder, or people thought to have substance use disorder, which is a disability. So, I think it's really important to call out that difference, and that's one of the reasons my co-author and I were so excited about exploring the potential of disability rights law here and what it could tell us about the PDMP and other clinical algorithms.
AKB: Would you tell us about some of the protections offered by disability anti-discrimination laws? You mentioned that substance use disorder is a disability, and so those patient groups with SUDs should be protected. Specifically, could you address the protections offered by the ADA and Section 1557 of the Patient Protection and Affordable Care Act?
EP: When we talk about disability anti-discrimination law on the federal level, we're really thinking about three different laws. The Rehabilitation Act of 1973, now more than 50 years old, prohibits entities that receive federal funding from discriminating on the basis of disability. Later, in 1990, the Americans with Disabilities Act was passed. It's not dependent on federal funding, and it reaches across broad areas of society: employment, private businesses that serve the public, and all activities, services and programs of public entities. That reaches quite far into health care; it covers public hospitals, for example. For the Rehabilitation Act, federal funding includes receipt of Medicare or Medicaid reimbursement. The ADA also applies to private health care regardless of receipt of federal funding. And we argue in our more expanded paper that algorithmic discrimination is prohibited under those preexisting laws.
Section 1557 of the Affordable Care Act extends and expands the protections of those preexisting laws into healthcare settings. It builds on the Rehabilitation Act of 1973, and the reason we discuss Section 1557 is that it specifically addresses discrimination through the use of clinical algorithms. We think disability law is a very good fit here, both in the way it applies and in its specific provisions. First, it applies because substance use disorder is a recognized disability under all three of these laws, which use a very similar definition. Second, the PDMP fits within the definition of a clinical algorithm, or clinical decision support tool, under Section 1557. And third, we really want to focus not on the development or construction of an algorithm, which is, of course, very important and which many other scholars are working on, but on its equitable use, and these laws apply directly to providers and healthcare entities. So, it's a good fit in terms of targeting the specific problem.
If you look at the specific provisions of disability rights law, they also address some of the very specific, well documented harms that we saw in the literature from the use of PDMPs in clinical settings. Disability rights law prohibits refusing to treat patients, or treating them differently, because of disability. We argue that that is what's happening here when providers refuse to make appointments for people with certain PDMP scores, refuse to treat them, or refuse to admit them into, for example, a skilled nursing center. There is plenty of evidence of that, all of which we summarize in our paper.
Disability rights law also requires that you examine people as individuals. That's the individual assessment requirement, and it's really designed to interrupt bias and reliance on assumptions and false beliefs around disability. It requires taking a moment and actually evaluating the patient as a specific individual. So, rather than applying a blanket rule, for example, "we won't make an appointment with a patient who has a risk score above X," it requires you to look at that individual patient and try to accommodate their individual needs.
Individual assessment is also especially important when we think about using PDMP scores to guide clinical decisions, because of the known errors, gaps and omissions in the PDMP algorithmic information. Again, we don't know exactly what goes into making the score, because it's a black box algorithm, but we do know it's missing some key information, like diagnosis or health outcome, and that it includes some information that is not directly related to the prescription and, in fact, may not be medically relevant and may invite in different kinds of bias. Take method of payment, for example: if you pay by credit card versus cash versus insurance, that can affect your score in the PDMP.
AKB: Can you tell us about the current proposed Section 1557 rule? And then there are three additional requirements that would strengthen it, which you discuss at some length in your paper. Could you tell us about the proposed Section 1557 rule and how it would help improve these scenarios?
EP: The rule implementing Section 1557 was actually finalized in May of 2024, and it included some really interesting provisions directly addressing the use of clinical algorithms. It required that a covered entity, and again, that's going to be most healthcare providers and healthcare institutions, make efforts to identify the patient care decision support tools it uses (sort of a more generic umbrella term that includes clinical algorithms) and whether they employ input variables that measure race, color, national origin, sex, age or disability. So, are you currently using a decision support tool that might take in some of these variables based on protected class?
And then, interestingly, for each of those tools, the regulation requires that the entity make reasonable efforts to mitigate the risk of discrimination. I think that's very interesting and important language, because it acknowledges that there is a risk that perhaps can't be eliminated; it says mitigate the risk. So, there's an acknowledgement that these are not purely neutral tools with purely neutral outcomes, but that instead we need to monitor the equitable use and equitable outcomes of these tools. I think that's the unique contribution of anti-discrimination law in this context. There are other legal efforts, which we support, around regulating the development of algorithms and vetting them to ensure that they are accurate and can do what they say.
Even if we have an algorithm that has clinical utility and validity, we still need to monitor how it is being used and how it impacts vulnerable groups. For example, imagine a world where the PDMP could accurately identify who has substance use disorder or who is at risk, which, again, the literature tells us it can't. What are we using that information for? Currently, we're using that information to deny patients care, as opposed to, for example, identifying patients who need care. I think it's really important that the anti-discrimination lens, that equity analysis, be part of the conversation around the use of clinical algorithms.
AKB: And what do you think is the best means or method for validating these PDMP risk scoring algorithms?
EP: We suggested three ways that the final rule could be further developed to achieve its aims. One of them was requiring, as a threshold matter, that entities using these algorithms in clinical settings ensure that they work as intended, that they have clinical utility and validity. That could be established in a number of ways: through review by a relevant federal agency, or through peer review, for example. Again, the algorithms deployed by PDMPs have never been disclosed and have never been subject to peer review, so we do not know if they can do what they say they do, and the research strongly suggests that they cannot. So, at a minimum, you should need to ensure that an algorithm has clinical utility and validity, and there are several ways that could be done.
We would also like to see the development of regulations ensuring that healthcare institutions adopt publicly available standards for using these tools. Do you know if your care has been guided by a clinical algorithm? Perhaps not; most people would not know that. So, how are you using these tools, and are you ensuring that they support or aid clinical judgment rather than overriding it?
And third, we would like to see active monitoring of the equitable use and impact of these tools and algorithms. We believe that many clinical algorithms currently in use would have no problem clearing that first hurdle of clinical utility and validity. However, even a valid algorithm needs to be monitored. What is the impact of that algorithm on vulnerable populations? How is it impacting access to care and health outcomes? We think that's very important and the unique contribution of anti-discrimination law.
AKB: Let's turn to the Health Law and Policy Program. We're very excited that this program is launching at the UW School of Law. Can you tell us about some of the ways the program will serve as a hub here across campus for understanding and improving health laws and policies?
EP: We see the revitalization of the Health Law Program as having a couple of different components. One would clearly be research and the interdisciplinary function, working with other units across campus to really further our mission and the university's mission to make a global impact in the areas of health and the well-being of communities here in Washington and across the country.
We're also extremely fortunate to be able to interact with a really vibrant life sciences research community here in the Pacific Northwest, which is a tremendous strength to draw upon. I think our commitment to students and the student experience will lead us to develop curriculum and job opportunities, externships, fellowships, lots of ways for students to participate in our research and develop their own careers. So, we're very excited.
AKB: Yeah, it sounds like an amazing opportunity for any students who are interested in this intersection of health law and policy.
Thank you so much, Elizabeth, for joining us today on the podcast to talk about challenging discrimination in the use of these PDMP algorithms. We congratulate you on leading the launch of the Health Law and Policy Program and are really looking forward to seeing how it unfolds across campus with all the partnerships and collaborations.
Elizabeth Pendo is the Kellye Testy Professor of Law, Senior Associate Dean for Academic Affairs and director of the upcoming Health Law and Policy Program at the University of Washington.