Use of Health IT To Reduce Medication Errors and Improve Patient Safety

(P. Jon White, MD, Moderator)
Thank you so much, and welcome, everybody. My name is Jon White. I direct the Health IT Portfolio at the Agency for Healthcare Research and Quality, and we're very pleased to bring you one in a series of web conferences about the evidence supported by AHRQ's Health IT program. I have a couple of great speakers lined up for you today, and a great topic, so without further ado I present to you "A National Web Conference on the Use of Health IT To Reduce Medication Errors and Improve Patient Safety." Next slide. I am your moderator. Your presenters are listed here; I will introduce them one by one before their talks. I do have some words to allow participants to obtain continuing education credits for this session. I am required to let you know that neither I, as the moderator, nor Drs. Atlas, Grant, Basco, and Weiner have any conflicts of interest to disclose at this time. So, thank you for joining us; let us move on to the next slide, and I will introduce our first speakers. The first talk is on the Medication Metronome Trial. Your speakers are Dr. Steven Atlas and Dr. Richard Grant. Dr. Steven J. Atlas is an Associate Professor of Medicine at Harvard Medical School and Director of the Practice-Based Research and Quality Improvement Network in the General Medicine Division at Massachusetts General Hospital, where he is also a practicing primary care physician. He has developed novel patient attribution methodologies to connect patients to physicians and practices within primary care networks. The research he will present today addresses how health information technology can foster between-visit medication management and laboratory monitoring for patients with chronic conditions. Also presenting in this first section is Dr. Richard W. Grant, MD, MPH, a research scientist at the Division of Research, Kaiser Permanente Northern California. Dr. 
Grant is a board-certified primary care physician. He received his medical degree from UC San Francisco, completed his medical residency training at Massachusetts General Hospital, and received his MPH from the Harvard School of Public Health. His research focuses on identifying and overcoming barriers to effective primary care in patients with complex diseases. He has a special interest in Type 2 diabetes and related chronic conditions. Dr. Grant has published over 100 peer-reviewed scientific papers and is a Deputy Editor for the Journal of General Internal Medicine. So, thank you very much, Dr. Atlas, the floor is yours. (Steven J. Atlas, MD, MPH)
Thanks very much. Good afternoon everyone, or good morning, depending on where you are in the country. I'm going to start the presentation, and then I will pass it off halfway through to Dr. Grant. The background for our study is that, despite the availability of effective therapies, many patients in the United States with common chronic conditions such as diabetes, hyperlipidemia, and hypertension do not reach their treatment goals. We often add medications for patients who are not controlling these conditions with lifestyle interventions, but even with medications many still do not achieve their recommended therapy goals. We know that novel health information technology tools have the potential to support chronic condition management in primary care settings. We also know that there are barriers to such care, and that lack of timely medication intensification and inadequate safety monitoring are two prevalent and potentially modifiable barriers to effective and safe management of chronic conditions. Major challenges in visit-based care include competing demands for time during visits, and patients who miss scheduled follow-up visits where care would have been provided. Current visit-based delivery models do not include systematic efforts to engage patients in active risk factor management between their office visits. An example of this is here, where the most common statement from a doctor is, "You'll come back and see me in 3 months." As background for this study, we looked at the scheduling of office visits in our practice network and found that they were an unreliable method for planning future medication changes. For example, among individuals over 50, or patients with diabetes, only about two-thirds actually completed the follow-up visits that were planned over the year we examined. 
So with this as the background, we proposed and implemented the Medication Metronome trial. Our goal was to create an information technology infrastructure to support planned medication adjustments. It involved writing prescriptions that would trigger future laboratory testing, supporting non-visit-based management of individuals with chronic conditions. Our objective was to test this model of chronic disease management for between-visit laboratory monitoring. We focused on medications related to diabetes, monitored with hemoglobin A1c; cholesterol, monitored with LDL; and blood pressure. Our hypotheses were: one, that our system would reduce delays in efficacy and safety monitoring; and two, that reducing these delays between monitoring and prescribing would result in better, faster risk factor control. The study setting was two primary care practices within our MGH practice-based research network. The study included primary care physicians within these practices, who were randomly assigned to either intervention or control groups. We enrolled 44 physicians, the majority of physicians in these two practices, and they were randomized in a one-to-one fashion as shown. Intervention physicians were trained to use the Medication Metronome tool prior to the start of the study, and the tool was active for a one-year period after the study began. Control physicians received usual care: they used an electronic health record with a medication prescription interface. Intervention physicians used the same interface, with additional features providing the ability to order future laboratory tests at the time of ordering the medications. This would then trigger reminder letters sent to the patient when the lab was due. 
The system would then track completion of the tests and provide the results, or the lack of results, to the ordering physician. The medication intervention interface was active when ordering new prescriptions or changing the dose of the medication being treated. The agents included oral hypoglycemic medications for Type 2 diabetes; for hypertensive patients we examined the use of diuretics, ACE inhibitors, and angiotensin receptor blockers (ARBs); and for hyperlipidemic patients, we examined the use of statin medications. This is a picture of our medication-ordering interface within our electronic health record, showing the standard features for ordering medications; this was what was available to our control group. For the intervention group, we included the same interface with a monitoring module that allowed the ordering of laboratory tests at the time of the medication prescription. A detailed view of that ordering interface shows that it provided information on relevant tests that may have been performed, when they were performed, and the lab results; in this case, liver function tests in a patient being prescribed a statin. The medication module defaulted to ordering the efficacy tests. For example, if you ordered a new statin, a lipid panel would be automatically ordered at a specified date. The physician could turn off that order if they wished. For safety monitoring, physicians preferred that we not place orders by default, because they may not view them as indicated at all times; instead, they could add a safety test by clicking on a new order at the time of ordering the efficacy tests. This was done during the process of ordering the medication or changing the dose. The Medication Metronome module would then initiate automated patient outreach after the ordering was performed. 
So, a mailed letter with a lab slip would be sent one week prior to the test's due date. If the patient did not complete the lab tests, a second letter would be mailed one week after the test's due date. And if the patient still did not have the lab completed, a notification that it was persistently overdue was sent to the primary care physician three weeks after the due date. Results appeared in the watch list, or result manager function, within the electronic health record; an example would be this, where the physician would be notified about a test result and/or that it was overdue. In terms of outcomes, we looked at time-to-event outcomes for A1c and LDL testing; these were the efficacy outcomes. Specifically, we looked at the time from drug ordering to the next lab test performed, and the time from the drug order to the lab test being at or below goal. We examined these using Cox proportional hazards regression models. We also looked at the proportion of time that the patient was at goal: specifically, the percent of follow-up time over the following 6 months that a patient was at or below the efficacy goal for that specific risk factor. So, for example, for hemoglobin A1c we looked to see whether the patient was at or below 7, or 9, among those who were prescribed hypoglycemic agents. For cholesterol monitoring, we looked for patients whose LDL was less than or equal to 130, or less than 100 for those with cardiovascular risk factors or diabetes; these analyses were performed using linear regression models. We also looked at safety outcomes, including the percent of safety lab monitoring completed within 12 weeks of the medication order; this included renal function, or creatinine, for those on diuretics, ACE inhibitors, or metformin. We also looked at liver function testing after the initiation of statins. 
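The outreach schedule just described (a letter one week before the due date, a second letter one week after, and a physician notification three weeks after) amounts to simple date logic. Below is a minimal sketch of that schedule; the names (`LabOrder`, `outreach_events`) are illustrative, not the study's actual implementation:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LabOrder:
    """A future lab test tied to a prescription (illustrative stand-in)."""
    patient_id: str
    test_name: str
    due_date: date
    completed: bool = False

def outreach_events(order: LabOrder, today: date) -> list[str]:
    """Return the outreach actions that are due as of `today`."""
    events = []
    # First letter with a lab slip goes out one week before the due date.
    if today >= order.due_date - timedelta(weeks=1):
        events.append("mail reminder letter with lab slip")
    if not order.completed:
        # Second letter one week after the due date if still incomplete.
        if today >= order.due_date + timedelta(weeks=1):
            events.append("mail second reminder letter")
        # Persistently overdue: notify the PCP three weeks after the due date.
        if today >= order.due_date + timedelta(weeks=3):
            events.append("notify primary care physician: persistently overdue")
    return events
```

A completed test short-circuits the later steps, mirroring the study's design of escalating only when the lab remains undone.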
These analyses were done using logistic regression models. All of our models used multivariable adjustment for patient age, gender, race and ethnicity, primary language spoken, and the baseline lab value being examined, and we accounted for clustering by individual physician. I am now going to turn it over to Dr. Grant, who will present the results of the study. (Richard W. Grant, MD, MPH)
Thank you, Dr. Atlas. So now I'll talk about our results. As Dr. Atlas mentioned, we randomized at the physician level, and Table 1 shows the baseline characteristics of the patients involved in the study. We ended up analyzing 3,655 unique patients, and specifically 5,454 prescriptions of study drugs written during the study period. What you'll see here is that the intervention and control patients were basically very similar; there were some small, statistically significant but clinically minor, differences in gender and primary language. The first outcome we looked at was the time to the next lab test. The hypothesis was that if we could build a system to help physicians order follow-up lab tests at the time of prescribing, we might have more efficient lab testing. What we see in this diagram is the time to the next LDL test. The control patients are in red and the intervention patients are in black. We found a statistically significant difference in the time to the next LDL test, with it being 30 days shorter for 40% of the patients in the intervention arm. The hazard ratio was 1.15, after adjusting for the small differences at baseline and for clustering by physician. We also looked at time to reaching LDL goal among patients prescribed statins, and here we limited our analysis to those patients who were above goal when the statin was prescribed. The difference we saw between the control and intervention arms did not reach statistical significance, but it was very close, so there was some indication that, at least for statins, the Metronome system led to more rapid completion of the LDL test, and then more rapid attainment of LDL goal. When we looked at patients who were prescribed oral medications for A1c, 880 patients received a prescription during the study period, and unfortunately, we found no difference in the time to the next A1c. 
And similarly, we found no difference in the time from the prescription to reaching A1c goal among the 622 patients who were above goal at baseline. We also redid this analysis with the subset of patients who had an A1c above 9, which was 175 patients. Among these patients with higher A1c, there was some indication that we may have been getting to goal more quickly, but this was a small population, and it was not statistically significant. The two outcomes I just showed, time to the test and time to goal, were, in our conceptual model, on the pathway towards better overall control, and so our other main outcome was the percentage of time after prescription that patients were at goal. And here, comparing intervention and control patients among patients with elevated LDL and elevated A1c, we saw no significant differences between study arms. So in terms of the Medication Metronome's ability to improve the efficacy of treatment for A1c and LDL, our results indicate that it may have improved the process of care, particularly for statins, but it didn't ultimately have an impact on risk factor control over the long term. This is the same table, but this time looking at patients who actually received the Medication Metronome lab order, and again there was no difference between study arms. So, to clarify, the first was an intention-to-treat analysis of all patients, and this is an on-treatment analysis with a smaller number of patients, and again, no difference between arms. We also were interested in safety: did the use of the Medication Metronome tool lead to more frequent monitoring of safety labs? 
For blood pressure medications we looked at creatinine and potassium levels, for metformin we looked at creatinine, and for statins we looked at liver function tests. Here we found no difference between study arms, and in fact, safety laboratory monitoring was not terribly common: renal function monitoring occurred for about half of blood pressure prescriptions, creatinine monitoring less often for metformin, and LFTs even less often for statins, and again, no difference between study arms. We conducted a survey after the study was completed to try to gain some insight into the barriers to use of the Medication Metronome tool. We found that, although our PCPs were initially very enthusiastic about using the tool and liked the concept of having a between-visit system, in practice there were barriers to use. These included the fact that in a fee-for-service system, there was no financial incentive to do non-visit patient care. They also found that, because our patients have the opportunity to get laboratory tests done outside of our system, which we could not capture, it sometimes led to confusion about why patients were receiving letters to get labs done that they had already completed. And then there was a sense that there was a lot of pressure at the time to improve productivity, and a lot of other initiatives happening simultaneously, so in some ways, once it was implemented, it was difficult for providers to fully embrace. So there are a number of study limitations to consider. 
As I mentioned, despite the initial enthusiasm from our PCP stakeholders and advisors, the system was used in only 21% of possible orders. Contributing factors included the lack of incentives and the lack of an established workflow within our system: our physicians had already developed their own ways of monitoring and following the results of prescriptions, and it was difficult to fit the new system into their current workflow. So in conclusion, we developed a health IT tool to support between-visit laboratory monitoring following the initiation or change of a chronic disease medication. We found that while there is some indication that it may have helped the process of cholesterol management, it did not ultimately increase risk factor control or safety monitoring compared to usual care. The implications of our study are that there are persistent gaps in goal attainment for managing chronic disease, which supports the role of non-visit-based care to supplement and expand basic face-to-face interactions. For health IT interventions to support between-visit work, this represents a new model of care that requires more patient and provider input, support for standard workflows, and educational outreach. Ultimately, new payment models that reimburse for non-visit-based medication management may be needed before visit-independent medication management systems will be more widely adopted. Dr. Atlas and I would like to thank AHRQ for their support, and I'd also like to thank the other members of our team who were involved in this project. Thank you. (Dr. White)
Very good. It sounds like that finishes your presentation, is that correct? (Dr. Grant)
That’s correct. (Dr. White)
Okay, very good. Thank you very much for your excellent presentation. We're going to hold questions until the end, if that's okay. If you have questions, please feel free to submit them through the Q&A bar at the bottom right-hand side of the WebEx window. I'm going to mention a couple of housekeeping notes here. A number of folks have been sending questions about occasionally having logistical trouble, with either the slides not advancing or the sound not quite working right. First, if your slides are not advancing, the WebEx window may not have completely loaded, so you should probably log out and then log back in. However, if you have missed a portion of this, do not fret. The slides and a recorded version of this WebEx will be available at http://healthIT.AHRQ.gov within about two weeks after this event, and we'll make sure that you all know what the website is. And finally, I just want to say that, although we don't let participants see other participants for privacy reasons, I thought you would like to know that there are about 400 of you, representing at least three continents, so thank you so much for taking the time today to join and listen to the findings being presented here. All right, on to our next presentation, "Assessment of Pediatric Look-alike, Sound-alike, or LASA, Substitution Errors." Your presenter is Dr. Bill Basco, who is a Professor of Pediatrics and Director of the Division of General Pediatrics at the Medical University of South Carolina. His AHRQ-sponsored research has focused on dosing and drug substitution errors in outpatient child prescriptions. He also serves as a mentor for junior faculty in his Division examining the use of health IT in improving asthma care and in the care of hospitalized children. Dr. Basco, the floor is yours. (William T. Basco, Jr., MD, MS)
All right, thank you, Dr. White. I appreciate the opportunity to present today. I'm going to be talking to you about a type of error that many of you may be unfamiliar with. Pediatric look-alike, sound-alike errors occur when the names of two drugs either look alike, which is called orthographic similarity (an example would be Tegretol and Tequin), or sound alike, which is called phonetic similarity (an example would be Adderall and Inderal). If you'll notice, that's also how these drug pairs are published: a look-alike, sound-alike error occurs in a pair of drugs, and it's not meant to imply directionality. You could have a substitution either way; that will become pertinent later. Our own past work showed that these errors occur less commonly than dosing errors in children, yet there is a significant potential problem, with a 2008 report identifying over 1,500 drug pairs with look-alike, sound-alike error potential. Two previous studies completed by Phatek and co-authors evaluated look-alike, sound-alike errors in pediatric data. They tested drug pairs empirically based just on the orthographic similarity of the drug names. They found over a thousand potential errors and, more interestingly, they found that the error probability increased with increasing orthographic similarity. So, the more similar the drug names were, the more likely they were to find potential substitution errors. Unfortunately, neither of those studies reported the look-alike, sound-alike error frequency. They did define a couple of approaches that were helpful. They defined a refill error, where they would identify a patient who received Drug A in the look-alike, sound-alike pair three times and then received Drug B, and they found that those constituted about 80% of the errors they identified. 
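As an aside on what "orthographic similarity" means computationally: one common way to quantify how alike two drug names look is a normalized edit distance. The sketch below uses classic Levenshtein distance; the published studies may well have used a different or more sophisticated metric, so treat this as a generic illustration:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (row-by-row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def orthographic_similarity(a: str, b: str) -> float:
    """1.0 means identical spellings; 0.0 means nothing shared."""
    a, b = a.lower(), b.lower()
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))
```

Under this measure, a confusable pair like Adderall/Inderal scores well above an unrelated pair, which is the intuition behind "error probability increased with increasing orthographic similarity."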
They also defined the initial dispensing error, where they found a patient who got one drug and then received the paired drug three times, and that occurred about 20% of the time. Our own past work and this current work focus on refill errors, because those are the ones you can identify at the point of dispensing and potentially avoid, whereas the initial dispensing error is only evident well after it has occurred. Our approach has always been from the pharmacy viewpoint, where dispensing patterns could be used to trigger a screening alert in the pharmacy computer system at the point of dispensing. That alert would prompt the pharmacist to query the patient as to whether receiving the second drug in the pair was appropriate. So, in our previous research, we looked at a pattern where the patient received Drug A three times within a 12-month period, and we termed that the patient's "usual drug." We then identified times when they received Drug B in the pair within six months of qualifying for a usual drug, and asked what would happen if that triggered a screening alert. Again, just to remind you, either drug in a pair could serve as the usual drug. Once you can identify these patterns, you can calculate the frequency of screening alerts, and the frequency of screening alerts is really the proxy for the frequency of look-alike, sound-alike errors. In our 2010 project, published in Academic Pediatrics, we tested this approach with 11 look-alike, sound-alike drug pairs, and in those 11 pairs we found a screening alert frequency of 0.28 screening alerts per 1,000 prescriptions. As a comparison, that's much lower than the dosing error frequency in pediatric outpatient prescriptions, which is on the order of 7-15%. Our conclusion from that work was that the frequency of these errors, in children at least, appeared to be much lower than other types of pediatric medication errors. 
And, that evaluating these as screening alerts at the point of dispensing would not appear to impose an unbearable burden on pharmacies. But all the way through this, we've been conscious of the need to pay attention to the trade-off between signal and noise, because all of you who use electronic health records know how many pop-up warnings you ignore every day, and we didn't want our future work to be a part of that problem. Now, on to the Health IT Portfolio project. This was an R03 mechanism. Again, we kept our pharmacy screening perspective, where dispensing patterns could trigger an alert at the pharmacy and the pharmacist could inquire as to the appropriateness of that prescription. The aims of the study were to utilize a modified Delphi panel approach to evaluate the potential severity of specific look-alike, sound-alike drug substitution errors; that was to help us identify which drug pairs to prioritize down the road. We also wanted to estimate the frequencies of screening alerts with a wider set of drug pairs than we had used previously. We utilized two sources for our published lists of look-alike, sound-alike pairs: the Institute for Safe Medication Practices, as well as MedMARX, which is the US voluntary error reporting system. We merged those two lists and kept the unique pairs. The merged list had 1,784 unique pairs, but again, the substitution could be in either direction, so you really have to count each pair both ways. So, after merging those lists, we were dealing with over 3,500 directional pairs. That was a daunting list, and we knew that we needed to reduce it. It had always been our intention to focus on outpatient preparations. 
So, we conducted a review process that ended up removing 867 of those initial 1,784 pairs. Because we were focusing on outpatient drugs, examples of things we removed were any pair where one or both of the preparations were injectable, non-oral preparations, and vitamins; all of those choices certainly introduced limitations, which we can discuss if we have time. After the exclusions, we retained 917 pairs, and again, if you flip each of those, it's 1,834. Now, for the Delphi panel, we had intended to have practicing pediatricians score these substitution errors based on the degree of potential harm to the patient. As part of our survey development, we conducted cognitive pretesting of the concepts we were going to ask these pediatricians to score, as well as the terminology to use. I will just say that the pediatricians involved in the pretesting did not participate in the later Delphi panel. We conducted online piloting of the surveys for the wording, the format, and to determine time to completion. A 50-pair survey took approximately 20 minutes, and we felt that was an appropriate duration. That resulted in 37 versions of the survey. We recruited a convenience sample of 37 participants from professional organizations via listservs, and we ended up with participants, 59% of whom were female, from nine different states. Looking at a pair error, again, each pair consists of two drugs, which you can call Drug A and Drug B, but the direction of substitution could be either way. Through our pretesting, our participants determined that the best terminology to use was that Drug A was the "intended" drug and Drug B was the "delivered" drug. We realized that for each LASA error (look-alike, sound-alike error), there were actually two drug errors that occurred, leading to two problems for the patient. 
We had to ask the panelists to estimate the potential harm of not receiving the intended drug, and also to estimate the potential harm of receiving the delivered drug instead. We utilized REDCap as an online survey tool. For Round 1, we e-mailed a unique survey to each participant, and they scored their 50 pairs. For Round 2, we took the same 37 surveys and mailed a different link to each of our participants. So, between Rounds 1 and 2, each pair was scored by two participants. This is a screen capture of what the REDCap survey instrument looked like. The participants scored the pairs on potential harm using a continuous scale from no harm and little harm to moderate harm, severe harm, and death; this nomenclature is consistent with the MedMARX terminology. In the yellow box here you see the pair, in this case ZYVOX and ZYFLO, the intended drug and the delivered drug, as well as, in parentheses, the generic name if needed, and the action or class of the drug. Again, participants were asked to estimate the potential harm of not receiving the intended drug, and the harm from receiving the delivered drug. They would do that by grabbing these blue boxes and moving them back and forth on the continuous scale, which is converted to a number here in the box; that number is what is actually downloaded to the data set. We realized through pretesting that we had to lay out the assumptions. We asked our participants to imagine that the patients had no medical conditions other than the one for which they were supposed to receive the intended drug. They were not to consider allergies in their scoring, and not to consider dose changes. Also, it was interesting that our pretest participants felt we really needed to define the error period, because they felt they might score the potential harm differently for a one-month error versus a patient who receives the wrong drug month after month. 
We also had to be very clear, and we did this through several methods, to make sure that they were not estimating the chance that harm would occur; we were asking them to evaluate the degree of potential harm that might occur should the patient experience an adverse effect. This scatterplot shows the distributions for Rounds 1 and 2, and there are some overlying points, I will tell you that. Round 1 results are shown in blue, Round 2 in red. Basically, each point represents a look-alike, sound-alike error: a pair's error potential, scored on the y-axis by the estimate of harm for not receiving the intended drug, and on the x-axis by the estimate of harm for receiving the delivered drug. We felt pretty good about the distribution. The participants certainly identified drug pairs, in the lower left-hand corner, where they felt the potential harm was pretty low for both parts of the error. Of course, in the upper right-hand corner are the pairs where they felt the potential harm was high, both from not receiving the intended drug and from getting the delivered drug. Nonetheless, we had to figure out how to cull this down to a manageable set of pairs. We conducted cluster analysis to identify some reasonable cut points to help us identify the high-potential-harm clusters. After the cluster analysis, we kept any pair where either of the two participants scored either of the two parts of the look-alike, sound-alike pair error above a cut-off of 82. That identified 608 pairs that were kept for Round 3. For Round 3, we were able to have each pair scored by three participants, and we averaged those three results for the final scatterplot shown here. The first thing you can notice is that the cluster analysis was effective in removing the pairs that were generally of low potential harm. 
Those would have been the ones here in the lower left quadrant for either part of the substitution, and we were left with pairs that scored generally high on at least one aspect of the substitution. Again, those in the upper right-hand corner were the ones that scored high either way. Now, I'm in the process of revising and resubmitting a manuscript to Academic Pediatrics on this, and in that manuscript I hope to publish the whole set of 608 pairs, ranked, as an online appendix. But I will show you today the top 10 errors ranked both ways. These are the errors ranked by potential harm of receiving the delivered drug in error. One thing you might notice is that the anticoagulants appear multiple times just in the top 10. That was of interest to us, because there aren't that many children on anticoagulants. The highest-ranked one was a patient who would have gotten K-Dur, which is a potassium supplement, instead of Kayexalate, which is typically a drug given to patients who already have an elevated potassium. At least on face validity, the panelists did a good job ranking that one very high. Then, looking at the top 10 errors ranked by potential harm of not receiving the intended drug, again, some anticoagulants appear, but you also start to see maintenance drugs: Prograf, used for patients with organ transplants, and Norvir, for patients who have HIV. Again, we felt we had some face validity here, in that they identified drugs that would be bad for patients to miss. Now, on to Aim 2, which was the frequency estimates. Again, the frequency of screening alerts is really an estimate of the actual error frequency. We also realized by this point in our study that it's really an estimate of the pharmacy screening burden: how many alerts would we generate using this approach? 
In order to calculate these frequencies, we used 10 years of South Carolina Medicaid-paid ambulatory claims data for patients under 20 years old, and we ran the frequencies on those 608 pairs that were included in Round 3. We decided to start with a very simple definition of error. We took any subject who received both drugs in a pair within six months of each other. So, that's a patient who got Drug A and then within six months got Drug B, and we said, "Let's use that as our simple measure of a potential substitution error." We also realized that that would give us our maximum error estimate. First, the good news. For 34% of the pairs, there was not a single patient in 10 years of data who received both drugs in a six-month period. For an additional 49% of the pairs, the cumulative total of subjects who received both drugs in a pair would have amounted to less than one screening alert per day in the whole state over those 10 years of data. So, we felt that for 83% of the pairs we kept in Round 3, the pharmacy screening burden could be considered low, and in fact, you could probably put all of those going forward into any sort of screening program and not overburden pharmacies. Now, the bad news. By contrast, among the other 17% of pairs, that approach would have generated 27 screening alerts per day. That's less than one per county per day, but still a far larger number than what we had seen for the others. There were 19 pairs, 3% of the total, where there were over 1,000 subjects who received both drugs within six months of each other. You can look at those drugs and realize they are both common pediatric medications, as well as medications that might be appropriate for a patient to be on at the same time. It calls into question whether we can really use this approach for some of these pairs that are so common.
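The simple screening definition described above–flag any patient dispensed both drugs of a pair within six months of each other–could be sketched like this. The drug names, claim layout, and 183-day window are illustrative assumptions, not the study's actual implementation.

```python
# Sketch of the six-month co-dispensing screen. Claims are assumed to
# be (patient_id, drug, fill_date) tuples; names here are made up.

from datetime import date, timedelta
from collections import defaultdict

WINDOW = timedelta(days=183)  # roughly six months

def patients_flagged(claims, drug_a, drug_b):
    """Return the set of patients with fills of both drugs within
    WINDOW of each other (the 'maximum error estimate')."""
    fills = defaultdict(lambda: {"a": [], "b": []})
    for patient, drug, fill_date in claims:
        if drug == drug_a:
            fills[patient]["a"].append(fill_date)
        elif drug == drug_b:
            fills[patient]["b"].append(fill_date)
    flagged = set()
    for patient, d in fills.items():
        if any(abs(da - db) <= WINDOW for da in d["a"] for db in d["b"]):
            flagged.add(patient)
    return flagged

claims = [
    (1, "DrugA", date(2012, 1, 10)),
    (1, "DrugB", date(2012, 3, 1)),   # within 6 months -> flagged
    (2, "DrugA", date(2012, 1, 10)),
    (2, "DrugB", date(2012, 12, 1)),  # outside window -> not flagged
]
print(patients_flagged(claims, "DrugA", "DrugB"))  # {1}
```

Counting the flagged patients per pair, per day, per state is then what yields the screening-burden estimates quoted in the talk.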
For some of our global project limitations: because of the sheer number of drugs involved, we chose to limit the pairs that we scored in the Delphi process, and we know we eliminated drugs that would be of interest to other parties, particularly in-patient drugs. The other thing we realized is that these lists of look-alike, sound-alike pairs are not pediatric-specific. Pediatricians may be unfamiliar with some of the drugs we were asking them to score. For example, it seems almost all of the Parkinson drugs are in look-alike, sound-alike pairs. Certainly most pediatricians are not going to have familiarity with those drugs. Something I'd like to rectify in the future is that, for the Delphi process, we received input only from pediatricians; in future work we'd like to include pediatric clinical pharmacists as well. Our conclusions regarding the harm ratings are that pediatricians have ranked 608 potential look-alike, sound-alike error combinations by harm rating for children. We feel that is a valuable tool, because it gives researchers and clinicians an idea of how to prioritize an approach to these drug errors going forward. Conclusions and implications regarding frequency are that, again, for 83% of those pairs, we can take a fairly simple approach: a child receiving both drugs in a pair within a six-month period could trigger a screening alert identifying a potential substitution error, and we don't think this would produce a significant burden on pharmacies. For 17% of the pairs, we're going to need to do more work to refine what dispensing patterns we could use to trigger a screening alert, in order to maximize the trade-off between the degree of potential harm and the screening burden it would produce. And again, for 3% of those pairs, so many subjects received the drugs within a six-month period that screening for those errors may not be possible with this approach.
Our next steps: for the 17% of drugs where it gets a little complicated, we have a process to evaluate the positive predictive value of those potential errors, which is described somewhat in our 2010 paper, and we want to identify the dispensing patterns that maximize that positive predictive value. For the 3%, we are going to have to look harder at combining the error frequency data with the potential-harm data to determine which of those pairs we can use going forward. What we would like to do ultimately is put together a process to test real-time screening for these errors in clinical pharmacies. This has all been a theoretical approach so far. We haven't tried to actually do this in pharmacies yet, but that is where we are trying to go, eventually. So, how about overall project lessons and challenges? One is that look-alike, sound-alike pair lists are always being updated, so it's a bit of a challenge for us to decide how often to go back and add drug pairs. It's certainly simple to recalculate the frequencies of these errors, but it's not a small thing to go back and re-create the Delphi process to rank any new pairs that are added. Another thing we encountered is that finding good lists of the generic preparations of the brand names that appear in a pair list has been a real challenge. If the drug in a look-alike, sound-alike pair is a brand name, you have to crosswalk it with all of the generic names of the drug, and then any brand versions of that generic form, because the prescription is written for one drug, but you might experience an in-pharmacy substitution, depending on what's available, and it's the drug that's actually dispensed in the pharmacy that goes into the claims data that we use. We had to link to all those drugs to make sure we weren't missing potential name-substitution errors. Of course, a screening alert is not the same as a true look-alike, sound-alike error.
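As a toy illustration of the positive-predictive-value idea mentioned here–of all screening alerts fired for a pair, what fraction turn out on chart review to be true look-alike, sound-alike errors–the calculation is simply a ratio. The numbers below are made up, not the study's data.

```python
def positive_predictive_value(true_errors, total_alerts):
    """Fraction of screening alerts confirmed as real substitution
    errors on review; higher PPV means less wasted pharmacist effort."""
    return true_errors / total_alerts

# e.g., if chart review confirmed 12 real substitution errors among
# 400 alerts generated for a pair, the PPV would be 0.03
print(positive_predictive_value(12, 400))  # 0.03
```

Refining the dispensing patterns that trigger an alert, as the speaker describes, is in effect a search for trigger definitions that raise this ratio without missing the harmful cases.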
And in claims data that we’ve used to this point, we have limited clinical data available to answer this. That’s something we hope to rectify with our implementation efforts, where we have real prescriptions in hand and we can do some chart reviews to try to identify the real, positive predictive value of some of these alerts. I’ve included some selected references that touch on some of the topics I’ve covered. My contact information is here. Thank you, and I will turn it back to Dr. White to introduce our next speaker. (Dr. White)
Thank you so much, Dr. Basco. Excellent talk, and you guys have great questions coming in through the Q&A, which, again, we will get to toward the end. Next up, rounding out the presentations, we have "Improving the Approach to Electronic Medication Reconciliation" by Dr. Michael Weiner. Dr. Weiner is an Associate Professor of Medicine at the Indiana University School of Medicine, and is the Director of the Indiana University Center for Health Services and Outcomes Research, Director of the Regenstrief Center for Health Services Research, and the Principal Investigator of the Department of Veterans Affairs Health Services Research and Development Center for Health Information and Communication. I have no idea how you have time to do this presentation. You are a busy guy! His clinical and health services research focuses on measuring and improving the quality, coordination, and delivery of health services. He studies the effects of health information and health information technology on clinicians' practices and patients' outcomes. Dr. Weiner. (Michael Weiner, MD, MPH)
Thanks very much. My goal for today is to identify at least three findings related to improving the approach to electronic medication reconciliation used in electronic prescribing. I'm going to present more than three along the way, and I think you will get a feel for it. I will use some abbreviations and acronyms shown here. You may be familiar with some or many of these. PADE will refer to potential adverse drug events, and PAML will refer to preadmission medication lists, which in most cases means an outpatient medication list. Here's an example of a real transition in care from a patient case. The patient at Day 0 is discharged from the hospital for heart failure. Furosemide, which the patient was taking, was discontinued in the orders but not in the discharge instructions. On Day 5, the home nurse discovered furosemide in pill boxes in the patient's home, and on the following day the patient's creatinine was found to have increased from 1.6 to 3.7. The patient declined emergency care, and by Day 12 the patient was found to be confused at home, with a creatinine of 5.2 and a life-threatening potassium of 7.6, leading to intensive care admission. This case demonstrates that problematic medication management can actually harm patients. A few studies show that approximately 41% of discharges from hospitals have medication discrepancies. About one quarter of these have the potential to harm patients. In about 20% of discharges to home, patients experienced adverse events, and most of these adverse events are due to medications. This is an example of a computerized medication list output. It's just an excerpt, and in it you can see three listings of dispensed medications, all of them in this case for aspirin, each with different dates and different numbers of tablets, refills, or new prescriptions.
The point here is to recognize that conventional means of displaying medication output can be very confusing and time-consuming to review. The process of comparing current medications to planned medications at transition points in medical care is part of the medication reconciliation process. There's more to it than that, and I generally think of medication reconciliation in about five steps: assessing the medications, comparing current medications to planned medications, deciding what to do, communicating the plan, and then documenting it. The Joint Commission does have safety goals for medication management. These goals specify that one should coordinate medication information during transitions in care inside and outside the organization. We should communicate with other providers, educate patients about safe medication use, provide patients directly with written information about their medications, including why the medications are prescribed, and advise patients to carry medication information with them at all times. Next, I'd like to describe a project I conducted, sponsored by AHRQ. I will tell you the methods used; the results will be forthcoming in a future publication. I worked with a team to conduct a medication reconciliation study, where we integrated an electronic medication reconciliation system with an electronic prescribing system. We conducted a randomized controlled trial of medication reconciliation, and we determined whether facilitating med recon altered the process and the incidence of errors in ambulatory care. Our hypothesis was that facilitating medication reconciliation would improve completion of the process and would decrease the incidence of drug-related medical errors. To do this, we built on an existing electronic medical record system, which had access to medications that were prescribed, and the data were fed into the electronic medical record.
We created a new browser-based, plug-in module for this electronic medical record system, and that module facilitated the management and documentation of medications. Its output was fed into the electronic prescribing process, which on our system was called Gopher. This slide shows you a screenshot of the med recon module. It lists medications that patients were taking, and then allows for annotation of those medications, including a way to note any differences in the dosing, and whether the patient was actually taking the medications as they were prescribed. We tested this module in a controlled fashion on an inpatient medicine service, and we looked at a number of outcomes and variables, including patients' and providers' characteristics, the number of medications the patient was taking, whether a medication reconciliation was performed, who did it, what medications were then prescribed, and, in particular, an issue I think is important to measure: whether a reason was provided for not continuing a medication that the patient was previously taking. We looked at medication reconciliation in outpatient follow-up visits following discharge. Many reasons could be found for not continuing a medication that was previously prescribed. These are examples of reasons for not continuing medications that we used in our study: the drug could be contraindicated, a nondrug approach could be used, or a different drug could be used, in some cases because of a formulary issue. In thinking about how to measure discrepancies in medications, we should consider the possibility that a drug could be substituted for a good reason, such as a formulary change. These are examples of ways in which drugs can match. There could be an exact match in the medication, which would mean an exact match in the drug, the dose, the route, and the frequency. Or, there could be a change to the dose, route, or frequency.
There could be a change to the drug but not the drug class, or there could be a change to the drug class but not the indication for the drug, such as if hypertension were being treated, but with a different class of drug. These are examples of ways to classify potential for harm and severity. Confidence in potential for harm could be measured through a subjective rating, and then one should consider, or could consider, the potential severity of harm, which is different from the potential for harm in general. In using an electronic medical record system to identify potential adverse drug events, one could look for many different kinds of triggers; examples are shown here. For example, one could look for specific drugs, which themselves might be an indication of an adverse drug event–for example, diphenhydramine for an allergy treatment. Another example might be a combination of a diagnosis and a drug, such as the presence of angioedema when an ACE inhibitor is also prescribed at the same time. You can see some of the other examples on this page. If one programs an electronic records system, or data from it, to identify these triggers, the actual occurrence of an adverse drug event would need to be confirmed after the identification of those triggers. Thinking about usability, depending on the stage of your work, it may be a good idea to measure it; these are examples of metrics related to usability. They include time spent, measurement of actual medication errors, measurement of incidents that might affect performance or satisfaction, actually rating satisfaction, and then measuring aspects of workload, such as mental demand, physical demand, and performance. I will describe a couple of studies along the way, here. This is one from a survey of providers done by Blake Lesselroth and his team.
They've done quite a bit of work in the field, and in this study they looked at the accuracy of identifying medications and the availability of tools and resources to help with med recon. They found that one should consider potential clinical benefits and the quality of tools. Workflow compatibility and climate for implementation are also important. The climate for implementation refers primarily to leadership's support for the conduct of med recon, and also the availability of resources and logistics to support the process. Based on our work, we came up with a prototype, or an example, of a new design for med recon. Compared to the previous design, you might see some improvements here: medications are listed in a very clear fashion. They are sorted, and potentially sortable, by different means. Dosing is easy to see, and it may also be possible to estimate the patient's adherence to each of these medications using available information, such as prescribing data from the electronic records system. Potentially, providing information about the patient's adherence could either improve the reconciliation process or improve other outcomes of care. That remains to be tested. The study by Sanchez et al. looked at the planning process for med recon through interviews of 13 healthcare professionals in different types of roles. They assessed perceptions of the implementation process, facilitators, and barriers, using multidisciplinary teams. They reported several key findings. Understanding the principles of performance improvement may be helpful in facilitating implementation. There is a need to integrate reconciliation into diverse workflows in the workplace. Some changes to roles may be needed to facilitate the process. Training is important, and one should monitor both compliance and the impact on prescribing. Lehnbom and colleagues did a review of med recon and found some additional activities to consider. Here are a few.
One should consider hospital readmissions, outpatient visits, and morbidity. In planning studies, it's helpful to include a control group or a comparison group, and an adequate sample size. Consider randomizing, and understand the activity of the control group. We've got to know what the control group is actually doing in the process of med recon, because they might actually be doing it on their own, or they might not. And it's important to look at multiple sources of information in the med recon process. Additional things to consider in planning studies or implementations of med recon: try to integrate the process with ordering and decision support. That's actually one of the most difficult things to do, because in some cases it requires a lot of technical skill or resources. Think about the role of the subspecialist. Is the subspecialist going to reconcile all medications, or only the ones within their area of expertise? And do try to make sure the medications listed in the discharge summary in the hospital case match the medication list in the instructions to the patient. Again, something that can be tricky to do, but is doable with the right workflow. Another study by Lesselroth actually implemented a self-service kiosk in a primary care clinic, so that patients could directly indicate which of their prescribed medications they were taking. The study surveyed 91 primary care providers about attitudes, perceptions, and climate for implementation, and 43% indicated that they didn't think they had the necessary resources to manage medication discrepancies. The climate for implementation was not optimal; nevertheless, the system was an improvement over their previous types of usual care. Most respondents indicated this approach was still better than what they had before.
Ken Boockvar and his team, also experienced in medication reconciliation, looked at nursing home transfers to the hospital in a study of 469 patients, and compared hospitals that had electronic health record systems to those that didn't. They measured medication discrepancies at hospital transfer, and looked at adverse drug events. They didn't find a significant difference between the EHR and the non-EHR groups. They concluded that perhaps additional, more specialized computer tools might be needed to lead to important differences in care. A study by Schnipper and colleagues showed that the reconciliation process can improve discrepancies. They randomized 14 medicine teams in 2 hospitals, and studied 322 patients admitted to the teams. They looked at discrepancies, and found a significant difference in potential adverse drug events related to the discrepancies when an intervention can facilitate this process–essentially, a computer-based recon tool for managing meds and facilitating decision making. They identified an adjusted relative risk of 0.72 with the intervention group. Finally, the summary by Lehnbom et al. that was just published this year was a systematic review identifying 83 articles studying medication discrepancies and problems. This review found that the process does improve with med recon tools, and that unintentional medication discrepancies can occur in a wide range of patients, but that we don't yet have a lot of evidence that interventions improve length of stay, readmissions, and mortality. So, finally, I present for you the key lessons from the work that I've shown you. If you can actually implement all of these bullets, it would be terrific, and you would have a fantastic implementation or study in med recon. First, review and develop policies carefully. Policies in the facility may be very important in actually promoting adoption of med recon.
Involve people from diverse services–you can't just have medicine; you need nursing, patient safety, pharmacy, and potentially others. Seek early feedback on your approach. Be adaptable, because workplaces differ, implementation styles differ, and cultures differ. And if you're doing research in one of these environments, think about how to do it without actually delaying or stalling the implementation process. Identify a comparison group so that you can know if your process is actually working. Train the professionals. Clarify who is supposed to do what, and when. If you're using electronic records, which just about everybody is now, work with your software developers to see if you can improve the user interface. Build it so that it takes into account what the provider's workflow is. Think about principles of performance improvement. Target processes, outcomes, and communication with both patients and providers, and try to get patients involved in self-reporting their medication history as much as possible. Include the ambulatory setting as well as the in-patient setting, and try to provide value for both patients and clinicians. I want to thank AHRQ for supporting this R18 work, and I'm going to now turn the session back to Dr. White to moderate questions and answers. (Dr. White)
Thank you so much, Dr. Weiner. Fantastic presentations, one and all. We've had a lot of great questions come up along the way, so we're going to move to the Q&A session now. We've got about 25 minutes left for questions, which is great. I'll throw out a couple of questions to each of you specifically, if you don't mind, starting with Dr. Atlas and Dr. Grant, and then we can try to get to some questions posed by attendees. So, Dr. Atlas or Dr. Grant, in the process of listening to the presentations, a couple of questions came up about financial incentives and payments to support some of the changes that you all talked about. If you all want to talk about financial incentives generally, in terms of implementing some of the work you've been talking about–whether it's the involvement of pharmacists, or financially supporting treatment management, what have you–what are your thoughts on the subject? (Dr. Atlas)
This is Steve Atlas. I'll answer first, and then I'll let Dr. Grant make some additional comments. First, in our network, our physicians are used to functioning in a fee-for-service world. So, despite the fact that patients or physicians often cancel follow-up visits, as we've shown, that is still the primary method that doctors are comfortable with. So there are some barriers, just in how doctors currently practice, to implementing between-visit care. One of them is clearly reimbursement for time, and how one is paid for that. I think that these ACO models, or other models of care that provide some compensation for these types of activities, would go a long way. But then, there are still issues around workflow to do this. We found that some of our doctors had developed their own workflow, and their patients had expectations for how they would wish to do it. In some cases, our system would have been more efficient, but because the patient and doctor had an established way of getting subsequent laboratory tests done, it was hard for them to change it during the study. So, I think there are a lot of barriers to this type of work, and some of them include how we redesign care to take advantage of the time between visits, making that observable to both the physicians and the patients, and then supporting physicians in their practices in doing this. That gets to, sort of, is this something that the physician has to do? One could argue that this could be a function for pharmacists, though many practices, including ours, do not have a practice-based pharmacist. So, that could be an option for systems that have those types of arrangements, but unfortunately, for many like ours, that isn't an option. Richard, do you have any other comments? (Dr. Grant)
Yeah, I'll make one comment, sort of from the researcher perspective, which is, you know–we followed the best practices suggested in the last talk that we just heard, including getting early feedback from our key stakeholders. We have a PCP advisory committee, and we spent nearly a year eliciting feedback and getting their input into the design, and I was really struck by the contrast between the enthusiasm for the concept of what we were doing–in fact, for the ease of use, the user interface, and the usability of the idea and of the tool–and then, in practice, its underuse. So, what was really instructive for me is that you can have a really good idea that everyone really likes, that you design well, and it still won't work if the environment–and in this case, we think from talking to our docs afterwards, the financial environment and the workflow environment–isn't ready for what we have to offer. And so I think this idea of between-visit disease management–it's obvious–I mean, we have to do this. It's ridiculous to think of managing chronic diseases on a visit basis. That's the old way of doing medicine, and we're moving toward a new way of doing medicine. This transition is very painful, because we have to break out of our old financial reimbursement methods, and our old workarounds, to adopt this new approach. That was something that was really interesting to experience from implementing this study. (Dr. White)
So, great answers. Since you talked about your experiences, I’m going to ask the two of you to pull out the retrospectoscope, and I’ll ask you a question. Knowing what you know now, what would you have done differently in designing this study? Actually, for what it’s worth, as the guy who directs the program and funds the research, that’s actually kind of a fascinating question for me to be able to hear the answer to. (Dr. Atlas)
I'll start again. This is Steve Atlas. What I would probably say is that we would have spent a lot more time in the initial phases of the study trying to go back one-on-one with doctors to go through the system, to understand what their workflow was, how they were currently doing things, and then how this tool could have been adapted to that. We probably would have also reached out more to patients to provide them information about what was going on, because the doctors sometimes would not necessarily give much information, and the patient would all of a sudden get a letter asking for a lab test to be done. There was sometimes confusion around that. So I think a lot of it, in retrospect, would've been more along the lines of engaging patients and doctors in trying to understand how to implement this in a fashion that was more understandable to them. Albeit, that would have been a lot of work to do. (Dr. Grant)
Yeah, I agree with Steve. I think that this is a three-year grant that included an RCT and follow-up, so we had this limited time in the beginning, and what’s interesting with IT development, you know, medical informatics interventions, is there’s a dark window between the pre-development and then the implementation. I feel that we actually did a good job conceptually with the doctors of what we were trying to do, and with the screenshots of what it would look like, and then it sort of disappeared into the black hole of the programmers actually creating the thing. I think that when we implemented the tool, sort of as Dr. Atlas said, intensive academic detailing of the actual live tool at the beginning of the study certainly would have helped. That’s a tremendous amount of resource to get that kind of thing happening. I think, I totally agree with more patient involvement, and obviously as researchers we are moving much more toward patient-centered outcomes research, and we could’ve done more of that back then. And I think probably the third piece I would add, and this is also one of the big challenges of this kind of research, is greater involvement of the upper-level policymakers and leaders of the institution. This was basically coming from researchers, and it was connecting with PCPs, but having the Practice Director and then the Hospital Director actively supporting this change, would have helped the study. But, again, this is the balance between being innovative, and trying to experiment with things that the system is not necessarily ready for. You could do an RCT in equipoise versus trying to implement a program. And so, we are always walking that fine line of doing what we can with the resources we have, and trying to be innovative, and yet trying to get as much backing and support to actually have it work. So, it’s challenging. (Dr. White)
All right. Thank you for not saying that you would’ve asked for more funding. I appreciate that. (Dr. Grant)
That was the subtext. [Laughter] (Dr. White)
Yeah [Laughter]. Let me turn to Dr. Basco. I'm actually going to pull one of the audience questions here. One person asked, "How are the drug errors made?" (These are the look-alike, sound-alike drug errors.) "Were they entered into an electronic system, or was there a verbal or a written order?" So, in your study of look-alike, sound-alike medication errors, do you have better insight into how that kind of thing happens, and are there aspects [inaudible] to facilitate look-alike, sound-alike errors? (Dr. Basco)
Right. So, for the frequency calculations, we used Medicaid claims data. Those data reflect the drugs as dispensed, so that is a limitation: we don't know where the error might have originated. There are other error studies–in-patient pediatric studies–that have shown that in-patient clinical pharmacists catch about 75% of pediatric medication order errors. So we do think it's logical to assume that the true rate at the point where the provider produces the prescription may be higher than what the pharmacist actually dispenses, because the data we have up to this point, in the 2010 study and this one, are as dispensed. But again, that's one of the advantages of eventually testing this in real time–or maybe I should say studying it in real time–because then you can track the actual prescription and see how it appeared. You can answer some of those questions I brought up about within-pharmacy substitution, and you could determine the proportion of errors that are introduced at the point of dispensing. You might have the prescription produced correctly, but then, because these drug names are so similar, the look-alike, sound-alike error could be introduced at the point of dispensing. We have a model that I've used for the grant and the papers, which I didn't have time to review. It is a limitation that we don't know where in the process of delivering the prescription these substitutions might have occurred, but they can occur at each of those points. (Dr. White)
All right. That sounds like Dr. Basco's next grant application. Very good, thank you. That's helpful. Let me turn to Dr. Weiner. A question from the audience here. This may be a little bit broad, but I think it's a good question: "What measures have been developed or used to evaluate the quality and completion of medication reconciliation in the ambulatory or primary care setting?" And then there's a related question, which is, "Has anyone been able to require an indication for drugs when they are ordered, and if so, has that been implemented?" I wonder if you could try to wrestle with those two for a minute. (Dr. Weiner)
Sure. Well, I think one thing to keep in mind is that the process of medication reconciliation itself doesn’t really depend on a certain type of setting, although the specific details of the implementation would need to be tailored or customized according to the workflow and the kinds of people who are involved. The process is basically the same in the inpatient setting as the outpatient setting. You are looking at medications the patient was on, you’re deciding what to do next, and then you’re making some comparisons to make sure that you’ve actually covered everything and that medications haven’t been overlooked. Although we have tended to do studies of this process in the inpatient setting, there are some ambulatory studies. One was published by [inaudible] and colleagues out of the Mayo Clinic in 2009, and the three things they looked at were: Was the medication list complete, meaning, was the dose, route, and frequency included for every drug that was documented? Was the list correct, meaning, did the list of medications in the medical record match what the patient was actually taking? And did the patient participate in the process of the reconciliation, contributing to the medication history in the way that we expect should happen? So there are some examples of some very concrete measures that could be made in the ambulatory care setting that could tell you if you are heading in the right direction. In terms of indications for drugs, I think that’s a really important point. I can’t recall specific studies that looked at indication as one measure of an appropriate reconciliation process. I do advocate that that should be done, and so I think we could use more studies. I know that in some states and some settings, including an indication for a drug is required by law.
In my own clinical activities in the VA right now in my facility, I’ve heard recently that indications are going to be required for all mental health service prescriptions. I think it should be done for all prescriptions in all settings, and it would be a very useful thing to look at. You know, in my practice I’ve found that including the indication for the drug not only helps the patient as they see the indication appear on prescription bottles, for example, but helps remind the provider why they’re using the drug, and in some cases, when they think twice about whether an indication is right, they might actually decide to discontinue some drugs. (Dr. White)
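[The first of the three Mayo-style measures Dr. Weiner describes, list completeness, lends itself to an automated audit. A minimal sketch, assuming a simple dictionary-per-drug record format (the field names and sample medications are hypothetical, for illustration only):]

```python
# Completeness check: does every documented drug carry a dose, route,
# and frequency, as in the first measure described above?
REQUIRED_FIELDS = ("dose", "route", "frequency")

def incomplete_entries(med_list):
    """Return the names of drugs missing any required field."""
    return [m["name"] for m in med_list
            if any(not m.get(field) for field in REQUIRED_FIELDS)]

# Illustrative medication list (made-up entries).
meds = [
    {"name": "lisinopril", "dose": "10 mg", "route": "oral", "frequency": "daily"},
    {"name": "metformin", "dose": "500 mg", "route": "oral", "frequency": None},
]
print(incomplete_entries(meds))  # flags the entry missing a frequency
```

[The other two measures, correctness against what the patient actually takes and patient participation, require a patient-reported list or an interview and are harder to score from the record alone.]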
Okay, those were both great answers. This is Jon White. I will add a little bit myself, which is to say that several months ago, AHRQ published a special emphasis notice for our grant funding opportunities, saying that in 2014 we’re particularly interested in applications that look at the safety of health IT and how to improve the safety of those products as they’re used. And although I can’t name names at this point, we are getting ready to fund a project that over the next two or three years will look at how you incorporate the indication for a given drug into electronic health records. So, coming soon to a theater near you. That’s exciting to be able to talk about. Of all the questions, let me give you one that I think is a good one, and one that everybody will probably have a little something to say about. Maybe go in the order we’ve been going in: Dr. Atlas, Dr. Grant, Dr. Basco, and Dr. Weiner. The question deals with off-label drugs, and it’s related to the indication question we were just discussing: “How do you detect and evaluate off-label drugs?” This is a curious problem because off-label prescribing has definitely been a widespread issue, and it also raises safety issues. So, in your practices, in your research, in the systems where you work, how do you all deal with detecting and evaluating off-label use of drugs? (Dr. Atlas)
This is Steve Atlas, I’ll let Richard answer that one first. (Dr. Grant)
No, no, you go ahead. (Dr. Atlas)
This may be better for some of the other presenters. In our system, as an example of what we presented, there is a tremendous challenge–within the classes of medications for which there is an approved indication–in getting patients to goal, and getting it done on a timely basis. So, for patients who are at high risk due to high blood pressure, there are effective medications, and many of our patients are not on them. How do we move them in an efficient manner? The issue of off-label use of medications just adds a level of complexity to that. A lot of what we’re focused on is making sure that effective, well-proven treatments are given across our large populations. You are implicitly bringing up the issue of what medications are out there that shouldn’t be used, and it may be the equivalent of the efforts to avoid things like MRIs for back pain: an off-label use of a medication for which there may be little benefit. That’s something that, in our network, we really haven’t tackled yet. (Dr. Basco)
I’ll offer a little comment; this is Bill Basco. First off, as a pediatrician, there’s an often-quoted statistic that 80% of pediatric prescribing is off label. I’m not sure that I want to stop it, because that would be a problem for pediatricians, and that’s the impetus behind the FDA’s approach of offering extended exclusivity to companies that do the work to obtain pediatric indications for some of their new drugs. That effort has been very successful in increasing the number of drugs approved with pediatric indications in the past decade. Specific to look-alike, sound-alike, I don’t think off-label status really applies, because look-alike, sound-alike is a drug name issue, whether the drug is being prescribed off label or appropriately. And honestly, for children, most of that off-label prescribing is not for different conditions; it’s the fact that the drug has adult approval but trials weren’t conducted in patients under 18. So we use it for the same disease in children, but even that is technically off label. That’s where we deal with it most in pediatrics, and I think the previous presenter’s point is right: maybe think of it as a never-do list of things that are off label and for which there is really scant evidence. Because I think everyone should remember there are actually off-label uses for which there is a lot of evidence, even if the drug companies never go back and ask for FDA approval. (Dr. Weiner)
This is Mike Weiner. I would say that for medication reconciliation, the approach to off-label drugs is going to be very similar to, or the same as, the approach with conventional indications. Meaning that in the pursuit of a complete and correct medication list for the patient, one would want to ask about drugs in a way that includes off-label drugs and their uses, and then also track those drugs across all transition points in the same way that one would do for other types of drugs. (Dr. White)
Great answers. Good stuff. We’ve got about three minutes left, so let me try one quick one, because there are a number of questions here, and I’m sorry we are not going to be able to get to all of them in the Q&A period. Anybody can comment if you want to, or you don’t have to if you don’t want to. Michael, this is probably targeted best at you, but everybody can take a stab at it: “Does the electronic medical record’s medication reconciliation interface have usability issues, and how is the human user interface being implemented in the development of workflow?” As we all work with these systems, usability is certainly a big deal. You heard Steve Atlas and Richard Grant talk about how enthusiasm did not necessarily translate into use. Michael, I’m going to ask you to start first, and then others can chime in. Does usability factor into what you’ve been doing? (Dr. Weiner)
Well, absolutely. Usability is a big issue, and I think one of the biggest stumbling blocks to general use and efficiency of electronic health record systems overall. We’ve certainly been looking at the usability of our newest developments, our prototypes, and new implementations. I think one needs a systematic approach to assessing usability. There are tools and techniques to do that which have been published; many of them don’t take terribly long to learn, and they don’t necessarily require a huge expense or specialized equipment. So I think finding some folks who know how to do that, or identifying resources that describe it, is important. I’m not sure I’ve answered your question. Was there a question about adherence that I didn’t catch? (Dr. White)
No. There is a second question about adherence, which is different from usability. But, no, you made a pretty good stab at it. Other presenters, do you have any last words to say on usability or any other issues? Okay, I’m going to take that as a no. All right, we are right at 4:00. I want to thank our presenters. Thank you so much: fascinating presentations, wonderful work, great discussions. Thank you so much to our participants, as well, for staying on. Wonderful questions; I love your engagement. And literally, from around the world: I mentioned there were three continents, but it actually turns out there were five continents listening in on our call today. Thank you very much for your time, everybody. (Presenters)
Thanks, Jon. Thank you. [Event concluded]
