As artificial intelligence (AI) use skyrockets in healthcare, its algorithms have come under an increasingly bright spotlight.
The Office of the National Coordinator for Health Information Technology (ONC) is flexing its regulatory muscles. The news is out: the agency is seeking to impose a “nutrition label,” of sorts, for how AI is used in electronic health record (EHR) systems.
While this move has flown a bit under the radar, it’s a fascinating hint of how the Biden administration plans, at some level, to tame the wild beast that is AI. Word from an ONC spokesperson suggests we might see the proposed rule finalized sooner than we think, perhaps by the end of this year.
The proposal, whose comment period ended recently, is straightforward: if you develop EHR systems and you’ve jumped on the AI bandwagon, you’re going to have to start providing more detail on how your AI performs, including what data is part of the algorithm’s “dance routine.”
It’s a few more spins and turns added to the current “two-step” of ONC certification.
Health IT “czar” Micky Tripathi, who heads ONC within the U.S. Department of Health and Human Services, puts it plainly. In a recent interview with FedScoop, he shared, “The idea is that you should have a standardized nutrition label for an algorithm.”
Tripathi cites the example of a user in San Juan, Puerto Rico, discovering that the algorithm in their EHR was trained on data from Minnesota’s Mayo Clinic. Not exactly a perfect population match. His example is spot on: health-related behaviors, socioeconomic conditions, and environmental factors, the social determinants of health (SDoH), vary across populations and weigh heavily on health outcomes.
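What might such a label look like in practice? Here’s a purely hypothetical sketch in Python. The field names are illustrative inventions, not ONC’s actual certification criteria or “source attributes,” but they capture the kind of provenance disclosure Tripathi describes:

```python
# Hypothetical sketch of an algorithm "nutrition label."
# Field names are illustrative only; NOT ONC's actual source attributes.
from dataclasses import dataclass, field

@dataclass
class AlgorithmLabel:
    """Provenance and limitation disclosures for a predictive EHR tool."""
    name: str
    intended_use: str
    training_data_source: str   # where the training data came from
    training_population: dict   # demographics of the training cohort
    known_limitations: list = field(default_factory=list)

# The San Juan scenario: a user can see at a glance that the model
# was trained on a very different population than their own.
label = AlgorithmLabel(
    name="Readmission risk model v2",
    intended_use="Flag adults at high risk of 30-day readmission",
    training_data_source="Single academic medical center, Minnesota",
    training_population={"region": "Upper Midwest, USA", "median_age": 58},
    known_limitations=["Not validated on Puerto Rico populations"],
)

print(label.training_data_source)
```

Whether the disclosures end up as structured data like this or as prose in product documentation, the design goal is the same: make a training-population mismatch visible before a model’s output is trusted.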
While ONC’s certification program for health IT is technically voluntary, hospitals and physicians are nudged toward certified systems as a condition of participating in certain CMS payment programs. And as AI takes on a bigger role in healthcare data and decision-making, one major concern is underlying bias within the algorithms themselves.
Groundbreaking research published in 2019 found that a widely used clinical algorithm, employed by hospitals to determine patients’ care needs, was racially biased: Black patients had to be far sicker than their white counterparts to receive the same care recommendations. The bias arose because the algorithm was trained on historical healthcare expenditure data, which mirrored a past in which Black patients had fewer resources for healthcare than white patients, itself a reflection of enduring wealth and income inequality.
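To make the mechanism concrete, here is a minimal, hypothetical sketch with invented numbers (not the study’s actual model or data). Two groups are equally sick, but because one group historically spent less per unit of illness, a risk score built on dollars spent demands a higher illness level from that group before flagging a patient for extra care:

```python
# Minimal sketch of the proxy-label problem: predicting COST instead
# of health NEED. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with IDENTICAL underlying illness distributions.
group = rng.integers(0, 2, size=n)                  # 0 = group A, 1 = group B
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need

# Historical access barriers: group B spends less per unit of illness,
# so dollars spent understate group B's actual need.
spend_per_unit = np.where(group == 0, 1000.0, 600.0)
cost = illness * spend_per_unit + rng.normal(0.0, 150.0, size=n)

# An algorithm trained to predict cost effectively uses cost as its
# "risk score." Flag the top 10% for a care-management program.
risk_score = cost
selected = risk_score >= np.quantile(risk_score, 0.90)

# Same illness going in, very different illness required to get flagged:
for g, name in ((0, "group A"), (1, "group B")):
    picked = illness[selected & (group == g)]
    share = picked.size / (group == g).sum()
    print(f"{name}: {share:.1%} selected, "
          f"mean illness of selected = {picked.mean():.2f}")
```

Run it and the second group’s selected patients come out sicker, and fewer of them are selected at all, mirroring the pattern the 2019 study reported.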
Although that algorithm’s discriminatory behavior was identified and amended, the episode helped push the Department of Health and Human Services to make algorithmic bias a key focus of its effort to ensure health equity by design.
Will more transparency prevent the lurking nightmare of algorithmic bias from undermining health equity and care outcomes? ONC sure hopes so.
The AI and algorithm provisions are just one piece of a bigger picture: ONC’s proposed rule titled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” (HTI-1).
It’s a bundle of changes to the Health IT Certification Program. As part of the update, AI gets its own shiny new category under clinical decision support (CDS). According to Tripathi, the growing impact of AI on CDS is what prompted ONC to step in and address its crucial link with EHRs.
The push for transparency isn’t without its fans. A swath of healthcare industry stakeholders, including the College of American Pathologists, has called for more information on the datasets used to train AI systems.
Ron Wyatt, chief science and medical officer at the Society to Improve Diagnosis in Medicine (SIDM), wants to take the rule even further, calling for algorithmic information to go beyond providers and patients and be shared with the public. From Dr. Wyatt’s public comments:
We agree that transparency of source attribute information for users is necessary, but that alone is not adequate safe-guarding of patient and public interest, given practicing clinicians and health systems are not the locus of expertise on the nuances of sources of bias or error in AI data sets or algorithms. Many of the recent, high-profile instances of data, data-set, and algorithmic biases and errors in predictive DSIs (ranging from appointment scheduling systems to sepsis detection to dozens of models that inappropriately consider race) were unearthed by curious researchers. For this reason, we believe that the information presented under “source attributes” should be in the public domain and not just presented to end users, so that it is exposed to the expert academic research and developer communities that now are sensitized to these problems and focused on making algorithmic assistance a trustworthy adjuvant to patient care.
But there’s also been resistance. The HIMSS Electronic Health Record Association (EHRA), which represents 30 companies, including Allscripts, Epic, Cerner, and athenahealth, believes the timeframes in the HTI-1 proposed rule would place an excess burden on providers and health IT developers. In addition, EHRA shared the following:
“Many of the ONC’s proposed requirements are at odds with key priorities raised by our healthcare provider customers to reduce administrative burden, as they continue to face immense financial and operational strains while emerging from the COVID-19 pandemic.”
It’s still up in the air how ONC will incorporate all this feedback. But whether you’re excited or apprehensive about the push for increased AI transparency, one thing is certain: the way providers and health IT companies interact with AI in healthcare is about to change, and it’s going to be a wild ride.
Giddy up!