Artificial intelligence (AI), perhaps the most common buzzword in the history of buzzwords, is everywhere, seemingly growing exponentially as a factor in the economy, the way we live, and how we view the future. Called a potential salvation by some (and boosted in the stock market as such) and the sure ruination of human work by others, AI is nonetheless making early-stage inroads in the skilled nursing sector.
Even amid curiosity about what AI can bring to skilled nursing facilities (SNFs) – all to be learned at one of the scores of conferences on the subject – there are stakeholders who want to draw a red line around where AI should be deployed and where it should not.
One of these areas is foundational to SNFs: the Minimum Data Set (MDS).
Experts in MDS, reimbursement, and compliance who once quietly cautioned about compliance issues with AI and the MDS are now closer to shouting about them, and about the need for restrictions on decision-making in assessments. It’s that serious.
Attesting to the Coding
Jessie McGill is a senior curriculum development specialist for AAPACN, the American Association of Post-Acute Care Nursing, and she has opinions on the matter.
“My initial thought about AI is that it could be a really helpful tool, but we must be cautious that we don’t use it to replace our clinical judgment,” she said.
An example: A nurse may use AI to help answer a coding question, but given the complexity of the MDS coding rules, the answer may prompt three more questions, and oftentimes the original question does not supply enough information for a complete answer that considers all the variables.
“AI sees things in black and white. It does not have the ability to compare an answer to how it impacts residents and to understand what should be coded or indicated for care planning,” McGill said.
AI and Risk
She sees AI as a phenomenal tool as a research assistant, for example, and a concept that can be applied to lower-level MDS needs, like where to find specific information in a Medicare claims manual, or billing rules for interrupted stays, but not one to answer complex coding. That brings in compliance issues and meeting rules and regulations that if unmet can get a provider in real trouble.
“When we sign the MDS, there are two signature areas. One is for who coded the MDS item and attests to the accuracy of those items, and then there is also an overall completion of assessment signed by the RN,” McGill said.
“You will be open to compliance issues and audit risks if you code based on AI-provided answers. You cannot have AI sign the attestation. You are attesting the accuracy of the item and the way the information was collected. The attestation also makes you personally subject to criminal, civil, and/or administrative penalties if you knowingly submit false information.”
Vincent Fedele is a partner at Zimmet Healthcare Services Group and chief operating officer of z.Apps. He echoes concerns about how AI quantifies and predicts the human condition: “People are not standardized products; we’re all uniquely calibrated. The goal is to balance efficiency, accuracy, and compliance in MDS workflows. We use different models best suited for each section, but no matter how accurate a model becomes, the nurse’s clinical judgment is all that matters.”
“Using software to support a clinician’s assessment is the proper protocol – CMS has made that abundantly clear. When the model presents a response, the assessor must go in and validate it. But in a matter of weeks, the nurse accepts the suggestion as fact; confirmation review protocols become biased, with humans accepting the generated response as authoritative and reviews favoring what’s been put forth. This doesn’t apply only to nursing – the world’s perspective is seeing what it’s told to see. This is contrary to CMS regulations; right or wrong, there are consequences. We’ve seen it happen, especially when third-party software puts an invisible layer between system and user,” he said.
For technology that is supposed to make life easier and faster, Fedele said there is simply too much need to backtrack and check the accuracy of anything AI flags on MDS codes. “It's counterproductive. The whole point is to make you work faster, but now you're investigating these triggers for things that aren't reality most of the time. That backtracking fades as time goes by, and before you know it, clinical judgment becomes an underdeveloped skill set,” he said.
AI Crossing a Line?
Echoing McGill, Marc Zimmet, CEO of Zimmet Healthcare Services Group, said MDS compliance rests on clinical judgment, and that automating MDS completion crosses a line too wide to ignore. “This isn’t a technology question anymore – no one needs a sledgehammer to drive in a thumbtack. It’s about how the tools are embedded into MDS workflows. When you’re given the likely answer before you’ve read the question, behavioral science takes over. Assessors stop seeking hard evidence that would disprove the software; they simply accept the answer. Confirmation reviews lose effect because users never find what they don’t look for. These are behavioral responses that apply to all of us,” he said.
“Do I want pattern-recognition software reviewing my MRI instead of a human? Every single time. Do I want AI teasing out the nuanced facial expressions that belie true condition? If you saw my face right now, you’d know. The suggestions should not be embedded in the assessment. That may make the MDS nurse’s job easier, but it opens compliance risks no one needs.”
Applied in the MDS space, algorithmic “suggestions” heavily influence decision-making. Interdisciplinary input and time constraints make the MDS process particularly susceptible to anchoring, confirmation bias, and cognitive offloading, Zimmet said.
“In other words, we subconsciously process prompts as authoritative, even when we know the source lacks the nuanced clinical judgment and insight required by CMS for resident assessment.”
The risk is far greater when facilities place extensions on top of their EHR. You’re not working in your EHR directly, but your mind believes you are. It’s a perversion of principle. If CMS accepted a software’s findings with no accountability on nursing, these concerns would be mitigated – but CMS isn’t doing that. Separate the processes, he added.
Industry Wants Interoperability First
In recent comments to CMS for an RFI concerning AI, the American Health Care Association/National Center for Assisted Living (AHCA/NCAL) said the sector must first solve its digital interoperability infrastructure gaps before any meaningful adoption of AI in clinical care can occur.
“Before we can provide responses to the ten specific questions this RFI seeks comment on, we believe it is important to share contextual information about our AHCA/NCAL provider members, the resident populations they serve, their current fragmented baseline digital capabilities, and historical financial, legislative, and regulatory barriers to initiating or accelerating the adoption of and use of AI as part of clinical care,” the association said.
AHCA/NCAL submitted its comments to the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology (ASTP/ONC) within the U.S. Department of Health and Human Services (HHS) in response to the RFI titled “Accelerating the Adoption and Use of Artificial Intelligence in Clinical Care.”
“We believe a critical issue that ASTP/ONC needs to consider as strategies are developed in response to the comments received on this RFI is that increased adoption of AI in clinical care will remain extremely challenging, if not impossible, for LTPAC providers without first providing adequate support to eliminate the digital interoperability infrastructure gaps.”
AHCA/NCAL said true functional interoperability capacity across the healthcare ecosystem “would provide the comprehensive information necessary to permit a secure, safe, and effective use of AI in clinical care.”
Suggested Paths to Take
The RFI outlines the HHS strategy to support AI in clinical care with a focus on regulation, reimbursement, and R&D.
Key AHCA/NCAL recommendations included in its RFI response were to:
- Mandate age-stratified validation and bias testing for AI tools intended for use in Medicare populations.
- Expand interoperability standards to include geriatric-specific data elements (functional status, cognitive status, social determinants).
- Align payment policies to incentivize high-value AI adoption in LTPAC settings.
- Provide infrastructure support and technical assistance to enable AI readiness in under-resourced LTPAC providers.
- Prioritize research funding for AI applications addressing multimorbidity, functional decline, and other priorities for aging populations.
- Establish clear regulatory frameworks addressing liability, privacy, and algorithmic transparency for non-medical device AI tools.
- Create validation testbeds and evaluation frameworks that include LTPAC settings and geriatric populations.
What’s missing from that list? Allowing AI to make those decisions alone, Zimmet noted.
Comments or questions? Contact Patrick Connole at pconnole@parkplacelive.com.

