Date Published
March 11, 2026
Author
Marc Zimmet
Lessons in SNF Data: Three Pointless Studies Teach Nothing but Tell All
Marc Zimmet argues widely cited SNF studies misinterpret data by treating all nursing homes as comparable, ignoring payer mix, labor markets, and economics.

What three widely cited publications reveal about the flawed framework supporting skilled nursing facility payment, staffing, and ratings.


If the words “Hello, Cleveland!” make you laugh, you’ve seen Rob Reiner’s 1984 masterpiece “This Is Spinal Tap.” One scene resonates personally; the band becomes hopelessly lost wandering through a maze of narrow hallways to find the stadium’s concert stage in Ohio.

Now imagine two other bands doing the same concurrently. They repeatedly cross paths, politely but hurriedly asking for directions, each convinced they are headed onstage; none realizing they are completely lost. More than lost: two of the bands showed up on the wrong night to begin with.

A more appropriate metaphor for skilled nursing’s regulatory and reimbursement analytics you will never find.

Hello, Cleveland.

Three recent papers circulating in the long-term care press attempt to explain how policy and payment conditions affect nursing facility performance:

  1. The Journal of the American Medical Directors Association (JAMDA) concludes that higher Medicaid payment rates were associated with increased probability of a SNF achieving 4- or 5-star ratings, but Medicaid rates had no consistent relationship with the Quality Measure rating.

  2. Health Affairs brings us “The Effects of Labor Unions on Nurse Staffing Ratios and Quality of Care,” which finds that unionization changes staffing patterns, and while the net result is fewer RN hours, the shift does not affect care quality.

  3. The annual “Best Nursing Home” rankings produced by U.S. News & World Report and Newsweek fail to note the connection between Medicare dollars and staffing levels.

  • Honorable mention: Cost report compendiums from multiple CPA firms that, respectfully, should know better than to mix apples and oranges and sell the product as lemonade.

Suspend disbelief for a moment and take these studies seriously. Their conclusions are so intrinsically contradictory that they reveal something far more important than any individual finding.

Individually, each analysis feels thin and overly simple; taken together, they expose the gravity of a deeply flawed analytical framework distorting skilled nursing policy. To those of us who have spent decades backstage, the sector’s reimbursement and ratings mechanics are immediately recognizable.

Too many SNF stakeholders are wandering the same backstage hallways.

_________________________________________

1. JAMDA

Medicaid Rates Increase Staffing but Only Modestly Affect Quality

The JAMDA study reports that higher Medicaid reimbursement correlates with increased Staffing and Health Inspection scores but has no consistent directional association with quality indicators. The conclusion sounds straightforward but reveals something more fundamental.

The paper does not demonstrate that additional staffing fails to improve care. It demonstrates that the measurement system used to evaluate nursing homes is largely incapable of detecting marginal improvements.

Staffing can increase while quality metrics remain unchanged because the indicators themselves barely move across normal operating conditions. *

2. Health Affairs

Unionization Changes Staffing Patterns but Not Quality

A study published in Health Affairs concludes that unionization alters staffing composition, specifically reducing RN staffing levels, while producing no measurable change in quality outcomes. Again, the study observes a real economic response but misinterprets the absence of movement in quality indicators.

Unionization raises labor costs. Facilities respond by adjusting staffing composition while maintaining regulatory thresholds; in states applying Medicaid cost-based reimbursement methodologies, the Medicaid program pays its relative share of the added cost. As long as the facility remains compliant, publicly reported quality metrics remain largely unchanged. **

The study reveals the same limitation as JAMDA’s: operational changes occur, but the measurement system fails to register them. The authors therefore conclude that nothing changed. Had they believed the measurement system itself was problematic, it would have been the subject of study.

3. ‘Best Nursing Home’ Rankings

U.S. News and Newsweek release rankings identifying the nation’s “best” nursing homes.

A ranking system is meaningful only if the entities being compared share a common operating framework. Skilled nursing facilities do not.

SNFs vary widely across the provider class and lack essential commonality. Yet these rankings rely almost entirely on the same federal data sources used in nearly every study involving providers. Apparently, the working assumption is that if two facilities have the same number of beds, they must operate under the same economic principles.

I will not waste more time on this one. Suffice it to say that CCRCs and high-Medicare providers appear atop these lists with remarkable consistency.

The problem is not that analysts are getting the answers wrong.

It’s that they keep asking the wrong questions.

Individually, the results are unsatisfying and contain nothing of practical value to inform policy, but they come together beautifully as a lesson in causation. The authors draw specious conclusions because they examine the wrong variables from a misguided perspective.

The common mistake is treating all SNFs as comparable units when they operate under radically different economic conditions; they vary widely in form, structure, and opportunity. Poorly positioned providers cannot outsmart, outrun, or otherwise defy what their market allows them to be; operators ride the currents as downstream players, judged and ranked as if choice alone dictated their position when most were dragged there. The forces that drive performance remain largely outside the analytical framework of measured assessment.

In SNFonomics, those variables are not background conditions; they are the system.

These forces determine how facilities allocate resources and organize care. Yet the sector continues to evaluate performance using a narrow set of quality indicators that barely move across normal operating conditions. When analysis focuses on metrics that are largely insensitive to the underlying economics of the sector, the resulting conclusions become analytically hollow.

  • Staffing can rise without measurable improvement in quality.

  • Staffing can shift without measurable deterioration.

  • Facilities can be ranked using the same static indicators in either case.

Under these conditions, statistical findings often describe movement within the measurement system itself rather than meaningful changes in how nursing homes operate.

Variables of Significance

A more useful perspective begins by examining key structural drivers directly: payer mix, Medicaid rate construction, labor markets, census volatility, and regulatory thresholds, and by understanding how they interact as markets evolve.

That broader view will be explored in the forthcoming State of the States (SoS) Report. Rather than beginning with static quality indicators and working backward, the SoS starts with the economic structure of the system and works outward to its observable outcomes.

By examining the forces that shape each provider’s position, we measure potential relative to the market. Only then can we assess performance and produce insights that move beyond statistical curiosities toward conclusions that matter.

The problem lies not with the findings, but with the questions. Park Place Live and z-INTEL are where you’ll find both.

Numbers are not data. Context matters.

* Quality is a questionable measure of performance even with the benefit of modern software. This study uses 2013 as a baseline for its 2021 endpoint. What I found most surprising is that it was deemed appropriate for publication.

** In my opinion, this study takes such gratuitous liberties with the degraded fabric of cost reporting, it must be considered unactionable. To be clear, the Medicaid payment-to-cost ratio is not a static or standard measure. It ignores Medicare Part B supplemental revenue and is directly correlated with a provider’s Medicare share of days. The fact the authors found no significant association between Medicaid payment-to-cost ratios and improved star ratings when the ratio is below 0.90 (i.e., when Medicaid covers less than 90% of the cost) is telling. Per MACPAC, a plurality of facilities receive less than 90% (but this analysis is equally suspect).
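The sensitivity described above can be sketched numerically. The figures below are hypothetical, purely for illustration; the point is only that the ratio is not a static measure and moves depending on what the analyst chooses to count, such as whether Medicare Part B supplemental revenue is included.

```python
# Illustrative sketch with hypothetical numbers: the Medicaid
# payment-to-cost ratio shifts depending on what revenue is counted.

def medicaid_payment_to_cost(medicaid_revenue, medicaid_cost, part_b_revenue=0.0):
    """Ratio of Medicaid payments to Medicaid-allocated cost.

    part_b_revenue: Medicare Part B supplemental revenue attributable to
    Medicaid residents; analyses that ignore it understate the ratio.
    """
    return (medicaid_revenue + part_b_revenue) / medicaid_cost

# Hypothetical facility: $9.0M Medicaid revenue vs. $10.0M allocated cost.
ignoring_part_b = medicaid_payment_to_cost(9_000_000, 10_000_000)
including_part_b = medicaid_payment_to_cost(9_000_000, 10_000_000, 400_000)

print(f"ignoring Part B:  {ignoring_part_b:.2f}")   # 0.90
print(f"including Part B: {including_part_b:.2f}")  # 0.94
```

With the same facility, the same year, and the same cost report, one analyst reports a ratio below the 0.90 threshold and another reports one above it, which is exactly why conclusions keyed to that cutoff deserve skepticism.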

Questions or comments? Contact Patrick Connole at pconnole@parkplacelive.com.
