Nursing home oversight is often discussed as if it functions the same way everywhere. The rules are federal. The survey process is standardized. Outcomes are published side-by-side.
But anyone who has worked in more than one state knows that surveys do not feel the same everywhere, and the data confirm it.
During the COVID-19 public health emergency, long-standing differences in state survey capacity became impossible to ignore. Surveys were delayed, backlogs accumulated, and states recovered oversight capacity at very different rates.
Research by Skinner and Stevenson documented these disparities and raised questions about how variation in surveyor staffing and workload might shape regulatory outcomes. The authors followed up with a more recent study in the Journal of the American Geriatrics Society, which quantified differences in surveyor staffing, clinical composition and workload across states.
The JAGS findings were straightforward: Surveyor capacity is not uniform across states. What remained unclear — and what this analysis explores — is how those differences in capacity show up in the outcomes that facilities, operators and lenders rely on.
When national comparisons fall short
When surveyor characteristics are examined nationally against outcomes such as total deficiencies, G–L level citations, or Immediate Jeopardy (IJ) rates, the relationships appear weak.
That result is often interpreted to mean surveyor capacity does not matter. A closer look suggests something else entirely: The influence of surveyor capacity is not national, but contextual.
Grouping states by oversight capacity
Rather than forcing all states into a single national model, we grouped them using two features the JAGS study itself emphasized:
- Clinical capacity, measured by the share of survey activity performed by registered nurses
- Surveyor workload, measured by the average number of nursing homes visited per active surveyor
Using median splits on these two measures, states sorted into four distinct oversight environments. Once outcomes were examined within those environments, the differences stopped being subtle.
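To make the grouping concrete, here is a minimal sketch of the median-split logic in Python. The field names (`rn_share`, `homes_per_surveyor`) and the state values are invented for illustration; they stand in for the two JAGS-derived measures and are not the actual study data.

```python
from statistics import median

# Hypothetical state-level data: share of survey activity by RNs,
# and average nursing homes visited per active surveyor.
states = {
    "A": {"rn_share": 0.72, "homes_per_surveyor": 6.0},
    "B": {"rn_share": 0.40, "homes_per_surveyor": 11.0},
    "C": {"rn_share": 0.65, "homes_per_surveyor": 12.5},
    "D": {"rn_share": 0.35, "homes_per_surveyor": 5.5},
}

# Median split: each state is classified relative to the national median
# on each measure, yielding four oversight environments.
rn_med = median(s["rn_share"] for s in states.values())
wl_med = median(s["homes_per_surveyor"] for s in states.values())

def oversight_group(s):
    rn = "high RN" if s["rn_share"] >= rn_med else "low RN"
    wl = "high workload" if s["homes_per_surveyor"] >= wl_med else "low workload"
    return f"{rn} / {wl}"

groups = {name: oversight_group(s) for name, s in states.items()}
```

A median split of this kind guarantees four quadrants rather than discovering clusters in the data; the analytical claim rests on how outcomes differ across those quadrants, not on the split itself.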
Here’s what that looks like:
[Table: Surveyor capacity profiles and regulatory outcomes. State-level medians; states grouped using median splits.]

What the patterns show
States with high RN involvement and lower surveyor workload tend to cite more deficiencies overall. That’s uncomfortable for providers, but it’s also consistent with deeper review. Importantly, those states do not show proportionally higher IJ rates. More findings do not automatically mean more harm.
States with high RN involvement but heavier surveyor workload fall between extremes. Clinical expertise appears to moderate severity, but limited time constrains the breadth of detection.
The most concerning pattern appears in states with low RN involvement and high surveyor workload. These states show both higher total deficiencies and the highest concentration of G–L citations, with elevated IJ rates. This combination suggests that when surveyors are stretched thin and clinical depth is limited, problems tend to be caught later, after they have already escalated in severity.
Finally, states with low RN involvement and low surveyor workload show the fewest deficiencies overall and the lowest IJ rates. That may reflect stronger baseline care, or it may reflect different enforcement philosophies. The data alone can’t tell us which is true, and pretending otherwise would be a mistake.
The takeaway is straightforward: Surveyor capacity doesn’t determine whether deficiencies exist. It shapes which deficiencies are found, when they’re found and how severe they are by the time they surface. In short, higher deficiency counts may reflect more intensive detection rather than poorer care quality.
What this means for nursing home operators
For operators, this analysis reinforces a reality most already understand but rarely see acknowledged in public reporting: Deficiency patterns reflect the surveyor team as much as the facility’s performance.
Facilities operating in high-capacity oversight environments (more RN involvement, lower workload) may accumulate more deficiencies without delivering worse care. Facilities operating in low-capacity, high-workload environments may face greater exposure to severe citations. Quality improvement strategies that ignore the local enforcement context may end up chasing the wrong signals.
What this means for lenders, REITs and other external stakeholders
For external stakeholders, deficiency data should never be read in isolation. Higher deficiency counts may signal more intensive detection, not greater liability. Concentrations of severe citations may reflect late detection rather than sudden operational decline.
And changes in state survey staffing, whether from hiring surges, attrition or post-pandemic recovery, can materially alter regulatory exposure without any change in facility operations.
Understanding how oversight capacity shapes regulatory outcomes is essential for informed underwriting and long-term risk management.
This analysis isn’t an argument that some surveyors are too strict or too lenient, or that some states are doing oversight “right” or “wrong.” It doesn’t claim causality between surveyor staffing and care quality. It does show that treating survey systems as interchangeable makes the data less useful. Surveyor capacity shapes what gets detected and how it’s reported. Ignoring that context distorts public reporting.
Looking forward
National averages are convenient. They’re also misleading.
Surveyor capacity operates within state systems, under real constraints and real tradeoffs. If we want public reporting, Five-Star ratings and financial decisions to mean something, we must reckon with that reality, even when it complicates the story. Especially when it complicates the story.
Steven Littlehale is a gerontological clinical nurse specialist and chief innovation officer at Zimmet Healthcare Services Group.
The opinions expressed in McKnight’s Long-Term Care News guest submissions are the author’s and are not necessarily those of McKnight’s Long-Term Care News or its editors.