January 2026 CQC Ratings: Overview and Analysis

In January 2026, the Care Quality Commission published 181 care home inspection reports. The outcomes were as follows:

  • Outstanding: 7
  • Good: 87
  • Requires Improvement: 72
  • Inadequate: 15

At the headline level, the distribution is familiar. Most services sit within the middle ground, with a relatively small proportion at either end of the ratings spectrum. What is more revealing is not the number of Inadequate ratings in isolation, but what inspection findings consistently show about why services fall into difficulty and what enables others to remain resilient.
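To make the "middle ground" point concrete, the headline counts can be expressed as proportions. A minimal sketch, using only the figures listed above:

```python
# January 2026 CQC ratings, as listed in the article above.
ratings = {
    "Outstanding": 7,
    "Good": 87,
    "Requires Improvement": 72,
    "Inadequate": 15,
}

total = sum(ratings.values())  # 181 published reports

# Print each band's share of the month's inspections.
for band, count in ratings.items():
    print(f"{band}: {count} ({count / total:.1%})")
```

On these figures, roughly 88% of services sit in the Good or Requires Improvement bands, with under 4% rated Outstanding and around 8% rated Inadequate.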

What the CQC inspection report data is really telling us

January’s inspection outcomes reinforce a pattern seen repeatedly across the sector. Services that receive Inadequate ratings, or whose ratings are at risk of decline, are rarely failing because of a single operational lapse. More often, inspectors identify weaknesses in leadership oversight, assurance and governance that develop gradually and remain unaddressed.

Many providers continue to deliver safe and effective care in challenging conditions. However, a significant proportion struggle to demonstrate that leaders and boards have a clear, current understanding of what is happening in practice on a day-to-day basis. Increasingly, the difference between Good and Outstanding, and between Requires Improvement and Inadequate, lies in the quality, frequency and independence of oversight.

From a board or ownership perspective, inspection outcomes are not a measure of effort or commitment. They are a reflection of how governance operates in practice.

Common themes behind weaker CQC ratings

A review of recent Inadequate inspection reports highlights a consistent set of issues.

Governance frameworks exist, but do not drive improvement
Policies, audits and action plans are usually present, yet they often fail to identify the most significant risks, are not followed through, or do not result in sustained change. Inspectors are not identifying gaps in documentation, but gaps in challenge and follow-through.

Oversight weakens during pressure or change
Where services experience leadership transitions, operational pressure or incidents, governance arrangements frequently lose momentum. Oversight becomes reactive, risks escalate and issues persist until external scrutiny intervenes.

Training is tracked, but competence is not demonstrated
Training compliance is commonly high, but leaders are often unable to evidence how learning translates into consistent practice. Where competence is assumed rather than observed and tested, confidence in safety and quality deteriorates quickly.

Known risks are acknowledged, but not managed effectively
Risk assessments may exist, but they are often outdated, incomplete or insufficiently embedded into daily care delivery. In many cases, risks are known but action is delayed, fragmented or poorly evidenced.

Board-level information reassures rather than informs
Boards are frequently presented with information that provides comfort rather than insight. Without independent validation or robust evidence, it becomes difficult to understand where risk truly sits across a service or portfolio.

What this means in practice

Inspection outcomes are shaped by how leadership, oversight and assurance function routinely, not by inspection preparation alone.

Services that achieve and sustain stronger ratings typically demonstrate:

  • visible leadership presence
  • governance systems that test practice rather than simply record activity
  • regular, credible evidence that risks are identified, acted upon and reviewed

Where these elements are inconsistent, ratings tend to drift downwards, often rapidly and with limited warning.

Learning from Outstanding service ratings

Although Outstanding ratings account for a small proportion of January’s inspections, they offer valuable insight into what effective governance looks like in practice.

Recent Outstanding reports consistently show services with strong leadership visibility, well-embedded governance arrangements and a clear line of sight between policy, practice and outcomes. These services can evidence how quality is monitored, how learning is embedded and how improvement is sustained over time.

What distinguishes them is not the absence of risk, but how risks are surfaced early, discussed openly and addressed decisively. Oversight systems are actively used, leaders remain engaged, and assumptions are regularly challenged.

Importantly, Outstanding services do not treat governance as a periodic exercise. They continue to test themselves, particularly during periods of change, growth or increased pressure. Oversight remains continuous, evidence remains current, and scrutiny does not diminish once a strong rating is secured.

Outstanding ratings versus Inadequate ratings: the real difference

When comparing Outstanding and Inadequate services, the distinction is rarely one of intent or effort. Inadequate services often have the same tools in place (policies, audits and action plans), but governance is passive and slow to respond. Outstanding services use those tools dynamically. They follow issues through, maintain visibility of risk and seek evidence rather than reassurance.

Governance is not something they return to when problems emerge; it is something they work with every day.

Implications for boards and owners

January’s data reinforces a simple but critical point. Where leaders and boards lack a clear, evidence-based understanding of operational reality, inspection risk increases.

Strong governance is not about adding more processes. It is about maintaining visibility, independent challenge and timely action, particularly during periods of pressure or transition.

The most serious inspection failures occur when leaders lose sight of what is happening in practice, not because people do not care, but because oversight systems stop doing what they are meant to do.

Good governance is not about reassurance.

It is about maintaining a clear, evidence-based line of sight to risk, at all times.