Back to the future: what do Ofsted-style ratings for hospitals tell us about Jeremy Hunt?


We’ve been here before.  Differential ratings for hospitals are back in fashion.  Initially introduced under Alan Milburn (star ratings), changed under John Reid (the health check) and then abolished by Andrew Lansley, ratings have now been signalled for reintroduction by Jeremy Hunt.

This is about more than just policy-making turning full circle.  It illustrates an important difference in approach between Jeremy Hunt and his predecessor.  Whereas Andrew Lansley believed that information should be placed in the public domain and that third parties should then be allowed to analyse and present it, Hunt is arguing that there is a role for the state in analysing and presenting information on quality: leaving it to the market alone will not be sufficient.

Of course, Jeremy Hunt argues that his approach is very different from the past: “I am not advocating a return to the old ‘star ratings’”. But the direction of travel is clear: “the principle that there should be an easy to understand, independent and expert assessment of how well somewhere is doing relative to its peers must be right.”

The exciting – and radical – part of Jeremy Hunt’s thinking is the ambition to provide ratings at a service level, as reported by the Health Service Journal.  This would make the information far more meaningful to patients (who are interested in the care they would receive rather than that of patients with different conditions, receiving different treatment in very different circumstances, who just happen to be at the same hospital).

It is heartening that one of the key tasks of Jennifer Dixon’s review will be to map existing sources of data and consider how they could be applied.  In some areas, such as cancer, most of the information already exists at team level but is underutilised.  We know in great detail about the experience reported by patients and we will soon know about their quality of life after treatment.  We also know about the quality of team organisation and their compliance with national standards.  Datasets on treatment will enable us to assess intervention rates and, for some cancers, clinical audits will provide further detail on the outcomes achieved.  And of course we know about cancer waiting times.  This all, combined with the work of the NCIN, provides the information necessary to reach an informed and nuanced assessment of quality at a level which is meaningful to patients.  The mycancertreatment.nhs.uk initiative, due to be launched next month, begins to bring some of this information together in a user-friendly manner.  The next step would be to produce an aggregate assessment, but this should be doable.

If this is possible for cancer, then it is possible for other specialties.  The issue is that it costs money: to collect the data, to analyse it and to present it in a meaningful way.  There is no point in hiding this.  If the Secretary of State genuinely values this form of assessment, then he will have to put his money where his mouth is.  However, the Department of Health’s announcement made clear that the new ratings system should mean no ‘new bureaucracy.’  The question will be whether the collection of high-quality information is considered bureaucracy or an essential part of quality care.  If it is the latter, then it could be truly radical.