SSRG Workshop
Leicester
24.4.02
‘The weakest links –
priorities for improvement
in Personal Social Services
performance indicators’
Nick Miller
Norfolk County Council SSD
(views expressed in this presentation do not necessarily reflect those of Norfolk CC)
A full-text version of this slide presentation is available on the SSRG website.
BACKGROUND
· Ministers have used PAF results to ‘name and shame’ ‘poorly performing’ CSSRs
· The new corporate Best Value regime makes use of selected indicators
· Directors of Social Services are held to account by their Chief Executives and members for their PIs – and by press reaction on publication of Audit Commission and DH league tables
· CSSR ‘star ratings’ and performance ‘rewards’ (including LPSAs) are already being assessed (at least in part) based on PAF and other performance measures
· The Joint Review process uses PIs extensively
· The DH and the Treasury use the PIs to measure achievement and to justify extra funding within the Comprehensive Spending Review
‘These (PAF) tables show the variation in performance across social care is just too great.
This is not primarily about money – it is about management and organisation.
These tables remove the excuses for unacceptable variations in performance.’
Alan Milburn, Harrogate, October 2001
FOUR KEY QUESTIONS
arise from the above contention
and need answers from the available evidence…
1. What makes a good indicator / indicator set? Are the DH and its Secretary of State actually able to assess CSSR ‘performance’ using the indicators they currently have?
2. Why does reported performance apparently vary?
3. How far can management / re-organisation change the ‘performance’ of CSSRs as the Secretary of State expects?
4. When should the Secretary of State be expecting to see significant change in the PAF indicators?
What makes a good indicator / indicator set?
A good indicator (or, more properly, indicator set) should demonstrate that it is:
· Relevant to practice and policy
· Important
· Oriented towards outcomes, not processes – likely to lead to improvements in services for users and carers
· Attributable
· Reliable and robust
· Responsive
· Comprehensible
· Timely
and that it does not:
· Lead to ‘perverse incentives’
· Contain internal ‘contradictions’ with other indicators in the set
· Cost too much to collect
WHAT IS THE EVIDENCE BASE FOR THE PAF INDICATORS?
Do we know from evidence that the indicators do indicate good practice?
As far as I am aware, the hard evidence is MINIMAL…
Some is emerging from :
· inter-LA collaboration, often led by benchmarking groups
· the Starfish consultancy (including material presented at the Leicester workshop – see elsewhere on the SSRG website)
· SSRADU (IfSC demonstrator work etc)
· PSSRU *
· Thomas Coram Research Unit *
· University of East Anglia *
· Research in Practice * papers on children looked after / adopted
(*: references in full can be found in the full text on the SSRG website)
ANY MORE?
The 3 ‘E’s – economy, efficiency and effectiveness
The one-dimensional analysis of the PIs needs to become more sophisticated, taking account of the interdependencies among the three ‘E’s:
‘Horizontal efficiency’ – how far does the service / package offered cover the numbers in need of the service? (evidenced by the rate per 1,000 in a need state (or states) provided with the service – and the rate at which the service is provided but not needed)
‘Vertical efficiency’ – how far does the service offered meet the needs of each individual who is in need of care? (the % of aspects of need met by the service for each individual provided with the service / package)
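To make these two measures concrete, a minimal sketch of the arithmetic follows (in Python) – the function names and all figures are hypothetical, invented purely for illustration, not drawn from any CSSR return:

    def horizontal_efficiency(served_in_need, population_in_need):
        # Rate per 1,000 of those in a need state who actually receive the service.
        return 1000 * served_in_need / population_in_need

    def vertical_efficiency(aspects_met, aspects_assessed):
        # % of one individual's assessed aspects of need met by the service / package.
        return 100 * aspects_met / aspects_assessed

    # Hypothetical locality: 4,800 people assessed as being in the need state,
    # of whom 3,100 receive the service; 250 receive it without needing it
    # (one plausible reading of the second rate's denominator).
    print(horizontal_efficiency(3100, 4800))   # ~646 per 1,000 in need
    print(horizontal_efficiency(250, 4800))    # ~52 per 1,000 provided but not needed
    print(vertical_efficiency(5, 8))           # 62.5% – 5 of 8 assessed aspects met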
AN EXAMPLE OF THE THREE ‘Es’ AND INTER-DEPENDENCY
A CSSR may score ‘well’ on a specific (low) unit cost
but
this may reflect any number of factors, including:
· A lower dependency user group (possibly with a high unit cost for the more dependent elsewhere)
· Lower input costs – easier to attract staff at lower rates of pay in more deprived areas
· Other providers (notably NHS and housing) making ‘substitutable’ provision
· Differential levels of need coming to CSSRs because carers are available and able to provide
· User and carer dissatisfaction at a poor quality and level of service
· Poor outcome measures – quicker transitions to higher need states and more expensive service use patterns, earlier death, lower morale and higher depression
Do CSSRs yet have measurement systems in place to analyse whether these hypotheses are borne out? Probably only a very few, if any, do… a five-year-plus development programme?
Why does reported performance apparently vary?
Five hypotheses
Reported ‘performance’ varies in part as a function of :
1. overload on CSSR information teams, operational staff and CSSR ‘systems’
2. different understandings between CSSRs about what to count **
3. misleading comparators (especially unit costs and population data)
4. real variation but this is appropriate – local variation of policy and practice meets local needs but may not be ‘visible’ in the PIs **
5. the Milburn hypothesis:
Reported ‘performance’ variation is ‘real’ and this is not appropriate
** full explanations of the problems with the PAF set’s definitions and their interpretation by CSSRs can be found in the volumes on Children’s PIs and Adults’ PIs on the SSRG website. Notes on changes over time are also included there.
An example of real variability which affects indicators
looked after / accommodated children – NOT a homogeneous group
· needing a permanent different home (potential for adoption)
· needing temporary support / protection because of home crises, particularly deriving from carer stress / mental health problems or substance abuse or being a single parent on low income (or any combination of these factors)
· needing respite care because of the child’s disability
· being looked after because of the young person’s criminal behaviour or other anti-social behaviour
· needing specialist health /education inputs because of health or psychological problems or psychiatric needs
· needing opportunities to start life as a young adult after preparation for leaving care because they have no viable home in which to make this transition
· unaccompanied asylum seekers.
This ‘breakdown’ is equally needed for adult groups – e.g. older people with functional and organic mental illness, or different sub-sets of those aged 18-64 with psychiatric problems.
How far CAN management / re-organisation change the ‘performance’ of CSSRs as the Secretary of State expects?
EXAMPLES OF REALITY AT TEAM LEVEL
IMPERATIVE… / BUT what happens to…
· reduce permanent supported admissions to residential care (C26 / C27) / bed-blocking rates (D41)
· increase intensive home care (C28 **) / budgets and possible risks to users – may in some cases increase user and carer dissatisfaction
· not re-admit to psychiatric hospital (at least not until 91 days after discharge) (A6)
· not encourage direct payments / ability to hit targets for intensive home care (C28 **) (and B11 and B12 **)
· not keep a child on the Child Protection Register for more than 2 years (C21) / re-registering if things go wrong (A3)
· think hard about moving a child looked after, especially if they have been looked after continuously for 4 years or more (A1 **, D35)
· not offer planned respite care (s.20 agreements) / unit costs – they will not reflect the activity, only the spend (B8 ** – B10; also in E44 **)
A bit ‘tongue in cheek’… but what is the local evidence at Directorate and team level?
When should the Secretary of State expect to see significant change?
· Change can take time – LA residential / home care out-sourcing, new types of provision
· Change may take money – priorities?
· New systems (IT, processes) in CSSRs take time and money
‘Tides’ may be running against CSSRs …
such as
· Demography
- the very elderly: ‘the 1919 effect’ (see Graph 1) – impacts on informal carers as well as direct demand
- more teenagers (see Graph 2)
- unaccompanied asylum seekers
· Employment markets – ‘new Tescos’ effect
· Property markets – e.g. selling off nursing homes
· Voluntary sector pressures – lack of volunteers
· Changes in illicit drugs ‘markets’
· …
However…
it is not all gloom – people are fitter, better housed, better educated etc. – and we can make a difference (e.g. reviewing the process of starting to be looked after, admissions to residential care, CSSR impacts on health care pathways)
GRAPH 1:
Source: ONS single-year-of-age data for England
Compare rates of demand / provision for those aged 85+ with those aged 65-74 or 75-84 – see fuller details in the text of the presentation. Also Health Survey for England 2000, table 4 (next slide).
For every 1 person aged 65-74 supported by an English CSSR in a permanent residential or nursing placement, there are 3 aged 75-84 and 4.6 aged 85+ (SR1 data, March 2000)
GRAPH 2:
Source: ONS single-year-of-age data for England
Relate this to the numbers in the 13-16 age group who are looked after and in the most expensive placements – see the text of the presentation and the following graphs from the DH feedback volume on Children Looked After at 31.3.2000.
Conclusions
· many of the indicators used fail to satisfy criteria set for PIs
· reasons for variations in reported performance are more complex than admitted
· decisions made within CSSRs are more sophisticated and complex than PIs can reflect
· ‘evidence’ is limited – a one-dimensional view of economy, efficiency and effectiveness
· PIs do not reflect the agenda of change, notably joined-up working
· some PIs may be dangerous for practice if taken too literally
BUT
· some improvements in CSSRs are badly needed – and have been achieved
· CSSRs are getting the means to measure their activity better – new computer systems etc
· The DH is working on datasets to allow better PIs – needs inputs from CSSRs via SSRG and ADSS
· CSSR culture is changing – growing interest in ‘evidence’ and improving quality