
There’s No Magic Way of Measuring Impact

Wouldn’t it be great if there were a way of measuring your social impact across multiple projects using a single dependable statistic? Well, I’ve got some bad news, and some good.

I was recently talking to a charity who wanted to know how they could go about measuring and reporting the overall impact of the organisation on children and families. With multiple strands each aiming to achieve different things, they asked whether a single outcome measure – one accurate, reliable number – to sum up the impact of the whole organisation was either possible or desirable.

First, here’s the bad news: it’s very unlikely – I might even be so bold as to say impossible – that any such thing exists. You might think you’ve found one that works, but when you put it in front of a critic (or a nitpicking critical friend, like me) it will probably get ripped apart in seconds.

Of course, if there is a measure that works across multiple projects, even if not all of them, you should use it, but don’t be tempted to shoehorn other projects into that same framework.

It’s true that measuring impact requires compromise, but an arbitrary measure, or one that doesn’t stand up to scrutiny, is the wrong compromise to make.

The Good News

There is, however, a compromise that can work, and that is having the confidence to aggregate upwards knowing your project level data are sound. You might say, for example, that together your projects improved outcomes for 10,000 families, and then give a single example from an individual project that improved service access or well-being to support the claim. In most situations that will be more meaningful than any contrived, supposedly universal measure of impact.
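By way of illustration, here is a toy sketch in Python of what aggregating upwards can look like when each project keeps its own sound measure. The project names, counts and outcome measures are all invented:

```python
# Illustrative only: the project names, counts and outcome measures
# below are invented. The point is that the headline figure is a
# transparent sum of project-level results, each defensible on its
# own terms.
import pandas as pd

projects = pd.DataFrame({
    "project":         ["family support", "school access", "well-being"],
    "families_helped": [4200, 3300, 2500],
    "outcome_measure": ["service access", "attendance", "validated scale"],
})

headline = projects["families_helped"].sum()
print(f"Together our projects improved outcomes for {headline:,} families")
# -> Together our projects improved outcomes for 10,000 families
```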

Confidence is the key, though: for this to work you need to find a reliable way of measuring and expressing the success of each individual project, and to have in reserve information robust enough to stand up to scrutiny.

Measuring Means Data

In conclusion, the underlying solution to the challenge of measuring impact, and communicating it, is a foundation of good project level data. That will also make it easier to improve performance and give you more room to manoeuvre. Placing your faith in a single measure, even if you can decide upon one, could leave you vulnerable in a shifting landscape.



You Might Be Winning but Not Know It

Have you ever eagerly awaited the results of a project impact study or external evaluation only to be disappointed to be told you had no impact? ‘How can this be?’ you might ask. ‘The users liked it, the staff saw the difference being made, and the funding provider was ecstatic!’ The fact is, if you’re trying to gauge the final success of a project without having analysed your data throughout its life, proving you made a difference is bound to be difficult.

Of course we would all like to know before we invest in a project whether it’s going to work. As that’s practically impossible (sorry), the next best thing is to know as soon as we can whether it is on a path to success or, after the fact, whether it has been successful. But even that, in my view, isn’t always quite the right question: more often we should be asking instead what it has achieved, and for whom.

In most cases – rugby matches and elections aside – success isn’t binary, it’s complex, but good data analysed intelligently can reduce the noise and help to make sense of what is really going on.

A service might in practice work brilliantly for one cohort but have negligible impact on another, skewing anecdotal results. Changes might, for example, boost achievement among girls but do next to nothing for boys, leading to the erroneous conclusion that it has failed outright. Or perhaps across the entire group, attainment is stubbornly unmoving but attendance is improving – a significant success, just not the one anyone expected. Dispassionate, unprejudiced data can reveal that your project is achieving more than you’d hoped for.
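To make that concrete, here is a minimal sketch in Python of the kind of subgroup breakdown that surfaces these hidden wins. The cohorts, outcomes and numbers are entirely hypothetical:

```python
# Illustrative only: the cohorts, outcomes and numbers below are
# invented for the example, not drawn from any real project.
import pandas as pd

df = pd.DataFrame({
    "cohort":          ["girls"] * 4 + ["boys"] * 4,
    "attainment_gain": [5, 7, 6, 8, 0, 1, -1, 0],
    "attendance_gain": [1, 0, 2, 1, 4, 3, 5, 4],
})

# The headline averages blur everything together...
print(df[["attainment_gain", "attendance_gain"]].mean())

# ...while a simple split by cohort shows who the project is actually
# working for, and on which outcome.
print(df.groupby("cohort")[["attainment_gain", "attendance_gain"]].mean())
```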

Equally, if the goalposts are set in concrete, consistently mining that data can give you the insight you need to learn, improve and change tack to achieve the impact you want while the project is underway. Or, at least, to check that you’re collecting and reviewing the right data – if the answer to any of your questions is a baffled shrug or an anecdote (and it too often is, in my experience) then you have a problem.

I’ll be circling back for a detailed look at some of the case studies hinted at above, as well as several others covering various fields, in later posts in this series.

In the meantime, consider the project that keeps you awake at night – where are its dark corners, and what good news might be lurking there?


The Centre for Youth Impact – Conference 16th March

We’re pleased to announce that Jack will be hosting a workshop at The Centre for Youth Impact’s ‘The Measure and the Treasure: Evaluation in personal and social development’ conference on 16th March.

The Centre for Youth Impact is hosting the day-long conference to focus on issues of measurement in personal and social development.

“The day will explore policy, practical and philosophical debates about whether, why and how we should seek to measure the development of social and emotional skills (or non-cognitive skills, soft skills and character, amongst other terms) in young people. We are planning a thought-provoking and engaging day that introduces participants to a range of ideas and activities, and are particularly keen to draw in thinking from outside the ‘traditional’ youth sector.

The question of how to measure and evidence the development of social and emotional skills in young people remains one of the key challenges youth organisations feel they are facing, and many practitioners raise ethical, technical and practical questions about the extent to which this is feasible, desirable and even useful. As such, we want to convene a day that will bring individuals with a wide range of perspectives into the same space to share, explore and progress their thinking, with a focus on practical application.”

Kenton Hall – Communications Officer, The Centre for Youth Impact

To find out more visit http://bit.ly/2mBVOWY



CRC Reoffending Rates: Who’s Striving, Who’s Struggling? Or Should We All Move to the North West?

Last week the MoJ released the interim proven reoffending rates for the Community Rehabilitation Companies (CRCs), covering cohorts that started orders or licences between October and December 2015. As it is now 12 months since the first payment-by-results cohort started, I thought it would be interesting and helpful to review the performance of the CRCs – the final results will be available in October 2017. If you don’t know, the CRCs are privately run probation services set up in 2014 to manage lower-risk offenders and to reduce reoffending rates. As such, part of their payment from the Ministry of Justice is based upon how well they reduce reoffending compared to a 2011 baseline. If the reoffending rate increases, the CRC’s contract could be terminated.

The payment mechanism is complex – see here for the important documents. The basics are that if the CRC’s reoffending rate is lower than the 2011 baseline rate, having adjusted for any differences in offenders’ likelihood of reoffending, then the CRC will receive a payment. Being just 0.1% lower would not be good enough to trigger a payment – the CRC’s rate must be statistically significantly lower (I have not found information in the public domain on exactly how significant the reduction must be – do contact me if you know of a source).
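For illustration only, here is a sketch of one way such a test could work: a one-sided test of the CRC’s proportion against a fixed baseline. To be clear, the choice of test, the significance level and the figures are all my assumptions, not the MoJ’s published mechanism:

```python
# A sketch only: the MoJ's actual test and significance threshold are
# not in the public domain, so both are assumptions here.
from statsmodels.stats.proportion import proportions_ztest

def payment_triggered(reoffenders, cohort_size, baseline_rate, alpha=0.05):
    """One-sided z-test: is the CRC's proven reoffending rate
    statistically significantly below the 2011 baseline?
    `alpha` is a placeholder for the unpublished threshold."""
    stat, p_value = proportions_ztest(
        count=reoffenders,      # proven reoffenders in the cohort
        nobs=cohort_size,       # offenders in the cohort
        value=baseline_rate,    # adjusted 2011 baseline, as a proportion
        alternative="smaller",  # only a reduction can trigger a payment
    )
    return p_value < alpha, p_value

# Invented example: 1,450 reoffenders in a cohort of 3,800, against
# a 42% baseline.
triggered, p = payment_triggered(1450, 3800, 0.42)
```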

Not enough data have been released to estimate which CRCs will receive a payment (the 2011 baseline rates have not been published, for example) but there are enough published data to assess the relative performance of each CRC. The published reoffending rates cannot be compared directly because the CRCs’ offenders will present with varying likelihoods of reoffending. Fortunately the OGRS4 expected one-year reoffending rates were published. With this information, I estimated what each CRC’s proven reoffending rate would be if the likelihood of reoffending were exactly the same in each company (the adjusted rate – I used the average OGRS4 rate across all CRCs, which was 45.7%). The results are in the chart below.
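Before the chart, for the curious, here is a minimal sketch of that kind of adjustment – a simple additive correction using the 45.7% average. It is a sketch of the idea rather than a reproduction of my exact calculation, and the example figures are illustrative:

```python
# A minimal sketch of the adjustment idea, assuming a simple additive
# correction; the example figures are illustrative.
AVERAGE_OGRS4 = 45.7  # average OGRS4 expected one-year rate across CRCs, %

def adjusted_rate(interim_rate, ogrs4_expected):
    """Estimate a CRC's proven reoffending rate as if its caseload had
    the average likelihood of reoffending."""
    return interim_rate - (ogrs4_expected - AVERAGE_OGRS4)

# A CRC with a 44.0% interim rate but a riskier-than-average caseload
# (OGRS4 expected 49.0%) adjusts down to 40.7%.
print(round(adjusted_rate(44.0, 49.0), 1))  # -> 40.7
```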

Figure 1: Interim and adjusted reoffending rates in the 21 CRCs (Oct to Dec 2015 cohort, reoffending rate to December 2016).

Source: Interim proven reoffending statistics for the Community Rehabilitation Companies and National Probation Service supporting tables January 2017. See table for bases.

There is a large 17.2 percentage point spread in the interim proven reoffending rates in the MoJ data. This is reduced to a spread of 9.5 percentage points in the adjusted rates. The lowest adjusted rate is in Merseyside (34.4%) and the highest is in Warwickshire & West Mercia (43.9%). There also appears to be regional clustering at the top and bottom of the chart. Three of the four CRCs with the lowest adjusted rates are in the North West – Merseyside, Cumbria and Lancashire, and Cheshire & Greater Manchester – and the neighbouring areas South Yorkshire, Humberside, Lincolnshire & North Yorkshire, and Durham Tees Valley make up three of the four areas with the highest adjusted rates. This clustering could be due to police practice, the CRCs’ effective work, efficient court processes or an unknown factor altogether.

These results do not mean the providers in the North West can expect a payment to come their way. We do not know the baseline against which they will be compared, and there are another six months of convictions data to be included (the proven reoffending measure includes offences committed within 12 months that are convicted within 18 months). There is also likely to be an area effect that is not included in the OGRS4 measure but will be in the 2011 baseline rate. However, the differences do suggest that anyone interested in reducing reoffending rates should visit the North West to understand whether they are doing anything different there.

Table 1: Results in all 21 CRCs


Source: Interim proven reoffending statistics for the Community Rehabilitation Companies and National Probation Service supporting tables January 2017.


GtD Approved Provider for Impact Management Programme

Get the Data are pleased to announce that we are an approved provider to the Access Foundation’s Impact Management Programme.

The Impact Management Programme aims to build the capacity of charities and social enterprises to manage their impact. This will help them to increase their social impact and diversify income.

Get the Data will support organisations to build impact measurement tools, develop impact plans, report performance, manage data, analyse data and design a theory of change. Please contact jack.cattell@getthedata.co.uk to learn how to take advantage of the fund.

Training is being held at locations across the UK for organisations that wish to participate in the programme:

  • London 9th February
  • Liverpool 23rd February
  • Birmingham 1st March
  • Bristol 23rd March

Visit http://accessimpact.org/events/ for further information or to book onto a training session.


Upholding Evidence in the Post-Truth Era

Brexit and the new Trump administration have added “post-truth politics”, “fake news” and “alternative facts” to our political lexicon. Just words?

Well, “words matter”, as the former president reminded us when he launched his own candidacy eight years ago. And these new words matter very much when accompanied by calls to dismiss experts in favour of gut feeling and instinct. Democracy needs to be informed, so hearing a senior member of the British Cabinet claim that “people in this country have had enough of experts” was one of the most lamentable moments of the Brexit campaign. All of this is surely at odds with a community of policy researchers and evaluators whose stock in trade is objectivity, fact-finding and balance. So where do we go from here?

Much food for thought comes from the 2016 “Evidence Works” conference. In September 2016, the British based “Alliance for Useful Evidence” collaborated with its American counterpart “Results for All” to convene “Evidence Works 2016: A Global Forum for Government”. Over two days in London, delegates from around the world shared ideas of how governments can use evidence and data in policymaking to improve outcomes for citizens and communities, and their findings are worth sharing here:

  • Government needs diversity of evidence to answer policy questions, and this evidence needs to be timely.
  • To maintain credibility, there is value in keeping some distance between evidence production and government. Independence gives extra authority to evaluation or analysis of policy.
  • Good communication is vital. It’s not easy to pare down a large body of evidence, and technocrats may need to be trained on how to write concise policy briefings.
  • Political leaders and members of the public need to be encouraged to use and demand evidence. Persuading politicians and those controlling the money is not enough. In democracies, we also need to persuade the public to care about evidence.

This was a truly global initiative, and there were some notable examples of good practice in developing countries from which we in the West can learn. A copy of the report can be found here.

At its best, evaluation serves as Socrates’s “gadfly”, challenging assumptions and questioning implicit principles. Public servants – policy makers and evaluators – need to stand their ground, be reasonable and communicate evidence effectively. If we are indeed entering an era of “post-truth” politics then we need more than ever “to speak truth to power”.


Youth Justice: Evidence Based Policy in 2017?

This month has seen the publication of Lord Taylor’s Review of the Youth Justice System in England & Wales.

In his comprehensive report, Taylor makes a number of recommendations “to transform the youth justice system in which young people are treated as children first and offenders second, and in which they are held to account for their offending”.

Of particular interest to me was Taylor’s emphasis on diverting children out of the justice system, where possible, and directing the police, local authorities and health authorities to operate these schemes jointly. Taylor identifies such multi-agency leadership as one of the principles of good practice in diversion, alongside proportionality, speed, sensitivity to victims, light-touch assessment and access to other services.

If these recommendations are implemented, then there are reasons to be cheerful about the future direction of youth justice in England and Wales. But have we not been here before? It is nearly 20 years since Tony Blair formed the ‘New Labour’ administration in 1997, with its commitment to ‘evidence-led’ policy. Nowhere was this more evident than in the 1998 youth justice reforms and the creation of multi-agency Youth Offending Teams. “Tough on crime, tough on the causes of crime” was the slogan of the day, and I cut my evaluation teeth in the boom of criminological research and evaluation, and the search for “what works?” in youth justice.

Within a decade, however, the laudable attempt to re-set youth justice by informed policy had become jaded. Traditional law and order politics were reasserting themselves, something that Barry Goldson recognised in his excellent article “The sleep of (criminological) reason: Knowledge–policy rupture and New Labour’s youth justice legacy“. In his retrospective, Goldson identified a “trajectory of policy [that] has ultimately moved in a diametrically opposed direction to the route signalled by research-based knowledge and practice-based evidence”. In other words, the knowledge base was telling policy makers to do one thing, but they appeared to be doing just the opposite.

In his article Goldson identified five areas where the rupture between policy and practice was most evident:

  • Research tells us that young people committing crime is relatively ‘normal’ (but the response is to be intolerant of it).
  • The evidence is that rates of youth crime are relatively low (but politicians tend to amplify it and “define it up”).
  • Evaluation shows that diversion is effective (but the response is earlier intervention and ‘net widening’).
  • Universal services of welfare, education and health are effective (but punishment becomes ascendant while welfare is in retreat).
  • Decarceration is known to be cost-effective (but the use of custody increases).

The publication of the Taylor report provides us with an opportunity to reset youth justice, and its recommendations seek to repair the “knowledge–policy rupture”. Are we seeing an awakening of criminological reason? I trust we are, and I look forward to continuing to play my part in providing evidence of what works to policy makers and practitioners.

A Quick Word On Restorative Justice

Restorative Justice and the Restorative Forum

The Restorative Forum is the ‘go-to’ place for anyone interested in restorative justice and has an excellent YouTube series of short interviews on all aspects of RJ. Here I am being interviewed on the value of evaluation, and along the way it is discovered that I am not American but Scottish! https://www.restorativeforum.org.uk/Forum


GtD’s Jack Cattell – Managing the Data Glut

Last month I had the great pleasure of spending a week in the USA to see for myself how GtD is developing its services in Atlanta, before attending the 9th annual Public Performance Measurement and Reporting Conference at Rutgers University in New Jersey.

Americans and the British may be “divided by a common language”, but I was struck by the common challenges that policy makers and practitioners face on both sides of the Atlantic: ensuring that practice is grounded in “what works?”, getting more “bang for your buck” in service delivery, and managing “big data” to understand how policy makers and practitioners are responding to complex social problems. Whether they are based in the U.S. or the U.K., all of us at GtD feel privileged to help our clients meet these challenges by providing social impact analytics that help them to monitor their activities, learn quickly how to improve them and, ultimately, prove their effectiveness – definitively.

It was the theme of “big data” that took Alan and me to Rutgers University, where we were delighted to contribute our thoughts and experiences of “managing the data glut” at the PPMR conference. Drawing on over 20 years of experience of research and evaluation in the criminal justice system, we used our work on the development of the DASHBOARD for the CJS as an example of how to manage the data glut. It provided a clear, illustrative example of how we rationalised over 1,500 separate performance indicators into a highly visual and user-friendly dashboard that gives managers across the criminal justice system a single version of their performance in bringing offenders to justice.

The presentation we gave at the conference is available to view on our LinkedIn page. Please do follow us for updates and information relating to social impact analytics.

Contact my colleague Alan or me if you would like more information about any of our projects or how our social impact analytics could benefit your organisation. If you believe you are ready to embark on your own social impact journey, you might be interested in our free Strategic Impact Assessment. Please do get in touch if we can be of assistance by emailing us at:

Jack Cattell jack.cattell@getthedata.co.uk

Alan Mackie alan.mackie@getthedata.co.uk


Preparing Tomorrow’s Juvenile Justice Leaders for Success

In delivering evidence of “what works?”, GtD is working with juvenile justice leaders on both sides of the Atlantic. Recently, Alan Mackie demonstrated the ‘value of evaluation’ to a new generation of leaders at Georgia Gwinnett College. Using GtD’s evaluation of the Youth Restorative Intervention as a case study, Alan showed how evaluation provides definitive evidence of what works in reducing both reoffending and costs to the taxpayer. Contact Alan via alan.mackie@getthedata.co.uk to learn how GtD can help your juvenile justice project.