Posts

Alan Mackie presenting on Data Driven Policy Development

Using Data to Shed the Cloak of Invisibility

Since time immemorial we have been fascinated by the power of invisibility. Plato used the “Ring of Gyges” to wrestle with the ethical and moral dilemmas it poses; just think, too, of H.G. Wells’ “Invisible Man”, Tolkien’s One Ring, or Harry Potter’s Cloak of Invisibility. Does invisibility free us from moral obligations? If we possessed this power, would we become corrupted, or would we enjoy secretly righting wrongs?

Outside the realms of science fiction, “invisibility” is often an everyday problem for our fellow citizens. I am thinking of those whose problems just don’t make the 24/7 news cycle: the homeless woman sleeping in a shop doorway, the ex-offender walking through the prison gate with nowhere to go, the young man who has dropped out of school with no grades. Often we simply ignore these populations or – to borrow from Harry Potter – throw the Cloak of Invisibility over them. To be out of sight is often to be out of mind.

The ‘invisibility’ of those with developmental disabilities was thrown into sharp relief during my recent visit to the U.S. Virgin Islands. I was invited by the VI Developmental Disabilities Council to present on ‘Data Driven Policy Development’. My audience was policy makers, including those standing for elected office. If I had any doubts about the relevance of my presentation, the audience quickly rose to the occasion. “How could the needs of those with disabilities be met in the absence of data?”, they demanded. Without data, this population was “invisible”, their needs unknown to those in charge of policy. Without data, those who advocated for better services were hampered in their arguments. Without data, practitioners lacked the evidence to seek funding for new services.

The US Virgin Islands are still recovering from the two hurricanes that devastated the territory last year. As they rebuild the fabric of their communities, it was clear to me that data were not a luxury or a ‘nice-to-have’. Rather, my audience recognized the value of data in throwing light on a population that had often been marginalized before the hurricanes, and whose needs have since become more acute. While there is much more work to be done in defining the data and setting up systems to collect and analyze them, their demand for data was well founded – and a great call to action.

It might be fun to speculate about what we would do if we had our own cloak of invisibility. In the real world, however, we need data to shine a light on social problems. Or, to borrow again from Harry Potter: “Lumos Maxima!”


Make Social Impact Your Goal! - GtD and Street Soccer Academy

Make Social Impact Your Goal!

I am delighted to be working again with Street Soccer Academy (SSA) to put evidence of their social impact at the heart of their work with ex-offenders. This important work has been made possible by a grant from the Access Impact Foundation, whose mission is to make charities and social enterprises more financially resilient and self-reliant so that they can sustain or increase their impact. Without the foundation’s generous funding we would not be able to provide Street Soccer Academy with our powerful data analysis.

Street Soccer Academy is a great client to work with. They use professionally organised, sports-based programmes in the rehabilitation and reintegration of people from some of the nation’s hardest-to-reach groups, including ex-offenders. Our task is to prove that SSA’s pro-social models are affecting the attitudes and thinking of the men and women with whom they work, with particular emphasis on their relationships and roles in society.

In the coming months we will be using our rigorous social impact analytics to contribute to the knowledge of what makes ex-offenders desist from crime. Our previous evaluation of the academy’s prison-to-community service produced evidence of SSA’s excellent engagement of ex-offenders in their programme. With the foundation’s funding we will build on that by using our advanced statistical analysis to identify who benefits from the programme, how and in what circumstances. These analyses will help the academy identify its most effective practice and allow it to develop its professional programmes. To ensure that it has the right information at the right time, we will be building a dashboard to communicate these data to those delivering the programme, their managers and the funder.

Not only will this improve their practice, but it has important implications for the government’s Transforming Rehabilitation agenda. That agenda depends on organisations like SSA being commissioned to deliver services through the private community rehabilitation companies. However, the participation of such organisations has been low because they struggle to demonstrate their impact on reoffending pathways and desistance from crime. Access Impact is helping to overcome those obstacles: by funding our work, it will enable SSA to attract further funding and make the systemic changes that are essential to support men and women to desist from crime.

To find out more about our analytics services and how we can help your organisation demonstrate your impact definitively, contact us on 020 3371 8950 or email jack.cattell@getthedata.co.uk

TR: PbR results - speakers who presented to Community Rehabilitation Companies and other justice sector employees

Community Rehabilitation Companies: PbR Results Event

Transforming Rehabilitation is the UK government’s programme of outsourcing probation services to new community rehabilitation companies. In a radical move, the government is now paying these new companies according to the reductions in reoffending they achieve. GtD is at the forefront of this, providing our cutting-edge social impact analytics to Sodexo Justice Services, which manages a number of these new companies.

The first PbR figures were published last month and GtD has been active in informing the debate on their significance. As part of this debate, we recently hosted a sell-out event for senior management and practitioners working in community rehabilitation companies and the justice sector.

An expert panel comprising Prof. Darrick Jolliffe of the University of Greenwich, Dr Sam King of the University of Leicester and GtD’s own Jay Hughes considered the initial findings and what to do next, with Jack Cattell setting out a new vision of how predictive analyses can be used by practitioners to improve performance.


If you were unable to attend but would like to learn more about how GtD could support you in evaluating your social impact outcomes, or for a free predictive analytics roadmap for your CRC, contact Jack Cattell. The event presentations can also be viewed via the link below:

Transforming Rehabilitation – Learning from the PbR results presentations


We’ve also set up a LinkedIn group as a forum for shared learning and discussion for individuals who work in, or have an interest in, the fields of probation, offender rehabilitation and Transforming Rehabilitation. Click here to request to join – Transforming Rehabilitation


Transforming Rehabilitation: Payment by Results Figures

Last week saw the release of the Transforming Rehabilitation (TR) Payment by Results figures for the October to December 2015 cohort.

The overall result was encouraging, and it defies the view that Transforming Rehabilitation’s radical changes to probation, and the ensuing problems, would result in increased reoffending – though I must stress that this is just the first of many sets of results, and overall judgement should be reserved for at least a year. The reoffending rate for all CRCs was 45.6%, compared with a 2011 baseline rate of 47.5%. I had to make some (conservative) assumptions to estimate the baseline rate, but I think it is safe to say that the difference was statistically significant, suggesting reoffending rates have reduced under TR. Please see the note at the end of this blog to understand better how I completed the analysis.


Transforming Rehabilitation – CRC Performance

The chart below describes each CRC’s reoffending rate in relation to the baseline 2011 rate. The grey line represents the range of reoffending rates that would indicate no change from 2011 (the baseline confidence interval). If a CRC’s rate falls outside this range, we can state with statistical confidence that its performance was either better or worse than the reoffending rate achieved in 2011. The green bars represent the reoffending rates of CRCs that outperformed 2011, the orange bars represent those that performed the same as 2011 and the red bars represent those that performed worse than 2011.
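As a rough sketch of how such a no-change band can be derived – a minimal illustration using a normal approximation to the binomial, where the baseline rate comes from the figures above but the sample size is a hypothetical placeholder rather than a published count:

```python
import math

p_baseline = 0.475  # 2011 baseline reoffending rate (47.5%)
n = 30_000          # hypothetical sample size; substitute the published cohort count

# 95% confidence interval for the baseline proportion (normal approximation)
half_width = 1.96 * math.sqrt(p_baseline * (1 - p_baseline) / n)
print(f"No-change band: {p_baseline - half_width:.3f} to {p_baseline + half_width:.3f}")
```

A CRC whose rate falls below the lower bound counts as outperforming the baseline; one whose rate falls above the upper bound counts as underperforming.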

Chart to show CRC Performance

Source: Ministry of Justice Final Proven Reoffending Rates TR (Oct to Dec 2015 cohort).

Thirteen of the CRCs beat the baseline rate. The best performing CRC was Cumbria and Lancashire, which beat the baseline rate by 8.2 percentage points (from 49.9% to 41.7%). The next best was Hampshire and the Isle of Wight, which beat the baseline by 5.4 percentage points, and the third best was Northumbria, with a rate 4.3 percentage points better. Two of the CRCs performed worse than the 2011 baseline: Warwickshire and West Mercia recorded a reoffending rate 3 percentage points worse than the baseline rate, and South Yorkshire’s rate was 2.8 percentage points worse. With most CRCs outperforming the reoffending rate from 2011, however, the figures are a promising set of results.


Transforming Rehabilitation – Comparing CRC performance

Now that the baseline rates have been published, we can better understand how well each area was performing in 2011 and whether a CRC is now being asked to beat good or bad performance achieved in that year. The chart below describes the difference between the actual baseline rate and the 2011 baseline’s OGRS score (in other words, the expected rate of reoffending). A negative result in the chart means the area performed better in 2011 than the OGRS score predicted.

Chart showing the difference between a CRC's actual baseline rate and the 2011 baseline’s OGRS score

Source: Ministry of Justice Final Proven Reoffending Rates TR (Oct to Dec 2015 cohort).

The chart highlights that six of the CRCs are being asked to beat better-than-expected performance from 2011 (in other words, to be better than good), whereas other CRCs, notably London and Wales, are being asked to outperform potentially poor performance in 2011. It is interesting that South Yorkshire and Warwickshire & West Mercia – the two areas that recorded poor performance under TR – are being asked to beat good performance from 2011. Merseyside and Cheshire & Greater Manchester, however, are equally being asked to beat good performance from 2011 and were able to do so for the October to December 2015 cohort. The OGRS score does not allow for area effects, which will exist and could explain the differences between the OGRS score and the baseline rate. It is not yet possible to conclude whether payment by results will be easier in some areas than others, but, going forward, I will monitor how being asked to beat good or poor performance from 2011 affects each CRC’s ability to achieve payment by results bonuses.


Notes on analysis

The latest Ministry of Justice bulletin released more data than was previously available, and I was able to complete a statistical analysis of the impact of TR. This could only be done by making conservative assumptions – assumptions that would make finding a statistically significant result less likely. The following actions were taken:

  • I assumed the spread of offenders across CRCs in 2011 was exactly the same as in the October to December 2015 cohort. This would not have been the case in practice, but any analysis would weight the two samples so that they represented each other, so the impact of this assumption is minimal.
  • The 2011 sample size was assumed to be the same as that of the October to December 2015 cohort. The 2011 sample would in fact have been considerably bigger, so this assumption meant the standard error used in the analysis was larger than it needed to be.
  • A t-test with unequal variances assumed was used to test the difference between the cohort’s and the baseline’s reoffending rates. The resulting t statistic was 4.6 (see the sketch below).
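For readers who want to reproduce the arithmetic, here is a minimal sketch of the calculation in Python under the same assumptions. The two rates are those quoted above; the per-group sample size is not restated here, so n is a hypothetical placeholder (a value in the region of 30,000 per group gives a t statistic close to the reported 4.6):

```python
import math

p_cohort = 0.456    # Oct-Dec 2015 cohort reoffending rate
p_baseline = 0.475  # estimated 2011 baseline rate
n = 30_000          # hypothetical; 2011 sample assumed equal in size (conservative)

# Standard error of the difference between two independent proportions
# (the unequal-variances form, with each variance estimated separately)
se = math.sqrt(p_cohort * (1 - p_cohort) / n + p_baseline * (1 - p_baseline) / n)
t = (p_baseline - p_cohort) / se
print(f"t = {t:.1f}")  # |t| well above ~1.96 implies significance at the 5% level
```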

This blog was originally guest posted on http://www.russellwebster.com on 31st October 2017, with additional commentary from Russell Webster.


Building on Success: Evaluation for Accountability Courts

Across the U.S., accountability courts are proving effective in reducing substance misuse and lowering recidivism. Once again, our home state of Georgia is in the vanguard of reform. Next week, GtD will be attending the annual conference of the Council of Accountability Court Judges of Georgia to show how evaluation can be used to build on these successes.

Accountability courts provide interventions that address the mental health, substance misuse and other health issues that can be associated with an individual’s criminal behaviour. Designed to keep nonviolent offenders out of prison, the courts require eligible participants to agree to complete a plan of action that includes counselling, support and regular drug testing by the court. While sanctions are imposed on those who violate a rule of the program or relapse, the court is also the forum for recognizing and congratulating an individual’s progress.

Of the estimated 2,500 accountability courts in the U.S., 93 are in Georgia. With mounting evidence that they are successful in reducing substance misuse and lowering recidivism, there is also a good economic case for promoting accountability courts over the use of jail. But if the evidence is there, what is the role of evaluation? To answer this question, GtD will be attending the annual conference of the Council of Accountability Court Judges of Georgia.

Building on Success

Although accountability courts can be successful, more can be done to improve, replicate and sustain this innovative approach. So, whether a new accountability court is being set up, an existing program is being extended or a funder needs evidence of impact, GtD’s social impact analytics can provide definitive data to help courts measure, learn and prove their impact.

Measuring

If a court is implementing a new program, GtD’s Impact Measurement Service will determine its intended impacts, how to measure them and identify what resources will be required. In providing this service we will collect data and report analyses that will be relevant to judges, court managers and practitioners.

Learning

GtD’s social impact analytics can help accountability courts improve their existing programs. Our Predictive Analysis service will help practitioners identify what is working best for offenders, and will provide information for managers to re-define high-quality interventions and deliver a more effective program.

Proving

Ultimately, accountability courts will want to prove their impact. Our rigorous Impact Evaluation service will provide definitive evidence of reductions in recidivism, lower substance misuse and the wider benefits to individual offenders, the local criminal justice system and community.

If you are attending next week’s conference, come by GtD’s table in the exhibition hall to learn more about the value of our social impact analytics for your court.


“Yeah, Says Who?” – Influence Through Data

You know you’ve achieved results – the data tells you so – but how do you influence sceptics to believe it?

It can be a rude awakening to take the findings of a study outside your own team or organisation, where trust and mutual support are more or less a given. In front of a wider audience of funding providers or other stakeholders, you will, in my experience, inevitably find yourself being challenged hard.

This is as it should be – scrutiny is a key part of a healthy system – but, at the same time, it’s always a shame to see an impactful project or programme struggle purely because its operators fail to sell it effectively.

Fortunately, while there are no black-and-white rules, there are some things you can do to improve your chances.

Confidence = Influence

When I present findings I do so with a confidence that comes with experience and from really understanding the underlying mechanics. But if you’re not a specialist and don’t have that experience, there are things you can do to make yourself feel more confident and thus inspire greater confidence in your audience.

First, make sure you have thought through and recorded a data management policy. Are you clear how often data should be entered? If information is missing, what will you do to fill the gaps? What are your processes for cleaning and regularising data? Is there information you don’t need to record? A professional, formalised approach to keeping timely and accurate data sends all the right signals about your competence and the underlying foundations of your work.

Secondly, use the data as often as possible, and share the analysis with those who enter your data so that they can understand its purpose, and own it. Demonstrating that your data is valued and has dedicated, accountable managers hugely increases its (and your) credibility.

Thirdly, take the initiative in checking the reliability and validity of your own tools. If you use well-being questionnaires, for example, take the time to check whether they really measure what you want to measure. In other words, try to find fault with your own approach before your stakeholders do, so that when they find a weak point you have an answer ready that not only reassures them but also underlines the objectivity with which you approach your work.

Own Your Data’s Imperfections

Finally, and this might feel counterintuitive, you should identify the weaknesses in your own data and analysis and be honest about them. All data and analysis have limitations, and being clear about those limitations, and about the compromises made to work around them, demonstrates objectivity which, again, reinforces credibility.

In conclusion, the better you understand your own data and analysis, flaws and all, the more comfortable and confident you will feel when it, in turn, comes under scrutiny.


Hallmarks of a Good Evaluation Plan Part 2 – Change & Competence

People don’t want to fund projects, or organisations, or even people – they want to fund change. And they want to work with professionals who know the territory.

Last week I introduced the three hallmarks of a good evaluation plan and covered the first of them, “relevance”, in some detail. This week, I’m unpacking the other two.

The second hallmark is evidence that evaluation, as planned, will promote learning and change within an organisation. In our experience at Get the Data, not all organisations are ready for change, so reassuring funding bodies of your organisation’s willingness to change at the outset is a good tactical move. You can support this by engaging with changemakers within your organisation – those individuals who, if the evaluation demands change, have the desire and ability to make it happen.

For our part, Get the Data’s cutting-edge predictive analyses are helping practitioners to identify what will work best for their clients. Managers are using that information to improve interventions, predict future impact and, in the case of social impact bonds, forecast future income. All of which, of course, goes to demonstrate a focus on improving results through intelligent change.

Knowing Your Stuff

The third and final hallmark of a good evaluation plan is evidence of technical competence, which will reassure funding assessors that they are dealing with people who are truly immersed in the field in which they are working.

In practice, that means employing the agreed professional nomenclature of inputs, outputs, outcomes and impacts, and also demonstrating an awareness of the appropriate methods for impact and process evaluation. Though this is partly about sending certain signals (like wearing appropriate clothing to a job interview), it is by no means superficial: it also enables assessors to compare your bid fairly against others, like for like, which is especially important in today’s competitive environment. In effect, it makes their job easier.

Organisations that commission Get the Data are working with some of the most vulnerable people in society. We value their work and are committed to using quantitative methods of evaluation to determine their impact. We are proud that our impact evaluations are not only delivering definitive reports on the impact of their work but also play a decisive role in ensuring vital interventions continue. A rigorous evaluation is a business case, a funding argument and publicity material all in one.

I hope you have found this short introduction to the hallmarks of a good evaluation plan useful.  If you want to learn more about how our social impact analytics can support your application for grant funding then contact me or sign up for a free one-hour Strategic Impact Assessment via our website.



There’s no Magic Way of Measuring Impact

Wouldn’t it be great if there were a way of measuring your social impact across multiple projects using a single dependable statistic? Well, I’ve got some bad news, and some good.

I was recently talking to a charity that wanted to know how it could go about measuring and reporting the overall impact of the organisation on children and families. With multiple strands each aiming to achieve different things, they asked whether a single outcome measure – one accurate, reliable number – to sum up the impact of the whole organisation was either possible or desirable.

First, here’s the bad news: it’s very unlikely – I might even be so bold as to say impossible – that any such thing exists. You might think you’ve found one that works, but when you put it in front of a critic (or a nitpicking critical friend, like me) it will probably get ripped apart in seconds.

Of course, if there is a measure that works across multiple projects, even if not all of them, you should use it, but don’t be tempted to shoehorn other projects into that same framework.

It’s true that measuring impact requires compromise but an arbitrary measure, or one that doesn’t stand up to scrutiny, is the wrong compromise to make.

The Good News

There is, however, a compromise that can work, and that is having the confidence to aggregate upwards knowing your project level data are sound. You might say, for example, that together your projects improved outcomes for 10,000 families, and then give a single example from an individual project that improved service access or well-being to support the claim. In most situations that will be more meaningful than any contrived, supposedly universal measure of impact.

Confidence is the key, though: for this to work you need to find a reliable way of measuring and expressing the success of each individual project, and have ready in reserve information robust enough to hold up to scrutiny.

Measuring Means Data

In conclusion, the underlying solution to the challenge of measuring impact, and communicating it, is a foundation of good project level data. That will also make it easier to improve performance and give you more room to manoeuvre. Placing your faith in a single measure, even if you can decide upon one, could leave you vulnerable in a shifting landscape.



You Might Be Winning but Not Know It

Have you ever eagerly awaited the results of a project impact study or external evaluation only to be disappointed to be told you had no impact? ‘How can this be?’ you might ask. ‘The users liked it, the staff saw the difference being made, and the funding provider was ecstatic!’ The fact is, if you’re trying to gauge the final success of a project without having analysed your data throughout its life, proving you made a difference is bound to be difficult.

Of course we would all like to know before we invest in a project whether it’s going to work. As that’s practically impossible (sorry), the next best thing is to know as soon as we can whether it is on a path to success or, after the fact, whether it has been successful. But even that, in my view, isn’t always quite the right question: more often we should be asking instead what it has achieved, and for whom.

In most cases – rugby matches and elections aside – success isn’t binary; it’s complex. But good data analysed intelligently can reduce the noise and help to make sense of what is really going on.

A service might in practice work brilliantly for one cohort but have negligible impact on another, skewing anecdotal results. Changes might, for example, boost achievement among girls but do next to nothing for boys, leading to the erroneous conclusion that it has failed outright. Or perhaps across the entire group, attainment is stubbornly unmoving but attendance is improving – a significant success, just not the one anyone expected. Dispassionate, unprejudiced data can reveal that your project is achieving more than you’d hoped for.

Equally, if the goalposts are set in concrete, consistently mining that data can give you the insight you need to learn, improve and change tack to achieve the impact you want while the project is underway. Or, at least, to check that you’re collecting and reviewing the right data – if the answer to any of your questions is a baffled shrug or an anecdote (and it too often is, in my experience) then you have a problem.

I’ll be circling back for a detailed look at some of the case studies hinted at above, as well as several others covering various fields, in later posts in this series.

In the meantime, consider the project that keeps you awake at night – where are its dark corners, and what good news might be lurking there?

Prison Reform and Outcome Measurement

Pomp and pageantry came to Westminster this week, with the Queen’s Speech setting out the British government’s legislative agenda for the coming Parliamentary session. But amid the ermine and jewels was a call for hard, empirical data.

The centrepiece of the ‘Gracious Address’ was a major shake-up of the prison system in England. Legislation will be brought forward to give the governors of six “reform prisons” unprecedented autonomy over education and work in prisons, family visits, and rehabilitation services. With this autonomy will come accountability, and the publication of comparable statistics on reoffending, employment rates on release, and violence and self-harm for each prison.

Further details of the government’s prison reforms were contained in “Unlocking Potential”, Dame Sally Coates’ review of prison education in England, published this week. The review includes recommendations to improve basic skills, the quality of vocational training and employability, and personal and social development. Echoing the government’s move to devolve greater autonomy to prison governors, Dame Sally’s review also endorsed the need for governors to be held to account for the educational progress of all prisoners in their jails, and for the outcomes achieved by their commissioning decisions for education.

Improved education outcomes for individual prisoners will be supported by improved assessment of prisoners’ needs and the creation of Personal Learning Plans. However, Dame Sally’s review also called for greater performance measurement, not only for the sake of accountability but also for the planning and prioritisation of education services.

As noted before, this is an exciting time for prison reform on both sides of the Atlantic. However, reform must be based on evidence and supported by hard data. Devolving decision making to those who know best is a bold move, but with autonomy comes accountability and transparency. As Dame Sally’s report recommends, accountability and transparency are well served by:

“Developing a suite of outcome measures to enable meaningful comparisons to be made between prisons (particularly between those with similar cohorts of offenders) is vital to drive improved performance”.

As the pace of reform continues, GtD looks forward to supporting those reforms with our expertise in outcome measurement and social impact analytics.