Posts


Community Rehabilitation Companies: PbR Results Event

Transforming Rehabilitation is the UK government's programme of outsourcing probation services to new community rehabilitation companies. In a radical move, the government now pays these new companies according to the reductions in reoffending they achieve. GtD is at the forefront of this work, providing cutting-edge social impact analytics to Sodexo Justice Services, which manages a number of these new companies.

The first PbR figures were published last month and GtD has been active in informing the debate on their significance. As part of this debate, we recently hosted a sell-out event for senior management and practitioners working in community rehabilitation companies and the justice sector.

An expert panel comprising Prof. Darrick Jolliffe of the University of Greenwich, Dr Sam King of the University of Leicester and GtD's own Jay Hughes considered the initial findings and what to do next, with Jack Cattell setting out a new vision of how predictive analyses can be used by practitioners to improve performance.

If you were unable to attend but would like to learn more about how GtD could support you in evaluating your social impact outcomes, or would like a free predictive analytic roadmap for your CRC, contact Jack Cattell. The event presentations can also be viewed via the links below:

Transforming Rehabilitation – Learning from the PbR results presentations

Prof. Darrick Jolliffe – University of Greenwich

Dr Sam King – University of Leicester

Jack Cattell – GtD

We've also set up a LinkedIn group as a forum for shared learning and discussion for individuals who work in, or have an interest in, the fields of probation, offender rehabilitation and Transforming Rehabilitation. Click here to request to join – Transforming Rehabilitation


No Data? No Voice, No Idea – The Importance of Data

The collection and analysis of data must never be allowed to fall by the wayside – it's a foundation stone, not a 'nice to have'.

Of course, I would say that, wouldn’t I? But here are six concrete reasons why data is important, drawing on Get the Data’s recent work with the Advice Services Alliance (ASA) as a case study.

  1. Data Is the Great Persuader

There is no more powerful tool for influencing stakeholders than data, as I explained in more detail in this recent blog post on using data to influence stakeholders.

ASA works with its members – various associations who provide advice services – to capture and analyse data from the front-line. This gives weight to conversations with all of those who have an interest in ASA’s direction of travel, reassuring them that strategic decisions are being made in response to changing needs, and reinforcing the professionalism that underlies the work they and their partners do.

 

  2. Data Means Funding

In particular, data is invaluable for attracting new sources of funding and persuading potential funders. Faced with a choice of projects or programmes in which they might invest, and with ever-tighter budgets, funding bodies will regard convincing data as a good reason to choose your work over others. ASA will use data to add weight to its funding applications for this reason.

 

  3. Data Gives You the Power to Lead the Debate

ASA seeks to lead thought, representing its members in national discussions by highlighting issues affecting the people who use advice services. The body of data and analysis to which ASA will refer confers authority and allows the organisation to direct the debate and steer collective thinking around youth issues.

 

  4. Data Defines Good (and Bad) Practice

Using data provided by its members, ASA will be able to identify areas for improvement in front-line practice and to pinpoint what is working especially well, so that good practice can be shared across the community. Data also informs training programmes and service improvements, and helps determine how resources should be deployed for maximum impact.

 

  5. Better Data and Measurement Development

Good data leads to better data. Using data means we learn its limitations and how to overcome them. We also learn how to measure the right outcomes better, particularly how to measure informal outcomes alongside formal attainment.

 

  6. Data Is Cheaper and Easier Than Ever

Cloud-based databases are cheap and easy to implement compared to the cumbersome systems of the past. They make it easier for people to enter data and share it. So there’s really no excuse for failing to collect and analyse data in this day and age.

If you would like to find out more about our cutting-edge approach to data capture and analysis please get in touch.


Choosing a Database? Just Keep It Simple

When choosing a database for your project or programme, made-to-measure software or flashy online tools aren't necessarily the right choice.

If you're going to gather and analyse data in a serious way, of course you need a database of some sort and, having spent much of my academic and professional career buried in them, I'm very much an advocate. But I too often see evidence of the database itself being regarded as the end rather than the means – as the solution to the challenge at hand rather than a tool for addressing it.

Perhaps that’s because we all so often feel under pressure, either external or self-imposed, to demonstrate what we have delivered in concrete terms. A specially commissioned database, perhaps with a catchy name and fancy interface, is something a project manager can point to and say, ‘I made this.’

The problem is that it takes huge amounts of time, resources and testing to create excellent software from scratch or to implement a powerful online tool, meaning that in practice these products all too often become clunky, creaky and frustrating. And products that don’t work well don’t get used.

As always, the answer is to focus on what you want to achieve. That will help you understand what kind of data you need to collect, who will be collecting and managing it and, therefore, what kind of system you need to house it.

Sometimes, of course, a custom-designed database or a powerful online application is absolutely the right choice but, in my view, those occasions are actually very rare. More often than not the humble, rather plain Microsoft Access, or a similar tried and tested generalist professional product, is not only cheaper but also better suited to the task. Remember, these programmes have been worked on over the course of not only years but decades, and have huge amounts of resources behind their development and customer support programmes. Standard software also makes sharing, archiving and moving between systems faster and easier in most cases.

It is also possible to customise Access databases to a fairly high degree, either in-house using the software’s in-built features, using third-party software, or by hiring developers to create a bespoke front end interface without getting bogged down in the complicated underlying machinery.
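As a small illustration of how easily standard software plays with analysis tools, here is a minimal sketch of reading an Access database straight into Python. It assumes the Microsoft Access ODBC driver and the pyodbc and pandas libraries are installed; the file path and table name are hypothetical placeholders, not a real system.

```python
# Minimal sketch of reading from an off-the-shelf Access database for analysis.
# Assumes the Microsoft Access ODBC driver plus the pyodbc and pandas libraries;
# the file path and table name are hypothetical placeholders.
import pandas as pd
import pyodbc

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\case_records.accdb;"
)
with pyodbc.connect(conn_str) as conn:
    cases = pd.read_sql("SELECT * FROM referrals", conn)

print(cases.head())
```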

And cloud data storage has made all this easier and cheaper than ever.

In conclusion, I’d like people to recognise that the real deliverable isn’t necessarily software, and that there’s no shame in off-the-peg. After all, a database is only as good as the information it holds and a clean set of useful, appropriate data is what people should really be proud of.


Service Innovation – Segment and Conquer

Supermarkets use data to sell us more of the things we want, and even things we don’t yet know we want – a real world example of service innovation through segmentation that we can learn from.

In social policy, we all know that there is no one programme or service that will work equally well for everyone in the target cohort. Even if it is having an impact across the board, there will be some people for whom it works better than for others, and that's where extra value can be squeezed out.

We might roll our eyes at the buzz-phrase ‘customer segmentation’, and of course there’s a difference between tailoring public services and selling sausages, but both require a similar approach to gathering data, analysing it, and in a sense letting it lead the way.

In the case of Tesco it’s about working out what shoppers want and selling it to them – a far easier job than convincing them to buy things in which they have no interest, a win for both parties. With public services it’s a matter of thinking in broad terms where we want people to end up – or not end up, as the case may be – and then letting what bubbles up from the data determine the most efficient route, and even the specific end point.

For example, working with one client that specialises in tackling youth offending, our data analysis found that though their intervention was effective overall, it was less effective at reducing offending among 12- and 13-year-olds than among 15- and 16-year-olds. By treating these two segments differently, the overall impact of the intervention can be improved and more young people can be set on the right path at the right moment in their lives.
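To make the idea concrete, here is a minimal sketch of that kind of segment-level check. The column names, age bands and file name are hypothetical and are not taken from our client's actual data.

```python
# Minimal sketch of a segment-level impact check (illustrative only).
# Assumed, hypothetical columns:
#   age        - participant age in years
#   treated    - 1 if the young person received the intervention, 0 otherwise
#   reoffended - 1 if they reoffended within the follow-up period, 0 otherwise
import pandas as pd

def impact_by_age_band(df: pd.DataFrame) -> pd.DataFrame:
    """Compare reoffending rates for treated and untreated cases within each age band."""
    df = df.copy()
    # Illustrative banding; the analysis above compared 12-13 with 15-16 year olds.
    df["age_band"] = pd.cut(df["age"], bins=[11, 13, 16], labels=["12-13", "14-16"])
    rates = (
        df.groupby(["age_band", "treated"], observed=True)["reoffended"]
          .mean()
          .unstack("treated")
          .rename(columns={0: "comparison_rate", 1: "treated_rate"})
    )
    # A bigger drop for the treated group in one band than another suggests
    # the two segments should be treated differently.
    rates["difference"] = rates["treated_rate"] - rates["comparison_rate"]
    return rates

# Example usage (hypothetical file name):
# print(impact_by_age_band(pd.read_csv("youth_offending.csv")))
```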

This approach challenges current orthodoxy which would have us determine our theory of change and set out clearly how we will achieve a given outcome before starting work. This can lead people to impose an analysis on the data after the fact, forcing it to fit the predetermined course. It also implies that all service users need more or less the same thing and we know very well that they don’t. The orthodox approach has its place, of course, once data has been collected and analysed, when we can start to make predictions based on prior knowledge.

Equally, it’s not efficient to design a bespoke service for every single end user, but there is a sweet spot in which we can identify sub-groups and thus wring out more value from programmes with relatively little additional time, manpower or funding. I’ll finish with another example: we have been designing approaches to impact management with a number of providers of universal services for young people and adult disability services. These agencies work with different sorts of people, with varying needs, and for whom different outcomes are desirable. Advanced statistical analysis can help us identify groups within that complex body and lead to service innovation which is both tailored and general.
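One family of techniques for identifying such groups is cluster analysis. The sketch below, using scikit-learn, is purely illustrative: the feature names and figures are invented, not drawn from a real service.

```python
# Illustrative sketch of finding service-user segments with cluster analysis.
# Feature names and figures are invented; real inputs would come from assessed
# needs and engagement data.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

users = pd.DataFrame({
    "need_score":        [2, 3, 8, 9, 1, 7, 2, 8],
    "sessions_attended": [12, 10, 3, 2, 14, 4, 11, 1],
})

# Standardise so each feature carries equal weight, then look for three segments.
features = StandardScaler().fit_transform(users)
users["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Average profile of each segment - the starting point for tailoring the service.
print(users.groupby("segment").mean())
```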


“Yeah, Says Who?” – Influence Through Data

You know you’ve achieved results – the data tells you so – but how do you influence sceptics to believe it?

It can be a rude awakening to take the findings of a study outside your own team or organisation, where trust and mutual support are more or less a given. In front of a wider audience of funding providers or other stakeholders, you will inevitably, in my experience, find yourself being challenged hard.

This is as it should be – scrutiny is a key part of a healthy system – but, at the same time, it’s always a shame to see an impactful project or programme struggle purely because its operators fail to sell it effectively.

Fortunately, while there are no black-and-white rules, there are some things you can do to improve your chances.

Confidence = Influence

When I present findings I do so with a confidence that comes with experience and from really understanding the underlying mechanics. But if you’re not a specialist and don’t have that experience there are things you can do to make yourself feel more confident and thus inspire greater confidence in your audience.

First, make sure you have thought through and recorded a data management policy. Are you clear how often data should be entered? If information is missing, what will you do to fill the gaps? What are your processes for cleaning and regularising data? Is there information you don’t need to record? A professional, formalised approach to keeping timely and accurate data sends all the right signals about your competence and the underlying foundations of your work.
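To illustrate, the kinds of checks such a policy describes can be formalised in a few lines of code. This is only a sketch: the field names, the 30-day timeliness idea and the simple quality 'report' are hypothetical choices, not a prescribed standard.

```python
# Illustrative sketch of the routine checks a data management policy might require.
# The field names and the timeliness rule are hypothetical choices.
import pandas as pd

REQUIRED_FIELDS = ["case_id", "date_entered", "outcome_score"]

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarise gaps and irregularities before the data is analysed or shared."""
    last_entry = pd.to_datetime(df["date_entered"]).max()
    return {
        # How complete is each required field?
        "missing_by_field": df[REQUIRED_FIELDS].isna().sum().to_dict(),
        # Duplicate case IDs usually indicate double entry that needs cleaning.
        "duplicate_cases": int(df["case_id"].duplicated().sum()),
        # Is data being entered on time? (e.g. a policy of entry within 30 days)
        "days_since_last_entry": int((pd.Timestamp.today() - last_entry).days),
    }

# Example usage (hypothetical file name):
# print(data_quality_report(pd.read_csv("service_records.csv")))
```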

Secondly, use the data as often as possible, and share the analysis with those who enter your data so that they can understand its purpose, and own it. Demonstrating that your data is valued and has dedicated, accountable managers hugely increases its (and your) credibility.

Thirdly, take the initiative in checking the reliability and validity of your own tools. If you use well-being questionnaires, for example, take the time to check whether they are really measuring what you want to measure in most instances. In other words, try to find fault with your own approach before your stakeholders so that when they find a weak point you have an answer ready that not only reassures them but also underlines the objectivity with which you approach your work.
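One common way of checking the internal consistency of a questionnaire is Cronbach's alpha. The sketch below shows the calculation, assuming item responses sit in a table with one column per question and one row per respondent; the file and column names in the usage example are hypothetical.

```python
# Minimal sketch of an internal-consistency check (Cronbach's alpha) for a
# questionnaire: one column per item, one row per respondent.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Values around 0.7 or above are conventionally read as acceptable reliability."""
    items = items.dropna()
    n_items = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - sum_item_var / total_var)

# Example usage (hypothetical file and column names):
# wellbeing = pd.read_csv("wellbeing_survey.csv")[["q1", "q2", "q3", "q4", "q5"]]
# print(round(cronbach_alpha(wellbeing), 2))
```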

Own Your Data’s Imperfections

Finally, and this might feel counterintuitive, you should identify the weaknesses in your own data and analysis and be honest about them. All data and analysis have limitations, and being clear about those, and about the compromises made to work around them, demonstrates objectivity which, again, reinforces credibility.

In conclusion, the better you understand your own data and analysis, flaws and all, the more comfortable and confident you will feel when it, in turn, comes under scrutiny.


There's No Magic Way of Measuring Impact

Wouldn’t it be great if there was a way of measuring your social impact across multiple projects using a single dependable statistic? Well, I’ve got some bad news, and some good.

I was recently talking to a charity who wanted to know how they could go about measuring and reporting the overall impact of the organisation on children and families. With multiple strands each aiming to achieve different things, they asked whether a single outcome measure – one accurate, reliable number – to sum up the impact of the whole organisation was either possible or desirable.

First, here's the bad news: it's very unlikely – I might even be so bold as to say impossible – that any such thing exists. You might think you've found one that works but when you put it in front of a critic (or a nitpicking critical friend, like me) it will probably get ripped apart in seconds.

Of course, if there is a measure that works across multiple projects, even if not all of them, you should use it, but don’t be tempted to shoehorn other projects into that same framework.

It’s true that measuring impact requires compromise but an arbitrary measure, or one that doesn’t stand up to scrutiny, is the wrong compromise to make.

The Good News

There is, however, a compromise that can work, and that is having the confidence to aggregate upwards knowing your project level data are sound. You might say, for example, that together your projects improved outcomes for 10,000 families, and then give a single example from an individual project that improved service access or well-being to support the claim. In most situations that will be more meaningful than any contrived, supposedly universal measure of impact.
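A toy sketch of that 'aggregate upwards' idea is shown below. The project names and figures are invented purely to illustrate how a headline number can be built from, and traced back to, project-level results.

```python
# Toy sketch of building a headline figure from sound project-level results.
# The project names and numbers below are invented for illustration.
project_results = {
    "family_support":  {"families_helped": 4200, "headline": "improved service access"},
    "youth_mentoring": {"families_helped": 3100, "headline": "improved well-being"},
    "housing_advice":  {"families_helped": 2700, "headline": "sustained tenancies"},
}

total = sum(p["families_helped"] for p in project_results.values())
print(f"Together our projects improved outcomes for {total:,} families.")

# The headline is only as credible as the figures behind it, so each entry
# should be traceable back to a robust project-level measure.
```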

Confidence is the key, though: for this to work you need to find a reliable way of measuring and expressing the success of each individual project, and have ready in reserve information robust enough to hold up to scrutiny.

Measuring Means Data

In conclusion, the underlying solution to the challenge of measuring impact, and communicating it, is a foundation of good project level data. That will also make it easier to improve performance and give you more room to manoeuvre. Placing your faith in a single measure, even if you can decide upon one, could leave you vulnerable in a shifting landscape.

 


You Might Be Winning but Not Know It

Have you ever eagerly awaited the results of a project impact study or external evaluation only to be disappointed to be told you had no impact? ‘How can this be?’ you might ask. ‘The users liked it, the staff saw the difference being made, and the funding provider was ecstatic!’ The fact is, if you’re trying to gauge the final success of a project without having analysed your data throughout its life, proving you made a difference is bound to be difficult.

Of course we would all like to know before we invest in a project whether it’s going to work. As that’s practically impossible (sorry) the next best thing is to know as soon as we can whether it is on a path to success or, after the fact, whether it has been successful. But even that, in my view, isn’t always quite the right question: more often we should be asking instead what it has achieved, and for whom.

In most cases – rugby matches and elections aside – success isn’t binary, it’s complex, but good data analysed intelligently can reduce the noise and help to make sense of what is really going on.

A service might in practice work brilliantly for one cohort but have negligible impact on another, skewing anecdotal results. Changes might, for example, boost achievement among girls but do next to nothing for boys, leading to the erroneous conclusion that it has failed outright. Or perhaps across the entire group, attainment is stubbornly unmoving but attendance is improving – a significant success, just not the one anyone expected. Dispassionate, unprejudiced data can reveal that your project is achieving more than you’d hoped for.

Equally, if the goalposts are set in concrete, consistently mining that data can give you the insight you need to learn, improve and change tack to achieve the impact you want while the project is underway. Or, at least, to check that you’re collecting and reviewing the right data – if the answer to any of your questions is a baffled shrug or an anecdote (and it too often is, in my experience) then you have a problem.

I’ll be circling back for a detailed look at some of the case studies hinted at above, as well as several others covering various fields, in later posts in this series.

In the meantime, consider the project that keeps you awake at night – where are its dark corners, and what good news might be lurking there?


GtD’s Jack Cattell – Managing the Data Glut

Last month I had the great pleasure of spending a week in the USA to see for myself how GtD is developing its services in Atlanta, before attending the 9th annual Public Performance Measurement and Reporting Conference at Rutgers University in New Jersey.

Americans and the British may be "divided by a common language", but I was struck by the common challenges that policy makers and practitioners face on both sides of the Atlantic: ensuring that practice is grounded in "what works", getting more "bang for your buck" in service delivery and managing "big data" to understand how policy makers and practitioners are responding to complex social problems. Whether they are based in the U.S. or the U.K., all of us at GtD feel privileged to help our clients meet these challenges by providing definitive social impact analytics that help them to monitor their activities, learn quickly how to improve them and, ultimately, prove their effectiveness.

It was the theme of "big data" that took Alan and me to Rutgers University, where we were delighted to contribute our thoughts and experiences of "managing the data glut" at the PPMR conference. Drawing on over 20 years of experience of research and evaluation in the criminal justice system, we used our development of the DASHBOARD for the CJS as an example of how to manage the data glut. It provided a clear, illustrative example of how we rationalised over 1,500 separate performance indicators into a highly visual, user-friendly dashboard that gives managers across the criminal justice system a single view of their performance in bringing offenders to justice.
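We cannot reproduce the dashboard itself here, but the general pattern of rationalising many raw indicators into a handful of headline measures can be sketched in a few lines. The indicator names and figures below are hypothetical and are not the actual CJS specification.

```python
# Illustrative sketch of rationalising raw performance indicators into a small
# number of headline dashboard measures. The indicator names and figures are
# hypothetical, not the actual CJS dashboard specification.
import pandas as pd

# Wide table of raw indicators: one row per local area.
raw = pd.DataFrame({
    "area":                 ["North", "South"],
    "charges_brought":      [410, 388],
    "cases_completed":      [365, 352],
    "avg_days_to_disposal": [84, 97],
})

# Derive a handful of headline measures managers can compare at a glance.
dashboard = pd.DataFrame({
    "area":            raw["area"],
    "completion_rate": (raw["cases_completed"] / raw["charges_brought"]).round(2),
    "timeliness_days": raw["avg_days_to_disposal"],
})
print(dashboard)
```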

The presentation we gave at the conference is available to view on our LinkedIn page, please do follow us for updates and information relating to social impact analytics.

Contact my colleague Alan or me if you would like more information about any of our projects or about how our social impact analytics could benefit your organisation. If you believe you are ready to embark on your own social impact journey, you might be interested in our free Strategic Impact Assessment. Please do get in touch if we can be of assistance by emailing us at:

Jack Cattell jack.cattell@getthedata.co.uk

Alan Mackie alan.mackie@getthedata.co.uk

Jack Cattell - Director, Get the Data


Jack Cattell will be presenting at the Public Performance Measurement and Reporting Network Conference at Rutgers University later this month. Addressing the theme "Data Driven Decision Making: Navigating the Data Glut", Jack will focus on GtD's work in rationalising an organisation's data to provide clear performance indicators. The conference has attracted speakers from across North America, Europe and Asia and takes place on the 22nd and 23rd of September.

Driving Improved Outcomes for the Juvenile Justice System

It was a great pleasure to attend the Coalition for Juvenile Justice's annual conference in Washington D.C. last week. Under the title "Redefining leadership: engaging youth, communities and policy makers to achieve better juvenile justice outcomes", the conference convened a broad coalition of policy makers, practitioners, advocates, researchers and, importantly, young people themselves.

Improving outcomes for our young people was a good theme for the conference. And it comes at a time when juvenile justice is undergoing substantial reform in states across the U.S. This includes my home state of Georgia which is considered to be in the vanguard of reforming states.

The motivation for reform appears to be partly humanitarian: a desire to end the use of custody for young people and improve conditions in jail. It is also clear that the reform agenda is driven by fiscal realities and the need to reduce the cost to the taxpayer. The Department of Juvenile Justice in Georgia estimates that it costs upwards of $90,000 a year to house just one juvenile offender in one of its facilities.

Humanitarian and cost concerns are legitimate considerations in public policy, not least in how we treat our vulnerable young people. However, we must guard against good intentions that lack an evidence base, and against the danger of delivering cut-price justice. Good juvenile justice outcomes should be about increasing protective factors, ending the "school to prison pipeline", improving relations with the police and education authorities and – of course – reducing recidivism. So it was heartening that the conference was committed to improving outcomes for juvenile offenders, their families and communities as well as the wider juvenile justice system.

The key to achieving good outcomes lies in understanding the data, and I attended several seminars on data-driven decision making, with presenters demonstrating the use of data to model how their juvenile justice systems had been reformed to achieve improved outcomes. It was also good to learn more about current evidence-based interventions that are successful in addressing the underlying factors related to offending behaviour. My own contribution was to present a poster on how evaluation provides empirical evidence of how and why an intervention has achieved its outcomes and what can be done to improve them.

Juvenile justice reform should be led by an empirical analysis of the data: what is effective and cost-beneficial. Evaluation has a key role to play in this and, as the current reforms are rolled out, they should be evaluated and scrutinised to determine whether they are successful, how they could be improved – or even whether a particular reform should be reversed or altered.

If you would like to learn more about how evaluation can help you with your local reforms, please contact me at alan.mackie@getthedata.co.uk