“Yeah, Says Who?” – Influence Through Data

You know you’ve achieved results – the data tells you so – but how do you influence sceptics to believe it?

It can be a rude awakening to take the findings of a study outside your own team or organisation, where trust and mutual support are more or less a given. In front of a wider audience of funding providers or other stakeholders, you will, in my experience, inevitably find yourself being challenged hard.

This is as it should be – scrutiny is a key part of a healthy system – but, at the same time, it’s always a shame to see an impactful project or programme struggle purely because its operators fail to sell it effectively.

Fortunately, while there are no black-and-white rules, there are some things you can do to improve your chances.

Confidence = Influence

When I present findings I do so with a confidence that comes from experience and from really understanding the underlying mechanics. But if you’re not a specialist and don’t have that experience, there are things you can do to make yourself feel more confident and thus inspire greater confidence in your audience.

First, make sure you have thought through and recorded a data management policy. Are you clear how often data should be entered? If information is missing, what will you do to fill the gaps? What are your processes for cleaning and regularising data? Is there information you don’t need to record? A professional, formalised approach to keeping timely and accurate data sends all the right signals about your competence and the underlying foundations of your work.
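To make that concrete, here is a minimal sketch in Python of the kind of routine checks such a policy might formalise: how complete the data is, whether categories are recorded consistently, and how quickly records are being entered. The file and column names are hypothetical, invented purely for illustration.

import pandas as pd

# Hypothetical session records; the file and column names are illustrative only.
records = pd.read_csv("session_records.csv", parse_dates=["session_date", "entered_date"])

# 1. Completeness: share of missing values in each column.
print(records.isna().mean().sort_values(ascending=False))

# 2. Regularising: make sure 'Male', ' male' and 'M' are not counted as three groups.
records["gender"] = (
    records["gender"].str.strip().str.lower().replace({"m": "male", "f": "female"})
)

# 3. Timeliness: how long after each session was the record actually entered?
records["entry_lag_days"] = (records["entered_date"] - records["session_date"]).dt.days
print((records["entry_lag_days"] > 30).sum(), "records entered more than 30 days late")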

Secondly, use the data as often as possible, and share the analysis with those who enter your data so that they can understand its purpose, and own it. Demonstrating that your data is valued and has dedicated, accountable managers hugely increases its (and your) credibility.

Thirdly, take the initiative in checking the reliability and validity of your own tools. If you use well-being questionnaires, for example, take the time to check whether they really measure what you want to measure. In other words, try to find fault with your own approach before your stakeholders do, so that when they find a weak point you have an answer ready that not only reassures them but also underlines the objectivity with which you approach your work.
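One common check of this kind is internal consistency. As an illustrative sketch only (the questionnaire items and data file are hypothetical), Cronbach's alpha can be computed directly from item-level responses:

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Classic internal-consistency statistic:
    # k/(k-1) * (1 - sum of item variances / variance of the totals)
    items = items.dropna()
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical well-being questionnaire with five items scored 1-5.
responses = pd.read_csv("wellbeing_responses.csv")
alpha = cronbach_alpha(responses[["q1", "q2", "q3", "q4", "q5"]])
print(f"Cronbach's alpha: {alpha:.2f}")  # values around 0.7 or above are usually read as acceptable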

Own Your Data’s Imperfections

Finally, and this might feel counterintuitive, you should identify the weaknesses in your own data and analysis and be honest about them. All data and analysis have limitations, and being clear about those limitations, and about the compromises made to work around them, demonstrates objectivity which, again, reinforces credibility.

In conclusion, the better you understand your own data and analysis, flaws and all, the more comfortable and confident you will feel when it, in turn, comes under scrutiny.

Hallmarks of a Good Evaluation Plan Part 2 – Change & Competence

People don’t want to fund projects, or organisations, or even people – they want to fund change. And they want to work with professionals who know the territory.

Last week I introduced the three hallmarks of a good evaluation plan and covered the first of those, “relevance”, in some detail. This week, I’m unpacking the other two.

The second hallmark is evidence that evaluation, as planned, will promote learning and change within an organisation. In our experience at Get the Data, not all organisations are ready for change, so reassuring funding bodies of your willingness to change at the outset is a good tactical move. You can support this by engaging with changemakers within your organisation – those individuals who, if the evaluation demands change, have the desire and ability to make it happen.

For our part, Get the Data’s cutting-edge predictive analyses are helping practitioners to identify what will work best for their clients. Managers are using that information to improve interventions, predict future impact and, in the case of social impact bonds, forecast future income. All of which, of course, demonstrates a focus on improving results through intelligent change.

Knowing Your Stuff

The third and final hallmark of a good evaluation plan is evidence of technical competence, which will reassure funding assessors that they are dealing with people who are truly immersed in the field in which they are working.

In practice, that means employing the agreed professional nomenclature of inputs, outputs, outcomes and impacts; and also demonstrating an awareness of the appropriate methods for impact and process evaluation. Though this is partly about sending certain signals (like wearing appropriate clothing to a job interview), it is by no means superficial: it also enables assessors to compare your bid fairly against others, like for like, which is especially important in today’s competitive environment. In effect, it makes their job easier.

Organisations that commission Get the Data are working with some of the most vulnerable people in society. We value their work and are committed to using quantitative methods of evaluation to determine their impact. We are proud that our impact evaluations not only deliver definitive reports on the impact of their work but also play a decisive role in ensuring that vital interventions continue. A rigorous evaluation is a business case, a funding argument and publicity material all in one.

I hope you have found this short introduction to the hallmarks of a good evaluation plan useful.  If you want to learn more about how our social impact analytics can support your application for grant funding then contact me or sign up for a free one-hour Strategic Impact Assessment via our website.

There’s no Magic Way of Measuring Impact

Wouldn’t it be great if there was a way of measuring your social impact across multiple projects using a single dependable statistic? Well, I’ve got some bad news, and some good.

I was recently talking to a charity who wanted to know how they could go about measuring and reporting the overall impact of the organisation on children and families. With multiple strands each aiming to achieve different things, they asked whether a single outcome measure – one accurate, reliable number summing up the impact of the whole organisation – was either possible or desirable.

First, here’s the bad news: it’s very unlikely – I might even be so bold as to say impossible – that any such thing exists. You might think you’ve found one that works, but when you put it in front of a critic (or a nitpicking critical friend, like me) it will probably get ripped apart in seconds.

Of course, if there is a measure that works across multiple projects, even if not all of them, you should use it, but don’t be tempted to shoehorn other projects into that same framework.

It’s true that measuring impact requires compromise, but an arbitrary measure, or one that doesn’t stand up to scrutiny, is the wrong compromise to make.

The Good News

There is, however, a compromise that can work, and that is having the confidence to aggregate upwards knowing your project-level data are sound. You might say, for example, that together your projects improved outcomes for 10,000 families, and then give a single example from an individual project that improved service access or well-being to support the claim. In most situations that will be more meaningful than any contrived, supposedly universal measure of impact.

Confidence is the key, though: for this to work you need to find a reliable way of measuring and expressing the success of each individual project, and have ready in reserve information robust enough to hold up to scrutiny.

Measuring Means Data

In conclusion, the underlying solution to the challenge of measuring impact, and communicating it, is a foundation of good project-level data. That will also make it easier to improve performance and give you more room to manoeuvre. Placing your faith in a single measure, even if you can decide upon one, could leave you vulnerable in a shifting landscape.

You Might Be Winning but Not Know It

Have you ever eagerly awaited the results of a project impact study or external evaluation only to be disappointed to be told you had no impact? ‘How can this be?’ you might ask. ‘The users liked it, the staff saw the difference being made, and the funding provider was ecstatic!’ The fact is, if you’re trying to gauge the final success of a project without having analysed your data throughout its life, proving you made a difference is bound to be difficult.

Of course we would all like to know before we invest in a project whether it’s going to work. As that’s practically impossible (sorry), the next best thing is to know as soon as we can whether it is on a path to success or, after the fact, whether it has been successful. But even that, in my view, isn’t always quite the right question: more often we should be asking instead what it has achieved, and for whom.

In most cases – rugby matches and elections aside – success isn’t binary, it’s complex, but good data analysed intelligently can reduce the noise and help to make sense of what is really going on.

A service might in practice work brilliantly for one cohort but have negligible impact on another, skewing anecdotal results. Changes might, for example, boost achievement among girls but do next to nothing for boys, leading to the erroneous conclusion that the project has failed outright. Or perhaps across the entire group, attainment is stubbornly unmoving but attendance is improving – a significant success, just not the one anyone expected. Dispassionate, unprejudiced data can reveal that your project is achieving more than you’d hoped for.
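A straightforward subgroup breakdown is usually enough to surface this. A minimal sketch, assuming a hypothetical table of before-and-after scores with a column identifying each cohort:

import pandas as pd

# Hypothetical outcome data: one row per participant, with baseline and follow-up
# measures and a cohort label (e.g. girls/boys, age band, referral route).
outcomes = pd.read_csv("project_outcomes.csv")
outcomes["attainment_change"] = outcomes["attainment_after"] - outcomes["attainment_before"]
outcomes["attendance_change"] = outcomes["attendance_after"] - outcomes["attendance_before"]

# Average change by cohort: a flat overall result can hide a strong effect
# in one group, or a gain on attendance rather than attainment.
print(outcomes.groupby("cohort")[["attainment_change", "attendance_change"]].mean())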

Equally, if the goalposts are set in concrete, consistently mining that data can give you the insight you need to learn, improve and change tack to achieve the impact you want while the project is underway. Or, at least, to check that you’re collecting and reviewing the right data – if the answer to any of your questions is a baffled shrug or an anecdote (and it too often is, in my experience) then you have a problem.

I’ll be circling back for a detailed look at some of the case studies hinted at above, as well as several others covering various fields, in later posts in this series.

In the meantime, consider the project that keeps you awake at night – where are its dark corners, and what good news might be lurking there?

Prison Reform and Outcome Measurement

Pomp and pageantry came to Westminster this week, with the Queen’s Speech setting out the British government’s legislative agenda for the coming Parliamentary session. But amid the ermine and jewels was a call for hard, empirical data.

The centrepiece of the ‘Gracious Address’ was a major shake-up of the prison system in England. Legislation will be brought forward to give governors of six “reform prisons” unprecedented autonomy over education and work in prisons, family visits, and rehabilitation services. With this autonomy will come accountability and the publication of comparable statistics on reoffending, employment rates on release, and violence and self-harm for each prison.

Further details of the government’s prison reforms were contained in “Unlocking Potential”, Dame Sally Coates’ review of prison education in England, which was published this week. The review includes recommendations to improve basic skills, the quality of vocational training and employability, and personal and social development. Echoing the government’s move to devolve greater autonomy to prison governors, Dame Sally’s review also endorsed the need for governors to be held to account for the educational progress of all prisoners in their jails, and for the outcomes achieved by their commissioning decisions for education.

Improved education outcomes for individual prisoners will be supported by improved assessment of prisoners’ needs and the creation of Personal Learning Plans. However, Dame Sally’s review also called for greater performance measurement, not only for the sake of accountability but also for the planning and prioritisation of education services.

As noted before, this is an exciting time for prison reform on both sides of the Atlantic. However, reform must be based on evidence and supported by hard data. Devolving decision making to those who know best is a bold move, but with autonomy comes accountability and transparency. As Dame Sally’s report recommends, accountability and transparency are well served by:

“Developing a suite of outcome measures to enable meaningful comparisons to be made between prisons (particularly between those with similar cohorts of offenders) is vital to drive improved performance”.

As the pace of reform continues, GtD looks forward to supporting those reforms with our expertise in outcome measurement and social impact analytics.

How Rigorous Impact Evaluation Can Improve Social Impact Bonds

Social Impact Bonds

In recent years, Social Impact Bonds (SIBs) have been used increasingly by the British government to deliver public services via outcomes-based commissioning. They are also becoming increasingly common in the U.S. By linking payments to good outcomes for society, SIBs are used not only to provide better value for money, but also as a driver of public sector reform. In the words of guidance published by the British government’s Cabinet Office:

“[Social Impact Bonds] are … designed to help reform public service delivery. SIBs improve the social outcomes of publicly funded services by making funding conditional on achieving results.”

While SIBs are not without their critics, their proponents argue that the bonds are a great way to attract private investment to the public sector while focusing all partners on the delivery of the desired social outcomes. This new way of commissioning services also encourages prime contractors to subcontract delivery of some services to community and voluntary organisations, which bring their own experience, expertise and diversity to the provision of social services.

GtD have completed evaluations that have helped shape social impact bonds, and through our work we have identified five key questions that should be asked by anyone thinking of setting up a SIB or looking to improve the design of an existing one:

1. Will it work?

Some services delivered by SIBs fail before they start because the planned intervention cannot plausibly achieve the desired outcome. In other words, just because an intervention reduced the number of looked-after children entering the criminal justice system doesn’t mean it will work for all young people at risk of offending. That said, a weak evidence base around a particular intervention does not mean you should not proceed – but it should prompt a SIB design that includes an evaluation that can establish quickly whether the SIB is delivering the hoped-for outcomes.

2. Who can benefit from this intervention and who can’t?

We all want to help as many people as possible. However, we can quickly lose sight of who we are seeking to help when we are simply meeting output targets. In other words, if public services are funded by the number of clients they see, then providers could be tempted to increase numbers by accepting referrals of people for whom the service was not intended. So to achieve your outcomes – and receive payments – it’s vital to monitor intelligently the profile of your beneficiaries and ask yourself, “If my targeting were perfect, are these the clients I would want to work with to deliver my intended outcomes?”

3. Are we doing what we said we were going to do? And does it work?

Interventions can fail simply because they don’t do what you said they were going to do. If, for example, you are working with young people to raise their career prospects and your operating model includes an assessment of need (because the evidence base suggests that assessments increase effectiveness), then it should be no surprise that you did not meet your outcome targets if an assessment was completed with only half of your clients. Identifying your key outputs, monitoring their use and predicting outcomes based on their use can give you much greater confidence in achieving your intended impact.
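As a sketch of what that monitoring can look like in practice (the data file and column names below are hypothetical), tracking the completion rate of a key output such as the assessment of need is straightforward:

import pandas as pd

# Hypothetical referral records: one row per client, with a flag showing whether
# the assessment of need in the operating model was actually completed.
referrals = pd.read_csv("referrals.csv", parse_dates=["referral_date"])
referrals["month"] = referrals["referral_date"].dt.to_period("M")

# Assessment completion rate by month: a rate near 50% is an early warning
# that the intervention is not being delivered as designed.
print(referrals.groupby("month")["assessment_completed"].mean())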

4. What do we need to learn and how do we learn quickly?

All the SIBs we have seen collate a lot of data about their beneficiaries and the service provided, but few use those data to their full potential. With predictive analysis, we can monitor who appears to respond best, who is not benefiting and which form of service delivery is most effective. Is one-to-one work, for example, more cost-effective than group work? As such, you can learn how to define your referral criteria better or how to improve your operating model, even within the first months of a SIB.
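By way of illustration only (the data file, feature names and outcome flag below are invented), a simple predictive model of this kind might look like the following; in practice the choice of model and features would depend on the SIB’s own data.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical SIB monitoring data: beneficiary characteristics, the form of
# delivery received, and whether the intended outcome was achieved.
data = pd.read_csv("sib_beneficiaries.csv")
features = pd.get_dummies(data[["age_group", "referral_route", "delivery_mode"]])
outcome = data["outcome_achieved"]

X_train, X_test, y_train, y_test = train_test_split(features, outcome, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Hold-out accuracy:", model.score(X_test, y_test))
# Coefficients hint at which characteristics and delivery modes are associated
# with better outcomes - a prompt for review, not proof of what works.
print(pd.Series(model.coef_[0], index=features.columns).sort_values())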

5. How can we build a counterfactual?

A counterfactual is an estimate of the outcomes that would have been achieved without the SIB. Comparing your SIB’s outcomes to the counterfactual can highlight areas for learning and how to improve over time. Consideration should be given to the counterfactual at the commencement of the SIB. Full advantage can be taken of publicly available data sets to construct the counterfactual: for example, the National Pupil Database, the Justice Data Lab or Hospital Episode Statistics. (Top tip: consent from beneficiaries to use these data sources is generally required.)
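Once a counterfactual rate is available, the impact estimate itself is simple arithmetic. A minimal sketch, with entirely hypothetical figures:

# All figures below are hypothetical, for illustration only.
# Suppose a matched comparison (e.g. via the Justice Data Lab) suggests that,
# without the SIB, 45% of the cohort would have reoffended within a year.
cohort_size = 200
counterfactual_rate = 0.45      # expected reoffending rate without the SIB
observed_reoffenders = 72       # reoffenders actually observed in the cohort

expected_reoffenders = counterfactual_rate * cohort_size
estimated_impact = expected_reoffenders - observed_reoffenders
print(f"Expected without the SIB: {expected_reoffenders:.0f}")
print(f"Observed with the SIB:    {observed_reoffenders}")
print(f"Estimated offences avoided: {estimated_impact:.0f}")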

To discuss how GtD’s impact evaluation can help improve your organisation’s SIB, please contact Jack Cattell, jack.cattell@getthedata.co.uk