Opinion
Lee Elliot Major argues for a more evidence-based approach to university access work.
It is nothing short of a scandal that the vast majority of work in our universities and colleges aimed at opening doors to students from low- and middle-income homes is not evaluated properly. We spend over £1 billion a year on programmes to widen participation and broaden access to our academic elites; yet we know very little about what impact most of these efforts are having. Well-intentioned efforts to aid social mobility – from school outreach programmes to financial support for students – are effectively operating in the dark, uninformed by any hard evidence of what has worked before.
The problem has come to light again with the release of a report for the Higher Education Funding Council for England (Hefce) which “found little evidence that impact is being systematically evaluated by institutions”. Previous reports have revealed a lack of even the most basic monitoring of data and outcomes across the sector, prompting the English funding council to issue guidance on evaluation.
The national strategy unveiled by Hefce and the Office for Fair Access (Offa), meanwhile, has recommended a light-touch network of regional coordinators to facilitate collaboration between universities and schools. This sounds suspiciously like ‘AimHigher light’, a slimline version of the previous national outreach programme in England. AimHigher was cut in the last Whitehall spending review for lack of evidence of its impact. A lot of good work was undermined by the absence of hard data.
The gathering of robust evidence remains the Achilles heel of the sector. It seems tragic that this should be so in our respected seats of learning. Once, when the Sutton Trust offered to evaluate an outreach scheme at a highly prestigious UK university, the head of access declined, arguing that they would rather use the extra money to help more students.
The problem with this response is twofold. First, we didn’t (and still don’t) know whether the programme was actually having any impact on the students taking part. Second, had we evaluated it, the lessons could have enabled many thousands more students to be helped properly in the future. The current default – simply surveying participants to see whether they enjoyed the experience – is no longer good enough. The question that must be asked is whether the programme affected students in the desired way, in a way that would not have happened had the programme not existed. Did it enable students from poorer backgrounds to enter university who otherwise would not have done so?
But there are signs that the tide is at last turning. To its credit, Offa is urging institutions to adopt a more ‘evidence-based’ approach. What is now needed is the full mix of evaluation and monitoring – local pilot studies as well as national randomised trials – to measure the outcomes of access work.
Universities can look to the work we have been doing with schools on classroom interventions to learn some of the basic principles. The DIY evaluation guide published by the Education Endowment Foundation (EEF) offers simple advice on how to evaluate the impact of a programme at a local level. The approach combines professional judgment with knowledge of previous evidence to devise a programme, and then monitors the outcomes of participating students against those of similar students not on the programme. The Trust is currently developing a common evaluation framework for all of its programmes, which will enable evaluation of small projects that lack the resources to commission an independent evaluation themselves.
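To make this concrete, here is a minimal sketch in Python of the kind of local comparison the guide points towards. The data, field names and matching criteria are invented for illustration; a real evaluation would draw on proper school and UCAS records and use far more careful matching.

```python
# A minimal sketch of a local, DIY-style comparison (hypothetical data and
# field names; not the EEF's published method). Participants are compared
# with non-participants who look similar on prior attainment and background.
from statistics import mean

# Hypothetical records: prior attainment band, free-school-meals flag,
# whether the student took part, and whether they entered university.
students = [
    {"attainment": "B", "fsm": True,  "on_programme": True,  "entered_he": True},
    {"attainment": "B", "fsm": True,  "on_programme": False, "entered_he": False},
    {"attainment": "C", "fsm": False, "on_programme": True,  "entered_he": True},
    {"attainment": "C", "fsm": False, "on_programme": False, "entered_he": True},
    # ... in practice, hundreds of records drawn from school and UCAS data
]

def entry_rate(group):
    """Proportion of a group that entered higher education."""
    return mean(1 if s["entered_he"] else 0 for s in group) if group else 0.0

participants = [s for s in students if s["on_programme"]]
# Comparison group: non-participants matched on the same observable traits.
comparison = [
    s for s in students
    if not s["on_programme"]
    and any(s["attainment"] == p["attainment"] and s["fsm"] == p["fsm"]
            for p in participants)
]

print(f"Participants' entry rate:    {entry_rate(participants):.0%}")
print(f"Comparison group entry rate: {entry_rate(comparison):.0%}")
print(f"Estimated difference:        {entry_rate(participants) - entry_rate(comparison):+.0%}")
```

Even a rough comparison of this kind tells us more than a satisfaction survey, because it forces the question of how similar students fared without the programme.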
The Government recently designated the Sutton Trust and the EEF as the ‘What Works centre’ for education following the publication of our highly successful toolkit for schools. The Trust is currently developing an ‘HE access toolkit’, which we hope will summarise current evidence on the impact of access work in an accessible format, although it is not clear how much it will be able to say given the paucity of research in the field.
Undertaking ‘gold standard’ evaluations, which involve selecting participants at random in order to ascertain genuine impact, remains a tricky task. But the Sutton Trust has already funded a feasibility study on how a proper randomised controlled trial (RCT) might be undertaken for an access programme. We are now considering commissioning a fully fledged RCT.
Even if RCTs are currently a step too far for some, evaluations need at least to involve comparison groups. Two examples of such usage can be seen in recent evaluations commissioned by the Trust. Our review of summer schools used UCAS university admissions data to compare the outcomes of summer school students against those of similar students not on the programme. The evaluation of the Reach for Excellence programme, meanwhile, constructed a comparison group from students who qualified for the programme but did not enrol.
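As a rough illustration of what a comparison-group result looks like once the data are in hand, the short sketch below runs a simple two-proportion test on invented entry-rate figures. The numbers are hypothetical and stand in for the kind of UCAS-derived outcomes the evaluations above relied on; they are not the Trust’s actual findings.

```python
# Illustrative only: a simple two-proportion z-test on university entry rates
# for a programme group versus a matched comparison group.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Difference in proportions, z statistic and two-sided p-value."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_a - p_b, z, p_value

# Hypothetical figures: 380 of 500 summer-school students entered university,
# against 310 of 500 in the matched comparison group.
diff, z, p = two_proportion_z(380, 500, 310, 500)
print(f"Difference in entry rates: {diff:+.1%}  (z = {z:.2f}, p = {p:.4f})")
```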
If I had my way, every access programme would require an evaluation that met these basic standards. Robust evaluation is not easy to do, costs time and money, and often produces awkward and humbling results. But failing to do so is, in the end, failing the students we are trying to help.
This blog post first appeared on Westminster Briefing.