In this blog post, Hugh Richards shares his experience of results days. Hugh is Head of History at Huntington School in York and the leader of the team of SLs who work on the HA’s Subject Leader Development Programme.
Firstly, I am no expert on data. I am an experienced head of department, but I am not claiming any particular authority, just sharing my approach in case it helps other Heads of Department. It’s a tricky, emotive day, and it has the potential to be costly in different ways. Although we sometimes desperately want things to be different for students and teachers, without some caution we can end up spending a lot of money on post-results services and/or damaging the culture, relationships and confidence within our departments.
On the day: looking after yourself
Before we get into it, let’s talk about you, as a Head of Department. Results days are tough for HoDs and it’s crucial to keep things in perspective. Whilst HoDs are rightly held accountable for results, it’s important to acknowledge the indirect relationship between your work as HoD and the students’ outcomes. You don’t carry sole responsibility for each individual result. Your responsibility is to improve outcomes for students over time.
So don’t kick yourself too hard for disappointing results. We are all only human, and I am willing to bet you’ve done the best job you could for the extraordinarily demanding last few years. Similarly, if they are brilliant results, don’t congratulate yourself too much – the work of many others went into these results (including primary colleagues). Results are a team effort and the students themselves have a great deal of individual agency. Keep that perspective.
Right. Results day is on the way. What’s the plan? The trick is to look after both students and your teaching team.
On the day: students
If you’re in school on results day, focus on helping any students who need your time and advice. The analysis can come later. The main thing the students will want to do, other than (hopefully!) celebrate, is discuss the re-marking of papers:
- Should we re-mark? History is often identified as one of the subjects where the marking is most reliant on examiner judgement. Consequently, re-marks (reviews of marking) are tempting. However, knowing the rules and procedures of your exam board – these can be found by Googling your exam board name and “post-results services” – might reveal a better path, for example a ‘clerical check’ to ensure that each page of the script and any additional answer booklets have been marked and all the marks entered correctly into the system. Look up your exam board’s deadlines for these services. Equally, familiarise yourself with the review of marking procedures: often, the given mark for an answer needs to vary quite a long way from the mark the re-marker judges it to deserve (in a different level of the mark scheme, for example) before any change is made. So when should you request a re-mark? My advice is to focus on the overall score (i.e. the total across all the papers) compared to the overall grade boundary (the final grade they get, not a ‘grade’ on a single paper). Anything within 2-3 marks of a boundary might change. Finally, no request should be made without written consent from the student, and this should be taken care of by examinations officers.
- Which paper? Don’t be tempted to automatically get their weakest paper re-marked. Instead, send the paper with the greatest number of questions – this offers the greatest number of chances for a mark to be changed. At A-level, a paper with two questions has less chance of changing than one with three or four. The only caveat here is if the student has dropped only a few marks on the ‘biggest’ paper, so there isn’t much room for improvement in any case.
- Making the decision: talk to the student, parents and other colleagues before making the decision. It is easy to spend a lot of money gambling and in my experience not many get changed, even in history.
If there is a major issue, e.g. one unit is strikingly different to the others (unlikely, but possible with things like A-level coursework where it is all moderated by one person for each centre) then talk to your head of centre, line manager and exams team to figure out the next steps in any broader appeal. Do this quickly, but get advice before communicating any such appeal to students and parents.
On the day: teachers
In my experience, a few headlines – overall attainment and progress – are all your teaching team needs on the day. It’s still the holiday: don’t flood them with every piece of data you process. Perhaps send an email with the headline attainment and progress figures so it’s there if they want it.
The rest can wait. If there are big problems, save them for a start of term presentation. Reassure your team you will get a plan together to tackle them, and it can wait until September. Keep spirits positive and thinking purposeful.
Before the start of term: a bit of analysis
This year’s results will be a bit weird. Remember that the students were examined on a reduced specification. That’s not to say we can’t learn anything at all from the data, but rather that it might tell us even less than usual. I say ‘even less’ because I think the context of this data means it won’t really be able to usefully inform our practice.
Tip 1: don’t over-analyse, stick to the useful data
Try to remember there is a difference between interesting data and informative data. What is interesting to drill down into might not actually shape what you do next. Interesting data often can’t, or shouldn’t, inform your practice. For example, it might be interesting to see how SEND students in a particular teaching group did, but this probably doesn’t tell you much about helpful approaches with the next group’s SEND students, who will have different specific needs.
Tip 2: Look for patterns over time
This will be really difficult this year. Perhaps compare to 2019 and before, but actually these students had a very different school experience and examination arrangements, so there is probably little to gain from this in 2022. However, this should usually be the aim. In the light of this year’s context, I would be very wary about changing big things about what you do – there simply won’t be a pattern of data to support it.
Taking dramatic action based on one 15-mark question not going well for your students is not the way forward. The same teaching, with a different question next year, might result in much better outcomes. Instead, look for the signals amongst the noise. Adam Boxer has a great blog on interpreting the outcomes of assessments and exams. He explains it better than I can.
One example from my experience was when we noticed one constant pattern 2017-2019. One of our five units – The Making of America – was consistently the lowest performing compared to the other units across several years. I have therefore completely reworked this unit as clearly the first-time teaching wasn’t working for our students. I have high hopes for it in the 2023-5 results, as again I will be looking for a pattern of improvement over time.
Tip 3: Small cohorts are the enemy of meaningful patterns
There has been a trend to isolate small cohorts of students – for example, boys with low academic starting points. At A-level the problem is often worse: small teaching groups can lead to some really warped percentages and make year-to-year comparison very difficult. That said, small cohorts can be much more about individual stories – very few schools have A-level cohorts large enough (it is different for colleges) to make useful comparisons year to year. So think instead about the stories you can explore about each individual, focusing on reflection and robust conversations rather than data that skews quickly.
Keep an eye out for this and judge what the size of the cohort allows you to do: data comparisons, reflection about individual students, or a mixture of the two?
Tip 4: Carefully consider what you gain from comparing teachers
If you feel you need to compare the results of different teachers, keep it private and never hint at it to them. You gain nothing from doing this openly. For what it’s worth, I don’t think you gain much from doing it privately either – other than to find areas of excellence, dig into why things are working well for a particular teacher, and find some practice that might be shared. A more useful evaluation of teacher performance focuses on the results you might realistically have expected from their group – not on other teachers with other groups.
Tip 5: Take care when using exam data to judge teachers
This is a knotty one. Performance management normally requires results to be taken into account, so interpreting exam data becomes a high-stakes exercise. Remember there is much about each result which teachers can’t influence: independent revision completed, family support, attendance. The last of these is especially relevant if students have missed a large slice of school due to the pandemic.
That said, reasons and excuses are different things. We must root out the former and be honest with ourselves about the latter. Legitimate accountability is important – the life chances of our students depend on us getting it as right as we can – but we equally need to avoid punitive or overbearing accountability.
Critically, there shouldn’t be many surprises at this stage – hopefully informal conversations over the two years of the GCSE have given you a sense of how a teacher’s class might perform – their results should correlate to these qualitative expectations as well as to any ‘target grades.’ If a teacher begins mentioning a host of problems with students you haven’t heard about before, it may merit a closer look.
Again, individual students – with good or poor results – carry a lot of agency. If a student misinterprets an A-level history essay question and loses a lot of marks as a result, this is not necessarily the fault of the teacher. Perhaps if this pattern occurs across a teacher’s results, there is more to be done to support them in the coming year.
The acid test for me is asking questions like these:
- How would your strongest teacher have fared with that group over the last two years?
- Would they have done significantly better? How do you know?
These questions are hard to answer, but they should be thought about before reacting to negative sets of results from an individual teacher.
The new term: moving forward
It is utterly critical to remember that these exams aren’t designed to tell you what to do next.
So finding clues on what to improve is tricky. If your students have had difficulty with essay questions, for example, what’s the problem to fix? Their core historical knowledge of those topics? How to write the essay? Their ability to express their historical thinking/explanations? Their understanding of the vocabulary in the question? This is really hard to pick out. So be careful in assumptions you make about what went wrong/what needs to change.
Getting scripts back helps. You want to focus on the unexpected here, as these are often the nuggets of gold in the pan that reveal something you can use in your teaching. Choose scripts where:
- Students have outperformed what you expected from them. They did something really well. Can you replicate it? This is especially helpful for middle and lower end grades that are above expectations.
- Students have significantly underperformed. What went wrong? Are there any commonalities you could perhaps work into your teaching?
Keep looking for big patterns, not small things. Don’t tear up a load of planning if a tricky 15 mark question hasn’t gone well. But, if lots of students have not recognised ‘medieval’ as synonymous with ‘middle ages’ in a question, that is something you can emphasise in your teaching. Similarly, if they haven’t recognised a time period in the question you can find ways to improve their awareness of the chronological divisions of the specification – and hence the exam.
Intervene by curriculum design. If your data has revealed large issues, think about how you teach this content when you first deliver it. For example, if students have struggled with a chronological descriptor in a question, can you ensure your teachers pepper different phrases and date ranges into their teaching, for example by using timelines or lesson titles more effectively? If it’s something you teach in Year 10, and thus have already taught the new Year 11s, make sure it gets picked up in the revision phases of Year 11. There are a range of things you can think about when considering how effectively an element is taught:
- Improved first-time teaching
- Improved use of assessment
- The ‘What’s the Wisdom On…?’ series in Teaching History
- Research and cognitive psychology
- Talking to networks of colleagues
- Looking at an issue across the curriculum, including in KS3
Share this work as a department. There is no point fixing any issues in just your classroom. If teachers are underperforming year-on-year, one of the most important ways to improve their work is to share resources and the hallmark practice of the best teachers.
Consider what might need to be tweaked about KS3 – are there misconceptions with deep roots? Is there better groundwork that could be done in Years 7-9? Students rarely gain secure knowledge via more and earlier exam-style answers as Kate Hammond demonstrated in her article in Teaching History 157. You could plan a departmental CPD using Kate’s work to support planning for better groundwork.
There is lots more that could be added here, and I hope others might make further suggestions, so if there is something I have forgotten or overlooked, or you want to discuss further, please do get in touch via Twitter – my handle is @HughJRichards.
Lastly, good luck with the results and the aftermath, and remember that we are far from being back to normal yet – the ‘normal’ feel of results day conceals an awful lot of good, old-fashioned mayhem from the last few years.