3 common SLT assessment issues and how to avoid them

Jodie Lopez

Jodie Lopez is a self-confessed geek and proud of it. Her career started in sales and customer service, and she left this to become a Primary teacher. She is now a freelance consultant and founder of lovEdtech. Jodie speaks at conferences for schools about using technology on a shoestring. Jodie is passionate about ensuring that every penny spent on tech in schools is both fit for purpose and keeps teachers coming back for more!

Follow @jodieworld

Website: www.lovedtech.uk
Image credit: Unsplash // rawpixel.

Most schools use formative assessment throughout the year, and then have some sort of test at the end as practice for SATs. This data-handling may be done via a commercial system, a tracking system they have created in-house, or through one of the paper-based approaches that many schools are still using. It doesn’t matter which method you choose, but it does matter how the data is being used.

In a life after levels, this gets really tricky. There are no consistent “levels” to use to ensure your assessment can be compared with another school, and I have lost count of the number of ways people have described to me what “secure” looks like in their school. It’s pretty much nonsense from an outside perspective, and doesn’t always result in the actual SATs results looking the way you might have expected.

There is currently a lot of talk about formative assessment as a method of data analysis: talk of tracking being utterly pointless, simply an exercise in making teachers tick loads of boxes over the weekend. However, I have yet to work with a school SLT and not be able to tell them something they didn’t realise just from looking at their formative assessment spreadsheets. It is only the first flag, but as soon as I raise the question and they look a bit deeper, it gives them some very straightforward next steps to work on.

High-stakes accountability has somewhat skewed our view of tracking any assessment for learning, but it’s also become non-negotiable in pretty much all schools. I do, however, think that if teachers are going to spend time putting numbers and data into any system - paper or electronic - then we’d better (at SLT level) be making the absolute most of that data. Otherwise, it’s an insult to the families and friends of the teacher who spent their Sunday night doing it, instead of having a relaxed dinner and movie night!

So, I want to cover some of the most common findings from looking at data in any year group - but particularly across Key Stages 1 and 2, as that is where most of my experience lies. These tactics are designed to be a useful starting point in assessing your assessment practices, as well as finding opportunities to put in place whatever may be required to ensure all your pupils reach their highest potential. After all, this should be the main (only?!) aim of assessment through the Key Stage.

Big word of caution: it is really important at SLT level that you don’t assume one of the following is true just from a spreadsheet. This is a chance to go and look further, to discuss with teachers before saying, “Ah, we can see from the data that you are doing XYZ wrong”. All that will give you is a teacher who is now terrified both of what their data says and of what you are saying about them behind closed doors. It may lead them to manipulate the data to match what you want - which is the most unhelpful part of all the accountability issues.
Issue 1: Teachers just aren’t filling in the assessment sheets

This issue is the easiest to fix in some ways, and can also help you to breathe a sigh of relief.

How to spot it: Year 3 data looks to have plateaued, and is not looking as healthy as other year groups.

Where to look next: Check the books. This is such a simple check. If the books clearly show progress, and if the lesson plans (if you choose to look at those too) show that the curriculum is being covered, then the issue is simply with the input of the data onto your chosen system.

Reactions: “Oh my goodness, Year 3 haven’t progressed AT ALL!” is a very different perspective to “Oh my goodness, the Year 3 teacher hasn’t filled in the spreadsheet AT ALL!” - yet it is often the former that gets put to the Year 3 teacher. This puts them on the defensive. Of course they have progressed! I have been teaching them really well! LOOK!

How to fix it: It would seem, from an SLT perspective, that the easiest fix to this is to just tell the Year 3 teacher to fill in the sheet. But try to look a bit further than that. Why are they not filling it in? Are you trying to assess too much, too often? Can you cut out some of the statements? What we have to teach is not the same as what is useful to actually assess in formal terms. So see if there is a way to help them to match up the two things, to help alleviate unnecessary workload issues.
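If your tracking system can export to a spreadsheet or CSV, a quick script can separate “no progress recorded” from “nothing entered at all” before anyone jumps to conclusions. This is an illustrative sketch only - the column layout, and the assumption that a blank cell means “not yet assessed”, are mine, not any particular tracking product’s:

```python
from collections import Counter

def completion_by_class(rows, class_field="class"):
    """Count filled vs blank objective cells per class in a tracking export.

    Assumes each row is one pupil, with a 'class' column plus one column
    per objective, and that an empty cell means the teacher has not
    entered a judgement yet (an assumption - check your own export).
    Returns the proportion of cells filled in, per class.
    """
    filled = Counter()
    total = Counter()
    for row in rows:
        cls = row[class_field]
        for field, value in row.items():
            if field == class_field:
                continue
            total[cls] += 1
            if value.strip():
                filled[cls] += 1
    return {cls: filled[cls] / total[cls] for cls in total}
```

A class sitting at or near zero completion tells you this is a data-entry conversation, not a progress conversation.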

Issue 2: Teachers are inconsistent with their expectations

How to spot it: Say you’re looking at data in December (I’m assuming a red / amber / green rating, as this is most commonly used) and you see:

  • A sea of green from one teacher.
  • A sea of red or amber from another.

When looking across objectives, this can show a difference in how they are approaching assessment with the new curriculum. I always say that if a teacher is turning everything green after teaching a unit once, they may be judging their teaching more than the learning. A lesson on fractions can go really well, for example, because of excellent planning and use of assessment for learning during lessons. However, that does not mean the pupil can recall and apply their knowledge of fractions later on, when out of context or in test conditions. On the flipside, you may find a teacher who is terrified of saying a child is secure until they have seen it used constantly, which may show in the data as a lack of progress.
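A “sea of green” is easy to eyeball on one sheet, but across a whole school it can help to compare each class’s share of green judgements against the school-wide average. A minimal sketch, assuming your tracker can export RAG judgements as plain text; the 0.35 threshold is entirely arbitrary and is a conversation starter, not a verdict on anyone’s teaching:

```python
from collections import Counter

RAG = ("red", "amber", "green")

def rag_profile(judgements):
    """Return the share of red/amber/green judgements for one class.

    `judgements` is a flat list of RAG strings from a tracker export
    (a hypothetical format - adapt to your own system).
    """
    counts = Counter(j.lower() for j in judgements)
    total = sum(counts[c] for c in RAG)
    return {c: counts[c] / total for c in RAG}

def outlier_classes(profiles, spread=0.35):
    """Flag classes whose share of green sits far from the school mean."""
    mean_green = sum(p["green"] for p in profiles.values()) / len(profiles)
    return [cls for cls, p in profiles.items()
            if abs(p["green"] - mean_green) > spread]
```

A class flagged here isn’t “wrong” - it is simply where the moderation conversation should start.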

Where to look next: Again, look at the books! Can you see ongoing evidence of progress against the objective, or application beyond just practising the skill over and over in a lesson on that topic?

Reactions: The great thing about this issue is that it offers an ideal opportunity for everyone to stop and think about the new curriculum again, allowing us to focus on depth and breadth. This should not be a cause for concern in teaching terms at all; it is simply about perceptions of secure subject knowledge and skills.

How to fix it: A staff meeting or INSET day session revisiting assessment and moderating across the school is key here. It is no longer easy to moderate writing in the old way of “Yep, that’s a 2B” etc. But moderating judgements is really useful. Do you all agree that Child A is secure in the following objectives, looking at these pieces of work? This method leads to fascinating discussions, and teachers will feel more secure in their own assessment judgements.

Issue 3: Data discrepancies between teacher judgements and test results

How to spot it: This one is easy to spot in the headline judgements: a child who is secure on the formative assessment, but does badly on a test, or vice versa. However, it is also worth looking at any test gap analysis information you have alongside any formative assessment gap analysis. An example of this would be a child who seemed secure in fractions during class, but got them all wrong on the test.
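If you have both judgements and test scores side by side, the mismatches can be listed mechanically. A sketch under stated assumptions: treating “green” as secure and a score below some cut-off as a poor test is a crude mapping I have invented for illustration - the output is a list of pupils to go and investigate, not a judgement in itself:

```python
def discrepancies(pupils, threshold=40):
    """Flag pupils whose teacher judgement and test score disagree.

    Each pupil is (name, rag_judgement, test_percent). 'Secure' means
    a green judgement; 'did well' means scoring at or above `threshold`.
    Both mappings are assumptions - adjust to your own data.
    """
    flags = []
    for name, judgement, score in pupils:
        secure = judgement.lower() == "green"
        did_well = score >= threshold
        if secure != did_well:
            flags.append(name)
    return flags
```

This catches both directions at once: the secure-in-class child who bombs the test, and the red-rated child who quietly aces it.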

Where to look next: There are a few possible reasons for discrepancies, and some are nothing to do with the curriculum. Therefore, outside of the advice here I would look at more social and emotional issues around taking tests, concentration in class, and so on. But from a purely academic point of view, look at the books! As with the previous issue, it may be that the teacher is not allowing opportunities to apply the knowledge from lessons in other contexts.

Reactions: Other than any of the possible non-academic reasons for massive differences between test results and in-class work, this leads back to issue #2 again. Are the teachers really confident that they are not just teaching the content of the curriculum, but also giving ample opportunities to apply that knowledge in a variety of contexts and challenges?

How to fix it: CPD on problem-solving in Maths, or extended writing challenges in English, could be required. These are usually very effective ways of ensuring that pupils are confident in the content of the curriculum without being guided by their teacher. More opportunities for independent work, and explicit teaching of those independence skills, are also useful. Pupils with SEND may have so much help in class that they really struggle with tests, even though they have the knowledge; look at how to build their independence where needed. On the other side of this coin are the pupils who seem not to pay attention, but then do amazingly well in the test, to everyone’s surprise. It may be that they are bored in class and need more challenge.

Going forward

It often feels like teaching granny to suck eggs when I mention these things to current teachers. But sometimes it is the opportunity to step back and look from a fresh perspective, away from all the ongoing daily activity in school, that can be useful. With that in mind, I hope you are able to look at data with fresh eyes! Try to look at the small anomalies, rather than the scary-looking graphs with plateaus and jumps. And, seriously, look at the books. Honestly. I don’t mean check whether the title is underlined or the teacher’s handwriting follows the policy, either. Just look for progress. That’s all.
