Teaching is better than marking

If you had to help someone learn something, would you teach them, or would you write a short comment on a piece of paper and hope that they learn it? Yet that is not far away from what we do with marking. After reading students' work, there are often better ways to address areas for improvement than with individual written feedback. Most involve teaching.

The modelling lesson

If I mark a set of essays, it is rare that every student has an entirely unique target for improvement. More realistically, there are about four or five targets, most of which are identifiable after a handful of books. Many students will even need all of those targets to improve.

Instead of wasting time writing all of these targets out in books, lump them together and model a response which meets the targets. This is particularly useful for extended writing or higher-mark questions: model how to meet the targets, directing questions to the students who made particular errors. For example, to the one who forgets to use quotation marks, start to write the quotation and ask them "what's missing?" and "why do we need these?" I have dramatically increased the amount of modelling I do this year, and feedback modelling is an important part of it.

Intervention groups

As I mentioned above, students tend to have similar targets, and marking will very likely identify 'batches' of students with the same ones. Because it's quite inefficient to write the same comment multiple times, why not just teach them in a group? It's often worth looking at the errors that persist despite feedback in books, then teaching them directly. For example, comma splices are much easier to explain and model than to give written advice for.

I have written before about our in-class interventions at DKA, and this is just the same, but after marking. It helps if you create a classroom culture where students are able to work independently while the teacher concentrates on groups. Classroom layout can also play a part in making this easier: spaces where you can work with small groups and still have a good view of the class are recommended.

Student work lesson

When you read work, take snapshots of excellent examples, then display them one by one and unpick them with the class. Pupils love working out who each belongs to and it’s great for the person who wrote it. It’s also useful to look at these examples and see what can be improved. It feels safer to offer constructive criticism on a good example from a pupil you have already praised. The best models will have examples which reflect the common errors of the class. In half an hour, you can read a set of books, work out the areas for improvement, choose the examples and then plan your lesson.

Marking proforma

This is something I have seen used by @Mrhistoire, @mrthorntonteach and @jofacer, which I have been using when marking homework. I am asking KS4 students to write an essay a week, so this has been essential in making that manageable. I read their essays, complete this and simply teach the key errors. One thing I am aware of is that students might spend a long time on their work, so to see it come back with no written comments could be disheartening. By making a big deal of the 'appreciations' and giving public praise in class and in our morning line-up, positive behaviour logs, phone calls home and the appreciations bulletin, I think this will be ok! Students get a copy of this, highlight their names and then I teach what is in the second box.

Live marking

Let’s say students are writing for half an hour. In that time, teachers can get around every student and have a look at their work. Teachers are pretty good at spotting a misconception with a ten second glance around the room at mini whiteboards, so you can bet they’re even better with much longer looks at books while students work. Often, it can be addressed there and then, allowing students to improve instantly. Or perhaps you can use a marking proforma like the one above, and you won’t even need to take the books in to mark! If timed right, you can have all students complete a task, decide the next steps and teach them what they need at the end of the lesson.

All of these methods work because the teacher looks at the work, identifies what needs to improve and then targets it through teaching. The elephant in the room is that leaders expect to see written feedback, and schools often have inflexible marking policies, so some of these methods become less efficient because we have to make them visible. It is hard to see the desire for written feedback go away completely, but with some of these more efficient, less time-heavy methods, the argument may start to be won. Teachers can then focus on the best way to give feedback: teaching.

Further reading: Look at the excellent work they are doing at Meols Cop High School: Stop writing feedback comments…and see what happens!

Mini whiteboards- the essential classroom tool

I have not always seen eye to eye with mini whiteboards. I hated handing them out, storing them, pens running out, pens going missing, not having enough, doodles, scribbles, drawings of inappropriate things, drawings of me, inappropriate drawings of me. In the past I would trot them out every so often before inevitably dismissing them as just too much of a hassle. Yet now I use mini whiteboards in every lesson and I can’t imagine teaching without them. Perhaps these are not the most fashionable tools in the education world, but here are the reasons why I find them indispensable.

Whiteboards are a way of getting immediate feedback about student understanding, with an effective sequence always starting with a well-designed task or question. Rather than just waiting as students write, this time can be used to seek the most useful responses. Do most students get it or not? Where are the great examples that can be used as models? Where are the examples of common errors that students can learn from? Which students are making the same errors?

Depending on what our checking tells us, there are many options:

  • Stop the task and reteach something immediately.
  • Give a simple piece of corrective advice to address a common problem e.g. ‘remember to mention the writer’.
  • Choose a range of responses (usually three) for students to compare.
  • Pick one example that is ‘nearly there’ and ask the class to improve it.
  • Sequence the responses shared so that they become increasingly sophisticated.
  • Find answers with a common thread and ask students to connect them (really useful for language analysis).
  • Find three errors and ask students to connect them.
  • Teach the students who don’t understand in a small group. See in-class interventions.

If the first board I checked was this one, I would be thinking about spelling, particularly character names. Is this student the only one getting these wrong? I know that in this lesson I shared a model paragraph on the board and the phrase 'ultimately lead to' has been lifted from that and used incorrectly, so I need to know if others have made the same error. I could choose this board to look at the spelling of 'ultimately'. We could use it as a fairly decent first attempt at an opening paragraph and ask students to rework it to make it better. I could seek out another contrasting set of adjectives used to describe Sheila and ask students to compare. Or I might not use this board at all; there are plenty of others to choose from! The main thing is, I don't want to leave it to a randomly selected student response and hope it tells me something. By getting every student to write a response to a clear and unambiguous question (not 'hands up if you don't understand'), I can be reasonably sure of the class picture and do something about it.

There are other benefits of using whiteboards too. One of the issues that we have, as I am sure many schools do, is the use of unnecessary fillers in spoken responses. Part of the reason for this, I feel, is students' lack of confidence in what they are saying, or that they don't know what they are going to say before they start saying it. The few seconds it takes to write a couple of ideas down on a whiteboard can help to address both of these. Add in grammar issues, not answering in sentences and the ubiquitous 'innit', and we have a number of elements that a quickly prepared written response can help to eliminate.

I want my students to develop excellent habits of editing, proofreading and redrafting. Whiteboards help me to create a culture where this is expected. It starts with making sure the first thing they write is high quality, and there’s an increased accountability when students write on whiteboards. When they know it will be held up and discussed, there is an incentive for it to be their best work. It’s easy to edit work quickly on a whiteboard and far less messy. With a constant focus on the quality of these short responses, students start to self edit, even as they are reading out their answers. They can spot the errors in other students’ answers and then fix their own. It’s easy for the teacher to correct the spelling of the word ‘beginning’ on a board and then instruct others in the class to fix it if they made that mistake. It’s great for adding commas or colons in the right place. Even as students write in their exercise books, they can make notes on whiteboards.

Every lesson at our school starts with a Do Now which is completed on the whiteboards. It means that lessons start in a calm and prompt manner and it is efficient when we don't have to wait for books to be handed out. The ability to immediately write something down on whiteboards can also help to eliminate the ridiculous amount of time it takes for some students to write the title and date.

Some argue that teachers might only do work on whiteboards in order to avoid marking books. Well, of course! And there's nothing wrong with that. The fact is, I'm not going to mark everything students write in their books, and I can't mark it if it is only in their heads. If it is written on a whiteboard, I am giving immediate feedback in a fraction of the time it takes to mark books; in many ways it is more beneficial than marking books.

But what about the problems that I listed before? Surely these won’t go away. Much of what we try and do at Dixons Kings is to make it easy for teachers to teach and students to learn, so behaviour systems and classroom routines are crucial to ensure students don’t often exhibit those off-task behaviours. As for the equipment issues, every classroom has a set of whiteboards on desks and students must carry a pen with them as part of their essential equipment. Without these systems, I wouldn’t use whiteboards, especially as I don’t have my own classroom.

We designed a mini whiteboard routine which is used by all teachers in the school. When we want students to hold up their boards, we say, "3-2-1… show me." They hold boards up with two hands, then we say "track student x" and that student reads their answer out. We come back to these routines in our practice sessions to ensure they are consistent. There are some who might argue that these routines reduce creativity or lead to 'robotic' teaching. Yet they have allowed me to be the most creative I have ever been as a teacher, and I would welcome anyone to come and visit to see just how much this routine and some of the others free teachers up to actually teach. Even without these whole school approaches, you can still design a whiteboard routine that works for you: consider in advance how equipment is stored and handed out, and what your expectations are during the sequence.

So, have a rummage in your stock cupboard, find the discarded whiteboards (I guarantee there will be some) and start using them.

TLT15 Part 2: In-class interventions

Every lesson has at least one moment where the teacher has to decide whether to move on. 'Have they got it?' we ask. The next step seems clear: stay with something when students don't 'get it' and move on when they do. But is it ever as simple as all of the students understanding or not understanding something? Not in my experience; it will usually be proportions of the class who get it. It is more complicated than that, so re-teaching the whole class or moving on aren't the only possibilities.

The moment we teach something, we may have one, two or fifteen students who don’t get it. We need to know who they are and do something about it. In this, the second part of my TLT15 presentation, I will explain how we can use in-class interventions to close gaps even as they appear.

The right questions > intentional checking > in-class intervention

If we are going to intervene effectively, the information we get needs to be valid and it needs to be from everyone. It can't just be the odd student, like the one who shouts out "I don't get it", leading the teacher to stop the whole class for an explanation that they don't all need. Questioning a selection of students will only tell us about those students, and asking students if they get it will only tell us about their perception. The only methods that can give us a true picture require every student to participate and give an answer that is unambiguous. Hinge questions are particularly useful for effective diagnosis: read Harry Fletcher-Wood's excellent series of posts on this topic. Mini whiteboards are the easiest low-tech way to gather accurate student information across a whole class, and there are of course some high-tech ways too. We can also make decisions about individual understanding from simply reading students' work.

We need to be intentional in our data gathering, ready to intervene in the most appropriate way. When students are completing tasks, or when they are answering questions on mini whiteboards, teachers should be looking for specific things that they will address. When we teach the lesson and design the questions, we know what the common misconceptions are and need to look for them; it shouldn't only be responding to whatever comes up.

At this point, teachers should be deciding what happens next: the in-class intervention. If 3 or 4 students are struggling with something, then they can be retaught while the rest of the class works. If a small number ‘get it’ then perhaps they can work on an additional task while the whole class is retaught. You might have a case where one group of students has struggled with one aspect while another has struggled with something else- set them all off on a task and then teach each half in turn. There are any number of possibilities.

I watched a Maths lesson recently where the teacher put a question on the board and gave students three minutes to complete it on whiteboards. Instead of simply waiting for students to hold them up, the teacher walked round and looked at every single student. By the time boards were held up, struggling students had already been identified, as had those who understood; common misconceptions had been gathered and two different methods spotted. A student was asked to share an example of the misconception, which allowed the teacher to explain the correct method again. After this, a group of three were taken to the front for another go while the rest of the class completed more questions. It was a sequence designed and implemented to ensure that all students understood.

Create the culture for it

I actually don't think the concept or the implementation of in-class interventions like this is difficult. I think the hardest part is creating a school and classroom culture that allows them to take place effectively.

If teachers are going to teach small groups while the rest of the class work, they need to have classrooms designed to facilitate this. These images of classrooms from my school show areas designed specifically for intervention. In the first image, the table in the corner is used and its position ensures that the teacher can scan the rest of the room as she intervenes. The horseshoe shape of the second image is a space that can accommodate several students and is right at the front to allow the teacher to monitor the rest of the class.

Teachers cannot establish the right culture of diagnosis and immediate intervention if the rest of the class start misbehaving when they intervene. That is why it is crucial that school systems take the hard work of managing behaviour away from teachers. If teachers have to deal with many incidences of poor behaviour, interventions won't work, and if they are chasing up detentions and making endless calls home then they are not planning. Clear expectations of behaviour, consistently demonstrated by teachers and supported by senior leaders with centralised detentions, mean that this small group teaching is much easier to manage. I have written about our consistent approach to behaviour at Dixons Kings Academy here.

We don’t even get to the intervention stage without useful information, so teachers need to see mini whiteboards as routine and not a nuisance. (How many mini whiteboards sit unloved in stock cupboards?) At DKA, we have whiteboard language and routines designed to make using them hassle-free. Students must carry a whiteboard pen around with them as part of their essential equipment (with a detention on the day if they don’t), once again making it as straightforward as possible to use whiteboards.

I am conscious that an approach which requires planning for different outcomes to a question may in fact lead to excessive planning. While it is true that planning may be made more complex, it doesn’t necessarily require creating resources as intervention can just be teaching. Anyway, any resources created, to use Joe Kirby’s word, are renewable. It’s just feedback- and it will take much less time than marking a set of books. And if you get it right with the initial teaching of concepts then you probably don’t have to waste as much time or energy later coming back and dealing with the problems.

TLT15 Part 1: Closing real gaps with data days

In education, we are obsessed with closing gaps. Gender gaps, gaps between cohorts, the gap between where a student should be and where they currently are. But we can’t close these gaps easily- at least not directly. They are actually chasms, impossible to fill because they are too abstract and just not specific enough. Gaps tell you nothing about individual students. The only way that we can close these massive gaps is by concentrating on the very specific things students don’t know, and the very specific things they can’t do.

The traditional way of closing these gaps is with written feedback, but it is pretty inefficient. Many others have written about the problems with treating feedback as equivalent to marking (see the links at the end of this post), and in my own post on solving the problems of feedback I tried to mitigate them. But if so much effort has to go into making marking more efficient, is it actually the best method? If we count up the hours a typical teacher spends marking books during the week, can we say that this is an effective use of their time? Over the next couple of posts, I will outline how we can use feedback to greater effect, without spending the hours it takes to write detailed comments in books. In this post, I am looking at how we use data days to close real gaps.

Data Days

At Dixons Kings Academy, we have three 'Data Days' throughout the year. Inspired by the book Driven by Data by Paul Bambrick-Santoyo, these are days where students don't come in and teachers look through their class data, discussing with heads of faculty how they can address any problems. These are not meetings where teachers have to justify their data, but ones where they can explore it, spot patterns and identify tangible next steps.

Bambrick-Santoyo suggests starting these meetings with ‘global’ questions, which look at the wider class picture:

  • How well did the class do as a whole?
  • What are the strengths and weaknesses? Where do we need to work the most?
  • How did the class do on old versus new standards? Are they forgetting or improving on old material?
  • How did they do on different question types (multiple choice versus open-ended; reading versus writing)?
  • Who are the strong and weak students?

Then these are followed up with ‘dig-in’ questions:

  • Bombed questions: did students all choose the same wrong answer? Why, or why not?
  • Break down the standards: did students do similarly on every question within a standard, or were some questions harder? Why?
  • Sort data by students' scores: are there questions that separate proficient from non-proficient students?
  • Look horizontally by student: are there any anomalies occurring with certain students?
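These dig-in questions are, at heart, simple operations on itemised results. A minimal Python sketch, using an entirely made-up results table (the student names, options and answer keys below are hypothetical, not real assessment data):

```python
# A minimal sketch of two 'dig-in' checks on itemised quiz data.
# Hypothetical data: student -> list of (chosen option, correct option) pairs.
results = {
    "Student A": [("B", "B"), ("C", "A"), ("D", "D"), ("B", "B")],
    "Student B": [("B", "B"), ("C", "A"), ("A", "D"), ("B", "B")],
    "Student C": [("A", "B"), ("C", "A"), ("A", "D"), ("C", "B")],
}

def score(answers):
    # Number of questions answered correctly.
    return sum(chosen == correct for chosen, correct in answers)

# Sort students by score, weakest first, to see who may need intervention.
by_score = sorted(results, key=lambda s: score(results[s]))

def common_wrong_answer(q):
    # For a 'bombed' question: which wrong option came up most, and how often?
    wrong = [results[s][q][0] for s in results
             if results[s][q][0] != results[s][q][1]]
    if not wrong:
        return None
    choice = max(set(wrong), key=wrong.count)
    return choice, wrong.count(choice)

for q in range(4):
    info = common_wrong_answer(q)
    if info:
        print(f"Q{q + 1}: most common wrong answer {info[0]} (x{info[1]})")
```

Here every student chose the same wrong option on question 2, which is exactly the "did students all choose the same wrong answer?" pattern worth digging into.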

Assessment design and analysis

It is no use just turning up to these meetings and looking at grades, because grades just don’t tell us anything meaningful- they are not specific enough. They are a starting point, but not much more than that. These meetings are only effective if we have precise and useful information. To get it we have to ensure that our tasks and assessments are designed with this in mind. A well designed assessment tells you much more than simply the grade achieved. It can tell you in exact detail what students can or can’t do or what they do or don’t know- but only if it is designed well. Often, assessments can be arbitrary tasks which we set because we have to. Or we just mark something randomly completed in class: I haven’t marked their books for a while, so I’d best mark this. If we think about some of the questions above, it makes sense that a good assessment will give information at the class level, student level, question level and assessment objective level.

Assessments can be essays, a mix of different question types like a Science exam, or even a multiple choice quiz. I often use Quick Key as a highly efficient way of using MCQs to get precise information about students; see here for a walkthrough. If multiple choice questions are designed effectively, they can give us exactly the information we need. In this example, showing itemised data from a quiz, we can see a common wrong answer of B. The wrong answer gives me very useful information about a common misconception. With carefully designed plausible wrong answers, multiple choice questions like this can be incredibly rigorous and often inform you far better than an essay would. It might be more time consuming to input question level data manually, but it wouldn't take more time than writing comments in books. See the links at the bottom of the page for more on question level analysis.

Intervention planning

Once we have drilled down to exactly what to focus on, we spend the rest of the data day intervention planning. These plans need to start from the gaps, not from the students. If we start from the students, then we end up with a massive gap to close and then try to throw things at the gap, ultimately changing very little. By starting with specific things, the gaps can actually be closed.

I used to start with the student; this is from an old intervention plan: “[student] does not always understand the task. He can produce good work when given very clear instructions but struggles without. Seated at front of class for easy support.” The full intervention plan is littered with similarly vague and useless advice. For this one, moving to the front is my plan. Well, if it was as simple as that, we would design classrooms to be all front! For another, the only thing written was: “Irregular attendance. Hopefully this has been addressed.” Despite the best of intentions, no gap was being closed, at least not as a result of my plan.

Our intervention planning now starts with the gap to be closed, then the students it affects, then exactly what will be done. Nothing vague. Precise gaps, specific interventions. As you can see, the interventions include different tasks, reteaching, working with other adults etc. There is no guarantee that this closes every gap for good, but at least it is a focussed plan. There are a couple of students on that plan who are not underachieving according to their grade, but have some basic issues that need fixing; these might not be our focus if we started just from the grade. On data day, every teacher produces one of these per class for the following three weeks. The most important thing is that we give teachers time to do it. Time to ask the questions, write the plan and prepare the resources.

Even if data days didn’t exist, the process could still be followed: design effective assessments, ask questions of the data, plan and carry out specific interventions on specific gaps. I contend that the time spent doing this is as valuable as the time spent marking a set of books, and I wonder if this is a more effective proposition. Next time: in-class interventions.

Useful Links

Marking and feedback are not the same from David Didau

Marking is not the same as feedback from Toby French

Is marking the enemy of feedback? from Michael Tidd

Marking is a hornet from Joe Kirby

Exam feedback tool from Kristian Still- this is a timesaving question level analysis and feedback tool

The problem with levels- gaps in basic numeracy skills identified by rigorous diagnostic testing from William Emeny

Is there a place for multiple choice questions in English? Part 1 and Part 2 from Phil Stock

Question Level Analysis with Quick Key

Quick Key is a tool that I use regularly to provide me with detailed feedback on student understanding. I wrote about it briefly in this post about technology that makes my life easier, but I thought that it merited a more detailed explanation.

First of all, Quick Key is an app for iOS and Android. It is free, but there is a paid version which allows you to do more; it is still incredibly cheap compared to other tech tools. You set a multiple choice quiz, students fill in one of the tickets and then you scan the tickets in using your phone. Students don't need to have a device themselves, just a pencil. It takes very little time to create quizzes and very little time to scan them in. On your phone, you can sort the questions to find the ones that students struggled with, and sort students by score too. This is great, but when you go to the website, you can do much more with the data, including exporting itemised information.

Here is the question level data from a recent quiz. The quiz was taken by a Year 10 class and covered everything that they have studied this year. They should be able to answer every question, as everything has been covered; that doesn't mean that it has been learnt, however. A quick explanation of what you can see: student names are down the left, scores down the right. Question numbers and correct answers are along the top. (I have shaded questions on the same topic together to help my analysis.) A dot indicates that a question was answered correctly, a letter indicates a wrong answer and the option chosen, while an X means that a question wasn't answered (or wasn't scannable).
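That dot/letter/X convention is easy to process if you export the itemised data. A minimal Python sketch, using made-up rows in the same format (the names and answers below are illustrative, not the real class data):

```python
# Summarise itemised rows in the Quick Key-style convention described above:
# '.' = correct, a letter = the wrong option chosen, 'X' = not answered.
# The rows below are made up for illustration.
rows = {
    "Student 1": ". . B . X . C .".split(),
    "Student 2": ". A . . . . . .".split(),
}

def summarise(items):
    correct = items.count(".")
    unanswered = items.count("X")
    wrong = len(items) - correct - unanswered
    return {"correct": correct,
            "wrong": wrong,
            "unanswered": unanswered,
            "percent": round(100 * correct / len(items))}

for name, items in rows.items():
    print(name, summarise(items))
```

A summary like this separates genuine wrong answers from unanswered questions, which matters when deciding whether a low score means a knowledge gap or a timing problem.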

I have identified the two lowest scoring students here. If I was only interested in scores, I would just give each extra work and be done with it. However, on closer inspection I can see that the first student has performed poorly across all topics, which is quite consistent and is borne out by classwork too. But the other student has actually answered a number of the first questions correctly (1-8 were all on language terminology) and is not underperforming relative to his peers. It is later questions where he has struggled. From this information, I can work out a couple of possibilities: 1) The student doesn't know the poems from the anthology well, particularly Ozymandias and London. 2) Perhaps the student struggled to answer all of the questions in the given time. My next step? Extra revision on these two poems for homework; extra time on the next quiz.

This question level analysis allows us to identify the differences between pupils who at first seem to be performing at an identical level. If we look at these two students who both achieved 67%, we can see that, with the exception of question 14, they answered completely different questions incorrectly. If we said that 67% equalled a C and put these two students in a C grade intervention group, neither of them would get exactly what they needed! Question level analysis of this kind allows us to really know what individual students need.

Now let's move from students to questions. Questions 13 and 14 were answered poorly across the board. This identifies a gap in knowledge, and in this case it tells me that students don't really understand what the exam will look like. They were 'taught' this, but I obviously didn't do as good a job as I thought. This is good information to have because it is easily addressed at whole class level.

But what about question 5? This surprised me, as this question was designed to show that students understood what a metaphor was, and in class they can do this. But a look at the question itself gives me a clue. I can make an assumption here that a number of students just saw "like or as" and thought simile. From this I can work out a couple of things: 1) my students need to read questions carefully, and 2) I need a better explanation for similes than "it is a comparison using like or as". I pulled on this thread and looked at question 1: why did students not answer this correctly when I expected them to? Once again, a quick check makes me think that students saw "verb" and jumped straight to C without really reading the question. It may of course simply tell me that they don't know what a verb is and that option was an understandable guess. Both of these questions flag up that some students can identify terminology and give examples, but can't define them.
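The difference between a question like 5, where most wrong answers land on one distractor, and a question where errors are spread across options can be checked mechanically. A hedged Python sketch with hypothetical tallies (the counts and threshold are illustrative, not real data):

```python
# Distinguish a shared misconception (one dominant wrong option) from
# general confusion (wrong answers spread across options).
# Hypothetical tallies: question -> count of each wrong option chosen.
from collections import Counter

wrong_tallies = {
    "Q5": Counter({"B": 11, "D": 1}),          # most picked the same wrong option
    "Q13": Counter({"A": 4, "B": 5, "D": 4}),  # errors spread fairly evenly
}

def diagnose(tally):
    total = sum(tally.values())
    option, count = tally.most_common(1)[0]
    # An (arbitrary) 70% threshold: if one wrong option dominates,
    # a shared misconception is the likelier explanation.
    if count / total >= 0.7:
        return f"shared misconception: option {option}"
    return "general confusion: errors spread across options"

for q, tally in wrong_tallies.items():
    print(q, "->", diagnose(tally))
```

A dominant wrong option points to one fixable misunderstanding to reteach; an even spread suggests the topic, or the question itself, needs revisiting.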

I could delve deeper into this data but I think the point of its usefulness is made. The quiz took maybe 30 minutes to create, 10 minutes to complete and 5 minutes to scan. Yet the information it has given me is perhaps more valuable than marking a set of books, something which would have taken considerably more time.

Solving the problems of feedback

Feedback is one of the best things we can do as teachers. However, it is also one of the most time-consuming, so it is crucial that feedback is highly efficient. David Didau states in his excellent Getting Feedback Right series: “Teachers’ feedback can certainly have a huge impact but it’s a mistake to believe that this impact is always positive.” The stakes are too high- if we get it wrong then teachers work increasingly long hours giving feedback which has little effect.

I see three particular parts of the process that we should refine in order to make feedback much more efficient:

  • The time it takes giving feedback: We need to ensure the time spent giving feedback is as short as possible while keeping the quality high.
  • Student response: We need to control the conditions under which feedback is received.
  • Impact of feedback: We need to ensure that feedback is not a 'one-hit wonder' which is quickly forgotten about.

Problem 1 and three solutions

We have to commit time to marking. Students work hard and deserve their work to be given due attention. There is a finite 'time cost' to marking that can't be saved. However, it feels to me that the marking load of teachers is simply unsustainable if we don't try to streamline the process, and there are ways we can save time in the process of writing feedback itself. Here are three ideas which I find help me to do this.

Taxonomy of errors

Often, when giving feedback, we tend to see the same misconceptions. If we write out the same comment, plus some way for the student to improve, again and again, then we are being woefully inefficient. One approach is to create a document with all of the errors and targets. Label students' work with T1 when a new target is generated, then type this and some guidance on the sheet. When the same target comes up, write T1 again, and when a new one appears write T2, etc. You end up with a target sheet such as this example of common errors in descriptive writing for use with a class. Initially, this might take longer than traditional marking, but once you have created advice for, say, improving homophones, you can use it again and again.

This approach comes from Keven Bartle, whose post here explains how you could work with a taxonomy of errors. Andy Sammons has a couple of blog posts on the subject too: DIY LEARNING: Taxonomy of Errors and Using Taxonomy of Errors for feeding forward.
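To make the mechanics of the taxonomy concrete, here is a minimal sketch of the bookkeeping it involves: reuse an existing code when a target recurs, mint a new one otherwise. The student names, targets and guidance are invented for illustration, not taken from my actual sheet.

```python
# Sketch of a 'taxonomy of errors': codes map to targets and guidance,
# and each student accumulates the codes written on their work.
taxonomy = {}     # code -> (target, guidance)
assignments = {}  # student -> list of codes

def record(student, target, guidance):
    """Reuse the existing code for a known target, or mint a new one."""
    for code, (existing_target, _) in taxonomy.items():
        if existing_target == target:
            assignments.setdefault(student, []).append(code)
            return code
    code = f"T{len(taxonomy) + 1}"
    taxonomy[code] = (target, guidance)
    assignments.setdefault(student, []).append(code)
    return code

record("Aisha", "Use quotation marks around quotations",
       "Re-punctuate the three quotations on your sheet.")
record("Ben", "Avoid comma splices",
       "Join the pairs of sentences with a conjunction or semicolon.")
record("Cal", "Use quotation marks around quotations",
       "Re-punctuate the three quotations on your sheet.")  # reuses T1
```

The point of the structure is exactly the efficiency described above: the guidance for each target is written once and reused every time the code is assigned.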


This is a highly efficient way of creating individual feedback: it saves time on marking and each student ends up with a personalised feedback sheet.

Mail merge step by step



1) Record your targets in the spreadsheet.

2) Create a template in Word.

3) Use the Mail Merge wizard > choose ‘Letters’.

4) Click Next > ‘Use the current document’ > Next.

5) Choose ‘Browse’ to find your spreadsheet.

6) Then click OK a lot!

7) Insert the fields you want on each target sheet.

8) Then click through ‘Next’ until you create your target sheets. You can choose ‘Edit individual letters’ to then adapt them and insert tasks etc.

You can read more about using mail merge in this post.
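For anyone who prefers a script to Word’s wizard, the same idea can be sketched in a few lines of Python: read names and targets from a spreadsheet export (CSV) and build one target sheet per student. The column headings, names and guidance below are assumptions for illustration.

```python
# A minimal scripted alternative to mail merge: one target sheet
# per row of a CSV. In practice the CSV would be exported from the
# marking spreadsheet; here it is inlined for a self-contained demo.
import csv
import io

rows = io.StringIO(
    "Name,Target,Guidance\n"
    "Aisha,Analyse the effect of techniques,Explain why the writer chose each word.\n"
    "Ben,Organise writing into paragraphs,Start a new paragraph for each new idea.\n"
)

sheets = {}
for row in csv.DictReader(rows):
    sheets[row["Name"]] = (
        f"Feedback for {row['Name']}\n"
        f"Target: {row['Target']}\n"
        f"Task: {row['Guidance']}\n"
    )

# To print the sheets you would write each one to a file, e.g.:
# for name, text in sheets.items():
#     with open(f"{name}_targets.txt", "w") as f:
#         f.write(text)
```

As with the Word version, the one-off cost is setting up the template; after that, each new set of sheets is generated from the spreadsheet in seconds.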

Pre-plan feedback

There is never a time when teachers are not busy. We do know in advance when certain assessments will take place and where we will be giving more detailed feedback. As we plan schemes of work or design a curriculum, we can also design things that will help us with our feedback. These can include some of the feedback activities mentioned above but there are other possibilities.

In the example below, I have created a model paragraph with a drop-down menu. This was used during the process of essay writing: rather than giving the feedback only at the end of the essay, it was much more formative. Bearing in mind that targets are often quite predictable in some aspects of student work, the feedback was quite simple. Obviously I needed to add some other targets and very specific feedback for others. My intervention was helpful and definitely produced better essays.

Problem 2 and three solutions

Even when we give high quality feedback, we need to remember that it has to be received by students, and sometimes we create conditions in which students don’t really think about the feedback in the way we want them to. It is important that we do everything we can to make sure students don’t have negative responses to our feedback. I often think the analogy of how teachers feel after lesson observations is a useful way of thinking about student feedback.

Gaps not chasms

“The purpose of feedback is to reduce the gap between current and desired states of knowing” [BUT] “…we are motivated by perceivable and closable knowledge gaps but turned off by knowledge chasms.” (Hattie and Yates). It is a difficult balancing act, but if we pitch our feedback wrongly then it will not have the desired effect.

Delay the sharing of grades

We want to ensure students concentrate on the feedback. Giving students grades at the same time will almost certainly ensure they concentrate on the grades. If they score highly, they ignore the feedback because they have done well; if they have not, they ignore the feedback for the opposite reason. Then they compare with the other students in the room. It is best to delay the grades.

Be careful what you praise

Generic praise like ‘well done’ is meaningless unless rooted in exactly what has gone well. If we consider a Growth Mindset approach, praising effort is effective and praising talent is not. Even this is problematic, as students may well work hard but make no progress, so we need to be careful about the kind of effort we praise. Praise can have all sorts of unintended consequences and we need to consider what they might be.

Dylan Wiliam summarises these ideas here:

(In what continues to be my most read post, I suggested a range of complex and convoluted ways that students can interact with feedback including things like ‘coded feedback’ where you write everything in code and students have to translate. Please forgive me.)

Problem 3 and three solutions

When feedback takes so long to give, we are daft if students only spend a short time on it. We need to stick with things until the student really has acted on their feedback and there is sustained improvement as a result. My old head of department used to refer to ‘pat on the head’ responses such as: “Yes miss, I will do this next time.” But they don’t, and next time they make the same mistakes. Alex Quigley has just written this cracking post on the subject which is well worth a read.

Feedback on feedback

If a student has completed a task in response to feedback, check it. If any misconceptions are not cleared up, they need to be addressed. You can also refine the quality of your feedback by looking at responses. In this instance, the student has followed my instructions in a different way than I expected. If I didn’t check this, I might think the student can use these homophones correctly simply because I gave them the task. If we don’t evaluate the quality and impact of our own feedback, then the time we spend is wasted.

Follow on feedback

Once I have checked pupil responses to feedback, I can give them a further activity at a suitable interval to think deeply about it. Those who made no improvement get a similar target with a dramatically different task. Those who are nearly there will be able to consolidate further and those who did well will be given a ‘next steps’ activity. Not only is this ensuring that the feedback sticks, but I already know what the students need to be working on and it takes me much less time.

Feedforward feedback

Because it is crucial that students master their targets, we should come back to them again and again until they stick. They should ‘feed forward’ into the next piece of work. Targets given for improvement should not be one-hit wonders. If we don’t do this, then what was the point of our feedback? Learning is messy and isn’t fixed with one simple comment.

So, if we can save time writing comments, focus on the conditions under which feedback is received and stick with the feedback for longer, our feedback is going to be much more efficient. Then we can put our feet up and knock off at 3 o’clock…



The books that fall through the cracks

For a long time, I have said ‘look at my books’ as the clearest indicator of what goes on in my classroom. They provide a clear narrative of the lessons, the effort, the feedback.

And yet we have had a work scrutiny with a random sample this week and I have found myself hoping that certain pupils’ names are chosen. The ones who read and act on the feedback, the ones who attend regularly, the ones who underline the title! And there are a small number of books I don’t want anyone to see: the books that fell through the cracks.

Part of me wants to criticise the work scrutiny process but I think it’s better to question why I have any books that I don’t want to be ‘scrutinised’. So I’ve chosen a sample of two books from one of my classes for a closer look. The first is the book I’d most like to show off and the second is the one I hope is never looked at.

Book 1 (The one I want them to look at)

This student engages in feedback. Every single time that I have provided feedback, they have addressed it. There is regular ‘dialogue’ with me; comments like ‘Thanks Sir’ and ‘I’ll make sure that I do that in the next draft’ show this. I often provide feedbactivities (tasks to support their targets) and these are always approached with determination to improve.


This student also ticks every error to show me that she has read it. It’s an interesting habit and one worth trying to instil in others.

It isn’t perfect because there are still misconceptions. They could use a ruler more regularly and they appear to have used the last page of the book to practise their signature.

All in all, this book shows off the impact that good feedback has when it is acted upon. The student is making progress, evident in both the book and the data. If you judged me on this book, you would have to say the quality of my marking is high and the student is making excellent progress; match it to the data and it would corroborate that.

Book 2 (The book I don’t want anyone to look at)

At the opposite end of the spectrum is the next book.

It is a book that starts on the 11th of March. It’s a replacement for the first one which was lost. (I didn’t lose it, although students’ books do go walkies sometimes). All that evidence of progress and all that feedback is missing. Yes, I have a markbook with data recorded and all of the previous targets but there is no evidence of it in the book.

Worryingly, there’s not much from me in the book, at least not in pen. They have missed the key assessments and so some pieces of work are acknowledged but not given particularly developmental feedback. There are two feedback sheets, which at least provide evidence that I’ve done something. However, looking at the sheets tells another story: the feedback didn’t really count.

First of all, they are not really engaging with the feedback at all. In the second image, the minimum is completed before scribbling all over the page.

The attendance of the second student is not high and they have missed a number of lessons for various reasons. A couple of feedback sheets have not been glued in, so they must have fallen out of the book. I’m not suggesting we should make excuses like this and avoid trying to address the issues, but it does provide context.

If you judge me on this book, you could argue that this is not good enough. Is that fair? I don’t think it would be fair to say this book was indicative of the average student in the class. Looking through the pile of books from the class, most are close to the first one, but there are quite a number of instances where the feedback is not really engaged with, a few feedback sheets not stuck in, and times when people were absent. If you look at every single book you will see consistency and lots of evidence of good practice. Still, it would be fair to say that this student is not visibly making good enough progress, that they are not engaging with the feedback, and that I don’t seem to have addressed this well enough.


I put a lot into my marking and if book two were chosen in a sample I’d end up having to defend myself and make all of the aforementioned excuses. It’s hard not to feel a little defensive (perhaps it’s that word ‘scrutiny’). In writing this blog, though, I’m asking myself why I have not picked up on this sooner. Why has it taken me nearly three months to notice a student who has not properly engaged with feedback in that time?

A developmental way of looking at exercise books would be to do as I have above: select the best and ‘worst’ books from a class, then reflect on what the books show and how the issues can be addressed. This could be done as something more formal, as part of staff or departmental training, or just as an individual. Some may say that it doesn’t show the true picture, as nobody is going to pick the worst book. I think it all depends on how it is framed. If people are working in a culture where they feel their development is valued and this is not simply a judgemental approach, I think they’ll embrace it fully.

Work scrutinies are important for a number of reasons and they can provide much useful information. If conducted well, they can be very helpful in developing practice but we’ll get much more out of it if we take a good look at our own books and start with the dog-eared, graffiti-covered, why-have-they-only-used-the-right-hand-pages books that have fallen through the cracks.

Further reading:

In this similarly reflective post, I looked at the impact of my feedback.

Feedback on my feedback

While feedback is one of the most effective interventions, not all feedback is good; many studies show a negative effect of some feedback. An analysis by Kluger and DeNisi (1996) of 3,000 research reports, excluding a number of poorly designed ones, showed that effect sizes were ‘highly variable’ and 38% were negative. (Source: Dylan Wiliam at the Festival of Education)

Giving feedback isn’t enough. The quality of feedback matters and we spend so much time writing feedback and marking books that it is ridiculous not to ensure that the feedback is good enough. With that in mind, in this post I am casting a hyper-critical eye on a piece of feedback I gave which didn’t work in the way I wanted.

Following an essay on war poetry, a number of my year 9 students received this target: ‘Try to analyse the effect of specific language techniques’. They tended to spot a technique and use a quotation, but the analysis was simple and along the lines of ‘makes the reader want to read on’ and ‘makes it more effective’. They were given the following supporting activity, based on The Woman in Black:

“…gazing first up at the house, so handsome, so utterly right for the position it occupied, a modest house and yet sure of itself, and then looking across at the country beyond. I had no sense of having been here before, but an absolute conviction that I would come here again, that the house was already mine, bound to me invisibly.”

Personification is a language technique that makes something which is not alive seem as if it is. Can you find an example in the passage above? Write the quotation.

Why do you think the writer might want the house to seem alive?

Problem 1: The task doesn’t really offer any additional support

The first issue I spotted is illustrated by this example, where the student has performed at the same level as before. They answered the question, but nothing in the activity, or my advice, gave them any support in answering it better. The feedback itself might be a useful reminder for the next essay, but if the student couldn’t do it before, then a poorly framed task like this will not help them. The task was supposed to make students think about the effect of language; with no support to do it better, nothing changed!

Problem 2: Poorly chosen example and poorly worded questions

The technique of personification was chosen because we were studying it in the following lesson. I think I picked the wrong example to illustrate this. Although the complexity of the text is not beyond the class, there are a number of aspects which needed explaining: handsome; modest; absolute conviction. Too much time was taken simply trying to work out what the passage was saying. This quotation is harder to analyse in the context of the whole text and I could have used an example from the start of chapter two which personified fog as evil. Students had just studied foreshadowing and pathetic fallacy so the analysis of the language would have been more straightforward and actually more precise if I had chosen that extract.

Even if the quotation had been straightforward, there is an imprecision in my design of tasks and questions. Technically, personification is a language technique that makes something which is not alive seem as if it is but this is not precise enough and certainly doesn’t get to the heart of how personification works in writing. In asking the question ‘Why do you think the writer…’, I felt that I was allowing the students scope to offer wide-ranging interpretations and I did get some thoughtful responses from students. However, the question didn’t have an explicit link with the previous one so students didn’t link the evidence they had chosen to the explanation of the effect. In the example below, I am happy that the student is exploring the effect but the response isn’t really linked to the particular quotation they have chosen.

Problem 3: The task doesn’t make them think solely about the thing I want them to think about

Doug Lemov writes in Practice Perfect: ‘When teaching a technique or skill, practise the skill in isolation until the learner has mastered it.’

In this instance, students should have been thinking about the analysis part. Instead, I tried to make them learn a new piece of terminology (or at least refresh their understanding), find evidence and then analyse it. They had to do a lot of work before even getting to the key part, yet the final question could have been attempted without any of it. The student below found the first part so difficult that they spent very little time on the analysis. One student wrote nothing.

As I said earlier, I am being hyper-critical, but feedback is too important and frankly too time-consuming for there to be any room for poorly designed tasks. Feedback needs to be designed so that students unavoidably think about the feedback and not other aspects. Explanations need to be crystal clear and precise to allow this to happen.


Evaluating the impact of written feedback

A couple of weeks ago I was putting together my slides for #TMSBradford and I took some photographs of mail merge feedback sheets completed by students. (This is one method I use for reducing time spent on marking while increasing impact.)

The problem was, I had to reject the first 3 or 4 examples because they hadn’t actually met the targets in the feedback exercise. Some had written very little and some had completed the feedback incorrectly e.g. used the wrong ‘there’ in a homophone activity.


This student has attempted the task but has not added all of the full stops.

This matters because it illustrated that the written feedback I had taken time to produce in these cases had zero impact! In some ways you could argue that there was a negative impact, as a misconception was allowed to continue.

As a result of this, I then decided to look at a range of my books across all of my classes and look at the impact of my written feedback.

In some books, there was feedback that had no impact on future pieces of work: identified errors continued into the next task. Sometimes the errors were fixed immediately afterwards but reappeared further down the line. In both instances, the time spent marking appeared to be wasted and the issues needed to be addressed again.

Of course, most of the time, students did act on the feedback and it is important to track those instances of success and do more of the things that work.

Dylan Wiliam, speaking at the Festival of Education, explained that written feedback is effective but that some studies actually show a negative effect. It is therefore not OK just to give written feedback; it must be of sufficient quality and have a worthwhile impact to be worth the time it takes. This is why this extra layer of evaluating the impact of written feedback becomes necessary.

I have already written about the entire process of making written feedback work. I would add the ideas that follow to that sequence so it becomes: before, during, after, evaluate.

Here are my recommendations for ensuring that written feedback does actually make a difference:

Ensure that any written feedback comes with advice on how to improve.

If you say ‘You need to organise writing into clear paragraphs’, then the few students who just forgot about paragraphs will possibly remember next time, but the ones who don’t actually know when to take a new paragraph are not going to understand just from that. It needs to come with some guidance on how to do it. Then students need an opportunity to put that guidance into practice. This is clearly the most important stage of the written feedback process, so students need to be given time to do it. They also need to develop the mindset of relishing their feedback and wanting to use it to improve. Teachers need to assure students that acting on their feedback is simply the most important thing they can do to improve.

After giving feedback, read the responses to the feedback and ensure that any misconceptions are addressed.

Read the response to feedback immediately (or as soon as you can). If you don’t check that they have understood and improved as a result of the feedback, then you risk the possibility that the feedback didn’t work. If students don’t improve from your feedback, refine it. Was the wording of your feedback helpful? Were students given sufficient time to read and respond? If a student just didn’t try, then you can discuss it with them. All of this is obviously much more effective soon after the feedback is given.

If you only write a comment with no opportunity to act on the feedback then it is going to be difficult for you to ascertain whether they have even understood the feedback. Asking students to comment on your feedback or phrasing the feedback as a question might be one way to do that. Even then, you need to evaluate the quality of student responses. A comment of ‘Thanks Sir. I have read my feedback and understand it’ isn’t really that illuminating!

Tackle wide misconceptions in class too.

Take the opportunity to teach things that are coming up repeatedly in your marking. Do this in addition to students acting on the feedback. The feedback is much more likely to stick if it is accompanied by this.

The taxonomy of errors is a good approach to this. Andy Sammons has a couple of blog posts on the subject too: DIY LEARNING: Taxonomy of Errors and Using Taxonomy of Errors for feeding forward. This is my example of common errors in descriptive writing for use with a class.

Ensure that students revisit their targets repeatedly.

There are a number of ways to do this:

Make sure the most recent targets are displayed on the front of exercise books or folders. Make them easy to refer to.

Another idea, which I will experiment with next year, is to create feedback bookmarks to be kept in exercise books.

Use the targets to feed forward into the next piece of work. Make this the first thing that you focus on when you mark.

Meticulously record targets for students and monitor them. I RAG them (red/amber/green) on a spreadsheet; if I don’t, I lose track of the targets I have given in the past.

Have routines embedded to ensure students’ targets are memorised. This could be that they answer with their target in response to the register. It could be that you repeatedly ask individuals about their targets. It could also be part of a call and response.

(I’m not interested in students being able to recite these simply so they can say it when someone observing asks them, I want them memorised because I want them to be conscious of what they need to do and then do it in their work.)
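The RAG tracking mentioned above can be sketched as a simple structure: each student has a set of targets, each with a red/amber/green status, and anything not yet green feeds forward into the next piece of work. The student names, targets and statuses here are invented for illustration.

```python
# Sketch of RAG (red/amber/green) target tracking.
targets = {
    "Aisha": {"Comma splices": "red", "Paragraphing": "green"},
    "Ben": {"Comma splices": "amber"},
}

def update(student, target, status):
    """Record a new RAG status after checking the response to feedback."""
    targets.setdefault(student, {})[target] = status

def still_open(student):
    """Targets not yet secure: these feed forward into the next piece."""
    return [t for t, s in targets.get(student, {}).items() if s != "green"]

# Aisha has improved on comma splices but isn't secure yet:
update("Aisha", "Comma splices", "amber")
```

Whether the record lives in a spreadsheet or a script, the value is the same: nothing slips to green by default, and open targets are visible when planning the next round of feedback.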

Don’t give too much feedback to act on.

Sometimes you will identify a number of things that need addressing. You should be precise with your feedback to ensure that progress is made as a result of the feedback. Write too many targets and there is a danger that they will not be acted on.

Like any area of professional practice, we need to continually refine our approach to written feedback so that we maximise the impact. While I still rate the quality of my written feedback highly, there is still much more room for improvement.

The Sharepocalypse: Written feedback

At #TMENG, attendees were given ‘Sharepocalypse’ cards. These are the responses to the question above on responding to feedback. Please add your own ideas in the comments section.

Have ROW time (Reflect On Work): when marking, set a target; the first 10 minutes of the next lesson are spent addressing that target.

Phrasing feedback as questions and giving time in lesson for students to write a response and then checking this later.

By trying to do it before they’ve ‘finished’. Showbie is good for this. @alcass2s

  • Personalise it
  • Allow time to read and respond to feedback
  • Establish a dialogue with students
  • Ask students to summarise feedback
  • Respond to responses to feedback

Verbal Feedback stamp with 2 bullet points – pupils write what you have said.

At the end of marking a piece of work set 3 targets: 1 long term and 2 short term. The short term targets are things which they spend the first 5/10 minutes of the next lesson doing e.g. Find 3 alternate words for ‘good’ which they could use in their writing. Students complete task in their books.

Choose five different common problems seen in a piece of work. Put them into a hierarchical order and assign students into groups. They have to address this problem and then can only move onto the next kind of sentence.

  • Ensure that you make time for meaningful conversations on a regular basis.
  • Relate feedback to clearly understood objectives and make it personal.
  • Make students justify their moving on to the next skills by analysing their own work and improvement. @funkypedagogy

At the moment using ‘What went well’ and ‘Even better if’ in exercise books and after assessments. @gwenelope

Have ready made ‘how to improve sheets’. Students do work and you stick help sheet in books and they complete tasks on the sheet.

After a piece of work, before handing in, the students write me a letter to explain what I might find when I look at it.

Taxonomy of errors (my new obsession!)

Get students to predict their feedback first with markscheme before handing feedback to them.

Spend time telling them, then make them write it down.

Lift the quality of peer feedback. Use ABC feedback regularly so students know it: Add > Build upon > Challenge.

Train students to self-assess their writing, then assess their work AND their assessment of themselves. They get better and better; take off the training wheels and just assess their assessment.

  • Blogging! MP3 of me reading and discussing student essay – more detail, some timing. They really have to pay attention!
  • Feedback symbols for quicker marking.
  • Mark/25 e.g. spg 3, analysis 6, use of vocab 4. Pre-decided and shared then re-used in later versions too. @miss_tiggr

Highlight in books: green=great, pink=could improve. Match to success criteria/ APP.

I usually try to ask a question in marking- and check students have answered it- or give them a specific task to do. Think instant verbal feedback is very helpful.

Feedback post-its. They write down their targets and move them up each lesson to the current page.