
Workshop 2A: Tools to Enhance Self-Feedback Generation

by Adrienne de Souza 

Coinciding with the Colloquium, I was puzzling over a conundrum at the back of my mind. As a college, we had decided to increase my team’s teaching load by half a class per teacher. My team teaches Project Work, and as it is a coursework-based subject with a 3000-word report as a key deliverable, we spend a fair amount of time marking. And so my back-of-the-mind puzzle was this: how can I design a curriculum that helps to manage an increased teaching load without simply adding more hours?

When I first signed up for the workshop, Tools to Enhance Self-Feedback Generation, this puzzle was not yet on my mind, so I had signed up purely out of interest in the session synopsis. Serendipitously, it was not long into the workshop that I realised self-feedback generation might just offer a way out of my conundrum: it could convert the many hours my team would otherwise spend marking additional scripts into a time-saving – and equally effective – student-directed task. This first connecting dot made its mark in my mind when Professor Lipnevich shared a comic, along with a related quotable quote from a teacher at a workshop:

“I have 120 papers to grade. If I try to provide detailed feedback on an on-going basis, I will lose my mind.”

Adapted from: Workshop 2A, Slide 28 

  

We know that carefully constructed feedback messages on students’ written work can lead to enhanced performance. At the same time, providing high-quality feedback responses is time-consuming and may be impractical for teachers in many situations. So the goal of the research Professor Lipnevich shared in this workshop was a compelling one: to find a feasible approach to providing standardised feedback.

  

Self-feedback generation 

The workshop considered two ways of providing such standardised feedback: rubrics and annotated exemplars. Using these tools, students then generate self-feedback on their work. Self-feedback refers to “the implementation of self-assessment in ways that generate feedback information and processes for students’ own purposes (e.g., achieving educational gains)” (Panadero, Lipnevich & Broadbent, 2019). Professor Lipnevich presented two studies, both involving students submitting a written assignment and then receiving rubrics, exemplars, or a combination of both, with instructions to improve their initial submission based on the tool provided. In the second study, an additional element of training students to use the respective tools was introduced. The results are summarised below:

  

Study 1

  • All three conditions led to improvement in college students’ written work.

  • The rubric condition outperformed the other two conditions.

  • Students focused only on the “best” exemplar when it was available.

  • The rubric condition facilitated better-quality self-feedback and, possibly, reduced cognitive load.

Study 2

  • The rubric group performed better before training; after training, however, the exemplars group did as well on two out of three outcomes.

  • The combined condition also improved with training, but less so than either the rubrics or the exemplars group.

  • All three intervention groups outperformed the control group over time.

Can standardised feedback be as good as personalised feedback? 

In a separate study in Brazil, teachers were found to spend, on average, 12 minutes per essay writing comments. For the above-mentioned workshop participant with 120 scripts to grade, that comes to 1,440 minutes, or 24 hours. In comparison, in the same Brazilian study, producing an annotated exemplar took an average of 20 minutes. That is a whopping 98.6% reduction in time spent! But what of the outcomes? The study concluded: no difference.
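To make the arithmetic explicit (assuming, as the comparison implies, that a single annotated exemplar stands in for individual comments on all 120 scripts):

\[
\frac{120 \times 12\ \text{min} - 20\ \text{min}}{120 \times 12\ \text{min}} = \frac{1440 - 20}{1440} \approx 0.986 = 98.6\%
\]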

Therefore, from Study 1, Study 2, and the Brazil study, the workshop concluded that: 

  • Encouraging students to generate self-feedback using various instructional tools is a viable strategy. 
  • Exemplars, rubrics, and other tools that encourage students’ self-feedback generation work just as well as detailed teacher feedback, and save time.
  • Explicit instruction should be provided on how to use these tools. 
  • There is evidence of transfer to a new task, so learning is taking place. 

I must confess that, exciting as the conclusions were, I did have questions. How exactly would I enact this in the classroom? After all, I had been using annotated exemplars in my classroom for years. And of course I provided explicit instruction – I would painstakingly take students through the exemplar, checking for understanding with appropriately placed mini assessment-for-learning tasks, after which they would produce a piece of writing that would fall short of desired standards. And that would trigger the extensive, detailed feedback I would provide, which would lead to gains in the next draft. If that was already my process, did the conclusions of the study mean I should… just “re-teach” the annotated exemplar after the first draft in lieu of my detailed feedback? But why would that be any different from teaching it the first time? I hypothesised that it had something to do with one or more of the following:

  • There is simply too much to process the first time; the cognitive load is too high and our brains cannot attend to so many demands. Students do get something out of the first teaching (compared to no teaching at all), but that first touch point is just that – a starting point.

  • Writing is thinking. The first written piece is part of the process, as in the whole concept of process writing. Their first draft is therefore an early stage in the learning process of consolidating and making meaning from what they learnt from the first instructional touchpoint. 

  • The act of looking back on work produced, while having the opportunity to compare it side by side with an exemplar, invites one to reconsider the work more closely. Perhaps there is some connection to memory consolidation: this reflective practice acts as a form of spaced repetition that strengthens knowledge and understanding of expectations.

  

Peer feedback 

After the very exciting research on self-feedback, we went on to look at peer feedback. The workshop reaffirmed the power of utilising peer feedback, with recommendations on how to set the stage for peers to give good feedback using the ladder of feedback.

 

My takeaway from the ladder of feedback is that it is fundamentally about setting students up for success. If we want students to succeed in a peer feedback task, we cannot simply tell them to go forth and give each other feedback, with an imploring “and make sure it is good constructive feedback!”. Instead, we need to scaffold and support students, and teach them how to do just that. After all, if we as educators attend multiple courses to learn how to give effective feedback ourselves, it only follows that the same applies when asking students to give feedback. Connecting back to self-feedback generation, I reflected on how I had perhaps been too sloppy in the past when asking students to go forth and write after taking them through an annotated exemplar – perhaps the real gains come from a more thoughtfully designed prompt, beyond a call to “go forth and write and emulate the skills demonstrated by the exemplar”. With that, my takeaway was that for self-feedback to truly work as intended, I must design the activity thoughtfully to scaffold and support students’ own self-feedback generation too.

 

Solving a puzzle 

Equipped with both the knowledge and the mindset that self-feedback and peer feedback were indeed viable alternatives to time-consuming, detailed, personalised feedback, we participants were invited to consolidate our learning with a final activity: to consider our practice and plan how we would implement self- and peer feedback in the courses we teach.

  

In the context of my course, Project Work, first-year Junior College students (typically aged 17) are invited to consider a problem that they are interested in solving – food waste in households, obesity among primary school children, etc. – and, after several months of in-depth research, articulate the problem and propose a solution in a 3000-word written report. At my college, we give students three opportunities to submit a draft report and receive detailed feedback on each draft. Students definitely benefit from the feedback, with clear improvements seen across the drafts. Yet it is, no doubt, laborious and extremely time-consuming. I reflected on the three-draft cycle we currently have in place and came up with a proposal to restructure it.

  

I was thoroughly energised by the session, not to mention having possibly, if unintentionally, found a solution to my puzzle. Fired up by this, I shared my puzzle, along with my proposed curriculum modifications to incorporate self-feedback (and retain my existing peer feedback element) while eliminating two rounds of detailed tutor feedback. In the same breath, I shared my apprehension at “withholding” detailed tutor feedback across the first two drafts. Even though Professor Lipnevich had literally just shown us evidence that outcomes in her studies were comparable, it was still scary to think that I would not be providing detailed feedback until the “last minute”. In return, I received feedback gems from fellow participants, who reminded me that I would continue to have opportunities to see students’ work, along with the self- and peer feedback generated, through the in-class consultations. In addition, one participant suggested an elegant solution: I could get students to submit a consolidation of their self-feedback and their intentions for improving their draft at each stage, thereby allaying this fear while incorporating a bonus self-regulation task.

 

This way, I would have a more targeted opportunity to check in on how they are processing feedback for themselves (and provide feedback on their feedback if necessary!), while levelling them up as holistic learners by more intentionally facilitating self-regulated learning skills. Win-win!

About two months after the workshop, I had the opportunity to “pilot test” this idea with a small-scale assignment – a 500-word reflection piece for which we typically collected three drafts. Instead of marking their first draft, I had students bring their completed first draft to class and took them through a scaffolded activity to generate self-feedback using an exemplar and rubrics. Following that, I had them rewrite a second draft based on their self-feedback, and submit both their original draft, with their self-feedback captured in comment bubbles, and their revised second draft. I then proceeded to mark the second draft as usual – with some apprehension as to the outcome. Fortunately, my worries were unfounded: for a good majority of students, the feedback they gave themselves very much mirrored the type of feedback I would have given, with clear and commendable improvements from the first to the second draft. While this was not true of all students, comparing the submissions with last year’s, just as the Brazil study foretold, I saw the same result: no difference.

So, how can I design a curriculum that helps to manage an increased teaching load without simply adding more hours? It really does appear that self-feedback (and peer feedback) generation is a very workable solution!