An understanding of feedback

Sam Quill reflects critically on his understanding of feedback and how it has developed through engaging with 3rd year students in dermatology and otorhinolaryngology and with academic peers, working together in a Communities of Practice model. 

Context
Undertaking Trinity’s Special Purpose Certificate in Academic Practice has encouraged me to design student-centred learning activities around social constructivist techniques (e.g. Carlisle & Jordan 2005, Ramsden 1996). In clinical education contexts, where minimum standards of care are required for every patient, it concerns me that not all students are equally likely to benefit from peer learning: integrating constructivism into my practice has highlighted that not all students are equally prepared to learn from their peers. Some of the issues my students encounter with peer feedback may well relate to Biggs’ notion of ‘academic learners’. Biggs (1999) suggests that students who thrive in higher education without much teacher direction, where student-led activities like peer feedback pervade, already possess the skills to reflect on their learning.

When I asked my students what they thought about feedback, I was interested to discover that learners who found student-led approaches more difficult tended to focus their criticisms on peer feedback on patient case presentations. They indicated that they preferred instruction on the “correct” answer from subject experts, rather than learning through peer dialogue and shared understanding. They felt that peer feedback often pointed to what they had done ‘wrong’ rather than offering ‘feed-forward’ action points. They also expressed negative emotions towards what they perceived to be critical feedback. I believe this has dissuaded the students from providing honest evaluations of others’ work, in an attempt not to hurt each other’s feelings.

The point of feedback?

From discussions with students it seemed likely that some students were unclear on the purpose of feedback, and therefore unsure of what to expect and how to ‘do’ feedback appropriately. Price et al. (2012) acknowledge the lack of clear consensus on the definition of feedback but suggest that it can serve different roles in learning. For example, on the behaviourist side of the spectrum lie the corrective and reinforcement roles of feedback; on the constructivist side, feedback has a more dialogic, future-focused function. A common mistake in higher education practice involves asking students to reflect “without necessary scaffolding or clear expectation”. Sharing my experiences with colleagues and peers undertaking the Reflecting and Evaluating your Teaching in Higher Education module revealed that this was a common misstep. For me, peer presentations and ‘formally’ structured discussion with colleagues reinforced the benefits of combining individual and collective reflection to work on common challenges.


Peer-driven reflection has prompted me to acknowledge the need for a shared understanding of feedback between me and my students – an insight that has helped my students to embrace the introduction of metacognitive skills into their curriculum.

In both teaching and clinical practice, I recognise that reflective skills and pedagogical literacy are particularly important in a paradigm where peer learning underpins postgraduate clinical professional development. Ryan & Ryan remind us that “deep reflective skills can be taught, however they require development and practice over time.” By reflecting actively on the process of ‘unfurling’ the concept of scholarship of teaching, as outlined by Kreber & Cranton (2000), I can see how my social-constructivist learning activities could be adapted to support better learning for more students. I believe our senior faculty need to plan for the integration of reflective learning skills at all levels of medical education, especially in the earlier, pre-clinical years – but this approach needs to be adopted into daily educational practice, not discussed solely at high-level curriculum committees.

Next steps?

Looking ahead, I want to build on the areas for improvement identified in the Johari window below, articulating these in response to peer commentary. Specifically, I want to take a more scholarly approach to evidencing the value of change in my teaching activities at TCD. I would love to see these new reflective feedback skills resulting in a generation of doctors who intuitively “reflect-in-action”, providing responsive care to patients in need, and who also have the ability to “reflect-on-action” and improve medical practice and medical education in the future. Both self-reflection and peer feedback have been essential in developing my Johari window. Would you consider doing a similar exercise for your own context? The links below offer some sample resources to try for yourself!

Reflective learning resources:

Can students ever really be partners in assessment?

Ben Ryan, a 3rd year BESS student at Trinity College and member of the ‘Enhancing Digital Teaching & Learning’ (EDTL) project with the IUA, discusses key points in relation to students as partners in assessment.

Students are more than capable of being partners in assessment. We have so much experience of different assessment types, and what has or hasn’t worked for us in the past. We know what kind of assessments we find interesting and challenging. Involving students in the assessment process can help us be more engaged in the module and get us to develop key skills like communication, teamwork and compromise.

Getting students involved in the assessment process gives us agency and independence and lets us take control of our learning. When we’re given the opportunity to influence aspects of a module’s assessments, I’ve found that other students and I were much more engaged with that module and generally had a better understanding of what was required of us in the assessments. I believe students can be partners in designing assessments, as we know which assessments we prefer and which are more beneficial to our learning – and which ones aren’t worth putting as much effort into.

Getting students involved could be as simple as running polls or having discussions in class or on boards to agree on the type of assessment (individual versus group project, essay versus report) and how teams and groups are selected. I personally think students should be involved in assessments every step of the way, from creation onwards. I think it leads to better engagement with lecture and module content and can give students a better understanding of the assessment process. We can clearly tell when an assessment is just recycled year in, year out, and we lose interest in the assessment and the module content as a result. We know this isn’t always possible – particularly with very large classes – but assessments should at least match the current version of a module!

Students-as-Partners (‘SaP’) models aren’t always used well. Sometimes it can go too far by giving students too much freedom to decide their assessment. Students can be easily overwhelmed by a lack of guidance and support. In one case, I had to write an essay on any topic relating to one of my modules. I thought this was a poor use of the SaP model: it was really broad and I found myself being overwhelmed and not knowing where to start.

Without setting clear boundaries on the time devoted to discussion around assessment, co-creation conversations can drag on and take away from time spent engaging with content in class. Before having these discussions with students, staff should clearly outline how long will be spent discussing assessments and what they hope to achieve from the conversation. I also think staff need to be sure that discussion includes all student voices, not just the most vocal. Any discussion on assessment co-creation should probably include a channel for students to express their views anonymously or privately (a Google Form, or a private email to the lecturer). Sometimes it is better to just try out the process of assessment co-creation. You will very quickly see what works and what doesn’t make sense for you – and also what does and doesn’t make sense for your students.

If staff are considering putting a SaP model into their assessments, my main advice would be to just go for it. What’s stopping you from using SaP in your assessment approaches? Why?