About Academic Practice

Academic Practice at Trinity (Trinity Teaching & Learning) strengthens and advances excellence in teaching and learning. We are committed to enhancing student learning through the development and facilitation of research-led approaches to teaching in higher education.

Calm in the storm: Managing online assessment during a pandemic

Dr Neil Dunne is Programme Director for Trinity’s Postgraduate Diploma in Accounting. In this reflection, Neil reaches out to Programme Directors from across the disciplines, inviting them to consider some key learnings from pandemic assessment that they might carry forward into the new normal.

Over the past five years, Trinity’s Postgraduate Diploma in Accounting has launched the accounting career of almost 200 graduates. Upon completion, graduates attain exemptions from professional accounting exams (e.g., Chartered Accountants Ireland, ACCA and others), which helps them navigate the arduous journey towards professional certification. Professional accounting bodies base their accreditation decisions primarily on the content of syllabi and exams. So when COVID shook Ireland in March 2020, my concerns as Programme Director included not only the pivot to online teaching, but also the challenge of assessment in a pandemic.

Semester 2 exams, which were fast approaching, had been written for a closed-book face-to-face context, i.e., the traditional basis for our professional accreditations. I had to consider how we would assess in April 2020 and beyond in a way that would be online and flexible, yet rigorous enough to maintain our extensive accreditation. It was challenging. With hindsight, I have identified four key learnings on how to navigate online assessment, which I hope are useful for all Programme Directors:

Reach out: The accounting academic community recognised the issues and really rallied around each other. I consulted colleagues in the Irish and British Accounting and Finance Associations, both informally and through seminar attendance. Here, I learnt some useful innovations, and also that all Accounting Programme Directors were anxious!

I exchanged frequent correspondence with the professional accounting bodies, who conveyed flexibility and empathy, but perhaps understandably refrained from being too specific on what exactly was required of online assessment. Nonetheless, their documents provided a useful foundation for me to decide on how to assess online.

I frequently checked in with our fantastic faculty, who had to amend their exams for an open-book context, and external examiners, who had to review these amended exams. Similarly, I engaged often with our two class reps, who conveyed the perfectly understandable anxiety and concern of students, and who also played a vital role in communicating my decision-making process to other students via their class WhatsApp groups. I cannot praise the class reps highly enough.

Get the details right: Open-book exams clearly differ from their closed-book counterparts, but the events of March-April 2020 demonstrated this to me vividly. Let’s start with the front page of the exams. We designed a new standardised cover sheet for open-book accounting exams, which combined guidance from professional accounting bodies, some Trinity-specific declarations, and my own ideas. This cover sheet made clear that answers taken verbatim from a textbook, or unsupported by workings, would not be accepted, and that any examples used in answers should be original (rather than textbook-sourced). These regulations allowed students to demonstrate their own original thinking. To minimise student anxiety and uncertainty, we placed this new cover sheet on Blackboard well in advance of the exam session.

Rather than hurriedly arranging online proctoring, which is expensive and often flawed, we aimed for exams where candidate attainment would be unaffected by the presence or absence of invigilation. This ‘prevention is better than cure’ approach to potential plagiarism necessitated rewriting our April 2020 exams. Our litmus test became: if a question was answerable entirely by reference to the textbook, or did not allow the candidate to demonstrate original thought, we modified it or removed it from the exam. Operationalising this philosophy meant migrating from knowledge-based towards applied and scenario-based questions. Additionally, questions now often sought opinions. For example, a question that might previously have read ‘Describe the nature and purpose of alternative performance measures (APMs)’ became:

Whilst reading the ‘Top Accounting’ website, you noticed the following quote:

“APMs are quite misleading for users of accounts, and should be banned”

Required:  Do you agree with this statement? Refer to material you have studied this semester to support your answer.

The first part of the revised question seeks an opinion, thus privileging original thought. The second part requires that opinion to rest on lecture material, thus reducing the ‘Googleability’ factor. In other words, a candidate seeing APMs for the first time during the exam could not just Google ‘APMs’ and update their answer sheet accordingly. In contrast, candidates who had engaged with the material consistently throughout the semester could immediately begin to demonstrate their aptitude.

Speaking of answer sheets… We decided to allow candidates to hand-write rather than type their answers, for two reasons. First, students expressed a strong preference for hand-writing, and were wary of exams mutating into an assessment of Word/Excel proficiency (which aren’t Programme Learning Outcomes) rather than of core accounting concepts and skills. Second, the requirement to hand-write answers allowed us to more accurately assess the provenance of each script.

Students downloaded the exam from Blackboard each day, hand-wrote their answers, and then, using an app recommended by us, scanned and uploaded their answers back to Blackboard. We allowed students 15 extra minutes to deal with any IT issues, i.e., downloading and uploading the exam. It generally worked well, and facilitated stylus-based marking/annotation. I’d also set up a ‘mock’ assignment a few weeks before the exam session for students to submit their answers and thus gain practice using the app. Although time consuming, this helped iron out any IT issues in advance of the exam.

Navigate the aftermath: Even pre-COVID, we all knew that an examiner’s real work begins after the exam: trudging to the Exams Office, collecting scripts, and then allocating several weeks to grading, exam boards, etc. However, the online exams surfaced some unique extra considerations. First, we had to monitor closely for grade inflation. A significant spike in results might have called our entire approach to online exams into question. However, overall results in 2020 and 2021 broadly remained in line with prior years. Second, notwithstanding the mitigation measures described in the previous section, the dreaded spectre of plagiarism still loomed large, and faculty had to expend extra effort in identifying excessive similarity between responses. Unfortunately, in 2020 there were some cases, entailing difficult emails and Zoom calls. In 2021, we had no such cases.

People are understanding! The various stakeholders affected by our assessment decisions were generally very understanding. For instance, students adopted a pragmatic and resilient approach that will serve them well in the accounting profession. College immediately provided invaluable training modules and seminars on online assessment. Trinity Business School accepted that longer assignment-type online exams would not be appropriate, and facilitated our request to hold two-hour online exams instead. Additionally, our Programmes Team provided fantastic support. External examiners willingly reviewed a whole new set of online exams. Professional bodies understood that we were still assessing the same learning outcomes, and indeed any post-COVID accreditation reviews have been successful. Finally, our accounting faculty demonstrated their long-standing concern for both student well-being and the integrity of the accounting profession.

To conclude, COVID has made us all think more carefully about assessment. Although the worst may be behind us, the new normal will also involve online assessment, so hopefully the above points will be useful to colleagues in Trinity and beyond. Professionally, I have certainly been on a journey these past 18 months, and would be delighted to talk through any concerns with colleagues who want to reach out.

An understanding of feedback

Sam Quill reflects critically on his understanding of feedback and how it has developed through engaging with 3rd year students in dermatology and otorhinolaryngology and with academic peers, working together in a Communities of Practice model. 

Context
Undertaking Trinity’s Special Purpose Certificate in Academic Practice has encouraged me to design student-centred learning activities around social constructivist techniques (e.g. Carlisle & Jordan 2005, Ramsden 1996). In clinical education contexts, where minimum standards of care are required for every patient, it concerns me that not all students are equally likely to benefit from peer learning activities: integrating constructivism into my practice has highlighted that not all students are equally prepared to learn from their peers. Some of the issues my students encounter with peer feedback might well be related to Biggs’ theory of ‘academic learners’. Biggs (1999) suggests that students who thrive in higher-level education without much teacher direction, where student-led learning activities like peer feedback pervade, already possess the skills to reflect on their learning.

When I asked my students what they thought about feedback, I was interested to discover that learners who found student-led approaches more difficult tended to focus their criticisms on peer feedback on patient case presentations. They indicated that they preferred instruction on the “correct” answer from subject experts, rather than learning through peer dialogue and shared understanding. They felt that peer feedback often pointed to what they had done ‘wrong’ rather than offering ‘feed-forward’ action points. They also expressed negative emotions towards what they perceived to be critical feedback. I believe this has dissuaded the students from providing honest evaluations of others’ work, in an attempt not to hurt each other’s feelings.

The point of feedback?

From discussions with students, it seemed likely that some students were unclear on the purpose of feedback and therefore unsure of what to expect and how to ‘do’ feedback appropriately. Price et al. (2012) acknowledge the lack of clear consensus on the definition of feedback but suggest that it can serve different roles in learning. For example, on the behaviourist side of the spectrum lie the corrective and reinforcement roles of feedback; on the constructivist side, feedback has a more dialogic, future-focused function. A common mistake in higher education practice involves asking students to reflect “without necessary scaffolding or clear expectation”. Sharing my experiences with colleagues and peers undertaking the Reflecting and Evaluating your Teaching in Higher Education module revealed that this was a common misstep. For me, peer presentations and ‘formally’ structured discussions with colleagues reinforced the benefits of combining individual and collective reflection to work on common challenges.


Peer-driven reflection has prompted me to acknowledge the need for a shared understanding of feedback between me and my students – an insight that has helped my students to embrace the introduction of metacognitive skills into their curriculum.

In both teaching and clinical practice, I recognise that reflective skills and pedagogical literacy are particularly important in a paradigm where peer learning underpins postgraduate clinical professional development. Ryan & Ryan remind us that “deep reflective skills can be taught, however they require development and practice over time.” By reflecting actively on the process of ‘unfurling’ the concept of the scholarship of teaching, as outlined by Kreber & Cranton (2000), I can see how my social-constructivist learning activities could be adapted to support better learning for more students. I believe our senior faculty need to plan for the integration of reflective learning skills at all levels of medical education, especially in the earlier, pre-clinical years – but this approach needs to be adopted into daily educational practice, not discussed solely at high-level curriculum committees.

Next steps?

Looking ahead, I want to build on the areas for improvement identified in the Johari window below, which has encouraged me to articulate these in response to peer commentary. Specifically, I want to take a more scholarly approach to evidencing the value of change in my teaching activities at TCD. I would love to see these new reflective feedback skills result in a generation of doctors who intuitively “reflect-in-action”, providing responsive care to patients in need, and who also have the ability to “reflect-on-action” and improve medical practice and medical education in the future. Both self-reflection and peer feedback have been essential in developing my Johari window. Would you consider doing a similar exercise for your own context? The links below offer some sample resources to try for yourself!

Reflective learning resources:

Can students ever really be partners in assessment?

Ben Ryan, a 3rd year BESS student at Trinity College and member of the ‘Enhancing Digital Teaching & Learning’ (EDTL) project with the IUA, discusses key points in relation to students as partners in assessment.

Students are more than capable of being partners in assessment. We have so much experience of different assessment types, and what has or hasn’t worked for us in the past. We know what kind of assessments we find interesting and challenging. Involving students in the assessment process can help us be more engaged in the module and get us to develop key skills like communication, teamwork and compromise.

Getting students involved in the assessment process gives us agency and independence and lets us take control of our learning. When we’re given the opportunity to influence aspects of a module’s assessments, I’ve found that other students and I were much more engaged with that module and generally had a better understanding of what was required of us in the assessments. I believe students can be partners in designing assessments as we know what assessments we prefer, and which are more beneficial to our learning – and which ones aren’t worth putting as much effort into.

Getting students involved could be as simple as running polls or having discussions in class or on discussion boards to agree the type of assessment (individual versus group project, essay versus report) and how teams and groups are selected. I personally think students should be involved in assessments every step of the way, from creation onwards. I think it leads to better engagement with lecture and module content and can give students a better understanding of the assessment process. We can clearly tell when an assessment is just recycled year in, year out, and we lose interest in the assessment and the module content as a result. We know this isn’t always possible – particularly with very large classes – but assessments should at least match the current version of a module!

Students-as-Partners (‘SaP’) models aren’t always used well. Sometimes it can go too far by giving students too much freedom to decide their assessment. Students can be easily overwhelmed by a lack of guidance and support. In one case, I had to write an essay on any topic relating to one of my modules. I thought this was a poor use of the SaP model: it was really broad and I found myself being overwhelmed and not knowing where to start.

Without setting clear boundaries for the length of time devoted to discussion around assessment, co-creation conversations can drag on and take away from time spent engaging with content in class. Before having these discussions with students, staff should clearly outline how long will be spent discussing assessments and what they hope to achieve from the conversation. I also think staff need to be sure that discussion includes all student voices, not just the most vocal. Any discussion on assessment co-creation should probably include a channel for students to express their views anonymously or privately (google form, private email to lecturer). Sometimes it is better to just try out the process of assessment co-creation. You will very quickly see what works and what doesn’t make sense for you – and also what does and doesn’t make sense for your students.

If staff are considering putting a SaP model into their assessments, my main advice would be to just go for it. What’s stopping you from using SaP in your assessment approaches? Why?

Towards a Culture of Dishonesty?

Dr Ciara O’Farrell, Head of Academic Practice at Trinity College, discusses key issues surrounding academic integrity and plagiarism in higher education, and highlights the importance of reaching a shared understanding of both.

Is Andy Warhol’s iconic painting of a Campbell’s soup can satire, or copying? Is Madonna’s ‘Hollywood’ video a creative homage to French photographer Bourdin, or was she striking someone else’s pose? Many years ago, I attended a teaching & learning conference and I distinctly remember a workshop where Perry Share (IT Sligo) discussed these images, unpacking their relationship to popular culture and framing the notion of plagiarism in intertextuality theory. Fresh from my home discipline of English (where T.S. Eliot once noted, ‘immature poets imitate; mature poets steal’), I found the workshop challenged my preconceived perceptions of plagiarism, prompting me to reconsider my attitudes to ghost writing, for example.

In March 2021, Forbes published an article on US giant “Chegg”, currently the most valuable EdTech company in America, with stock prices tripling since the pandemic. Indeed, ‘to chegg’ is fast emerging as a verb. Chegg describes its service as ‘connecting college students to test answers on demand.’ Ask an expert a question, the Chegg Study website boasts, and you will have an answer back in ‘as little as 30 minutes.’ However, according to Forbes, which interviewed 52 students who use the Chegg Study app, ‘all but 4 admitted they use the site to cheat.’

Cheating is nothing new, but there is concern among some academics that the sudden move to open book assessment since Covid-19 may have made it more prevalent. We know from the research that learning achieved through open book assessment is valuable to students and employers alike, and I doubt that many students or academics want to see a lock, stock, and barrel return to the closed book, timed written exams which dominated university assessment until recently. So how can we prevent cheating without abandoning open book assessment?

Of course, students have a responsibility not to cheat, but for students transitioning into third level from a world where plagiarism is becoming increasingly normalised, the type of online student ‘training’ many institutions currently have in place only goes so far and is often little more than a box-ticking exercise. Plagiarism policies help but are challenging to implement. Assessors too can mitigate plagiarism, but this necessitates an assessment re-design that requires students to apply their knowledge rather than regurgitate it, and to synthesise their ideas with those of others rather than ‘steal’ them. This also requires assessors to shift their perceptions of the purposes of assessment and to view it as something that not only ‘tests’ knowledge but acts as a vehicle for learning.

It is time for third level institutions to hold sincere conversations with students about the ‘why’ of plagiarism and to frame these discussions from historical, ethical, legal, cultural, and pedagogical perspectives. Until we reach a shared understanding with students of what plagiarism is and convince them of the importance of academic integrity, we risk a culture of dishonesty taking hold.

Formative digital assessment

Jonny Johnston, Academic Developer, writes about classroom assessment techniques (‘CATs’) in digital teaching and learning.

Higher Education as an endeavour (and as an industry) has spent the last 30 years worshipping at the cult of assessment: are our assessment practices fit for purpose? What are we assessing? Are we assessing for learning, as learning, assessing to take stock of learning, or assessing to certify learning and award degrees? Why do we do assessment the way we do, and how do we see it changing? And, topically – where does digital fit into the debate?

When we talk about digital transformation in assessment, we quite often put the focus on high-stakes summative assessment practices: the shift towards open-book cultures, or the potential privacy invasions of remote proctoring. Yet our use of classroom assessment techniques (often referred to as assessment-for-learning strategies) is often what gives us a sense of whether students have tuned in or just turned up – and whether or not they’re engaged.

In a face-to-face environment like a lecture hall or seminar room, we can judge from students’ expressions whether or not they’re with us – and break up teaching activity with think-pair-share activities, solo minute papers, group discussions, and a whole raft of collaborative activities. We change our delivery, repeat and clarify, and highlight concepts based on how students are responding. Shifting these activities into the digital space isn’t always straightforward – particularly if we’re not aware of just how often we do these things in real time when we’re teaching in person.

Some of the things we do in person can work better online, particularly for large group teaching. VLE tools and videoconferencing apps like Zoom can support anonymous annotation on shared slide decks, encouraging learners to ask questions at low risk to themselves. Structured engagement in breakout rooms can replicate the ‘talk with the people on either side of you’ think-pair-share – and asking students to report back their ‘group’ answers is lower stakes for a learner than sharing their individual answer in front of 300 peers.

Polling tools are quick and easy to set up on the fly and can be used in almost any situation to give you a sense of where your students are. Wordclouds generated in response to ‘muddiest point’ or ‘minute paper’ style prompts (using tools like TurningPoint or Menti) can be used for responsive plenary activities and let you really quickly and easily see what students have taken away from the session – or haven’t, as the case may be! ‘Post-it’ style ideation, brainstorming, and ‘card-sort’ activities on virtual pegboards can be done with tools like Flinga or Padlet and can be used by groups or individuals.

We don’t necessarily think explicitly about formative assessment in action in the digital classroom. I think this is a major oversight: the vast majority of assessment we do as educators is on-the-hoof and formative, particularly when we’re teaching in live time. Our digital teaching is evolving rapidly. Is our digital assessment evolving to match?