About Academic Practice

Academic Practice at Trinity (Trinity Teaching & Learning) strengthens and advances excellence in teaching and learning. We are committed to enhancing student learning through the development and facilitation of research-led approaches to teaching in higher education.

What is a digital assessment? Exploring the digital vs non-digital divide 

Dr. Pauline Rooney, Academic Developer at Trinity College Dublin, poses the question: can we really categorise assessments as digital vs non-digital, and is it a useful distinction at all?

Digital technologies provide new ways of designing, facilitating and managing assessment processes. Various terms are used for this, including “Online Assessment”, “Technology-Enhanced or Enabled Assessment” and “E-Assessment”, and the lexicon around digital assessment is constantly evolving as new practices and understandings emerge.

At Trinity, we use the term “Digital Assessment”. But what do we mean by this? And what does this term mean to you?

For many people, digital assessment is equated with online assessment, a term which often conjures up images of online MCQ tests, virtual simulations, blogs, wikis and proctored online exams. These assessment modes are made possible by recent advances in digital technologies, and are often defined by their use of technology. Can you imagine how one might create a blog without a blogging tool?!

I would argue that the term “digital assessment” encapsulates far more than assessments conducted online, or assessments which are defined by their use of technology.  

Let’s take the traditional essay, for example. Is this digital or non-digital? Non-digital, I hear you say! However, these days it is rare to research, write and submit an essay without the use of digital technologies at some point in the process. Many essays are now disseminated and collected within virtual learning environments. Students typically write their essays using laptops and word-processing software, having undertaken their research online. Their lecturers may also have given digital feedback in the form of text-based annotations/comments, or even audio or video recordings.

Still non-digital do you think?  

What about a performance? Take, for example, the Drama student as they enact a theatre performance with their class peers. In pre-Covid times, this was typically a live, in-person affair, with the actors performing to a reactive live audience in a constant cyclic interchange of energies. With the Covid-19 pandemic, many such performances moved online, designed, rehearsed and performed in isolation. See, for example, the wonderful Lockdown Shakespeare produced in July 2020 by final year acting students at Trinity College’s Lir Academy. For me, this constitutes a wonderful example of digital assessment, where digital technologies are used so creatively to enable new forms of performance and assessment processes.  

Digital technologies now permeate our lives, for better or worse. They have changed how we access and consume information, how we communicate with our peers, and how we collaborate; some would argue they are even changing the way that we behave and think (see, for example, Carr 2010). Against this backdrop, the way in which our students engage with most assessment processes is now a complex fusion of analogue and digital technologies, spaces, activities and practices (Fawns 2020).

With this in mind, can we really categorise assessments as digital vs non-digital? Is it a useful distinction at all? And if yes, what does it mean to you?   

Can we learn from recorded lectures when they fly by at double speed?

Caitríona Ní Shé of Academic Practice reflects below on the use of 2x speed in replaying recorded lectures.

I have long been aware of the value of playing instructional videos (how-tos, etc.) at 2x speed, but it was only recently that I considered the impact that this might have on higher education. With the onset of the pandemic, and the move to online teaching and learning, many educators either pre-recorded lecture material or recorded their live online lectures, and then made them available to students on the institutional Virtual Learning Environment (VLE).

What if students continued habits that they had developed when engaging with everyday web content and engaged with their lecture materials at 2x speed? Would they miss salient points from the lecture? Would they retain the knowledge imparted? Or would they fail to achieve the desired learning outcomes?

As the pandemic arrived, my own children in higher education moved from physically attending live lectures to accessing online and recorded lectures. Their lectures were available to download and watch at a time that suited them. I observed suspiciously from the side-lines, as they watched and listened to recorded lectures at 2x, sometimes pausing and replaying, but generally flying through the lectures as fast as they could! And it turns out that they were not alone. Anecdotally, students (including those attending Trinity) reported that they regularly watched recorded lectures at 2x speed.

A recent UCLA study into the effects of video speed on comprehension was the topic of an RTE radio piece on the Drivetime programme early in the new year [1]. An incredulous host, Cormac Ó hEadhra, cast a sceptical eye on the whole notion, wondering ‘what type of a course are you doing if you can listen at double speed and take… specific details in?’ However, his co-host, Sarah McInerny, was not so sceptical and suggested that playing back at higher speeds may force students to concentrate harder, and ultimately save them time. Megan O’Connor, Deputy President of the USI and a guest on the show, said that while she never listens at double speed herself, she knows people who do and wasn’t against the idea. Megan suggested that students should be allowed to adapt their learning techniques to suit their personal needs, and that all lectures should be recorded and made available as standard practice.

Recognising that there are many variables to consider when evaluating the use of 2x recorded lectures, UCLA conducted a series of experiments to determine the immediate and delayed comprehension of students who watched recorded lectures at varying speeds (1x, 1.5x, 2x and 2.5x) [2]. Between 100 and 230 students participated in each experiment and sat the subsequent comprehension tests. A control group of 123 students was asked to complete the same tests without watching the videos. Even though previous work in this area had reported mixed results, this study found that immediate and delayed comprehension was not affected by watching videos at either 1.5x or 2x. Thus, the authors suggest that students may put the time saved by watching recorded lectures at 2x to educationally beneficial use. Additionally, students who watched the videos twice at 2x speed, with a week’s delay between sessions, performed better in the comprehension tests than those who had watched the recorded lecture once at 1x, demonstrating the strategic value of 2x. Finally, 85% of the control group reported normally watching their lectures at speeds greater than 1x.
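To make the arithmetic behind that last comparison concrete, here is a minimal illustrative sketch; the lecture lengths are hypothetical, not figures from the study.

```python
# Illustrative arithmetic only: hypothetical lecture lengths, not study data.
lectures_minutes = [50, 50, 50, 50]  # four hypothetical 50-minute recorded lectures

once_at_1x = sum(lectures_minutes)                 # baseline viewing time
once_at_2x = sum(m / 2 for m in lectures_minutes)  # double speed halves the time
twice_at_2x = 2 * once_at_2x                       # two spaced 2x viewings

print(f"Once at 1x:  {once_at_1x:.0f} min")
print(f"Once at 2x:  {once_at_2x:.0f} min (saves {once_at_1x - once_at_2x:.0f} min)")
print(f"Twice at 2x: {twice_at_2x:.0f} min (no more time than a single 1x viewing)")
```

The last line is the striking point: two spaced viewings at 2x cost no more time than one viewing at 1x, yet were associated with better comprehension in the study.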

Therefore, the answer to the question ‘Can we learn from recorded lectures when they fly by at double speed?’ is Yes, we can!

I came from a position of considering that using 2x on a ‘how to…’ video is vastly different from listening to a lecturer explain the concepts of group theory or differential equations, where surely you need to listen at 1x to catch every syllable the lecturer utters. I am now convinced otherwise: using 2x may indeed be a strategic learning technique that saves time for students, and it supports the request by students that all lectures be recorded and made available on VLEs.

And loath though I am to say this, my kids have been correct all along: 2x works, even in the context of recorded lectures in higher education.


[1] Ó hEadhra, C. & McInerny, S. (2022, January 18). Fast Forwarding your lectures could actually be good for your learning. [Radio Broadcast]. https://www.rte.ie/radio/radio1/clips/22052588/

[2] Murphy, D. H., Hoover, K. M., Agadzhanyan, K., Kuehn, J. C., & Castel, A. D. (2022). Learning in double time: The effect of lecture video speed on immediate and delayed comprehension. Applied Cognitive Psychology, 36(1), 69-82.


Podcasts: a not-so-quiet revolution in academic practice?

Jonny Johnston, Academic Practice, writes here about using podcasts to support teaching and learning.

Podcasts – effectively talk radio on demand – have been a feature of the new media landscape for many years now, often used as a pleasant distraction during commuting. Many academics, including those at Trinity, use them for public engagement and to disseminate their research outside of the university (e.g. Prof Michelle D’Arcy’s ‘Common Threads’).

Inside the academy, podcasts are also increasingly playing a role in support of teaching and learning in and across the disciplines (Gribbins, 2007; Lonn & Teasley, 2009; Prakash, Muthuraman, & Anand, 2017; Edirisingha, Salmon, & Fothergill, 2007; Oslawski-Lopez & Kordsmeier, 2021). Indeed, in the early stages of the pandemic, lecture capture software highlighted the possibilities of repurposing lecture materials as podcasts for colleagues who, perhaps, might not otherwise have thought about using podcasts in their teaching practice.

Podcasts in practice

Podcasts can be used in teaching both in a light-touch way (e.g. highlighting key ideas/challenges, introducing new themes) and as a more substantive academic resource (e.g. for revision, or spotlighting a specialism). Using podcasts to support academic teaching can be as straightforward as recording video or audio files and sharing them through Blackboard – and as simple as taking a red pen to an existing lecture script and deciding which strands of the lecture go best together in 15-20 minute standalone segments (Cosmini et al., 2017).

Podcasts don’t need to be used as a ‘deep’ teaching tool. They can also support community development and broaden engagement with peers working in similar areas across an institutional or sectoral context. Our own Academic Practice podcast, ‘Coffee & Cobblestones’, first launched during the Covid-19 pandemic, has been used to broaden engagement with our Centre, to connect our institutional educational development activity to national enhancement themes, to strengthen linkages with peer Centres across the sector, and to showcase excellent practice at institutional level.

No matter how you intend to use them, one key challenge with podcasts is that listeners often treat them as background ‘filler’ – just like talk radio. Effective podcast use in teaching is like effective lecturing practice: for it to be more than content delivered at a student, it is important to articulate what you expect students to ‘do’ with the podcast content. Flagging specific podcast resources to learners explicitly as learning supports can encourage students to engage with them and listen more than once to a particular episode – perhaps making notes, coming up with questions, and summarising the content of an episode in their own words.

Finding listeners and getting them to tune in

Without a clear strategy for reaching a particular audience, podcasts can risk ‘broadcasting into the void’. As such, there is a clear need to identify appropriate target audiences and implement communication strategies to support their use (e.g. coherent Twitter campaigns, advertising via institutional mechanisms). If you’re thinking about using podcasts, these are good questions to get started with:

  • Who do you want to listen (i.e. who is the audience?), and how does that shape your podcast?
  • What do you want your podcast to include (i.e. what is the content)?
  • Where will you host the podcast (e.g. on an open platform, or inside the VLE)?
  • If you’re hosting your podcast externally, what are appropriate keywords for people to find you (i.e. how identifiable is your podcast)?
  • Who might you want to have as ‘guest’ speakers (e.g. to vary who your listeners hear)?
  • What are the key messages you want people to take away from each episode of the podcast?

Some suggested podcasts to get started with:

  1. Coffee & Cobblestones – Academic Practice, Trinity College Dublin
  2. Teaching Matters – University of Edinburgh
  3. Inside Education – A Podcast for Educators Interested in Teaching. Sean Delaney, Marino Institute of Education
  4. Talking & Learning – Arena Centre for Teaching and Learning, UCL
  5. Dead Ideas in Teaching & Learning – Columbia University Centre for Teaching & Learning
  6. Leading Lines – Vanderbilt University Centre for Teaching and Learning

References:

  1. Gribbins, M. (2007). The perceived usefulness of podcasting in higher education: A survey of students’ attitudes and intention to use. MWAIS 2007 Proceedings, 6. http://aisel.aisnet.org/mwais2007/
  2. Lonn, S., & Teasley, S. D. (2009). Podcasting in higher education: What are the implications for teaching and learning? Internet and Higher Education, 12(2), 88–92.
  3. Prakash, S. S., Muthuraman, N., & Anand, R. (2017). Short-duration podcasts as a supplementary learning tool: Perceptions of medical students and impact on assessment performance. BMC Medical Education, 17(1), 167. doi: 10.1186/s12909-017-1001-5
  4. Edirisingha, P., Salmon, G., & Fothergill, J. (2007). Profcasting – a pilot study and guidelines for integrating podcasts in a blended learning environment. In U. Bernath & A. Sangrà (Eds.), Research on competence development in online distance education and e-learning (pp. 127–137). Oldenburg: BIS-Verlag.
  5. Oslawski-Lopez, J., & Kordsmeier, G. (2021). “Being able to listen makes me feel more engaged”: Best practices for using podcasts as readings. Teaching Sociology, 49(4), 335–347. doi: 10.1177/0092055X211017197
  6. Cosmini, M., Cho, D., Liley, F., & Espinoza, J. (2017). Podcasting in medical education: How long should an educational podcast be? Journal of Graduate Medical Education, 9(3), 388–389.
  7. Jalali, A., Leddy, J., Gauthier, M., et al. (2011). Use of podcasting as an innovative asynchronous e-learning tool for students. US-China Education Review, 6, 741–748. https://pdfs.semanticscholar.org/98ba/6cc35942469946b52cef044d9cb050fe8329.pdf

Now or Not Now: Understanding ‘time blindness’ in students

Dr Ciara O’Farrell is Head of Academic Practice at Trinity College Dublin. In this post she explores ‘time blindness’ and suggests that even a micro change of thinking can go a long way in supporting students affected by time blindness.

Despite almost 20 years in the discipline of Teaching and Learning, I have only recently heard of the concept of time blindness – yet time blindness is as real as colour blindness. We all know that time management is an important skill for students to master in Higher Education. They have to learn to multitask; meet numerous deadlines; follow complex scheduling; navigate transitions; create and follow timelines for projects; plan and execute multiple assessments (often due at the same time); and study for future exams. But ‘managing’ time necessitates ‘understanding’ time, and time blindness precludes that understanding. Put simply, you can’t manage what you aren’t aware of.

Time is both a biological and a mental construct and we have probably all experienced time distortion at some stage, such as losing track of it when we are enjoying ourselves or missing an appointment for no obvious reason. Contexts such as sleep deprivation, grief, anxiety or even the recent pandemic lockdowns can also alter our perception of time, though usually temporarily.

Time blindness is a more permanent distortion of time that can profoundly impact the lives of those affected by it, disrupting their very ‘fabric of time.’[1] For these individuals, time is neither linear nor tangible and, struggling to perceive time outside of the present moment, they experience time as either ‘now’ or ‘not now’. Students with time blindness often underestimate the time it takes to get somewhere or do something and have difficulties understanding calendars or allocating time to exam questions or assignments; conversely, they can ‘hyperfocus’ and get totally lost in time. They typically feel they have an infinite amount of time, until they don’t, so they don’t feel deadlines creeping up on them and regularly miss them. Exams? Sure they’re always an eternity away – until they’re not.

Time blindness often affects individuals with Attention-Deficit/Hyperactivity Disorder (ADHD) and a recent study attests that up to 16% of college students worldwide have ADHD.[2]  ADHD is a deficit of the brain’s ‘executive systems’, those neuropsychological processes that enable us to plan, prepare for and reach future goals through managing time, planning, focusing attention, following instructions and regulating our behaviour. Executive Function (EF) stimulates our brains to engage in goal-directed, future-orientated actions but time blindness severely disables this motivation to set and action appropriate goals, causing a significant problem for students at typical college age who are expected to learn independently and practice self-regulation, often for the first time in their lives.

As we reach our 20s, our ‘time horizon’ (or ability to look into the future to plan ahead) extends, but where a neurotypical student of this age is usually able to see and plan a few months ahead, the time horizon of a student with ADHD can be significantly shorter, often extending to just a week or two. Effectively blind to the future and trapped in the ‘now’, these students are likely to be motivated only by short-term goals and deadlines – the further out the event, the harder it is to manage. ADHD expert Russell Barkley recognises the impact of this and says that those affected by time blindness “need to repeatedly practice […] seeing the future, saying the future, feeling the future, and playing with the future so as to effectively ‘plan and go’ toward that future.” However, without a concept of time this is incredibly challenging.

How can Institutions and educators help?

Implementing wholescale institutional educational strategies often requires cultural change, but even a micro change of thinking can have an important impact on our students affected by time blindness. What a paradigm shift to realise that these students aren’t just lazy, unreliable, or inconsiderate of the expectations of others! Where their actions (or inactions) might previously have led us to think they were unmotivated or lacking in willpower or self-regulation, instead we can understand why their actions don’t always align with their intentions. We can understand that time blindness is not a behaviour or a choice – it’s a symptom.

Many universities in Ireland offer ‘reasonable accommodations’ to students with ADHD, and disability services, such as ours in Trinity, often suggest useful strategies for staff to support students with ADHD.[3] Take assessment: for students with time blindness, managing assessment deadlines is often a significant challenge, yet concurrent assessment deadlines are a common feature of our higher education ‘modularised’ system – like waiting for a bus, there’s nothing for ages and then a convoy of them comes at once. Extending deadlines can help students with ADHD, but sometimes this just creates a further bottleneck down the road, so extended deadlines tend to work best when combined with strategies to plan and manage time. Staggered deadlines, on the other hand, help all students.

Designing university teaching, learning and assessment that meets the needs of all students is complex, although good design is often universally beneficial. A programme-focused approach to assessment cultivates conditions for effective student learning by attending to issues such as timing, sequence, or amount of assessment. The Trinity Assessment Framework recommends mapping and reviewing assessment practices across a year, subject or programme so that the assessment diet can be viewed as a whole and planned accordingly. This benefits all.
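As a toy sketch of what such mapping can surface, the snippet below groups assessment deadlines by teaching week to flag clustering; the module names, weeks and threshold are all invented for illustration, not part of the Trinity Assessment Framework.

```python
# Hypothetical sketch: mapping a programme's assessment deadlines by teaching week
# to spot clusters that might be staggered. All module names and weeks are invented.
from collections import defaultdict

deadlines = [
    ("Pharmacology essay", 10),
    ("Statistics project", 10),
    ("Clinical skills OSCE", 10),
    ("Ethics presentation", 12),
]

by_week = defaultdict(list)
for assessment, week in deadlines:
    by_week[week].append(assessment)

for week in sorted(by_week):
    flag = "  <-- cluster: consider staggering" if len(by_week[week]) >= 3 else ""
    print(f"Week {week}: {', '.join(by_week[week])}{flag}")
```

Even a simple map like this makes the ‘convoy of buses’ problem visible at a glance, which is the point of viewing the assessment diet as a whole.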

We don’t judge someone with colour blindness as not bothering to differentiate green from blue. They are blind to colour. Similarly, students with time blindness don’t achieve their study goals because they aren’t bothered. They are blind to time. I believe educators like me can help those affected by time blindness by even acknowledging its existence because, like colour blindness, time blindness ‘just is’.


[1] Barkley, R. (2009) Time Blindness. CADDAC Conference. https://www.youtube.com/watch?v=Uvka7fMyTkM [accessed 15 March 2022]

[2] Mak, A. D. P., et al (2021). ADHD Comorbidity Structure and Impairment: Results of the WHO World Mental Health Surveys International College Student Project (WMH-ICS). Journal of Attention Disorders. https://doi.org/10.1177/10870547211057275

[3] https://www.tcd.ie/disability/teaching-info/awareness-info/adhd.php

Tracking a lifetime of learning with ePortfolio

Laine Abria, a 4th year Pharmacy student, discusses her experience with ePortfolio, how it developed throughout her internship with Academic Practice, and how she intends to use ePortfolios going forward.

When someone says the word “portfolio”, the image that comes to my mind is a big folder of detailed sketches and colourful paintings or a collection of financial documents on assets and stocks. As a pharmacy student, it’s not surprising that these definitions don’t really appeal to me. So, in the first year of my degree, when I was instructed to keep an electronic portfolio or ‘ePortfolio’ with little guidance, I didn’t really understand its purpose.

Fast forward three years into my degree, and my knowledge of ePortfolio was still limited, despite my having had to keep one since first year. It was only during my internship with Academic Practice last semester that my knowledge of ePortfolio quickly skyrocketed: through reading articles about them, speaking with experts, helping organise an event on the topic, and creating an ePortfolio on Google Sites with my reflections and artefacts from the internship. I now think I have a good understanding of the purpose of ePortfolios, how to make them, their role in assessment and how they may benefit me both as a student and as a future pharmacist (fingers crossed). My biggest takeaway from engaging with ePortfolio is that not only can they be used to show evidence of learning, but they are also useful tools for tracking my learning and identifying which methods of learning suit me best.

But it’s one thing to know what is required; it’s another to be able to do it. My ePortfolio is in no way perfect and I still struggle with choosing artefacts and reflecting on my learning to integrate my knowledge. I think part of this is because of my previous experience, where my ePortfolio on PebblePad was merely a place where I would dump 5 reflective cycles at the end of the year to gain a satisfactory mark on one measly component of a module filled with countless OSCEs, workshops, and CAs – more ‘traditional’ assessments that seemingly hold more value.

However, I haven’t given up on ePortfolios as I have good reasons to keep them. Not only are all registered pharmacists in Ireland expected to keep one and submit it for review every five years, but as I wrap up the final years of my degree, I feel like I am entering a new stage of being a ‘learner’. In this stage, I know I’ll have to be more independent and responsible in terms of learning, as I prepare to enter the ‘real’ world and put everything I’ve learnt in college into practice.

I’m curious to know about others’ experiences with keeping ePortfolio and whether they have been beneficial for them, especially at points of transition. I expect keeping an ePortfolio involves developing a routine of reflection, one that will take some time to get used to.

Calm in the storm: Managing online assessment during a pandemic

Dr Neil Dunne is Programme Director for Trinity’s Postgraduate Diploma in Accounting. In this reflection, Neil reaches out to Programme Directors from across the disciplines, inviting them to consider some key learnings from pandemic assessment that they might carry forward into the new normal.

Over the past five years, Trinity’s Postgraduate Diploma in Accounting has launched the accounting career of almost 200 graduates. Upon completion, graduates attain exemptions from professional accounting exams (e.g., Chartered Accountants Ireland, ACCA and others), which helps them navigate the arduous journey towards professional certification. Professional accounting bodies base their accreditation decisions primarily on the content of syllabi and exams. So when COVID shook Ireland in March 2020, my concerns as Programme Director included not only the pivot to online teaching, but also the challenge of assessment in a pandemic.

Semester 2 exams, which were fast approaching, had been written for a closed-book face-to-face context, i.e., the traditional basis for our professional accreditations. I had to consider how we would assess in April 2020 and beyond in a way that would be online and flexible, yet rigorous enough to maintain our extensive accreditation. It was challenging. With hindsight, I have identified four key learnings on how to navigate online assessment, which I hope are useful for all Programme Directors:

Reach out:  The accounting academic community recognized the issues, and really rallied around each other. I consulted colleagues in the Irish and British Accounting and Finance Associations, both informally and through seminar attendance. Here, I learnt some useful innovations, and also that all Accounting Programme Directors were anxious!

I exchanged frequent correspondence with the professional accounting bodies, who conveyed flexibility and empathy, but perhaps understandably refrained from being too specific on what exactly was required of online assessment. Nonetheless, their documents provided a useful foundation for me to decide on how to assess online.

I frequently checked in with our fantastic faculty, who had to amend their exams for an open-book context, and external examiners, who had to review these amended exams. Similarly, I engaged often with our two class reps, who conveyed the perfectly understandable anxiety and concern of students, and who also played a vital role in communicating my decision-making process to other students via their class WhatsApp groups. I cannot praise the class reps highly enough.

Get the details right: I knew that open-book exams clearly differ from their closed-book variant, but the events of March-April 2020 vividly demonstrated this to me. Let’s start with the front page of the exams. We designed a new standardized cover sheet for open-book accounting exams, which combined guidance from professional accounting bodies, some Trinity-specific declarations, and my own ideas. This cover sheet made clear that answers taken verbatim from a textbook, or unsupported by workings, would not be accepted, and that any examples used in answers should be original (rather than textbook-sourced). These regulations served to allow students to demonstrate their own original thinking. To minimise student anxiety and uncertainty, we placed this new cover sheet on Blackboard well in advance of the exam session.

Rather than hurriedly arranging online proctoring, which is expensive and often flawed, we aimed for exams where candidate attainment would be unaffected by the presence or absence of invigilation. This ‘prevention is better than cure’ approach to potential plagiarism necessitated rewriting of our April 2020 exams. Our litmus test became: if a question was answerable entirely by reference to the textbook, or did not allow the candidate to demonstrate original thought, we modified it or removed it from the exam. Operationalising this philosophy necessitated migrating from knowledge-based towards applied and scenario-based questions. Additionally, questions now often sought opinions. For example, a question that might previously have read ‘Describe the nature and purpose of alternative performance measures (APMs)’ became:

Whilst reading the ‘Top Accounting’ website, you noticed the following quote:

“APMs are quite misleading for users of accounts, and should be banned”

Required:  Do you agree with this statement? Refer to material you have studied this semester to support your answer.

The first part of the revised question seeks an opinion, thus privileging original thought. The second part requires that opinion to rest on lecture material, thus reducing the ‘Googleability’ factor. In other words, a candidate seeing APMs for the first time during the exam could not just Google ‘APMs’ and update their answer sheet accordingly. In contrast, candidates who had engaged with the material consistently throughout the semester could immediately begin to demonstrate their aptitude.

Speaking of answer sheets… We decided to allow candidates to hand-write rather than type their answers, for two reasons. First, students expressed a strong preference for hand-writing, and were wary of exams mutating into an assessment of Word/Excel proficiency (which isn’t a Programme Learning Outcome) rather than of core accounting concepts and skills. Second, the requirement to hand-write answers allowed us to more accurately assess the provenance of each script.

Students downloaded the exam from Blackboard each day, hand-wrote their answers, and then, using an app recommended by us, scanned and uploaded their answers back to Blackboard. We allowed students 15 extra minutes to deal with any IT issues, i.e., downloading and uploading the exam. It generally worked well, and facilitated stylus-based marking/annotation. I’d also set up a ‘mock’ assignment a few weeks before the exam session for students to submit their answers and thus gain practice using the app. Although time consuming, this helped iron out any IT issues in advance of the exam.

Navigate the aftermath: Even pre-COVID, we all knew that examiners’ real work begins after the exam, in terms of trudging to the Exams Office, collecting our scripts, and then allocating several weeks to grading, exam boards, etc. However, the online exams surfaced some unique extra considerations. First, we had to monitor closely for grade inflation: a significant spike in results might have problematized our entire approach to online exams. As it happened, overall results in 2020 and 2021 broadly remained in line with prior years. Second, notwithstanding the mitigation measures described in the previous section, the dreaded spectre of plagiarism still loomed large, and faculty had to extend extra effort in identifying excessive similarity of response. Unfortunately, in 2020 there were some cases, entailing difficult emails and Zoom calls. In 2021, we had no such cases.

People are understanding! The various stakeholders affected by our assessment decisions were generally very understanding. For instance, students adopted a pragmatic and resilient approach that will serve them well in the accounting profession. College immediately provided invaluable training modules and seminars around the area of online assessment. Trinity Business School accepted that longer assignment-type online exams would not be appropriate, and facilitated our request to hold two-hour online exams instead. Additionally, our Programmes Team provided fantastic support. External examiners willingly reviewed a whole new set of online exams. Professional bodies understood that we were still assessing the same learning outcomes, and indeed any post-COVID accreditation reviews have been successful. Finally, our accounting faculty demonstrated their long-held great concern for both student well-being and the integrity of the accounting profession.

To conclude, COVID has made us all think more carefully about assessment. Although the worst may be behind us, the new normal will also involve online assessment, so hopefully the above points may be useful to colleagues in Trinity and beyond. Professionally, I have certainly been on a journey these past 18 months, and would be delighted to talk through any concerns with colleagues that wanted to reach out.

An understanding of feedback

Sam Quill reflects critically on his understanding of feedback and how it has developed through engaging with 3rd year students in dermatology and otorhinolaryngology and with academic peers, working together in a Communities of Practice model. 

Context
Undertaking Trinity’s Special Purpose Certificate in Academic Practice has encouraged me to design student-centred learning activities around social constructivist techniques (e.g. Carlisle & Jordan 2005, Ramsden 1996). In clinical education contexts, where minimum standards of care are required for every patient, it concerns me that not all students are equally likely to benefit from peer learning activities: integrating constructivism into my practice has highlighted that not all students are equally prepared to learn from their peers. Some of the issues my students encounter with peer feedback may well relate to Biggs’ notion of ‘academic learners’: Biggs (1999) suggests that students who thrive in higher-level education without much teacher direction, where student-led learning activities like peer feedback pervade, already possess the skills to reflect on their learning.

When I asked my students what they thought about feedback, I was interested to discover that learners who found student-led approaches more difficult tended to focus their criticisms on peer feedback on patient case presentations. They indicated that they preferred instruction on the “correct” answer from subject experts, rather than learning through peer dialogue and shared understanding. They felt that peer feedback often pointed to what they had done ‘wrong’ rather than offering ‘feed-forward’ action points. They also expressed negative emotions towards what they perceived to be critical feedback. I believe this has dissuaded the students from providing honest evaluation of others’ work, in an attempt not to hurt each other’s feelings.

The point of feedback?

From discussions with students, it seemed likely that some were unclear on the purpose of feedback and therefore unsure of what to expect and how to ‘do’ feedback appropriately. Price et al. (2012) acknowledge the lack of clear consensus on the definition of feedback but suggest that it can serve different roles in learning. For example, on the behaviourist side of the spectrum lie the corrective and reinforcement roles of feedback; on the constructivist side, feedback has a more dialogic, future-focused function. A common mistake in higher education practice involves asking students to reflect “without necessary scaffolding or clear expectation”. Sharing my experiences with colleagues and peers undertaking the Reflecting and Evaluating your Teaching in Higher Education module revealed that this was a common misstep; for me, peer presentations and ‘formally’ structured discussion with colleagues reinforced the benefits of combining individual and collective reflection to work on common challenges.


Peer-driven reflection has prompted me to acknowledge the need for a shared understanding of feedback between me and my students – an insight that has helped my students to embrace the introduction of metacognitive skills into their curriculum.

In both teaching and clinical practice, I recognize that reflective skills and pedagogical literacy are particularly important in a paradigm where peer learning underpins postgraduate clinical professional development. Ryan & Ryan remind us that “deep reflective skills can be taught, however they require development and practice over time.” By reflecting actively on the process of ‘unfurling’ the concept of scholarship of teaching, as outlined by Kreber & Cranton (2000), I can see how my social-constructivist learning activities could be adapted to support better learning for more students. I believe our senior faculty need to plan for the integration of reflective learning skills at all levels of medical education, especially in the earlier, pre-clinical years – but this approach needs to be adopted into daily educational practice, not discussed solely at high-level curriculum committees.

Next steps?

Looking ahead, I want to build on the areas of improvement identified in my Johari window, articulating these in response to peer commentary. Specifically, I want to take a more scholarly approach to evidencing the value of change in my teaching activities at TCD. I would love to see these new reflective feedback skills resulting in a generation of doctors who intuitively “reflect-in-action”, providing responsive care to patients in need, and who also have the ability to “reflect-on-action” and improve medical practice and medical education in the future. Both self-reflection and peer feedback have been essential in developing my Johari window. Would you consider doing a similar exercise for your own context? The links below offer some sample resources to try for yourself!

Reflective learning resources:

Can students ever really be partners in assessment?

Ben Ryan, a 3rd year BESS student at Trinity College and member of the ‘Enhancing Digital Teaching & Learning’ (EDTL) project with the IUA, discusses key points in relation to students as partners in assessment.

Students are more than capable of being partners in assessment. We have so much experience of different assessment types, and what has or hasn’t worked for us in the past. We know what kind of assessments we find interesting and challenging. Involving students in the assessment process can help us be more engaged in the module and get us to develop key skills like communication, teamwork and compromise.

Getting students involved in the assessment process gives us agency and independence and lets us take control of our learning. When we’re given the opportunity to influence aspects of a module’s assessments, I’ve found that my classmates and I were much more engaged with that module and generally had a better understanding of what was required of us in the assessments. I believe students can be partners in designing assessments as we know which assessments we prefer, which are more beneficial to our learning – and which ones aren’t worth putting as much effort into.

Getting students involved could be as simple as running polls or having discussions in class or on discussion boards to agree the type of assessment (individual versus group project, essay versus report) and how teams and groups are selected. I personally think students should be involved in assessments every step of the way, from creation onwards. I think it leads to better engagement with lecture and module content and can give students a better understanding of the assessment process. We can clearly tell when an assessment is just recycled year in, year out, and we lose interest in the assessment and the module content as a result. We know this isn’t always possible – particularly with very large classes – but assessments should at least match the current version of a module!

Students-as-Partners (‘SaP’) models aren’t always used well. Sometimes it can go too far by giving students too much freedom to decide their assessment. Students can be easily overwhelmed by a lack of guidance and support. In one case, I had to write an essay on any topic relating to one of my modules. I thought this was a poor use of the SaP model: it was really broad and I found myself being overwhelmed and not knowing where to start.

Without clear boundaries for the length of time devoted to discussion around assessment, co-creation conversations can drag on and take away from time spent engaging with content in class. Before having these discussions with students, staff should clearly outline how long will be spent discussing assessments and what they hope to achieve from the conversation. I also think staff need to make sure that discussion includes all student voices, not just the most vocal. Any discussion on assessment co-creation should probably include a channel for students to express their views anonymously or privately (a Google Form, a private email to the lecturer). Sometimes it is better to just try out the process of assessment co-creation: you will very quickly see what works and what doesn’t make sense for you – and also what does and doesn’t make sense for your students.

If staff are considering putting a SaP model into their assessments, my main advice would be to just go for it. What’s stopping you from using SaP in your assessment approaches? Why?

Towards a Culture of Dishonesty?

Dr Ciara O’Farrell, Head of Academic Practice at Trinity College, discusses key issues surrounding academic integrity and plagiarism in higher education, and highlights the importance of reaching a shared understanding of both.

Is Andy Warhol’s iconic painting of a Campbell’s soup can satire, or copying? Is Madonna’s ‘Hollywood’ video a creative homage to French photographer Bourdin, or was she striking someone else’s pose? Many years ago, I attended a teaching & learning conference, and I distinctly remember a workshop where Perry Share (IT Sligo) discussed these images, unpacking their relationship to popular culture and framing the notion of plagiarism in intertextuality theory. Fresh from my home discipline of English (where T.S. Eliot once noted, ‘immature poets imitate; mature poets steal’), I found the workshop challenged my pre-conceived perceptions of plagiarism, prompting me to reconsider my attitudes to ghost writing, for example.

In March 2021, Forbes published an article on US giant “Chegg”, currently the most valuable EdTech company in America with stock prices tripling since the pandemic. Indeed, ‘to chegg’ is fast emerging as a verb. Chegg describes its service as ‘connecting college students to test answers on demand.’ Ask an expert a question, the Chegg Study website boasts, and you will have an answer back in ‘as little as 30 minutes.’ However, according to Forbes, who interviewed 52 students who use the Chegg study app, ‘all but 4 admitted they use the site to cheat.’

Cheating is nothing new but there is concern among some academics that the sudden move to open book assessment since Covid-19 may have made it more prevalent. We know from the research that learning achieved through open book assessment is valuable to students and employers alike, and I doubt that many students or academics want to see a lock, stock, and barrel return to the closed book, timed written exams which dominated University assessment until recently. So how can we prevent this?

Of course, students have a responsibility not to cheat, but for students transitioning into third level from a world where plagiarism is becoming increasingly normalised, the type of online student ‘training’ many institutions currently have in place only goes so far and is often little more than a box-ticking exercise. Plagiarism policies help but are challenging to implement. Assessors too can mitigate plagiarism, but this necessitates an assessment re-design that requires students to apply their knowledge rather than regurgitate it, and to synthesise their ideas with those of others rather than ‘steal’ them. This also requires assessors to shift their perceptions of the purposes of assessment and to view it as something that not only ‘tests’ knowledge but acts as a vehicle for learning.

It is time for third level institutions to hold sincere conversations with students about the ‘why’ of plagiarism and to frame these discussions from historical, ethical, legal, cultural, and pedagogical perspectives. Until we reach a shared understanding with students of what plagiarism is and convince them of the importance of academic integrity, we risk a culture of dishonesty taking hold.

Formative digital assessment

Jonny Johnston, Academic Developer, writes about classroom assessment techniques (‘CATs’) in digital teaching and learning.

Higher Education as an endeavour (and as an industry) has spent the last 30 years worshipping at the cult of assessment: are our assessment practices fit for purpose? What are we assessing? Are we assessing for learning, as learning, assessing to take stock of learning, or assessing to certify learning and award degrees? Why do we do assessment the way we do, and how do we see it changing? And, topically – where does digital fit into the debate?

When we talk about digital transformation in assessment, we quite often put the focus on high-stakes summative assessment practices: the shift towards open-book cultures, or the potential privacy invasions of remote proctoring. Yet our use of classroom assessment techniques (often referred to as assessment-for-learning strategies) is often what gives us a sense of whether students have tuned in or just turned up – and whether they’re engaged or not.

In a face-to-face environment like a lecture hall or seminar room, we can judge from students’ expressions whether or not they’re with us – and break up teaching activity with think-pair-share activities, solo minute papers, group discussions, and a whole raft of collaborative activities. We change our delivery, repeat and clarify, and highlight concepts based on how students are responding. Shifting these activities into the digital space isn’t always straightforward – particularly if we’re not aware of just how often we do them in real time when we’re teaching in person.

Some of the things we do in person can work better online, particularly for large-group teaching. VLE tools and videoconferencing apps like Zoom can support anonymous annotation on shared slidedecks, encouraging learners to ask questions at low risk to themselves. Structured engagement in breakout rooms can replicate the ‘talk with the people on either side of you’ think-pair-share – and asking students to report back their ‘group’ answers is lower stakes for a learner than sharing their individual answer in front of 300 peers.

Polling tools are quick and easy to set up on the fly and can be used in almost any situation to give you a sense of where your students are. Wordclouds generated in response to ‘muddiest point’ or ‘minute paper’ style prompts (using tools like TurningPoint or Menti) can be used for responsive plenary activities and let you really quickly and easily see what students have taken away from the session – or haven’t, as the case may be! ‘Post-it’ style ideation, brainstorming, and ‘card-sort’ activities on virtual pegboards can be done with tools like Flinga or Padlet and can be used by groups or individuals.
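As a sketch of how quick this can be, the snippet below builds a wordcloud from ‘muddiest point’ responses using Python’s third-party wordcloud package; this is just one assumed tool among many (Menti and TurningPoint do the same out of the box), and the responses shown are invented for illustration.

```python
# Minimal sketch: turning free-text 'muddiest point' responses into a wordcloud.
# Assumes the third-party packages wordcloud and matplotlib are installed
# (pip install wordcloud matplotlib); polling tools like Menti do this for you.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Invented example responses, e.g. exported from a poll or VLE survey.
responses = [
    "still unsure about eigenvalues",
    "eigenvalues and eigenvectors",
    "the proof of the spectral theorem",
    "eigenvalues again",
    "notation in the spectral theorem",
]

wc = WordCloud(width=800, height=400, background_color="white")
wc.generate(" ".join(responses))  # more frequent words render larger

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()  # or wc.to_file("muddiest_point.png") to save for a plenary slide
```

Whatever the tool, the value is the same: a ten-second visual of what the room did or didn’t take away, ready to shape the next segment of teaching.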

We don’t necessarily think explicitly about formative assessment in action in the digital classroom. I think this is a major oversight: the vast majority of assessment we do as educators is on-the-hoof and formative, particularly when we’re teaching in live time. Our digital teaching is evolving rapidly. Is our digital assessment evolving to match?