
From Currents to a (Brain)storm: Tensions Beget Re-thinking, Re-mapping, Building

by Dominic Koh Jing Qun

As I reflect upon my journey through this course, I notice that my thinking has undergone dramatic changes, with ideas from every seminar making huge splashes. Much like currents swirling and crashing into one another until they develop into a storm[1], the exploration of each idea revealed conflicts between ideas and generated more questions in my head. I initially worried that I was not sense-making deeply enough, but I was relieved, in a way, to learn during the final seminar that this is the norm: assessment is inherently rife with tensions. Indeed, the questions that I posed, and the juxtaposition of different ideas to address the challenges I chose to grapple with in my assignments, sprang from my growing cognisance of these tensions.

This storm of tensions is not all bad. In the spirit of Hegel's dialectic, where the clash of thesis and antithesis leads to the synthesis of new ideas, the tensions I faced gave birth to new insights. Here, I have tried to condense the insights I gained from this course, and from my journey grappling with assessment ideas, into three broad revelations, using the three notions of re-thinking, re-mapping and building[2] offered in the final seminar as a scaffold.

A re-thinking of context specificity in assessment

One of the earliest ideas introduced was that assessment is context-specific and a situated practice (Joughin, 2009; Klenowski & Wyatt-Smith, 2014). After revisiting the various questions that I posed and the musings in my assignments, I realised that my questions in the online tasks, and the introspection that came with writing my assignments, helped me gradually identify different dimensions of the context within which assessment is situated. This led to a re-thinking of the notion of context-specificity, leaving me with a more enriched understanding of it, as demonstrated below.

In our discussion on rubrics, I questioned whether the metaphorical notions of invitation (Bearman & Ajjawi, 2021) and dialogue (Klenowski & Wyatt-Smith, 2014) would work for my students. Such a question arises from a single context of teaching and learning. Gradually, as I questioned the concepts further, the context on which I focused my questioning moved beyond teaching and learning. For instance, in analysing holistic and analytic rubrics, reading Torrance's (2007) argument led me to question the necessity of providing detailed descriptors for all criteria tested – an institutional belief factor – beyond just how it would affect my students or my marking. For gestures, apart from how I could incorporate them into my teaching, I questioned further, which led me to think about their synergy with other existing assessment practices in schools and beyond – a workplace factor – and their potential contribution to assessment as accommodation for specific groups of learners – fairness, in some sense.

Perhaps due to this shift in thinking about context-specificity, prompted by rounds of introspection, I noticed a greater granularity in how I perceive context. The most evident example is the notion of fairness, which I had previously learnt is a "contextual construct" (Rasooli et al., 2023, p. 269) situated in my teaching practice. Revisiting this made me realise that fairness pervades assessment beyond just test outcomes. The different justices (Rasooli et al., 2023) relate to different aspects of assessment, from preparing a test to executing it. I see these as different layers of the assessment context in which fairness is relevant.

This re-thinking of context specificity also allowed me to think more clearly, which in turn enabled me to derive answers to my own questions. For feedback, I had asked how I could fit the new concepts learnt into a sustainable routine for my teaching. It was precisely this re-thinking of context specificity that made me realise each feedback practice has to be contextualised to the group of students, the topic and the subject. As such, it is difficult to obtain the prescriptive response that I originally desired. Rather, I should consider how to use the concepts and frameworks learnt (Lipnevich & Smith, 2022; Quinlan & Pitt, 2021; Sadler, 2013; Tan, 2013, 2022) to form my own, non-exhaustive design specifications for my feedback practice, allowing me to customise feedback for different contexts. Some specifications include:

  • What is the definition of feedback that I wish to adhere to, and what would it look like in practice?
  • Who or what are the sources of feedback?
  • What is the rhythm or cycle appropriate for this class or topic?
  • What are the standards that students are supposed to aim for using the feedback?
  • What tasks should be crafted to enable the use of feedback?
  • What working knowledge do my students possess? Do they have the knowledge to interpret my feedback in the same way as I intend?

In a sense, I had answered my own question on feedback practice by re-thinking and re-considering what I had learnt that could be put to use. This re-thinking of context specificity continues even as I write this piece, which hopefully keeps me cognisant of the multiple aspects of any assessment issue I encounter in the future and aids my search for answers.

On this idea of re-thinking context-specificity, I recalled how post-modernist theories seek to deconstruct existing narratives in search of new meanings, usually invoking a re-thinking of sorts. Thus, I came to wonder whether we can apply post-modernist perspectives to assessment. Can we apply a 'critical pedagogy' or 'intersectional' lens to view assessment practices and issues? I would opine that certain worldviews or identities would interact differently with assessment, and new tensions could be uncovered[3]. This would certainly enrich what is meant by context-specificity, leading to new answers in the future.

A re-mapping of flow of assessment

Some seminars, particularly the more recent ones, flipped my understanding of how teaching and assessment can be sequenced. Perhaps they simply revealed my assumption that there is a single way of sequencing instruction and assessment, much like Plato's cave allegory in a simplistic sense. I did not expect to glean new possibilities for 'switching up' my teaching pedagogy from an assessment course[4], which was a pleasant surprise. Regardless, these re-mappings come in a range of flavours, from mild and palatable (readily incorporated into practice) to extreme and reserved for acquired tastes (accepted by a select few).

The mild flavour came from authentic assessment. As previously highlighted, I recognised the inherent danger of simply constructing a task and layering it with real-world elements, namely construct under-representation and construct-irrelevance, which diminish validity (Tay, 2018). The intuitive, natural flow of setting a task and then fitting it into the real world was overturned by the introduction of the construct-centred approach (Messick, 1994) and the five-dimensional approach (Gulikers et al., 2007). The starting point is deciding on the construct to be assessed and the type of "real world", rather than the task itself. It is this "meta" level that drives the appropriate creation of assessment. Such a change is mild, as it merely requires a shift in the starting point of task design.

This is followed by preparation for future learning (PFL). The double transfer paradigm (Schwartz & Bransford, 1998; Schwartz & Martin, 2004) was introduced, whereby students engage in a task and are assessed on it, then discuss their assessment responses before receiving some form of direct instruction related to the task; another assessment task is administered at the end. This re-maps the typical lesson-then-test sequence. Moreover, the assessment in this paradigm tests the student's preparedness, manifested as the extent of prior knowledge activation or the aspects of the solution space considered, rather than outcomes alone. It is not an overly extreme change, as it is a plausible sequence teachers could adopt if they wished to "switch up" their pedagogy. Yet it is not mild either: it creates tension with existing class schedules and teachers' schemes of work, since such a paradigm takes more time, and the shift in the focus of assessment may not be taken up by teachers haunted by examination pressures (Hogan et al., 2013).

Lastly, the extreme flavour came from the introduction of biomarkers that could offer feedback (Jamaludin & Tan, 2023). Neurophysiological data as biomarkers offer dynamic, temporal tracking of a student's learning process throughout a lesson. This re-maps the collection of evidence of students' learning from discrete time points to a continuous stream across a lesson, which could potentially alter how feedback rhythms, cycles and spirals (Quinlan & Pitt, 2021) are conceptualised. It is considerably extreme: it invites not just the logistical issues of the equipment needed to collect such biomarker data, but, to those unacquainted with the science of learning, it also appears too distant from assessment itself. Indeed, this chasm between the science of learning and assessment was particularly evident during my student-directed seminar, as I struggled to reconcile the two. Separately, some might discount the idea of biomarkers altogether as unethical, for one would be collecting and using students' biodata, which could open a Pandora's box of issues (Farah, 2005).

Continuing with the idea that assessment is rife with tensions, such re-mapped flows of assessment would certainly be in tension with different aspects of assessment practice. The reservation regarding the use of biomarker data brings the idea of ethics back into focus. Is there such a thing as assessment ethics? Is the fairness we have encountered an aspect of this ethics? Perhaps only by learning what is morally permissible in the realm of assessment would we know the extent to which such re-imaginings of the flow of assessment should be pursued in reality.

Building with values as the bedrock (lest we forget)

With assessment being so complex, with its many contextual considerations, contextual pressures and concepts invoked, there are many things that can be used to build our assessment practices. At some point in the course, I began to question whether we, as assessors, might lose our way.

There is a chance of becoming overly fixated on assessment outcomes, tests and grades, and losing sight of the central purpose of assessment (Biesta, 2009; Joughin, 2009). This was very evident when I prepared the critique of the practicum form in my assignment. In a bid to grade efficaciously and "objectively", breaking concepts and skills into distinct competencies listed as analytic criteria seemed acceptable. Furthermore, at least on the surface, such distinct competencies seem to provide clear feedback (Sadler, 2009). Yet such decomposition of competence into discrete competencies does not necessarily entail assessment of competence itself (Sadler, 2013), not to mention that it reduces the learning for students (Sadler, 2007). From this, I saw how this assessment's central purpose of ensuring the competence of student teachers can be readily distorted in the process of operationalising grading efficiently. It is a lesson that reinforces Joughin's (2009) point about losing the purpose of assessment, with ramifications for students' learning.

This over-fixation and losing our way also apply to these new, almost bedazzling concepts. From the second online task, it was clear from the way I was questioning the concepts that I was fixated on trying to incorporate as many new ideas as possible in a bid to construct a set of well-informed routines. Furthermore, I highlighted that I did not know what my feedback was doing for my students; I had already realised I might have lost sight of what assessment is supposed to do for my students and for myself as the teacher. So what could I do to avoid this?

This brings me back to my first key insight from this course: assessment is a value-laden exercise. Interestingly, many concepts learnt in this course either explicitly or implicitly articulate the consideration of values. For instance, the notion of validity deals with interpretation, which relies, in part, on our values. The notion of authenticity in authentic assessment requires a decision on what is valued, that is, which "world" is to be placed as the reference. The proposition of using biomarkers as feedback sources questions what is being valued – performance outcomes or the learning itself. All of these allude to one idea: anything new can be incorporated, but it must be oriented around a purpose, and around values and beliefs about assessment. If we are clear about our goals and beliefs, we will know what is useful to incorporate, what could synergise with our practices, and what can be readily justified with conviction. By knowing our values in assessment, we know which new concepts can be used, and understand that not every new thing must be used.

Such values can be easily forgotten. Alternatively, we can end up valuing superficial ideas (Biesta, 2009) if we are too caught up in the operations of things, or risk the nounification (Tan, 2016) of "values" if we state them passively without much thought. Constant, active monitoring and evaluation is necessary, which was what I had proposed for myself as one of the earliest actions to take for assessment. After learning so much, this conviction to continually evaluate, and not merely state, my values and epistemological stance has grown even stronger, for I now know with greater certainty that it cannot be dismissed as an abstract thought: doing so brings consequences for how assessment is constructed and impacts our learners. Values are thus the bedrock upon which any assessment practice is built. Losing sight of our fundamental values and purpose in assessment spells the splendid collapse of an assessment practice. If anything, this is the greatest lesson learnt from this course.

Notes

1. While a storm usually has a negative connotation, couldn't it have a positive connotation as well?
2. Many thanks to A/P Kelvin Tan for introducing this idea.
3. Interested readers may wish to refer to readings such as:

McArthur, J. (2016). Assessment for social justice: The role of assessment in achieving social justice. Assessment & Evaluation in Higher Education, 41(7), 967–981. https://doi.org/10.1080/02602938.2015.1053429

Nieminen, J. H. (2022). Assessment for inclusion: Rethinking inclusive assessment in higher education. Teaching in Higher Education. https://doi.org/10.1080/13562517.2021.2021395

Thank you, Dr Amir, for these suggested readings.

4. Considering that assessment is part of a triadic relationship with curriculum and pedagogy (which I re-learnt while perusing my undergraduate notes), it should not be a surprise to find ideas about pedagogy in assessment discourse!

References

Bearman, M., & Ajjawi, R. (2021). Can a rubric do more than be transparent? Invitation as a new metaphor for assessment criteria. Studies in Higher Education, 46(2), 359–368.

Biesta, G. (2009). Good education in an age of measurement: On the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability, 21, 33–46.

Farah, M. J. (2005). Neuroethics: The practical and the philosophical. Trends in Cognitive Sciences, 9(1), 34–40. 

Gulikers, J., Bastiaens, T. J., & Kirschner, P. (2007). Defining authentic assessment: Five dimensions of authenticity. In Balancing dilemmas in assessment and learning in contemporary education.

Hogan, D., Chan, M., Rahim, R., Kwek, D., Maung Aye, K., Loo, S. C., Sheng, Y. Z., & Luo, W. (2013). Assessment and the logic of instructional practice in Secondary 3 English and mathematics classrooms in Singapore. Review of Education, 1(1), 57–106. 

Jamaludin, A., & Tan, A. L. (2023). Neurophysiological affordances for assessment design and feedback: Biomarkers of student learning. In C. Evans & M. Waring (Eds.), Research handbook on innovations in assessment and feedback in higher education. Edward Elgar Publishing.

Joughin, G. (2009). Assessment, learning and judgement in higher education: A critical review. In G. Joughin (Ed.), Assessment, learning and judgement in higher education (pp. 13–28). Springer Science+Business Media B.V.

Klenowski, V., & Wyatt-Smith, C. (2014). Assessment for education: Standards, judgement and moderation. SAGE Publications.

Lipnevich, A. A., & Smith, J. K. (2022). Student–feedback interaction model: Revised. Studies in Educational Evaluation, 75, 101208–101219.

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessment. Educational Researcher, 23(2), 13–23.

Quinlan, K. M., & Pitt, E. (2021). Towards signature assessment and feedback practices: A taxonomy of discipline-specific elements of assessment for learning. Assessment in Education: Principles, Policy & Practice, 28(2), 191–207. 

Rasooli, A., Rasegh, A., Zandi, H., & Firoozi, T. (2023). Teachers' conceptions of fairness in classroom assessment: An empirical study. Journal of Teacher Education, 74(3), 260–273. https://doi.org/10.1177/00224871221130742

Sadler, D. R. (2007). Perils in the meticulous specification of goals and assessment criteria. Assessment in Education, 14(3), 387–392.

Sadler, D. R. (2009). Transforming holistic assessment and grading into a vehicle for complex learning. In G. Joughin (Ed.), Assessment, learning and judgement in higher education (pp. 45–64). Springer Science+Business Media B.V.

Sadler, D. R. (2013). Making competent judgments of competence. In S. Blömeke, O. Zlatkin-Troitschanskaia, C. Kuhn & J. Fege (Eds.), Modelling and measuring competencies in higher education: Tasks and challenges (pp. 13–27). Sense Publishers.

Schwartz, D. L., & Bransford, J. D. (1998). A time for telling. Cognition and Instruction, 16(4), 475–522.

Schwartz, D. L., & Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129–184. 

Tan, K. (2013). A framework for assessment for learning: Implications for feedback practices within and beyond the gap. ISRN Education, 2013, 1–6. 

Tan, H. K. K. (2016). Asking questions of (what) assessment (should do) for learning: The case of bite-sized assessment for learning in Singapore. Educational Research for Policy and Practice, 16, 189–202. 

Tan, H. K. K. (2022, July 8). The four boxes of assessment feedback literacy. Assessment for All Learners. https://www.assessmentforall.com/the-four-boxes-of-assessment-feedback/

Tay, H. Y. (2018). Designing quality authentic assessments: A look into Australian classrooms (1st ed.). Routledge. https://doi.org/10.4324/9781315179131

Torrance, H. (2007). Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education: Principles, Policy & Practice, 14(3), 281–294.