
Equity in Education: Reconceptualising Power Dynamics

By Chen Ying, Rheverie MCT903

It is perhaps surprising that it has been 25 years since “Thinking Schools, Learning Nation” (TSLN) was declared the Ministry of Education’s (MOE) vision (MOE, 2021). The now-ubiquitous phrase birthed countless policies (Deng & Gopinathan, 2016), and yet Singapore’s education system remains characterized by an obsession with examination results (Deng & Gopinathan, 2016; Ab Kadir, 2019) rather than a “spirit of learning” (MOE, 2021). An in-depth examination of assessment practices offers an explanation: amidst the bid for greater thinking and learning, cognitive passivity, the antithesis of TSLN, is unconsciously fostered. As this essay will endeavour to illustrate, there is a power imbalance inherent within assessment that, when neglected, can intensify and undermine attempts to foster higher-order thinking skills and dispositions (HOTDs) and, accordingly, TSLN. In response, a more self-aware application of rubrics that balances the power dynamics between students and authority is needed to foreground student voice and agency; success here might bring us a step closer to realizing TSLN.

 

TSLN, Subjectification & Sustainability: An Unattainable Reform?

 

For Singapore, even if imperfect, TSLN was a necessary—and therefore “right”—response to the knowledge-based economy (Ab Kadir, 2019), which implied new education outcomes. Of the three major functions of education proposed by Biesta (2009)—qualification, socialization, and subjectification (p.39)—this essay will focus on subjectification. Ideally, subjectification promotes independent “thinking and acting” (Biesta, 2009, p.41), but Singapore students are passive, capable of efficiently reproducing knowledge but not of original, independent thought (Gopinathan, 2007, 2015; Deng & Gopinathan, 2016; Tan, 2017; Ab Kadir, 2019). TSLN thus attempted to rejuvenate an education system poorly equipped to hone students’ critical capacity. It concedes that disciplinary content alone is insufficient and that students must be empowered for lifelong, self-directed learning.

 

In that sense, sustainable learning and assessment (SLA), which “prepares students to meet their own future learning needs” (Boud, 2000, p.151), is an intended outcome of Singapore’s education. In his seminal piece on the topic, Boud (2000) emphasized that SLA is crucial if we want students to be “active agent[s]” (p.154) of a “learning society” (p.152), words that echo TSLN’s key tenets. Consequently, emphasis has shifted from content mastery to HOTDs, thereby advancing subjectification goals whereby students develop into “agentic individuals” (Tai, Ajjawi, Boud, Dawson, & Panadero, 2018, p.478). Such a student would be capable of independent, lifelong learning (MOE, 2021) and would possess the critical mind to appraise and improve society (Biesta, 2009).

 

The Difficult Road to TSLN

 

Despite commendable efforts (Norruddin, 2018), Singapore’s classrooms remain didactic (Gopinathan, 2007, 2015; Tan, 2017). Deng and Gopinathan (2016) explained that the combination of high-stakes examinations, stakeholders’ instrumentalist views, teachers’ misconception of knowledge as transmissible facts, and the resultant “purpose-fit” practices (p.460) marginalizes HOTDs. This view is corroborated by Ab Kadir’s (2019) examination of Singapore students’ perceptions of their critical thinking ability, which revealed that critical thinking is side-lined and even “absent” in “daily lessons” (p.559). Most students described their learning experience as “memorising” (p.559); there is barely a need to think at all, much less think critically.

 

Conceptual Understanding of “Judgment”

 

Deng and Gopinathan (2016) were accurate in their assessment but have yet to unpack its complexities. One of the main causes crippling TSLN is stakeholders’ problematic understanding of “judgment” and its dangerous implications for education.

 

 

Neglected Undertones of Power in “Judgment”

One key facet of qualification is judgment. An inescapable part of education—and life (Boud & Soler, 2016)—judgment befalls us all and, to a large degree, justifies education. However, if we understand judgment only as an innocent rite of passage, we neglect the power tensions that beleaguer efforts at learning. In particular, the power imbalance between teachers and students could stifle students’ capacity for HOTDs.

 

At the heart of judgment is a power hierarchy between judge and defendant and within that a difference in agency. The powerful judge decides what will be done to the powerless defendant, who can only accept. Where TSLN is concerned, this power imbalance could have disastrous consequences if allowed to dominate. Without opportunities to assert the authority embedded within HOTDs such as self-judgment, the student is hard-pressed to develop them. Two components of judgment in education are extremely pertinent to our discussion: assessment and standards (as manifested in rubrics). The former is the seemingly inescapable judgment while the latter facilitates the former (Klenowski & Wyatt-Smith, 2013).

 

One reason for neglecting the significance of power dynamics in assessment could be that stakeholders perceive assessment as procedural, forgetting that all assessments are learning experiences (Boud, 1995) that influence students’ understanding of knowledge, learning and, I might add, their relationship with authority. Each assessment teaches students how to respond to authority, as assessment standards are “set by a recognized authority” (Sadler, 2014, p.283). Although assessment improves accountability, the hidden curriculum (Eisner, 2002) could concurrently foster habits of unthinkingness. Indeed, one can self-regulate unthinkingly, as machines do. Preoccupation with examinations risks the dysfunctional replacement of learning with assessment (Torrance, 2007), and it informs students that authority has absolute power over them: if they seek “success”, they must bow their heads.

 

 

Ramifications: Subservience in place of Subjectification

 

The countless assessments and references to standards make “authority” omnipresent in students’ experiences, in the form of rubrics. While not inherently problematic, rubrics’ implications are difficult to fully appreciate. Certainly, rubrics can facilitate learning, develop students’ self-evaluation skills (Boud & Soler, 2016; Yan & Boud, 2021), and even reduce anxiety (Panadero & Jonsson, 2020). However, there are also limitations, such as standards’ inherently reductive nature (Kohn, 2006; Klenowski & Wyatt-Smith, 2013), encouragement of criteria compliance (Torrance, 2007; Klenowski & Wyatt-Smith, 2013), and limited affordances for learning (Panadero & Jonsson, 2020).

 

Yet despite the abundant research, insufficient attention is paid to the power imbalance between students and representatives of authority that careless use of rubrics could reinforce. Molloy and Bearman (2019) noticed “tensions between vulnerability and credibility” (p.35) in the student-teacher relationship, which is a power imbalance. The ignorant student’s attempts to learn leave him at the knowledgeable teacher’s mercy. The teacher, professionally obligated to maintain credibility, may erroneously do so via autocratic control. The student’s agency is greatly reduced, and he likely becomes passive. Such is the normalized “status quo” (Molloy & Bearman, 2019), implying that the student accepts, and perhaps welcomes, this “oppression”, while the teacher may mistake compliance for progress. We could replace one representative of authority (the teacher) with another (rubrics) and the problem would persist.

 

Such imbalance elevates rubrics and standards to an undeserved pedestal, where they are seen as “right” when no rubric is perfect or permanent. Rubrics are in fact context-dependent social constructs (Bearman & Ajjawi, 2018, p.13) that are only effective for a specific assessment in its specific context. Beyond that context, a rubric is inaccurate and may do more harm than good. Indeed, Molloy and Bearman (2019) have yet to discuss a possible cause and consequence of this power imbalance: the misapprehension of ignorance as incapability, and the resultant silencing and subservience of students. Rendered helpless when stripped of authority, students can only passively obey. The more standards are presented as an absolute authority, the more unthinking obedience is encouraged. Insistent judgment against authoritative rubrics might eventually manifest as a “worshipping” of authority, where unthinkingness turns into a subservience that thwarts efforts at subjectification and TSLN.

 

Practical Suggestions for Student Voice in Assessment

 

As the manner in which standards are implemented teaches students what to value, teachers’ practices must change to ameliorate the power imbalance. One way to do so is by foregrounding student voice. However, although there is much discourse about increasing student involvement in assessment, be it co-construction of rubrics, self- and peer-assessment, or the like (Boud & Soler, 2016; Bearman & Ajjawi, 2018; Tai et al., 2018; Panadero & Jonsson, 2020; Pui, Yuen, & Goh, 2021), these attempts are often simplistic. Where co-construction is discussed, the amount of student authority is seldom stated. Typically, students are encouraged to propose changes, revealing that the decision-making authority lies elsewhere. These strategies often seek to improve students’ understanding of rubrics (Panadero & Jonsson, 2020). But if the objective is clarification, then students are again in a lower position of authority, as there is a pre-existing and seemingly infallible standard to learn.

 

In response, this essay now explores three suggestions for practical implementation of rubrics in assessment that might promote an effective sharing of power where students assert their agency while working with rubrics, before considering how these suggestions might be enacted for Literature education in Singapore.

 

Proposition 1: Balancing Power and Disrupting the Status Quo through Dialogue

 

Given that a key cause of the power imbalance is the lack of student voice and agency, one solution would be practices that promote dialogue between students and representatives of authority, be it teachers or rubrics. There needs to be a space for continuous and constructive conversation with intellectual candour (Molloy & Bearman, 2019), where teacher and student authority are temporarily equalized.

 

According to Molloy and Bearman (2019), intellectual candour is a “hesitant” discourse between teacher and student where the prospect of failure, that is, loss of authority, is momentarily disregarded (p.36). By disrupting the traditional power paradigm, intellectual candour creates a safe space where “intellectual risks” might be taken to advance learning (p.37). This space is extremely valuable in discourses involving rubrics and assessment because it “acknowledges fallibility” and restores student power, both of which encourage student voice (Molloy & Bearman, 2019).

 

Indeed, although dialogue has often been proposed with regard to the use of rubrics (Ashby-King, Iannacone, Ledford, Farzad-Phillips, Salzano & Anderson, 2021), the power dynamics of the parties involved are rarely discussed. I must stress that in such a dialogue, the priority is not clarification or explanation, as both relegate students to a position of passivity, but a mutually influencing exchange in which all parties might undergo a transformation in their understanding of assessment and rubrics. Intellectual candour’s greatest value here is the creation of what others have called a “shared context” (Ashby-King et al., 2021, p.12), where power is more balanced after being shifted from teacher to student, and student agency is increased through significant participation.

 

The significance of such conversations stems from their “continuously interactive” nature (Matshedisho, 2020, p.176), where students are also teaching teachers about standards (Bearman & Ajjawi, 2018), be it their understanding of standards, the limitations of rubrics when enacted, and so on. The student is thus not seeking approval for his propositions, nor does the teacher have the authority to grant approval; both must convince each other of their perspectives. Failure to reach a consensus might indicate crucial gaps in their understanding and would require further discussion. Such debate balances the authority between parties and is an exercise in critical thinking that promotes effective use of rubrics in learning. The resultant effect is long-lasting. As Ajjawi, Tai, Dawson and Boud (2018) argued, such exchange hones the self-judgment skills that undergird “feedback conversations” for effective learning, both within and beyond education (p.8).

 

Proposition 2: Subject Epistemology as Essence in Rubrics Design

 

Admittedly, designing rubrics is difficult, especially if the rubrics are also to facilitate TSLN. Left vague, rubrics do not convey expectations; made too detailed, they invite students to passively follow instructions instead of critically developing their own learning strategies (McKnight, Bennett, & Webster, 2020). In the same vein, Panadero and Jonsson (2020) recommended that rubrics be “neither too closely tied to the particular task nor too general” (p.3), so that the same rubrics could be used across a variety of tasks, with the repeated usage increasing students’ understanding of criteria. They elaborated that these rubrics could be “gradually faded” after students have “internalized” them (p.8). While a degree of internalization is inevitable, we should question the content to be internalized. Done carelessly, a gradual assimilation of criteria might numb students to the underlying tensions and politics. We must thus consider what exactly students should internalize and whether that would facilitate their attainment of subject and TSLN objectives.

 

To that end, educational and subject epistemology should be foregrounded in rubric design. The former refers to the principles that guide TSLN and apply across all subjects, such as knowledge and learning being ever-evolving and endless. The latter emphasizes assessment’s contributions to each subject’s “ways of knowing” (Nieminen & Lahdenperä, 2021, p.12), which are applicable to all tasks within the subject domain. Such rubrics are attuned to TSLN and develop in students the perspective of the discipline’s “experts” (Tai et al., 2018, p.476). Equipped thus, students are empowered to speak and be heard within the respective domain, which is quintessential to facilitating the dialogue described above. For instance, a student who knows nothing about the epistemology of Literature—that meaning is tentative and plural—would find it difficult to speak intelligently on the subject or have their opinions taken seriously. Conversely, when rubrics are consistently aligned with each subject’s worldview and that of TSLN, they contribute to students’ empowerment and, even when internalized, arguably promote active thinking as opposed to passivity.

 

Proposition 3: Embracing Rubrics’ Dynamic, Contextual Nature

 

Even if the epistemology embedded within rubrics serves as an anchoring constant, this is not to say that there is a “master” rubric for the subject. Epistemology forms only part of a rubric; the rest should be specific to the corresponding tasks. Students must therefore be guided to appreciate rubrics as imperfect documents that are meticulously designed for specific purposes. Correspondingly, rubrics should include two components: reflective questions, through which students explore, challenge, and co-construct the rubrics with teachers as academic peers; and comparisons of multiple rubrics and artefacts, which foreground rubrics’ dynamic and contextual nature. Firstly, rubrics for students could include reflection questions where students must evaluate the criteria and explain modifications they would have made, be it the inclusion of new criteria, the questioning of existing criteria, presentation, or word choice. This exercise would highlight rubrics’ dynamic nature, the imprecision of their presentation, and the subjectivity of interpretation (Bearman & Ajjawi, 2018), emphasizing that some criteria are difficult, if not impossible, to articulate (Panadero & Jonsson, 2020, p.11). Consequently, students must negotiate their own meaning depending on context, and work with rubrics. Effectively done (such as through Proposition 1), this process has immense potential to develop HOTDs.

 

 

This process should be facilitated by artefacts, including multiple exemplars and different versions and presentations of the same rubrics—such as holistic and analytic rubrics—to enrich the dialogue and emphasize assessment’s complexity. I would caution against providing a single artefact, such as a lone exemplar, as it might encourage the false and reductive impression of a Right Answer. Instead, using a combination of artefact types, with multiple examples of each type, could circumvent implications of criteria compliance and mimicry. The resultant discourse promotes a fuller and more realistic understanding of quality and expectations in their various manifestations (Ajjawi et al., 2018), which could go a long way in improving students’ self-evaluation and higher-order thinking capacity.

 

Rubrics’ contextual nature should also be foregrounded, so that students appreciate that rubrics are designed for a variety of purposes, such as formative or summative assessment (Panadero & Jonsson, 2020), and that these purposes are not necessarily aligned. After all, the learning objectives of each subject and, more importantly, the prioritized educational objectives change as students grow. Hence, the expectations stated in previous rubrics might be outdated, and it would be problematic to use the same rubric persistently, as some have proposed (Panadero & Jonsson, 2020). A significant change in context demands a new set of rubrics, lest the rubrics present obstacles to learning instead of facilitating it. Hence, at each stage of proficiency, new rubrics that share the same epistemology but perhaps prioritize different objectives should be designed. A comparison of these rubrics could also highlight rubrics’ porous boundaries, reminding students that rubrics do not capture all that is valuable but, to a limited extent, what is of most value in a specific context. As a result, students develop the evaluative ability not only to judge their work, but to judge rubrics, an outcome of subjectification.

 

Case Study: Rubrics for Literature Education in Singapore

 

The three propositions above aim to empower students to assert their agency meaningfully. We now illustrate how they might appear in practice by examining and modifying the descriptors in Singapore’s government-issued rubrics for Literature essay assignments.

Figure 1: Band 1 Descriptor for Lower Secondary Literature Assessment of Set Text (Express). Refer to Appendix B for the full rubrics.

The above excerpt from CPDD’s rubrics (2019) shows the Band 1 descriptors for lower secondary Literature assessment. Closer scrutiny reveals that these are extracted from the O Level rubrics (Appendix A). The main difference is the change from a holistic rubric to an analytic one, which increases transparency but is insufficient to make this rubric conducive to the intended novice students’ Literature education or to the cultivation of HOTDs.

 

All in all, this rubric takes many liberties and makes unfounded assumptions about students’ subject and assessment literacy. The rubrics for a summative examination aimed at qualification, such as the O Level rubrics, arguably should prioritize the examiners’ perspective, efficiency, and accountability. Their language should be the subject’s academic vernacular, the language of the examiners-as-experts, which emphasizes subject mastery, the essence of qualification. But if we were to use this same rubric in everyday teaching and learning, as implied by CPDD, we might find the results lacklustre at best. It is unreasonable to assume that novice students grasp the subject’s metalanguage even if the main criteria have been unpacked. The dispositions and skills embedded within the descriptors might be too subtle for novices to detect. Likewise, we cannot assume that novice students are aware of the assessment context or intended audience. As Matshedisho (2020) emphasized, “tacit knowledge of a discipline” (p.176) is needed to comprehend criteria, and that knowledge needs to be developed through teaching and effective rubrics.

 

Ultimately, these issues undermine students’ development. Consider, for instance, the first criterion, on relevance to the question. While the criterion appears self-explanatory—one should answer the question, after all—an in-depth examination of the descriptors reveals otherwise. What do “opportunities offered by the question” refer to, and how are they different from the “demands of the question” (CPDD, 2019)? Experienced students might have eventually, and hopefully, understood from practice that “opportunities” refer to moments where students can expand their textual interpretation into discussions of real-world issues, thus realizing part of the subject’s ontological significance, but novice students will likely be confused. For all four criteria, the subject’s epistemology is implied at best, which could undercut novice students’ understanding of the subject and engagement with the rubrics. Even for experienced students, this rubric largely fails to provide an effective foundation for critical thinking and evaluation. What, for one, is the basis for the “sound understanding” and “insights” that make them central to the subject? And what makes evidence “thoughtful”, and why does that matter to literary study (CPDD, 2019)? Certainly, we can assume that a competent teacher would explain and illustrate these in daily teaching. But given the importance of epistemology to the subject’s value, it should have a place within assessment documents. Furthermore, if rubrics are to support learning and the development of HOTDs, epistemology matters: it provides an anchor upon which self-evaluation can be done and supports assessment’s epistemic role (Nieminen & Lahdenperä, 2021). The latter, as discussed, empowers student voice and agency.

 

In light of these issues, I propose the following modified descriptors. For now, only the band descriptors are changed; a more thorough revision of the criteria and presentation is advised.

Relevance to Question

Descriptor: Response unpacks the question’s key terms and demands in relation to the text’s themes. Discusses the text’s response to real-world social issues implied in the question.

Reflection questions: How would you detect the themes that are implied in the question’s phrasing? What is the larger societal significance of this question, and why is that important?

Understanding of Text

Descriptor: Response demonstrates familiarity with the text’s socio-cultural context, plot, characters, and themes. Demonstrates an understanding of how these aspects influence each other, and how literary techniques are used to illustrate them.

Reflection questions: Consider why these elements are important to your interpretation of the text. Why should you analyze the writer’s use of literary techniques?

Substantiation with Evidence

Descriptor: All claims and interpretations are substantiated with multiple pieces of strategic evidence that allow the student to demonstrate his close reading and analytical thinking. A variety of evidence is analyzed to illustrate the student’s grasp of different facets of the writer’s craft.

Reflection questions: Consider what makes evidence strategic and how it might affect the convincingness of your argument. What is the value in choosing a variety of evidence, and multiple pieces at that?

Quality of Argument

Descriptor: Writing is clear, with the student’s interpretation conveyed in a systematic and logical sequence that makes the argument convincing to the intelligent reader (a fellow Literature student). Viewpoint is consistent, with no contradictions in interpretation that confuse the reader or hamper convincingness.

Reflection questions: What makes a piece of writing clear? How might a clear and coherent essay make your argument more convincing?

 

Figure 2: Modified Band 1 Descriptors for Lower Secondary Literature Assessment of Set Text (Express)

 

One of the most important modifications to the descriptors is the expansion of key terms to foreground subject epistemology (Proposition 2). Hence, “thoughtfully selected” evidence is replaced with “strategic” evidence so as to foreground a competency at the heart of literary critique (MOE, 2019). Similarly, “opportunities” is replaced with “real-world social issues” to increase the descriptors’ clarity in a manner that empowers critical thinking instead of criteria compliance, and that is tied to subject epistemology (literature as social critique). Descriptors for understanding of text now use the same terms as the subject’s key areas of study (MOE, 2019), for an explicit alignment between subject epistemology, daily teaching and learning, and assessment. These changes provide a solid foundation upon which self-evaluation can occur. No instructions or procedures are given, only the hallmarks of literary critique, to empower students with knowledge that would add volume to their voice in discourses about how the subject should be taught and assessed (Proposition 1). Finally, the third column contains scaffolding questions that might be discussed during interactive dialogues (Proposition 1) and could be included in the rubrics as reflection questions, especially for advanced students (Proposition 3), to aid their cultivation of HOTDs for TSLN.

 

Implication: The Need for Transferrable Assessment Literacy

 

Even if the mythical perfect rubric were found, the great hurdle of students’ assessment literacy remains. As Bearman and Ajjawi (2018) argued, for students to focus on a rubric’s “spirit” instead of its “letter”, students must see with rubrics instead of through them (p.4). Accordingly, students need to be aware that rubrics are a “filter” composed of societal expectations and values, all of which are manifestations of a possibly fallible and temporal authority. Yet, possibly as an extension of teachers’ problematic use of rubrics, students’ assessment literacy can be lacking. Matshedisho (2020) found that students and lecturers understood the same set of rubrics differently: where students expected “affordances” from rubrics, that is, “[p]rocedures”, “[i]nstructions” and even the “[r]eadings” to include, the lecturer expected students to understand from the rubrics the “conceptual knowledge” to demonstrate, such as “interpretation” and “framing” (p.175). Regardless of the rubric design, one unignorable problem here is students’ myopic and utilitarian conception of rubrics. This is not a standalone phenomenon. Students have been observed to use rubrics only when instructed to do so, such as during self-assessment (Pui et al., 2021), despite having been given the rubrics long before. In other words, we cannot assume that students automatically appreciate rubrics’ significance or know how to use them. Instead, students might disregard rubrics’ potential as a learning tool, be it for subject knowledge or HOTDs, and perceive rubrics as a map towards a stellar grade. On a related note, students have different predispositions and backgrounds that affect their conception of and relationship with rubrics (Bearman & Ajjawi, 2018). There is no one-size-fits-all method or easy differentiation available. Thus, it is insufficient for teachers to have a clear conception of rubrics’ potential for learning; students’ appreciation of rubrics and their learning needs must also be prioritized.

 

The implications surrounding rubrics do not end with formal education. One matter of great concern is what happens when students leave the education system and rubrics are gone. In the workplace, rubrics are rarely utilized (McKnight et al., 2020), but the departure of rubrics is not the departure of judgment. Standards and expectations remain, but they are unlikely to appear in neat tables complete with grades, and will likely be more subjective given the complexity of workplace contexts. The problem is that knowing how to apply rubrics does not mean students have the skills for “evaluative judgment without a rubric” (Tai et al., 2018, p.476). Therefore, students’ experiences with self-evaluation and rubrics in school must be sufficient to inculcate independent self-judgment. The HOTDs underpinning assessment literacy must be transferrable beyond education so that students can identify standards in their nebulous forms, as well as the authority and tensions within them. To that end, Ajjawi et al. (2018) emphasized the importance of contextual and socio-cultural factors in assessment and proposed creating opportunities to extend students’ evaluative judgment to wider contexts (p.9). Regardless of the solution adopted, attention to this implication is crucial to realizing TSLN’s long-term goals.

 

Conclusion

Although the power imbalance between students and authority is inherent and impossible to eradicate—and complete eradication would itself be problematic—such imbalance need not cripple TSLN. The reality is that students (and teachers) are unnecessarily oppressed when “judgment” is abused. That students’ voices are given little, if any, attention in Singapore’s education system, and that reforms are “done to [students] rather than with [students]” (Ab Kadir, 2019, p.565), is horrific. To reconceive stakeholders’ power dynamics, the imperfect nature of assessment and rubrics should be recognized and negotiated. Specifically, instead of presenting rubrics as absolute representatives of authority that reign over students—a presentation that contradicts TSLN’s aims of subjectification and sustainability—students can be empowered to make meaningful decisions. In doing so, assessment’s role in epistemic justice is acknowledged and students come a step closer to becoming “agentic individuals” who are ready to articulate their own visions of learning (Tai et al., 2018, p.477).

 

References 

Ab Kadir, M. A. (2019). Singapore’s educational policy through the prism of student voice: recasting students as co-agents of educational change and ‘disrupting’ the status quo? Journal of Education Policy, 34(4), 547-576. https://doi.org/10.1080/02680939.2018.1474387

 

Ajjawi, R., Tai, J., Dawson, P., & Boud, D. (2018). Conceptualising evaluative judgement for sustainable assessment in higher education. In Developing Evaluative Judgement in Higher Education (pp. 7-17). Routledge.

 

Ashby-King, D. T., Iannacone, J. I., Ledford, V. A., Farzad-Phillips, A., Salzano, M., & Anderson, L. B. (2021). Expanding and constraining critical communication pedagogy in the introductory communication course: A critique of assessment rubrics. Communication Teacher, 1-17. https://doi.org/10.1080/17404622.2021.1975789

 

Bearman, M., & Ajjawi, R. (2018). From “seeing through” to “seeing with”: Assessment criteria and the myths of transparency. Frontiers in Education, 3(96), 1-8. https://doi.org/10.3389/feduc.2018.00096

 

Biesta, G. (2009). Good education in an age of measurement: On the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability, 21(1), 33-46. https://doi.org/10.1007/s11092-008-9064-9

 

Boud, D. (1995). Assessment and learning: Contradictory or complementary? In Assessment for Learning in Higher Education (pp. 35-48). https://teacamp.vdu.lt/pluginfile.php/2910/mod_resource/content/1/UA/Assessment_and_learning_contradictory_or_complementary.pdf

 

Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151-167. https://doi.org/10.1080/713695728

 

Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400-413. https://doi.org/10.1080/02602938.2015.1018133

 

Curriculum Planning and Development Division (CPDD). (2019). Band Descriptors for Lower Secondary Literature Assessment of Set Text (Express). Ministry of Education.

 

Deng, Z., & Gopinathan, S. (2016). PISA and high-performing education systems: Explaining Singapore’s education success. Comparative Education, 52(4), 449-472. http://dx.doi.org/10.1080/03050068.2016.1219535

 

Eisner, E. (2002). Chapter 4: The three curricula that all schools teach. In E. Eisner, The Educational Imagination: On the Design & Evaluation of School Programs (pp. 87-107). New Jersey: Pearson Education.

 

Gopinathan, S. (2007). Globalisation, the Singapore developmental state and education policy: A thesis revisited. Globalisation, Societies and Education, 5(1), 53-70. https://doi.org/10.1080/14767720601133405

 

Gopinathan, S. (2015). Singapore Chronicles: Education. Singapore: Institute of Policy Studies and the Straits Times.

 

Klenowski, V., & Wyatt-Smith, C. (2013). Why teachers need to understand standards. In Assessment for Education: Standards, Judgement and Moderation (pp.9-28). Sage.

 

Kohn, A. (2006). The trouble with rubrics. English Journal, 95(4), 12-15. https://doi.org/10.2307/30047080

 

Matshedisho, K. R. (2020). Straddling rows and columns: Students’ (mis)conceptions of an assessment rubric. Assessment & Evaluation in Higher Education, 45(2), 169-179. https://doi.org/10.1080/02602938.2019.1616671

 

McKnight, L., Bennett, S., & Webster, S. (2020). Quality and tyranny: competing discourses around a compulsory rubric. Assessment & Evaluation in Higher Education, 45(8), 1192-1204. https://doi.org/10.1080/02602938.2020.1730764

 

Molloy, E., & Bearman, M. (2019). Embracing the tension between vulnerability and credibility: ‘intellectual candour’ in health professions education. Medical Education, 53(1), 32-41. https://doi.org/10.1111/medu.13649

 

Ministry of Education. (2019). Literature in English Syllabus: Lower and Upper Secondary. https://www.moe.gov.sg/-/media/files/secondary/syllabuses/eng/2019literatureinenglishsyllabusloweranduppersecondary.pdf?la=en&hash=C5756A2A2E90E1391931ABD4AD445081A5DBFE5B

 

MOE. (2021, Oct 18). Our Mission and Vision. Ministry of Education, Singapore. https://www.moe.gov.sg/about-us/our-mission-and-vision

 

Nieminen, J. H., & Lahdenperä, J. (2021). Assessment and epistemic (in)justice: how assessment produces knowledge and knowers. Teaching in Higher Education, 1-18. https://doi.org/10.1080/13562517.2021.1973413

 

Norruddin, N. (2018, May 7). Thinking Schools, Learning Nation. Singapore Infopedia. https://eresources.nlb.gov.sg/infopedia/articles/SIP_2018-06-04_154236.html

 

Panadero, E., & Jonsson, A. (2020). A critical review of the arguments against the use of rubrics. Educational Research Review, 30(100329), 1-19. https://doi.org/10.1016/j.edurev.2020.100329

 

Pui, P., Yuen, B., & Goh, H. (2021). Using a criterion-referenced rubric to enhance student learning: a case study in a critical thinking and writing module. Higher Education Research & Development, 40(5), 1056-1069. https://doi.org/10.1080/07294360.2020.1795811

 

Sadler, D. R. (2014). The futility of attempting to codify academic achievement standards. Higher Education, 67(3), 273-288. https://doi.org/10.1007/s10734-013-9649-1

 

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76(3), 467-481. https://doi.org/10.1007/s10734-017-0220-3

 

Tan, C. (2017). The Enactment of the Policy Initiative for Critical Thinking in Singapore Schools. Journal of Education Policy, 32 (5), 588–603. https://doi.org/10.1080/02680939.2017.1305452

 

Torrance, H. (2007). Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in postsecondary education and training can come to dominate learning. Assessment in Education, 14(3), 281-294. https://doi.org/10.1080/09695940701591867

 

Yan, Z., & Boud, D. (2021). Conceptualising assessment-as-learning. In Assessment as Learning (pp. 11-24). Routledge.