
Understanding Technology-Enhanced Feedback Practices: Insights From Singapore

by Tan Si Hua

Introduction

Assessment feedback in K-12 education, essential for learning, has increasingly been mediated by advanced technologies. Recent shifts in how the feedback process is conceptualised have placed greater emphasis on student engagement with and use of feedback, rather than solely on the nature of feedback generated and administered by teachers, recognising that ultimately only students can use feedback for learning. Alongside advances in educational technology, teachers are now expected to adapt their understanding and practices of feedback to this evolving digital environment.

To this point, there has been a paucity of studies examining the interplay between evolving conceptions of feedback and teachers’ use, skilful or otherwise, of learning technologies designed to mediate feedback and create ‘feedback-rich environments’ (Henderson et al., 2019b). This research gap is noteworthy, especially given that technology is regarded by prominent scholars and organisations as a core feature of contemporary feedback practices (e.g., Carless & Winstone, 2020; OECD, 2021), holding potential for merging learning and assessment with technology.

With the Ministry of Education (MOE) in Singapore consistently advocating the integration of technology to improve feedback practices in schools, there is a need to examine closely how such technology is applied to enhance assessment feedback practices. This will enable researchers, practitioners, and policymakers to begin to understand how technology-enhanced feedback (TEF) supports feedback practices and how context influences teachers’ use of TEF. Accordingly, this research uses Singapore as a case study to investigate how technology may be used as an approach to assessment feedback in primary school classrooms, focusing on supporting young learners’ engagement with and use of assessment feedback. The findings are expected to provide valuable insights and practical applications for similar educational contexts globally, contributing to the discourse on the intersection of assessment and educational technology in K-12.

Assessment Feedback in Singapore Schools

Regarded by leading educational researchers as one of the most powerful influences on learning and achievement (Hattie & Timperley, 2007), assessment feedback discourses and practices have drawn much attention globally for close to two decades (Klenowski, 2019). Drawing on Ramaprasad’s (1983) seminal definition that feedback ‘is information about the gap between the actual level and reference level of a system parameter, which is used to alter the gap in some way’ (p. 4), one common way feedback is understood in the field of assessment is as any information provided by an agent (e.g., teacher, peer, self) to a learner regarding aspects of the learner’s performance or understanding, aimed at identifying and narrowing or closing learning gaps (Carless & Boud, 2018; Lipnevich, Berg, & Smith, 2016; Sadler, 1989).

In Singapore, assessment feedback has gained particular prominence since the introduction of the PERI Holistic Assessment journey in all primary schools more than a decade ago. Introduced in response to challenges posed by the high-stakes, examination-focused assessment approach, it promotes a balanced school-based assessment system that provides constructive feedback, enabling more meaningful learning in support of learners’ development in their formative years.

Like other Asian societies with Confucian-heritage cultures, such as Hong Kong, that use examinations as certification and sorting mechanisms (Biggs, 1996), Singapore’s education system has often been characterised as examination-oriented (Lim-Ratnam, 2013). From primary school onwards, admission to each level of education is determined almost solely by how individual learners perform in high-stakes national examinations, creating an artificial scarcity of success (Stiggins, 1995) and exerting pressure on students and teachers to deliver results in every school assessment preceding the Primary School Leaving Examination at the end of Primary Six (Ratnam-Lim & Tan, 2015). The competitive stakes of national examinations have considerably remoulded the curriculum (Tan, 2011), with apparent effects that include an ‘education for earning, not learning’ (Lee, 1991, p. 227) curriculum that emphasises grades and content mastery over holistic development and learning, producing adverse effects on students’ current and future learning (Tan, 2011).

In recent years, the role of assessment feedback has been further emphasised by the MOE in “strengthen(ing) students’ intrinsic motivation to learn, and help(ing) them become more self-directed in learning” (MOE, 2020c), against the backdrop of the restructuring of school-based assessments in Singapore schools (e.g., the removal of Mid-Year Examinations for Middle and Upper Primary levels from 2023), as part of its efforts to move further away from an overemphasis on academic results under its latest Learn for Life movement (MOE, 2020c). Undergirded by the MOE’s curriculum philosophy that “Feedback lies at the heart of formative assessment in interpreting information from assessment, and adapting instructional practices accordingly, to address student learning gaps and improve teaching practices” (STP, 2020), “providing and engaging learners in feedback for students to improve learning” (STP, 2020) is viewed as an important aspect of assessment in Singapore schools.

Understandably, teachers in Singapore generate and administer a substantial amount of feedback each day, devoting almost twice as much time to this task as their OECD counterparts (OECD, 2013; 2018). However, engaging learners in feedback effectively involves more than simply providing personalised feedback with actionable steps for closing learning gaps. The task is particularly challenging for teachers who must provide comprehensive feedback to all forty-two or more learners in larger classes, a common scenario in Confucian Heritage Culture countries like Singapore, especially when quick feedback is needed for learners to act on it in subsequent tasks.

A central insight from the assessment feedback literature, however, is that framing feedback as a one-way, cognitivist transmission of performance-related information imposed on learners, positioning them as passive recipients who may or may not use it to improve their work, has been shown to be necessary but insufficient for fulfilling its purported aspiration of significantly influencing student learning (Boud & Molloy, 2013; Carless, 2015). Such a cognitivist conceptualisation perceives feedback as a product (Chong, 2018), specifically as information to be conveyed to students, for example on the errors to be corrected or the accuracy of student understanding (Gipps & Simpson, 2014). This aligns with how learning, from a cognitivist standpoint, is understood as the acquisition of knowledge, the development of internal mental structures, and individual sense-making (Penuel & Shepard, 2016). Assessment feedback scholars (e.g., Thurlings, Vermeulen, Bastiaens & Stijnen, 2013) depict cognitivist feedback processes in a linear manner: the teacher gives feedback, which is then processed by the student and finalised in particular learning outcomes.

In recent years, there has been a significant shift in how the literature views feedback, moving away from the traditional “old paradigm” of cognitivist, transmission-oriented feedback processes towards a “new paradigm” of social constructivist, learner-centred feedback processes (Carless, 2015, p. 196). The new paradigm emphasises dialogic feedback practices that support learners’ engagement with and use of feedback, in which learners are actively involved in making sense of information from various sources, such as peers and computer-based systems, and using it to enhance their work or learning strategies in immediate learning tasks and for future learning (Carless & Boud, 2018).

From a social constructivist standpoint, feedback is conceptualised as an interactive, cyclical process (Chong, 2018). Prior knowledge serves as the starting point, and external feedback from various sources, such as teachers, peers, and computer-based systems, is intended to enhance, contradict, or complement the student’s current understanding of the task and learning process. Through dialogic interactions, students can verify, expand upon, or reorganise the information stored in their memory. This information includes thoughts about the learning task, material, self-ability, and the relationship between their current state of learning and performance and external targets, criteria, and reference points. This facilitates judgment-making and subsequent action-taking to identify and close learning gaps (Nicol & Macfarlane-Dick, 2006). This process aligns with social constructivism, which involves creating meaning from experience and is negotiated through interaction with sources of information (Penuel & Shepard, 2016).

Technology as an Approach for Dialogic Feedback

Since the widespread proliferation of the internet shortly after the turn of the century, web portals, online discussion forums, and learning management systems have offered teachers opportunities to interface directly with educational technology and deliver feedback to students even more quickly, giving rise to a growing body of studies on Technology-Enhanced Assessment (TEA) and Technology-Enhanced Feedback (TEF), especially within higher education (Carless & Winstone, 2020). Situated within this paradigm shift in assessment feedback conceptions, technology has been acknowledged by assessment feedback researchers (e.g., Deeley, 2018; Hepplestone, Holden, Irwin, Parkin, & Thorpe, 2011) as an important strategy for supporting students’ engagement with and use of feedback.

TEA may be understood as assessment of, for, and as learning, where the use of technology serves to improve both assessment outcomes and experiences (Jordan, 2013). In this paper, ‘Technology-Enhanced Feedback’ (TEF) is adopted as the term that most accurately represents the role of technology in feedback processes. This aligns with insights from researchers in the field, such as Munshi and Deneen (2018), who observed that while numerous studies have positioned technology as a crucial enabler, suggesting it is indispensable to feedback mechanisms, many feedback methods thought to require technology can in fact be implemented effectively without it. A full review of TEF is beyond the scope of this paper for reasons of space; instead, this section provides background to situate the discussion by identifying and examining the types of TEF reported to enhance feedback practices and by discussing ways forward.

 

Research has shown that TEF can effectively enhance the way feedback is captured, shared, and preserved, utilising diverse formats such as audio, video, screencasts, and text delivered to students via platforms like Learning Management Systems without modifying the original content. Examples include teachers recording and uploading their feedback for student assignments online (Crook et al., 2012; Marriott & Teoh, 2012; McCarthy, 2015; Phillips, Henderson & Ryan, 2016; West & Turner, 2016) or giving written feedback in virtual collaborative spaces (Coll, Rochera, Gispert & Barriga, 2013).

Building upon these foundational methods, the integration of Artificial Intelligence (AI) has brought a more advanced dimension to TEF, contributing to the autonomous generation and delivery of feedback. Facilitated by tools such as automated grading systems and intelligent tutoring systems, students’ answers to, for example, multiple-choice questions or short responses are analysed, and suitable feedback is then generated or selected from a predefined pool (Jordan, 2013). More advanced TEF employs Natural Language Processing[1] and Optical Character Recognition[2] techniques to provide feedback on multimodal student submissions, such as essays and scientific modelling (Chodorow, Gamon & Tetreault, 2010; Jordan, 2012), and on more complex constructs such as critical reading fluency (e.g., Jonathan, Koh, Caleon & Tay, 2017).
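To make the basic mechanism concrete, the sketch below (a hypothetical, simplified Python example; the quiz item, options, and feedback texts are invented for illustration and do not come from any system cited here) shows the select-from-a-predefined-pool logic that underpins many automated grading systems. NLP-driven systems replace the simple option lookup with analysis of free-text responses.

```python
# Hypothetical sketch of "select feedback from a predefined pool".
# The item, options, and feedback texts below are invented for illustration.

FEEDBACK_POOL = {
    "q1": {  # item_id -> selected option -> feedback message
        "A": "Correct. Heating the water speeds up evaporation.",
        "B": "Not quite. Re-read the section on how temperature affects evaporation.",
        "C": "Check the difference between evaporation and condensation, then try again.",
    },
}


def automated_feedback(item_id: str, selected_option: str) -> str:
    """Return the predefined feedback matching a student's selected option."""
    return FEEDBACK_POOL.get(item_id, {}).get(
        selected_option, "No feedback is available for this response."
    )


if __name__ == "__main__":
    print(automated_feedback("q1", "B"))
    # -> "Not quite. Re-read the section on how temperature affects evaporation."
```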

The reported benefits of TEF, albeit mostly self-reported and situated in higher education contexts, have been largely positive and can be seen as aligning with the conceptual framework proposed by Carless and Winstone (2020), which spans three interrelated dimensions crucial for enhancing dialogic feedback practices: (1) facilitating the timely and convenient provision of feedback information to encourage student engagement with and uptake of feedback (Design); (2) promoting interaction and closeness so that students can exercise agency, seek help and reflect, by attending sensitively to the communicational and relational aspects of feedback (Relational); and (3) tackling the practicalities of managing time and workload in assessment feedback processes while balancing the demands of grading with the goals of learning (Pragmatic).

Design – Studies have reported that teachers’ use of TEF systems to provide automated feedback, such as hints and questions, supports them in giving timely feedback and addressing misconceptions. The use of TEF in these studies has been shown to improve student performance, particularly among lower-progress and special education students who benefited from timely and repeated feedback as reinforcement (Dai, Gu, & Zhu, 2023; Edtstadler & Ebner, 2018). Notably, research such as that by Kickmeier-Rust, Hillemann and Albert (2014) indicates that students receiving TEF not only exhibit a marked reduction in errors but also develop a deeper understanding and better retention of concepts.

Further, teachers in these studies used TEF systems featuring tools such as leaderboards, real-time quizzes, and adaptive challenges to give learners real-time feedback that helps them monitor and review their learning goals and adjust their strategies (Burkhard, Seufert, Cetto, & Handschuh, 2022). Both teachers and students report that these TEF tools are effective not only in reinforcing learning concepts but also in strengthening students’ self-regulatory strategies and their ability to manage cognitive processes, thereby fostering independent learning skills and self-efficacy.

Relational – Research has reported that students perceive TEF that includes audio or video elements as being of higher quality than typed comments alone (Mahoney, Macfarlane & Ajjawi, 2019). TEF, especially in video format, has been self-reported to foster greater student engagement and rapport-building than written comments, even in asynchronous settings. This approach, which positions teachers as conversing with rather than commenting on students, has been found to contribute significantly to a sense of belonging and involvement (Borup, West, Thomas & Graham, 2014), positively impacting motivation and self-efficacy (Ajjawi, Kent, Broadbent, Tai, Bearman & Boud, 2022). Further, Wood (2022) noted that such feedback allows students to perceive their teachers as more caring and involved in their learning. The use of voice, tone, rhythm, and facial expressions further personalises the experience, simulating one-on-one interaction and fostering a sense of closeness (Thomas, West & Borup, 2017).

Furthermore, studies report that teachers utilise AI capabilities, particularly chatbots and teachable agents in immersive environments, to enhance students’ receptivity to feedback, addressing the tendency of students to react defensively to critical feedback or low grades (Carless & Boud, 2018; Silvervarg, Wolf, Blair, Haake, & Gulz, 2021). Studies consistently demonstrate that such TEF not only improves completion rates but also boosts self-efficacy, especially among lower-achieving and special needs students (Pan & Liu, 2022; Zhi, Lytle, & Price, 2018). Automated conversational feedback has been particularly effective in reducing anxiety and frustration, notably among girls in STEM fields, thereby enhancing overall student engagement (Chen, Koong, & Liao, 2022; Kim, Knowles, Scianna, & Ruipérez-Valiente, 2023; Rajendran, Iyer, & Murthy, 2018).

Pragmatic – Studies have reported that TEF’s benefits include the convenient and rapid creation, storage, and access of feedback artifacts, which enhance the experience for both learners and teachers (e.g., Barry, 2012; Cann, 2014; Henderson & Phillips, 2014). For instance, teachers have reported time savings when using audio-visual modalities for TEF (Barry, 2012). Additionally, automated TEF systems such as Automated Essay Scoring allow teachers to use initial machine assessments as a foundation on which to build more tailored guidance and insights. This approach leverages technology to make the feedback process both more efficient and more effective, ensuring that feedback is not only timely but also meaningful and relevant to students’ learning goals (Zhao, Zhuang, Zou, Xie, & Yu, 2023; Carpenter, Geden, Rowe, Azevedo & Lester, 2020).
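As a rough illustration of this workflow (a hypothetical Python sketch, not any particular Automated Essay Scoring product; the scoring threshold, comments, and class names are invented), the automated system supplies a draft score and generic comment, and the teacher layers tailored guidance on top before the feedback reaches the student:

```python
# Hypothetical sketch: an automated score and generic comment act as the
# foundation, and the teacher adds tailored guidance before release.
from dataclasses import dataclass


@dataclass
class FeedbackRecord:
    automated_score: float     # e.g. produced by an automated essay scoring system
    automated_comment: str     # generic, machine-selected comment
    teacher_comment: str = ""  # tailored guidance added by the teacher


def draft_automated_feedback(score: float) -> FeedbackRecord:
    """Turn a machine-generated score into a draft feedback record."""
    comment = (
        "Well-organised response; extend your examples in the next draft."
        if score >= 0.7
        else "Revisit paragraph structure and add supporting evidence."
    )
    return FeedbackRecord(automated_score=score, automated_comment=comment)


# The teacher builds on the automated draft rather than starting from scratch.
record = draft_automated_feedback(0.62)
record.teacher_comment = (
    "Your second paragraph has a strong idea; link it back to the question."
)
print(record)
```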

Furthermore, technology has been reported to aid feedback delivery during collaborative learning, in both face-to-face and online large-classroom settings. TEF tools, such as learning dashboards and learning companions that offer instant hints and guiding questions, have been deployed to mitigate cognitive overload and facilitate group work, particularly in scenarios where direct teacher involvement is challenging. Teachers have observed that features like automated prompts and guiding questions in TEF not only help students overcome learning obstacles, particularly in large classes, but also assist teachers in managing their workload in traditional classroom settings (VanLehn et al., 2021).

Although studies have consistently highlighted the benefits of TEF, they also underscore the nuanced contextual and personal factors influencing its enactment by teachers. This aligns with Gravett’s (2022) sociomaterial lens, which advocates that feedback enactment should be understood within contested and complex social and material settings that embed actors and structures, including technology, power, space, time, and context.

From a technology standpoint, teachers in studies have reported that TEF systems, particularly those incorporating AI in intelligent tutoring systems or adaptive platforms, are perceived as inflexible. This rigidity restricts their ability to customise feedback to meet the specific needs of students. Since these systems are primarily designed by educational technologists, teachers often find it challenging to understand and trust how the algorithms function and how automated feedback is generated (Deeva, Bogdanova, Serral, Snoeck, & De Weerdt, 2021). Furthermore, while AI systems in TEF are generally proficient at assessing students’ knowledge mastery, they are less capable of accommodating the subtleties of individual students’ characteristics and personalities, which are crucial for student engagement and effective use of feedback. A notable example of this limitation is AI’s approach to pairing students for peer feedback: teachers have reported that AI predominantly considers learners’ readiness and performance in recommending pairings and tends to overlook the social dynamics and personal traits that are crucial for effective collaboration. This highlights a significant limitation in current AI systems’ ability to comprehend the nuances of human interaction in peer learning contexts (Lawrence, Echeverria, Yang, Aleven & Rummel, 2023).
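The pairing limitation can be seen in a minimal sketch (hypothetical Python; the names, scores, and pairing rule are invented and are not drawn from the systems cited above): when readiness or performance scores are the only inputs, the algorithm has nothing else to optimise for, so social dynamics simply never enter the decision.

```python
# Hypothetical sketch of performance-only pairing for peer feedback:
# the strongest remaining student is paired with the weakest, and nothing in
# the rule considers friendship groups, confidence, or how well two particular
# students actually work together.
from typing import List, Tuple


def pair_by_performance(students: List[Tuple[str, float]]) -> List[Tuple[str, str]]:
    """Pair highest-scoring with lowest-scoring students, ignoring social factors."""
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    pairs = []
    while len(ranked) >= 2:
        strongest = ranked.pop(0)
        weakest = ranked.pop()
        pairs.append((strongest[0], weakest[0]))
    return pairs


if __name__ == "__main__":
    scores = [("Mei", 92.0), ("Arun", 58.5), ("Wei Jie", 74.0), ("Siti", 66.0)]
    print(pair_by_performance(scores))  # [('Mei', 'Arun'), ('Wei Jie', 'Siti')]
```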

Studies have highlighted that “technostress”, defined by Brod (1984) as the stress or negative psychological impact resulting from a lack of competencies or the struggle to cope with new computer technologies, poses a barrier to teachers implementing educational technologies as part of their instructional strategies, including assessment feedback. Dong, Xu, Chai, and Zhai (2020) noted that the rapid evolution of technology may leave teachers feeling overwhelmed, leading to anxiety, mental fatigue, and scepticism, and resulting in diminished motivation to use such technologies. Additionally, external pressures, such as school policies and institutional expectations, further exacerbate technostress, complicating the adoption of TEF. For example, Eteokleous (2008) highlighted that teachers’ perception of a heavy curriculum can act as a barrier to integrating new instructional strategies, including the use of assessment feedback. Similarly, Porter and Graham (2016) pointed out that the absence of robust infrastructure, adequate technological support, and pedagogical assistance poses significant barriers to effective TEF enactment.

Technology-Enhanced Feedback in Singapore Classrooms: A Silver Bullet?

Technology as an Approach to Assessment Feedback in Singapore Schools

In recent years, there has been a notable acceleration in the Ministry of Education’s (MOE) push for Singapore schools to leverage technology to enhance assessment feedback practices, alongside its existing system-wide EdTech Plan (previously known as the ICT Masterplan), which has driven sustained educational technology enactment as part of the Thinking Schools, Learning Nation movement since 1997 (Ng, 2017). The Minister for Education, Chan Chun Sing, has, since assuming office in 2021, consistently emphasised in his speeches (e.g., MOE, 2021a, 2021b, 2021c, 2022a, 2022b, 2022c, 2022d) the MOE’s commitment to harnessing TEF to cater more effectively to learners’ diversity and to manage teachers’ workloads.

For instance, the MOE stated that “there is a limit to how much teachers can customise the learning experiences or provide continuous and detailed feedback for every student” (Smart Nation & MOE, 2019, p. 7). In response, the MOE has been progressively rolling out TEF, such as AI-enhanced automated marking systems, through its nationwide Student Learning Space (SLS) portal. Hopeful of the promises that technology has to offer, the MOE has also mandated Blended Learning as a regular feature in all secondary schools from 2020, with trials underway in primary schools, seven years ahead of its original target, in line with post-COVID-19 global economic and digital trends (PMO, 2020). The MOE has urged schools to leverage TEF, such as SLS online quizzes, to facilitate both home-based and in-school learning and assessment feedback processes, in tandem with one of the priority areas of SkillsFuture for Educators, which emphasises engaging students with TEF and motivating them to reflect on their own learning with it.

But how exactly does TEF afford feedback practices that promote young learners’ engagement with and use of feedback in Singapore primary school classrooms? And how does Singapore’s educational context influence the ways in which teachers prioritise the use of TEF to promote primary school students’ learning?

To date, no known methodologically sound research has critically examined how teachers enact TEF to support feedback practices for young learners in the context of Singapore’s primary schools. This gap in understanding is not insignificant: finding optimal ways for schools to reap the benefits of TEF is difficult without first understanding how TEF affords feedback practices and how teachers enact TEF to support learning and assessment. The lack of detailed exploration of TEF’s enactment in Singapore schools (what it looks like, how it takes place, and under what conditions) creates ambiguity in finding optimal ways to enact and disseminate good TEF practices.

Currently, teachers in Singapore are reported to face difficulties in prioritising formative uses of assessment in schools. While research and policy articulate a vision of TEF that focuses on meaningful outcomes and on empowering teachers and students, assessment feedback scholars have highlighted the significant influence of deeply embedded socio-cultural systems in dictating assessment and curricula through their control over external examinations, limiting the autonomy of educational practices in individual schools and among teachers (Tan & Deneen, 2015).

Teachers in Singapore have reported finding it challenging to reconcile the dilemma between helping students perform well in tests and high-stakes examinations and integrating new assessment practices in their classrooms (Leong & Tan, 2014). Additionally, teachers tend to utilise assessment feedback in limited ways, often restricted to correcting or highlighting mistakes and students’ weaknesses, or re-teaching the material. The focus on preparing students for high-stakes summative assessments remains a dominant feature of the assessment landscape in Singapore schools (Deneen et al., 2019; Tan, 2022).

In the context of Singapore’s ongoing educational technology reforms, scholars such as Chua and Chai (2019) and Koh, Chai, and Tay (2014), who are well established in Singapore’s educational technology field, have observed resistance among teachers to adopting new innovations or practices. Studies found that teachers’ reluctance is often due to a misalignment with their existing instructional beliefs, which are heavily influenced by cultural/institutional and physical/technological contextual factors (Chua & Chai, 2019). Without this alignment, teachers tend to revert to traditional teaching methods, even when committed to new pedagogical initiatives or reforms. Correspondingly, scholars in both Assessment for Learning and educational technology have argued for reforms that address the larger, systemic socio-economic and political realities.

Making Sense of the Technology-Enhanced Feedback Conundrum in Singapore Schools

Adopting Carless and Winstone’s (2020) conceptual framework of the three inter-related dimensions of technology in enabling feedback processes, viewed through a sociomaterial lens (Gravett, 2022), this proposed study uses Singapore as a case to examine the following inter-related research questions:

  • RQ1: How do TEF processes address the three dimensions of feedback practices (i.e., Design, Relational and Pragmatic) to support students’ engagement with and use of feedback in Singapore primary classrooms?
  • RQ2: To what extent are the three dimensions (i.e., Design, Relational and Pragmatic) of technology prioritised in feedback practices to support students’ engagement with and use of feedback in Singapore primary classrooms?
  • RQ3: What factors drive teachers’ considerations behind the prioritisation of this/these dimension(s)?

The research focus is important in the fields of Assessment Feedback, Educational Technology, and Teacher Learning on both policy and scholarly grounds. In terms of policy, curriculum reforms that harness technological affordances in assessment feedback make it essential to develop effective ways of translating classroom practices toward reform goals. From a scholarly perspective, it is important not only to empirically characterise the affordances of TEF implemented in schools under the influence of reform policy, but also to verify assumptions about the use of technology in assessment feedback practices (e.g., the efficacy of TEF in supporting learners’ engagement with and use of feedback, and the conditions enabling or constraining TEF enactment). The findings have the potential to be applicable to other educational systems in similar contexts navigating assessment and technological reforms.

[1] Natural Language Processing is a technology that enables computers to understand human language, encompassing the capability to process and analyse both spoken and written language (Chowdhary & Chowdhary, 2020).

[2] Optical Character Recognition (OCR) is a technology that converts different types of documents, like scanned papers and digital images, into editable and searchable text (Smith, 2020).

References

Ajjawi, R., Kent, F., Broadbent, J., Tai, J. H.-T., Bearman, M., & Boud, D. (2022). Feedback that works: A realist review of feedback interventions for written tasks. Studies in Higher Education, 47(7), 1343–1356.

Barry, S. (2012). A video recording and viewing protocol for student group presentations: Assisting self-assessment through a Wiki environment. Computers & Education, 59(3), 855–860.

Beaumont, C., O’Doherty, M., & Shannon, L. (2011). Reconceptualising assessment feedback: A key to improving student learning? Studies in Higher Education, 36(6), 671–687.

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364.

Borup, J., West, R. E., Thomas, R. A., & Graham, C. R. (2014). Examining the impact of video feedback on instructor social presence in blended courses. The International Review of Research in Open and Distributed Learning, 15(3).

Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.

Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712.

Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413.

Brod, C. (1984). Technostress: The human cost of the computer revolution. Boston: Addison Wesley Publishing Company.

Burkhard, M., Seufert, S., Cetto, M., & Handschuh, S. (2022). Educational Chatbots for Collaborative Learning: Results of a Design Experiment in a Middle School. International Association for Development of the Information Society.

Cann, A. (2014). Engaging students with audio feedback. Bioscience Education, 22(1), 31–41.

Carless, D. (2015). Excellence in university assessment: Learning from award-winning practice. Routledge.

Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.

Carless, D., & Winstone, N. (2020). Teacher feedback literacy and its interplay with student feedback literacy. Teaching in Higher Education, 1-14.

Carpenter, D., Geden, M., Rowe, J., Azevedo, R., & Lester, J. (2020). Automated analysis of middle school students’ written reflections during game-based learning. In Artificial Intelligence in Education: 21st International Conference, AIED 2020, Ifrane, Morocco, July 6–10, 2020, Proceedings, Part I 21 (pp. 67-78). Springer International Publishing.

Chen, C. H., Koong, C. S., & Liao, C. (2022). Influences of integrating dynamic assessment into a speech recognition learning design to support students’ English speaking skills, learning anxiety and cognitive load. Educational Technology & Society, 25(1), 1-14.

Chua, C. S., & Chai, C. S. (2019). Information communication technology (pp. 149-168). Springer International Publishing.

Chodorow, M., Gamon, M., & Tetreault, J. (2010). The utility of article and preposition error correction systems for English language learners: Feedback and assessment. Language Testing, 27(3), 419–436.

Chong, S. W. (2018). Reconsidering student feedback literacy from an ecological perspective. Assessment & Evaluation in Higher Education, 46(1), 92–104.

Chowdhary, K., & Chowdhary, K. R. (2020). Natural language processing. Fundamentals of artificial intelligence, 603-649.

Coll, C., Rochera, M. J., de Gispert, I., & Barriga, F. D. (2013). Distribution of feedback among teacher and students in online collaborative learning in small groups. Digital Education Review, 23, 27–45.

Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., … & Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers & Education, 58(1), 386–396.

Dai, J., Gu, X., & Zhu, J. (2023). Personalized Recommendation in the Adaptive Learning System: The Role of Adaptive Testing Technology. Journal of Educational Computing Research, 61(3), 523-545.

Deneen, C. C., Brown, G. T. L., & Carless, D. (2017). Students’ conceptions of eportfolios as assessment and technology. Innovations in Education and Teaching International, 1–10. Online prepublication. Retrieved from http://dx.doi.org/10.1080/14703297.2017.1281752

Deneen, C. C., Fulmer, G. W., Brown, G. T., Tan, K., Leong, W. S., & Tay, H. Y. (2019). Value, practice and proficiency: Teachers’ complex relationship with assessment for learning. Teaching and Teacher Education, 80, 39–47.

Deeley, S. J. (2018). Using technology to facilitate effective assessment for learning and feedback in higher education. Assessment & Evaluation in Higher Education, 43(3), 439–448.

Deeva, G., Bogdanova, D., Serral, E., Snoeck, M., & De Weerdt, J. (2021). A review of automated feedback systems for learners: Classification framework, challenges and opportunities. Computers & Education, 162, 104094.

Deng, Z., & Gopinathan, S. (2016). PISA and high-performing education systems: Explaining Singapore’s education success. Comparative Education, 52(4), 449–472.

Dong, Y., Xu, C., Chai, C. S., & Zhai, X. (2020). Exploring the structural relationship among teachers’ technostress, technological pedagogical content knowledge (TPACK), computer self-efficacy and school support. The Asia-Pacific Education Researcher, 29, 147–157.

Ebner, M., Edtstadler, K., & Ebner, M. (2018). Tutoring writing spelling skills within a web-based platform for children. Universal Access in the Information Society, 17, 305–323.

Gravett, K. (2022). Feedback literacies as sociomaterial practice. Critical Studies in Education, 63(2), 261–274.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

Hattie, J. (2009). The black box of tertiary assessment: An impending revolution. In Tertiary assessment & higher education student outcomes: Policy, practice & research (pp. 259–275).

Henderson, M., & Phillips, M. (2014). Technology enhanced feedback on assessment. In Australian Computers in Education Conference, Adelaide, SA, September 30–October 3, 2014 (pp. 1–11). Retrieved from http://acec2014.acce.edu.au/session/technology-enhanced-feedback-assessment

Hepplestone, S., Holden, G., Irwin, B., Parkin, H., & Thorpe, L. P. (2011). Using technology to encourage student engagement with feedback: A literature review. Research in Learning Technology, 19(2), 117–127.

Jonathan, C., Tan, J. P. L., Koh, E., Caleon, I. S., & Tay, S. H. (2017). Enhancing students’ critical reading fluency, engagement and self-efficacy using self-referenced learning analytics dashboard visualizations.

Jordan, S. (2012). Student engagement with assessment and feedback: Some lessons from short-answer free-text e-assessment questions. Computers & Education, 58(2), 818–834.

Jordan, S. (2013). E-assessment: Past, present and future. New Directions, 9(1), 87–106.

Koh, J. H. L., Chai, C. S., & Tay, L. Y. (2014). TPACK-in-Action: Unpacking the contextual influences of teachers’ construction of technological pedagogical content knowledge (TPACK). Computers & Education, 78, 20–29.

Kickmeier-Rust, M. D., Hillemann, E. C., & Albert, D. (2014). Gamification and smart feedback: Experiences with a primary school level math app. International Journal of Game-Based Learning (IJGBL), 4(3), 35-46.

Kim, Y. J., Knowles, M. A., Scianna, J., Lin, G., & Ruipérez‐Valiente, J. A. (2023). Learning analytics application to examine validity and generalizability of game‐based assessment for spatial reasoning. British Journal of Educational Technology54(1), 355-372.

Klenowski, V. (2019). Assessment for learning revisited: An Asia-Pacific perspective.

Leong, W. S., & Tan, K. (2014). What (more) can, and should, assessment do for learning? Observations from ‘successful learning context’ in Singapore. Curriculum Journal, 25(4), 593–619.

Lim-Ratnam, C. (2013). Tensions in defining quality pre-school education: The Singapore context. Educational Review, 65(4), 416–431.

Lipnevich, A. A., Berg, D. A., & Smith, J. K. (2016). Toward a model of student response to feedback. In Handbook of human and social conditions in assessment (pp. 169-185). Routledge.

Marriott, P., & Teoh, L. K. (2012). Using screencasts to enhance assessment feedback: Students’ perceptions and preferences. Accounting Education, 21(6), 583–598.

McCarthy, J. (2015). Evaluating written, audio and video feedback in higher education summative assessment tasks. Issues in Educational Research, 25(2), 153–169.

MOE (2020a). Blended Learning to Enhance Schooling Experience and Further Develop Students into Self-Directed Learners. Retrieved from: https://www.moe.gov.sg/news/press-releases/20201229-blended-learning-to-enhance-schooling-experience-and-further-develop-students-into-self-directed-learners

MOE (2020b). Personal Learning Device. Retrieved from: https://www.moe.gov.sg/news/parliamentary-replies/20201102-personal-learning-device

MOE (2020c). SkillsFuture for Educators. Committee of Supply 2020. Retrieved from: https://www.moe.gov.sg/microsites/cos2020/skillfuture-for-educators.html

MOE (2021a). Speech by Minister for Education Mr Chan Chun Sing at Teachers’ Conference and Excel Fest 2021. Retrieved from: https://www.moe.gov.sg/news/speeches/20210602-speech-by-minister-for-education-mr-chan-chun-sing-at-teachers-conference-and-excel-fest-2021

MOE (2021b) Teachers’ Day Message 2021 by Minister for Education, Mr Chan Chun Sing. https://www.moe.gov.sg/news/speeches/20210902-speech-by-minister-chan-chun-sing-for-teachers-day-celebration-2021

MOE (2021c) Speech by Minister Chan Chun Sing at the Appointment and Appreciation Ceremony for Principals, at the Ministry of Education HQ (Buona Vista). Retrieved from: https://www.moe.gov.sg/news/speeches/20211203-speech-by-minister-chan-chun-sing-at-the-appointment-and-appreciation-ceremony-for-principals-at-the-ministry-of-education-hq-buona-vista

MOE (2022a) MOE FY2022 Committee of Supply Debate Response by Minister for Education Chan Chun Sing. Retrieved from: https://www.moe.gov.sg/news/speeches/20220307-moe-fy2022-committee-of-supply-debate-response-by-minister-for-education-chan-chun-sing

MOE (2022b) Speech by Minister Chan Chun Sing at MOE Year 2022 Main Promotion Ceremony at Resorts World Convention Centre. Retrieved from: https://www.moe.gov.sg/news/speeches/20220428-speech-by-minister-chan-chun-sing-at-moe-year-2022-main-promotion-ceremony-at-resorts-world-convention-centre

MOE (2022c) Speech by Minister Chan Chun Sing at the Opening Ceremony of the Ninth Redesigning Pedagogy International Conference, at the Nanyang Auditorium, Nanyang Technological University. Retrieved from: https://www.moe.gov.sg/news/speeches/20220530-speech-by-minister-chan-chun-sing-at-the-opening-ceremony-of-the-ninth-redesigning-pedagogy-international-conference-at-the-nanyang-auditorium-nanyang-technological-university

MOE (2022d). Opening Address by Minister Chan Chun Sing at Pre-University Seminar 2022 Opening Ceremony on 31 May 2022 at National Junior College. Retrieved from: https://www.moe.gov.sg/news/speeches/20220531-opening-address-by-minister-chan-chun-sing-at-pre-university-seminar-2022-opening-ceremony-on-31-may-2022-at-national-junior-college

Mahoney, P., Macfarlane, S., & Ajjawi, R. (2019). A qualitative synthesis of video feedback in higher education. Teaching in Higher Education, 24(2), 157–179. doi:10.1080/13562517.2018.1471457

Munshi, C., & Deneen, C. C. (2018). Technology-enhanced feedback. In The Cambridge handbook of instructional feedback (pp. 335-356). Cambridge University Press.

Ng, P. T. (2017). Learning from Singapore: The power of paradoxes. Routledge.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.

OECD (2013). The OECD Teaching and Learning International Survey (TALIS) – 2013 Results. Retrieved from: https://www.oecd.org/education/school/talis-2013-results.htm

OECD (2018). The OECD Teaching and Learning International Survey (TALIS) – 2018 Results. Retrieved from: https://www.oecd.org/education/talis/talis-2018-results-volume-i-1d0bc92a-en.htm

OECD (2021). Opportunities and drawbacks of using artificial intelligence for training.

Penuel, W. R., & Shepard, L. A. (2016). Assessment and teaching. In Handbook of research on teaching (5th ed., pp. 787–850).

Pan, Z., & Liu, M. (2022, March). The effects of learning analytics hint system in supporting students problem-solving. In LAK22: 12th International Learning Analytics and Knowledge Conference (pp. 77-86).

Phillips, M., Henderson M., & Ryan, T. (2016). Multimodal feedback is not always clearer, more useful or satisfying. In S. Barker, S. Dawson, A. Pardo, & C. Colvin (Eds.), Show me the learning: Proceedings ASCILITE 2016 Adelaide (pp. 512–522).

Porter, W. W., & Graham, C. R. (2016). Institutional drivers and barriers to faculty adoption of blended learning in higher education. British Journal of Educational Technology, 47(4), 748–762.

Prime Minister Office (2020). A stronger and more cohesive Singapore. Retrieved from: https://www.pmo.gov.sg/Newsroom/National-Broadcast-by-SM-Tharman-Shanmugaratnam-on-17-Jun-2020

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4–13.

Rajendran, R., Iyer, S., & Murthy, S. (2018). Personalized affective feedback to address students’ frustration in ITS. IEEE Transactions on Learning Technologies, 12(1), 87-97.

Ratnam-Lim, C. T. L., & Tan, K. H. K. (2015). Large-scale implementation of formative assessment practices in an examination-oriented culture. Assessment in Education: Principles, Policy & Practice, 22(1), 61–78.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.

Singapore Teaching Practice (2020). Engaging Student in Feedback. Retrieved from: https://pubhtml5.com/sylt/ezcr/basic

Silvervarg, A., Wolf, R., Blair, K. P., Haake, M., & Gulz, A. (2021). How teachable agents influence students’ responses to critical constructive feedback. Journal of Research on Technology in Education, 53(1), 67-88.

Smart Nation (2019). National AI Strategy. Retrieved from: https://www.smartnation.gov.sg/files/publications/national-ai-strategy.pdf

Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77(3), 238.

Tan, K. (2011). Assessment for learning in Singapore: Unpacking its meanings and identifying some areas for improvement. Educational Research for Policy and Practice, 10(2), 91–103.

Tan, K. H. K. (2022). Lessons from a disciplined response to COVID-19 disruption to education: Beginning the journey from reliability to resilience. Assessment in Education: Principles, Policy & Practice, 29(5), 596–611.

Tan, K. H. J. (2014). Assessment feedback practices for enhancing learning. In W. S. Leong, Y. S. Cheng, & K. H. K. Tan (Eds.), Assessment and learning in schools (pp. 129–139). Singapore: Pearson Education South Asia.

Tan, K. H. K., & Deneen, C. C. (2015). Aligning and sustaining meritocracy, curriculum and assessment validity in Singapore. Assessment Matters, 7(1), 31–52.

Thurlings, M., Vermeulen, M., Bastiaens, T., & Stijnen, S. (2013). Understanding feedback: A learning theory perspective. Educational Research Review, 9, 1–15.

Thomas, R. A., West, R. E., & Borup, J. (2017). An analysis of instructor social presence in online text and asynchronous video feedback comments. The Internet and Higher Education, 33, 61–73.

The Economist. (2018a). What other countries can learn from Singapore’s schools. Retrieved from: https://www.economist.com/leaders/2018/08/30/what-other-countries-can-learn-from-singapores-schools

The Economist. (2018b). It has the world’s best schools, but Singapore wants better. Retrieved from: https://www.economist.com/asia/2018/08/30/it-has-the-worlds-best-schools-but-singapore-wants-better

VanLehn, K., Burkhardt, H., Cheema, S., Kang, S., Pead, D., Schoenfeld, A., & Wetzel, J. (2021). Can an orchestration system increase collaborative, productive struggle in teaching-by-eliciting classrooms?. Interactive Learning Environments, 29(6), 987-1005.

West, J., & Turner, W. (2016). Enhancing the assessment experience: Improving student perceptions, engagement and understanding using online video feedback. Innovations in Education and Teaching International, 53(4), 400–410.

Wood, J. (2022). Enabling feedback seeking, agency and uptake through dialogic screencast feedback. Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2022.2089973

World Economic Forum. (2019). Singapore has abolished school exam rankings, and here is why. Retrieved from: https://www.weforum.org/agenda/2018/10/singapore-has-abolished-school-exam-rankings-here-s-why/

Zhi, R., Lytle, N., & Price, T. W. (2018, February). Exploring instructional support design in an educational game for K-12 computing education. In Proceedings of the 49th ACM technical symposium on computer science education (pp. 747-752).

Zhao, R., Zhuang, Y., Zou, D., Xie, Q., & Yu, P. L. (2023). AI-assisted automated scoring of picture-cued writing tasks for language assessment. Education and Information Technologies, 28(6), 7031-706