by Margaret Teoh
Introduction
In Singapore’s primary schools, a densely packed curriculum and rigid timetables often compel teachers to prioritise syllabus coverage over deeper learning. This intensity stems largely from the need to prepare pupils for the high-stakes Primary School Leaving Examination (PSLE), which determines secondary school placement and anchors much of the instructional focus. Although the Primary Education Review and Implementation (PERI) initiative was introduced to promote more holistic forms of assessment and feedback (Leong & Tan, 2014), the dominance of the PSLE continues to sustain a performance-driven culture and exert strong washback on daily practice. Teachers therefore navigate competing demands: while they recognise Assessment for Learning (AfL) as sound pedagogy, limited time and accountability pressures often reduce its application to test preparation and incremental score gains (Deneen et al., 2019, p. 41). Consequently, classroom assessment remains largely examination-oriented, leaving little room for sustained formative engagement or iterative learning.
The question, therefore, is not whether feedback is beneficial, but how it can be systematically embedded within daily instruction without compromising syllabus coverage. This paper examines the structural and cultural constraints shaping current feedback practices and proposes a phased model aligned with the Ministry of Education’s progression from foundational fluency to disciplinary reasoning. The framework operates at two interrelated levels: the macro level, encompassing the year’s curricular sequence (for instance, Primary 4 mathematics), and the meso level, involving the timing, sequencing, and design of pupil response opportunities (Boud & Dawson, 2023, p. 161). Drawing on Bloom’s mastery learning, Vygotsky’s Zone of Proximal Development (ZPD), and cognitive load theory (CLT), the proposed architecture reconceptualises formative feedback as a deliberate design feature rather than an incidental event. It allocates time for consolidation and re-attempt (mastery), structures interaction to foster meaning-making (socio-constructivism), and calibrates task demand to maintain cognitive manageability (CLT), thereby ensuring that each iteration contributes productively to learning progress.
Tension between Formative Feedback and the Singapore Primary Mathematics Curriculum
Singapore’s primary mathematics syllabus is centrally set and content-heavy, with MOE defining the topics for each level (MOE, 2023). Most schools follow a similar scheme of work with brisk pacing, leaving little room for feedback cycles. Teachers must constantly decide between pausing to address misconceptions and keeping to the schedule. With limited time, feedback often becomes brief one-way comments or model answers for pupils to copy. While efficient, this leads to superficial correction and mechanical transcription, leaving some pupils with unresolved gaps. Over time, it fosters a dependent “wait-for-the-answer” culture, reinforcing passivity instead of inquiry.
At the end of Primary 6, all pupils in Singapore sit for the PSLE—a single high-stakes assessment that determines secondary school placement. For many parents, entry into “elite” schools is regarded as an indicator of future success (Kwan, 2023, p. 4). In such an environment, pupils often internalise the belief that only top scores are acceptable, while anything less signals inadequacy or disappointment (Oladele & Ayemu, 2025, p. 7). As Tan (2011) observes, this washback effect encourages pupils to “only learn what they perceive to be relevant to tests,” thereby reducing classroom assessment to a form of exam preparation rather than a tool for learning support (p. 99). Consequently, formative feedback tends to receive less emphasis than scores.
This exam-oriented mindset also shapes how pupils interpret feedback. Although quizzes, drafts, and teacher comments are designed as formative checkpoints, they are frequently perceived as high-stakes in a grade-conscious culture. Many pupils report experiencing heavier assessment loads and tend to treat short quizzes as mini versions of summative exams (Kwan, 2023, p. 5). At home, a worksheet marked with corrections can easily be read as a sign of failure, reinforcing the perception that teacher comments equate to criticism. Consequently, some pupils hesitate to act on feedback or to seek clarification, fearing that such behaviour may be viewed as weakness. Studies show that higher-achieving pupils are generally more willing to ask questions, whereas lower-achieving peers often remain silent (To et al., 2025, p. 9). Ironically, those who stand to gain the most from feedback are often the least likely to engage with it.
In response to exam pressure, many families turn to tuition, practice books, and past-paper drilling (Kwan, 2023, p.4). While tuition can fill gaps, it reinforces test-oriented learning and limits engagement with school-based formative feedback. Pupils may discount teachers’ comments if tutors later provide ready-made solutions, and heavy tuition schedules can reduce motivation to reflect and act on feedback. Consequently, feedback uptake weakens and opportunities for iterative improvement shrink.
Teachers, too, face pressure from parents and school leaders to deliver strong PSLE results. This accountability often narrows practice to what is tested – content mastery and exam techniques – at the expense of broader curricular aims. Singapore’s exam culture can thus constrain teachers’ willingness to adopt learner-centred feedback approaches (Wong et al., 2020, p.434), while classroom time tends to favour procedural drilling over conceptual exploration (p.446).
Beliefs about pupils’ ability to use feedback form another constraint. Tan and Wong (2018) found that many teachers viewed feedback mainly as telling pupils what they did wrong—an assessment of learning stance—rather than as guidance for next steps in an AfL cycle (p.131). Some even believed primary pupils were “too young to use feedback,” and therefore did not teach strategies for acting on it (p.130). In mathematics lessons, this narrows feedback to ticks, crosses, or model answers, instead of prompts that reveal thinking, connect ideas, or build metacognitive awareness.
Workload further compounds these challenges. OECD TALIS 2024 indicates that Singapore teachers work long hours, juggle multiple roles, and report stress above international averages (Ang, 2025). Many describe reform fatigue from continual policy shifts (Prestoza & Naldoza, 2025). MOE’s push toward “balanced assessment” has increased the use of bite-sized checks (Wong et al., 2020) but also adds to daily marking and follow-up. In primary mathematics, classes of 30–40 pupils make personalised feedback difficult; designing diagnostics, turning them around quickly, and writing qualitative comments—on top of regular teaching—creates real strain. The cumulative load “has taken a toll on teachers’ health,” with burnout a recurring concern (Kwan, 2023, p. 5). Under such pressure, formative assessment can slide into box-ticking: some teachers perform the visible routines of AfL (stating objectives, asking a few questions, setting a short quiz) without closing the loop to inform next steps (Marshall & Drummond, 2006, p. 133). The result is uniform tasks and generic whole-class comments rather than targeted guidance (Kwan, 2023, p. 4). Even when feedback is given, limited class time under coverage pressure means pupils seldom act on it. That said, efficiency is not inherently harmful: brief, one-directional corrective feedback can keep lessons moving and give parents a clear picture of progress. The design challenge is to balance efficiency with depth so that brevity serves learning rather than replacing it.
In summary, although many teachers acknowledge the importance of formative feedback, the need to complete the syllabus and prepare pupils for the PSLE often takes priority. This produces a paradox: the very accountability systems intended to raise educational standards can inadvertently restrict teacher autonomy and reduce opportunities for sustained feedback cycles. Meaningful improvement, therefore, may depend less on adding new initiatives and more on rethinking how existing structures can better integrate assessment, feedback, and learning.
Defining the Formative Feedback Structure
Tasks are sometimes labelled formative without evidence that feedback is used to adapt teaching or guide pupil action (Kwan, 2023, p.6). Following Ramaprasad’s classic formulation, feedback is information about the gap between a current state and a reference standard that is used to reduce that gap (Tan, 2011, p.96; Sadler, 1989, p.120). In this sense, feedback becomes formative only when learners have planned opportunities to act on it. Brookhart (2017) emphasises that feedback must be comprehensible and actionable—enabling pupils to use it to improve subsequent work, whether the information comes from teachers, peers, self-assessment, or exemplars. Sadler (1989) adds three necessary conditions for impact: learners need (1) a clear conception of quality, (2) the ability to compare their work with that standard, and (3) strategies to close the gap (p.119). Similarly, Hattie and Timperley (2007) frame effective feedback around three guiding questions—Where am I going? How am I going? Where to next?— and show that feedback focused on tasks, processes, and self-regulation is more effective than person-focused praise. Collectively, these perspectives define a workable structure for primary mathematics: make success criteria visible, elicit evidence, provide timely and specific work‑focused information, and build in the next attempt so pupils can apply it (Boud & Molloy, 2013, p.709).
Mastery Learning within a Formative Feedback Structure
This design principle underpins mastery learning. Bloom’s central argument was that differences in achievement can be reduced if instruction and time are adjusted to meet learner needs, rather than held constant (Guskey, 2015, p.755). In essence, teachers provide extra time, guidance, and feedback for pupils who have yet to master key ideas, instead of expecting all to progress at the same rate.
As illustrated in Figure 1, after initial teaching, the teacher conducts a short formative assessment (A). The results generate diagnostic and prescriptive feedback—supporting some pupils through corrective activities while extending others through enrichment. A follow-up assessment (B) then checks whether the feedback and correctives were effective, giving every learner a structured opportunity to succeed (Guskey, 2015, p.755). Meta-analyses show that well-designed mastery learning consistently improves outcomes (Guskey & Pigott, 1988; Kulik et al., 1990), and a Singapore study reported similar gains, particularly in mathematics (Ho et al., 1983).
The key implication is that feedback should not be treated as a single correction but as a planned, iterative process—an essential mechanism for achieving mastery.
Formative Feedback and Socio‑Constructivism
Iterative feedback cycles also align with socio-constructivist principles. Boud and Molloy (2013) describe “Feedback Mark 2,” a model that shifts focus from teachers’ one-way comments to how learners interpret, use, and act on feedback across successive tasks (p.709). Similarly, Dann (2019) frames classroom feedback through Vygotsky’s ZPD, describing it as the evolving area between independent performance and supported achievement (p. 363). Within this zone, scaffolding plays a crucial role: temporary, targeted guidance that draws attention to key features or misconceptions, enabling learners to complete parts of a task previously beyond their reach (Wood et al., 1976, p.90).
Peer interaction provides another form of scaffolding. When peer feedback routines include clear roles, criteria, and task goals, pupils act as co-constructors of learning. Such exchanges generate rich qualitative information that deepens understanding and promotes self-regulation (Panadero, 2016, p.248). In primary mathematics, structured routines where pupils compare solution strategies (e.g., different ways to represent a ratio with a bar model), question one another’s reasoning, and revise their work transform feedback from a single event into a sustained learning dialogue. Over time, learners internalise these exchanges—developing “internal feedback” by comparing their work to exemplars or peer performances and using that insight to refine future attempts (Nicol, 2020).
Feedback is positioned as a social and developmental process: it reveals the gap between current and desired understanding, supports meaning-making through interaction, and gradually withdraws as learners gain independence.
Formative Feedback in Cognitive Load Theory
Feedback—dialogic or otherwise—works only within the limits of learners’ cognitive capacity. If information is too dense or arrives all at once, working memory overloads and learning degrades (Artino, 2008, p.426). CLT explains this: working memory has limited capacity for novel material (Sweller et al., 1998). Accordingly, lengthy comments often dilute focus; concise, goal-directed feedback is more effective (Shute, 2007, p.9). CLT also warns against redundancy: as knowledge grows, guidance that once helped can become unhelpful—the expertise-reversal effect (Sweller et al., 2011, p.23).
Feedback should therefore be calibrated to what pupils can understand and act on. The task and feedback can increase in complexity in small increments so that total load stays manageable (Paas et al., 2004). Feedback can also be sequenced in “bite-sized” steps: address one issue, let pupils revise, then move on. For instance, respond first to 1-digit × 1-digit multiplication before tackling 2-digit × 1-digit.
In short, usable feedback is digestible and level-appropriate – enough information to guide improvement without swamping working memory, so pupils can understand it, act on it, and sustain progress. As expertise develops, feedback can shift from step-by-step directives toward brief cues and questions that prompt self-correction (Sweller et al., 2019).
When viewed collectively, mastery learning, socio-constructivism, and CLT highlight three interdependent elements of effective feedback: opportunities to revisit concepts, meaningful interactions that deepen understanding, and tasks calibrated to pupils’ cognitive readiness. When these dimensions are well integrated, feedback becomes part of learning. A thoughtful feedback design weaves these strands together by clarifying learning goals, identifying precise areas for growth, and embedding multiple, structured opportunities to revise, engage in purposeful dialogue, and work through appropriately challenging tasks—all enabling each pupil to make real progress.
Designing an Adaptive Feedback Structure within a Fixed Syllabus
In Singapore’s high-stakes environment, simply adding a few “formative” tasks rarely transforms classroom learning. Such activities only matter when both curriculum and assessment systems recognise and reward them. This reflects Biggs’s (1996, p.350) idea of constructive alignment—teachers and pupils naturally focus on what is formally assessed. The implication is clear: feedback must be embedded into curriculum design as a core driver of learning, not treated as an optional add-on.
The national mathematics syllabus progresses from basic facts and procedures to routine and then non-routine problem solving. In a typical unit (e.g., whole numbers), teachers introduce concepts, assign practice, and move toward application tasks of increasing complexity. Given the tight pacing, classes often advance before all pupils have secured the foundations, leaving gaps that later surface in non-routine tasks—where it becomes difficult to tell whether challenges stem from missing content or weak heuristics. As a result, feedback often turns generic, reducing its precision and effect.
To balance syllabus coverage with systematic gap-closing, I propose a phased model that embeds adaptive feedback cycles within the existing teaching sequence. The phases (Figure 2) mirror the Singapore syllabus trajectory—from skill acquisition to conceptual integration—and progress from simpler to more complex tasks (MOE, 2023). Within each phase, assessment, instruction, and feedback are constructively aligned to the intended learning outcomes (Biggs, 1996, p.350). This structure secures foundational mastery in Phases 1–2 before extending to more open problem-solving in Phases 3–4, giving teachers clearer diagnostic baselines and enabling more targeted feedback. By aligning guidance with each pupil’s developmental stage, the model also mitigates the expertise-reversal effect and supports both competence and confidence growth.
Phases 1 & 2: Knowledge/Skill Acquisition and Application
Phases 1 and 2 operate as diagnostic feedback loops based on the principles of mastery learning. Each cycle (teach → assess → feedback → reattempt) continues until pupils demonstrate readiness to progress, while those who meet the standard move on to the next task or enrichment (Figure 3). The premise is simple: with adequate time and focused support, most pupils can achieve the expected level of mastery (Guskey, 2015).
Before each assessment, teachers clarify explicit success criteria so pupils know what “meeting the standard” looks like. Because the loops are iterative, feedback is chunked—one concept or skill per cycle—to prevent cognitive overload. Tasks and comments are narrowly targeted, allowing teachers to pinpoint and address specific gaps within each pupil’s ZPD. Scaffolding builds gradually from simple to complex: reducing interacting elements at the start eases cognitive load and supports emerging understanding, even if partial at first (Paas et al., 2004). Immediate feedback is especially valuable in these phases (Hattie & Timperley, 2007) to prevent misconceptions from taking root and to ensure each revision moves pupils closer to clear, attainable success criteria.
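To make the cycle concrete, the loop in Figure 3 can be expressed in code. This is an illustrative sketch only: the 80% mastery threshold, the answer-key format, and the function names are assumptions chosen for illustration, not features of any MOE or school system.

```python
# Illustrative sketch of the Phase 1-2 loop: teach -> assess -> feedback ->
# reattempt, repeated until the pupil meets the standard or support escalates.
# MASTERY_THRESHOLD is an assumed cut-off, not an official figure.

MASTERY_THRESHOLD = 0.8   # assumed proportion correct for "meeting the standard"

def assess(pupil_answers, answer_key):
    """Return the proportion of items the pupil answered correctly."""
    correct = sum(1 for item, ans in answer_key.items()
                  if pupil_answers.get(item) == ans)
    return correct / len(answer_key)

def mastery_loop(attempts, answer_key):
    """Run feedback-reattempt cycles over successive attempts.

    `attempts` holds the pupil's answers for each cycle, in order. Returns
    the cycle at which mastery was reached and the recommended next step.
    """
    for cycle, answers in enumerate(attempts, start=1):
        if assess(answers, answer_key) >= MASTERY_THRESHOLD:
            return cycle, "move on / enrichment"
        # Otherwise: targeted corrective feedback, then the next reattempt.
    return len(attempts), "additional support and retry"
```

The design point the sketch makes is that the loop terminates on evidence of mastery, not on a fixed number of attempts per pupil: time and support vary, the standard does not.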
- Phase 1: Foundational skill acquisition
Phase 1 builds essential factual and procedural fluency—the “surface” stage in Hattie and Donoghue’s learning model (2016, p.3). Assessment items are straightforward and focus on one success criterion at a time (e.g., representing equivalent fractions). As shown in Figure 3, after a diagnostic check (Assessment 1), teachers provide specific corrective feedback, followed by a reattempt (Assessment 2). This loop continues until pupils meet the standard of the success criterion. Pupils who succeed move on to the next micro-skill (e.g., comparing fractions), while those needing more time receive additional support and retry. The narrow focus keeps cognitive demands manageable and allows pupils to form accurate schemas before tackling more complex tasks. Each revision reinforces progress, helping pupils see improvement as part of the process. Once all foundational micro-skills are secured, pupils are ready to advance to Phase 2.
- Phase 2: Application in familiar contexts
Phase 2 extends Phase 1 learning into familiar, routine contexts—for instance, choosing whether to add or subtract in a simple word problem after mastering both operations in Phase 1. Formative checks remain tightly focused, and feedback pinpoints the specific hurdle (e.g., a misread question vs. a calculation slip). The feedback–reattempt cycle continues until pupils consistently apply the correct strategies in typical tasks.
Across Phases 1–2, feedback operates mainly at the task and process levels—clarifying what is correct or incorrect and how to fix it (Hattie & Timperley, 2007). In practice, sustaining such cycles for large classes within tight schedules is demanding: each loop requires generating tasks, marking, and providing immediate targeted feedback—often multiple times. Studies note that class size and workload can limit the quality and timeliness of teacher feedback (Boud & Molloy, 2013, p.700; Venter et al., 2025, p.516).
Here, generative AI can streamline routine processes, allowing teachers to focus on diagnosis and guidance. Research shows AI systems can (a) produce varied question sets, (b) mark automatically, and (c) return task- and process-level explanations or hints instantly through online platforms (Wongvorachan et al., 2022). Learning analytics dashboards can then highlight response patterns, helping teachers plan targeted interventions. Used thoughtfully, Gen AI becomes an enabler—sustaining timely, individualised feedback loops at scale while keeping teachers central to interpreting misconceptions and shaping next steps.
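As a sketch of what such automation might look like for one Phase 1 micro-skill, the fragment below generates varied multiplication items and returns an instant mark with a task-level hint. The item template, hint wording, and function names are illustrative assumptions, not the API of any actual platform.

```python
import random

def generate_items(n, digits=1, seed=None):
    """Generate n varied multiplication items: 1-digit x 1-digit first,
    then 2-digit x 1-digit once the simpler skill is secure."""
    rng = random.Random(seed)
    hi = 10 ** digits - 1          # e.g. digits=1 -> factors up to 9
    return [(rng.randint(2, max(2, hi)), rng.randint(2, 9)) for _ in range(n)]

def mark_with_hint(item, response):
    """Mark one response instantly; attach a process-level hint if wrong."""
    a, b = item
    if response == a * b:
        return {"correct": True, "hint": None}
    return {"correct": False,
            "hint": f"Check your working: try splitting {a} into smaller "
                    f"parts and multiplying each part by {b}."}
```

A learning-analytics layer could then aggregate the marked responses to surface common error patterns, leaving the teacher to interpret misconceptions and plan the corrective step.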
With Phases 1–2 complete, pupils achieve foundational mastery aligned with PSLE expectations and are ready for Phase 3, where focus shifts to non-routine problem-solving and self-regulated learning.
Phases 3 & 4: Non-routine Problem-solving and Disciplinary Mind
While Phases 1–2 build fluency in facts and procedures, Phases 3–4 extend learning toward the broader goals of mathematics—reasoning, communication, strategic decision-making, metacognition, and self-regulation (MOE, 2023). Tasks at these stages are more open-ended and complex, often allowing multiple valid approaches, with pupils expected to justify why their methods work. In the Primary Mathematics textbooks, such problems are usually found under the “challenging” or “extension” sections at the end of each chapter.
As these tasks demand higher-order thinking, feedback must shift from one-way correction to meaningful dialogue and co-construction. This process relies on the teacher’s sensitivity to pupils’ emotions, motivation, and readiness to respond. It draws on human judgment and relational trust—qualities that technology alone cannot easily replicate for deep impact.
Nicol and Macfarlane-Dick’s (2006) seven principles for feedback that promotes self-regulation (see Figure 4) provide a sound foundation. To make such feedback practical and transparent, rubrics and exemplars can be used. Brookhart (2013) defines a rubric as “a coherent set of criteria for students’ work that includes descriptions of levels of performance quality on the criteria” (p. 4). In Phases 3–4, rubrics must do more than score final answers—they should highlight the quality of reasoning, the representations and processes used, and the self-monitoring behaviours that refine learning. When pupils compare their work with these criteria and exemplars, they begin generating internal feedback—evaluating, adjusting, and improving their own thinking (Nicol, 2020). Properly aligned with the MOE Mathematics Framework, rubrics promote conceptual coherence and offer both teachers and pupils a precise, shared language for reflection and peer assessment.
- Phase 3: Mathematical Problem-solving (Non-routine Tasks)
Phase 3 develops pupils’ capacity to use prior knowledge flexibly in unfamiliar, curriculum-aligned problems—such as ratio tasks framed in new contexts or with multiple constraints that require planning and self-monitoring. To avoid reverting to procedural “drill and practice” from Phases 1–2, teachers should vary conditions and values, while giving equal weight to reasoning, representation, and explanation alongside the final answer. With foundational skills already secure, feedback now focuses less on correctness and more on ‘how’ pupils approached, monitored, and refined their solutions.
For complex problems, feedback need not always be immediate; a short delay can aid reflection if pupils are given prompt opportunities to apply it (Hattie & Timperley, 2007, p.98). Feedback may take the form of written comments or verbal exchanges during class discussion. At this stage, feedback should invite dialogue—posing prompts and questions that lead pupils to clarify, justify, and adjust their thinking. Polya’s four-step problem-solving framework—understand the problem, plan a strategy, carry out the plan, and look back—offers a practical structure for this exchange. For example, if a pupil misreads a question, targeted prompts can help them identify what matters and reinterpret the problem (Small & Lin, 2018, p.182).
Rubrics based on Polya’s framework strengthen these feedback loops by making expectations explicit and offering a common language for reflection (see Appendix 1a – sample rubric; 1b – conversation starter). Such rubrics support self-assessment, peer dialogue, and teacher guidance. When pupils use them during feedback discussions, they can better recognise their strengths and pinpoint next steps. These interactions take place within Vygotsky’s ZPD, where learning progresses through scaffolded dialogue before gradually becoming self-directed (Dann, 2019).
The idea of mastery still applies, but the emphasis moves from getting answers right to refining the way pupils think and solve problems. Because strategies transfer across topics, re-attempts now involve applying earlier feedback to fresh questions in the same or new topic. Keeping a reflective journal or “growth log” helps pupils track how their approaches change over time and how feedback guides those shifts.
Phase 3 does not rigidly precede Phase 4; pupils can engage with both depending on readiness. As pupils progress at different rates, Phases 3 and 4 often overlap—some move on to open-ended investigations, while others continue refining their reasoning and problem-solving approaches.
- Phase 4: Mathematics Disciplinary Mind
Gardner (2006) describes a disciplinary mind as one that understands and applies the distinctive ways of thinking within a field. In mathematics, this means reasoning logically, specialising and generalising, forming conjectures from patterns, and justifying claims through reasoning or proof (Mason et al., 1982). Phase 4 consolidates prior learning by nurturing mathematical habits of mind—connecting ideas across topics, reasoning systematically, communicating precisely, and taking ownership of improvement (MOE, 2023).
Tasks at this stage are open-ended and integrative—for example, a geometry optimisation task (minimising area for a fixed perimeter) or a data-modelling inquiry (analysing how classmates spend a 30-minute recess over a week). Depending on their readiness and interest, pupils might present conjectures supported by examples, design posters that visualise reasoning, or write reflections on the heuristics and strategies they used. Rubrics and exemplars continue to anchor expectations, providing a shared language for describing quality and growth (see Appendix 2 for sample tasks and rubrics).
At this stage, the teacher’s role shifts from evaluator to coach—facilitating reflection and self-direction. Hill et al. (2021) describe relational feed-forward as a useful model: instead of focusing only on completed work, teachers and pupils discuss work-in-progress, clarify criteria, identify uncertainties, and plan specific next steps. These short, forward-looking dialogues—held monthly or termly—help pupils experience feedback as guidance rather than judgment. They sustain confidence during open tasks and keep attention on concrete improvements. Each session ends with one or two focused goals, an action plan, and a clear point for follow-up (Hill et al., 2021).
Participation in Phase 4 varies by readiness and motivation. Pupils may work individually or collaboratively, with projects spanning the school year and culminating in showcases of their findings. Mastery here is measured not by perfect answers but by depth of reflection and sophistication of ideas. As pupils internalise these practices, feedback gradually shifts from teacher-led comments to pupils’ own evaluative thinking—turning feedback into a driver of agency, independence, and self-regulated learning.
Phases 1–2 secure the essentials needed for fluency and exam readiness, while Phases 3–4 open space for richer habits of thinking and problem-solving. (Appendix 3 outlines how pupils with different levels of readiness may move through these phases.) Viewed as a whole, the sequence shifts feedback from a retrospective exercise to a forward-moving driver of learning—even within the timetable constraints shaped by the PSLE demands.
Technology can help by taking over repetitive tasks and speeding up basic feedback loops, but its impact rests on everyday conditions in classrooms: teachers’ confidence to use it, class sizes that allow meaningful interaction, and a climate where parents and school leaders support purposeful pacing. Without systemic support, even the best-designed model can slide into routine compliance instead of deep learning. Ultimately, sustained improvement relies on a broader cultural shift—one that places individual growth and habits of mind on the same footing as examination performance.
Conclusion and Practical Implications
The effectiveness of feedback, as discussed by Henderson et al. (2019), relies on twelve supporting conditions. The four-phase model gives concrete form to the “design” conditions in points 5–8 (see Figure 5) by building usable, timely, and aligned feedback into each stage of learning. For a genuine culture of feedback (points 9–12 in Figure 5) to take root, system-level support is necessary. A straightforward digital workflow can handle routine task generation, quick marking, and re-attempts, provided the tools are reliable, easy to navigate, and accessible to all pupils—not only those with strong home support. Leadership also plays a key role: teachers need room to adjust pacing and homework expectations according to pupils’ readiness. This may mean trimming repetitive work for pupils who have already secured key skills, while giving others more time for targeted practice. Feedback routines flourish only when teachers are trusted to use professional judgement rather than follow uniform requirements for every pupil.
Developing capacity for feedback (points 1–4 in Figure 5) is equally important. Pupils need to know how to read comments, ask clarifying questions, set small goals, and act on the advice they receive (Carless & Boud, 2018). When feedback is framed as support rather than judgement, pupils engage more readily and progress becomes more self-directed. Automation can ease workload in Phases 1–2, but it cannot replace the relational work of diagnosis, dialogue, and targeted guidance. Continued professional learning for teachers remains vital, especially in interpreting evidence, planning follow-up moves, and using technology judiciously.
Teachers may also need to shift their expectations of classroom order. Productive feedback work is seldom tidy: groups may work on different tasks at the same time, and questions may arise at varying levels. Routines and norms therefore matter. For instance, during small-group instruction, pupils engaged in extended Phase 4 tasks can seek help at scheduled conferencing times rather than interrupting ongoing teaching.
By sequencing support—through quick corrective loops in Phases 1–2 and reasoning-focused dialogue in Phases 3–4—the model provides a practical means to manage both syllabus coverage and PSLE-related pressures, while still pursuing broader curricular aims. Although embedding such routines requires time and consistency, their eventual stabilisation fosters more sustained mastery, greater learner confidence, and increased independence. Pupils emerge from primary mathematics not only proficient in procedures but also equipped to reason, make informed choices, and apply feedback—whether externally provided or self-generated—to support continued learning beyond the exam.
References
Ang, S. (2025, January 31). Singapore teachers say they’re busier than ever—even without mid-year exams. Channel News Asia. https://www.channelnewsasia.com
Artino, A. R., Jr. (2008). Cognitive load theory and the role of learner experience: An abbreviated review for educational practitioners. AACE Journal, 16(4), 425–439.
Biggs, J. B. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364. https://doi.org/10.1007/BF00138871
Boud, D., & Dawson, P. (2023). What feedback literate teachers do: An empirically-derived competency framework. Assessment & Evaluation in Higher Education, 48(2), 158–171. https://doi.org/10.1080/02602938.2021.1910928
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. https://doi.org/10.1080/02602938.2012.691462
Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.
Brookhart, S. M. (2017). Summative and formative feedback in instruction. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 52–68). Cambridge University Press.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
Dann, R. (2019). Feedback as a relational concept in the classroom. The Curriculum Journal, 30(4), 352–374. https://doi.org/10.1080/09585176.2019.1668012
Deneen, C., Fulmer, G. W., Brown, G. T. L., Tan, K., Leong, W. S., & Tay, H. Y. (2019). Value, practice and proficiency: Teachers’ complex relationship with assessment for learning. Teaching and Teacher Education, 80, 39–47. https://doi.org/10.1016/j.tate.2018.12.016
Gardner, H. (2006). Five minds for the future. Harvard Business School Press.
Guskey, T. R. (2007). Closing achievement gaps: Revisiting Benjamin S. Bloom’s “Learning for mastery.” Journal of Advanced Academics, 19(1), 8–31. https://doi.org/10.4219/jaa-2007-704
Guskey, T. R. (2015). Mastery learning. In International encyclopedia of the social & behavioral sciences (2nd ed., Vol. 14, pp. 752–759). Elsevier. https://doi.org/10.1016/B978-0-08-097086-8.26039-X
Guskey, T. R., & Pigott, T. D. (1988). Research on group-based mastery learning programs: A meta-analysis. Journal of Educational Research, 81(4), 197–216.
Hattie, J., & Donoghue, G. M. (2016). Learning strategies: A synthesis and conceptual model. npj Science of Learning, 1, Article 16013. https://doi.org/10.1038/npjscilearn.2016.13
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Henderson, M., Phillips, M., Ryan, T., Boud, D., Dawson, P., Molloy, E., & Mahoney, P. (2019). Conditions that enable effective feedback. Higher Education Research & Development, 38(7), 1401–1416. https://doi.org/10.1080/07294360.2019.1657807
Hill, J., Berlin, K., Choate, J., Cravens-Brown, L., McKendrick-Calder, L., & Smith, S. (2021). Can relational feed-forward enhance students' cognitive and affective responses to assessment? Teaching and Learning Inquiry, 9(2), 1–21. https://doi.org/10.20343/teachlearninqu.9.2.18
Ho, W. K., Tan, W. K., & Ng, K. B. (1983). Report on the mastery learning (pilot) project. Institute of Education, Singapore.
Kulik, C. C., Kulik, J. A., Bangert-Drowns, R., & Slavin, R. E. (1990). Effectiveness of mastery learning programs: A meta-analysis. Review of Educational Research, 60(2), 265–299. https://doi.org/10.3102/00346543060002265
Kwan, J. (2023). Classroom-based assessment: Tension between summative and formative assessment practices among primary schools in Singapore. Biomedical Journal of Scientific & Technical Research, 53(3), 44749–44758. https://doi.org/10.26717/BJSTR.2023.53.008403
Leong, W. S., & Tan, K. (2014). What (more) can, and should, assessment do for learning? Observations from “successful learning contexts” in Singapore. The Curriculum Journal, 25(4), 593–619. https://doi.org/10.1080/09585176.2014.944002
Marshall, B., & Drummond, M. J. (2006). How teachers engage with assessment for learning: Lessons from the classroom. Research Papers in Education, 21(2), 133–149. https://doi.org/10.1080/02671520600615638
Mason, J., Burton, L., & Stacey, K. (1982). Thinking mathematically. Addison-Wesley.
Ministry of Education (MOE). (2023). Mathematics syllabus: Primary 1 to 6. Curriculum Planning & Development Division.
Nicol, D. (2020). The power of internal feedback: Exploiting natural comparison processes. Assessment & Evaluation in Higher Education, 46(5), 756–778. https://doi.org/10.1080/02602938.2020.1770161
Nicol, D. J., & Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
Oladele, S., & Ayemu, D. (2025). Comparative Analysis of the Impact of Standardized Testing on Socio-Emotional Development in Primary School Children Across Different Assessment-Driven Education Systems.
Paas, F., Renkl, A., & Sweller, J. (2004). Advances in cognitive load theory, methodology and instructional design. Instructional Science, 32(1/2), 1–8. https://doi.org/10.1023/B:TRUC.0000021806.17516.d0
Panadero, E. (2016). Is it safe? Social, interpersonal, and human effects of peer assessment: A review and future directions. In G. T. L. Brown & L. R. Harris (Eds.), Handbook of human and social conditions in assessment (pp. 247–266). Routledge. https://doi.org/10.4324/9781315749136-17
Prestoza, M. J. R., & Naldoza, N. D. (2025). Challenges and practices of instructional leadership: A qualitative inquiry of principals' and teachers'. The Normal Lights, 19(1).
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144. https://doi.org/10.1007/BF00117714
Shute, V. J. (2007). Focus on formative feedback (ETS RR-07-11). Educational Testing Service. https://doi.org/10.1002/j.2333-8504.2007.tb02038.x
Small, M., & Lin, A. (2017). Instructional feedback in mathematics. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 169–190). Cambridge University Press.
Sweller, J., Ayres, P., & Kalyuga, S. (2011). The expertise reversal effect. In Cognitive load theory (pp. 57–69). Springer. https://doi.org/10.1007/978-1-4419-8126-4_12
Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. https://doi.org/10.1023/A:1022193728205
Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (2019). Cognitive architecture and instructional design: Twenty years later. Educational Psychology Review, 31(2), 261–292. https://doi.org/10.1007/s10648-019-09465-5
Tan, K. (2011). Assessment for learning reform in Singapore: Quality, sustainable or threshold? In R. Berry & B. Adamson (Eds.), Assessment reform in education (Education in the Asia-Pacific Region, Vol. 14). Springer. https://doi.org/10.1007/978-94-007-0729-0_6
Tan, K. H. K., & Wong, H. M. (2018). Assessment feedback in primary schools in Singapore and beyond. In A. A. Lipnevich & J. K. Smith (Eds.), The Cambridge handbook of instructional feedback (pp. 123–144). Cambridge University Press.
To, J., Aluquin, D., & Tan, K. H. K. (2025). Making student voice heard in dialogic feedback: Feedback design matters. Frontiers in Education, 10, Article 1550328. https://doi.org/10.3389/feduc.2025.1550328
Venter, J., Coetzee, S. A., & Schmulian, A. (2025). Exploring the use of artificial intelligence (AI) in the delivery of effective feedback. Assessment & Evaluation in Higher Education, 50(4), 516–536.
Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x
Wong, H. M., Kwek, D., & Tan, K. (2020). Changing assessments and the examination culture in Singapore: A review and analysis of Singapore's assessment policies. Asia Pacific Journal of Education, 40(4), 433–457. https://doi.org/10.1080/02188791.2020.1838886
Wongvorachan, T., Lai, K. W., Bulut, O., Tsai, Y.-S., & Chen, G. (2022). Artificial intelligence: Transforming the future of feedback in education. Journal of Applied Testing Technology, 23, 95–116. https://jattjournal.net/index.php/atp/article/view/170387
Appendix 3: Learning Pathways for Pupils of Varying Readiness
