Ali, F., Choy, D., Divaharan, S., Tay, H. Y., & Chen, W. (2023). Supporting self-directed learning and self-assessment using TeacherGAIA, a generative AI chatbot application: Learning approaches and prompt engineering. Learning: Research and Practice, 9(2), 135-147.
https://doi.org/10.1080/23735082.2023.2258886
Self-directed learning and self-assessment require students to take responsibility for their learning needs, goals, processes, and outcomes. However, this student-led learning can be challenging to achieve in a classroom limited by one-to-many, teacher-led instruction. We have therefore designed and prototyped a generative artificial intelligence chatbot application (GAIA), named TeacherGAIA, that can be used to asynchronously support students in their self-directed learning and self-assessment outside the classroom. We first identified diverse constructivist learning approaches that align with, and promote, student-led learning. These included knowledge construction, inquiry-based learning, self-assessment, and peer teaching. The in-context learning abilities of a large language model (LLM) from OpenAI were then leveraged via prompt engineering to steer interactions supporting these different learning approaches. These interactions contrasted with those of ChatGPT, OpenAI’s chatbot, which by default engaged in the traditional transmissionist mode of learning reminiscent of teacher-led instruction. Preliminary design, prompt engineering, and prototyping suggested fidelity to the learning approaches, cognitive guidance, and social-emotional support, all of which were implemented in a generative AI manner without pre-specified rules or “hard-coding”. Other affordances of TeacherGAIA are discussed and future development outlined. We anticipate TeacherGAIA will be a useful application for teachers in facilitating self-directed learning and self-assessment among K-12 students.
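The "prompt engineering" described here amounts to conditioning the model with instructions before each student turn, rather than hard-coding rules. As a purely illustrative sketch (not the authors' actual prompts, model, or code), steering an OpenAI chat model away from direct answers might look like the following; the model name and prompt wording are assumptions for demonstration.

```python
# Illustrative sketch only (not from Ali et al., 2023): using a system prompt to steer
# an OpenAI chat model toward inquiry-based guidance instead of direct answers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a learning companion for K-12 students. Do not give direct answers. "
    "Ask guiding questions, prompt the student to assess their own work, "
    "and offer encouragement."
)

def ask(student_message: str) -> str:
    """Return the chatbot's guidance for a single student turn."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the paper does not specify this one
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(ask("What is the answer to question 3 on photosynthesis?"))
```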
Dawson, P., Carless, D., & Lee, P. P. W. (2021). Authentic feedback: supporting learners to engage in disciplinary feedback practices. Assessment & Evaluation in Higher Education, 46(2), 286-296.
How can learners be supported to engage productively in the kinds of feedback practices they may encounter after they graduate? This article introduces a novel concept of authentic feedback to denote processes which resemble the feedback practices of the discipline, profession or workplace. Drawing on the notion of authentic assessment, a framework for authentic feedback is proposed with five dimensions: realism, cognitive challenge, affective challenge, evaluative judgement and enactment of feedback. This framework is exemplified and interrogated through two cases of authentic feedback practice, one in the subject of digital media in an Australian university, the other focussed on bedside rounds in medicine at a university in Hong Kong. The framework enables the identification of both highly authentic aspects of feedback, and aspects that could be made more authentic. The framework informs the design of feedback practices that carry the potential to bridge university and workplace environments.
Lipnevich, A. A., To, J., & Kiat, K. T. H. (Eds.). (2023). Unpacking Students’ Engagement with Feedback: Pedagogy and Partnership in Practice. Taylor & Francis.
Learners of all levels receive a plethora of feedback messages on a daily, or even hourly, basis. Teachers, coaches, parents, and peers all have suggestions and advice on how to improve or sustain a certain level of performance. This volume offers insights into the complexity of students’ engagement with feedback, the diversity of teachers’ feedback practices, and the influence of personal assessment beliefs in tension with prevailing contexts. It is organised around two main questions: what is students’ engagement with feedback, and what is the variety of teachers’ feedback practices? Under these themes, the content covers a broad range of key topics pertaining to instructional feedback, how it operates in a classroom, and how students engage with it. Unarguably, feedback is a key element of successful instructional practice; however, we also know that (a) learners often dread and dismiss it, and (b) the effectiveness of feedback varies depending on teachers’ and students’ characteristics, the specific characteristics of the feedback messages learners receive, and a number of contextual variables. What this volume articulates are new ways for learners to engage with feedback beyond recipience and uptake. With nuanced insights for research and practice, this book will be most useful to teachers, university teacher educators, and researchers working to design and enact new ways of engaging with feedback in schools and beyond.
Luo, J. (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 1-14.
This study offers a critical examination of university policies developed to address recent challenges presented by generative AI (GenAI) to higher education assessment. Drawing on Bacchi’s ‘What’s the problem represented to be’ (WPR) framework, we analysed the GenAI policies of 20 world-leading universities to explore what is considered problematic in this AI-mediated assessment landscape and how these problems are represented in policies. Although miscellaneous GenAI-related problems were mentioned in these policies (e.g. reliability of AI-generated outputs, equal access to GenAI), the primary problem represented is that students may not submit original work for assessment. In the current framing, GenAI is often viewed as a type of external assistance separate from the student’s independent efforts and intellectual contribution, thereby undermining the originality of their work. We argue that such problem representation fails to acknowledge how the rise of GenAI further complicates the process of producing original work, and what originality means at a time when knowledge production is becoming increasingly distributed, collaborative, and mediated by technology. Therefore, a critical silence in higher education policies concerns the evolving notion of originality in the digital age, and a more inclusive approach to addressing the originality of students’ work is required.
McArthur, J. (2023). Rethinking authentic assessment: Work, well-being, and society. Higher Education, 85(1), 85-101.
Panadero, E., & Lipnevich, A. A. (2022). A review of feedback models and typologies: Towards an integrative model of feedback elements. Educational Research Review, 35, 100416..
https://www.sciencedirect.com/science/article/pii/S1747938X21000397
A number of models have been proposed to describe various types of feedback, along with mechanisms through which feedback may improve student performance and learning. We selected the fourteen most prominent models, which we discussed in two complementary reviews. In the first part (Lipnevich & Panadero, 2021) we described the models, feedback definitions, and the empirical evidence supporting them, whereas in the present publication we analysed and compared the fourteen models with the goal of classifying and integrating shared elements into a new comprehensive model. As a result of our synthesis, we offered an expanded typology of feedback and a classification of the models into five thematic areas: descriptive, internal processing, interactional, pedagogical, and student characteristics. We concluded with an Integrative Model of Feedback Elements that includes five components: Message, Implementation, Student, Context, and Agents (MISCA). We described each element and the relations among them, offering future directions for theory and practice.