The Cognitive Debt Crisis: Designing Ethical AI for Deeper Learning
Abstract
This post addresses the escalating "Cognitive Debt Crisis" in education, a systemic issue precipitated by the proliferation of Artificial Intelligence (AI) tools designed to optimise for efficiency over deep learning. Cognitive debt is defined as the cumulative deficit in neural pathway formation that results from using AI to bypass the essential cognitive friction inherent in robust learning processes. Drawing upon the foundational pedagogical theories of Freire, Bloom, and Vygotsky, this analysis critiques the current paradigm of educational AI, arguing that its design philosophy often replicates and scales a passive, "banking" model of education. The central thesis is that resolving this crisis necessitates a fundamental paradigm shift away from frictionless convenience towards the intentional design of "desirable difficulty". This requires moving beyond tool-level ethics to embrace a comprehensive framework of system-level governance, as proposed by Boddington, where AI systems are treated as auditable infrastructure aligned with pedagogical values. The post concludes by presenting a case for institutional agency in developing bespoke, values-driven AI systems as a viable and necessary alternative to the pedagogical misalignment characteristic of many commercial offerings.
Introduction: The Structural Misalignment of AI in Education
The integration of Artificial Intelligence into educational ecosystems has been heralded as a transformative force, promising personalised learning at an unprecedented scale. Yet, beneath this veneer of technological optimism, a critical structural misalignment threatens the very foundation of cognitive development. This post identifies and confronts this issue as the "Cognitive Debt Crisis," a term that captures the long-term intellectual deficit incurred for short-term convenience. The crisis is rooted in a dominant design philosophy that prioritises efficiency and user engagement, inadvertently creating tools that encourage learners to circumvent the effortful cognitive processes essential for deep, durable learning. As Pratschke (2025) articulates, "Using AI to bypass cognitive friction accumulates 'debt' that undermines the neural pathway formation essential for learning".
This accumulation of cognitive debt is not an incidental byproduct of technology adoption, nor is it a consequence of individual user error. Rather, it is the predictable outcome of a systemic failure to align the objectives of AI systems with the core principles of pedagogy. When AI tools are optimised to provide immediate answers, summarise complex texts without requiring comprehension, or generate written work from simple prompts, they systematically strip away the "productive struggle" that builds robust mental models and facilitates the transfer of knowledge. This creates an illusion of mastery, where learners can produce correct outputs without undergoing the cognitive consolidation necessary for genuine understanding, leading to dependency and weaker long-term retention. The framing of this phenomenon as a "crisis" is a deliberate analytical choice. It elevates the discussion beyond a mere critique of user experience or a debate over pedagogical best practices, positing instead that the current trajectory of educational AI represents a fundamental conflict between the logic of technological efficiency and the biological and psychological realities of human learning.
In this post, I argue that the cognitive debt crisis represents a structural misalignment that cannot be resolved through market forces or superficial feature adjustments alone. It requires a deliberate and systemic intervention grounded in established learning science and a robust ethical framework. The central thesis is that a new paradigm for educational AI is both necessary and achievable: one that consciously rejects the pursuit of frictionless efficiency in favour of designing for desirable difficulty, learner agency, and pedagogical integrity. By shifting the focus from tool-level functionality to system-level governance, educational institutions can reclaim their agency and build AI ecosystems that serve, rather than subvert, the ultimate goal of education: the cultivation of independent, critical, and resilient minds.
Theoretical Foundations: Enduring Pedagogies in a Digital Age
To fully grasp the nature of the cognitive debt crisis, it is essential to ground the critique of contemporary AI in the enduring principles of pre-digital educational theory. The works of Paulo Freire, Benjamin Bloom, and Lev Vygotsky provide a powerful and remarkably relevant theoretical lens through which to analyse the shortcomings of the current efficiency-driven paradigm and to envision a more pedagogically sound alternative.
Freire's Banking Model and the Digital Depository
In his seminal work, Pedagogy of the Oppressed, Paulo Freire (1970) critiques what he terms the "banking model" of education. In this model, the teacher is the active subject who "deposits" knowledge into the minds of students, who are treated as passive, empty vessels or receptacles. The curriculum is detached from lived experience, and success is measured by the student's ability to memorise and repeat the deposited information. Freire argues that this approach is fundamentally oppressive, as it diminishes critical consciousness, stifles creativity, and negates the possibility of genuine dialogue. The primary ethical risk of the banking model is the profound loss of student agency, transforming learners into objects rather than empowering them as subjects of their own learning.
The advent of generative AI has inadvertently resurrected and scaled this flawed model, creating what can be termed the "digital banking problem". AI tools that supply ready-made answers on demand function as the ultimate digital depository, capable of depositing vast quantities of information into a document without requiring any meaningful cognitive engagement from the learner. This process replicates the core mechanics of banking education at an unprecedented scale and speed, encouraging the passive consumption of information and prioritising the convenience of the immediate answer over the long-term development of learning capacity. By positioning the AI as the all-knowing subject and the user as the passive recipient, this technological paradigm amplifies the very loss of agency and dialogue that Freire warned against, contributing directly to the accumulation of cognitive debt.
Bloom's 2-Sigma Problem: The Holy Grail of EdTech
In 1984, educational psychologist Benjamin Bloom identified what he called the "2-sigma problem". Through his research, Bloom demonstrated that students who received one-to-one tutoring consistently performed two standard deviations better than students in a traditional classroom setting, a massive improvement equivalent to raising the median student to roughly the 98th percentile. Bloom attributed these remarkable gains to the core components of effective tutoring: personalised feedback, mastery-based sequencing, and continuous corrective guidance. For decades, the challenge of scaling this highly effective but resource-intensive model has been a central problem in education.
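To see why a two-standard-deviation gain lands near the 98th percentile, a quick back-of-the-envelope calculation, assuming approximately normal score distributions, makes the scale of Bloom's finding concrete:

```python
from statistics import NormalDist

# A median (50th-percentile) student who improves by two standard deviations
# moves to the cumulative probability of +2 sigma on the original distribution.
percentile = NormalDist().cdf(2.0)
print(f"Position after a 2-sigma gain: {percentile:.1%}")  # ~97.7%, i.e. roughly the 98th percentile
```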
AI, in theory, presents the perfect solution to this challenge. It holds the potential to democratise the benefits of one-to-one tutoring, providing every learner with a personalised, adaptive, and responsive educational guide. This represents the true, yet largely unfulfilled, promise of AI in education. The cognitive debt crisis arises precisely because the dominant design philosophy has chosen a different path. Instead of being engineered as sophisticated tutors that guide learners through challenging material, many AI tools are designed as simple answer engines. This design choice represents a fundamental failure to preserve the pedagogy that makes tutoring effective. By prioritising the delivery of the final product (the answer) over the facilitation of the learning process, these tools squander the opportunity to solve Bloom's 2-sigma problem and instead contribute to the digital banking model Freire described.
Vygotsky's Zone of Proximal Development: A Blueprint for AI Scaffolding
The pedagogical method for realising Bloom's promise can be found in the work of Lev Vygotsky. Vygotsky (1978) introduced the concept of the Zone of Proximal Development (ZPD), defined as the cognitive space between what a learner can accomplish independently and what they can achieve with expert guidance and collaboration. Learning, in this view, is a social and effortful process that occurs most effectively within this zone. The mechanism for facilitating this learning is "scaffolding," a concept later elaborated by Wood, Bruner, and Ross (1976), which involves providing tailored support that is gradually withdrawn as the learner's competence and independence grow.
Vygotsky's framework provides a clear blueprint for the ethical and effective design of educational AI. An AI system aligned with this model would not provide direct answers but would function as dynamic scaffolding. It would assess the learner's current ability, identify their ZPD, and offer just enough support, a hint, a guiding question, or a conceptual breakdown, to enable them to overcome a challenge through their own effort. The AI's support would then fade as the learner demonstrates mastery, ensuring that agency and cognitive ownership remain with the student. The failure of many current AI tools lies in their violation of this principle. By offering complete solutions, they operate far beyond the ZPD, eliminating the need for effort and preventing the learner from building the skills necessary for unaided performance. Juxtaposing these three foundational theories reveals a clear causal narrative: the dominant design of educational AI follows Freire's detrimental banking model, which in turn squanders the potential to solve Bloom's 2-sigma problem precisely because it ignores the pedagogical principles of scaffolding within Vygotsky's ZPD. The cognitive debt crisis is the direct and predictable consequence of this profound pedagogical misalignment.
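The scaffolding blueprint described above can be made concrete. The sketch below is a minimal illustration in which the support levels, thresholds, and mastery increments are all invented assumptions rather than an established algorithm; it simply expresses fading support as control logic:

```python
from dataclasses import dataclass

# Support levels ordered from most to least intrusive; an aligned tutor fades
# support as mastery grows and reintroduces it when the learner struggles.
SUPPORT_LEVELS = ["worked_example", "conceptual_breakdown",
                  "guiding_question", "hint", "none"]

@dataclass
class LearnerState:
    mastery: float = 0.0  # running estimate of competence, in [0, 1]
    support: int = 0      # index into SUPPORT_LEVELS (0 = maximum support)

def next_support(state: LearnerState, answered_correctly: bool) -> str:
    """Withdraw one layer of scaffolding on success; restore it on struggle."""
    if answered_correctly:
        state.mastery = min(1.0, state.mastery + 0.1)
        if state.mastery > 0.6 and state.support < len(SUPPORT_LEVELS) - 1:
            state.support += 1   # fade: the learner takes on more of the work
    else:
        state.mastery = max(0.0, state.mastery - 0.05)
        if state.support > 0:
            state.support -= 1   # re-scaffold rather than reveal the answer
    return SUPPORT_LEVELS[state.support]
```

The design point is that the answer itself is never in the support repertoire; the system's only moves are degrees of guidance.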
Cognitive Debt: Definition and Evidence
Cognitive debt refers to the accumulation of deficits in understanding and skill that results when learners repeatedly bypass necessary cognitive effort. Pratschke (2025) describes it succinctly: using AI to bypass cognitive friction accumulates a 'debt' that undermines the neural pathway formation essential for learning. In essence, when students skip effortful processing (such as problem-solving or recall) by relying on AI-generated solutions, they enjoy immediate ease but forgo the deep neural consolidation that underpins durable learning (Pratschke, 2025; Kosmyna et al., 2025).
Empirical evidence supports this concern. A recent MIT study by Kosmyna et al. (2025) found that students who relied on ChatGPT for writing exhibited weaker brain activity in key learning regions and poorer memory retention. Tasks felt easier with AI assistance, but this superficial ease masked a long-term cost: reduced engagement of memory and critical thinking processes, and a diminished sense of authorship over one's work (Kosmyna et al., 2025). Over time, such outsourcing of cognitive effort may impair neuroplasticity and the development of independent problem-solving skills. Cognitive debt thus compounds: the learner becomes increasingly dependent on external assistance, while the underlying capabilities to recall, transfer, and create knowledge stagnate or even atrophy (Pratschke, 2025; Kosmyna et al., 2025).
Paradoxically, the very technologies that risk fostering cognitive debt could also be harnessed to deepen learning, if guided by sound pedagogy. In theory, AI tutors could democratise the one-to-one tutoring benefits that Bloom documented, closing the 2-sigma gap, but only if the technology is designed to preserve effective pedagogy rather than shortcut it. Similarly, Vygotsky's (1978) concept of the Zone of Proximal Development (ZPD) and the practice of instructional scaffolding (Wood, Bruner and Ross, 1976) highlight that learning occurs optimally just beyond the learner's independent ability, given appropriate guidance. In a ZPD framework, AI should act as a scaffold, providing hints, prompts, and feedback that help learners stretch to the next level, then gradually fading this support as competence grows. Learning is most effective when it is social, effortful, and contextualised (Vygotsky, 1978), not when tasks are done for the learner. This implies that AI in education must be carefully aligned to challenge students within their ZPD rather than push answers that bypass productive struggle. The theoretical foundations laid by Bloom and Vygotsky both suggest that well-designed AI has the potential to enhance learning (through personalisation and timely support), but they also warn that stripping away effort and human interaction could undercut the very processes that lead to mastery.
AI Alignment with Pedagogy: From Productive Friction to Posthuman Assemblages
Moving from foundational theory to a direct critique of contemporary AI makes clear that the cognitive debt crisis is the product of a specific design philosophy, one that misunderstands the nature of learning. Resolving the crisis requires a new philosophy grounded in the concept of "desirable difficulty" and a more sophisticated understanding of the learning environment as a complex "assemblage" of human and non-human actors. So-called 'study modes' in LLMs are merely behavioural overlays: they tell the AI to act like a tutor without embedding actual pedagogical understanding. If we bolt on 'learning modes', we still risk cognitive debt because the system itself is not pedagogically aligned. True pedagogical alignment requires three elements:
- First, proper sequencing that respects cognitive development.
- Second, rich context awareness: an understanding of the learner's prior knowledge, emotional state, and goals.
- Third, productive friction: the system should support the learner within their ZPD, offering hints, prompts, and Socratic questioning rather than direct answers.
Without these, we're not building educational technology; we're building cognitive crutches. We need to move beyond simply building "tools" and start designing integrated "systems" that promote genuine engagement and mastery.
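To make these three elements concrete, the sketch below encodes them as system-level properties rather than prompt-level instructions. It is a minimal illustration under stated assumptions: the class names, thresholds, and messages are all invented for the example, and a hypothetical tutoring system is assumed.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerContext:
    """Element 2: what the system knows about this learner."""
    mastered_topics: set[str] = field(default_factory=set)
    goals: list[str] = field(default_factory=list)
    frustration: float = 0.0  # crude proxy for emotional state, in [0, 1]

@dataclass
class PedagogicalPolicy:
    """Alignment encoded as system behaviour, not as a prompt overlay."""
    curriculum: list[str]               # Element 1: an ordered topic sequence
    context: LearnerContext
    allow_direct_answers: bool = False  # Element 3: friction is the default

    def respond(self, topic: str) -> str:
        # Sequencing: decline topics whose prerequisites are unmastered
        # (assumes `topic` appears somewhere in the curriculum).
        prerequisites = self.curriculum[: self.curriculum.index(topic)]
        missing = [t for t in prerequisites if t not in self.context.mastered_topics]
        if missing:
            return f"Let's revisit {missing[0]} first; it underpins {topic}."
        # Context awareness: ease friction when frustration runs high.
        if self.context.frustration > 0.8:
            return f"Here is a worked example for {topic}; try the next step yourself."
        # Productive friction: a Socratic prompt instead of an answer.
        return f"What do you already know about {topic} that might apply here?"
```

Notice that the refusal to jump ahead of the sequence, and the refusal to hand over answers, live in the system's logic rather than in a revocable instruction to the model.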
The Case for Desirable Difficulty in AI Design
The prevailing ethos in consumer technology design is the pursuit of a "smooth" and "frictionless" user experience. While this may be desirable for online shopping or social media, it is antithetical to the goals of education. Learning is not a frictionless process; it is inherently effortful. As argued by Pratschke (2025), educational AI must be designed with "sequencing, context, and productive friction" at its core. This notion of "productive friction" aligns with a significant body of research in cognitive science on "desirable difficulties". Bjork and Bjork (2011) contend that introducing certain challenges into the learning process, such as spacing out study sessions, interleaving different topics, and requiring retrieval practice, can slow down initial acquisition but lead to far more robust, durable, and flexible long-term learning.
Similarly, Kapur's (2008) work on "productive failure" demonstrates that allowing learners to grapple with complex problems and even fail before receiving instruction can lead to deeper conceptual understanding than direct instruction alone. The common thread is that effortful processing is not a bug to be eliminated but a crucial feature of building strong neural pathways and transferable knowledge. An AI designed for deeper learning would therefore embrace its role as a facilitator of this productive struggle. Instead of providing answers, it would pose challenging questions, introduce variability, and require learners to actively retrieve and apply knowledge, thereby inoculating them against the illusion of mastery and helping them build genuine cognitive resilience.
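The design implication can be made concrete. The sketch below is a rough illustration rather than a validated scheduler: it shows two desirable difficulties, expanding-interval retrieval practice and interleaving across topics, in code form. The interval values and session format are assumptions for the example.

```python
import random

# Expanding review intervals: each successful retrieval pushes the next
# attempt further out (the specific values are illustrative, not prescriptive).
REVIEW_INTERVALS_DAYS = [1, 3, 7, 14, 30]

def days_until_next_review(successful_retrievals: int) -> int:
    """Spacing: effortful retrieval beats easy, massed re-reading."""
    index = min(successful_retrievals, len(REVIEW_INTERVALS_DAYS) - 1)
    return REVIEW_INTERVALS_DAYS[index]

def build_session(topic_pools: dict[str, list[str]], length: int) -> list[str]:
    """Interleaving: mix items across topics instead of blocking by topic."""
    items = [question for pool in topic_pools.values() for question in pool]
    random.shuffle(items)  # crude; a real system would balance topic coverage
    return items[:length]
```

Both functions deliberately make the learner's immediate experience harder, and that is the point: the friction is the mechanism, not a defect.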
Learning Assemblages and Distributed Agency
A purely tool-centric critique, however, is insufficient. The impact of any technology is shaped by the context in which it is deployed. To understand the systemic nature of the cognitive debt crisis, it is useful to adopt Sian Bayne's (2018) posthumanist concept of the "assemblage". This framework posits that learning does not occur in a vacuum but emerges from a dynamic network of interconnected elements, including learners, teachers, technologies, institutional policies, physical spaces, and assessment criteria. Within this assemblage, agency is not located solely within the human user; it is distributed across the entire network. Technology and policy actively "co-produce the learning conditions".
This perspective provides the critical theoretical pivot needed to shift the focus of ethical responsibility from the individual to the system. Cognitive debt is not simply the fault of a student "misusing" an AI tool. It is an emergent property of an educational assemblage that is optimised for the wrong outcomes. When AI collapses dialogue into answer-delivery, the assemblage steals the learner's turn at thinking. That's cognitive debt: performance now, paid for in competence later. If an institution's policies and assessment methods reward speed and the submission of correct final products over the process of learning and intellectual risk-taking, then even a pedagogically well-designed tool may be used as a shortcut. The causal mechanism for cognitive debt, therefore, is the misalignment of the entire assemblage's reward signals with pedagogical goals. This insight makes clear that the solution cannot be limited to better tool design or user training; it must involve a holistic re-engineering of the system's values and governance structures.
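The misalignment of reward signals can be stated almost mechanically. The contrast below is a caricature with invented metric names, not any vendor's actual objective function, but it shows how an assemblage optimised for engagement differs from one optimised for learning:

```python
from dataclasses import dataclass

@dataclass
class Session:
    # All fields are hypothetical telemetry, named for illustration only.
    time_on_task: float = 0.0
    answers_delivered: int = 0
    delayed_retention_score: float = 0.0  # measured days after the session
    transfer_task_score: float = 0.0      # novel problems attempted unaided
    scaffold_fade_rate: float = 0.0       # how quickly support was withdrawn

def engagement_reward(s: Session) -> float:
    """What an efficiency-optimised assemblage tends to maximise."""
    return 0.5 * s.time_on_task + 0.5 * s.answers_delivered

def pedagogical_reward(s: Session) -> float:
    """What a pedagogically-aligned assemblage would maximise instead."""
    return (0.4 * s.delayed_retention_score
            + 0.4 * s.transfer_task_score
            + 0.2 * s.scaffold_fade_rate)
```

Under the first objective, delivering answers quickly is always rewarded; under the second, it is rewarded only if the learner can later perform without the system.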
Table 1: A Comparative Framework of AI Design Philosophies in Education
To crystallise the central conflict discussed, the following table provides a comparative framework contrasting the dominant, efficiency-optimised paradigm with the proposed pedagogically-aligned paradigm.
| Dimension | Efficiency-Optimised Paradigm (Accumulates Cognitive Debt) | Pedagogically-Aligned Paradigm (Fosters Deeper Learning) |
|---|---|---|
| Primary Goal | Provide correct answers quickly; maximise user engagement and efficiency. | Foster long-term understanding, critical thinking, and learner agency. |
| Metaphor | AI as an answer engine; a digital depository (Freire's Banking Model). | AI as a cognitive tutor; dynamic scaffolding (Vygotsky's ZPD). |
| Role of Learner | Passive consumer of information. | Active participant, jointly responsible for the learning process. |
| Treatment of Friction | Cognitive friction is a flaw to be eliminated for a "smooth" user experience. | "Desirable difficulty" or "productive friction" is a necessary feature for robust learning. |
| Underlying Theory | Misapplication of Cognitive Load Theory (Sweller, 1988). | Desirable Difficulties (Bjork & Bjork, 2011), Productive Failure (Kapur, 2008). |
| Measure of Success | Speed of task completion, user retention, clicks. | Demonstrated mastery, unaided performance, knowledge transfer. |
| Ethical Focus | Tool-level ethics (e.g., bias in output). | System-level ethics (e.g., impact on learning ecosystems, auditable processes). |
| Typical Source | Commercial, off-the-shelf products with opaque design goals. | In-house or bespoke systems designed with institutional values and pedagogy at the core. |
A Framework for Ethical Governance and Institutional Implementation
Critiquing the current paradigm is necessary, but a forward-looking, actionable solution is essential. Moving from analysis to prescription requires a robust framework for ethical governance and a clear model for institutional implementation. This involves adopting a systemic view of ethics and empowering institutions to exercise agency in their technological development.
Boddington's Third Dimension: Towards Auditable AI Systems
Paula Boddington (2017) argues for a "third dimension" of AI ethics that moves beyond debates about the moral status of machines or the specific biases of algorithms. This third dimension focuses on the governance of entire systems, emphasising accountability, transparency, and the alignment of system behaviour with human values. Applying this framework to education means treating AI not as a collection of discrete tools but as a form of critical, auditable infrastructure.
Implementing this approach would require several key shifts. First, AI systems would need to be designed for transparency, with clear logs and oversight mechanisms that allow educators and administrators to understand how the AI is interacting with students and why it makes certain recommendations. Second, and most crucially, the internal reward signals that drive the AI's behaviour must be explicitly aligned with pedagogical goals. Instead of optimising for engagement metrics like clicks or time-on-task, which can incentivise shallow, game-like interactions, the system's success would be benchmarked against genuine learning outcomes, such as improved performance on transfer tasks or demonstrated mastery of concepts. This systemic, auditable approach provides a scalable mechanism for ensuring that AI serves educational ends and offers a powerful assurance against the digital banking problem.
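What "auditable infrastructure" might mean in practice can be sketched as an append-only interaction log that educators and auditors can inspect. This is a minimal illustration assuming a hypothetical tutoring system; the record fields are invented for the example, not a standard schema.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TutorAuditRecord:
    timestamp: float
    learner_id: str           # pseudonymised identifier
    topic: str
    support_level: str        # e.g. "hint", "guiding_question", "none"
    gave_direct_answer: bool  # should be rare in an aligned system
    rationale: str            # why the system chose this intervention

def log_interaction(record: TutorAuditRecord, path: str = "tutor_audit.jsonl") -> None:
    """Append-only JSON Lines log, inspectable by educators and auditors."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: an auditor can later compute the direct-answer rate per topic,
# a simple check that system behaviour matches stated pedagogical policy.
log_interaction(TutorAuditRecord(
    timestamp=time.time(), learner_id="anon-1042", topic="fractions",
    support_level="guiding_question", gave_direct_answer=False,
    rationale="Learner within ZPD; prompted recall of prior knowledge.",
))
```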
An Institutional Proof of Concept: The Loveday (2025) Case Study
The abstract principles of system-level governance find concrete expression in the institutional implementation detailed by Loveday (2025). This case study serves as a vital proof-of-concept, demonstrating that an alternative to the passive adoption of misaligned commercial AI is both viable and effective. The institution described in the study rejected off-the-shelf solutions and instead chose to build its own in-house AI agents for specific administrative and support tasks, such as results processing and student support triage.
Several features of this implementation are key. The decision to prioritise "customisation over commercialisation" allowed the institution to embed its specific values and pedagogical priorities directly into the technology's architecture. The agents were designed to be "ethical & inclusive by design," with a "GDPR-first" approach that centred data privacy and user rights from the outset. Finally, the report highlights the importance of institutional "culture as a catalyst," suggesting that a supportive, forward-thinking organisational environment is a prerequisite for successful and ethical AI integration. The Loveday case study powerfully refutes the technologically deterministic argument that institutions have no choice but to accept the tools offered by large technology companies. It demonstrates that institutional agency is the critical mechanism for translating ethical theory into practice. By taking control of their technological development, institutions can build bespoke, values-driven systems that practically implement the principles of system-level ethics and actively mitigate the risk of cognitive debt.
Rebuttal of Alternative Positions
To fully substantiate the argument for a pedagogically aligned paradigm, it is necessary to address and dismantle prominent counterarguments. These alternative positions typically defend the status quo by appealing to empirical evidence of efficacy, established learning theories, or pragmatic concerns about cost and efficiency. A thorough rebuttal of these claims on all three fronts strengthens the case for systemic change.
1. The Illusion of Efficacy: Deconstructing Short-Term Learning Gains
A common argument against the concept of cognitive debt is that emerging evidence, particularly meta-analyses of studies on ChatGPT, shows that AI assistance improves student performance and even higher-order thinking. Studies such as those by Deng et al. (2025), Wang et al. (2025), and Han et al. (2025) are often cited to support the claim that worries about cognitive degradation are overblown.
This position, however, is rejected for its reliance on a narrow and misleading definition of "learning." These meta-analyses primarily measure short-term, tool-dependent performance gains. While students may produce better essays or solve problems more quickly with the AI, these studies typically do not evaluate long-term retention or the ability to perform similar tasks unaided. This is the very definition of the illusion of mastery. More importantly, this behavioural evidence must be contrasted with emerging neuroscientific findings. A pre-print study by Kosmyna et al. (2025) from MIT suggests that using an AI assistant for an essay-writing task is correlated with decreased neural connectivity and evidence of shallower cognitive processing. This indicates that while the output may be superior, the underlying cognitive work essential for learning is being offloaded. The short-term gains in performance thus mask the accumulation of long-term cognitive debt, creating a dangerous trade-off between immediate productivity and durable competence.
2. The Misapplication of Cognitive Load Theory
A second counterargument comes from the domain of user experience and learning theory, positing that "friction ruins user experience" and that "smooth AI is always better". This view is often justified by appealing to Cognitive Load Theory, developed by John Sweller (1988), which argues that reducing extraneous cognitive load can improve learning outcomes. Proponents of frictionless AI contend that by making information access and task completion as easy as possible, they are simply applying this established principle.
This argument is based on a critical misapplication of the theory. Cognitive Load Theory distinguishes between three types of load: intrinsic (the inherent difficulty of the material), extraneous (difficulty imposed by poor instructional design), and germane (the effortful work of processing information and constructing mental models). While extraneous load should indeed be minimised, germane load is the very engine of learning. The "smooth AI" position conflates these distinct concepts, treating all cognitive effort as extraneous. In its quest to eliminate all friction, it also eliminates the "desirable difficulties" (Bjork & Bjork, 2011) and "productive struggle" (Kapur, 2008) that are essential for creating robust, long-term knowledge. As evidence from Kosmyna et al. (2025) suggests, this pursuit of smoothness can result in weaker retention and reduced neural activity, demonstrating that not all difficulty is detrimental; some is essential.
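The distinction can be summarised in a few lines. The mapping below is a deliberately simplified taxonomy, with invented example interventions, intended only to show which kinds of effort a frictionless design wrongly removes:

```python
# Simplified taxonomy of cognitive load (after Sweller, 1988); the example
# interventions are illustrative assumptions, not an exhaustive classification.
LOAD_TYPE = {
    "cluttered_interface": "extraneous",       # remove: no learning value
    "confusing_instructions": "extraneous",    # remove: poor instructional design
    "retrieval_practice": "germane",           # keep: builds mental models
    "self_explanation_prompt": "germane",      # keep: drives consolidation
    "inherent_topic_complexity": "intrinsic",  # manage via sequencing, not removal
}

def should_eliminate(intervention: str) -> bool:
    """An aligned design removes only extraneous load; a 'frictionless'
    design removes germane load too, and with it the learning."""
    return LOAD_TYPE.get(intervention) == "extraneous"
```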
3. The Hidden Costs of Commercial AI
The final rebuttal addresses the pragmatic argument that commercial, off-the-shelf AI is simply cheaper, faster, and more efficient to deploy than developing bespoke, in-house systems. Proponents of this view cite the economies of scale, continuous updates, and lower initial costs offered by major technology vendors (OECD, 2021).
This position is rejected because it fails to account for the significant hidden costs and strategic risks associated with ceding control to external vendors. These hidden costs include potential data privacy vulnerabilities, the long-term financial and operational risks of vendor lock-in, and, most importantly, a fundamental inability to ensure pedagogical alignment. Commercial AI products are typically designed for a general market, with opaque algorithms and objectives optimised for engagement and scalability, not for the specific pedagogical values of an educational institution. As guidelines from both the National Institute of Standards and Technology (NIST, 2023) and UNESCO (2023) emphasise, maintaining institutional control over governance, risk management, and ethical objectives is not a luxury but a core requirement for the responsible implementation of AI. The perceived short-term savings of commercial AI are often outweighed by the long-term costs of pedagogical misalignment and the loss of institutional autonomy.
Conclusion: Designing for Deeper Learning in the Age of AI
This post has charted the emergence of a cognitive debt crisis in education, arguing that it is the direct result of a structural misalignment between the design of many AI tools and the fundamental principles of human learning. The dominant paradigm, driven by a philosophy of frictionless efficiency, has inadvertently resurrected Paulo Freire's "banking model" of education at a digital scale, promoting passive consumption and undermining the development of critical, independent thought. This approach squanders the immense potential of AI to solve Bloom's 2-sigma problem by ignoring the Vygotskian principles of scaffolding and productive struggle that are essential for genuine mastery.
The solution lies not in rejecting technology, but in embracing a new design paradigm: one that consciously prioritises deeper learning over superficial efficiency. This requires embedding the principles of desirable difficulty and productive friction into the core architecture of educational AI, transforming these tools from answer engines into sophisticated cognitive tutors that foster learner agency and resilience. However, better tools alone are insufficient. The crisis demands a shift towards systemic accountability, as articulated in Boddington's call for a "third dimension" of ethics that treats AI as auditable infrastructure. The responsibility must be placed on the entire educational assemblage (the network of technology, policy, and practice) to align its objectives with pedagogical values.
As the Loveday (2025) case study demonstrates, this is not a utopian ideal but an achievable reality. By exercising institutional agency and investing in bespoke systems that are ethical and pedagogical by design, educational institutions can forge a different path. The choice confronting educators, developers, and policymakers is therefore a stark one. It is a choice between the short-term convenience of frictionless technology and the long-term cognitive health of learners. To secure a future where AI serves the true purpose of education, we must commit to designing systems that challenge, guide, and empower the human mind, not bypass it.
References
Bayne, S. (2018) 'Posthumanism: A navigation aid for educators', on_education: Journal for Research and Debate, no. 2 (September).
Bjork, E.L. and Bjork, R.A. (2011) 'Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning', in Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society, pp. 59-68.
Bloom, B.S. (1984) 'The 2 Sigma Problem: The search for methods of group instruction as effective as one-to-one tutoring', Educational Researcher, 13(6), pp. 4-16.
Boddington, P. (2017) Towards a Code of Ethics for Artificial Intelligence. Cham: Springer.
Deng, R., Benitez, J.A., Wang, J., Chen, Y. and Kim, J. (2025) 'Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies', Computers & Education, 105224.
Freire, P. (2000) Pedagogy of the Oppressed. 30th Anniversary ed. New York: Continuum (orig. 1970).
Han, X., Zhang, Y., Liu, Q. and Li, H. (2025) 'The impact of Generative AI on learning outcomes: A systematic review and meta-analysis of experimental studies', Educational Research Review.
Kapur, M. (2008) 'Productive failure', Cognition and Instruction, 26(3), pp. 379-424.
Kosmyna, N., Hauptmann, E., Yuan, Y.T., Situ, J., Liao, X.-H., Beresnitzky, A.V., Braunstein, I. and Maes, P. (2025) 'Your Brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task', arXiv preprint, arXiv:2506.08872.
Loveday, C. (2025) Leading the Shift: Enhancing Operational Efficiency with AI. London: Grosvenor House Publishing.
National Institute of Standards and Technology (NIST) (2023) Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. Gaithersburg, MD: NIST.
OECD (2021) OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots. Paris: OECD Publishing.
Pratschke, M. (2025) From AI Tools to AI Systems: Designing Pedagogical Systems for Learning. Preprint (Zenodo).
Roediger, H.L. III and Karpicke, J.D. (2006) 'Test-enhanced learning: Taking memory tests improves long-term retention', Psychological Science, 17(3), pp. 249-255.
Sweller, J. (1988) 'Cognitive load during problem solving: Effects on learning', Cognitive Science, 12(2), pp. 257-285.
UNESCO (2023) Guidance for Generative AI in Education and Research. Paris: UNESCO.
Vygotsky, L.S. (1978) Mind in Society: The Development of Higher Psychological Processes. Edited by M. Cole, V. John-Steiner, S. Scribner and E. Souberman. Cambridge, MA: Harvard University Press.
Wang, J., Fan, W. and Chen, X. (2025) 'The effect of ChatGPT on students' learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis', Humanities and Social Sciences Communications.
Wood, D., Bruner, J.S. and Ross, G. (1976) 'The role of tutoring in problem solving', Journal of Child Psychology and Psychiatry, 17(2), pp. 89-100.
Acknowledgement
This work made use of Gemini 2.5 Pro, developed and published by Google and available at https://gemini.google.com. The tool was used as a support tutor to aid in researching the topic and to help draft and structure the work, after the author had first worked through the problem.
Copyright © 2025 Saqib Safdar