Originally published on the Eduaide.Ai Blog.
A new paper (Chen et al., 2025) claims that generative AI tends to reproduce outdated pedagogy by default but, with intentional design, can be steered to reflect contemporary instructional values like student autonomy, peer collaboration, and dialogic learning.
Their controlled experiment evaluated 90 AI-generated lesson plans from three conditions: vanilla GPT-4, MagicSchool, and School AI. Each output was coded using Vaughn’s framework of student agency (dispositional, motivational, positional) and Alexander’s taxonomy of classroom talk (rote, recitation, exposition, discussion, dialogue). The coding showed that current outputs lean heavily toward teacher-centered instruction, that is, instruction dominated by rote tasks and expository talk. Their prompt-engineered prototype shows measurable improvements across both domains: student agency and classroom dialogue.
This is the kind of contribution I like. The paper identifies a real problem and proposes a plausible, actionable solution. Their coding framework is rigorous. Their design intervention is concrete. There is obvious utility here for tool developers. In fact, we’re developing some tooling for Eduaide based on this critique.
But the solution, while practical, is also partial. The paper assumes that AI can be redirected toward ideal pedagogy through better prompt engineering. I expect we’ll see gains from this approach, but they will be limited and, in the end, marginal. We must seriously engage the possibility that some educational values are not computable. In treating AI’s limits as a design problem, we neglect the epistemological one right under our noses. In short, the paper underestimates the irreducibility of teaching to algorithmic systems. Again, there are simple design moves we can and should make to improve outputs, but the fundamental limitations facing AI in education are left relatively unchanged.
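To make those "simple design moves" concrete, here is a minimal sketch of what steering a lesson-plan generator toward Vaughn’s agency dimensions and Alexander’s dialogic talk types might look like at the prompt layer. This is my own illustration, not the prototype from Chen et al. (2025); the function name, constraint wording, and example topic are assumptions made for the sake of the example.

```python
# Illustrative sketch only: not the prompt used in Chen et al. (2025).
# It shows one way a lesson-plan generator could encode the paper's two
# coding frameworks (Vaughn's agency dimensions, Alexander's talk types)
# as explicit constraints before any text reaches the model.

AGENCY_DIMENSIONS = ["dispositional", "motivational", "positional"]
DIALOGIC_TALK = ["discussion", "dialogue"]  # vs. rote, recitation, exposition


def build_lesson_prompt(topic: str, grade: str) -> str:
    """Compose a prompt that asks the model to foreground student agency
    and dialogic talk rather than teacher-led delivery."""
    constraints = [
        f"Address every agency dimension: {', '.join(AGENCY_DIMENSIONS)}.",
        "Include at least two activities where students make a consequential "
        "choice of task, role, or product.",
        f"Plan classroom talk that is mostly {' or '.join(DIALOGIC_TALK)}; "
        "limit teacher exposition to short framing moves.",
        "For each activity, state who holds the floor and who sets the question.",
    ]
    return (
        f"Draft a {grade} lesson plan on '{topic}'.\n"
        "Design constraints:\n- " + "\n- ".join(constraints)
    )


if __name__ == "__main__":
    print(build_lesson_prompt("photosynthesis", "7th-grade"))
```

Even a scaffold this simple shifts the default away from exposition, which is the paper’s point; my argument is only that the shift has a ceiling.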
The bias of the tool reflects the bias of the practice. Pedagogical bias is sociological in nature. It arises from institutional accountability structures, class sizes, standardization, professional incentives, and a bevy of teacher effects. Sure, the paper acknowledges the need for teacher involvement, but it treats that involvement as a supplement to tool design. I’d argue it is the precondition for any successful implementation. A tool is a mirror of its uses.
So, yes, let’s build better prompts. But let’s also be clear-eyed about the limits. Not just of AI, but of the broader system it reflects.
References
Alexander, R. J. (2008). Towards dialogic teaching: Rethinking classroom talk (4th ed.). Dialogos.
Chen, B., Cheng, J., Wang, C., & Leung, V. (2025, in press). Pedagogical biases in AI-powered educational tools: The case of lesson plan generators. The Social Innovations Journal.
Vaughn, M. (2020). What is student agency, and why is it needed now more than ever? Theory Into Practice, 59(2), 109–118. https://doi.org/10.1080/00405841.2019.1702393