Artificial Intelligence as a mirror
What AI reveals about learning and education: balancing freedom and guidance
Artificial intelligence (AI) is often described as a technological marvel. Yet the principles that underpin it stem from decades of research into human cognition and learning. This paper examines how insights from cognitive science shape AI and contrasts this with how inconsistently these same insights are applied in education. By highlighting differences between machine learning and human learning, the paper advocates a human-centered approach, emphasizing learner agency, structured support, and meaning-making.
AI achievements frequently dominate headlines: systems that recognize patterns, anticipate outcomes, and adapt to new information. Beneath this technical success lies an important truth: modern AI is inspired by human learning. Artificial neural networks are loosely modeled on the brain, and strategies such as self-supervised learning draw directly from observations of how infants acquire knowledge. Young children learn from limited input, form internal models of the world, and adapt their behavior to match complex, changing environments (Bengio, 2019; LeCun, 2022).
This insight exposes a tension in education. While AI research applies these cognitive principles rigorously, human learners often navigate classrooms where guidance is inconsistent.
Learning between structure & meaning
Cognitive science shows that learning is not the passive absorption of information. It involves the active organization of knowledge, moving from concrete experience to abstract understanding (Bransford, Brown, & Cocking, 2000). Constructivist theory emphasizes learners’ agency, while cognitivist theory stresses structured support and the management of cognitive load. In education, these perspectives are often treated as opposites, resulting in fragmented approaches to teaching and learning.
AI development demonstrates a more integrated perspective. Effective systems combine structured architectures with the ability to learn flexibly from data. Research by Lake, Ullman, Tenenbaum, and Gershman (2017) illustrates that human-like learning depends on both efficiency and adaptability. In other words, representation and interaction are complementary rather than competing.
Education can adopt a similar balance. Without structure, learners risk confusion; without meaning, engagement remains superficial. Optimal learning occurs at the intersection of guidance and exploration, where learners construct understanding through supported inquiry.
The human learner in focus
Too often, students are treated as passive recipients or as abstract “levels” in a curriculum. In reality, learners are humans navigating uncertainty, forming strategies, experimenting, and sometimes failing. Consider a student given a research project with minimal guidance. Left to their own devices, they may rely on shortcuts, such as AI-generated content, not because they are unwilling to engage but because they lack explicit instruction on how to approach complex tasks.
This illustrates the gap between what we know about learning and how we enact it. Students need more than freedom; they need scaffolds that help them take ownership of their learning, understand the reasoning behind tasks, and reflect on their choices. When these elements are absent, autonomy becomes a hollow concept.
Vygotsky’s (1978) zone of proximal development captures this need for support. Guidance is essential for learners to stretch beyond their current capabilities, but in practice it is often poorly defined. Classrooms sometimes create “space for learning” without clarifying what learners should do within that space, leading to frustration or dependence on external tools.
Technology, dependency & motivation
The presence of AI in education introduces both opportunity and risk. Students can complete tasks more quickly, but the temptation to outsource thinking can undermine deep learning. When learning is insufficiently structured, tools become substitutes rather than supports.
Self-determination theory highlights the conditions for meaningful engagement: autonomy, competence, and relatedness (Deci & Ryan, 2000). When these needs are unmet, students may follow instructions superficially, relying on shortcuts instead of developing understanding. AI can highlight this human vulnerability: systems learn without motivation or intent, but humans require purpose, reflection, and feedback to internalize knowledge.
Toward a human-centered approach
AI illustrates what is possible when learning is treated with precision. Hypotheses are tested, errors are corrected, and assumptions are revised in response to evidence. Education could benefit from a similar rigor, without reducing learning to computation. Structure and exploration must coexist, creating conditions in which learners actively construct meaning while receiving targeted guidance.
Central to this approach is learner agency. Ownership emerges not from unstructured freedom, but from clarity, scaffolding, and opportunities to reflect. Students must learn not only content, but also how to navigate uncertainty, critically evaluate information, and connect experiences to broader understanding. This aligns with heutagogical principles, which emphasize self-determined learning and reflective practice.
The contrast is stark. AI systems are engineered to learn efficiently, yet we allow human learners to develop in environments that often lack consistency, feedback, and support. Recognizing this gap is the first step toward improving educational practice.
Note
Artificial intelligence does not surpass humans; it mirrors our understanding of learning. It highlights the precision and care applied to machine development and the inconsistencies in how we support human learners. Education is at its best when structure and meaning meet, when learners are supported yet challenged and when agency is cultivated through guidance and reflection.
If educators approach learning with the same rigor applied to AI and if learner development is valued as highly as system optimization, education could become not only more effective, but also more human-centered. Ultimately, the goal is not to make students perfect learners, but to create conditions in which they can grow with purpose, curiosity and confidence.
References
Bengio, Y. (2019). From system 1 deep learning to system 2 deep learning. NeurIPS.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school. National Academy Press.
Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253.
LeCun, Y. (2022). A path towards autonomous machine intelligence. OpenReview.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.