Benjamin Riley Believes AI in Education Technology Is Set Up for Failure

For more than a decade, Benjamin Riley has led the charge in encouraging educators to look more closely at how we learn.

He founded Deans for Impact in 2015, enlisting university education school deans to bring insights from cognitive science into teacher training. Before that, he served as policy director of the NewSchools Venture Fund, which backs innovative educational models. Through his latest venture, Cognitive Resonance, described as a “think-and-do tank,” Riley aims to focus attention not only on how we learn but also on how generative artificial intelligence (AI) works, and on the differences between the two.

His Substack newsletter and Twitter feed systematically challenge lofty claims about AI-powered tutors. Critiquing Sal Khan’s YouTube demo of OpenAI’s GPT-4o, Riley pointed out its artificial educational setting and suggested the tool may perform quite differently in real classrooms.

In April, Riley stirred controversy in the startup world with an essay in Education Next that chided AI companies, including Khan Academy, for treating students as experimental subjects.

Benjamin Riley (at right) speaking at the AI at ASU+GSV conference in San Diego in April. (Greg Toppo)

Recounting a session in which he asked Khanmigo for help with algebra, Riley described the AI second-guessing arithmetic as basic as the result of 2 + 2.5, the kind of error he believes makes the tool actively counterproductive for learning.

Such interactions, he argued, do more than fail to help students; they set learning back, inviting mistaken interpretations and wasted effort.

The interview has been edited for length and clarity.

The 74: What are your thoughts on the tendency to overlook established learning science in favor of ed tech advancements?

Benjamin Riley: The tech world’s enthusiasm for solving educational challenges often neglects the cognitive science principles essential to effective learning. Earlier visions of tailored learning, exemplified by projects like AltSchool or Gradient, foundered because they were misaligned with proven learning methods. The resurgence of AI, though promising, risks repeating those errors by substituting technology for critical aspects of education.

Your admiration for cognitive scientist Daniel Willingham is evident in your work. What resonates with you about his contributions?


Willingham has a rare gift for distilling cognitive processes into succinct, profound insights. His observation that our minds naturally resist effortful thinking parallels education’s central struggle: getting students to engage in rigorous cognitive work. Technologies often bypass this challenge, assuming students are ready to learn effortlessly, a fallacy that undermines their educational value.

How do large language models address that demand for critical thinking skills?


Large language models lack the cognitive capacity for genuine thinking. They predict plausible responses to prompts, which serves many functional purposes but falls short of supporting meaningful critical thought. Advocates pitch these models as endlessly patient tutors; genuine educational growth, however, requires interactive, intellectually demanding teaching, a dimension beyond the capabilities of LLMs.

As you’ve pointed out, interaction with LLMs mimics conversation but lacks genuine cognitive processes. How do you view that disparity?

The façade of dialogue that LLMs maintain obscures their absence of cognitive engagement. That poses real risks in educational contexts, where the accuracy of information is paramount.

Citing cognitive scientist Gary Marcus, you highlight the danger of LLMs producing confident yet erroneous outputs. Why is this concerning, especially for young users?

The gap between how confident an LLM sounds and how accurate it actually is creates real risk in educational settings. Young users are the least equipped to spot fluent falsehoods, and a steady stream of factual inaccuracies hinders authentic learning. That argues for cautious implementation to mitigate the harm.

Considering the potential socioeconomic implications, do you foresee a technology-driven class divide in education?

That question gets at a fundamental concern about technology’s widening social impact in schools. Tools that relieve cognitive burdens hold obvious appeal for struggling students, but offloading the very thinking that produces learning risks entrenching educational disparities and deepening existing divides in outcomes and life prospects.

How does education research navigate the evolving landscape of AI integration?

Generative AI is advancing faster than conventional research methodologies can track, which demands new approaches to studying these technologies’ effects. Policymakers and researchers must adapt quickly to steer AI’s integration toward its benefits while minimizing its harms.

The precedents set by social media and smartphones make a cautious exploration of AI’s effects in education essential. Given the technology’s multifaceted implications, proactive evaluation and thoughtful deployment are imperative, much as responsible drug development tests for safety before release, to ensure educational AI is both useful and safe.
