AI in Schools Shows Both Promise and Overhype, According to Recent Reports

Are American public schools falling behind other nations like Singapore and South Korea in preparing teachers and students for the rise of generative artificial intelligence? Are our educators venturing into AI unprepared, potentially jeopardizing students’ learning?

Could it possibly be both?

Two reports, coincidentally published last week, present contrasting views on the burgeoning field: One emphasizes the need for forward-looking policies to ensure AI reaches urban, suburban, and rural districts equitably. The other argues for a grounding in what AI can and cannot do, its benefits, and its potential dangers.

A recent report by the Center on Reinventing Public Education (CRPE), an Arizona State University-based nonpartisan think tank, recommends that educators actively shape the trajectory of AI development, collectively articulating to educational technology companies what they expect from AI.

The report suggests that a unified entity collaborate with school districts to communicate their AI tool requirements to ed tech providers, cautioning against sending fragmented signals that may result in subpar products.

It also advocates for closer collaboration between educators, researchers, and ed tech companies in the rapidly evolving landscape of AI technologies.

“If districts are hesitant to share data with researchers – as reported by ed tech developers – we face significant challenges in determining effective practices,” shared CRPE Director Robin Lake in an interview.

The report underscores the importance of treating AI as a transformative force in classrooms, urging stakeholders at all levels, from teachers to governors, to be proactive in navigating AI’s potential impact on education. It highlights existing disparities in AI integration in schools, with suburban districts significantly more inclined towards AI teacher training than urban and rural counterparts.

These conclusions, which stem from an April gathering of more than 60 public- and private-sector officials, liken AI’s influence to that of extreme weather and rising political extremism, anticipating far-reaching repercussions for the education sector. The report urges educators to examine innovative AI solutions that other school districts, states, and nations have adopted to address post-pandemic educational hurdles.

For example, educators in Gwinnett County, Ga., embraced AI-driven learning initiatives as early as 2017. They developed an AI Learning Framework aligned with the district’s educational goals, formulated an AI-centric curriculum pathway in collaboration with the state, and established a school that seamlessly integrates AI across various subjects.

Lake pointed to initiatives in states such as Indiana, which incentivizes experimental AI projects, including a recent call for AI-facilitated tutoring. This structured approach, she said, helps districts express their goals effectively.


She also stressed the importance of setting parameters for experimentation at the state level to avoid setbacks like those of the Los Angeles Unified School District, which recently disabled its much-touted $6 million AI chatbot following the tech firm’s restructuring.

“We can enhance the environment for districts to experiment and safeguard student interests better, even though absolute risk elimination is unfeasible,” Lake emphasized.

AI ‘automates cognition’

In stark contrast, a report by Cognitive Resonance, a newly established think tank in Austin, Texas, challenges the prevailing notion that generative AI will revolutionize education.

“We shouldn’t assume its omnipresence in education,” remarked Benjamin Riley, the organization’s founder. “We need to question if broad AI integration is truly beneficial.”

The report cautions against relying on AI for critical functions like lesson planning and tutoring, questioning its appropriateness in instructional settings because of its tendency to generate unrealistic scenarios and misinformation, and its potential to undermine students’ cognitive development.

Riley, a longstanding advocate for cognitive science in K-12 education, shared his skepticism about the perceived educational benefits of the technology.


“I seriously doubt whether the evidence supports the assertion that this technology significantly enhances learning outcomes or critical student metrics at this juncture,” Riley expressed in an interview. “Given the technology’s nascent status, substantiated claims are scarce.”

Generative AI is, by design, a tool that “automates cognition” for users, reducing the need for critical thinking and, in turn, affecting how people learn. “Reduced cognitive engagement hinders the depth of comprehension and retention,” Riley explained.


Riley recently stirred controversy within the ed tech domain by advocating for a restrained approach to integrating generative AI in schools. He critiqued Khan Academy’s AI-driven Khanmigo chatbot for inaccuracies in math concepts and the illusion of interactive conversations it fosters.

He criticized AI for displaying characteristics that run counter to educational objectives, citing cognitive scientist Gary Marcus’s characterization of generative AI as “often inaccurate yet confidently assertive.”

Co-authored by Riley and education policy scholar Paul Bruno of the University of Illinois, the report advises educators to pause and critically assess what large language models (LLMs) can and cannot do. Structured around four Q&A segments, the report seeks to demystify AI’s role:

  • Can large language models learn like humans? No.
  • Can large language models engage in reasoning comparable to humans? Not precisely.
  • Does AI render traditional content taught in schools obsolete? No.
  • Will large language models surpass human intelligence? No definitive answer.

Riley acknowledged that AI’s arrival in education is likely inevitable, but challenged prevailing beliefs about its transformative impact. “While AI adoption is probable across educational domains, I doubt its relevance to core educational outcomes,” he remarked. AI, he hypothesized, may end up facilitating administrative tasks like scheduling and grading rather than revolutionizing how students learn.

“The ubiquity of AI applications may not align with significant educational impacts,” Riley hypothesized. “It may serve ancillary functions like administrative tasks rather than core educational objectives.”

The risks associated with relying on AI for tutoring and lesson planning are contextualized in the report, highlighting the potential fallibility of large language models in predicting effective lesson sequences. Riley and Bruno underscore the importance of providing high-quality content examples to steer AI model outputs in a positive direction.

On tutoring, the report notes that large language models do not adapt based on their interactions with students; they rely instead on their training data. This limitation may hinder their effectiveness in customizing tutoring to individual students.


The dueling narratives presented by Lake and Riley add nuance to the AI-in-education discussion. The two have continued the exchange in their respective newsletters, modeling a constructive dialogue on how best to navigate AI’s integration into educational settings.

CRPE’s report serves, in effect, as a response to the potential pitfalls Riley and Bruno identify, urging educators and policymakers to steer AI development proactively. While Riley and Bruno offer immediate insights into how AI functions and what AI-generated content looks like, CRPE projects a broader strategic outlook.

Among the key insights from CRPE’s April gathering, Lake said, is the need to widen the policy discourse beyond the stakeholders directly involved in AI deployment. “There’s a growing realization that we must broaden the dialogue to encompass broader voices, including civil rights advocates, parents, and students,” Lake stressed.

The sole student representative, Irhum Shafkat, a senior at Minerva University, described how Khan Academy’s AI-led resources had transformed his own education, emphasizing technology’s potential to democratize learning. “Technology provides equitable learning opportunities that propel students based on their abilities,” Shafkat reflected.

Lake underscored the importance of incorporating student perspectives into AI discussions and of empowering students to shape educational approaches. “Allowing youth to lead promotes inclusive educational strategies by foregrounding their interests and insights,” she said.

CRPE’s report advocates for strategic solutions focused on practical tools that address existing gaps. These could include enhanced language translation capabilities, text-to-voice support for language learners, improved feedback platforms, and research summaries for educators, with an emphasis on practical utility over flashy innovations.

As one participant aptly advised, “Prioritize practical functionalities over trendy applications.”
