Texas Educators Attribute Low Test Scores of English Learners to Testing System
Emma Wordsmith
State test scores of English-learning students fell sharply after the test was redesigned and computerized scoring was introduced in 2018, according to bilingual educators, who attribute the drop to the test's format rather than to students' language skills.
Students taking the Texas English Language Proficiency Assessment System, or TELPAS, previously engaged directly with teachers; they now interact with a computer, speaking their responses into a microphone. The Texas Education Agency uses software to score students’ spoken language.
An analysis by the Texas Tribune shows that students’ scores declined after the new test was implemented. In the years before 2018, approximately 50% of students in grades 4-12 achieved the highest score on the speaking section; since 2018, only around 10% have reached the top score each year.
While passing the TELPAS isn’t a graduation requirement, the scores can influence students’ academic paths. Those who do not pass may face extended time in remedial English classes, limiting their elective choices and their chances of being recommended for advanced courses, which are crucial for college applications.
Educators express frustration with the state education agency’s testing methods, saying many students who are proficient in English receive low scores, possibly because of the test changes.
Jennifer Phillips, an experienced bilingual educator, critiques the artificial testing environment, stating, “It’s a flawed system,” as it hinders natural language expression.
TEA counts TELPAS scores as 3% of the A-F ratings given to school districts and campuses. Though a small share of the overall score, they play a significant role in district evaluations, and some districts have contested recent changes to the metric.
TEA’s use of automated scoring for TELPAS has faced scrutiny, especially after the agency applied similar technology to the State of Texas Assessments of Academic Readiness, or STAAR. Educators are wary of automated systems and have cited them as a concern in lawsuits against the state.
English Learner Assessment
Upon entering Texas public schools, students who report that a language other than English is spoken at home take an English assessment. Unlike other states, Texas developed its own test to regularly evaluate English learners’ progress.
Annually, approximately a million emergent bilingual students in Texas public schools undertake the TELPAS, which consists of listening, reading, writing, and speaking sections.
Prior to 2018, teachers administered in-person TELPAS tests, assessing listening and reading through multiple-choice sections, and writing through classroom samples. Speaking evaluations were conducted through direct interactions or rubric assessments.
The transition to an online platform for TELPAS in 2018 standardized testing practices. The automated scoring technology aimed to improve reliability and deliver quicker speaking assessment results. Last year, the technology extended to evaluating written responses.
Students’ proficiency is rated at one of four levels – beginner, intermediate, advanced, or advanced high – and they must now score advanced high in at least three domains; previously, advanced high was required across all four.
Although English-learning students’ performance on the STAAR test has improved since 2021, TELPAS scores, particularly in speaking, have remained low since the test redesign.
Ericka Dillon from Northside ISD noted that many proficient English learners struggle to achieve advanced high scores on TELPAS, which affects how their progress is evaluated.
After a data analysis revealed that TELPAS speaking scores had declined since the redesign, TEA emphasized the need for standardized evaluation practices across the state and attributed the changes to the inherently challenging nature of the speaking and writing assessments.
The agency defended its automated scoring system, pointing to oversight and human intervention in the scoring process. TEA cited a technical advisory council’s approval of the technology and noted that a portion of assessments undergo human review to ensure accuracy.
Human Reviews and Rescoring
Educators remain skeptical of the automated TELPAS system, especially after observing score fluctuations upon human review. Last year, a share of speaking assessments received higher scores after review, suggesting inconsistencies in the system.
Spring Branch ISD reported a higher rate of improved assessments post-rescoring, raising concerns about the accuracy of the automated system’s initial scoring.
Deciding which results to contest is difficult because districts have limited access to students’ audio responses, whereas written responses for STAAR tests are readily available. Guidelines for requesting rescores, and their potential costs, further complicate educators’ decisions. Based on district feedback, access to TELPAS responses is set to improve in the upcoming school year.
Assessment Impact and Recommendations
Edith Treviño, an ESL specialist turned consultant, questions the automated system’s treatment of students with accents or bilingualism, expressing concerns over penalization based on language nuances.
Treviño emphasizes the disconnect between TELPAS prompts, response criteria, and students’ abilities, echoing educators’ sentiments on the test’s limitations in accurately assessing English proficiency.
TEA’s disregard for nuanced language use and accent diversity in student responses has raised apprehension among educators, prompting calls for more student-oriented test criteria.
Educators question whether TEA’s automated scoring system, which they see as poorly equipped to assess language nuances, can adequately reflect students’ true language abilities.
Phillips highlights the potential impact of test results on students’ self-esteem, academic progression, and opportunities, warning of the consequences of language-based discrimination in educational settings.
Carlene Thomas, CEO of an education consulting firm and former ESL coordinator, suggests incorporating conversational assessments and enhancing student support in language practice to ensure TELPAS effectively evaluates students’ language proficiency.
Despite a push for improved assessment methods, educators maintain that TELPAS falls short of providing an accurate picture of students’ language skills, and they are calling for more meaningful evaluations.