Texas janitor indicted in case of AI-made sexualized images of students
A Texas school district is confronting a hard test of digital safety after authorities accused a campus janitor of using AI to create sexualized composites from real student images.
Daril Martin Gonzales, 55, was arrested and indicted last week on one count of possession and attempted possession of child pornography and one count of possession and attempted possession of an obscene visual representation of a child, a development first reported by KVII. The case is an early test of how prosecutors apply existing statutes to AI-generated imagery.
According to the US Attorney's office, Gonzales worked as a janitor for Anson ISD near Abilene and moonlighted as a free sports and cheerleading photographer for middle and high school students, a role that gave him regular access to the students' photos.
Prosecutors said he superimposed the faces of prepubescent students onto the bodies of adults in sexually explicit videos and attached AI-generated nude bodies to the faces of girls. Police recovered at least six videos and three photos altered in this way.
Students described shock and fear about reputational harm, knowing altered images can circulate long after the originals are taken. "I felt disgusted, embarrassed, and scared. I was worried that photos of me could be posted or sold somewhere," said one. "I was embarrassed cause I didn’t want people to think of me in this way when I hadn’t done anything." "I know I can’t do anything about what he did," said another. "I don’t think I did anything wrong. He’s in the wrong."
In a police report, Gonzales called his behavior a power trip and admitted to viewing child pornography for up to six hours a day over the past 20 to 25 years, a long-running pattern that AI tools have now made easier to act on.
If convicted, Gonzales faces up to 20 years in federal prison, followed by a possible lifetime of supervised release.
Beyond the immediate case, the episode highlights the tension between open access to generative tools and the need for stricter guardrails in schools, a tension likely to shape procurement decisions, safety audits, and parental expectations as education leaders grapple with AI.
For founders and investors, demand may grow for identity protection, provenance checks, and incident-response features, tilting funding toward practical safeguards that reduce misuse. The broader lesson is that institutions will judge synthetic media by the resilience of the controls around it, and the winners in enterprise content will be those who build systems that minimize harm while withstanding the scrutiny AI now attracts.