Recently, parents sued their child's school because their son was punished for using AI on his class assignments. When I hear news like this, it always gives me pause, because it suggests that every party in the story got something wrong. The son probably should have told his teacher that he would be using AI to complete his assignments. Teachers should have discussed with students when it is acceptable to use AI on homework and when it is not. And school administrators should have spent the better part of the past two years designing comprehensive policies for AI-generated content in assignments.
Parents also bear some responsibility in this story. A school's lack of a clear policy should not be sufficient reason to sue over a perceived missed opportunity.
I'm not a lawyer, but I do know that litigation at this early stage of AI edtech development can stifle innovation and create unnecessary backlash. I am, however, a tenured professor and educational psychologist focused on understanding how educational technology affects student learning and motivation.
As associate director of USC's Center for Generative AI and Society, I study how students and educators have responded to the era of AI-driven tools, and I have seen many misconceptions and, at times, an overall lack of foresight. I have also seen a failure to learn from previous mistakes, as was the case with the Los Angeles Unified School District's recent edtech fiasco.
AI-driven education technology will not solve all of education's problems, as some prominent venture capitalists would have us believe. But it isn't our metaphorical monster under the bed either. It is simply another set of tools to be integrated into the educational environment.
Take my word for it: in about five years, we will know what AI in education can and cannot help with. After that, we will begin to see AI simply as a commonplace technology that serves narrow purposes, sometimes doing harm and sometimes good, the same way we now look at our smartphones. Until then, it is impossible to predict just how capable AI will become.
But what I do know right now is that AI's problems cannot be solved through litigation. If AI is to be used in education, it cannot be introduced in an atmosphere of anxiety. Students should be encouraged to experiment with AI so they can better understand how it can help them learn, how it can motivate them, and how it can help them discover new ways to explore the world around them.
Educators should similarly feel free to consider AI tools to fill gaps in the classroom and personalize content for students. None of this can happen if everyone is looking over their shoulder, wondering who sets the rules they might inadvertently break, or if overly litigious parents cry denial of civil liberties whenever their kids use a technology that is still being developed. Innovation and development cannot occur under such conditions.
Instead, everyone should take a deep breath and understand that while AI in education is here to stay, its place in schools is still being worked out. In the meantime, the adults in the room need to create policies that foster curiosity among students and staff, so that the truly useful aspects of AI in education can emerge. These policies should be drafted now, with an emphasis on documentation, so that educators understand how AI may be used when it is used. Educators need to encourage students to use AI when it is developmentally appropriate, and parents need to trust educators to make that call.
Stephen J. Aguilar is a tenured professor at the USC Rossier School of Education, specializing in educational psychology. His research focuses on the impact of educational technologies, including AI, on student learning, motivation, and educational outcomes. He is also associate director of USC's Center for Generative AI and Society.