At a time when artificial intelligence is rapidly reshaping classrooms, a group of students at Cal State San Bernardino is taking a different approach: slow down, ask questions and stay in control.

Instead of treating AI as a shortcut or a threat, students across disciplines are stepping forward to teach their peers how to use it responsibly — with skepticism, context and clear academic boundaries.

The effort is part of a student-led workshop series guided by Viktor Wang, a professor in the Department of Educational Leadership and Technology. Participants from education, career technical education, mathematics and computer science are not just learning about AI. They are demonstrating how it works, where it falls short and why human judgment still matters.

The workshops center on a simple but often overlooked idea: the question is not whether to use AI, but how, when and for what purpose.

Students worked from a shared framework that emphasizes verification, ethics and critical thinking. AI, they were told, is a tool — not a replacement for knowledge, reasoning or responsibility, but a partner that must be questioned, tested and, at times, rejected.

From there, each presenter took the concept in a different direction, grounding it in discipline, experience and real-world application.

Jonathan Muckelroy, a career technical education credential candidate in adult education and a graduate of the master of science program in physical education, approached AI through kinesiology and applied science. Challenging the idea that AI weakens rigor in scientific fields, he showed how it can support structured brainstorming and early drafting — provided that verification remains central.

In one case study, students used AI to generate possible explanations for how moderate aerobic exercise reduces perceived stress. Every claim then had to be verified using peer-reviewed sources, including PubMed, American College of Sports Medicine guidelines and course textbooks. Students labeled AI-generated content as verified, unclear or removed, turning the assignment into an exercise in scientific judgment rather than shortcut use.
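
That three-way labeling maps naturally onto a simple structure. As a minimal sketch (the claims, sources and labels below are hypothetical placeholders, not the course's actual materials), the bookkeeping might look like this:

```python
# Minimal sketch of the verify-then-label workflow described above.
# All claims and sources here are hypothetical placeholders.

ai_claims = [
    {"claim": "Moderate aerobic exercise can lower perceived stress scores.",
     "source": "peer-reviewed study found via PubMed",
     "label": "verified"},
    {"claim": "Exercise eliminates stress entirely in all populations.",
     "source": None,
     "label": "removed"},   # overstated; no supporting evidence found
    {"claim": "Effects may vary with session duration and intensity.",
     "source": "course textbook (citation pending)",
     "label": "unclear"},   # plausible, but not yet pinned to a source
]

for item in ai_claims:
    citation = item["source"] or "no source"
    print(f"[{item['label'].upper()}] {item['claim']} <- {citation}")
```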

Monique Rodarte, a master’s student in career technical education, focused on equity and bias in AI systems. Drawing on her educational and professional experiences, she examined how algorithmic tools can reproduce disparities, particularly in areas such as school discipline, placement and access.

Rather than staying abstract, Rodarte framed the issue through practical questions: Who does this system work best for? Who does it miss? Who becomes invisible when data is incomplete or biased? Her presentation positioned AI literacy as an equity issue, emphasizing transparency, accountability and the responsibility to ensure systems serve all students.

Garrett Wulc, a mathematics major planning to pursue a doctorate, brought technical precision to the workshops. Using a NASA airfoil dataset hosted in the University of California, Irvine machine learning repository, he demonstrated how AI can help diagnose errors in machine learning workflows — while also revealing its limits.

Wulc walked through issues in a baseline random forest model, including improper cross-validation and data leakage. AI tools were used to suggest improvements, but each recommendation was tested and evaluated. Some were adopted; others were rejected due to computational cost, irrelevance or lack of contextual fit. His presentation underscored a key point: confidence in AI output does not guarantee correctness.
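
The leakage pattern he described is easy to reproduce in miniature. The sketch below is illustrative rather than Wulc's actual code: on synthetic data, selecting features on the full dataset before cross-validation inflates scores, while refitting the selection inside each fold does not.

```python
# Illustrative sketch of the data-leakage pattern described above, not Wulc's
# actual code. Selecting features on the full dataset before cross-validation
# lets the selector "see" the validation rows, inflating scores; refitting the
# selection inside each training fold gives an honest estimate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))  # many pure-noise features (synthetic data)
y = rng.normal(size=200)         # target unrelated to X by construction

# Leaky: choose the 10 features most correlated with y using ALL rows, then CV.
X_leaky = SelectKBest(f_regression, k=10).fit_transform(X, y)
leaky = cross_val_score(RandomForestRegressor(n_estimators=50), X_leaky, y, cv=5)

# Proper: the selector is refit on each fold's training data only.
pipeline = make_pipeline(SelectKBest(f_regression, k=10),
                         RandomForestRegressor(n_estimators=50))
honest = cross_val_score(pipeline, X, y, cv=5)

print(f"leaky CV R^2:  {leaky.mean():.2f}")   # inflated by the leak
print(f"honest CV R^2: {honest.mean():.2f}")  # near zero, as it should be
```

The general fix is to put every data-dependent step inside the pipeline: anything fit outside the cross-validation loop has, by definition, already seen the held-out folds.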

In computer science, graduate student Logan Ashbaugh focused on how people interact with AI systems. Using a common problem — selecting an algorithm to sort a list — he showed how vague prompts can lead to incomplete or misleading answers, while more precise prompts improve relevance.

Still, he emphasized, better prompts do not replace understanding. AI-generated explanations must be checked against established knowledge and external sources. By modeling prompt refinement, evaluation and disclosure of AI use, Ashbaugh framed AI literacy as a skill rooted in judgment, not passive consumption.
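
That workflow can be sketched concretely. In the example below, the prompts and the claim being checked are illustrative, not Ashbaugh's actual materials: a vague prompt is refined with explicit constraints, and one typical AI claim, that Python's built-in sort is stable, is verified locally instead of taken on faith.

```python
# Illustrative sketch of prompt refinement and verification (not Ashbaugh's code).

vague_prompt = "What's the best way to sort a list?"

# A refined prompt states the data, the constraints and what "best" means here.
refined_prompt = (
    "I have about a million (last_name, first_name) tuples in Python. "
    "I need a stable sort by last_name with minimal extra memory. "
    "Which approach fits, and what are the trade-offs?"
)

# A typical AI answer will claim Python's built-in sort (Timsort) is stable.
# Instead of accepting that, check it on a small case.
from operator import itemgetter

people = [("lee", "ada"), ("kim", "bo"), ("lee", "cy")]
by_last = sorted(people, key=itemgetter(0))

# Stable means equal keys preserve input order: ("lee", "ada") entered
# before ("lee", "cy"), so it must still come first after sorting.
assert by_last.index(("lee", "ada")) < by_last.index(("lee", "cy"))
print("stability check passed:", by_last)
```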

Fellow graduate student Jannatul Faika illustrated that principle through a real-world debugging scenario involving a chatbot. AI-generated explanations appeared confident but failed to identify the true source of system errors due to missing context.

Through manual testing, comparing outputs and refining prompts, Faika demonstrated how human oversight reveals those blind spots. Her work reinforced a consistent message across the workshops: AI can assist, but humans remain responsible for decisions and outcomes.
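
The general shape of that scenario can be sketched in a few lines. Everything below is a hypothetical stand-in, not Faika's system: an AI assistant shown only one function might confidently blame the reply logic, while manually replaying the failing turn exposes the real failure, context that is silently dropped upstream.

```python
# Hypothetical stand-in for the chatbot debugging scenario described above.
# An AI assistant shown only build_reply might confidently blame its logic;
# manually replaying the failing turn shows the real bug lives in handle_turn,
# which accepts conversation history but never passes it along.

def build_reply(message: str, history: list[str]) -> str:
    if "it" in message.lower().split() and not history:
        return "Sorry, I don't know what 'it' refers to."
    return f"Answering: {message}"

def handle_turn(message: str, history: list[str]) -> str:
    return build_reply(message, history=[])  # bug: context silently dropped

history = ["How do I reset my password?"]
follow_up = "Can you explain it again"

print(handle_turn(follow_up, history))   # fails: 'it' arrives with no context
print(build_reply(follow_up, history))   # works: context reaches the reply
```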

Taken together, the presentations offered a cohesive, cross-disciplinary model for responsible AI use — one grounded in shared principles but adaptable to different fields. By placing students at the center, the workshops highlight what can happen when learners are trusted to engage deeply with emerging technology — and to question it.