Cal State San Bernardino’s Jack H. Brown College of Business and Public Administration hosted “The AI Debate: ChatGPT vs. Dr. Vincent Nestler” on Nov. 3 in the Santos Manuel Student Union South Theater. The debate gave the nearly 200 students, faculty and staff who attended a front-row seat to a conversation many are having privately: Is artificial intelligence here to help us, control us or replace us?

The debate, moderated by Johanna Smith, professor of theater arts and entrepreneurship, set the tone early by situating AI in a long history of humans wrestling with technology. Smith reminded the audience that thinkers from Albert Einstein to Norbert Wiener warned that tools can shape society in ways their creators don’t fully anticipate.

“So, my big request for all of you here in a learning environment is to really think about what AI is doing to your educational experience, and what you need to do to develop your cognitive abilities,” Smith said. “You wouldn’t start lifting weights and then have a robot lift the weights for you.”

Representing “Team Humanity,” Vincent Nestler, lead on the AI Horizon project, director of CSUSB’s Center for Cyber & AI, and professor in the School of Cyber and Decision Sciences, argued that while AI is powerful and useful, it must be approached with caution. He emphasized that AI is built and trained by people whose motives are often imperfect.

“I use AI almost every day. I used AI to prepare for this event. I’m a big fan of using AI. I don’t trust it,” Nestler said.

Throughout the debate, he pointed to recent large-scale job cuts, the economic incentives to automate, and the growing difficulty of knowing what information online is real, calling today’s environment a “firehose of falsehoods.”

Nestler also underscored that AI systems are already shaping what people see, think and buy, often invisibly. “You think you like something. You don’t like it because you like it, you like it because the algorithm figured out how to make you like it,” he said. For students preparing to enter an AI-influenced workforce, his message was straightforward: be vigilant about what you consume and who controls the tools you’re using.

Johanna Smith, professor of theater arts and entrepreneurship, moderated the debate.

On the other side of the stage, ChatGPT presented the optimistic argument: AI can expand access to information, make healthcare more precise, create new kinds of work, and help societies respond faster to misinformation — if people design and govern it well.

“From my point of view, AI is a powerful tool that can help us build a better future, but it’s up to all of us to guide it responsibly,” said ChatGPT. The AI voice repeatedly emphasized that AI is not automatically good or bad, but that “we” can shape how it’s deployed.

That “we,” in fact, became one of the central tensions of the event. Nestler pushed back several times on the idea that AI development is naturally democratic. “Who is the we?” he asked. “Whoever owns the AI will ultimately own all of us.” His point: unless AI systems are transparent and accountable, the public can’t simply assume those systems will reflect its interests.

Audience questions showed that students are thinking critically about the technology already. One attendee described using AI in an academic program and finding that some of the sources it listed didn’t exist. Nestler responded by citing research about fabricated citations and said the student was “100% right” to question AI outputs. ChatGPT, in turn, acknowledged the limitation and told the audience to “always double-check any information I provide, especially if it’s something as important as a citation.”

Smith closed the debate by bringing it back to the university’s purpose: learning that lasts. AI, she noted, can be helpful, even in the arts. But students have to remain active in their own development. “What I have a problem with is you not regulating your own development,” she said, encouraging students to keep practicing the skills — analysis, judgment, creativity — that AI can’t authentically exercise for them.

For at least one attendee, MBA student Breanna Hinckley, the lesson was clear. “My perspective is that AI should be used as a tool and not as a replacement for critical thinking,” she said.

Students who want to explore these issues further can visit the AI Horizon project website, which offers articles, videos and other resources designed to help students understand how AI is shaping the future — and how they can prepare to thrive in it.