
ChatGPT sends shockwaves across college campuses

Just four months after arriving in classrooms, the GPT family of artificially intelligent chatbots has transformed higher education.

ChatGPT and its smarter, younger cousin, GPT-4, can produce convincing facsimiles of college term papers on command, or supply the answers to a midterm. At the start of the 2022-23 academic year, few professors had heard of the technology. They are learning fast.

Sarah Eaton, an associate professor of education at the University of Calgary who studies AI, said the rapidly developing technology is raising concerns across universities and research disciplines because of its implications for academic integrity and learning.

But not everyone sees this technology as an earth-shaking phenomenon. Some are excited about the impact it will have on their learning.

“There is no panic here on campus. In fact, universities are a great place to look at the challenges and new questions posed by all kinds of new technologies, for better or worse. People here are trying to think through the problem from the ground up,” says Jenny Frederick, executive director of the Yale Poorvu Center for Teaching and Learning and associate dean of Academic Initiatives.

Across the university, professors have been looking at ways to engage students so that cheating with ChatGPT holds less appeal: crafting assignments that are more personalized to student interests, giving students brainstorming exercises instead of a single final report, or requiring them to complete essay drafts along the way.

Frederick acknowledged that it may be easier for a school like Yale, an Ivy League university with many resources at its disposal, to embrace the technology without fear.

At smaller schools, such as Texas Woman’s University, ChatGPT has caused more hesitation.

“I think the sentiment from most academic networks like mine is one of anxiety and fear,” said Daniel Ernst, an associate professor of English at the school.

Texas Woman’s University held a workshop on ChatGPT for faculty at the end of January. Genevieve West, head of the school’s language, culture and gender department, said she saw a generational divide at the event: younger professors were excited about the technology, while older professors expressed concerns.

Since launching in late November, ChatGPT has reached 100 million users, and its use has proliferated rapidly on campuses. In an informal, anonymous January poll, 17% of Stanford University students admitted to using ChatGPT on their fall finals. Most said they used the AI only for brainstorming, outlining and spitballing; only a small percentage of respondents said they had submitted ChatGPT’s work as their own.

The rapid adoption of new technology has led schools large and small to scramble to create guidelines on how to approach it.

Stephanie Frank, a professor of religion and humanities at Chicago’s Columbia College, has been busy over the past few weeks with a task force to decide how teachers should treat chatbots.

“The point of this was to put something out before midterms this week,” she said. The task force issued a memo to faculty on Wednesday.

Frank said Columbia organized the task force after a professor caught a student using ChatGPT “pretty badly” to answer a quiz. The professor canceled the upcoming quiz and asked students to submit handwritten class notes instead. The same student turned in handwritten notes that appeared to have been copied from ChatGPT.

Instead of setting campus-wide rules, the Columbia College task force urged professors to make individual decisions about when to allow, encourage, or even assign students to use AI.

Computer scientist Youngmoo Kim sits on a similar committee studying chatbots at Drexel University. That committee aims to issue guidance to the school by the end of March.

“We are considering issuing guidelines for all faculty and staff,” he said. “It is not a commandment.”

Kim expects the guidelines to be looser and broader as AI technology evolves rapidly.

“The rules of AI have changed not just in the past year, but in the past week,” he said. “If we were to put out very strict guidelines now, it would look silly.”

Not all schools have created or attempted to create official guidelines, and some have determined that no new rules are needed at this time.

Justin Shaddock, chair of the Honor and Discipline Committee and associate professor of philosophy at Williams College, said his school handles cases of suspected misconduct through a student committee, and that is unlikely to change.

“We have a student committee that acts like a judge to decide whether a cheating allegation is really cheating,” Shaddock said.

Cracking down on plagiarism before ChatGPT was relatively easy. A plagiarism checker could match a passage from a student’s essay against the same passage on Wikipedia, for example, and catch a violator red-handed.

But ChatGPT creates new words instead of copying old words. This complicates the task of catching cheats.

“With ChatGPT, it is much more difficult because it generates a different answer each time. That means professors have to feed the questions and prompts into the chatbot themselves and try to show that there is more similarity than chance would explain between the questionable essay and a series of the chatbot’s responses,” Shaddock added.

Identifying AI-generated text in student papers may not be as easy as many professors think, especially in large classes.

One Canadian study, not yet published, found that “two-thirds of professors failed to correctly identify text written by AI.”

“When two-thirds of college professors fail a test, we’re in a bit of a bind,” Eaton said.

Professors are more likely to catch chatbot cheating in small seminars, a format that allows instructors to engage with students about their work and become familiar with each student’s style.

Despite all the concerns, many recognize that the technology will keep evolving, and after the initial shockwave they are looking on the bright side.

Laura Dumin, a professor of English at the University of Central Oklahoma, runs a Facebook group where 2,000 faculty members discuss positive uses of AI.

“We are already exhausted from COVID. We had to pivot once, and now we are being asked to pivot again very quickly,” she said.

“Especially in January and February, people were saying, ‘We’re going to catch all the cheaters,’” says Dumin. “And my thought was that you would waste all your time and be exhausted.”

Instead, Dumin encourages professors to make peace with technology and find ways to use AI productively in the classroom.

Dumin’s own students now submit their papers for three rounds of review: from their peers, from the professor, and from the chatbot. “It gives them feedback and tells them what it thinks about what they’ve written,” she said.

