Concerned about ChatGPT, universities begin to review teaching methods

Antony Aumann, a philosophy professor at Northern Michigan University, was grading student essays for his world religions class last month when he came across "by far the best paper in the class." It analyzed the morality of burqa bans in clean paragraphs, with fitting examples and rigorous argumentation. Aumann immediately grew suspicious.

He called the student in and asked whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences, and admitted that, in this case, it had written his paper.

Alarmed by the discovery, Aumann decided to change how essays will be written in his classes this semester. He intends to require students to write first drafts in class, using browsers that monitor and restrict computer activity.

Students will have to explain every change they make in subsequent drafts. Aumann, who may abandon essays altogether in later semesters, also plans to bring ChatGPT into his lessons by asking students to evaluate the chatbot's answers. "What happens in class will no longer be 'here are some questions; let's discuss them among us humans,'" he said, but "something like 'and what does this alien robot think about the question?'"

Across the country, professors like Aumann, department heads, and university administrators are starting to reevaluate classroom practices in response to ChatGPT, leading to a potentially massive transformation in teaching and learning. Some professors are completely redesigning their courses, making changes that include more oral exams, group work, and activities that must be handwritten, not typed.

The initiatives are part of a real-time effort to grapple with a new wave of technology known as generative AI. Launched in November by the artificial intelligence lab OpenAI, ChatGPT is at the forefront of that wave. In response to brief prompts, the chatbot generates surprisingly well-articulated and nuanced text, so much so that people are using it to write love letters, poems, fan fiction and their schoolwork.

The tool has already sown confusion in some high schools, where teachers and administrators are struggling to discern whether students are using the chatbot to do their schoolwork. To prevent cheating, some public school systems, including New York City's and Seattle's, have banned the tool on their Wi-Fi networks and school computers. But students have had little trouble finding workarounds to reach ChatGPT.

In higher education, colleges and universities have been reluctant to ban the AI tool. Administrators doubt a ban would work, and they don't want to infringe on academic freedom. Instead, the way professors teach is changing.

"We're trying to institute general policies that strengthen a teacher's authority to run a classroom" rather than going after specific methods of cheating, said Joe Glover, provost of the University of Florida. "This will not be the last innovation we have to face."

This is especially true because generative AI is still in its infancy. OpenAI is expected to release another tool soon, GPT-4, which is better at generating text than previous versions.

Google built rival chatbot LaMDA and Microsoft is discussing a $10 billion investment in OpenAI. Some Silicon Valley startups, including Stability AI and Character.AI, are also working on generative AI tools.

An OpenAI representative said the company recognizes that its programs can be used to deceive people and is developing technology to help identify text generated by ChatGPT.

ChatGPT has leapt to the top of the agenda at many universities. Administrators are setting up task forces and holding institution-wide discussions about how to respond to the tool, and much of the guidance under consideration amounts to adapting to the technology.

At institutions including George Washington University in Washington, Rutgers University in New Brunswick, New Jersey, and Appalachian State University in Boone, North Carolina, professors are cutting back on take-home assignments, which became the dominant assessment method during the pandemic but now look vulnerable to chatbots. They are opting instead for in-class assignments, handwritten work, group projects and oral exams.

Gone are prompts like "write five pages on topic X." Instead, some professors are crafting questions they hope will be too complex for chatbots to answer, and asking students to write about their own lives and current events.

Sid Dobrin, chair of the University of Florida's English department, said students are tempted to "use ChatGPT to plagiarize, because take-home assignments can be plagiarized." Frederick Luis Aldama, humanities chair at the University of Texas at Austin, said he plans to teach newer or more niche texts about which ChatGPT may have less information: instead of "A Midsummer Night's Dream," for example, he will assign William Shakespeare's early sonnets.

In his view, the chatbot can motivate people "interested in the primary canonical texts to step out of their comfort zone and seek out things that aren't online."

If the new methods fail to prevent plagiarism, Aldama and other professors said, they want to impose stricter expectations and grading criteria. It is no longer enough for an essay to have a thesis, an introduction, supporting paragraphs and a conclusion. "We have to up our game," Aldama said. "The imagination, creativity and innovative analysis that used to earn an A must now be present in the work that earns a B."

Universities also want to educate students about the new tools. The University at Buffalo in New York and Furman University in Greenville, South Carolina, plan to embed a discussion of AI tools into the required courses that introduce freshmen to concepts such as academic integrity. "We need to build a scenario around this so students can see a concrete example," said Kelly Ahuna, director of academic integrity at Buffalo. "Rather than catch problems after they happen, we want to prevent them."

Other universities are trying to draw boundaries around the use of artificial intelligence. Washington University in St. Louis and the University of Vermont in Burlington are revising their academic integrity policies so that their definitions of plagiarism cover generative AI.

John Dyer, vice president of enrollment services and educational technology at Dallas Theological Seminary, said the language in his seminary's honor code "already felt a bit archaic." He plans to update its definition of plagiarism to include "using text written by a generative system as one's own (for example, entering a prompt into an AI tool and using the output in an academic paper)."

The misuse of AI tools is unlikely to go away, which is why some colleges and universities say they plan to use detectors to root out the practice. The plagiarism-detection service Turnitin said it will begin incorporating more AI-identification features this year, including for ChatGPT. And more than 6,000 professors at Harvard, Yale, the University of Rhode Island and elsewhere have signed up to use GPTZero, a program that promises to quickly detect AI-generated text, according to Edward Tian, its creator and a senior at Princeton University.
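Detectors like GPTZero are reported to lean on statistical signals such as "perplexity," a measure of how predictable a text is under a language model (machine-generated text tends to be more predictable than human prose). The sketch below is only a toy illustration of that idea, using a character-bigram model rather than anything GPTZero actually ships; the function name and sample strings are hypothetical.

```python
import math
from collections import Counter

def bigram_perplexity(text: str, corpus: str) -> float:
    """Score how predictable `text` is under a character-bigram model
    trained on `corpus`. Lower perplexity means more predictable."""
    # "Train": count character bigrams and unigrams in the reference corpus.
    pairs = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    vocab = len(set(corpus)) or 1

    # Evaluate: average negative log-probability per bigram of `text`,
    # with add-one smoothing so unseen pairs are unlikely, not impossible.
    log_prob = 0.0
    n = 0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + 1) / (unigrams[a] + vocab)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / max(n, 1))

corpus = "the quick brown fox jumps over the lazy dog " * 50
predictable = "the quick brown fox"   # bigrams the model has seen often
surprising = "zxqvj kwpf gmtrs"       # bigrams the model has rarely seen
```

Here `bigram_perplexity(predictable, corpus)` comes out much lower than `bigram_perplexity(surprising, corpus)`; real detectors apply the same comparison with large neural language models instead of bigram counts, and their reliability remains contested.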

Some students find the AI tools genuinely helpful for learning. Lizzie Shackney, 27, a law and design student at the University of Pennsylvania, has started using ChatGPT to brainstorm ideas for papers and to debug coding problems. "There are subjects where they want you to engage, and the work isn't wasted," she said of her computer science and statistics classes. "Where my brain is useful is in understanding what the code means."

But she has misgivings. ChatGPT, she said, sometimes explains ideas and cites sources incorrectly. And since the University of Pennsylvania has no rules yet on the tool, she doesn't want to rely on ChatGPT in case the university bans it or considers its use cheating.

Other students have no such qualms, sharing on forums like Reddit that they have submitted assignments written and solved by ChatGPT, sometimes on behalf of classmates as well. On TikTok, the hashtag #chatgpt has drawn more than 578 million views, with users sharing videos of the tool writing academic papers and solving coding problems. One video shows a student copying a multiple-choice exam and pasting it into the tool, with the caption: "I don't know about you, but I'm having ChatGPT do my final exams. Enjoy studying!"
