It’s 12 a.m. and unfinished homework looms – three math problems remain alongside an English paper due tomorrow morning. Exhaustion creeps in as the workload taunts you. You consider a forbidden temptation: simply prompting an AI assistant to write the essay so you can rest easy.
Since the release of popular artificial intelligence tools in 2022, opinions on the ethics of AI have varied widely. Educators in particular have grappled with threats to student academic integrity. To address the rise of AI in academics, plagiarism-detection services such as Turnitin have added artificial intelligence capabilities to identify text generated by chatbots.
While some school administrators supported Turnitin’s AI detection, top schools such as Vanderbilt University rejected the tool over concerns about falsely accusing students, as instructional technology consultant Michael Coley explained on the Vanderbilt website.
“Vanderbilt submitted 75,000 papers to Turnitin in 2022. If this AI detection tool was available then, around 750 student papers could have been incorrectly labeled as having some of it written by AI,” Coley said. “Instances of false accusations of AI usage being leveled against students at other universities have been widely reported over the past few months, including multiple instances that involved Turnitin.”
By contrast, Principal Rick Fleming expressed eagerness to implement the platform in classrooms.
“We have Turnitin.com, which our social studies department and English department are using. They’ve got some real creative things they’re doing such as working on developing a writer profile of student work and the authenticity of the work in addition to the sources. A two-year subscription is about $3,200 and we use some of the capstone money that we earn for the students who earn capstone diplomas to pay for that because we feel it’s that important.”
However, some students feel that AI chatbots, though restricted, can provide needed help with pressing assignments. Junior Hunah Quadri sees nuance in the issue.
“I understand the motive behind making the use of AI restricted but as I got older, I realized that learning the material is way more important than passing the class. Of course, I wanted to keep my GPA up but that’s impossible if I don’t learn and apply my skills,” Quadri said. “I know that a lot of people use AI to get last minute assignments done or maybe it’s just out of our laziness. For a lot of kids, the stress behind having incomplete assignments can be daunting. By using AI, it’s more likely that a student would finish an assignment which could relieve some of that stress.”
When it comes to college admissions, some schools, such as Florida Polytechnic University, are dropping application essays to avoid AI-generated content. College Counselor Angela Feldbush believes that when a student uses AI it is obvious because the writing is “generic and bland,” and it is not hard for the student to get caught.
“It’s not that what the AI generates is wrong, it’s just that it’s generally pretty superficial, and a good college admissions essay is anything but superficial. Even though you could generate an admissions essay, I think it wouldn’t help you since I think what it comes up with is going to be so surface level and trivial that it’s not going to be a good enough insight into your personality.”
Feldbush also explained how timestamp analysis can reveal when large blocks of text appear on the page at once, leaving students little room to defend AI-written work.
“Most of the time when students are caught using the AI generators, they’ll admit to using the AI to help them write the paper, but you can actually do a timestamp and you can look at when the words were put on the page, and they come in in big blocks. So, it’s not something that you can defend yourself against when you have a paper that was written in 13 seconds — there’s no way that you, as a human, wrote that paper in 13 seconds. It’s really not something that you can argue against. If you’re using your own words, and you were writing it, then the process demonstrates that, and the tools will support you in showing that it’s your genuine work.”
Quadri acknowledged the temptation to use AI under stress but ultimately values learning over grades.
“AI can be valuable in some respects but in a school setting it basically trains your brain to not work. When AI does it for you, your brain is not engaged. I know teachers at West Shore are acquainted with AI when it comes to assignments,” said Quadri. “The style of writing and the way AI structures its words is undeniable, so even if kids use AI most teachers can tell. In the end, the backlash only falls on the student. Without doing the work and learning the material, they won’t retain any of the information.”
Like Quadri, Fleming firmly believes that students who normalize the use of AI in their academic work are only hurting themselves.
“There are components of it that your teachers may find acceptable for you to use, and I would embrace those, but if it’s being used in a deviant sense, then you’re only cheating yourself. My advice to students is produce authentic work. If you’re going to use AI or you’re going to at least reflect using AI, you need to disclose that to your teacher. You have students that are going to take the easy way out, but you’re only harming yourself if you do that.”
Still, other students see potential upsides to AI. Junior Tanisha Bertilien argued it could aid brainstorming if used selectively.
“AI is valuable to students, especially in situations where they just need help brainstorming ideas, such as thinking of topics or words or maybe even like revising a couple sentences. Teachers shouldn’t fear AI, they should really look into how it can be used to help students in a way to improve their work and help them do more brainstorming activities. However, I feel like it really depends on the subject whether it should be used or not.”