When AI and ChatGPT are discussed, it is often in the context of cheating in school. But what is cheating? Cheating is turning in work that is not yours and expecting to get credit for it: creating content using something other than your own mind. Writing an entire essay with a chatbot is cheating because the AI handles all the grammar, structure, and integration, leaving you unable to grow in your compositional abilities. Using AI to solve a math problem or write an essay for you doesn’t allow you to learn.
Learning is the crux of the issue. When you use AI, it must be in a manner that still allows you to learn. Using AI to simplify a topic is acceptable, since you could ask a classmate or teacher to do the same thing. Using it to suggest a word or phrase is fine, but not an entire paragraph; at that point it becomes the AI’s work and not yours. You can use it to explain a math problem, as long as you try to learn from the explanation and don’t just use it to do your work for you.
“I’ll typically ask ChatGPT just to summarize that topic into easy-to-read bullet points,” said one student in an interview with Vox, a news outlet. This is a fine use of a chatbot: you are simply asking it to explain a complicated topic in simple terms.
But how far does this acceptable use extend? Should you use it to generate ideas? No. If you use it to generate an idea for an essay or project, then even though you did the work itself, you didn’t think critically and come up with a topic on your own. It is your responsibility to analyze a text or data and develop a project idea from your own analysis.
“When it comes to generating ideas, it’s not really giving you an inspiration, it’s giving you an answer,” replied one high school educator when speaking with Vox. Using a chatbot to generate ideas prevents the student from doing their own work. Even if you write the essay yourself, you did not do the work of coming up with the topic; that is part of the assignment, and you are expected to be capable of creating your own ideas.
Another problem with using chatbots is that we assume they are always correct. AI is often seen as perfect and infallible, but that is not the case. AI and chatbots make mistakes: they can generate content that is false or inaccurate, and if you are using AI to explain something, the explanation may be wrong.
“I put these sources into Google, and all of them were fake,” said one student interviewed by Vox. This is a major issue with chatbots: we assume they are perfect because they are artificially intelligent, but they are just as flawed as any human. If they are fed false information or misunderstand a request, they will not give you the result you expected.
With AI sophistication on the rise, reports of rampant cheating and AI-written assignments alarm teachers and parents. However, some studies suggest that easy access to AI has not increased the amount of cheating in schools.
“AI is not increasing the frequency of cheating. This may change as students become increasingly familiar with the technology, and we’ll continue to study it and see if and how this changes,” said Victor Lee, an associate professor at Stanford. According to Lee’s research, the amount of cheating hasn’t changed, but the way cheating occurs has. A student willing to write an essay with ChatGPT would have found a way to cheat on that essay, or on something else, with or without AI to help.
The use of AI and chatbots in school is a confusing and controversial topic. When using AI, ask yourself: are you still going to learn? Are you doing the work the teacher requires of you, or are you cutting corners? Is the AI necessary, or could you ask a teacher or classmate for help instead? If you are unsure whether what you are doing is cheating, discussing it with a teacher or administrator is a fine course of action.