How Large Language Models Can Ruin the Process of Learning
Completely written by a human
Disclaimer: This post is a rather opinionated one in which I reflect on Large Language Models (LLMs) such as ChatGPT and how they are being used poorly by what seems to me a rather large chunk of students and learners. Like any reflection, it is by nature personal and will thus cover what I have come across in my daily life. It is purposefully targeted at the rather extreme case of students (almost) completely letting LLMs do their thinking for them. I am aware of the upsides of LLMs, and use them myself in specific useful situations; for many, however, LLM usage has gone too far, ruining the process of learning. That deserves a rather unnuanced post.
“How did your assignment go?” I ask my housemate, while we are having dinner. “Oh fine, I just threw it into ChatGPT. Then the language looked a little bit weird, so I asked it to rewrite it in a more student-like style. It’s super annoying when it does that. But then I was done. It was really nice, I was done in like 10 minutes!”
To me, the above seems bad and harmful, and directly opposed to the reasons the assignment was given in the first place.
However, the number of times I have seen and heard people say something along the lines of what my housemate said is, quite frankly, ridiculous.
And it is not just in university or education that I have so often heard of people using LLMs in some of the worst, self-debilitating ways. It extends to every domain in which LLMs, or some variation thereof, have become mainstream and widely used. (Though in this reflection I will focus mostly on learning, as the self-destructiveness there is so blatant.)
So why is it, then, that so many students are so unbelievably irresponsible when it comes to using LLMs?
I believe it is a combination of three main factors: inherent human laziness, wrong incentives, and insufficient knowledge. Let me start with the last one, as I want to spend the least time on it.
Most people know essentially nothing about how AI and LLMs actually work. If one does not roughly understand that an LLM generates plausible-sounding text rather than retrieving verified facts, it is harder to grasp that any references and claims it provides should always be checked.
Knowledge about AI is obviously not a limiting factor for AI students, and yet the issue of blatantly irresponsible AI usage persists even there; the other two factors thus seem more important and fundamental: inherent human laziness and wrong incentives. So how do these explain the misuse of AI among such a large number of students?
Well, for a start, studying can be hard (complaining about which is a bonding experience for students). Creating new connections and learning in general are difficult, especially when concepts are abstract and not easily understood. That is exactly what university is, or rather should be: a place where people strive to find new connections and learn.
The thing is, most of us have to struggle through new material. A natural reaction to seeing something difficult is flinching away from it, because hard things require effort, something our brains have evolved to avoid when unnecessary (effort requires energy, which was hard to come by in pre-get-your-food-at-the-local-grocery-shop times).
We thus struggle when it comes to studying (even people who willingly sign themselves up for it). And precisely at the moment of struggle is where one can potentially learn new things, or...
turn to ChatGPT for finishing the assignment because you really just do not feel like it at the current moment and you had a bad night of sleep and this assignment really is not so important come on doesn’t everyone use ChatGPT currently the lecturers should have just made a better course what is the point of continu-
And while the student with this unfortunate predisposition towards natural laziness tries to reflect on whether they should use ChatGPT or not (because, of course, they also understand they learn less by letting it do their thinking), the wrong incentives and structures of many universities and higher-education institutions deliver the final death blow: now they feel they have to use LLMs to do their thinking and their assignments.
What are these wrong incentives and structures?
Some lecturers have taken the “bold” stance of completely prohibiting LLM usage in their course. At the same time, the assignments in that same course are designed such that, if done with a current state-of-the-art LLM, the resulting grades will likely be higher than those of the median student taking the class. While the lecturer sternly states that students are absolutely not allowed to use generative AI for any of the assignments, students are wondering how to pass the exam, given that they don’t actually remember much from the assignments they “did” during the course.
Don’t get me wrong, I am not blaming the lecturers here. They are, at least, trying, instead of simply ignoring the fact that LLM usage has become incredibly mainstream (especially among students).
But students are simply not listening. Whether the reason is pressure from home, a preference for spending the time having a drink, or the aforementioned inherent laziness, students are massively sabotaging their learning, and higher-education systems are losing their value. How can one expect a student to put in many more hours than their peers who use LLMs for their assignments, only to receive lower grades? In an environment where so many around you are using LLMs and, as a result, getting higher grades than you, not using LLMs requires an unusual level of self-discipline and integrity. And if the others are using them, why not you?
Of course, the simple answer to that question is: using LLMs to think for you is the most self-destructive thing you can do, given that one of the main purposes of university is learning how to think. Unfortunately, for many students, the pressure of “getting high grades” combined with our natural inclination to avoid (mental) effort seems to outweigh the above.
So, what do we do about this?
The solution to this problem is non-trivial, which is one of the reasons universities have clearly been struggling to mitigate the misuse of LLMs. I have only two suggestions.
First, the structure of many university courses is clearly not designed for the “new age of LLMs”. It seems to me the main thing that can be done about this is to require at least one (although preferably more) individual examination in an environment without any assistance. If these examinations sufficiently test what is required to pass the course, this would eliminate much of the trouble described above. The student would not feel forced to use LLMs, because the incentives have been turned around: using LLMs to do one’s thinking now leads to lower grades. The number of courses I have taken in which LLMs can be (mis)used is high, although I am hopeful that universities and higher-education systems are trying to change this.
Second, I want to speak directly to the student, or to anyone interested in learning new things. Please evaluate why you are using LLMs, and whether it is necessary. If you are trying to learn calculus and are busy doing exercises, perhaps do not reach for an LLM so quickly. The effort, the hours banging your head against a problem, the struggle: that is when you learn, and truly can master a subject. If you are assigned 50 pages of reading, it is not meant to pester you (although sometimes it really feels like it...). Is it really worth it to have an LLM summarize the pages? Is it not better to avoid this habit of turning to LLMs whenever something becomes difficult, or whenever you simply don’t want to do it? Is it not better to refuse to let LLMs do the thinking for you, even if it means students who do use them will get higher grades than you?
Of course, the above applies to much more than higher education, but as mentioned, education seems to me one of the best examples given its high degree of self-destructiveness. So whether you are using LLMs for something else, like doing research, programming, writing emails, brainstorming, doing parts of your job, or anything at all, be careful with your thinking. What a shame it would be not to be the one in control, not to know why things turned out the way they did, not to be the one doing the thinking, not to be the one who learns.


Important and clear post! Of course, before LLMs students could already use Google-search cut-and-paste strategies to “write” their assignments, but now it is too easy. I guess requiring a final course assignment written without any LLM option will be needed to protect the quality of university-based education and learning.