First-hand experience with education and Large Language Models (LLMs)

Ahmed Ibrahim
5 min read · Dec 11, 2023


How does our learning change with LLMs? Image generated by DALL·E.

This is a first-hand account and opinion from a third-year college student who has spent a year and a half of college with large language models (LLMs) and a year and a half without them.

What is our educational system based on?

For many years, educational content has been designed as many small, tractable (entry-level) tasks: lessons and assignments. It is a good design: the student is introduced to many new concepts at a level of difficulty just high enough to challenge the brain to change and learn the topic.

A supporting tool for this system is the idea of "grades." Because of social pressure, self-esteem, job hunting, and many other factors, students optimize for better grades, and that indirectly meant optimizing for learning. This held mainly because other ways of raising grades without learning are perceived as unethical by almost every community. Think about it: how terrible is it to copy or steal someone else's work on an assignment? How bad would you feel about yourself? Probably pretty bad. So the direct route to high grades was to study and learn more. (I am ignoring how much of that learning is actually useful rather than aimed only at getting past the assignment, because that is a whole other story.)

What do LLMs add to this equation?

LLMs changed this equation for all of us. Let's take a step back and think about what an LLM is. It is a model that has seen a huge amount of online data and is essentially trying to predict the next word given the context before it. The predicted word depends on the data the model saw during training. Two things follow from this:

  • The model depends on its ability to "remember" the text (input or output) in order to understand the context.
  • How frequently a piece of text or information appears on the internet shapes what the model will produce.
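To make the second point concrete, here is a toy sketch in Python (my own illustration, not how a real LLM is built): it "trains" on a tiny corpus by counting which word follows which, then predicts the next word by picking the most frequent continuation. Real LLMs replace the counting with a neural network trained on billions of tokens, but the intuition that frequency in the training data drives the output carries over.

```python
# Toy next-word predictor: count how often each word follows the current word
# in a tiny "training" corpus, then predict the most frequent continuation.
from collections import defaultdict, Counter

corpus = (
    "students optimize for grades . "
    "students optimize for learning . "
    "students optimize for grades ."
).split()

# Bigram frequencies: which word tends to follow which.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("for"))  # "grades" -- it followed "for" twice, "learning" only once
```

The model never decides whether "grades" or "learning" is the better answer; it simply echoes whatever appeared most often in its data, which is exactly why common, widely repeated material is where it shines.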

Educational material and LLMs

Thus, I think it's fair to say that an LLM is a powerful tool for performing small, common tasks, and educational content falls squarely into that category. Assignments are designed to be small enough to be tractable, and they are common because most students around the world are learning the same things with only small differences. As a result, LLMs became something like a hack for educational content.

Let's remember that students were optimizing for better grades, not better learning, but the two objectives were correlated because other hacks are morally strongly discouraged and can be caught by plagiarism tools. Using an LLM, however, doesn't feel morally wrong at first, because you aren't stealing another person's work. It is work you produce by prompting software, and it comes out unique, at least to the plagiarism checker. LLMs let you finish tasks faster and more efficiently than the usual way of working.

With today's fast-paced educational environment and a system that optimizes for grades, students find it almost inevitable to use LLMs. The effect is magnified when you see classmates using LLMs and getting much better grades (the main optimization variable) with far less effort. Imagine you have 5 assignments worth 40% of your grade due this week. You can pull all-nighters, produce work that isn't your best, and end up with a "B." Or you can use an LLM and get your sleep, high-quality work, and an "A," while watching 80% of your friends have fun because they already finished their work with an LLM. What would you choose? I can tell you that more than 80% have already chosen the latter.

What’s the catch?

OK: LLMs improve work quality, take less effort, and lead to better grades. Why not? What's the catch? Unfortunately, the educational system that humans spent decades building has let us down. An LLM boosts the system's objective variable, "grades," at the expense of "learning." When you use an LLM for small tasks, you learn few of the details, if any. The brain isn't challenged enough to develop the skill. This effect shows up when you need to apply what you learned in real life, where an LLM's context window cannot fit the whole situation, or when the task isn't common and demands creativity: connecting different tools in unusual ways to solve the problem at hand or to express an idea in your mind. In those situations the LLM becomes nearly useless, but by then it is too late; it has already made your own abilities useless too.

Possible solutions?

I'd be lying if I said I had an immediate solution to this challenge we are all facing. However, I hope my hands-on experience can help science-driven people in power better understand our perspective and work out possible solutions.

Nevertheless, I have thoughts.

  • Maybe we need to change how our educational system works. Instead of making students optimize for grades and stress constantly about more work, maybe we need to give them space to learn and to optimize for learning.
  • We could redesign educational material so that assignments build on one another and carry an expanding context, e.g. progressive novel or short-book assignments, or progressive web-application assignments. However, there is a risk that we put ourselves in the same trap again as models' abilities improve over time.

If you are a student like me, here's what I am trying to do to navigate this dark alley. I try to stay at least as aware of my will to learn as of my drive for high grades. I avoid using LLMs whenever I can. I write code without Copilot whenever I can. I practice the skills I want to have whenever I can, just like this article: my initial motive for writing it was the feeling that I was losing my ability to write, so I decided to write a whole article with zero LLM use.
