Balancing Benefits and Risks: The Ethics of AI in Education and Cognitive Learning
- Yasemin

- Nov 15, 2025
- 2 min read
I have friends who hate AI. Despise it. Abhor the thought of using it for anything. I also know people who use it for practically everything, from school essays to their latest profile picture. Personally, I can see aspects of AI that could genuinely support my peers, yet conversations with my “hater” friends gave me insight into why they reject it and how it can unintentionally harm the very students it aims to help.
I find it interesting that the rise of AI seems to coincide with the increase in attention and support towards cognitive learning, especially within the past 15 years. With AI, the learning process can be personalized. The format and pacing of our lessons can be adapted to suit individual needs. I can personally attest to how challenging independent learning is without structure. AI can help by providing reminders and breaking tasks into manageable steps, keeping us focused and engaged.
However, the age-old question remains: what are the ethics of AI? Remember that AI is trained on human data, and it is limited by the datasets it is fed. Without the appropriate knowledge, developers can end up reinforcing harmful stereotypes and excluding real experiences, doing more harm than good. There is also the issue of privacy and data security, which is the main concern of my “AI hater” friends. How was the data collected? Did everyone whose information and experiences were used consent to them being turned into a (potentially profitable) service? How is the data protected?
A huge cause for concern at the moment affects not only students but people in general: over-reliance. These days, ChatGPT is used for even the smallest of tasks, and in doing so it is replacing our brains. We no longer have to think or remember anything! Some are even using AI to manage their mental health, asking it for diagnoses and treating it like a therapist. But AI is a machine. It cannot, and should not, replace teachers, tutors, or counselors. It can feign experience, but that experience is not real.
With all this in mind, how do we find the common ground? How do we meet in the middle?
First of all, remember that AI is a tool. We made AI to help us and make our lives easier. Especially with a topic as sensitive as ethics, developers must be transparent with their users about how AI tools work and how data is collected and used. They should also involve real educators and students in development, both to avoid reinforcing biases and to ensure that real needs are met. Beyond the classroom, policymakers must pay closer attention to how fast AI is developing and create standards, rules, and monitoring mechanisms to keep AI fair, safe, and beneficial.
So, am I a lover or a hater? Neither. I understand that AI can be empowering, but it also has the potential for harm. AI is inevitable, but my hope is that one day we can study in classrooms that have gained the support of AI without losing the human touch. With careful design and ethical oversight, I am confident that AI can help students thrive without replacing the people who guide and support them.
