The Ethical Landscape of Generative AI in Education: Insights from the Explore event

ErasmusX
5 min read · May 6, 2024


written by Bianca Raicu

The ethical and responsible use of Generative AI in higher education is a pressing topic that demands open discussions and careful examination. We have seen this need among faculties, students, teachers, and professional staff. ErasmusX’s recent Explore event, held on March 12th, provided a platform for stakeholders to engage in discussions and share insights on this topic.

From the essence of human learning to trust and regulations

The event kicked off with a panel discussion featuring Dr. João Gonçalves, Dr. Shu Li, Sonia de Jager, and Mira Nikolova. Their diverse perspectives shed light on various aspects of generative AI, prompting critical reflections on its implications for learning.

1. Learning in the age of generative AI:

Questions were raised about what learning fundamentally is, and these questions were extended to Large Language Models (LLMs). These models attempt to replicate the essence of human cognition, which we do not yet fully understand. This makes it important to ask why we should accelerate the development of LLMs in the first place.

Moreover, language models can exacerbate existing inequalities, and there is a growing call for greater criticality in the use of GenAI in education.

2. Perspectives around GenAI in higher education:

The perspectives on the ethical and responsible use of Generative AI in higher education are multifaceted.

  • Some educators express enthusiasm for its potential to enhance teaching and learning experiences;
  • Others voice concerns about the (un)ethical use of generative AI;
  • Certain teachers and students view generative AI as a creative tool that can extend their academic work;
  • Students can feel frustrated by the blurred boundary between what is and is not allowed;
  • The generated content itself poses reliability risks that leave both students and teachers challenged.

The blurred boundary between creativity and cheating, coupled with the challenge of ensuring clarity and reliability in generated content, underscores the importance of establishing clear guidelines and regulations at the course, faculty, and university levels. Collaborative efforts are essential to develop inclusive and representative frameworks that address the diverse needs and perspectives of all stakeholders.

3. Regulations, biases, and trust:

Regulating Generative AI poses significant challenges, particularly in keeping pace with its rapid evolution. Despite initiatives like the EU AI Act, there remains a gap between regulatory frameworks and practical implementation.

So, how can we adapt these general guidelines, and develop our own, to fit the needs of our university while also addressing existing biases and the challenge of trustworthiness? And how can we regulate this technology when many of the challenges lie with the users, not only with those who deploy it?

Moreover, the discussion of the Erasmian Language Model underscored the inherent biases of AI systems, emphasizing the need for transparent communication and co-created guidelines to foster trust between teachers, students, and the technology itself. Reaching consensus on guidelines, critical thinking, fact-checking, and digital literacy remains essential to mitigating the risks of AI-generated content.

The practical implications of integrating AI in education

After the event’s panel discussion, small-scale Table Talks facilitated dynamic exchanges among participating students, teachers, policy-makers and industry experts. The facilitators of the Table Talks were Ella Akin, Dr. Valdas Dovidavicius, Laura Mantilla Vargas, Dr. Marlon Domingus, Dr. Shu Li, and Shruthi Venkat.

1. GenAI as a tool:

GenAI can be integrated into academic activities much like a regular ‘colleague’, without glorifying it. Its benefits can be acknowledged and accompanied by self-reflection and critical thinking, rather than framing its use primarily as a matter of fraud. Not only does it provide easy and free access to information, it can also help people understand materials better. If people learn to use it cautiously and consciously, its benefits can be reaped effectively.

2. Teacher-Student Dynamics and Ethical Responsibilities:

The evolving dynamics between teachers and students brought about by generative AI necessitate shared responsibility and clear guidelines for its ethical use. These guidelines should be co-created by students and teachers alike, taking into account their limited time and the different skills needed to use generative AI ethically and responsibly.

Moreover, we already see changes in assessment methods: oral exams, for instance, are increasingly replacing written ones. To take this further, the different EUR faculties need to align their guidelines and practices.

3. Tackling Bias and Volatility:

Bias is inherent to large language models such as ChatGPT. OpenAI itself acknowledges that ChatGPT is skewed towards Western views and Anglocentrism. The model can also reinforce the user’s existing views, depending on how the prompt is formulated. Even with the development of the Erasmian Language Model, it is very difficult to eliminate bias entirely, as humans themselves are inherently biased too.

Moreover, large language models are volatile: their outputs are driven by the content and curation of their training data as well as the design of their algorithms, both of which have grown too complex to fully understand and control.

Hence, it is the responsibility of the users to learn how to fact-check and reflect on their own biases as well as the biases of the model. We need to learn how to navigate generative AI — not just in education, but also in our day-to-day lives.

The Explore event, in a nutshell

A vocal need emerged for proactive measures at EUR, going beyond advocacy and discussion:

  1. Clear guidelines and policies on using GenAI in higher education need to be developed in co-creation with students and educators;
  2. Teachers and students need a space to experiment with integrating GenAI into teaching and learning, and to reflect on doing so in ethical and responsible ways;
  3. The importance of fundamental skills such as critical thinking has become even more acute, and GenAI literacy can help build and reassess these skills;
  4. GenAI is here to stay; completely isolating higher education from it or banning it entirely is not an option.

The ethical landscape of generative AI may be one of the biggest challenges universities have faced in recent years. We need to engage with these action points, which could maximize the benefits of AI in education while tackling its challenges.

Are you ready to start acting? Email Milan or Georgiana for questions or suggestions.


ErasmusX

We are a team of passionate people forming the driving force behind experimental educational innovation at Erasmus University Rotterdam (EUR).