Responding to challenges together: Higher Education in an AI-Driven World

Last week I was privileged to be invited to present at the annual ISTW conference, themed Higher Education in an AI-Driven World: Project-Based Learning as a Gateway to Real-Life Educational Experiences, hosted by the Silesian University of Technology in Gliwice, Poland.

A sign for the conference saying 'International Staff Training Week'.

It was an extraordinary gathering of educators and researchers from across Europe, including participants from Poland, Germany, Ukraine, Romania and, of course, Australia (me!), each grappling with the same pressing question: how can universities remain relevant and prepare students to graduate fully equipped, at a time when artificial intelligence is reshaping not only learning and teaching but also the professions they are entering?


My Contribution: Cultivating Critical and Ethical AI Literacy

In my own presentation, I explored how higher education might respond to the challenges of generative AI by cultivating students’ capacity to engage with these tools critically and ethically. I examined what it means to be an AI-literate graduate—one who can question, evaluate, and apply AI outputs with academic integrity and informed judgement.

Anchoring my talk in project-based learning, I discussed how authentic, inquiry-driven tasks can foster a culture of ethical scholarship and reflective practice. Drawing on my own and others’ recent research into information literacy, bias, and academic integrity, I outlined key pedagogical strategies that could be useful for supporting responsible AI use in learning and assessment.

My keynote led to a great discussion about the emotional and cognitive load that this moment brings for both educators and students. Times of rapid technological change can be unsettling, but they also present a rare opportunity to rethink what we actually value in education:

  • What does learning mean in the age of AI?
  • What should we assess, and how might those assessments evolve?
  • How do students themselves understand and anticipate AI’s impact on their professional lives?

These conversations reinforced for me the importance of wellbeing, empathy, and reflective dialogue, which I believe must sit alongside innovation if higher education is to flourish in this new landscape.

The remainder of this post shares my takeaways from several presentations that focused on the role of GenAI in learning and teaching.


Are We at Risk of the Infantilisation of the Human Species in the Face of AI?

Professor Serhiy Lyenov (on behalf of his research partner Professor Aleksandra Kuzior, Silesian University of Technology)

Lyenov explored the provocative question of whether the rapid adoption of AI could lead to a form of societal infantilisation—a regression of human cognitive and moral independence. Drawing on the McKinsey 2024 finding that 78% of organisations now use AI for at least one function, the talk contrasted Society 4.0 (automation-driven) with Society 5.0 (human-centred innovation). The warning was clear: the thoughtless use of AI could erode core human qualities, including independent thinking, reflective reasoning, and creativity.

For me, this underscores how important it is for educators to keep challenging students to build these capabilities, through pedagogical strategies and assessment designs that help students see how independent thinking, reflective reasoning, and creativity contribute within their professional and academic contexts.

Throughout the presentation, these risks were illustrated through research on cognitive offloading (Gong & Yang, 2024) and automation bias, noting that everyday tools like navigation apps already diminish spatial memory (Dahmani & Bohbot, 2020). Algorithmic curation narrows exposure to diverse perspectives, while “AI realism” (Nightingale & Farid, 2022) blurs our ability to discern authenticity. Lyenov also cited evidence that showing AI-generated examples can reduce originality in human design tasks (ACM, 2024)—something I have observed when over-prescriptive exemplars unintentionally limit students’ responses.

While their broader claim—that dependence on AI may produce “learned helplessness” and biological regression—was presented more as philosophical speculation than empirical fact, the message resonates: uncritical integration of AI risks dulling the capacities, such as curiosity, moral reasoning, and creative insight, that define human intelligence.

When Can We Trust AI Models?

Professor Paweł Kasprowski

Kasprowski examined the reliability and transparency of AI systems within the context of the EU AI Act. He contrasted classic programmed systems—predictable and testable—with AI-based systems trained on vast datasets that produce probabilistic outputs. AI systems are inherently less predictable: they may outperform humans in many domains yet still fail in untested or rare scenarios.

Through compelling examples, Kasprowski demonstrated these limitations. For instance, a self-driving car cannot be “debugged” through code when it crashes; it must be retrained with more data—and sometimes there simply isn’t enough of the right data to cover every scenario. Small manipulations, such as stickers on stop signs, can completely deceive computer vision models. Similar vulnerabilities apply to text-based models trained on imperfect, often biased internet data. He emphasised that large language models (LLMs) function as black boxes—their decision-making processes remain opaque even to their developers.

Kasprowski’s key message was a call for informed caution. While AI systems excel at pattern recognition and efficiency, they lack moral reasoning and empathy and are governed solely by the goal functions humans define. Poorly specified goals can produce disastrous outcomes. Trust, therefore, must be conditional and coupled with transparency, oversight, and human-centred governance.

Students and Generative AI: Teaching, Learning, and Who Can Do What

Professor Melanie Ellis, Silesian University of Technology

Professor Ellis approached AI in higher education from an applied linguistics and pedagogy perspective, arguing that the key question is not just what AI can do, but what we want it to do—and why. Citing the EU AI Act (2024) and UNESCO’s AI Competency Framework, she highlighted the need for Critical AI Literacy, described by Mustafa et al. as the capacity “to deeply understand, evaluate and question AI systems and tools, especially regarding their safety, biases and decision-making processes.”

Ellis encouraged educators to investigate how students actually use generative AI, and how this intersects with the specific educational context, institutional expectations, and assessment design. She proposed negotiated boundaries and emphasised explicit discussion with students about acceptable AI use for specific tasks, embedding agreements within assessment rubrics. Such dialogue fosters responsibility and transparency rather than fear or prohibition.

Her central theme was integrity through self-awareness: in an AI-driven world, the only dependable safeguard is the learner’s sense of authenticity and accountability. Educators should help students reflect on questions such as:

  • “Can I do this myself?”
  • “Do I understand why I am doing what I am doing?”
  • “If the power went down or the internet went off, would I still know how to do this?”
  • “Am I being true to myself?”

Ellis cautioned that overreliance on AI risks deskilling students, producing graduates who are unable to function without digital scaffolds. True AI literacy involves both competence with technology and confidence in one’s independent thinking.

Collaborative Teaching and Learning in the Age of AI: How Can Higher Education Stay Relevant?

Hubertus Weyer, Wiesbaden Business School

Hubertus positioned AI within the wider landscape of disruption affecting universities. Given volatile enrolments, financial instability, climate transition, and sociopolitical tension, he argued that to remain relevant, higher education must reimagine not only what is taught but how learning occurs. Referencing Creating the University of the Future (Ehlers & Eigbrecht, 2025), he advocated for cultivating “future skills,” including critical AI understanding, adaptability, and collaborative intelligence.

He contrasted academia’s cautious approach to AI with the corporate sector’s enthusiastic adoption, suggesting universities risk obsolescence if they remain overly protective or rigid. Yet he also warned against uncritical embrace: AI should catalyse pedagogical transformation, not replace human connection.

Drawing on Schneider and Hutt’s Off the Mark (2020), Hubertus invited us to reconsider grading and ranking systems that reward reproduction over reflection. Instead, he called for reflective and collaborative assessments—tasks that prioritise relationship-building and authentic problem-solving over competition. His concluding provocation—that AI may paradoxically reawaken human collaboration—invited us to see the technology as an opportunity to reclaim what makes learning truly human: dialogue, creativity, and collective meaning-making.


AI in Language Learning: Practical Insights from the Classroom

As the conference was hosted by the Foreign Language Centre, a number of inspiring presentations by teachers of foreign languages—including English—focused on collaborative international virtual exchanges and practical strategies for teaching languages in an information environment increasingly shaped by Generative AI. These sessions offered a grounded view of how educators are adapting to the realities of AI.

The overarching insight from every presentation was the passion, humanity, creativity, and innovation educators bring to their work, and their desire to embed AI tools in ways that scaffold ethical, critical, and measured use.

“Can AI Do My Project?” — Polish–Japanese Student Collaboration in the AI Age

Iwona Seta-Dąbrowska

Iwona described an ongoing collaboration between students at the Silesian University of Technology (Poland) and Otaru University of Commerce (Japan). Having run for several years, the project provides a unique lens for observing how teaching and learning have shifted since the arrival of generative AI.

She opened with a provocative logic: if AI can replace some jobs, might it also replace student projects? As teachers cannot supervise every stage of student engagement, transparency and negotiation become essential. Iwona encouraged educators to discuss openly where and how AI may assist, for example:

  • Brainstorming and idea generation
  • Translation and communication support
  • Formatting and language correction
  • Research assistance (with clearly negotiated boundaries)
  • Visual generation aligned to learning outcomes

Her key message: clarity and trust are the antidotes to misuse. Over-reliance on AI can strip learning of productive friction and reduce complex projects to mere technical execution. She reminded us that AI cannot truly understand culture, emotion, humour, or identity; collaboration across cultures depends on empathy, creativity, and shared experiences—qualities that remain profoundly human.

Teaching English Through Content: When CLIL Meets AI

Janusz Sroka and Iwona Seta-Dąbrowska

Iwona and Janusz presenting.

Our hosts explored the intersection of CLIL (Content and Language Integrated Learning) and generative AI. CLIL emphasises learning content through language and language through content, grounded in four pillars: content, communication, cognition, and culture.

They showed how AI tools can enhance CLIL classrooms by helping to:

  • Generate discussion topics
  • Design comprehension questions and answers
  • Transcribe and adapt YouTube/multimedia materials

Benefits observed:

  • Greater variety of materials and higher student engagement
  • Faster, more varied pacing of classes
  • Better access to linguistic and conceptual scaffolds
  • Richer exposure to linguistic recycling (vocabulary consolidation)
  • Enhanced creativity for both students and teachers

Challenges and tensions:

  • Reliability and authenticity of AI-generated materials
  • AI simplification—plausible but potentially inaccurate outputs
  • Risk of cognitive shortcuts (surface learning)
  • Linguistic accuracy and the difficulty of precise prompting for learners
  • Ethical concerns around data and bias

Therefore:

  • AI as a scaffold, not a substitute
  • Let CLIL provide a human-centred framework for integration
  • Adopt critical and reflective use of both human and AI intelligence
  • Give clear instructions to AI—results depend on the prompt
  • Use AI as a feedback assistant; co-design tasks (AI + human)
  • Preserve CLIL’s cognitive and linguistic richness

In short, CLIL offers an ideal framework for integrating AI responsibly because it keeps human communication, culture, and cognition at its heart.

“Me, Myself and I — or AI?” Teaching Writing Skills in the Age of AI

Katarzyna Dudek

Katarzyna offered a compelling snapshot of a writing classroom in transition. Before AI, both students and teachers viewed writing as a process that built authentic voice, critical thought, and academic discipline. Now, many students see it as a task better outsourced to a machine—faster, cleaner, and mark-driven. Teachers, meanwhile, face the burden of verifying authenticity and rethinking what written assessments can reveal.

“Not allowing us to use ChatGPT is like forbidding a farmer from using a tractor.”

Rather than prohibition, Katarzyna designed a structured process that harnesses AI for learning rather than replacement:

  1. Students draft by hand.
  2. They scan the draft and convert it to editable text (OCR).
  3. They prompt ChatGPT to edit the draft (with a “hidden” lesson in prompt engineering).
  4. They submit both versions, with commentary on what changed and why.
  5. They reflect on differences between the texts and discuss the platform’s editorial choices.

This approach turns writing into a metacognitive exercise: students learn not only how to write but how AI edits and interprets their work—AI as a mirror for learning, not a shortcut to completion.


The Centrality of the Human in the Age of AI

Across these presentations, a single thread connected every idea: the future of higher education lies in the role of the human and how we critically and creatively use AI to complement rather than replace teaching and learning. Our task is not to keep pace with technology, but to ensure that learning remains grounded in empathy, reflection, and ethical purpose.

Universities are challenged to equip graduates not only with the skills to operate AI systems but to question them; to notice bias, to weigh evidence, and to understand the social and moral implications of platforms that are rapidly becoming ubiquitous. As educators, we must model these capacities ourselves, holding space for uncertainty, dialogue, and the shared work of discernment.

The role of higher education is not to keep pace with machines, but to keep humanity at the centre of learning.
