How European business schools use artificial intelligence to rethink teaching

Imperial, Vlerick, and Porto are weaving artificial intelligence into teaching, assessment, and curriculum design. Their experiments range from professor avatars and simulation-based learning to assignment audits and human-led evaluation.

European business schools are moving beyond pilots to integrate artificial intelligence across teaching and assessment. At Imperial Business School, a learning innovation team is testing professor “digital twins,” artificial intelligence avatars trained on module content, lecture transcripts, and live session video. The goal is to provide contextual, round-the-clock support in the instructor’s voice and style, especially for globally dispersed online MBA cohorts. Vlerick Business School in Brussels is taking aim at assessment integrity with an internal tool that rates tasks for vulnerability to generative assistance, while Porto Business School is embedding artificial intelligence across curriculum, faculty development, and classroom management with a strong emphasis on human judgment.

Imperial’s IDEA Lab invited faculty to opt in by recording personalized clips and sharing course materials to train their digital twins, which students then used for both general questions and structured learning activities. Learners tended to engage more conversationally with the professor avatars than with generic bots, and some courses asked students to verbalize their reasoning with the twin before submitting written reflections. The school also trialed scenario-based exercises, including an entrepreneurship negotiation with an artificial intelligence venture capital chatbot that referenced each team’s plan and deal red lines. Early challenges surfaced, including lecture transcripts that contained quiz answers and uneven student comfort with the format, but staff see high potential when the twins are paired with simulations and roleplay.

Vlerick’s push began after campus-wide access to Microsoft Copilot accelerated faculty concerns about validity. The school built a GPT-powered checker that flags assignment components as low, medium, or high risk of being easily completed by generative tools and offers redesign suggestions. Rather than policing partial use, Vlerick now labels tasks as either artificial intelligence prohibited or artificial intelligence encouraged. Guided by Bloom’s Taxonomy and a recent chapter co-authored by associate dean Steve Muylle, faculty map which elements can be augmented and which must remain human, such as critical synthesis or authentic delivery. Classes where students leaned too heavily on artificial intelligence saw strong coursework but weak paper-and-pencil exam results, prompting more in-class discussion and multi-agent simulations to cultivate collaboration with both machines and people.

Porto Business School treats artificial intelligence as foundational rather than optional. Faculty use it to prepare materials, generate synthetic datasets, and test classroom scripts, while students study topics such as machine learning, prompt engineering, and ethics and build proofs of concept that compare traditional and artificial intelligence-driven techniques. Lecturer André Santana stresses intention and alignment with learning goals, warning that faster outputs are not the same as competency. Assessments remain human-led, with students asked to clarify what they did themselves and what was augmented. Santana frames the core challenge as cultural, not technical, noting that artificial intelligence can accelerate learning and support creativity, but inspiration and evaluation must remain human-centered.

Across Imperial, Vlerick, and Porto, artificial intelligence is already reshaping pedagogy, from immediate support and immersive simulations to risk-aware assessment design. The common thread is not whether to use the technology, but how to integrate it in ways that protect academic rigor and preserve what students value most in education: authentic human judgment and interaction.


LLM-PIEval: a benchmark for indirect prompt injection attacks in large language models

The rise of large language models has increased interest in artificial intelligence, and their integration with external tools introduces risks such as direct and indirect prompt injection. LLM-PIEval provides a framework and test set to measure indirect prompt injection risk, and the authors release API specifications and prompts to support wider assessment.

NVIDIA may stop bundling memory with GPU kits amid GDDR shortage

NVIDIA is reportedly considering supplying only bare silicon to its AIC partners rather than the usual GPU-and-memory kit as GDDR shortages constrain fulfillment. The move follows wider industry pressure from soaring DRAM prices and an impending price increase from AMD of about 10% across its GPU lineup.
