How European business schools use artificial intelligence to rethink teaching

Imperial, Vlerick, and Porto are weaving artificial intelligence into teaching, assessment, and curriculum design. Their experiments range from professor avatars and simulation-based learning to assignment audits and human-led evaluation.

European business schools are moving beyond pilots to integrate artificial intelligence across teaching and assessment. At Imperial Business School, a learning innovation team is testing professor “digital twins,” artificial intelligence avatars trained on module content, lecture transcripts, and live session video. The goal is to provide contextual, round-the-clock support in the instructor’s voice and style, especially for globally dispersed online MBA cohorts. Vlerick Business School in Brussels is taking aim at assessment integrity with an internal tool that rates tasks for vulnerability to generative assistance, while Porto Business School is embedding artificial intelligence across curriculum, faculty development, and classroom management with a strong emphasis on human judgment.

Imperial’s IDEA Lab invited faculty to opt in by recording personalized clips and sharing course materials to train their digital twins, which students then used for both general questions and structured learning activities. Learners tended to engage more conversationally with the professor avatars than with generic bots, and some courses asked students to verbalize their reasoning with the twin before submitting written reflections. The school also trialed scenario-based exercises, including an entrepreneurship negotiation with an artificial intelligence venture capital chatbot that referenced each team’s plan and deal red lines. Early challenges surfaced, such as lecture transcripts containing quiz answers and mixed student comfort levels, but staff see high potential when paired with simulations and roleplay.

Vlerick’s push began after campus-wide access to Microsoft Copilot accelerated faculty concerns about validity. The school built a GPT-powered checker that flags assignment components as low, medium, or high risk of being easily completed by generative tools and offers redesign suggestions. Rather than policing partial use, Vlerick now labels tasks as either artificial intelligence prohibited or artificial intelligence encouraged. Guided by Bloom’s Taxonomy and a recent chapter co-authored by associate dean Steve Muylle, faculty map which elements can be augmented and which must remain human, such as critical synthesis or authentic delivery. Classes where students leaned too heavily on artificial intelligence saw strong coursework but weak paper-and-pencil exam results, prompting more in-class discussion and multi-agent simulations to cultivate collaboration with both machines and people.

Porto Business School treats artificial intelligence as foundational rather than optional. Faculty use it to prepare materials, generate synthetic datasets, and test classroom scripts, while students study topics such as machine learning, prompt engineering, and ethics and build proofs of concept that compare traditional and artificial intelligence-driven techniques. Lecturer André Santana stresses intention and alignment with learning goals, warning that faster outputs are not the same as competency. Assessments remain human-led, with students asked to clarify what they did themselves and what was augmented. Santana frames the core challenge as cultural, not technical, noting that artificial intelligence can accelerate learning and support creativity, but inspiration and evaluation must remain human-centered.

Across Imperial, Vlerick, and Porto, artificial intelligence is already reshaping pedagogy, from immediate support and immersive simulations to risk-aware assessment design. The common thread is not whether to use the technology, but how to integrate it in ways that protect academic rigor and preserve what students value most in education: authentic human judgment and interaction.

