Professor warns AI in education risks dependence on Big Tech algorithms

Northumbria University’s Kimberley Hardcastle says the biggest risk of AI in classrooms is not cheating but ceding judgment to Big Tech algorithms. Fresh Anthropic data shows students already relying on chatbots to create content and solve assignments.

Generative artificial intelligence (AI) is reshaping education’s foundations by shifting control of knowledge from people to algorithms, according to Kimberley Hardcastle, a business and marketing professor at Northumbria University in the UK. She told Business Insider that while institutions fixate on plagiarism, grading and AI literacy, the deeper hazard is that students and educators are outsourcing judgment to commercial systems such as ChatGPT, Claude and Gemini. The result, she warned, is a quiet transfer of critical thinking and authority away from human deliberation and toward proprietary models built and tuned by Big Tech.

Evidence of this shift is already visible in classroom behavior. Anthropic, the company behind Claude, analyzed about one million student conversations in April and found that 39.3 percent involved creating or polishing educational content, while 33.5 percent asked the chatbot to solve assignments directly. Hardcastle said this is not only about avoiding work; it changes how knowledge is constructed. When learners bypass the cognitive work of synthesis and evaluation, they alter their epistemological relationship with knowledge itself and begin to rely on AI not just to supply answers but to define what counts as a good answer. That shift can also affect job prospects: validation and creation of knowledge come to depend on AI mediation rather than human judgment.

Hardcastle’s central concern is an “atrophy of epistemic vigilance,” the erosion of the instinct and ability to independently verify, challenge and construct knowledge without algorithmic help. Today’s students are encountering AI midstream in their cognitive development, making them what she calls AI-displaced rather than AI-native learners. She described a transformation in cognitive practices that could extend beyond classrooms: if independent evaluation declines, society risks defaulting to algorithms as arbiters of truth.

Beyond individual cognition, Hardcastle warned of structural risks. If AI systems become the primary mediators of knowledge, Big Tech firms effectively influence what is treated as valid knowledge. The danger is not dramatic control but subtle epistemic drift: as people repeatedly defer to AI-generated summaries, commercial training data and optimization metrics begin to shape which questions get asked and which methods appear legitimate. That drift can entrench corporate influence and move authority from human judgment to algorithmic logic.

For education, the question is not whether to resist AI but how to integrate it while preserving human epistemic agency. Hardcastle urged universities to move beyond compliance and operational fixes and to confront fundamental questions about knowledge authority in an AI-mediated world. Without deliberate action, she said, AI could erode independent thought even as Big Tech profits from steering how knowledge is created and validated.
