Professor warns Artificial Intelligence in education risks dependence on Big Tech algorithms

Northumbria University’s Kimberley Hardcastle says the biggest risk of Artificial Intelligence in classrooms is not cheating but ceding judgment to Big Tech algorithms. Fresh Anthropic data shows students already relying on chatbots to create content and solve assignments.

Generative Artificial Intelligence is reshaping education’s foundations by shifting control of knowledge from people to algorithms, according to Kimberley Hardcastle, a business and marketing professor at Northumbria University in the UK. She told Business Insider that while institutions fixate on plagiarism, grading and Artificial Intelligence literacy, the deeper hazard is that students and educators are outsourcing judgment to commercial systems like ChatGPT, Claude and Gemini. The result, she warned, is a quiet transfer of critical thinking and authority away from human deliberation toward proprietary models built and tuned by Big Tech.

Evidence of this shift is already visible in classroom behavior. Anthropic, the company behind Claude, analyzed about one million student conversations in April and found that 39.3 percent involved creating or polishing educational content, while 33.5 percent asked the chatbot to solve assignments directly. Hardcastle said this is not merely about avoiding work; it changes how knowledge is constructed. When learners bypass the cognitive work of synthesis and evaluation, they alter their epistemological relationship with knowledge itself and begin to rely on Artificial Intelligence not just to supply answers but to define what counts as a good answer. That shift can ultimately affect job prospects, leaving graduates with a cognitive framework in which knowledge is validated and created through Artificial Intelligence mediation rather than human judgment.

Hardcastle’s central concern is an “atrophy of epistemic vigilance,” the erosion of the instinct and ability to independently verify, challenge and construct knowledge without algorithmic help. Today’s students are encountering Artificial Intelligence midstream in their cognitive development, making them what she calls Artificial Intelligence-displaced rather than Artificial Intelligence-native learners. She described a transformation in cognitive practices that could extend beyond classrooms: if independent evaluation declines, society risks defaulting to algorithms as arbiters of truth.

Beyond individual cognition, Hardcastle warned of structural risks. If Artificial Intelligence systems become the primary mediators of knowledge, Big Tech firms effectively influence what is treated as valid knowledge. The danger is not dramatic control but subtle epistemic drift, where repeated deference to Artificial Intelligence-generated summaries lets commercial training data and optimization metrics shape which questions get asked and which methods appear legitimate. That drift can entrench corporate influence and move authority from human judgment to algorithmic logic.

For education, the question is not whether to resist Artificial Intelligence but how to integrate it while preserving human epistemic agency. Hardcastle urged universities to move beyond compliance and operational fixes and to confront fundamental questions about knowledge authority in an Artificial Intelligence-mediated world. Without deliberate action, she said, Artificial Intelligence could erode independent thought even as Big Tech profits from steering how knowledge is created and validated.

Artificial Intelligence divides employers as hiring and headcount shift

U.S. hiring beat expectations in April, but employers remain split on whether Artificial Intelligence should drive layoffs, productivity gains, or internal redeployment. At the same time, candidate use of Artificial Intelligence is outpacing employer adoption in hiring, adding new pressure to screening and entry-level recruiting.

What businesses need to know about the EU Cyber Resilience Act

The EU Cyber Resilience Act is turning product cybersecurity into a legal requirement for companies that sell digital products into the European Union. A key compliance milestone arrives in September 2026, well before the full regulation takes effect in 2027.

Claude Mythos and cyber insurance’s next inflection point

Claude Mythos is being treated by governments and regulators as a potential systemic cyber risk with implications for financial stability and insurance markets. Its emergence is intensifying pressure on insurers to clarify whether Artificial Intelligence-enabled cyber losses are covered, excluded, or require new stand-alone products.

OpenAI expands ChatGPT ads with self-serve manager

OpenAI is widening its ChatGPT ads pilot with a beta self-serve Ads Manager, new bidding options and broader measurement tools. The push signals a deeper move into advertising as the company expands the program into several international markets.
