Faculty concerns grow over generative artificial intelligence in student learning

A new survey from the American Association of Colleges and Universities finds faculty deeply worried that students’ reliance on generative artificial intelligence is weakening critical thinking, attention spans, and the value of a college degree, even as professors acknowledge its importance for future careers.

Faculty across higher education are increasingly alarmed that students' dependence on generative artificial intelligence could erode core academic skills and the long-term value of degrees. A new survey by the American Association of Colleges and Universities, conducted with Elon University's Imagining the Digital Future Center, finds that most instructors worry that overreliance on the technology will come at the expense of students' ability to think critically and maintain focus. The rapid spread of generative tools has pushed colleges to what Eddie Watson, the association's vice president for digital innovation, calls an "inflection point," where leaders must reconsider teaching models, assessment methods, and academic integrity policies to keep human judgment and inquiry central to learning.

The survey results show that the vast majority of faculty see significant downsides as generative artificial intelligence becomes embedded in academic work. An overwhelming 95% of faculty members are concerned that students will over-rely on generative artificial intelligence as the technology advances, and nearly two-thirds of those surveyed said their college's graduates were "not very or not at all prepared" to use it in the workplace. Majorities of faculty members also warn that generative artificial intelligence will diminish students' critical thinking skills (90%), reshape the work and role of those who teach in higher education (86%), decrease student attention spans (83%), disrupt the typical teaching model in their department (79%), increase cheating on campus (78%), and devalue academic degrees (74%). Nearly nine in 10 faculty members have created policies on acceptable uses of artificial intelligence in coursework, and almost as many have addressed bias, hallucinations, misinformation, privacy, and ethics in conversations with students.

Even with these concerns, faculty are not uniformly opposed to generative artificial intelligence, and many see potential benefits if it is used thoughtfully. About 60% believe the technology could enhance or customize learning, suggesting that personalized support and new forms of engagement may emerge from it. Many instructors also believe students must learn to use generative artificial intelligence because it will affect their future jobs, and the American Association of Colleges and Universities urges faculty to stress the ethical, environmental, and social consequences of its use. Still, respondents say institutions are falling short in preparing both students and staff to engage with these tools responsibly. Co-author Lee Rainie notes that some faculty are innovating, some are strongly resistant, and many are uncertain how to proceed. There is broad agreement, however, that without clear values, shared norms, and serious investment in artificial intelligence literacy, higher education risks trading deep learning and students' intellectual independence for convenience and a more automated future. Some 1,057 faculty members responded to the survey, underscoring how widespread and urgent these debates have become.

Looking ahead at artificial intelligence and work in 2026

MIT Sloan researchers expect 2026 to bring a widening performance gap between humans and large language models, a push to scale responsible artificial intelligence deployments, and new questions about creativity, safety, and data access in the workplace.

Model autophagy disorder and the risk of self-consuming artificial intelligence models

Glow New Media director Phil Blything warns that as artificial intelligence systems generate more online text, future language models risk training on their own synthetic output and degrading in quality. He draws a parallel with the early human-driven web, arguing that machine-generated content could undermine the foundations that made resources like Wikipedia possible.
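The degradation loop Blything describes, often called model collapse, can be caricatured with a toy experiment that is not from the article: fit a simple statistical model to data, then train each subsequent generation only on samples the previous model produced. A minimal Python sketch, with all names and parameters chosen purely for illustration:

```python
import numpy as np

# Toy illustration of a self-consuming training loop ("model autophagy"):
# generation 0 is "human" data; each later generation fits a Gaussian to
# the previous generation's output, then emits only synthetic samples.
# Estimation error compounds, so the learned distribution drifts and its
# spread tends to decay over many generations.

rng = np.random.default_rng(42)
N_SAMPLES = 100       # illustrative: small samples make the drift visible
N_GENERATIONS = 20

# Generation 0: stand-in for the original human-written web
data = rng.normal(loc=0.0, scale=1.0, size=N_SAMPLES)

for gen in range(1, N_GENERATIONS + 1):
    mu, sigma = data.mean(), data.std()            # "train" on available text
    data = rng.normal(mu, sigma, size=N_SAMPLES)   # next generation: synthetic only
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Run over many generations, the fitted standard deviation tends to shrink and the mean wanders from the original distribution. It is only a statistical caricature under the stated assumptions, but it mirrors the concern: once most training text is machine-generated, each model inherits and amplifies the estimation errors of the one before it.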

Artificial intelligence and the new great divergence

A White House research paper compares the potential impact of artificial intelligence to the Industrial Revolution and examines whether it could trigger a new great divergence among nations. The report outlines how the Trump administration aims to secure American leadership through accelerated innovation, infrastructure, and deregulation.
