Faculty concerns grow over generative artificial intelligence in student learning

A new survey from the American Association of Colleges and Universities finds faculty deeply worried that students’ reliance on generative artificial intelligence is weakening critical thinking, attention spans, and the value of a college degree, even as professors acknowledge its importance for future careers.

Faculty across higher education are increasingly alarmed that students’ dependence on generative artificial intelligence could erode core academic skills and the long-term value of degrees. A new survey by the American Association of Colleges and Universities, conducted with Elon University’s Imagining the Digital Future Center, finds that most instructors worry that overreliance on the technology will come at the expense of students’ ability to think critically and maintain focus. The rapid spread of generative tools has pushed colleges to what Eddie Watson, the association’s vice president for digital innovation, calls an “inflection point,” where leaders must reconsider teaching models, assessment methods, and academic integrity policies to keep human judgment and inquiry central to learning.

The survey results show that the vast majority of faculty see significant downsides as generative artificial intelligence becomes embedded in academic work. An overwhelming 95% of faculty members are concerned that students will over-rely on generative artificial intelligence as the technology advances. Nearly two-thirds of those surveyed also said their college’s graduates were “not very or not at all prepared” to use generative artificial intelligence in the workplace. Majorities of faculty members also warn that generative artificial intelligence will diminish students’ critical thinking skills (90%), decrease student attention spans (83%), change the work and role of those who teach in higher education (86%), disrupt the typical teaching model in their department (79%), increase cheating on campus (78%), and devalue academic degrees (74%). Nearly nine in 10 faculty members have created policies for students on acceptable uses of artificial intelligence in coursework, and almost as many have addressed bias, hallucinations, misinformation, privacy, and ethics in conversations with students.

Even with these concerns, faculty are not uniformly opposed to generative artificial intelligence, and many see potential benefits if it is used thoughtfully. About 60% believe generative artificial intelligence could enhance or customize learning, suggesting that personalized support and new forms of engagement may emerge from the technology. Many instructors also believe students must learn how to use generative artificial intelligence because it will affect their future jobs, and the American Association of Colleges and Universities urges faculty to stress the ethical, environmental, and social consequences of its use. Still, respondents say institutions are falling short in preparing both students and staff to engage with these tools responsibly. Co-author Lee Rainie notes that some faculty are innovating, some are strongly resistant, and many are uncertain how to proceed, but there is broad agreement that without clear values, shared norms, and serious investment in artificial intelligence literacy, higher education risks trading deep learning and students’ intellectual independence for convenience and a more automated future. Some 1,057 faculty members responded to the survey, highlighting how widespread and urgent these debates have become.

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
