Virginia Tech Research Signals End of Unproctored Testing in Artificial Intelligence Age

A Virginia Tech study finds unproctored online testing is now fully vulnerable to cheating via artificial intelligence, urging fundamental changes in assessment methods.

Virginia Tech research warns that unproctored online testing has become entirely vulnerable to cheating thanks to advanced large language models such as ChatGPT. The study, led by industrial-organizational psychologist Louis Hickman, highlights that developments in reasoning large language models, including OpenAI's o1 model, have rendered traditional unsupervised digital assessments obsolete, as anyone can now copy and paste test questions into these models and receive high-scoring answers in seconds.

Unproctored assessments have long enabled employers and educators to evaluate large pools of candidates and students efficiently. However, the research shows that new reasoning models, trained with reinforcement learning and capable of an internal monologue for self-improvement, have moved beyond earlier limitations, particularly on quantitative ability tests. The latest generation of models achieves strong results even on complex assessments, including personality, situational judgment, verbal ability, and numerical reasoning tests, effectively undermining the integrity and predictive value of such measures. Industry surveys indicate that about one-third of job applicants and a majority of students use large language models on high-stakes assessments and coursework.

With the validity of unproctored testing declining, experts now recommend fundamental overhauls in test administration and design. Proposed solutions include returning to supervised proctoring, integrating large language models into the assessment process itself, imposing strict time constraints, employing software to detect artificial intelligence usage, asking test-takers to verbalize their thought processes, and analyzing digital traces for authenticity. While each approach involves trade-offs, the research makes clear that employers and educators can no longer rely on legacy testing protocols and must adapt to the realities posed by generative artificial intelligence.
