Virginia Tech Research Signals End of Unproctored Testing in Artificial Intelligence Age

A Virginia Tech study finds unproctored online testing is now fully vulnerable to cheating via artificial intelligence, urging fundamental changes in assessment methods.

Virginia Tech research warns that unproctored online testing has become entirely vulnerable to cheating thanks to advanced large language models such as ChatGPT. The study, led by industrial-organizational psychologist Louis Hickman, highlights that developments in reasoning large language models, including OpenAI's o1 model, have rendered traditional unsupervised digital assessments obsolete: anyone can now copy and paste test questions into these models and receive high-scoring answers in seconds.
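To illustrate the point, here is a minimal sketch (not from the study) of how easily a copied test item can be relayed to a reasoning model. It assumes the OpenAI Python SDK with an API key in the environment; the model name and sample question are illustrative only.

```python
# Illustrative sketch: relay a copied test item to a reasoning model.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY are set;
# the model name and question below are placeholders, not from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A train covers 180 km in 2.5 hours and then 120 km in 1.5 hours. "
    "What is its average speed for the entire journey?"
)

response = client.chat.completions.create(
    model="o1",  # reasoning model named in the article; any capable model works
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)  # a worked answer arrives in seconds
```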

Unproctored assessments have long enabled employers and educators to evaluate large pools of candidates and students efficiently. However, the research shows that new reasoning models, trained with reinforcement learning and able to use an internal monologue to refine their answers, have moved past earlier limitations, particularly on quantitative ability tests. The latest generation of models achieves strong results even on complex assessments, including personality, situational judgment, verbal ability, and numerical reasoning tests, effectively undermining their integrity and predictive value. Industry surveys indicate that roughly one-third of job applicants and a majority of students already use large language models for high-stakes assessments and coursework.

With the validity of unproctored testing declining, experts now recommend fundamental overhauls of test administration and design. Proposed solutions include returning to supervised proctoring, integrating large language models into the assessment process itself, imposing strict time constraints, using software to detect artificial intelligence usage, asking test-takers to verbalize their thought processes, and analyzing digital traces for authenticity. Each approach has trade-offs, but the research makes clear that employers and educators can no longer rely on legacy testing protocols and must adapt to the realities posed by generative artificial intelligence.
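As one example of what analyzing digital traces might look like in practice, the following is a hypothetical heuristic, not drawn from the study: flag responses whose answer latency is implausibly short for the item's length, or that arrived via a paste event. The data fields and thresholds are assumptions.

```python
# Hypothetical "digital trace" heuristic (an assumption, not the study's method):
# flag answers that arrive faster than the item could plausibly be read,
# or that were pasted rather than typed.
from dataclasses import dataclass

@dataclass
class ResponseTrace:
    item_word_count: int    # length of the question shown to the test-taker
    latency_seconds: float  # time from item display to answer submission
    pasted: bool            # whether the answer arrived via a paste event

def flag_suspicious(trace: ResponseTrace,
                    words_per_second: float = 4.0,
                    min_think_seconds: float = 5.0) -> bool:
    """Return True if the trace looks inconsistent with unaided work."""
    # Minimum plausible time: reading the item plus a brief pause to think.
    min_plausible = trace.item_word_count / words_per_second + min_think_seconds
    return trace.pasted or trace.latency_seconds < min_plausible

# Example: a 60-word numerical-reasoning item answered in 8 seconds via paste.
print(flag_suspicious(ResponseTrace(item_word_count=60,
                                    latency_seconds=8.0,
                                    pasted=True)))  # prints: True
```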

Impact Score: 78
