Research on introspection and self-knowledge in large language models
Researchers are probing how large language models represent their own knowledge, behavior, and internal states, and how reliably they can report on those states. Recent work spans calibration, situational awareness, introspective self-modeling, mechanistic interpretability, and debates over the limits of model self-reports.