AnoLLM is a framework that leverages large language models for unsupervised tabular anomaly detection. The method converts tabular records into a standardized textual serialization, then fine-tunes a pre-trained large language model on that serialized data. AnoLLM assigns anomaly scores based on the negative log-likelihood produced by the adapted model. The authors position this approach as an alternative to traditional tabular methods that often require extensive feature engineering and can discard textual information during preprocessing.
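The scoring principle can be illustrated with a minimal sketch. This toy uses a character-frequency model fit on "normal" serialized rows as a stand-in for the fine-tuned LLM (the real method uses a neural language model's conditional token probabilities); the row template and data are invented for illustration. A row that the model finds surprising receives a higher mean negative log-likelihood, i.e. a higher anomaly score.

```python
import math
from collections import Counter

def nll_score(text, probs, unseen_p):
    """Mean negative log-likelihood of a serialized row under the model."""
    return sum(-math.log(probs.get(c, unseen_p)) for c in text) / len(text)

# Fit a character-frequency model on serialized "normal" rows
# (a toy stand-in for AnoLLM's fine-tuned LLM).
train_rows = [
    "age is 34, income is 52000, city is Boston",
    "age is 41, income is 61000, city is Denver",
    "age is 29, income is 48000, city is Austin",
]
counts = Counter("".join(train_rows))
total = sum(counts.values())
vocab = len(counts) + 1  # reserve one slot for unseen characters
probs = {c: (n + 1) / (total + vocab) for c, n in counts.items()}  # Laplace smoothing
unseen_p = 1.0 / (total + vocab)

normal = nll_score("age is 37, income is 55000, city is Boston", probs, unseen_p)
odd = nll_score("age is zz?, income is ??????, city is zzzzzz", probs, unseen_p)
assert odd > normal  # the out-of-distribution row scores as more anomalous
```

The same ranking logic carries over to the full method: rows are sorted by the language model's negative log-likelihood, with no labels required at any point.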
The framework emphasizes preserving data integrity and streamlining preprocessing for mixed-type datasets. By serializing rows into text, AnoLLM can naturally incorporate textual features alongside numerical and categorical fields without separate feature transformations. The approach adapts a pre-trained model to the serialized tabular format rather than building specialized handcrafted encoders for each column type. That design choice is intended to reduce the labor and complexity of feature engineering while retaining textual signals that can be informative for anomaly detection.
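The uniform treatment of column types can be sketched as below. The `"<column> is <value>"` template and the example record are illustrative assumptions, not AnoLLM's exact serialization format; the point is that numeric, categorical, and free-text fields all pass through the same rendering step, so no per-column encoder is needed.

```python
def serialize_row(row):
    """Turn one tabular record into a single text string.

    Every field type is rendered identically, so textual columns are
    kept verbatim instead of being dropped or separately encoded.
    """
    return ", ".join(f"{col} is {val}" for col, val in row.items())

row = {
    "age": 34,                       # numeric
    "membership": "gold",            # categorical
    "note": "asked about a refund",  # free text, preserved as-is
}
text = serialize_row(row)
# -> "age is 34, membership is gold, note is asked about a refund"
```

The resulting strings are what the language model is adapted on, which is why the textual `note` field contributes to the anomaly score just like the numeric fields do.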
The authors report empirical results showing that AnoLLM delivers the best performance on six benchmark datasets containing mixed feature types. They also evaluate the method across 30 datasets from the ODDS library, which are predominantly numerical, and find that AnoLLM performs on par with top-performing baselines there. Taken together, the results indicate the approach is especially effective where textual and mixed-type data are present and remains competitive on primarily numerical collections, offering a unified, model-driven alternative to conventional tabular anomaly detection pipelines.
