Pattern Computer announced the publication of its research, “Adaptive example selection for prototype-based explainable mitosis detection in digital pathology,” in Scientific Reports, a Nature Portfolio journal. The work presents an explainable artificial intelligence (AI) framework designed to pair high-performance deep learning with transparent, human-aligned reasoning for use in regulated, high-stakes industries.
The framework targets a persistent challenge in modern AI systems: many models operate as black boxes, which limits adoption in settings where decisions must be understood, trusted, and validated. In digital pathology, deep learning models can produce diagnostic outputs without revealing their reasoning, raising concerns about liability, reliability, and clinical oversight. Pattern positions explainability as essential for verifying model logic, catching unexpected behavior, and supporting audits when errors occur.
In its primary application, mitosis detection in digital pathology, the system achieves strong predictive performance while maintaining 96% fidelity between its predictions and their explanations. Each decision is supported by a small set of intuitive, comparable examples intended to show both what the model predicted and why. At the center of the approach is adaptive, contrastive example selection: for every prediction, the system presents both supporting and opposing evidence, enabling a counterfactual style of reasoning.
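The contrastive idea can be illustrated with a minimal sketch. The paper's actual selection procedure is adaptive and more sophisticated; the version below simply retrieves, for a query image's embedding, the nearest reference examples that share the predicted class (supporting evidence) and the nearest ones that do not (opposing evidence). All names here (`select_contrastive_examples`, the embedding bank) are illustrative, not from the publication.

```python
import numpy as np

def select_contrastive_examples(query_emb, bank_embs, bank_labels,
                                predicted_label, k=3):
    """Pick k nearest supporting examples (same class as the prediction)
    and k nearest opposing examples (any other class) from a labeled
    reference bank of embeddings.

    Hypothetical sketch: embeddings are assumed to come from a layer of
    the detection model; Euclidean distance is an illustrative choice.
    """
    # Distance from the query to every example in the bank.
    dists = np.linalg.norm(bank_embs - query_emb, axis=1)

    # Split the bank into same-class and other-class candidates.
    support_idx = np.where(bank_labels == predicted_label)[0]
    oppose_idx = np.where(bank_labels != predicted_label)[0]

    # Keep the k closest from each side.
    support = support_idx[np.argsort(dists[support_idx])[:k]]
    oppose = oppose_idx[np.argsort(dists[oppose_idx])[:k]]
    return support, oppose
```

Presenting both sets side by side is what enables the counterfactual reading: a pathologist can see not only cells that resemble the prediction, but also the closest cells the model judged differently.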
Pattern says this method differs from conventional explainability techniques that rely on abstract feature-importance scores or opaque internal signals. Instead, it grounds explanations in real examples, keeping them interpretable while preserving high fidelity. The study also notes an operational advantage: explainability can expose hidden model weaknesses, giving teams a way to improve systems continuously and deploy them more robustly.
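The 96% fidelity figure describes agreement between the model's predictions and its explanations. One common way to quantify that kind of agreement (an assumption here, not necessarily the paper's exact metric) is to derive a label implied by the explanation, for example the class of the closest selected evidence example, and measure how often it matches the model's own prediction:

```python
import numpy as np

def explanation_implied_label(query_emb, evidence_embs, evidence_labels):
    """Label implied by the explanation: the class of the nearest example
    among the selected evidence (supporting plus opposing). Illustrative
    definition only."""
    dists = np.linalg.norm(evidence_embs - query_emb, axis=1)
    return evidence_labels[int(np.argmin(dists))]

def fidelity(model_preds, implied_preds):
    """Agreement rate between model predictions and explanation-implied
    labels; a value of 0.96 would correspond to the 96% reported."""
    return float(np.mean(np.asarray(model_preds) == np.asarray(implied_preds)))
```

Under a metric like this, high fidelity means the examples shown to the user genuinely track the model's decision rather than decorating it after the fact.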
Although the research was validated in digital pathology, Pattern says the approach is intended to scale to other domains where transparency is critical, including medical imaging, drug discovery, manufacturing quality control, and digital forensics. The company says it is now working to expand the framework to larger datasets, integrate it into real-time workflows, and move toward production deployment, with a broader goal of building a universal explainable Artificial Intelligence platform for transparent and accountable decision-making.
