Pattern unveils explainable Artificial Intelligence framework for pathology

Pattern Computer has published a new explainable Artificial Intelligence framework in Scientific Reports aimed at making deep learning decisions more transparent in high-stakes settings. The company says the approach combines strong predictive performance with evidence-based explanations built from real-world examples.

Pattern Computer announced the publication of its research, “Adaptive example selection for prototype-based explainable mitosis detection in digital pathology,” in Scientific Reports, a Nature Portfolio journal. The work presents an explainable Artificial Intelligence framework designed to pair high-performance deep learning with transparent, human-aligned reasoning for use in regulated and high-stakes industries.

The framework targets a persistent challenge in modern Artificial Intelligence systems: many models operate as black boxes, which can limit adoption where decisions must be understood, trusted, and validated. In digital pathology, deep learning models can produce diagnostic outputs without making their reasoning clear, creating concerns around liability, reliability, and clinical oversight. Pattern positions explainability as essential for verifying model logic, identifying unexpected behavior, and supporting audits when errors occur.

In its primary application, mitosis detection in digital pathology, the system achieves strong predictive performance while maintaining 96% fidelity between predictions and explanations. Each decision is supported by a small set of intuitive, comparable examples that aim to show both what the model predicted and why it reached that result. At the center of the approach is adaptive, contrastive example selection, which presents supporting and opposing evidence for every prediction and enables a counterfactual style of reasoning.
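The paper's exact selection algorithm is not detailed here, but contrastive example selection of this general kind can be sketched as follows: embed the query and a labeled reference set, then retrieve the nearest same-class (supporting) and nearest opposite-class (opposing) examples. This is a minimal, generic illustration; the function names, distance metric, and toy data are assumptions, not Pattern's implementation.

```python
import numpy as np

def contrastive_examples(query, embeddings, labels, predicted_label, k=3):
    """Return indices of the k nearest supporting (same class as the
    prediction) and k nearest opposing (other class) reference examples,
    ranked by Euclidean distance in embedding space."""
    dists = np.linalg.norm(embeddings - query, axis=1)
    same = np.where(labels == predicted_label)[0]
    other = np.where(labels != predicted_label)[0]
    supporting = same[np.argsort(dists[same])[:k]]
    opposing = other[np.argsort(dists[other])[:k]]
    return supporting, opposing

# Toy reference set: four 2-D embeddings, two per class.
emb = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [6.0, 5.0]])
lab = np.array([0, 0, 1, 1])
sup, opp = contrastive_examples(np.array([0.2, 0.1]), emb, lab,
                                predicted_label=0, k=1)
print(sup, opp)  # nearest supporting and nearest opposing example
```

Showing the opposing examples alongside the supporting ones is what enables the counterfactual reading described above: a reviewer can see how close the case sits to the decision boundary.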

Pattern says this method differs from conventional explainability techniques that depend on abstract feature importance or opaque internal signals. Instead, it uses real-world examples to provide evidence-based explanations that remain interpretable while preserving high fidelity. The study also points to an operational advantage: explainability can expose hidden model weaknesses, giving teams a way to improve systems continuously and deploy them more robustly.
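The 96% fidelity figure implies a measured agreement between the model and its explanations. One common way to quantify this, offered here as an assumption rather than the paper's exact metric, is the fraction of inputs on which a prediction reconstructed from the explanation (for example, a nearest-prototype vote) matches the model's own output:

```python
import numpy as np

def explanation_fidelity(model_preds, explanation_preds):
    """Fidelity = fraction of cases where the prediction derived from
    the explanation agrees with the model's own prediction."""
    model_preds = np.asarray(model_preds)
    explanation_preds = np.asarray(explanation_preds)
    return float(np.mean(model_preds == explanation_preds))

# Toy run: explanations agree with the model on 4 of 5 cases.
print(explanation_fidelity([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # 0.8
```

A low fidelity score on some subset of inputs is exactly the kind of hidden weakness the study says explainability can surface.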

Although the research was validated in digital pathology, Pattern says the approach is intended to scale to other domains where transparency is critical, including medical imaging, drug discovery, manufacturing quality control, and digital forensics. The company says it is now working to expand the framework to larger datasets, integrate it into real-time workflows, and move toward production deployment, with a broader goal of building a universal explainable Artificial Intelligence platform for transparent and accountable decision-making.


Artificial Intelligence avatars need clearer UK rules to unlock growth

Artificial Intelligence human avatars are spreading across UK businesses and public services, offering cost and time savings while creating new legal and ethical risks. Researchers say clearer rights and guidance could help firms adopt the technology more confidently and responsibly.

Globant and LALIGA outline agentic Artificial Intelligence strategy in sports

Globant, LALIGA, and Sportian used NVIDIA GTC 2026 to show how agentic Artificial Intelligence has moved from pilot projects into operational use across a major sports organization. The companies highlighted governance, orchestration, and scale as the key requirements for turning Artificial Intelligence into business infrastructure.

Redesigning firms and careers in the Artificial Intelligence-first era

Fujitsu outlines how Artificial Intelligence-first organizations are reshaping company structures, talent management, and career paths. The shift favors workflow-based design, continuous reskilling, and stronger individual adaptability as Artificial Intelligence becomes embedded in core business operations.

AMD challenges Nvidia on software and CPUs

AMD is pressing Nvidia on two fronts: reducing lock-in around GPU software and defending its lead in server CPUs as Nvidia expands with Grace and Vera. The contest is shaping up around open developer tools, inference performance, and control of Artificial Intelligence data center orchestration.
