Pattern unveils explainable Artificial Intelligence framework for pathology

Pattern Computer has published research in Scientific Reports describing a new explainable Artificial Intelligence framework aimed at making deep learning decisions more transparent in high-stakes settings. The company says the approach combines strong predictive performance with evidence-based explanations built from real-world examples.

Pattern Computer announced the publication of its research, “Adaptive example selection for prototype-based explainable mitosis detection in digital pathology,” in Scientific Reports, a Nature Portfolio journal. The work presents an explainable Artificial Intelligence framework designed to pair high-performance deep learning with transparent, human-aligned reasoning for use in regulated and high-stakes industries.

The framework targets a persistent challenge in modern Artificial Intelligence systems: many models operate as black boxes, which can limit adoption where decisions must be understood, trusted, and validated. In digital pathology, deep learning models can produce diagnostic outputs without making their reasoning clear, creating concerns around liability, reliability, and clinical oversight. Pattern positions explainability as essential for verifying model logic, identifying unexpected behavior, and supporting audits when errors occur.

In its primary application, mitosis detection in digital pathology, the system achieves strong predictive performance while maintaining 96% fidelity between predictions and explanations. Each decision is supported by a small set of intuitive, comparable examples that aim to show both what the model predicted and why it reached that result. At the center of the approach is adaptive, contrastive example selection, which presents supporting and opposing evidence for every prediction and enables a counterfactual style of reasoning.
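
The announcement does not spell out the paper's adaptive selection procedure, but the core idea of contrastive, prototype-based evidence can be sketched in a few lines. The sketch below assumes the detector exposes an embedding space and a labeled reference set; the function and variable names are illustrative, not drawn from the published framework.

```python
import numpy as np

def contrastive_evidence(query_emb, ref_embs, ref_labels, pred_label, k=3):
    """Retrieve k supporting and k opposing reference examples for one
    prediction, ranked by distance in the model's embedding space.
    Sketch only: the published method's adaptive criteria may differ."""
    dists = np.linalg.norm(ref_embs - query_emb, axis=1)
    support = np.flatnonzero(ref_labels == pred_label)  # same class as the prediction
    oppose = np.flatnonzero(ref_labels != pred_label)   # nearest counter-evidence
    support_k = support[np.argsort(dists[support])[:k]]
    oppose_k = oppose[np.argsort(dists[oppose])[:k]]
    return support_k, oppose_k
```

Showing the nearest opposing examples alongside the supporting ones is what enables the counterfactual reading: a reviewer can see how close a given case sits to the decision boundary.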

Pattern says this method differs from conventional explainability techniques that depend on abstract feature importance or opaque internal signals. Instead, it uses real-world examples to provide evidence-based explanations that remain interpretable while preserving high fidelity. The study also points to an operational advantage: explainability can expose hidden model weaknesses, giving teams a way to improve systems continuously and deploy them more robustly.
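
The 96% fidelity figure quantifies how often an explanation agrees with the prediction it accompanies. One plausible way to score this, assuming the explanation's implied label is the majority class among the retrieved examples (an assumption, not necessarily the paper's exact definition), is:

```python
import numpy as np

def explanation_fidelity(pred_labels, evidence_labels):
    """Fraction of predictions whose retrieved-example majority label
    matches the model's own output (labels are small non-negative ints).
    Sketch only: the paper's fidelity metric may be defined differently."""
    implied = np.array([np.bincount(labels).argmax() for labels in evidence_labels])
    return float(np.mean(implied == np.array(pred_labels)))

# Example: 3 predictions, each explained by the labels of 3 retrieved examples.
preds = [1, 0, 1]
evidence = [np.array([1, 1, 0]), np.array([0, 0, 0]), np.array([0, 0, 1])]
print(explanation_fidelity(preds, evidence))  # 2 of 3 agree -> 0.666...
```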

Although the research was validated in digital pathology, Pattern says the approach is intended to scale to other domains where transparency is critical, including medical imaging, drug discovery, manufacturing quality control, and digital forensics. The company says it is now working to expand the framework to larger datasets, integrate it into real-time workflows, and move toward production deployment, with a broader goal of building a universal explainable Artificial Intelligence platform for transparent and accountable decision-making.


Chrome downloads Gemini Nano model locally without clear consent

Google Chrome is reported to download a 4 GB Gemini Nano model onto some PCs automatically when certain Artificial Intelligence features are active. The process happens without clear notice in browser settings and can repeat after the model is deleted.

AMD plans specialized EPYC CPUs for Artificial Intelligence, HPC, and cloud

AMD is preparing a broader EPYC strategy with task-specific server CPUs aimed at agentic Artificial Intelligence, HPC, training and inference, and cloud deployments. The shift starts with the Zen 6 generation and adds Verano as an Artificial Intelligence-focused variant within the same EPYC family.

Nvidia expands Spectrum-X Ethernet with open MRC protocol

Nvidia is positioning Spectrum-X Ethernet as a foundation for large-scale Artificial Intelligence training, with Multipath Reliable Connection (MRC) adding open, multi-path RDMA transport for higher resilience and throughput. OpenAI, Microsoft, and Oracle are among the organizations using the technology in large Artificial Intelligence environments.

Anthropic explores Fractile chips to diversify supply

Anthropic is reportedly in early talks with London-based Fractile to secure high-performance Artificial Intelligence chips for inference workloads. The move would reduce reliance on Nvidia and broaden the company’s hardware supply chain.

OpenAI curbs odd creature references in chatbot responses

OpenAI has adjusted its models after users complained about overly familiar responses and strange references to goblins, gremlins, pigeons, and raccoons. The company traced the behavior to a retired “nerdy” personality whose habits spread into broader model training.
