Claimify: Enhancing Accuracy in Language Model Outputs

Claimify improves the extraction of factual claims from language model outputs, making AI-generated content easier to fact-check accurately.

Claimify, a new method from Microsoft Research, takes an innovative approach to extracting factual claims from large language model (LLM) outputs. LLMs can generate vast amounts of content, but ensuring its accuracy remains a significant challenge. Claimify addresses this by extracting verifiable claims and excluding unverifiable content, streamlining the fact-checking process.

Claimify's framework rests on core principles: every extracted claim must be verifiable, clearly supported by the source material, and understandable without additional context. This ensures that claims do not omit critical context that could change a fact-checking verdict. Unlike previous methods, Claimify identifies and manages ambiguity in the source text, extracting a claim only when it is confident in the interpretation, as sketched below.
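As a rough illustration of these principles (not Microsoft's actual implementation or prompts), here is a minimal Python sketch of a Claimify-style pipeline. The `llm` helper, the prompt wording, and the three-step ordering are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Stand-in for any LLM completion function; swap in a real client.
LLM = Callable[[str], str]

@dataclass
class Claim:
    text: str    # self-contained, decontextualized claim
    source: str  # sentence the claim was extracted from

def extract_claims(sentence: str, context: str, llm: LLM) -> list[Claim]:
    """Claimify-style pipeline sketch: keep only verifiable content,
    skip ambiguous sentences, and rewrite claims to stand alone."""
    # 1. Verifiability: skip sentences with no checkable factual content.
    answer = llm(
        "Does this sentence contain a verifiable factual claim? "
        f"Answer yes or no.\nSentence: {sentence}"
    )
    if answer.strip().lower() != "yes":
        return []

    # 2. Ambiguity: extract only when the interpretation is confident.
    answer = llm(
        "Given the context, is this sentence ambiguous in a way that "
        "could change a fact-check verdict? Answer yes or no.\n"
        f"Context: {context}\nSentence: {sentence}"
    )
    if answer.strip().lower() == "yes":
        return []

    # 3. Decomposition: split the sentence into claims that are clearly
    #    supported by it and understandable without additional context.
    raw = llm(
        "Rewrite the verifiable content of this sentence as one "
        "self-contained claim per line, adding context where needed.\n"
        f"Context: {context}\nSentence: {sentence}"
    )
    return [Claim(text=line.strip(), source=sentence)
            for line in raw.splitlines() if line.strip()]

# Example usage (with any real LLM client wired in as `my_llm`):
# claims = extract_claims("It broke the June record.",
#                         context="Article on 2024 European weather.",
#                         llm=my_llm)
```

The key design choice mirrored here is the conservative ambiguity gate: a sentence whose interpretation is uncertain yields no claims at all, rather than a claim that might misrepresent the source.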

Claimify's performance sets it apart from its predecessors: 99% of its extracted claims were substantiated by their source sentences. It also outperforms existing methods at capturing verifiable content while minimizing omitted contextual details. This utility extends beyond claim verification, potentially aiding in evaluating the overall quality of LLM-generated text.

