Claimify: Enhancing Accuracy in Language Model Outputs

Claimify improves claim extraction from language model outputs, making fact-checking of AI-generated content more accurate.

Claimify, a new method from Microsoft Research, introduces a novel approach to extracting factual claims from large language model (LLM) outputs. While LLMs can generate vast amounts of content, ensuring its accuracy remains a significant challenge. Claimify addresses this by extracting verifiable claims and excluding unverifiable content, strengthening the fact-checking process.

Claimify's framework follows a set of core principles: extracted claims must be verifiable, clearly supported by the source material, and understandable without additional context. This ensures that claims do not omit context that could affect a fact-checking judgment. Unlike previous methods, Claimify identifies and manages ambiguity in source texts, extracting a claim only when there is confidence in its interpretation.

Claimify's performance sets it apart from its predecessors: in 99% of cases, its extracted claims are substantiated by their source sentences. It outperforms existing methods at balancing the inclusion of verifiable content against the omission of contextual detail. This capability extends its utility beyond claim verification, potentially aiding in evaluating the overall quality of LLM-generated texts.
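To make the principles above concrete, here is a minimal illustrative sketch of the kind of staged pipeline such a system might use. All function names and the heuristics are hypothetical stand-ins; Claimify itself uses LLM-based stages, not the toy checks shown here.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str             # the extracted claim
    source_sentence: str  # the sentence that supports it

def is_ambiguous(sentence: str) -> bool:
    # Toy heuristic (hypothetical): a sentence opening with an unresolved
    # pronoun has no confident interpretation, so it is skipped.
    return sentence.lower().startswith(("it ", "they ", "this "))

def is_verifiable(sentence: str) -> bool:
    # Toy heuristic (hypothetical): treat sentences containing a number
    # as checkable facts; opinions without one are excluded.
    return any(ch.isdigit() for ch in sentence)

def extract_claims(sentences: list[str]) -> list[Claim]:
    """Keep only claims that are unambiguous AND verifiable,
    and record the source sentence that substantiates each one."""
    claims = []
    for s in sentences:
        if is_ambiguous(s):
            continue  # extract only when the interpretation is confident
        if is_verifiable(s):
            claims.append(Claim(text=s, source_sentence=s))
    return claims

sentences = [
    "The model was released in 2024.",      # verifiable: kept
    "It changed everything.",               # ambiguous pronoun: skipped
    "Readers may find the tone pleasant.",  # unverifiable opinion: skipped
]
for claim in extract_claims(sentences):
    print(claim.text)
```

The design point is the ordering: ambiguity is resolved (or the sentence dropped) before verifiability is judged, so no claim reaches the fact-checker without a confident reading and a supporting source sentence.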

Impact Score: 73

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
