Sage unveils Artificial Intelligence trust label to empower SMBs

Sage introduced an Artificial Intelligence Trust Label designed to make automated features in its products more transparent and accountable for small and mid-sized businesses. The company plans to begin rolling it out across selected products in the UK and US later this year.

Atlanta, June 4, 2025: Sage announced the development of its Artificial Intelligence Trust Label, described as a first-of-its-kind initiative to bring greater clarity and accountability to how Artificial Intelligence is built and used in business software. Targeted at small and mid-sized businesses, the label is intended to give customers clear, accessible insight into how automated features operate across Sage products so they can adopt the technology with confidence.

The label centers on key trust indicators, including compliance with privacy and data regulations, how customer data is used, safeguards to prevent bias and harm, and systems that monitor accuracy and ethical performance. Sage says the framework is designed for non-technical users, translating complex engineering and governance practices into signals businesses can easily understand. “Artificial Intelligence adoption should never come down to blind trust,” said Aaron Harris, chief technology officer at Sage, emphasizing the company’s focus on transparency around data usage and safeguards.

New research cited by Sage links trust directly to adoption. While 94 percent of SMBs already using Artificial Intelligence report benefits, 70 percent have not fully adopted the technology. Among those who trust Artificial Intelligence, 85 percent say they actively use it in their business, compared with 48 percent among those who do not. Additionally, 43 percent of SMBs report low trust in companies building business-focused Artificial Intelligence tools. The findings are based on an online survey of 1,500 SMB decision makers conducted by Global Counsel Insight between May 3 and May 19 across the US, UK, France and Spain, with equal country weighting and respondents screened for decision-making responsibilities and work hours. SMBs were defined as up to 500 employees in the US and up to 250 in other markets.

Later this year, Sage will start rolling out the Artificial Intelligence Trust Label across selected Artificial Intelligence-powered products in the UK and US. Customers will encounter the label within the product experience and can access deeper detail via Sage’s Trust and Security Hub. The framework was shaped by direct feedback from SMBs about the assurances they need to build confidence in automated tools. The announcement follows steps Sage has taken to formalize responsible technology practices, including publishing Artificial Intelligence and data ethics principles in 2023, adopting the US NIST Artificial Intelligence Risk Management Framework globally, signing the Pledge for Trustworthy Artificial Intelligence in the World of Work, and implementing emerging standards such as the UK Government’s Artificial Intelligence Cyber Security Code of Practice.

Sage is calling for collaboration between industry and government to establish a transparent, certified labelling system that encourages wider adoption. The company said it is exploring opportunities to share its framework more broadly. “We are building a model for how Artificial Intelligence can earn trust across the business software sector,” Harris said, adding that transparency is essential if the technology is to empower SMBs.


Technologies that could help end animal testing

The UK has set timelines to phase out many forms of animal testing while regulators and researchers explore alternatives. The strategy highlights organs on chips, organoids, digital twins and Artificial Intelligence as tools that could reduce or replace animal use.

Nvidia to sell fully integrated Artificial Intelligence servers

A report picked up on Tom’s Hardware and discussed on Hacker News says Nvidia is preparing to sell fully built rack and tray assemblies that include Vera CPUs, Rubin GPUs and integrated cooling, moving beyond supplying only GPUs and components for Artificial Intelligence workloads.

Navigating new age verification laws for game developers

Governments in the UK, the European Union, the United States and elsewhere are imposing stricter age verification rules that affect game content, social features and personalization systems. Developers must adopt proportionate age-assurance measures such as ID checks, credit card verification or Artificial Intelligence age estimation to avoid fines, bans and reputational harm.

Large language models require a new form of oversight: capability-based monitoring

The paper proposes capability-based monitoring for large language models in healthcare, organizing oversight around shared capabilities such as summarization, reasoning, translation, and safety guardrails. The authors argue this approach is more scalable than task-based monitoring inherited from traditional machine learning and can reveal systemic weaknesses and emergent behaviors across tasks.
