Lakera Focuses on Securing Large Language Models

Lakera develops cybersecurity products to safeguard Large Language Models and data privacy in Artificial Intelligence systems.

Lakera is a technology company specializing in the security of Large Language Models (LLMs) and the broader Artificial Intelligence ecosystem. Based in San Francisco, Lakera provides a portfolio of products designed to help organizations address the growing threats associated with deploying LLM-powered applications, such as data leaks, prompt injection attacks, and privacy risks.

The company's offerings include Lakera Guard, an API-driven security platform for integrating protection into LLM workflows, and Lakera Red, which focuses on proactive red teaming and vulnerability testing of Artificial Intelligence models. Additionally, Lakera provides browser extensions such as the PII Extension to prevent inadvertent sharing of personally identifiable information during interactions with conversational models.
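The general pattern behind such guard products is to screen user input before it reaches the model. The sketch below illustrates that workflow only; the naive regex heuristic stands in for a real detection service, and all function, field, and pattern names are hypothetical, not Lakera's actual API.

```python
# Hypothetical sketch of a "guard layer" in front of an LLM call.
# The regex check is a naive stand-in for a real detection service;
# names and categories here are illustrative, not a vendor API.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]

def screen_prompt(prompt: str) -> dict:
    """Return a verdict dict; 'flagged' is True if a pattern matches."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return {"flagged": True, "reason": pattern}
    return {"flagged": False, "reason": None}

def guarded_llm_call(prompt: str) -> str:
    """Block flagged prompts; otherwise forward to the model (stubbed here)."""
    verdict = screen_prompt(prompt)
    if verdict["flagged"]:
        return "[blocked: potential prompt injection]"
    return f"LLM response to: {prompt!r}"  # placeholder for a real model call
```

In production, `screen_prompt` would be replaced by a call to the vendor's detection endpoint, but the control flow (screen, then either block or forward) stays the same.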

Lakera engages actively with the developer and security communities through a comprehensive documentation portal, security playbooks, and the Gandalf challenge, a gamified environment for simulating and learning about LLM security risks. The firm also maintains a visible presence at industry conferences such as RSAC, and shares ongoing research, best practices, and product news through its blog and newsletters, positioning itself as a proactive player in the emerging field of Artificial Intelligence safety and trustworthiness.

Impact Score: 62

Google Vids opens free video generation to all Google users

Google has made Google Vids available to anyone with a Google account, adding free access to video generation with its latest models. The move expands Google’s end-to-end video workflow and increases pressure on rivals that charge for similar tools.

Court warns against chatbot legal advice in Heppner case

A federal court found that chats with a publicly available generative Artificial Intelligence tool were not protected by attorney-client privilege or the work-product doctrine. The ruling highlights litigation risks when executives or employees use chatbots for legal guidance without lawyer supervision.

Newsom orders California to weigh Artificial Intelligence harms in contract rules

Gov. Gavin Newsom has signed an executive order directing California agencies to account for potential Artificial Intelligence harms in state contracting while expanding approved use of generative tools across government. The move follows a dispute involving Anthropic and reflects a broader split between California and the Trump administration on Artificial Intelligence oversight.
