Lakera Focuses on Securing Large Language Models

Lakera develops cybersecurity products to safeguard Large Language Models and data privacy in Artificial Intelligence systems.

Lakera is a technology company specializing in the security of Large Language Models (LLMs) and the broader Artificial Intelligence ecosystem. Based in San Francisco, Lakera provides a portfolio of products designed to help organizations address the growing threats associated with deploying LLM-powered applications, such as data leaks, prompt injection attacks, and privacy risks.

The company's offerings include Lakera Guard, an API-driven security platform for integrating protection into LLM workflows, and Lakera Red, which focuses on proactive red teaming and vulnerability testing of Artificial Intelligence models. Additionally, Lakera provides browser extensions such as the PII Extension to prevent inadvertent sharing of personally identifiable information during interactions with conversational models.
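To illustrate where a guard-style check sits in an LLM request path, here is a minimal Python sketch. The `is_suspicious` heuristic and `handle_prompt` flow are illustrative stand-ins only; they are not Lakera Guard's actual API or detection logic, which would be invoked as an external service rather than a keyword match.

```python
# Illustrative sketch: screening user input before it reaches an LLM.
# A real guard service would call an external classifier API here;
# the keyword heuristic below is a toy stand-in for demonstration.

SUSPICIOUS_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
)

def is_suspicious(prompt: str) -> bool:
    """Toy prompt-injection check: flag known attack phrasings."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in SUSPICIOUS_MARKERS)

def handle_prompt(prompt: str) -> str:
    """Gate the prompt: block flagged input, otherwise pass it on."""
    if is_suspicious(prompt):
        return "blocked"
    return "forwarded to model"
```

In a production integration, the gating step would typically happen synchronously on every request, with the verdict logged for later review.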

Lakera engages actively with the developer and security communities by offering a comprehensive documentation portal, security playbooks, and the Gandalf challenge—a gamified environment to simulate and learn about LLM security risks. The firm also maintains a visible presence at industry conferences, such as RSAC, and shares ongoing research, best practices, and product news through its blog and newsletters, positioning itself as a proactive player in the emerging field of Artificial Intelligence safety and trustworthiness.


