Lakera, a Swiss startup building technology to protect generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A round led by European venture capital firm Atomico. This significant funding will enable Lakera to expand its global presence, particularly in the U.S., and further develop its innovative solutions to safeguard against the growing concerns of security and data privacy in the AI industry.
The Growing Concerns around Generative AI
Generative AI has emerged as a poster child for the burgeoning AI movement, driven by popular apps such as ChatGPT. However, it remains a cause for concern within enterprise settings due to issues around security and data privacy. Large language models (LLMs) are the engines behind generative AI, enabling machines to understand and generate text just like a human. However, these LLMs require instructions or prompts to guide their output, which can be constructed in such a way as to trick the application into doing something it’s not supposed to.
Prompt Injections: A Growing Concern
"Prompt injections" are a real and growing concern in the AI industry. These attacks involve constructing malicious prompts that can trick the generative AI application into divulging confidential data or providing unauthorized access to private systems. Lakera is specifically addressing this threat with its innovative technology.
Lakera’s Solution: A Low-Latency AI Application Firewall
Founded out of Zurich in 2021, Lakera officially launched last October with $10 million in funding. The company has developed a "low-latency AI application firewall" that secures traffic into and out of generative AI applications. Lakera’s inaugural product, Lakera Guard, is built on a database that collates insights from various sources, including publicly available open-source datasets, in-house machine learning research, and an interactive game called Gandalf.
Gandalf: An Interactive Game to Detect Malicious Attacks
Lakera’s Gandalf game invites users to attempt to trick the system into revealing a secret password. As users progress through levels, the game becomes increasingly sophisticated, enabling Lakera to build a "prompt injection taxonomy" that separates malicious attacks into categories.
How Lakera Guard Works
At its core, Lakera Guard is designed to detect and prevent malicious prompt injections in real-time. The system uses advanced machine learning algorithms to identify and filter out suspicious prompts, ensuring the security of generative AI applications.
The Benefits of Lakera’s Solution
Lakera’s innovative technology offers numerous benefits for organizations deploying generative AI applications. By safeguarding against malicious prompt injections, businesses can:
- Ensure the confidentiality and integrity of sensitive data
- Prevent unauthorized access to private systems
- Maintain the trust and confidence of users and stakeholders
Expanding Global Presence
With this significant funding, Lakera is poised to expand its global presence, particularly in the U.S. The company will continue to develop and refine its innovative solutions to address the growing concerns around security and data privacy in the AI industry.
Conclusion
Lakera’s $20 million Series A round demonstrates the growing recognition of the need for robust security measures in the generative AI industry. As organizations increasingly rely on AI-powered applications, Lakera’s innovative technology will play a crucial role in safeguarding against malicious threats and ensuring the long-term success of these solutions.
About Lakera
Lakera is a Swiss startup building innovative technology to protect generative AI applications from malicious prompts and other threats. Founded out of Zurich in 2021, the company has developed a low-latency AI application firewall that secures traffic into and out of generative AI applications.
Contact Us
For more information about Lakera’s innovative solutions, please visit our website or contact us directly. We look forward to collaborating with organizations committed to advancing the security and integrity of AI-powered applications.