🎉 NEW: Open-Source MCP Tool for EU AI Act Compliance - Now in Beta! - Check it out on GitHub

Back to All Episodes
Episode 5 - Alexa's Input with Liana Anca Tomescu Cover

About the Guest

Liana Anca Tomescu is the founder and CEO of SonnyLabs.ai, an AI security company helping developers build secure AI applications. With a background in cybersecurity and AI, Liana is on a mission to make AI security accessible to every developer, enabling teams to move fast without breaking security. She's passionate about shifting left the security approach in AI development.

About Alexa's Input

Alexa's Input is a podcast and newsletter exploring the intersection of technology, AI, and innovation. Each episode features in-depth conversations with founders, engineers, and experts who are shaping the future of tech.

EPISODE 5 - FEATURED ON ALEXA'S INPUT

Alexa's Input Podcast with Liana Anca Tomescu - SonnyLabs.ai founder

Shift Left Your AI Security with SonnyLabs

Episode Description

In this insightful conversation, Alexa sits down with Liana Anca Tomescu, founder and CEO of SonnyLabs.ai, to discuss the critical importance of building security into AI applications from the very beginning. This episode explores the concept of "shifting left" in AI security - moving security considerations earlier in the development process rather than treating them as an afterthought.

Liana shares how SonnyLabs is helping developers protect their AI applications from prompt injection attacks and other security vulnerabilities. She discusses the unique challenges of securing AI systems, why traditional security approaches don't work for AI, and how developers can build secure AI applications without slowing down their development velocity. This is essential listening for anyone building with AI who wants to understand how to protect their users and their business from day one.

Key Topics Covered

  • What Are Prompt Injection Attacks: Understanding the unique security threats facing AI applications
  • Shifting Left in AI Security: Why security needs to be built into AI from day one, not bolted on later
  • Traditional Security vs AI Security: Why firewalls and antivirus don't protect against AI threats
  • How SonnyLabs Works: Real-time protection against prompt injection and jailbreak attacks
  • Building Secure AI Applications: Practical steps developers can take to protect their AI systems
  • Speed Without Compromise: How to move fast in AI development without breaking security
  • The Future of AI Security: What's coming next in AI security and compliance
  • Founding SonnyLabs: Liana's journey from identifying the problem to building the solution

Key Insights

  • 💡
    AI security is fundamentally different from traditional securityPrompt injection attacks exploit the very nature of how LLMs work - you can't stop them with traditional security tools
  • 💡
    Shift left means building security in from the startJust like in traditional software development, it's far cheaper and easier to build security in from day one than to retrofit it later
  • 💡
    Every AI application with user input is vulnerableIf your AI system takes any form of user input, it's potentially vulnerable to prompt injection attacks
  • 💡
    Security shouldn't slow you downWith the right tools and approach, you can build secure AI applications without sacrificing development speed
  • 💡
    AI security is a competitive advantageCompanies that build secure AI from the start will win customer trust and avoid costly breaches

Who Should Listen

This episode is essential for AI developers and engineers building LLM-powered applications, startup founders building AI products, technical leaders and CTOs evaluating AI security solutions, security professionals looking to understand AI-specific threats, and anyone curious about the unique security challenges facing AI systems. If you're building with AI or considering adding AI to your product, this conversation provides critical insights you need to protect your users and your business.

Action Items for Developers

  • Test your AI application for prompt injection vulnerabilities using tools like SonnyLabs
  • Implement input validation and sanitization as your first line of defense
  • Add real-time monitoring to detect and block suspicious prompts before they reach your LLM
  • Educate your team about AI-specific security threats and best practices
  • Review your system prompts to ensure they can't be easily overridden by user input
  • Implement rate limiting and user authentication to reduce attack surface
  • Create an incident response plan specifically for AI security breaches

Links & Resources

🎙️ Special Thanks

Special thanks to Alexa and the Alexa's Input podcast for featuring Liana and helping spread awareness about AI security. This cross-podcast collaboration brings critical AI security insights to a broader audience.