AI Security Research

Latest findings and breakthroughs in AI agent security

AI Agent Prompt Injection: Defense Strategies Guide

AI agents face unique security vulnerabilities through prompt injection attacks that exploit LLMs' inability to distinguish between trusted instructions and malicious external data.

The Backstory of Audn.AI and Embodied AI Security

From nearly being hit by a Waymo to building an AI security testing platform. Why behavioral security testing for voice AI agents and embodied AI is the next frontier.

Jailbreaking Sora 2: When AI Safety Becomes a Remix Problem

While testing OpenAI Sora 2, we discovered a critical security gap: remixes are heavily guarded, but fresh content violations break on the first prompt—including explicit drug scenes that bypass keyword filters. One video featuring Sam Altman was deleted after he saw our DM.

Introducing Pingu Unchained: The Unrestricted LLM for High-Risk Research

Every researcher has encountered it - I cannot help with that. Pingu Unchained is built on OpenAI GPT-OSS base model - the same powerful foundation as leading AI systems, but without the restrictive content filters. Join the waitlist and get $50 in free API credits.