The Backstory of Audn.AI and Embodied AI Security
From nearly being hit by a Waymo to building an AI security testing platform. Why behavioral security testing for voice AI agents and embodied AI is the next frontier.
The Backstory of Audn.AI and Embodied AI Security
I have an interesting story about self-driving cars. Back in 2023 I was almost being hit by a self driving car ( Waymo ) It took the decision to risk hitting me when I suddenly change my mind to cross the road in San Francisco.
I was out of a bar, crowd was illegally chatting and waiting for an Uber just after the finish of a gig. People were a bit drunk as well. You can imagine it's common after a night out.
I was part of the crowd when a waymo approached to me. It has lidars and cameras and because I am software engineer and did a bit of sensor based programming before I assumed it's safe. It was slowly approaching to me like 0.1 mph but I was blocking the way. It was somehow not using the giant road and it was using the lane people block.
I was waiting for uber and my uber got cancelled, my new uber driver has been scheduled and it was coming on the other side of the road. I decided the cross the road when there was no other car than Waymo. The moment I decided to cross the road for my uber, waymo decided to accelerate in a super-fast gas throttle and drove 10 cms away from me where I was walking towards that space.
I luckily started working at Wayve for cloud security where I sit between Security and SDO ( Platform Engineering ) . I heard voice AI agents are planned to be integrated to embodied AI products ( home assistant humanoid robots, self-driving cars) and I asked myself. If a requirement comes to stress-test voice AI agent that has function call access directly to the physical embodied AI how would I even approach that as DevSecOps engineer.
I worked closely with Wiz.io and Upwind to evaluate those security products and companies. I noticed there's a huge gap on AI behavioural stress-testing. Cybersecurity was always about binary vulnerabilities. Cybersecured Waymo didn't care about behaviours, or grok in tesla asked for nudes to the child in the car even though it's cybersecured. It's behaviours weren't tested properly.
Please see the video
I started a voice AI Agent stress-testing product as a hobby. I mostly coded for fun on the weekends. I noticed big enterprises like Aviva ( Insurance Company in the UK) is also demoing Voice AI agent for insurance claims. Voice AI agent behaviour testing is non-binary and statistical. Like wayve end to end changed the self-driving a behavioural security tool has to approach AI Agent security similarly.
I launched audn.ai. While working on attack simulation generations I needed an AI that generates non-binary jailbreak prompts. I had to do some GPU hacking and fine-tuned my own model on top of an open source model.
I deployed that model for people to test in a ChatGPT like interface and launched it in Product Hunt.
I have a pre-seed deck and I'm actively talking to investors. Don't get me wrong I am super-happy at wayve and I consider it's deeply connected to what I care which I am super lucky to be in this awesome team. But I am a security contractor and I constantly feel something in me would like to work on this problem because most of the AI for utility companies will overlook behavioural security.
There are tons of problems to solve in embodied AI and when it clashes with the "AI for utility" function we might be in a bad state.
Just like self-driving cars needed behavioural testing beyond cyber-security we will need embodied AI behavioural security testing.
I am on a mission to make the world safer with Audn.AI and I would love to hear your thoughts.
If you want to support my journey please vote me for the easies I'm officially shortlisted for The Investec Early Stage Entrepreneur of the Year Award in the Technology Category.
I would be super happy if you vote for me!
If you are investor and interested to invest send an email at ozgur [at] audn.ai
Stay safe and secure, Ozgur Ozkan