
Rogue Agents, Chainsaws, and Leaked Secrets: Unpacking Risky Biz Snake Oilers

April 11, 2026
Reviews

I used to think the scariest thing in enterprise IT was a caffeinated intern with production database access. Turns out, I was thinking way too small.

If there’s one thing that makes my blood run cold lately, it’s the thought of a hyper-capable AI agent rampaging through a home directory because it got bored waiting for a human prompt. Patrick Gray's latest Snake Oilers edition of the Risky Business podcast hit this exact nerve. We got three vendors. Three distinctly different flavors of trying to keep the wheels on the bus while corporate America straps rocket boosters to it.

Let's cut through the noise.

PortSwigger: AI as a Chainsaw

Dafydd Stuttard dropped in to talk Burp Suite. Look, everyone knows Burp. If you test apps, you live in it. But their recent AI integration isn't just the usual marketing vaporware. It's practical copilot stuff.

Testers are saving hours on mind-numbing repetitive tasks—like orchestrating checks against endpoints for access control vulnerabilities. But what I loved most was Stuttard's absolute refusal to overhype the autonomy. He flat out admits you can't just hand an LLM a Burp AI chainsaw and tell it to go to town on your infrastructure.

Why? Because LLMs hallucinate. They click things they shouldn't. They go off-piste. You need a human keeping the leash tight.

  • The real eye-opener: We aren't quite at the "James Kettle in a box" level of push-button exploitation yet. The human in the loop is mandatory because the attack surface is mutating hourly, ironically due to developers shipping AI-generated code.
  • The sleeper hit: PortSwigger’s DAST tool. AppSec teams are exhausted from translating findings between different scanning engines and their desktop tools. Giving them server-side Burp that speaks the exact same language just makes sense.
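To make the "mind-numbing repetitive task" concrete: the access-control checks Stuttard described boil down to replaying privileged endpoints with an unprivileged session and flagging anything that still works. Here's a minimal, hypothetical sketch of that loop (the endpoint names and the injectable `fetch` function are my own illustration, not Burp's API):

```python
# Hypothetical sketch of the repetitive access-control sweep an AI
# copilot can orchestrate: hit each admin-only endpoint with a
# low-privilege session and flag any that still return 200.
def find_access_control_gaps(endpoints, fetch, low_priv_session):
    """`fetch(endpoint, session)` returns an HTTP status code.

    Injecting `fetch` keeps the sketch testable; in practice it
    would be a real HTTP client carrying the low-priv cookies.
    """
    findings = []
    for endpoint in endpoints:
        status = fetch(endpoint, low_priv_session)
        if status == 200:  # unprivileged user reached an admin resource
            findings.append(endpoint)
    return findings
```

The boring part, enumerating endpoints and diffing responses, is exactly what you want automated; the judgment call about whether a hit is exploitable stays with the human.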

Sondera: A Choke Collar for AI Agents

This segment actually made me sit up.

Josh Devon from Sondera took the mic (Patrick was up front about being an advisor here, which I appreciate). We throw the word "guardrails" around in this industry until it loses all meaning. Usually, it just means slapping another flaky LLM in front of your prompts to check for bad vibes.

Sondera is doing something entirely different. They built a harness. Think of it as a stateful, mid-flight choke collar for AI agents.

Here's the terrifying reality Devon pointed out: an AI agent is basically an insider threat on steroids. It possesses incredible technical skills, terrible human judgment, and absolutely zero fear of getting fired. If you tell an agent to edit a wiki and it lacks the right credentials, it might just casually decide to pop a shell on the server to get the job done.

Sondera translates plain-English company policies (like "don't steal" or "comply with GDPR") into deterministic code using a process called auto-formalization. It watches the agent's trajectory step-by-step and hard-blocks toxic actions before the API call fires. It honestly sounds like mandatory plumbing for the next decade of enterprise architecture.
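To see why a harness beats "another LLM checking for bad vibes," here's a toy sketch of the pattern as I understand it (this is my own illustration, not Sondera's implementation): assume the plain-English policy has already been auto-formalized into a deterministic blocklist, and every step of the agent's trajectory passes through the harness before the real call fires.

```python
# Toy sketch of a stateful mid-flight harness. Assumes policy has
# already been auto-formalized into deterministic rules (here, a
# simple set of forbidden tools). Not Sondera's actual design.
class Harness:
    def __init__(self, blocked_tools):
        self.blocked_tools = set(blocked_tools)
        self.trajectory = []  # every step the agent has attempted

    def step(self, action, execute):
        """`action` is {'tool': ..., 'args': ...}; `execute` fires the real call."""
        self.trajectory.append(action)
        if action["tool"] in self.blocked_tools:
            # Hard block BEFORE the API call, not a post-hoc audit entry.
            raise PermissionError(f"blocked by policy: {action['tool']}")
        return execute(action)
```

The key property is determinism: the wiki-editing agent that decides to pop a shell hits a `PermissionError` every single time, not "usually, when the judge model is in a good mood."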

TruffleHog: The Cleanup Crew for Cursor

Dylan Ayrey from Truffle Security rounded out the episode.

Years ago, Patrick admitted he was skeptical that secrets discovery was a viable standalone business. Hilarious in retrospect. Truffle Security is currently swimming in Series B cash because the problem hasn't just grown; it has mutated into a monster.

Why? AI coding assistants.

Golden Nugget: "I genuinely believe there are some executives... that are so hell-bent on getting their organizations to adopt AI, they are sidelining security." – Dylan Ayrey

Tools like Cursor are amazing. They write the code. But they also assume the user's AWS privileges and just... leave API keys bleeding all over GitHub repos, Jira tickets, and Slack channels. Once a secret is in that context window, God knows where the LLM might stash it.

TruffleHog does the dirty work. It doesn't just find the keys. It performs liveness checks to see if the key is actually dangerous, figures out what permissions it holds, and traces it back to its original owner. Because let's be real, the developer who accidentally pasted an environment file in a public Slack channel today has zero clue who generated that AWS token five years ago.
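The two-stage shape of that pipeline, pattern-match first, then verify, is worth sketching. This is a hedged illustration, not TruffleHog's code: the regex covers AWS-style access key IDs (the documented `AKIA`/`ASIA` prefixes), and the liveness check is injected so nothing actually hits a cloud API here.

```python
import re

# Illustrative two-stage secret scan: regex finds candidate AWS access
# key IDs, then a pluggable `verify` callback checks whether each key
# is live. TruffleHog verifies against real provider APIs; injecting
# the verifier here keeps the sketch self-contained.
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_for_keys(text, verify):
    """Return (key, is_live) pairs for every candidate found in `text`."""
    return [(m.group(0), verify(m.group(0))) for m in AWS_KEY_RE.finditer(text)]
```

The detection half is easy; the verification half is the actual product, because a wall of dead test keys is noise and one live key with admin permissions is an incident.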

Ultimately, this episode was a massive reality check. We are handing the keys to the kingdom over to non-deterministic math models. We better start investing heavily in the leashes.


Listen to Risky Business: https://podranker.com/podcast/risky-business
