
AI, Bad Actors, & Secure Implementations — A U.S. Government Insider Conversation

In anticipation of GSX, we sat down with presenters of upcoming sessions to get a better understanding of the topics at hand. This week we are featuring “AI, Bad Actors, & Secure Implementations — A U.S. Government Insider Conversation,” presented by Rob Petrosino, AI Impact & Bad Actors, FBI | Vissi.io. Read on for what he had to say, and don’t forget to register for GSX 2025.

Q: How did you become interested in your topic?

A: I’ve spent my career helping companies and agencies make sense of fast-moving tech. Over the years, I’ve seen AI move from backroom R&D projects to boardroom agendas. But it wasn’t until deepfakes started showing up in fraud cases, political misinformation campaigns, and real-time influence operations that I realized this wasn’t just an emerging issue; it was a full-blown crisis. That moment when AI could copy a face, a voice, and even a memory? That’s when I knew we needed to take this more seriously. After years of enabling organizations to adopt emerging technology, I realized I was in a position to help them understand how this tech could also impact their future.

Q: Tell us about your presentation and why security professionals should have this topic on their radar. 

A: This session breaks down the real-world threats posed by deepfakes and generative AI. We’re not talking about theoretical risks anymore; there are full-blown real-world use cases. These tools are being used right now to defraud companies, hijack elections, and manufacture influence campaigns. If you’re in security and you’re not tracking how AI can manipulate voice, video, and text, then your organization is already behind. The number of deepfake tools has exploded. Detection is no longer a nice-to-have. It’s mandatory. The threats are scalable, personalized, and often indistinguishable from reality.

Q: What advice would you give security professionals interested in this topic? 

A: First, educate your team. It is impossible to maintain an understanding of everything happening in the world of AI on your own, so lean on external resources like me! If your team doesn’t understand the difference between a generative model and a predictive one, they can’t defend against it. Second, build protocols for synthetic media review. “Trust but verify” is outdated. Now it’s detect, attribute, and contain. Invest in detection tools. Pilot them. Compare results. Finally, keep your response playbook updated. A convincing fake video or voice memo can trigger chaos in minutes. Speed and clarity matter more than ever.
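
As a concrete illustration of the “pilot them, compare results” advice, here is a minimal evaluation harness. It is a sketch, not a production workflow: `detector_a`, `detector_b`, the `Sample` records, and the file paths are all hypothetical placeholders standing in for whichever detection tools and curated test media an organization is actually trialing.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    path: str           # media file under review (hypothetical paths)
    is_synthetic: bool  # ground-truth label from your curated pilot set

def evaluate(detector: Callable[[str], float],
             samples: List[Sample],
             threshold: float = 0.5) -> dict:
    """Score one detector against a labeled pilot set.

    The detector is assumed to return a synthetic-likelihood score
    in [0, 1]; real tools expose different interfaces, so wrap each
    one behind this signature before comparing them.
    """
    tp = fp = tn = fn = 0
    for s in samples:
        flagged = detector(s.path) >= threshold
        if flagged and s.is_synthetic:
            tp += 1          # correctly caught a fake
        elif flagged:
            fp += 1          # false alarm on genuine media
        elif s.is_synthetic:
            fn += 1          # missed a fake
        else:
            tn += 1          # correctly passed genuine media
    return {
        "accuracy": (tp + tn) / len(samples),
        "false_positive_rate": fp / max(fp + tn, 1),
        "miss_rate": fn / max(fn + tp, 1),
    }

# Placeholder detectors: substitute real tool integrations here.
def detector_a(path: str) -> float:
    return 0.9 if "fake" in path else 0.1

def detector_b(path: str) -> float:
    return 0.6 if "fake" in path else 0.4

pilot_set = [
    Sample("clips/fake_ceo_voice.wav", is_synthetic=True),
    Sample("clips/real_board_call.wav", is_synthetic=False),
]

for name, det in [("tool A", detector_a), ("tool B", detector_b)]:
    print(name, evaluate(det, pilot_set))
```

Running the same labeled set through every candidate tool, rather than relying on vendor benchmarks, is the point of the pilot: the numbers that matter are the miss rate and false positive rate on media that resembles your organization’s own.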

Q: How do you see this issue evolving in the next 2 to 5 years? 

A: The fakes are only going to get better. Models will become faster, cheaper, and capable of real-time manipulation. Expect to see targeted scams using voice and face clones of your actual employees. Nation-states will invest heavily in agentic AI systems that can run influence ops with almost no human oversight. Regulations will try to keep up, but enforcement will be tricky. If your team isn’t training today, it’ll be playing catch-up tomorrow.  

Q: Why do you attend GSX? 

A: GSX is where the real conversations happen. It’s not just buzzwords. It’s where you meet the people who actually have to secure systems, protect assets, and lead during incidents. I come here to pressure-test my ideas, to share what I’ve seen across government and enterprise, and to keep myself sharp. Every year, I leave with more questions—and that’s the point. This space moves fast. GSX keeps it honest.