We will be talking to top security experts, legislators, and school administrators to get an inside look at how parents and school staff can be the two golden components of any successful security plan.

Episode List

Why AI Can’t Replace Mental Healthcare for Teens

Mar 12th, 2026 12:00 PM

In this Secured clip, Dr. Jacqueline Benson, licensed clinical psychologist and Founder & CEO of Center Stage Psychology, addresses a growing concern: teens turning to AI for mental health support.

Dr. Benson makes a critical distinction — AI is not mental health treatment. While AI tools can provide information and function as digital resources, they were not designed to replace licensed professionals. Mental health care requires years of clinical training and, just as importantly, real-time human judgment. Effective therapy is individualized, responsive, and deeply human — something AI cannot replicate.

She also warns that while AI can offer helpful information, it can just as easily surface harmful or inaccurate content. In vulnerable moments, that misinformation can do real damage. Emerging cases have already shown AI interactions worsening psychotic symptoms or even encouraging suicidality in teens.

Dr. Benson emphasizes that licensed mental health professionals must play an active role in shaping ethical safeguards around AI systems, particularly because the populations most likely to rely on these tools — adolescents and vulnerable individuals — are also those most at risk.

For parents, her advice is clear: stay engaged. Learn about AI alongside your children. Have open conversations. And when serious mental health concerns arise, prioritize support from trusted, licensed professionals over quick digital responses.

Speed and accessibility do not equal safety — especially when it comes to mental health.

Will Firearm Safety Education Make Schools Safer?

Mar 11th, 2026 12:00 PM

In this Secured clip, Dr. Jared L. Ross, Assistant Professor at the University of Missouri, weighs in on Tennessee’s new policy mandating firearm safety education in K–12 schools.

Ross calls the policy a positive step toward ensuring that students gain a foundational understanding of firearm safety. The legislation includes age-appropriate instruction — teaching younger students not to touch unattended firearms and guiding older students on safe storage practices.

However, he argues the policy does not go far enough. According to Ross, the absence of a hands-on training component limits its effectiveness. He also notes that the curriculum omits core principles such as the four universal rules of firearm safety — widely recognized standards in safe handling.

With an estimated 400 to 500 million firearms in the United States, Ross contends that basic firearm literacy should be considered a practical safety issue rather than a political one. In his view, understanding firearm safety is comparable to learning driver’s education or financial literacy — a foundational life skill in today’s environment.

He also points out that implementing more comprehensive training would not necessarily overburden schools, as local law enforcement, school resource officers, and established organizations already have the capacity to support structured safety education.

How Are Extremists Using Information Warfare?

Mar 10th, 2026 11:00 AM

In this Secured clip, Irina Tsukerman, President of Scarab Rising, Inc. and geopolitical analyst at the Arabian Peninsula Institute, explains how extremist groups have transformed propaganda into their most powerful weapon.

What once relied on underground sermons or low-quality recordings has evolved into sophisticated, multi-platform content ecosystems. Al Qaeda affiliates and ISIS offshoots now embed their messaging into popular digital formats — from short-form video to gaming-inspired visuals — lowering skepticism and normalizing extremist narratives.

By borrowing aesthetics from mainstream culture, including video games like Call of Duty, propaganda becomes gamified. Radicalization feels less like indoctrination and more like participation. The goal isn’t just recruitment — it’s narrative dominance, shaping perceptions until extremist messaging blends into the digital background.

Tsukerman emphasizes that the solution isn’t simply removing content. Suppression alone creates gaps that new propaganda quickly fills. The real challenge is replacement — building authentic counter-narratives that directly compete with extremist fantasies and resonate with the same audiences.

In today’s digital environment, authenticity — not censorship alone — is the decisive factor in defending the information space.

What’s Next for Safe Human-Machine Collaboration?

Mar 9th, 2026 11:00 AM

In this Secured clip, Mark Gagas of Sensory Robotics outlines the future of human-machine collaboration in high-risk industrial environments.

Rather than relying on rigid safeguards that trigger full system shutdowns, the next generation of industrial robotics will be context-aware. These machines will continuously interpret their surroundings, adjusting speed, direction, and behavior in real time based on human presence and activity.

Instead of a binary “stop or go” response, safety and productivity logic will operate together. Systems will slow down, reroute, or dynamically reshape protective zones rather than halting operations entirely. This approach reduces unnecessary downtime while maintaining high safety standards.

Gagas also points to the evolution of safety certifications. As robots and humans increasingly share workspace, standards will need to reflect real collaborative work patterns — not just legacy safeguarding models built around separation and barriers.

Ultimately, the vision is clear: intelligent robotic systems that safely coexist alongside workers without traditional cages or fencing, integrating adaptive safety directly into operational design.

How Is AI Reshaping Trust and Fraud in the Workplace?

Mar 6th, 2026 1:00 PM

In this Secured clip, Andrew Feigenson, CEO of InformData, explains how AI is fundamentally reshaping trust in the workplace — both by enabling more sophisticated fraud and by strengthening detection.

With the rise of AI-generated resumes, fabricated credentials, and synthetic identities, identity fraud is becoming harder to detect and easier to scale. This evolution raises the bar for employers and screening providers, who can no longer rely on traditional verification methods to ensure accuracy.

At the same time, AI is equipping organizations with more powerful tools to combat these risks. Machine learning can identify subtle data patterns that signal fraud, detect inconsistencies that human reviewers might overlook, and accelerate verification processes — improving both security and the candidate experience.

But Feigenson emphasizes that the most important shift is conceptual. Risk is no longer binary. It’s not simply “cleared” or “not cleared,” nor is it confined to a single moment in time. Instead, trust must be contextual, ongoing, and adaptive.

He draws a parallel to cybersecurity: just as one-time security scans are insufficient in a constantly evolving threat landscape, background screening cannot remain a static, check-the-box compliance exercise. It must become part of a broader, continuous trust strategy — one that protects not only the organization, but also its clients, partners, and workforce.
