AI Cybersecurity: Attack and Defend

Course 1216 Advantage Plan Course

  • Duration: 3 days
  • Labs: Yes
  • Language: English
  • 17 NASBA CPE Credits (live, in-class training only)
  • Level: Intermediate

This course explores the intersection of AI and cybersecurity, starting with a deep dive into AI architecture, including machine learning, deep neural networks, large language models (LLMs), Retrieval-Augmented Generation (RAG) and Agentic AI.

Participants will learn to securely train models, and manage risks using frameworks like the NIST AI RMF. The curriculum covers OWASP vulnerabilities in ML, LLMs, RAG and agentic AI, and focuses on adversarial AI attacks, and the weaponization of AI for social engineering and deepfakes. Finally, it demonstrates how to transform Security Operations (SecOps) with AI-powered detection and response and navigate the global regulatory landscape, including the EU AI Act.

AI Cybersecurity Training Delivery Methods

  • In-Person

  • Online

  • Upskill your whole team by bringing Private Team Training to your facility.

AI Cybersecurity Training Information

In this course, you will:

  • Discover the AI security ecosystem and the core principles of ML
  • Identify attack points of foundation models, genAI, LLM, RAG, and Agentic AI
  • Securely train deep neural networks and ensure privacy with federated learning
  • Establish a foundation in security risk management and categorize threats to ML models
  • Apply the NIST AI RMF to govern risks throughout the AI lifecycle
  • Implement defense-in-depth to mitigate vulnerabilities in ML, GenAI, and Agentic systems
  • Utilize AI hacking techniques for red team proactive defense
  • Leverage AI-powered SecOps, using SIEM, and SOAR to enhance threat hunting and automate response
  • Comply with AI regulations, including the EU AI Act and US Executive Orders

Training Prerequisites

Attendees should have foundational knowledge in networking and cybersecurity.

AI Cybersecurity Training Outline

Chapter 1: Architecture and Operation of AI

  • Evolution of AI technology from ML and Deep Neural Networks to Agentic AI
  • GenAI system architecture and attack points
  • Training models with MLOps pipeline, and securing and partitioning datasets
  • Transfer learning of Foundation Models and fine-tuning
  • NLP mechanics comprising word embeddings, self-attention, and LLM context window
  • Connecting to knowledge bases with RAG and context window overflow
  • AI agents functions (Perception, Planning, Action, Learning), and enrichment “Loop of death”
  • Discriminative vs Generative AI models and multimodal prompting
  • Do Nows: Tinker With a Neural Network using TensorFlow Playground, Exploring CNN, Examine Federated Learning, Google Natural Language API Analysis, Building AI Agents with Vertex AI, Google AI Studio
  • Demo: Creating a Co-Occurrence Matrix
  • LAB: Utilizing a Small Language Model

Chapter 2: Risk in Adopting AI Solutions

  • Mitigating risk with CIANA+PS pillars and Risk Register
  • Tracking AI vulnerabilities using CVE and CWE dictionaries
  • Zero Trust Frameworks applied to AI “Quad of IAM”
  • Ethics and Autonomy with human in the loop, and risks with PII, Intellectual Property and Bias
  • AI Threat Mind Map categorizing threats to/from models, including human risks
  • NIST AI RMF core functions(Govern, Map, Measure, and Manage), risks and TEVV processes
  • Mitigate Risk With Trustworthy AI and Privacy-Enhanced AI
  • Assessing maturity with the AI CMM
  • AI Risk Assessment Process with RMF Generative AI Profile
  • Mitigating GenAI Risks with grounding, risk signals and DLP safeguards
  • OWASP Top 10 ML, LLM and Agentic AI Security Risks
  • Do Nows: Known AI Vulnerabilities, Harm to Organizations, NIST AI RMF Playbook, OWASP AI Privacy, Trolley Problem Ethical Dilemma, Risks of “Free Services”, DoD RAI Risk Assessment, Detection with DLP and GenAI, Attacking the OWASP Top Ten ML, LLM and Agentic AI
  • LAB: Conducting an AI Risk Assessment
  • LAB: Deidentify GenAI Responses

    Chapter 3: Securing AI Vulnerabilities

    • Integrate security into all phases of AI SDLC Lifecycle
    • Adversarial attacks including, GenAI classification, NLP, Dataset poisoning, backdoor Trojan, “Man in the Prompt”
    • Secure AI with AI-BOM, sanitization, and security controls
    • Secure RAG against, indirect prompt injection, data poisoning, embedding inversion, pirate attack
    • Agentic AI kill chain and threat model
    • Extending the SAIF Risk Map for AI Agents
    • Hacking Agentic AI through rebus, excessive agency, goal hijacking and tool misuse
    • Prompt Hacking with injection, jailbreaking and system prompt leaking
    • Defensive Guardrails including the Google SAIF, AI Agent Firewalls and Model Armor
    • OWASP AI Threat Model
    • AI red teaming for proactive defense and interactive testing
    • Securing Gen AI with Logging and Monitoring, and Agentic AI with Evaluation Services and AgentOps
    • Do Nows: Coercing Misclassification of an ML Model, OWASP Agentic AI Threats and Mitigations, OWASP Agentic AI Top 10: Threats in the Wild, System Prompt Security, Prompt Engineering for Generative AI, SAIF Risk Self Assessment, OWASP AI Security Matrix, OWASP Threat Modeling of an LLM Application, DEFCON GenAI Attack Strategies, OWASP GenAI Red Teaming Strategy, RAI Toolkit, Investigating Adversarial Attacks with ART
    • LAB: Penetration Testing an AI System
    • LAB: Safeguarding With Gemini AI

    Chapter 4: AI Powered Hacking

    • Traditional hacking phases enhanced by AI smart automation, reinforcement learning to evade detection and Out-of-the-Box AI Thinking
    • Autonomous hacking in the DARPA DEFCON Cyber Grand Challenge
    • Believable AI-Infused Social Engineering and GenAI fraud
    • Deepfake technology fabricates target’s video and audio
    • AI infused tools including Nmap, Metasploit, and Wireshark enhancements
    • Side channel attacks like AI acoustic keyboard monitoring
    • The Long Con using AI to build trust and erode resilience over time
    • DoNows: Bing Chat as a Social Engineer, Famous Deepfakes, Creating Deepfakes
    • LAB: Enhance Hacking With GenAI

    Chapter 5: Defending Security Operations With AI

    • Modern SecOps using Autonomic Security Operations and CD/CR pipelines
    • Benefits of AI in Cybersecurity and AI Powering SecOps Functions
    • AI Powered detection for intrusions and malware
    • AI-Powered IGA, IAM, Security Analytics and Incident Response
    • GenAI in SIEM, SOAR, TIM using intelligent data ingestion, automated playbooks and NLP
    • The MITRE ATLAS matrix for understanding AI adversarial tactics
    • Google AI SecOps leveraging Gemini, SecLM and Mandiant for threat intelligence
    • Google Agentic SOC Defense
    • Microsoft Security Copilot and GitHub Copilot for malware reverse engineering and policy summarization
    • DoNows: Threat Intelligence Platform AV-ATLAS, MITRE ATLAS Navigator
    • LAB: Analyze a Codebase With Gemini
    • LAB: SecOps Threat Hunting With AI
    • LAB: Anatomy of an AI Model Attack
    • LAB: Secure Coding With AI

    Chapter 6: Regulating AI Governance

    • Global regulations such as UN Ethics of AI and accountability standards
    • The EU AI Act risk based framework
    • US Executive AI Order
    • Pillars of Trustworthy AI comprising responsible, reliable, and resilient systems
    • Google’s Responsible AI and the "Agentic" Shift
    • EU AIGA Hourglass Model Governance framework
    • The OECD AI system lifecycle stages
    • Model AI Governance Framework (MGF) for Agentic AI
    • Four dimensions of Agentic AI
    • DoNow: AIGA AI Governance Lifecycle

    Need Help Finding The Right Training Solution?

    Our training advisors are here for you.

    AI Cybersecurity Training FAQs

    • Cybersecurity Professionals 
    • AI and Data Science Professionals 
    • IT Professionals 
    • Data Privacy and Compliance Officers 
    • Developers and Software Engineers