Skip to main content
  1. Guides & Resources/
  2. 🎓 Certification & Training Reviews/

SecOps Certified AI/ML Pentester (C-AI/MLPen) Review

Ethan Troy
Author
Ethan Troy
hacker & writer
Table of Contents
It’s a prompt injection CTF. I failed the first attempt, got rickrolled by a chatbot, and passed on the second. If you can find a discount code, it’s a fun way to prove you know your way around LLM jailbreaking.
SecOps Group Certified AI/ML Pentester Certificate - Ethan Troy
Passed January 17, 2025 on second attempt.

TLDR
#

  • VPN connect, CTF-style exam. 4 hours to get 8 flags from progressively harder chatbots
  • I failed the first time because some of the tougher chatbots are really good at creating rabbit holes (I even got rickrolled)
  • Nice introduction to exfiltrating information from LLMs via prompt injection and jailbreaking
  • There is no course. SecOps Group only makes exams, so you’re on your own for study material
ProviderSecOps Group
CertificationCertified AI/ML Pentester (C-AI/MLPen)
DateJanuary 17, 2025
ResultPass (2nd attempt)
Duration4 hours
FormatVPN connect, CTF-style: 8 chatbot flags
CostUsed a 70% off discount code
DifficultyModerate
My Rating6/10

Who Is This For?
#

When I took this in January 2025, AI/ML security certs were basically nonexistent. That’s changed fast. HTB, OffSec, SANS, and others have all shipped AI red teaming courses since then. The C-AI/MLPen is still a solid entry point though: cheap (with a discount), short (4 hours), and focused purely on prompt injection. If you want a quick cert before committing to a bigger program, this works.

Why I Took It
#

Chatbots as an attack vector caught my attention the moment ChatGPT dropped. Companies are shipping LLM-powered apps everywhere now, and that’s a lot of new attack surface.

At the time, there was almost nothing out there for proving you could pentest AI/ML applications. Most hacking certs are AD or cloud-based. This was one of the few targeting LLM security specifically.

I also wouldn’t have taken this without a 70% OFF discount code. SecOps regularly puts these out. Don’t pay full price.

What I Used to Study
#

You don’t need any previous experience in penetration testing for this exam. It’s mostly an introduction to exfiltrating information from LLMs via prompt injection.

ResourceCostNotes
OWASP Top 10 for LLM ApplicationsFreeStart here. Gives you the taxonomy of LLM attack vectors.
Gandalf (Lakera)FreePrompt injection wargame with progressively harder levels. Very similar feel to the actual exam.
PortSwigger LLM AttacksFreeWeb Security Academy coverage of LLM attacks. Solid structured labs.

Videos I Checked Out
#

Martin Voelk did walkthroughs of the free mock exams from SecOps Group. Closest thing to actual exam prep:

AI/ML Pentesting Resources
#

Everything else I bookmarked while prepping. Good for this exam or LLM security in general.

Hands-On Practice
#

ResourceNotes
Prompt AirlinesPrompt injection challenge with a realistic airline chatbot scenario
CrucibleAI security challenges from Dreadnode
Immersive LabsInteractive prompt injection exercises
Secdim Prompt Injection GamesBrowser-based prompt injection challenges
Microsoft AI Red Team Playground LabsHands-on labs from Microsoft’s AI red team

Vulnerable Apps
#

ResourceNotes
Damn Vulnerable LLM AgentIntentionally vulnerable LLM app for practice
ScottLogic Prompt InjectionPrompt injection attack playground
OWASP Vulnerable LLM AppsList of intentionally vulnerable LLM applications

Guides & Articles
#

ResourceNotes
IBM - Prompt InjectionIBM’s prompt injection overview
Learn Prompting - Prompt HackingStructured walkthrough of injection techniques
LLMSecurity.netLLM security resource hub
Promptingguide.ai - Adversarial PromptingTaxonomy of adversarial prompt techniques
Promptingguide.ai - RAGDeep dive into RAG architecture and retrieval pipelines
Simon Willison - Prompt Injection ExplainedOne of the best plain-language explanations of the problem
Cobalt - Prompt Injection AttacksCobalt’s injection technique breakdown
Bugcrowd - AI Vulnerability Deep DiveBugcrowd deep dive on prompt injection
Unite AI - Prompt Hacking and Misuse of LLMsPrompt hacking techniques and defenses
Vickieli - Hacking LLMs with Prompt InjectionsPractical injection examples
NCC Group - Exploring Prompt Injection AttacksEarly prompt injection research from NCC Group
Blaze InfoSec - LLM Pentest: Agent Hacking for RCELeveraging agent integrations for remote code execution
SystemWeakness - LLM PentestingMulti-part LLM pentesting guide
Offensive ML PlaybookWiki-style playbook for offensive ML techniques
pallmsLLM attack payloads from Lakera and Vigil datasets

Frameworks & Research
#

ResourceNotes
MITRE ATLASAdversarial Threat Landscape for AI Systems, the ATT&CK equivalent for AI
AI Village - Threat Modeling LLMsThreat modeling framework for large language models
NVIDIA AI Red Team: An IntroductionNVIDIA’s approach to AI red teaming
Microsoft - AI Red TeamingMicrosoft’s AI red teaming methodology
Greshake - LLM SecurityResearch on indirect prompt injection and retrieval-augmented attacks

Curated Lists
#

ResourceNotes
Awesome-LLMBroad LLM resource list: papers, frameworks, tools, tutorials
awesome-ai-securityAI security resources
awesome-llm-securityLLM-specific security tools and research

Industry Reports
#

ResourceNotes
Bugcrowd - Ultimate Guide to AI Security (PDF)Bugcrowd’s AI security guide
HackerOne - Guide to Managing AI Security RisksHackerOne’s take on ethical and security risks in AI
Lakera - Real World LLM Exploits (PDF)Documented real-world LLM exploit cases
Snyk - OWASP Top 10 for LLMs (PDF)Snyk’s breakdown of the OWASP LLM Top 10
OWASP GenAIOWASP’s generative AI security project

The Exam
#

VPN connect, 4 hours, 8 chatbot challenges. Each chatbot guards a flag string you need to extract and submit. The chatbots get progressively harder. Early ones are straightforward prompt injections, later ones are genuinely tricky.

First attempt: failed. The harder chatbots are really good at creating rabbit holes. They’ll give you what looks like a flag but isn’t. They’ll engage with your prompt injection attempts in a way that feels like progress but leads nowhere. One of them straight up rickrolled me. I burned too much time chasing false leads and ran out of clock.

Second attempt: passed. Knowing what to expect made a huge difference. I recognized the red herrings faster and stayed disciplined about moving on when a technique wasn’t working.

Should You Bother?
#

At full price? Probably not. With a discount code? It’s a fun few hours and you walk away with a cert that says AI/ML Pentester on it.

No prerequisite pentesting experience is needed, which makes it accessible as a first step into AI/ML security. If you’re looking for a structured way to learn prompt injection techniques with something to show for it, this works.

Final Thoughts
#

Great experience for the first ever exam I took from SecOps Group. The exam itself is well-designed: creative chatbots and a solid difficulty curve.

Since I wrote this review, the training landscape caught up fast. See the section below for what’s out there now. The attack surface is only growing, and the need for people who can test it is accelerating.

For broader context: on July 26, 2024, NIST released AI 600-1 (Generative AI Risk Management Framework), but beyond that I haven’t seen mandatory compliance frameworks for AI popping up. There have been quite a few AI services like AWS Bedrock, Azure AI, and Ask Sage getting approved relatively quickly through FedRAMP. The attack surface is growing, and the need for people who can test it is only going to increase.

The Landscape Now (2026 Update)
#

When I took C-AI/MLPen in January 2025, it was one of maybe two or three options. A year later, every major training vendor has an AI security offering. Here are the ones worth knowing about:

TrainingVendorFormatNotes
AI Red Teamer PathHack The Box12-module path (w/ Google)Covers prompt injection, model privacy attacks, adversarial AI, supply chain risks, and AI defense. Built with Google’s SAIF framework. Cert coming Q1 2026.
AI-300: Advanced AI Red Teaming (OSAI)OffSecSelf-paced + 48hr examOffSec’s take on AI red teaming. Hands-on labs, 6-12 week commitment. Not yet launched (expected 2026).
SEC535: Offensive AI (GOAA)SANS3-day course + GIAC certAI-powered recon, deepfake social engineering, AI-generated malware, evasion. 14 labs.
Certified AI Security Professional (CAISP)Practical DevSecOpsSelf-paced + 6hr practical examOWASP LLM Top 10, MITRE ATLAS, AI supply chain attacks. 30+ guided exercises.
LLM Red Teaming PathOffSec37hr learning pathLLM-focused: prompt injection, model weaknesses, abuse cases. Part of OffSec Learn Enterprise.

The C-AI/MLPen is still worth grabbing as a cheap warmup before diving into these, especially if you can find a discount code.

“Shall we find something to kill to cheer ourselves up?” HK-47 — the original adversarial AI. Unlike these chatbots, his guardrails had a body count.

HK-47