AI Security Penetration Tester / AI Red Team Engineer

🌍 Remote, USA 🎯 Full-time 🕐 Posted Recently

Job Description


Position Overview

We are seeking a highly skilled AI Security Penetration Tester / AI Red Team Engineer to lead offensive security engagements focused on AI/ML-powered applications and platforms. This role is responsible for identifying, exploiting, and demonstrating security risks across traditional and AI-specific attack surfaces, including LLMs, AI-enabled APIs, and AI-driven business logic.

You will collaborate with Engineering, Security, Red Teams, SOC, and AI research teams to proactively identify weaknesses, simulate real-world AI attacks, and guide remediation strategies that strengthen the enterprise AI security posture.

Key Responsibilities
  • Conduct AI-focused penetration testing across web, API, mobile, and AI-powered systems.
  • Perform AI red teaming exercises including prompt injection, jailbreak testing, model evasion, and adversarial ML attacks.
  • Identify risks such as model poisoning, data leakage, adversarial inputs, and AI business logic abuse.
  • Perform threat modeling and architecture reviews for AI-enabled applications.
  • Develop and enhance AI-focused offensive security tools and testing methodologies.
  • Research emerging AI attack techniques and assess potential business impact.
  • Deliver comprehensive penetration testing reports and executive-ready presentations.
  • Lead engagements end-to-end including scoping, execution, reporting, and remediation validation.
  • Partner with engineering teams to provide actionable security recommendations.
  • Collaborate with Red Teams and SOC to continuously improve AI security playbooks.

Required Qualifications
  • 3+ years of hands-on penetration testing experience (web, API, mobile).
  • Demonstrated experience in AI red teaming, LLM security testing, or adversarial ML.
  • Proficiency with tools such as Burp Suite Pro, Netsparker, Checkmarx, or similar.
  • Working knowledge of AI/ML frameworks (TensorFlow, PyTorch, LLM APIs, LangChain).
  • Strong understanding of OWASP Top 10, API security, and modern attack vectors.
  • Excellent written and verbal communication skills.
  • Relevant security certifications (GWAPT, OSWE, OSWA, CREST, etc.) preferred.
  • Bachelor’s degree in Computer Science, Cybersecurity, or equivalent experience.

Preferred Qualifications
  • Experience testing LLM-based applications, chatbots, copilots, or AI workflows.
  • Familiarity with MLOps, model deployment security, and cloud AI platforms (AWS, Azure, Google Cloud Platform).
  • Ability to build custom offensive tools/scripts in Python, Go, or similar languages.
  • Exposure to SOC operations, detection engineering, or purple team exercises.
  • Contributions to AI security research, blogs, talks, or open-source projects.

What Success Looks Like
  • AI vulnerabilities identified before production release
  • Clear demonstration of AI attack paths and business risk
  • Actionable remediation guidance adopted by engineering teams
  • Continuous evolution of AI red teaming methodologies
  • Measurable improvement in AI security posture


Ready to Apply?

Don't miss out on this amazing opportunity!

🚀 Apply Now
