<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>fc4a0972-622</externalid>
      <Title>Principal Product Manager, AI Model Security</Title>
<Description><![CDATA[<p>The Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI’s Superintelligence Team (MAIST), a startup-like team inside Microsoft AI created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>
<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society by advancing science, education, and global well-being.</p>
<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>
<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>
<p>Responsibilities:</p>
<p>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows.</p>
<p>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors.</p>
<p>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities.</p>
<p>Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time.</p>
<p>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities.</p>
<p>Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning.</p>
<p>Define security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance.</p>
<p>Drive training data strategy to improve domain-specific model quality for security practitioners.</p>
<p>Shape security policy and launch readiness: Establish clear security criteria for model launches.</p>
<p>Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context.</p>
<p>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research.</p>
<p>Translate what you learn into actionable product priorities.</p>
<p>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards.</p>
<p>Qualifications:</p>
<p>Bachelor’s Degree AND 5+ years of experience in product management, security engineering, or software development OR equivalent experience</p>
<p>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</p>
<p>Deep familiarity with LLM security threats (prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models), gained through professional experience, red-teaming, or security research</p>
<p>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</p>
<p>Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</p>
<p>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</p>
<p>Preferred Qualifications:</p>
<p>Technical background in computer science, security, or AI/ML; a postgraduate degree is a plus but not required</p>
<p>Experience in offensive security, penetration testing, or red teaming, ideally applied to AI/ML systems</p>
<p>Familiarity with security workflows and tooling (SIEM, SOAR, EDR, threat intelligence platforms) and how practitioners use them in production</p>
<p>Understanding of the model lifecycle (pre-training, fine-tuning, RLHF, deployment, monitoring) and where security interventions are most effective</p>
<p>Experience working with or within enterprise security organizations (e.g., Microsoft Security, CrowdStrike, Palo Alto Networks, or similar)</p>
<p>Published research, blog posts, or public contributions in AI security, adversarial ML, or LLM red teaming</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product requirements, security engineering, software development, evaluation systems, security benchmarks, adversarial testing frameworks, autonomous decision-making, project management, offensive security, penetration testing, red teaming, security workflows, tooling, SIEM, SOAR, EDR, threat intelligence platforms, model lifecycle, pre-training, fine-tuning, RLHF, deployment, monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a technology company that develops and markets artificial intelligence and machine learning products and services.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-product-manager-ai-model-security/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>25c868eb-c32</externalid>
      <Title>Principal Product Manager, AI Model Security</Title>
<Description><![CDATA[<p>The Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI’s Superintelligence Team (MAIST), a startup-like team inside Microsoft AI created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>
<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society by advancing science, education, and global well-being.</p>
<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>
<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>
<p>Responsibilities:</p>
<p>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows.</p>
<p>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors.</p>
<p>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities.</p>
<p>Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time.</p>
<p>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities.</p>
<p>Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning.</p>
<p>Define security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance.</p>
<p>Drive training data strategy to improve domain-specific model quality for security practitioners.</p>
<p>Shape security policy and launch readiness: Establish clear security criteria for model launches.</p>
<p>Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context.</p>
<p>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research.</p>
<p>Translate what you learn into actionable product priorities.</p>
<p>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards.</p>
<p>Qualifications:</p>
<p>Bachelor’s Degree AND 5+ years of experience in product management, security engineering, or software development OR equivalent experience</p>
<p>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</p>
<p>Deep familiarity with LLM security threats (prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models), gained through professional experience, red-teaming, or security research</p>
<p>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</p>
<p>Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</p>
<p>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product management, security engineering, software development, model training, fine-tuning, RLHF, post-training safeguards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-product-manager-ai-model-security-2/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
  </jobs>
</source>