{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/llm-security-threats"},"x-facet":{"type":"skill","slug":"llm-security-threats","display":"LLM Security Threats","count":2},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry the `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fc4a0972-622"},"title":"Principal Product Manager, AI Model Security","description":"<p>The Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>\n<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. 
We aim to deliver breakthroughs that benefit society: advancing science, education, and global well-being.</p>\n<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>\n<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>\n<p>Responsibilities:</p>\n<p>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows.</p>\n<p>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors.</p>\n<p>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities.</p>\n<p>Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time.</p>\n<p>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities.</p>\n<p>Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning.</p>\n<p>Define 
security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance.</p>\n<p>Drive training data strategy to improve domain-specific model quality for security practitioners.</p>\n<p>Shape security policy and launch readiness: Establish clear security criteria for model launches.</p>\n<p>Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context.</p>\n<p>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research.</p>\n<p>Translate what you learn into actionable product priorities.</p>\n<p>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards.</p>\n<p>Qualifications:</p>\n<p>Bachelor’s Degree AND 5+ years of experience in product management, security engineering, or software development OR equivalent experience</p>\n<p>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</p>\n<p>Deep familiarity with LLM security threats: prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, gained through professional experience, red-teaming, or security research</p>\n<p>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</p>\n<p>Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</p>\n<p>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</p>\n<p>Preferred Qualifications:</p>\n<p>Technical background in computer science, security, or AI/ML; 
a postgraduate degree is a plus but not required</p>\n<p>Experience in offensive security, penetration testing, or red teaming, ideally applied to AI/ML systems</p>\n<p>Familiarity with security workflows and tooling (SIEM, SOAR, EDR, threat intelligence platforms) and how practitioners use them in production</p>\n<p>Understanding of the model lifecycle (pre-training, fine-tuning, RLHF, deployment, monitoring) and where security interventions are most effective</p>\n<p>Experience working with or within enterprise security organizations (e.g., Microsoft Security, CrowdStrike, Palo Alto Networks, or similar)</p>\n<p>Published research, blog posts, or public contributions in AI security, adversarial ML, or LLM red teaming</p>","url":"https://yubhub.co/jobs/job_fc4a0972-622","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-product-manager-ai-model-security/","x-work-arrangement":null,"x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI/ML systems","LLM security threats","prompt injection","jailbreaking","data exfiltration","adversarial attacks on generative models","product requirements","security engineering","software development","evaluation systems","security benchmarks","adversarial testing frameworks","autonomous decision-making","project management","offensive security","penetration testing","red teaming","security workflows","tooling","SIEM","SOAR","EDR","threat intelligence platforms","model lifecycle",
"pre-training","fine-tuning","RLHF","deployment","monitoring"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:26.485Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product requirements, security engineering, software development, evaluation systems, security benchmarks, adversarial testing frameworks, autonomous decision-making, project management, offensive security, penetration testing, red teaming, security workflows, tooling, SIEM, SOAR, EDR, threat intelligence platforms, model lifecycle, pre-training, fine-tuning, RLHF, deployment, monitoring"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25c868eb-c32"},"title":"Principal Product Manager, AI Model Security","description":"<p>The Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>\n<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. 
We aim to deliver breakthroughs that benefit society: advancing science, education, and global well-being.</p>\n<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>\n<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>\n<p>Responsibilities:</p>\n<p>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows.</p>\n<p>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors.</p>\n<p>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities.</p>\n<p>Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time.</p>\n<p>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities.</p>\n<p>Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning.</p>\n<p>Define 
security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance.</p>\n<p>Drive training data strategy to improve domain-specific model quality for security practitioners.</p>\n<p>Shape security policy and launch readiness: Establish clear security criteria for model launches.</p>\n<p>Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context.</p>\n<p>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research.</p>\n<p>Translate what you learn into actionable product priorities.</p>\n<p>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards.</p>\n<p>Qualifications:</p>\n<p>Bachelor’s Degree AND 5+ years of experience in product management, security engineering, or software development OR equivalent experience</p>\n<p>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</p>\n<p>Deep familiarity with LLM security threats: prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, gained through professional experience, red-teaming, or security research</p>\n<p>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</p>\n<p>Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</p>\n<p>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</p>",
"url":"https://yubhub.co/jobs/job_25c868eb-c32","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-product-manager-ai-model-security-2/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI/ML systems","LLM security threats","prompt injection","jailbreaking","data exfiltration","adversarial attacks on generative models","product management","security engineering","software development","model training","fine-tuning","RLHF","post-training safeguards"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:01.831Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product management, security engineering, software development, model training, fine-tuning, RLHF, post-training safeguards"}]}