{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/adversarial-machine-learning"},"x-facet":{"type":"skill","slug":"adversarial-machine-learning","display":"Adversarial Machine Learning","count":4},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b5d2dd81-5ff"},"title":"Principal Applied Scientist","description":"<p>As the advertising ecosystem expands, sophisticated adversarial actors are leveraging generative AI, automation, and distributed infrastructure to bypass safety controls. The Ads Trust and Safety team requires a Principal Applied Scientist to contribute to the research and technical strategy for Threat Modelling team. We are looking for a security domain expert who can advance the state of the art in Threat Modeling, and Adversarial Defense. This role involves transitioning trust mechanisms from static verification to dynamic, behavioural-based integrity systems. You will architect solutions to detect and neutralize high-complexity fraud vectors, such as phishing, payment fraud, cloaking, malware distribution, token misuse, and authentication, ensuring the ads platform remains safe for users, advertisers, and publishers. 
The primary success metric is the robust identification and mitigation of advanced abuse vectors with minimal friction for legitimate advertisers and minimal impact on ad-serving latency.</p>\n<p>Responsibilities:</p>\n<p>Strategic Threat Modeling: Develop and maintain comprehensive adversarial frameworks to map the lifecycle of emerging threats, from account compromise (ATO) to malicious payload delivery.</p>\n<p>Evolution of Advertiser Trust: Advance the continuous, signal-based security protocol. Research and implement behavioural biometrics and Proof of Liveness models to detect synthetic identities and coordinated fraud rings.</p>\n<p>Adversarial Research: Proactively identify &#39;unknown unknown&#39; vulnerabilities through red-teaming and exploratory data analysis, developing models to predict attacker behaviour before widespread exploitation.</p>\n<p>Technical Leadership: Drive the technical roadmap for integrity and security, mentoring senior engineers and influencing cross-functional stakeholders on security investment priorities.</p>\n<p>Qualifications:</p>\n<p>Bachelor&#39;s, Master&#39;s, or PhD degree in Computer Science, Cybersecurity, Mathematics, or a related field, with 10+ years of related experience.</p>\n<p>Deep technical expertise in Cybersecurity, Anti-Abuse, or Adversarial Machine Learning.</p>\n<p>Strong programming skills in C++ or Python (at least one is required), with experience in building production-quality security or ML systems.</p>\n<p>Hands-on experience in one or more of the following: Web Security standards and Authentication Protocols (OAuth, OIDC).</p>\n<p>Malware analysis, de-obfuscation, or reverse engineering.</p>\n<p>Building fraud detection models at scale.</p>\n<p>Proven ability to design and implement defence mechanisms against complex abuse vectors (e.g., botnets, synthetic identity, evasion/cloaking).</p>\n<p>Strong communication and collaboration skills, with experience articulating complex security risks to business and product 
leadership.</p>\n<p>Preferred Qualifications:</p>\n<p>5+ years of experience in an Adversarial/Trust &amp; Safety role at a major internet platform or cybersecurity firm.</p>\n<p>Familiarity with the Ad-Tech stack (RTB, OpenRTB) and associated fraud incentives.</p>\n<p>Background in Graph Neural Networks (GNNs) for fraud ring detection or behavioural biometrics.</p>\n<p>Track record of impact via security research publications, patents, or contributions to industry security standards.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b5d2dd81-5ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-applied-scientist-33/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cybersecurity","Anti-Abuse","Adversarial Machine Learning","C++","Python","Web Security standards","Authentication Protocols","Malware analysis","De-obfuscation","Reverse engineering","Fraud detection models"],"x-skills-preferred":["Graph Neural Networks","Behavioural biometrics"],"datePosted":"2026-04-24T12:13:01.732Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cybersecurity, Anti-Abuse, Adversarial Machine Learning, C++, Python, Web Security standards, Authentication Protocols, Malware analysis, De-obfuscation, Reverse engineering, Fraud detection models, Graph Neural Networks, Behavioural biometrics"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d63f049e-ad7"},"title":"Security Lead, Agentic Red 
Team","description":"<p>Job Title: Security Lead, Agentic Red Team</p>\n<p>We&#39;re a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. Our mission is to close the &#39;Agentic Launch Gap&#39;; the critical window where novel AI capabilities outpace traditional security reviews.</p>\n<p>As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google security to influence launch criteria.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Direct Agile Offensive Security: Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.</li>\n<li>Perform Complex AI Exploitation: Develop and carry out advanced attack sequences that focus on vulnerabilities unique to GenAI, such as escalating privileges through tool usage, poisoning data, and executing multi-turn prompt injections.</li>\n<li>Design Automated Validation Systems: Collaborate with Google teams to engineer &#39;Auto RedTeaming&#39; solutions that transform manual vulnerability discoveries into robust, automated regression testing frameworks.</li>\n<li>Engineer Technical Countermeasures: Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors and non-deterministic model behaviors.</li>\n<li>Manage Threat Intelligence Assets: Develop and oversee an evolving inventory of exploit primitives and agent-specific attack patterns used to establish release criteria and evaluate model security benchmarks.</li>\n<li>Establish Security Scope: Collaborate with Google for conventional infrastructure protection, allowing the team to concentrate solely on 
agentic logic, model inference, and AI-centric exploits.</li>\n</ul>\n<p>About You:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>\n<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>\n<li>Deep technical understanding of LLM architectures and agentic workflows (e.g., chain-of-thought reasoning, tool usage).</li>\n<li>Proven ability to work in a consulting capacity with product teams, driving security improvements in fast-paced release cycles.</li>\n<li>Experience managing or technically leading small, high-performance engineering teams.</li>\n</ul>\n<p>In addition, the following would be an advantage:</p>\n<ul>\n<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>\n<li>Familiarity with AI safety benchmarks and evaluation frameworks.</li>\n<li>Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers.</li>\n<li>Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively.</li>\n</ul>\n<p>The US base salary range for this full-time position is between $248,000 - $349,000 + bonus + equity + benefits.</p>","url":"https://yubhub.co/jobs/job_d63f049e-ad7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7560787","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$248,000 - $349,000 + bonus + equity + benefits","x-skills-required":["Bachelor's degree in Computer Science, Information Security, or equivalent practical 
experience","Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning","Deep technical understanding of LLM architectures and agentic workflows","Proven ability to work in a consulting capacity with product teams","Experience managing or technically leading small, high-performance engineering teams"],"x-skills-preferred":["Hands-on experience developing exploits for GenAI models","Familiarity with AI safety benchmarks and evaluation frameworks","Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers","Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively"],"datePosted":"2026-03-16T14:41:55.843Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US; New York City, New York, US"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Bachelor's degree in Computer Science, Information Security, or equivalent practical experience, Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning, Deep technical understanding of LLM architectures and agentic workflows, Proven ability to work in a consulting capacity with product teams, Experience managing or technically leading small, high-performance engineering teams, Hands-on experience developing exploits for GenAI models, Familiarity with AI safety benchmarks and evaluation frameworks, Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers, Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":248000,"maxValue":349000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f73f108d-30a"},"title":"Senior 
Security Engineer, Agentic Red Team","description":"<p>Job Title: Senior Security Engineer, Agentic Red Team</p>\n<p>We&#39;re a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence.</p>\n<p><strong>About Us</strong> The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>\n<p><strong>The Role</strong> As a Senior Security Engineer on the Agentic Red Team, you will be the primary technical executor of our adversarial engagements. You will work &#39;in the room&#39; with product builders, identifying architectural flaws during the design phase long before formal reviews begin.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Execute Agile Red Teaming: Conduct rapid, high-impact security assessments on agentic services, focusing on vulnerabilities unique to GenAI such as prompt injection, tool-use escalation, and autonomous lateral movement.</li>\n<li>Develop Advanced Exploits: Engineer and execute complex attack sequences that exploit non-deterministic model behaviors, agentic logic errors, and data poisoning vectors.</li>\n<li>Build Automated Defenses: Write code to transform manual vulnerability discoveries into automated regression testing frameworks (&#39;Auto Red Teaming&#39;) that prevent regression in future model versions.</li>\n<li>Embed with Product Teams: Partner directly with developers during the design and build phases to provide immediate feedback, effectively shortening the feedback loop between offensive findings and defensive engineering.</li>\n<li>Curate Threat Intelligence: Maintain and expand a library of agent-specific attack patterns and exploit primitives to establish robust release criteria for new models.</li>\n</ul>\n<p><strong>About You</strong> In order to 
set you up for success as a Senior Security Engineer at Google DeepMind, we look for the following skills and experience:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>\n<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>\n<li>Strong coding skills in Python, Go, or C++ with experience building security tools or automation.</li>\n<li>Technical understanding of LLM architectures, agentic workflows (e.g., chain-of-thought reasoning), and common AI vulnerability classes.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>\n<li>Experience working in a consulting capacity with product teams or in a fast-paced &#39;startup-like&#39; environment.</li>\n<li>Familiarity with AI safety benchmarks, evaluation frameworks, and fuzzing techniques.</li>\n<li>Ability to translate complex probabilistic risks into actionable engineering fixes for developers.</li>\n</ul>\n<p><strong>Salary &amp; Benefits</strong> The US base salary range for this full-time position is between $166,000 - $244,000 + bonus + equity + benefits.</p>","url":"https://yubhub.co/jobs/job_f73f108d-30a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7596438","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000 - $244,000 + bonus + equity + benefits","x-skills-required":["Python","Go","C++","Red Teaming","Offensive Security","Adversarial Machine Learning","LLM architectures","agentic 
workflows","chain-of-thought reasoning","AI vulnerability classes"],"x-skills-preferred":["prompt injection","adversarial examples","training data extraction","AI safety benchmarks","evaluation frameworks","fuzzing techniques"],"datePosted":"2026-03-16T14:39:43.939Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US; New York City, New York, US; Zurich, Switzerland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, C++, Red Teaming, Offensive Security, Adversarial Machine Learning, LLM architectures, agentic workflows, chain-of-thought reasoning, AI vulnerability classes, prompt injection, adversarial examples, training data extraction, AI safety benchmarks, evaluation frameworks, fuzzing techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":244000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_716d3247-e3f"},"title":"ML/Research Engineer, Safeguards","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for ML Engineers and Research Engineers to help detect and mitigate misuse of our AI systems. As a member of the Safeguards ML team, you will build systems that identify harmful use—from individual policy violations to sophisticated, coordinated attacks—and develop defenses that keep our products safe as capabilities advance. 
You will also work on systems that protect user wellbeing and ensure our models behave appropriately across a wide range of contexts. This work feeds directly into Anthropic&#39;s Responsible Scaling Policy commitments.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop classifiers to detect misuse and anomalous behavior at scale. This includes developing synthetic data pipelines for training classifiers and methods to automatically source representative evaluations to iterate on</li>\n<li>Build systems to monitor for harms that span multiple exchanges, such as coordinated cyber attacks and influence operations, and develop new methods for aggregating and analyzing signals across contexts</li>\n<li>Evaluate and improve the safety of agentic products—developing both threat models and environments to test for agentic risks, and developing and deploying mitigations for prompt injection attacks</li>\n<li>Conduct research on automated red-teaming, adversarial robustness, and other research that helps test for or find misuse</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have 4+ years of experience in ML engineering, research engineering, or applied research, in academia or industry</li>\n<li>Have proficiency in Python and experience building ML systems</li>\n<li>Are comfortable working across the research-to-deployment pipeline, from exploratory experiments to production systems</li>\n<li>Are worried about misuse risks of AI systems, and want to work to mitigate them</li>\n<li>Have strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>Language modeling and transformers</li>\n<li>Building classifiers, anomaly detection systems, or behavioral ML</li>\n<li>Adversarial machine learning or red-teaming</li>\n<li>Interpretability or probes</li>\n<li>Reinforcement 
learning</li>\n<li>High-performance, large-scale ML systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. 
At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits.</p>","url":"https://yubhub.co/jobs/job_716d3247-e3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4949336008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $500,000 USD","x-skills-required":["Python","Machine Learning","Research Engineering","Adversarial Machine Learning","Red-teaming","Interpretability","Probes","Reinforcement Learning","High-performance, large-scale ML systems"],"x-skills-preferred":["Language modeling and transformers","Building classifiers, anomaly detection systems, or behavioral ML"],"datePosted":"2026-03-08T13:46:45.711Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, Research Engineering, Adversarial Machine Learning, Red-teaming, Interpretability, Probes, Reinforcement Learning, High-performance, large-scale ML systems, Language modeling and transformers, Building classifiers, anomaly detection systems, or behavioral ML","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}}]}