<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>d63f049e-ad7</externalid>
      <Title>Security Lead, Agentic Red Team</Title>
      <Description><![CDATA[<p>Job Title: Security Lead, Agentic Red Team</p>
<p>We&#39;re a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>
<p>As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google security to influence launch criteria.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Direct Agile Offensive Security: Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.</li>
<li>Perform Complex AI Exploitation: Develop and carry out advanced attack sequences that focus on vulnerabilities unique to GenAI, such as escalating privileges through tool usage, poisoning data, and executing multi-turn prompt injections.</li>
<li>Design Automated Validation Systems: Collaborate with Google teams to engineer &#39;Auto Red Teaming&#39; solutions that transform manual vulnerability discoveries into robust, automated regression testing frameworks.</li>
<li>Engineer Technical Countermeasures: Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors and non-deterministic model behaviors.</li>
<li>Manage Threat Intelligence Assets: Develop and oversee an evolving inventory of exploit primitives and agent-specific attack patterns used to establish release criteria and evaluate model security benchmarks.</li>
<li>Establish Security Scope: Collaborate with Google on conventional infrastructure protection, freeing the team to concentrate solely on agentic logic, model inference, and AI-centric exploits.</li>
</ul>
<p>About You:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>
<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>
<li>Deep technical understanding of LLM architectures and agentic workflows (e.g., chain-of-thought reasoning, tool usage).</li>
<li>Proven ability to work in a consulting capacity with product teams, driving security improvements in fast-paced release cycles.</li>
<li>Experience managing or technically leading small, high-performance engineering teams.</li>
</ul>
<p>In addition, the following would be an advantage:</p>
<ul>
<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>
<li>Familiarity with AI safety benchmarks and evaluation frameworks.</li>
<li>Experience writing code (Python, Go, or C++) to build automated security tools or fuzzers.</li>
<li>Ability to communicate complex probabilistic risks to executive stakeholders and engineering teams effectively.</li>
</ul>
<p>The US base salary range for this full-time position is between $248,000 and $349,000 + bonus + equity + benefits.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,000 - $349,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Red Teaming, Offensive Security, Adversarial Machine Learning, LLM architectures, agentic workflows, chain-of-thought reasoning, GenAI exploit development, prompt injection, adversarial examples, training data extraction, AI safety benchmarks, evaluation frameworks, Python, Go, C++, automated security tooling, fuzzing, security consulting with product teams, technical team leadership, executive risk communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a team of scientists, engineers, and machine learning experts working together to advance the state of the art in artificial intelligence.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>248000</Compensationmin>
      <Compensationmax>349000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7560787</Applyto>
      <Location>Mountain View, California, US; New York City, New York, US</Location>
      <Country>US</Country>
      <Postedate>2026-03-16</Postedate>
    </job>
    <job>
      <externalid>f73f108d-30a</externalid>
      <Title>Senior Security Engineer, Agentic Red Team</Title>
      <Description><![CDATA[<p>Job Title: Senior Security Engineer, Agentic Red Team</p>
<p>We&#39;re a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence.</p>
<p><strong>About Us</strong> The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>
<p><strong>The Role</strong> As a Senior Security Engineer on the Agentic Red Team, you will be the primary technical executor of our adversarial engagements. You will work &#39;in the room&#39; with product builders, identifying architectural flaws during the design phase long before formal reviews begin.</p>
<p><strong>Key Responsibilities:</strong></p>
<ul>
<li>Execute Agile Red Teaming: Conduct rapid, high-impact security assessments on agentic services, focusing on vulnerabilities unique to GenAI such as prompt injection, tool-use escalation, and autonomous lateral movement.</li>
<li>Develop Advanced Exploits: Engineer and execute complex attack sequences that exploit non-deterministic model behaviors, agentic logic errors, and data poisoning vectors.</li>
<li>Build Automated Defenses: Write code to transform manual vulnerability discoveries into automated regression testing frameworks (&#39;Auto Red Teaming&#39;) that prevent regression in future model versions.</li>
<li>Embed with Product Teams: Partner directly with developers during the design and build phases to provide immediate feedback, effectively shortening the feedback loop between offensive findings and defensive engineering.</li>
<li>Curate Threat Intelligence: Maintain and expand a library of agent-specific attack patterns and exploit primitives to establish robust release criteria for new models.</li>
</ul>
<p><strong>About You</strong> In order to set you up for success as a Senior Security Engineer at Google DeepMind, we look for the following skills and experience:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>
<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>
<li>Strong coding skills in Python, Go, or C++ with experience building security tools or automation.</li>
<li>Technical understanding of LLM architectures, agentic workflows (e.g., chain-of-thought reasoning), and common AI vulnerability classes.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>
<li>Experience working in a consulting capacity with product teams or in a fast-paced &#39;startup-like&#39; environment.</li>
<li>Familiarity with AI safety benchmarks, evaluation frameworks, and fuzzing techniques.</li>
<li>Ability to translate complex probabilistic risks into actionable engineering fixes for developers.</li>
</ul>
<p><strong>Salary &amp; Benefits</strong> The US base salary range for this full-time position is between $166,000 and $244,000 + bonus + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000 - $244,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Python, Go, C++, Red Teaming, Offensive Security, Adversarial Machine Learning, LLM architectures, agentic workflows, chain-of-thought reasoning, AI vulnerability classes, prompt injection, adversarial examples, training data extraction, AI safety benchmarks, evaluation frameworks, fuzzing techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a technology company that specializes in artificial intelligence research and development.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>166000</Compensationmin>
      <Compensationmax>244000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7596438</Applyto>
      <Location>Mountain View, California, US; New York City, New York, US; Zurich, Switzerland</Location>
      <Country></Country>
      <Postedate>2026-03-16</Postedate>
    </job>
  </jobs>
</source>