<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>d2f5b1e5-545</externalid>
      <Title>Research Scientist, Gemini Safety</Title>
      <Description><![CDATA[<p>We&#39;re seeking a versatile Research Scientist to join our Gemini Safety team. As a Research Scientist, you will apply and develop cutting-edge data and algorithmic solutions to advance our latest user-facing models. Your work will focus on improving the safety and fairness behavior of state-of-the-art AI models, driving the development of foundational technology adopted by numerous product areas, including the Gemini App, the Cloud API, and Search.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Post-training/instruction tuning state-of-the-art LLMs, focusing on text-to-text and image/video/audio-to-text modalities as well as agentic capabilities</li>
<li>Exploring data, reasoning, and algorithmic solutions to ensure Gemini Models are safe, maximally helpful, and work for everyone</li>
<li>Improving Gemini&#39;s adversarial robustness, with a focus on high-stakes abuse risks</li>
<li>Designing and maintaining high-quality evaluation protocols to assess gaps and headroom in model behavior related to safety and fairness</li>
<li>Developing and executing experimental plans to address known gaps, or to build entirely new capabilities</li>
<li>Driving innovation and deepening understanding of Supervised Fine-Tuning and Reinforcement Learning fine-tuning at scale</li>
</ul>
<p>To succeed as a Research Scientist on the Gemini Safety team, you should bring the following skills and experience:</p>
<ul>
<li>PhD in Computer Science, a related field, or equivalent practical experience</li>
<li>Significant LLM post-training experience</li>
<li>Experience in reward modeling and Reinforcement Learning for LLM instruction tuning</li>
<li>Experience with long-range Reinforcement Learning</li>
<li>Experience in areas such as Safety, Fairness, and Alignment</li>
<li>Track record of publications at venues such as NeurIPS, ICLR, and ICML</li>
<li>Experience taking research from concept to product</li>
<li>Experience collaborating on or leading an applied research project</li>
<li>Strong experimental taste: Good judgment regarding baselines, ablations, and what is worth testing</li>
<li>Experience with JAX</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PhD in Computer Science, LLM post-training experience, Reward modeling and Reinforcement Learning for LLM instruction tuning, Long-range Reinforcement Learning, Safety, Fairness, and Alignment, NeurIPS/ICLR/ICML publications, Research from concept to product, Collaborating on or leading an applied research project, JAX</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a subsidiary of Alphabet Inc., a multinational conglomerate.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7731944?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Zurich, Switzerland</Location>
      <Country>Switzerland</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>