<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>465e2cfb-ddc</externalid>
      <Title>Staff Machine Learning Research Scientist, LLM Evals</Title>
      <Description><![CDATA[<p>As a Staff Machine Learning Research Scientist on the LLM Evals team, you will lead the development of novel evaluation methodologies, metrics, and benchmarks to measure the capabilities and limitations of frontier LLMs.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Driving research on the effectiveness and limitations of existing LLM evaluation techniques.</li>
<li>Designing and developing novel evaluation benchmarks for large language models, covering areas such as instruction following, factuality, robustness, and fairness.</li>
<li>Communicating, collaborating, and building relationships with clients and peer teams to facilitate cross-functional projects.</li>
<li>Collaborating with internal teams and external partners to refine metrics and create standardized evaluation protocols.</li>
<li>Implementing scalable and reproducible evaluation pipelines using modern ML frameworks.</li>
<li>Publishing research findings in top-tier AI conferences and contributing to open-source benchmarking initiatives.</li>
<li>Mentoring and guiding research scientists and engineers, providing technical leadership across cross-functional projects.</li>
<li>Staying deeply engaged with the ML research community, tracking emerging work and contributing to the advancement of LLM evaluation science.</li>
</ul>
<p>The ideal candidate will have 5+ years of hands-on experience in large language models, NLP, and Transformer modeling, spanning both research and engineering development.</p>
<p>You will thrive in a high-energy, fast-paced startup environment and be ready to dedicate the time and effort needed to drive impactful results.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$264,800-$331,000 USD</Salaryrange>
      <Skills>large language model, NLP, Transformer modeling, evaluation methodologies, metrics, benchmarks, instruction following, factuality, robustness, fairness</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4628044005</Applyto>
      <Location>San Francisco, CA; Seattle, WA; New York, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1c4de3ab-a58</externalid>
      <Title>Machine Learning Engineer, Global Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re hiring a Machine Learning Engineer to bridge the gap between frontier research and real-world impact. As a key member of our GPS Engineering team, you will lead the charge in research into Agent design, Deep Research and AI Safety/reliability, developing novel methodologies that not only power public sector applications but set new standards across the entire Scale organisation.</p>
<p>Your mission is threefold:</p>
<ul>
<li>Frontier Research &amp; Publication: Leading research into LLM/agent capabilities, reasoning, and safety, with the goal of publishing at top-tier venues (NeurIPS, ICML, ICLR).</li>
<li>Cross-Org Impact: Developing generalised techniques in Agent design, AI Safety and Deep Research agents that scale across our commercial and government platforms.</li>
<li>Mission-Critical Applications: Engineering high-stakes AI systems that impact millions of citizens globally.</li>
</ul>
<p>You will:</p>
<ul>
<li>Pioneer Novel Architectures: Design and train state-of-the-art models and agents, moving beyond “off-the-shelf” solutions to create custom architectures for complex public sector reasoning tasks.</li>
<li>Lead AI Safety Initiatives: Research and implement robust safety frameworks, including red teaming, alignment (RLHF/DPO), and bias mitigation strategies essential for sovereign AI.</li>
<li>Drive Deep Research Capabilities: Develop agents capable of long-horizon reasoning and autonomous information synthesis to solve complex problems for national security and public policy.</li>
<li>Publish and Contribute: Represent Scale in the broader research community by publishing high-impact papers and contributing to open-source breakthroughs.</li>
<li>Consult as a Subject Matter Expert: Act as a technical authority for public sector leaders, advising on the theoretical limits and safety requirements of emerging AI.</li>
<li>Build Evaluation Frontiers: Create new benchmarks and evaluation protocols that define what success looks like for high-stakes, non-commercial AI applications.</li>
</ul>
<p>Ideally, you’d have:</p>
<ul>
<li>Advanced Degree: PhD or Master’s in Computer Science, Mathematics, or a related field with a focus on Deep Learning.</li>
<li>Research Track Record: A portfolio of first-author publications at major conferences (NeurIPS, ICML, CVPR, EMNLP, etc.).</li>
<li>Engineering Rigour: Strong proficiency in Python, deep learning frameworks (PyTorch/JAX), with the ability to write production-ready code that scales.</li>
<li>Safety Expertise: Experience in alignment, robustness, or interpretability research.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience with large-scale distributed training on massive clusters.</li>
<li>Experience in building agentic systems that are reliable.</li>
<li>Experience in Sovereign AI or working with highly regulated data environments.</li>
<li>A zero-to-one mindset: Comfortable navigating ambiguity and defining research directions from scratch.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Deep Learning, PyTorch, JAX, AI Safety, Alignment, Robustness, Interpretability, Large-scale Distributed Training, Agentic Systems, Sovereign AI, Regulated Data Environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4413274005</Applyto>
      <Location>Doha, Qatar; London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8549c317-12f</externalid>
      <Title>Senior Research Scientist, Reward Models</Title>
      <Description><![CDATA[<p>As a Senior Research Scientist on our Reward Models team, you&#39;ll lead research efforts to improve how we specify and learn human preferences at scale.</p>
<p>Your work will directly shape how our models understand and optimize for what humans actually want, enabling Claude to be more useful, more reliable, and better aligned with human values.</p>
<p>This role focuses on pushing the frontier of reward modeling for large language models. You&#39;ll develop novel architectures and training methodologies for RLHF, research new approaches to LLM-based evaluation and grading (including rubric-based methods), and investigate techniques to identify and mitigate reward hacking.</p>
<p>You&#39;ll collaborate closely with teams across Anthropic, including Finetuning, Alignment Science, and our broader research organization, to ensure your work translates into concrete improvements in both model capabilities and safety.</p>
<p>We&#39;re looking for someone who can drive ambitious research agendas while also shipping practical improvements to production systems. You&#39;ll have the opportunity to work on some of the most important open problems in AI alignment, with access to frontier models and significant computational resources.</p>
<p>Your work will directly advance the science of how we train AI systems to be both highly capable and safe.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead research on novel reward model architectures and training approaches for RLHF</li>
<li>Develop and evaluate LLM-based grading and evaluation methods, including rubric-driven approaches that improve consistency and interpretability</li>
<li>Research techniques to detect, characterize, and mitigate reward hacking and specification gaming</li>
<li>Design experiments to understand reward model generalization, robustness, and failure modes</li>
<li>Collaborate with the Finetuning team to translate research insights into improvements for production training pipelines</li>
<li>Contribute to research publications, blog posts, and internal documentation</li>
<li>Mentor other researchers and help build institutional knowledge around reward modeling</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have a track record of research contributions in reward modeling, RLHF, or closely related areas of machine learning</li>
<li>Have experience training and evaluating reward models for large language models</li>
<li>Are comfortable designing and running large-scale experiments with significant computational resources</li>
<li>Can work effectively across research and engineering, iterating quickly while maintaining scientific rigor</li>
<li>Enjoy collaborative research and can communicate complex ideas clearly to diverse audiences</li>
<li>Care deeply about building AI systems that are both highly capable and safe</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have published research on reward modeling, preference learning, or RLHF</li>
<li>Have experience with LLM-as-judge approaches, including calibration and reliability challenges</li>
<li>Have worked on reward hacking, specification gaming, or related robustness problems</li>
<li>Have experience with constitutional AI, debate, or other scalable oversight approaches</li>
<li>Have contributed to production ML systems at scale</li>
<li>Have familiarity with interpretability techniques as applied to understanding reward model behavior</li>
</ul>
<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
</ul>
<p>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different:</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$500,000 USD</Salaryrange>
      <Skills>reward modeling, RLHF, large language models, novel architectures, training methodologies, evaluation and grading, rubric-based methods, reward hacking, specification gaming, generalization, robustness, failure modes, computational resources, scientific rigor, communication skills, interpretability techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5024835008</Applyto>
      <Location>Remote-Friendly (Travel Required) | San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>64176983-af0</externalid>
      <Title>Research Engineer, Reward Models Platform</Title>
      <Description><![CDATA[<p>You will work as a Research Engineer on Anthropic&#39;s Reward Models Platform. Your primary responsibility will be to design and build infrastructure that enables researchers to rapidly iterate on reward signals. This includes tools for rubric development, human feedback data analysis, and reward robustness evaluation. You will also develop systems for automated quality assessment of rewards, including detection of reward hacks and other pathologies. Additionally, you will create tooling that allows researchers to easily compare different reward methodologies and understand their effects. You will collaborate with researchers to translate science requirements into platform capabilities and optimize existing systems for performance, reliability, and ease of use.</p>
<p>You will have the opportunity to contribute directly to research projects yourself and have a direct impact on our ability to scale reward development across domains. You will work closely with researchers and translate ambiguous requirements into well-scoped engineering projects.</p>
<p>To be successful in this role, you should have prior research experience and be excited to work closely with researchers. You should have strong Python skills and experience with ML workflows and data pipelines, and building related infrastructure/tooling/platforms. You should be comfortable working across the stack, ranging from data pipelines to experiment tracking to user-facing tooling.</p>
<p>Strong candidates may also have experience with ML research, building internal tooling and platforms for ML researchers, data quality assessment and pipeline optimization, experiment tracking, evaluation frameworks, or MLOps tooling. They may also have experience with large-scale data processing, Kubernetes, distributed systems, or cloud infrastructure, and familiarity with reinforcement learning or fine-tuning workflows.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$500,000 USD</Salaryrange>
      <Skills>Python, ML workflows, data pipelines, infrastructure/tooling/platforms, rubric development, human feedback data analysis, reward robustness evaluation, automated quality assessment, reward hacks, pathologies, experiment tracking, evaluation frameworks, MLOps tooling, ML research, building internal tooling and platforms for ML researchers, data quality assessment and pipeline optimization, Kubernetes, distributed systems, cloud infrastructure, reinforcement learning, fine-tuning workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that develops artificial intelligence systems. It was founded by a group of researchers and engineers.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5024831008</Applyto>
      <Location>Remote-Friendly (Travel Required) | San Francisco, CA | Seattle, WA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1ee5ad51-8f0</externalid>
      <Title>SWE - Grids - Fixed Term Contract - 6 Months - London, UK</Title>
      <Description><![CDATA[<p>We are seeking an experienced and hands-on Software Engineer for a fixed-term contract to join the Energy Grids team at Google DeepMind. In this individual contributor role, you will work at the cutting edge of power systems and machine learning, developing and deploying innovative AI solutions to optimize the operation of electrical power grids.</p>
<p>Your work will be critical to delivering a real-world validation of our approach, with a primary focus on core software engineering tasks to:</p>
<ul>
<li>Enable rapid, trustworthy experimentation.</li>
<li>Maintain rigorous benchmarking and testing.</li>
<li>Manage scale for both data and model size.</li>
<li>Ensure and maintain high data quality for both real-world and synthetic data.</li>
</ul>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design, implement, and maintain robust and reliable systems and workflows for generating large-scale synthetic and real datasets of power grid optimization problems.</li>
<li>Design and implement rigorous unit, integration, and system tests to ensure the reliability, accuracy, and maintained performance of our models and software, with a focus on data pipelines.</li>
<li>Maintain and contribute to our machine learning codebase, ensuring efficient data structures and seamless integration with our power system models and optimization solvers.</li>
<li>Ensure the codebase supports ongoing experimentation, while simultaneously increasing scalability, robustness, and reliability via improved integration testing and performance benchmarking.</li>
<li>Work closely and collaboratively with a team of engineers, research scientists, and product managers to deliver real-world impact.</li>
</ul>
<p><strong>Minimum Qualifications</strong></p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or equivalent practical experience.</li>
<li>Strong proficiency in C++, Python, or JAX.</li>
<li>Demonstrated experience developing or utilizing solutions for robustness or quality assurance within software and/or ML systems.</li>
<li>Experience processing, generating, and analyzing large-scale data, e.g. for ML applications.</li>
<li>Proven ability to discuss technical ideas effectively and collaborate in interdisciplinary teams.</li>
<li>Motivated by the prospect of real-world impact and focused on excellence in software development.</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with Google&#39;s technical stack and/or Google Cloud Platform (GCP).</li>
<li>Familiarity with modern hardware accelerators (GPU/TPU).</li>
<li>Experience with modern ML training frameworks, such as JAX.</li>
<li>Experience in developing software in a translational research or production setting.</li>
<li>Proficiency in Julia.</li>
</ul>
]]></Description>
      <Jobtype>contract</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Jax, Robustness, Quality Assurance, Software Development, Machine Learning, Data Analysis, Google&apos;s technical stack, Google Cloud Platform (GCP), Modern hardware accelerators (GPU / TPU), Modern ML training frameworks (Jax), Software development in a translational research or production setting, Proficiency in Julia</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a subsidiary of Alphabet Inc., a multinational conglomerate.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7750738</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9fc281b0-84b</externalid>
      <Title>Senior Mechanical Engineer, Actuation &amp; Fluid Systems</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Mechanical Engineer to support the development of hydraulic and actuated systems for our X-BAT jet-engine VTOL military UAV platform. This role involves designing components and fluid systems, engaging with suppliers, performing system and component analysis and sizing, and owning multiple aspects of X-BAT&#39;s hydraulics and actuation systems.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating with the engineering team to design, analyse, and implement hydraulic and actuated system designs</li>
<li>Managing suppliers to establish component and system specifications, purchasing COTS equipment, and ensuring timely delivery of systems to the aircraft</li>
<li>Designing and executing tests to ensure component and system-level performance meets requirements</li>
<li>Owning and managing lightweight system-level requirements</li>
<li>Working with cross-functional stakeholders to ensure excellent implementation of designs</li>
</ul>
<p>The ideal candidate will have a BS in Mechanical Engineering (or equivalent), 5+ years of experience in fluid or hydraulic system design, and strong CAD skills (preferably NX). They should also be proficient in hydraulic system analysis, including first-principles methods, Simscape, and Amesim.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$110,000-$160,000 USD</Salaryrange>
      <Skills>Mechanical Engineering, Fluid System Design, Hydraulic Components, CAD Skills (NX), Hydraulic System Analysis, Environmental and Operational Robustness, Ground Test and Flight Test Campaigns, Reliability and Safety Methods</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems for military and civilian use.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/269355be-4438-4d8c-9ee7-336456be9051</Applyto>
      <Location>Dallas</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1e0f3b52-1ae</externalid>
      <Title>Research Scientist, Gemini Safety</Title>
      <Description><![CDATA[<p>We&#39;re seeking a versatile Research Scientist to join our Gemini Safety team, responsible for advancing the safety and fairness behaviour of state-of-the-art AI models. As a key member of our team, you will apply and develop cutting-edge data and algorithmic solutions to ensure Gemini models are safe, maximally helpful, and work for everyone.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Post-training/instruction tuning state-of-the-art language models, focusing on text-to-text, image/video/audio-to-text modalities and agentic capabilities</li>
<li>Exploring data, reasoning, and algorithmic solutions to ensure Gemini models are safe and work for everyone</li>
<li>Improving Gemini&#39;s adversarial robustness, with a focus on high-stakes abuse risks</li>
<li>Designing and maintaining high-quality evaluation protocols to assess model behaviour gaps and headroom related to safety and fairness</li>
<li>Developing and executing experimental plans to address known gaps or construct entirely new capabilities</li>
</ul>
<p>To succeed in this role, you should have a PhD in Computer Science or a related field, significant LLM post-training experience, and a track record of publications at top conferences. Experience in reward modelling and reinforcement learning for LLM instruction tuning, long-range reinforcement learning, safety, fairness, and alignment is an advantage.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PhD in Computer Science or a related field, Significant LLM post-training experience, Post-training/instruction tuning state-of-the-art language models, Exploring data, reasoning, and algorithmic solutions, Improving Gemini&apos;s adversarial robustness, Reward modelling and reinforcement learning for LLMs instruction tuning, Long-range reinforcement learning, Safety, fairness, and alignment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a leading artificial intelligence research organisation developing advanced AI technologies for widespread public benefit and scientific discovery.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7421111</Applyto>
      <Location>Mountain View, California, US</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>2f942bce-976</externalid>
      <Title>Analog Design, Sr Engineer</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products. These engineers play a crucial role in advancing technology and enabling innovations in various industries.</p>
<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>
<p><strong>You Are</strong></p>
<p>You are a passionate and inventive analog circuit design engineer with a deep-rooted curiosity for emerging technologies and industry-leading semiconductor processes. You thrive in dynamic, collaborative environments and are recognised for your ability to balance technical depth with practical implementation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Designing and developing best-in-class ESD and Latch-Up robust solutions for advanced interface IPs using cutting-edge FinFET, FDSOI, and BCD processes.</li>
<li>Owning the full lifecycle of ESD structures, from schematic design, simulation, and layout to silicon qualification and production release.</li>
<li>Leading and executing I/O development, including I/O ring design, review, and optimisation for performance and robustness.</li>
<li>Developing and qualifying interface testchips, ensuring comprehensive ESD and Latch-Up validation to meet global customer requirements.</li>
<li>Running ESD simulations by building detailed ESD networks and performing advanced analyses to ensure design integrity.</li>
<li>Applying foundry-provided PERC (Programmable Electrical Rule Check) rules and using PERC check tools to validate compliance and enhance design quality.</li>
<li>Collaborating closely with foundry partners, design, and layout teams to ensure timely and effective integration of ESD and Latch-Up solutions.</li>
</ul>
<p><strong>The Impact You Will Have:</strong></p>
<ul>
<li>Elevating the reliability and performance of Synopsys&#39; interface IPs, directly influencing the success of global semiconductor customers.</li>
<li>Driving innovation in analog circuit design for next-generation silicon technologies, helping Synopsys maintain its leadership in the industry.</li>
<li>Reducing field failures and increasing product longevity by delivering robust ESD and Latch-Up protection solutions.</li>
<li>Accelerating time-to-market for customer products through efficient and high-quality design practices.</li>
<li>Fostering a culture of technical excellence and continuous improvement within the analog design team.</li>
<li>Building strong partnerships with foundries and cross-functional teams, enhancing collaboration and knowledge sharing across projects.</li>
</ul>
<p><strong>What You’ll Need:</strong></p>
<ul>
<li>Proven experience in analog circuit design, with a focus on I/O development and ESD/LU robustness.</li>
<li>Hands-on expertise with FinFET, FDSOI, and BCD process technologies from leading foundries.</li>
<li>Strong background in ESD and Latch-Up qualification methodologies, including testchip development and validation.</li>
<li>Proficiency in ESD simulation, ESD network construction, and use of industry-standard tools.</li>
<li>Comprehensive understanding of PERC rules and practical experience with PERC verification tools.</li>
<li>Experience working with cross-functional teams including foundry, design, and layout groups.</li>
</ul>
<p><strong>Who You Are:</strong></p>
<ul>
<li>An analytical thinker with excellent problem-solving skills and keen attention to detail.</li>
<li>A collaborative team player who values diversity, inclusion, and open communication.</li>
<li>A proactive learner who stays current with industry trends and emerging technologies.</li>
<li>An effective communicator, able to translate complex technical information to diverse audiences.</li>
<li>A results-driven individual who is adaptable, resilient, and comfortable with fast-paced, high-impact work.</li>
</ul>
<p><strong>The Team You’ll Be A Part Of:</strong></p>
<p>You’ll join a passionate, multidisciplinary team of analog and mixed-signal engineers dedicated to advancing Synopsys’ interface IP portfolio. The team is focused on delivering robust, innovative, and high-quality solutions that meet the rigorous demands of a global customer base. Collaboration, continuous improvement, and technical mentorship are at the core of our culture, ensuring you’ll have the support and opportunities needed to thrive and grow.</p>
<p><strong>Rewards and Benefits:</strong></p>
<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Analog circuit design, ESD and Latch-Up robustness, FinFET, FDSOI, and BCD process technologies, ESD simulation, PERC rules and verification tools, Cross-functional team collaboration</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and services. The company was founded in 1986 and is headquartered in Mountain View, California.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/noida/analog-design-sr-engineer/44408/92446615456</Applyto>
      <Location>Noida</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>01a10ada-f52</externalid>
      <Title>Technical Lead, Safety Research</Title>
      <Description><![CDATA[<p><strong>Technical Lead, Safety Research</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$460K – $555K</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is responsible for a broad range of safety work that ensures our best models can be safely deployed to the real world to benefit society. The team is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p><strong>About the Role</strong></p>
<p>As a tech lead, you will be responsible for developing our strategy in new directions to address potential harms from misalignment or significant mistakes. In practice, this will include:</p>
<ul>
<li>Setting north star goals and milestones for new research directions, and developing challenging evaluations to track progress.</li>
</ul>
<ul>
<li>Personally driving or leading research in new exploratory directions to demonstrate feasibility and scalability of the approaches.</li>
</ul>
<ul>
<li>Working horizontally across safety research and related teams to ensure different technical approaches work together to achieve strong safety results.</li>
</ul>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Set the research directions and strategies to make our AI systems safer, more aligned and more robust.</li>
</ul>
<ul>
<li>Coordinate and collaborate with cross-functional teams, including the rest of the research organization, T&amp;S, policy and related alignment teams, to ensure that our AI meets the highest safety standards.</li>
</ul>
<ul>
<li>Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.</li>
</ul>
<ul>
<li>Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.</li>
</ul>
<ul>
<li>Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter</li>
</ul>
<ul>
<li>Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.</li>
</ul>
<ul>
<li>Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, adversarial training, robustness, fairness &amp; biases.</li>
</ul>
<ul>
<li>Hold a Ph.D. or other degree in computer science, machine learning, or a related field.</li>
</ul>
<ul>
<li>Possess experience in safety work for AI model deployment</li>
</ul>
<ul>
<li>Have an in-depth understanding of deep learning research and/or strong engineering skills.</li>
</ul>
<ul>
<li>Are a team player who enjoys collaborative work environments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$460K – $555K</Salaryrange>
      <Skills>AI safety, RLHF, adversarial training, robustness, fairness &amp; biases, deep learning research, engineering skills, team player, collaborative work environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. It is a privately held company.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/273b4c99-273e-4a70-aff9-19c0d959dcef</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>1d33ea55-7f5</externalid>
      <Title>Researcher, Robustness &amp; Safety Training</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Safety Systems</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$295K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Safety Systems team is responsible for a broad range of safety work that ensures our best models can be safely deployed to the real world to benefit society. The team is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p>The Model Safety Research team aims to fundamentally advance our capabilities for precisely implementing robust, safe behavior in AI models, and to leverage these advances to make OpenAI’s deployed models safe and beneficial.  This requires a breadth of new ML research to address the growing set of safety challenges as AI becomes more powerful and used in more settings.  Key focus areas include how to enforce nuanced safety policies without trading off helpfulness and capabilities, how to make the model robust to adversaries, how to address privacy and security risks, and how to make the model trustworthy in safety-critical domains.</p>
<p>We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely.</p>
<p><strong>About the Role</strong></p>
<p>OpenAI is seeking a senior researcher with a passion for AI safety and experience in safety research. In this role, you will set directions for research that enables and empowers safe AGI, and work on research projects to make our AI systems safer, more aligned, and more robust to adversarial or malicious use cases. You will play a critical role in shaping what a safe AI system should look like in the future at OpenAI, making a significant impact on our mission to build and deploy safe AGI.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.</li>
</ul>
<ul>
<li>Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.</li>
</ul>
<ul>
<li>Set the research directions and strategies to make our AI systems safer, more aligned and more robust.</li>
</ul>
<ul>
<li>Coordinate and collaborate with cross-functional teams, including T&amp;S, legal, policy and other research teams, to ensure that our products meet the highest safety standards.</li>
</ul>
<ul>
<li>Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter</li>
</ul>
<ul>
<li>Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.</li>
</ul>
<ul>
<li>Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, adversarial training, robustness, fairness &amp; biases.</li>
</ul>
<ul>
<li>Hold a Ph.D. or other degree in computer science, machine learning, or a related field.</li>
</ul>
<ul>
<li>Possess experience in safety work for AI model deployment</li>
</ul>
<ul>
<li>Have an in-depth understanding of deep learning research and/or strong engineering skills.</li>
</ul>
<ul>
<li>Are a team player who enjoys collaborative work environments.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$295K – $445K</Salaryrange>
      <Skills>AI safety, RLHF, adversarial training, robustness, fairness &amp; biases, deep learning research, engineering skills, computer science, machine learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/2560ed50-5535-42b8-b069-9ebc28ce7493</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>bd0e1e90-d4b</externalid>
      <Title>Researcher, Trustworthy AI</Title>
      <Description><![CDATA[<p><strong>Researcher, Trustworthy AI</strong></p>
<p><strong>About the team</strong></p>
<p>The Safety Systems team is responsible for a broad range of safety work that ensures our best models can be safely deployed to the real world to benefit society. The team is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>
<p><strong>About the role</strong></p>
<p>We are looking to hire exceptional research scientists and engineers who can push the rigor of work needed to increase societal readiness for AGI. Specifically, we are looking for people who can translate nebulous policy problems into technically tractable, measurable ones.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Set research directions and strategies to study the societal impacts of our models in an action-relevant manner, and figure out how to tie this back into model design</li>
</ul>
<ul>
<li>Build creative methods and run experiments that enable public input into model values</li>
</ul>
<ul>
<li>Increase the rigor of external assurances by turning external findings into robust evaluations</li>
</ul>
<ul>
<li>Facilitate and grow our ability to effectively de-risk flagship model deployments in a timely manner</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter</li>
</ul>
<ul>
<li>Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.</li>
</ul>
<ul>
<li>Possess 3+ years of research experience (industry or similar academic experience) and proficiency in Python or similar languages</li>
</ul>
<ul>
<li>Thrive in environments involving large-scale AI systems and multimodal datasets</li>
</ul>
<ul>
<li>Enjoy working on large-scale, difficult, and nebulous problems in a well-resourced environment</li>
</ul>
<ul>
<li>Exhibit proficiency in the field of AI safety, focusing on topics like RLHF, adversarial training, robustness, LLM evaluations</li>
</ul>
<ul>
<li>Have past experience in interdisciplinary research</li>
</ul>
<ul>
<li>Show enthusiasm for socio-technical topics</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>Salary</strong></p>
<ul>
<li>$380K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the benefits listed above.</p>
<p><strong>Experience Level</strong></p>
<p>entry</p>
<p><strong>Employment Type</strong></p>
<p>full-time</p>
<p><strong>Workplace Type</strong></p>
<p>hybrid</p>
<p><strong>Category</strong></p>
<p>Engineering</p>
<p><strong>Industry</strong></p>
<p>Technology</p>
<p><strong>Salary Range</strong></p>
<p>$380K • Offers Equity</p>
<p><strong>Required Skills</strong></p>
<ul>
<li>Python</li>
</ul>
<ul>
<li>Research experience</li>
</ul>
<ul>
<li>AI safety</li>
</ul>
<ul>
<li>RLHF</li>
</ul>
<ul>
<li>Adversarial training</li>
</ul>
<ul>
<li>Robustness</li>
</ul>
<ul>
<li>LLM evaluations</li>
</ul>
<p><strong>Preferred Skills</strong></p>
<ul>
<li>Interdisciplinary research</li>
</ul>
<ul>
<li>Socio-technical topics</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$380K • Offers Equity</Salaryrange>
      <Skills>Python, Research experience, AI safety, RLHF, Adversarial training, Robustness, LLM evaluations, Interdisciplinary research, Socio-technical topics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/71acba5c-dbae-406f-b983-f40943c43068</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>b04bd171-7c3</externalid>
      <Title>Full Stack Software Engineer, ChatGPT Partnerships</Title>
      <Description><![CDATA[<p><strong>Full Stack Software Engineer, ChatGPT Partnerships</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$185K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
</ul>
<ul>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
</ul>
<ul>
<li>401(k) retirement plan with employer match</li>
</ul>
<ul>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
</ul>
<ul>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
</ul>
<ul>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
</ul>
<ul>
<li>Mental health and wellness support</li>
</ul>
<ul>
<li>Employer-paid basic life and disability coverage</li>
</ul>
<ul>
<li>Annual learning and development stipend to fuel your professional growth</li>
</ul>
<ul>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
</ul>
<ul>
<li>Relocation support for eligible employees</li>
</ul>
<ul>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The ChatGPT team operates at the intersection of research, engineering, product, and design to bring OpenAI’s technology to a global audience.</p>
<p>Within ChatGPT, the <strong>Growth Partnerships</strong> team is dedicated to expanding distribution, unlocking new user acquisition channels, and building high-leverage integrations that deliver ChatGPT to users where they are. Working closely with external partners and internal platform teams, we design product experiences, APIs, and growth surfaces that scale adoption while upholding trust and safety.</p>
<p>Our work is multidisciplinary—merging product, engineering, and business impact to transform partnerships into sustainable growth engines for ChatGPT.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking an experienced <strong>Full Stack Engineer</strong> to join the ChatGPT Growth Partnerships team and help establish the technical foundation that underpins partner-led growth. You will work on end-to-end product experiences—including frontend applications, backend services, APIs, experimentation, and data—that facilitate seamless integrations, onboarding, activation, and monetization through partners.</p>
<p>This high-impact role is ideal for engineers who thrive in fast-paced, ambiguous settings, can move from concept to launch, make sound product and technical judgments, and deliver quickly while maintaining robust engineering standards.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build and own full-stack product experiences supporting partner integrations, onboarding flows, activation funnels, and growth surfaces.</li>
</ul>
<ul>
<li>Design and develop backend services and APIs for scalable, secure partner experiences.</li>
</ul>
<ul>
<li>Collaborate with product, partnerships, design, data science, and research teams to translate strategy into shipped product.</li>
</ul>
<ul>
<li>Lead experimentation initiatives (A/B tests, metrics, instrumentation) to understand drivers of adoption, retention, and value through partnerships.</li>
</ul>
<ul>
<li>Identify leverage points where small technical innovations can unlock significant growth impact.</li>
</ul>
<ul>
<li>Establish best practices for building extensible, partner-friendly systems at scale.</li>
</ul>
<ul>
<li>Contribute to a culture of ownership, clarity, inclusiveness, and thoughtful debate within engineering.</li>
</ul>
<p><strong>You Might Thrive in This Role If You</strong></p>
<ul>
<li>Have delivered full-stack features on the web that drive user acquisition, activation, or monetization (e.g., onboarding, integrations, dashboards, purchase flows).</li>
</ul>
<ul>
<li>Are comfortable with frontend and backend development, including API and service design, as well as data flows.</li>
</ul>
<ul>
<li>Think with a system-level perspective and focus on scalability and long-term robustness.</li>
</ul>
<ul>
<li>Are highly analytical, experienced in experiment design, and able to connect technical work to business outcomes.</li>
</ul>
<ul>
<li>Enjoy navigating ambiguity and structuring new problem areas.</li>
</ul>
<ul>
<li>Possess strong product intuition and prioritize user- and developer-friendly experiences.</li>
</ul>
<ul>
<li>Are motivated by impact and inspired to help shape how partnerships fuel ChatGPT’s growth.</li>
</ul>
<p><strong>Location</strong></p>
<p>San Francisco, New York, or Seattle</p>
<p><strong>Work Type</strong></p>
<p>Full-time</p>
<p><strong>Join us to help grow the ChatGPT partner ecosystem and reach millions of users through thoughtful engineering and product leadership.</strong></p>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$185K – $385K • Offers Equity</Salaryrange>
      <Skills>Full-stack development, Frontend development, Backend development, API design, Service design, Data flows, Experimentation, A/B testing, Metrics, Instrumentation, Scalability, Long-term robustness, Analytical skills, Experiment design, Business outcomes, Product intuition, User- and developer-friendly experiences</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>185000</Compensationmin>
      <Compensationmax>385000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/50626871-6bbf-4d8f-a534-176f929f1f37</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>b8f14b8a-01e</externalid>
      <Title>Member of Technical Staff - Multimodal Safety - MAI Super Intelligence Team</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft is looking for a talented Member of Technical Staff - Multimodal Safety - MAI Super Intelligence Team at its Mountain View office. This role sits at the heart of Microsoft&#39;s AI safety efforts, ensuring that multimodal models served to millions of users through Copilot behave safely and align with organizational values. You&#39;ll work directly with the MAI Super Intelligence Team to shape how frontier models are evaluated, aligned, and deployed.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff, Multimodal Safety, you will work to develop and implement cutting-edge safety methodologies for post-training multimodal large language models to be served to millions of users through Copilot every day. We work on the bleeding edge and leverage the most powerful pretrained models and algorithms, making it critical that we ensure our AI systems behave safely and align with organizational values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Leverage expertise in multimodal safety to uncover potential risks and develop novel mitigation strategies, including alignment techniques and robustness improvements for multimodal large language models.</li>
<li>Create and implement comprehensive evaluation frameworks and red-teaming methodologies to assess model safety across diverse scenarios, edge cases, and potential failure modes.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science, or related technical discipline AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proven expertise in multimodal LLM safety with experience in diffusion models and generative image/video/audio.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Track record building evaluation frameworks, automated red-teaming, and reusable guardrail systems for safety at scale.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.</li>
<li>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>multimodal safety, diffusion models, generative image/video/audio, evaluation frameworks, red-teaming methodologies, alignment techniques, robustness improvements, guardrail systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. They are a leader in the technology industry and have a strong presence in the global market. Microsoft is known for its innovative products and services, such as Windows, Office, and Azure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>119800</Compensationmin>
      <Compensationmax>234700</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-multimodal-safety-mai-super-intelligence-team/</Applyto>
      <Location>Mountain View</Location>
      <Country>United States</Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>