<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>536aa8eb-f7c</externalid>
      <Title>Technical Influence Operations Threat Investigator</Title>
      <Description><![CDATA[<p>We are looking for a Technical Influence Operations Threat Investigator to join our Threat Intelligence team. In this role, you will be responsible for detecting, investigating, and disrupting the misuse of Anthropic&#39;s AI systems for influence operations, disinformation campaigns, coordinated inauthentic behavior, and other forms of information manipulation.</p>
<p>You will work at the intersection of AI safety and information integrity, combining deep expertise in influence operations with technical investigation skills to identify threat actors who leverage AI to generate synthetic content, amplify narratives, manipulate public discourse, or undermine democratic processes. Your work will directly shape how Anthropic defends against one of the most rapidly evolving categories of AI misuse.</p>
<p>Important context: In this position you may be exposed to explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.</p>
<p>Responsibilities:</p>
<ul>
<li>Detect and investigate attempts to misuse Anthropic&#39;s AI systems for influence operations, including AI-generated disinformation, coordinated inauthentic behavior, astroturfing, and narrative manipulation campaigns</li>
<li>Conduct technical investigations using SQL, Python, and other tools to analyze large datasets, trace user behavior patterns, and uncover coordinated networks of threat actors conducting influence operations</li>
<li>Develop influence operation-specific detection capabilities, including abuse signals, behavioral clustering techniques, and detection methodologies tailored to AI-enabled information manipulation</li>
<li>Create actionable intelligence reports on influence operation TTPs, emerging narrative threats, and threat actor campaigns leveraging AI systems</li>
<li>Conduct cross-platform threat analysis linking on-platform activity to broader influence campaigns across social media, messaging platforms, and other digital ecosystems</li>
<li>Monitor and analyze state-sponsored and non-state influence operations that may leverage AI capabilities, with particular focus on operations originating from or targeting geopolitically significant regions</li>
<li>Collaborate with policy and enforcement teams to make informed decisions about user violations and ensure appropriate mitigation actions</li>
<li>Engage with external stakeholders including government agencies, platform integrity teams, academic researchers, and threat intelligence sharing communities</li>
<li>Forecast how advances in AI technology, including improved content generation, voice synthesis, and multimodal capabilities, will reshape the influence operations landscape and inform safety-by-design strategies</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare</li>
<li>Have demonstrated proficiency in SQL and Python for data analysis and threat detection</li>
<li>Have experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations</li>
<li>Have hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations</li>
<li>Have experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems</li>
<li>Have excellent stakeholder management skills and ability to work with diverse teams including researchers, policy experts, legal teams, and external partners</li>
<li>Can present analytical work to both technical and non-technical audiences, including government stakeholders and senior leadership</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience at a major technology platform working on influence operations, platform integrity, or content authenticity</li>
<li>Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts</li>
<li>Experience investigating operations linked to Chinese, Russian, Iranian, or other state-sponsored information campaigns</li>
<li>Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions</li>
<li>Familiarity with social network analysis techniques and tools for mapping coordinated behavior</li>
<li>Background in AI safety, machine learning security, or technology abuse investigation</li>
<li>Experience building and scaling threat detection systems or abuse monitoring programs</li>
<li>Active Top Secret security clearance</li>
</ul>
<p>The annual compensation range for this role is $230,000-$290,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote-hybrid</Workarrangement>
      <Salaryrange>$230,000-$290,000 USD</Salaryrange>
      <Skills>Deep subject matter expertise in influence operations, coordinated inauthentic behavior, disinformation, or information warfare, Proficiency in SQL and Python for data analysis and threat detection, Experience tracking and attributing influence campaigns to specific threat actors, including state-sponsored operations, Hands-on experience with large language models and understanding of how AI technology could be weaponized for influence operations, Experience with open-source intelligence (OSINT) methodologies and tools for investigating online information ecosystems, Experience at a major technology platform working on influence operations, platform integrity, or content authenticity, Background in intelligence analysis, information operations, or counter-disinformation within government or military contexts, Fluency in Mandarin Chinese, Russian, Farsi, and/or Arabic (speaking, reading, and writing) combined with a nuanced understanding of the geopolitical landscape and cultural context of the respective regions, Familiarity with social network analysis techniques and tools for mapping coordinated behavior, Background in AI safety, machine learning security, or technology abuse investigation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>290000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5140239008</Applyto>
      <Location>Remote-Friendly, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>051843ef-f93</externalid>
      <Title>Vendor and Contract Manager, Safeguards</Title>
      <Description><![CDATA[<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships. This includes identifying and selecting vendors, contract negotiation, onboarding, ongoing performance management, and renewal.</p>
<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>
<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>
<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment: someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue, all in one day.</p>
<p>Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>
<p>Responsibilities:</p>
<ul>
<li>Vendor Selection &amp; Onboarding: Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</li>
<li>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</li>
<li>Forecast future partnership needs and proactively research vendors and partners that could meet emerging Safeguards requirements</li>
<li>Contract &amp; Budget Management: Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</li>
<li>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</li>
<li>Handle invoicing, payment, and renewal processes with partners</li>
<li>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</li>
<li>Ongoing Vendor &amp; Partner Management: Manage vendor and researcher access to models and products during testing phases and trials</li>
<li>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</li>
<li>Report on vendor performance, spend, and contract status to Safeguards leadership</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</li>
<li>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</li>
<li>Track record managing vendor budgets, including forecasting, tracking spend, and making tradeoff recommendations</li>
<li>Understanding of AI safety, account abuse, or platform integrity issues: you know what verification vendors, threat intelligence providers, and content screening tools actually do</li>
<li>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</li>
<li>Strong cross-functional collaboration skills, particularly with legal, procurement, finance, and engineering teams</li>
<li>Comfort with ambiguity and fast-moving environments: you&#39;ve built or significantly improved vendor management processes, not just inherited them</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience in AI safety or AI-adjacent vendor ecosystems</li>
<li>Familiarity with procurement tools such as Ironclad or Zip</li>
</ul>
<p>Annual compensation range for this role is $245,000-$285,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, verification vendors, threat intelligence providers, content screening tools, Ironclad, Zip, research collaboration, civil society consultation, model red-teaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156596008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>da06ef8d-890</externalid>
      <Title>Vendor and Contract Manager, Safeguards</Title>
      <Description><![CDATA[<p>As the Vendor and Contract Manager on the Safeguards team, you will own the end-to-end lifecycle of Anthropic&#39;s safety-critical vendor, partner, and consultant relationships, from identifying and selecting vendors through contract negotiation, onboarding, ongoing performance management, and renewal.</p>
<p>The vendors and partners you&#39;ll manage span verification, threat intelligence, process outsourcing, capability evaluation, civil society consultation, and research collaboration. You&#39;ll build repeatable processes where they&#39;re needed while staying nimble enough to handle novel partnership structures, like research collaborations, civil society consultations, and model red-teaming engagements that don&#39;t fit neatly into standard procurement workflows.</p>
<p>You&#39;ll work closely with legal, procurement, finance, and engineering teams, and you&#39;ll be the person who knows where every Safeguards contract stands, what we&#39;re spending, and where we should consider a change.</p>
<p>This is a role for someone who&#39;s comfortable operating across commercial, legal, and technical contexts in a fast-moving environment: someone who can negotiate contract terms, work with legal teams to redline contracts, set up model access for a research partner, and handle a vendor performance issue, all in one day.</p>
<p>Important context for this role: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Vendor Selection &amp; Onboarding: Understand the broad vendor landscape for Safeguards and drive vendor selection processes with expert input, factoring in tradeoffs between capability, price, and internal resources across categories including verification, threat intelligence, process outsourcing, and capability evaluation</li>
<li>Conduct vendor due diligence and coordinate security and data governance reviews for vendors handling sensitive model access or content</li>
<li>Forecast future partnership needs and proactively research vendors and partners that could meet emerging Safeguards requirements</li>
<li>Contract &amp; Budget Management: Manage contracts across the Safeguards vendor and partner portfolio, working with legal and procurement teams on contract redlining, negotiation, and execution</li>
<li>Work with legal teams and potential research partners to develop novel agreements for research collaboration, civil society consultation, and model red-teaming</li>
<li>Handle invoicing, payment, and renewal processes with partners</li>
<li>Own Safeguards vendor budget tracking and planning in partnership with finance teams, maintaining a clear picture of current spend and forecasting future needs</li>
<li>Ongoing Vendor &amp; Partner Management: Manage vendor and researcher access to models and products during testing phases and trials</li>
<li>Oversee and monitor vendor performance and usage, flagging issues and resolving concerns and disputes as they arise</li>
<li>Report on vendor performance, spend, and contract status to Safeguards leadership</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>5+ years in vendor management, procurement, or contract operations, ideally in risk, fraud, compliance, or trust &amp; safety contexts at a technology company</li>
<li>Demonstrated experience reviewing and negotiating contracts, including comfort with redlining and working alongside legal counsel</li>
<li>Track record managing vendor budgets, including forecasting, tracking spend, and making tradeoff recommendations</li>
<li>Understanding of AI safety, account abuse, or platform integrity issues: you know what verification vendors, threat intelligence providers, and content screening tools actually do</li>
<li>Experience onboarding vendors and standing up new vendor relationships from scratch, not just managing existing ones</li>
<li>Strong cross-functional collaboration skills, particularly with legal, procurement, finance, and engineering teams</li>
<li>Comfort with ambiguity and fast-moving environments: you&#39;ve built or significantly improved vendor management processes, not just inherited them</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience in AI safety or AI-adjacent vendor ecosystems</li>
<li>Familiarity with procurement tools such as Ironclad or Zip</li>
</ul>
<p><strong>Logistics:</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>vendor management, procurement, contract operations, risk management, fraud prevention, compliance, trust and safety, AI safety, account abuse prevention, platform integrity, cross-functional collaboration, ambiguity tolerance, fast-paced environments, AI safety vendor ecosystems, procurement tools, Ironclad, Zip</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156596008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>486aaff1-a4a</externalid>
      <Title>Safety Response Operations Lead</Title>
      <Description><![CDATA[<p><strong>Safety Response Operations Lead</strong></p>
<p>At OpenAI, our Trust, Safety &amp; Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. We operate at the intersection of operations, compliance, user trust, and safety, working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.</p>
<p>The <strong>Global Safety Response Operations</strong> team provides 24/7 coverage for user safety, risk, and regulatory escalations across OpenAI’s products, handling the highest-priority cases that require human judgment and rapid response. The team operates as the core escalation and delivery arm of OpenAI’s safety operations, ensuring that our products remain safe and aligned with policy while enabling timely, empathetic, and consistent user support.</p>
<p><strong>About the Role</strong></p>
<p>The Global Safety Response Operations Lead is a hands-on team lead who both manages a regional Safety Response team and personally handles high-risk safety cases. This role combines frontline safety work with people leadership, operational ownership, and cross-functional coordination.</p>
<p>You will lead a team of Safety Response Analysts who handle OpenAI’s most sensitive and high-impact cases, while also personally contributing to casework for complex, high-risk, or high-visibility issues. You will own the execution of high-severity escalations in your region and ensure proper execution and communication to cross-functional stakeholders.</p>
<p>You will be accountable for ensuring your region consistently meets utilization, quality, and SLA targets while serving as the operational interface with Product, Policy, Legal, Investigations, and regional stakeholders.</p>
<p>This is a 24/7 global operation that requires flexibility to support rotating shifts, including nights, weekends, and holidays, as part of a leadership on-call model.</p>
<p><strong>In This Role, You Will:</strong></p>
<ul>
<li>Lead and coach a regional team of Safety Response Analysts, ensuring high performance, engagement, and consistent decision quality.</li>
<li>Own regional operational outcomes, including utilization, SLA adherence, backlog health, and quality benchmarks.</li>
<li>Handle and oversee the most complex and high-risk cases, serving as the first line of escalation and incident lead for your region.</li>
<li>Contribute directly to frontline work (20–30%), including investigations, enforcement decisions, and regulatory or legal escalations.</li>
<li>Partner cross-functionally with Product, Policy, Legal, Investigations, and local market teams to execute safety outcomes and manage risk.</li>
<li>Drive operational excellence and continuous improvement, improving workflows, tools, automation, and escalation paths.</li>
<li>Identify emerging risks and trends, translating frontline insights into actionable recommendations for policy, product, or enforcement.</li>
</ul>
<p><strong>You Might Thrive in This Role If:</strong></p>
<ul>
<li>You have 5+ years in Trust &amp; Safety, Risk Operations, Investigations, Fraud, Annotation, or platform integrity.</li>
<li>You have 4+ years of people leadership or senior-level operational ownership.</li>
<li>You are a strong decision-maker in ambiguous, high-risk environments, able to balance speed, accuracy, and defensibility when handling sensitive or high-impact cases.</li>
<li>You communicate complex safety and risk decisions clearly and credibly, whether writing escalation narratives, briefing Legal or Policy, or aligning Product and leadership during incidents.</li>
<li>You can translate between frontline operations and strategic stakeholders, turning messy real-world cases into structured insights and turning policy or product direction into clear, executable guidance for your team.</li>
<li>You are skilled at influencing without authority, building trust with Product, Policy, Legal, Investigations, and regional partners to drive alignment and resolve ambiguity.</li>
<li>You are deeply familiar with content moderation, user safety, fraud, or developer risk frameworks, including the legal, policy, and technical considerations that shape enforcement.</li>
<li>You use data, tooling, and automation to improve quality, efficiency, and scale, not just to measure performance but to make better decisions.</li>
<li>You are comfortable leading in a 24/7, high-pressure operational environment, providing calm, credible leadership during incidents, spikes, and regulatory or reputational events.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Trust &amp; Safety, Risk Operations, Investigations, Fraud, Annotation, Platform Integrity, Content Moderation, User Safety, Fraud, Developer Risk, Data Analysis, Tooling, Automation, Leadership, Communication, Influencing, Data-Driven Decision Making, Operational Excellence, Continuous Improvement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/d9be9157-c307-4ac4-9aa6-ad2fe7104808</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>c0ccd7e3-4cb</externalid>
      <Title>Data Scientist, Preparedness</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Data Scientist, Preparedness</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Data Science</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$347K – $400K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Preparedness team is an important part of the Safety Systems org at OpenAI, and is guided by OpenAI’s Preparedness Framework.</p>
<p>Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.</p>
<p>The mission of the Preparedness team is to:</p>
<ol>
<li>Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society</li>
<li>Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems</li>
</ol>
<p>Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.</p>
<p><strong>About the Role</strong></p>
<p>We’re hiring a Data Scientist to help build, evaluate, and continuously improve mitigations that prevent extreme harms from AI systems. This role is for an experienced, highly autonomous individual contributor who can take ambiguous problem statements, structure rigorous analyses, and translate findings into actionable product and policy changes.</p>
<p>This position goes beyond “running evals.” You’ll help create mitigation intelligence and monitoring systems that enable OpenAI to detect issues early, measure effectiveness over time, and reduce both over-blocking (unnecessary friction) and under-blocking (missed harm).</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Evaluate and improve mitigation systems, including classifiers and detection pipelines across domains (e.g., biosecurity, cybersecurity, and emerging risk areas).</li>
<li>Diagnose false positives and false negatives with deep error analysis, root cause investigation, and clear recommendations for mitigation adjustments.</li>
<li>Build monitoring and measurement frameworks to track mitigation effectiveness over time and across user segments and use cases.</li>
<li>Identify trends in over-blocking vs. under-blocking, quantify customer impact, and propose prioritized interventions.</li>
<li>Develop insights from customer feedback, complaints, and usage patterns to detect shifts in adversarial behavior and system failure modes.</li>
<li>Expand risk monitoring into new areas, including cybersecurity threats and model loss-of-control or sabotage scenarios, in partnership with domain experts.</li>
<li>Communicate results to technical and executive stakeholders with crisp narratives, decision-ready metrics, and clear tradeoffs.</li>
</ul>
<p><strong>You might thrive in this role if you are:</strong></p>
<ul>
<li>An autonomous operator: you can take a problem statement and independently structure the analysis end-to-end.</li>
<li>Strong at executive-ready communication: concise, clear, and outcome-oriented.</li>
<li>Skilled at turning analysis into productable changes: you&rsquo;re comfortable influencing across functions to drive mitigation improvements.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Significant experience in data science or applied analytics in high-stakes domains (e.g., security, trust &amp; safety, abuse prevention, fraud, platform integrity, or reliability).</li>
<li>Strong foundations in experimentation, causal thinking, and/or observational inference; ability to design robust measurement under imperfect data.</li>
<li>Fluency in SQL and Python (or equivalent) for analysis, modeling, and building monitoring workflows.</li>
<li>Experience building metrics, dashboards, and operational monitoring that meaningfully change outcomes (not just reporting).</li>
<li>Track record of driving cross-functional impact with engineering, product, and research partners.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$347K – $400K • Offers Equity</Salaryrange>
      <Skills>data science, applied analytics, security, trust &amp; safety, abuse prevention, fraud, platform integrity, reliability, SQL, Python, experimentation, causal thinking, observational inference, measurement, metrics, dashboards, operational monitoring, machine learning, deep learning, natural language processing, computer vision, data engineering, data architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is a technology company that focuses on developing and applying artificial intelligence in a way that benefits humanity. It was founded in 2015 and has since grown to become one of the leading AI research and development companies in the world.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/efcc3430-14c8-4022-8350-8146ffb867ab</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>