<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>9651b7fa-8b9</externalid>
      <Title>Strategic Risk Analyst, Behavioral &amp; Psychological Risk</Title>
      <Description><![CDATA[<p>As a Strategic Risk Analyst, Behavioral &amp; Psychological Risk, you will bring deep expertise in human behavior to our central view of risk across OpenAI&#39;s products and platforms.</p>
<p>You will analyze how users think, feel, and behave when interacting with AI systems, especially in high-risk contexts such as self-harm, manipulation, coercion, and influence, and translate these insights into decision-ready risk assessments, mitigation strategies, and product guidance.</p>
<p>This role bridges clinical/behavioral expertise and intelligence analysis, turning psychological signals and patterns into structured judgments, early indicators, and actionable recommendations.</p>
<p>You will partner closely with investigators, engineers, policy, and trust &amp; safety teams to shape how we understand and mitigate potential risks in human-AI interactions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing insights into how AI systems are used in complex or high-risk situations (e.g., self-harm, suicidal ideation, substance-use escalation, and threats of violence), identifying recurring patterns and emerging trends that help guide product, safety, and policy decisions.</li>
<li>Synthesizing behavioral, psychological, and intelligence signals into clear narratives about user needs, system dynamics, and potential areas of risk or vulnerability.</li>
<li>Producing decision-ready briefs and assessments that inform product, safety, and policy decisions.</li>
<li>Developing and refining behavioral risk frameworks, taxonomies, and indicators (e.g., severity models, escalation pathways, psychological harm categories).</li>
<li>Identifying early indicators of emerging issues and assessing whether observed patterns represent meaningful safety concerns, helping prioritize and inform appropriate mitigations.</li>
<li>Assessing the effectiveness of mitigations, such as product changes, safeguards, and guidance, using behavioral evidence and real-world outcomes.</li>
<li>Contributing to incident reviews and post-incident analysis by bringing a behavioral perspective to root cause analysis and prevention.</li>
<li>Bridging research and operations, translating academic and clinical literature into practical safeguards, policies, and product decisions.</li>
</ul>
<p>You might thrive in this role if you:</p>
<ul>
<li>Bring 5+ years in forensic, clinical, trust and safety, or applied academic settings assessing risk of violence, self-harm, or addiction, with strong mixed-methods research skills.</li>
<li>Have familiarity with AI systems, language models, or human-AI interaction dynamics, and are interested in applying psychological expertise to emerging AI risks (experience working on AI safety, trust &amp; safety, or related domains is a plus).</li>
<li>Can translate human behavior into structured intelligence, connecting individual cases to system-level patterns and risks.</li>
<li>Are comfortable working across qualitative and quantitative inputs, including casework, interaction data, research literature, and metrics.</li>
<li>Have experience designing or using risk frameworks, taxonomies, or evaluation methods to structure ambiguity.</li>
<li>Communicate clearly across disciplines, turning complex behavioral insights into concise, actionable recommendations.</li>
<li>Thrive in fast-moving, ambiguous environments, and can prioritize effectively under uncertainty.</li>
</ul>
]]></Description>
      <Jobtype>Full time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$288K – $320K</Salaryrange>
      <Skills>human behavior, AI systems, language models, human-AI interaction dynamics, risk assessment, mitigation strategies, product guidance, intelligence analysis, structured judgments, early indicators, actionable recommendations, incident reviews, post-incident analysis, root cause analysis, practical safeguards, policies, product decisions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company that aims to ensure that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>288000</Compensationmin>
      <Compensationmax>320000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/7cae487d-f280-4ab0-90cf-c9671ff0c015</Applyto>
      <Location>San Francisco</Location>
      <Country>United States</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
  </jobs>
</source>